Today, the Library is at an important crossroads in its long history. Its efficiency, effectiveness, and continued relevance may depend on its ability to address key issues about its future mission. The Library's mission and activities have continued to grow since its creation in 1800, and the growth of its mission has been matched or exceeded by the growth of its collections. Booz-Allen found that the Library's staff, management structure, and resources are in danger of being overwhelmed by this growth. Booz-Allen identified three alternative missions that could be considered to shape the Library's future. The three missions can be used to characterize the potential scope of activity and the customers the Library might serve: (1) Congress; (2) Congress and the nation; and (3) Congress, the nation, and the world community of libraries, publishers, and scholars. The current Library mission and activities fall somewhere between the latter two alternatives. Under the first mission alternative, the Library would refocus its functions on the original role of serving Congress. Collections would be limited to broadly defined congressional and federal government needs, and Congressional Research Service-provided information would continue to support legislative functions. There would be no national library, and leadership of the information/library community would be missing unless assumed by other organizations. Booz-Allen concluded that the Library would require significantly fewer staff and financial resources to carry out this mission. The second mission alternative would emphasize the Library's national role, and current activities of a global nature would be deemphasized. The national library role would be formally acknowledged, and the Library's leadership and partnering roles would be strengthened. This mission would require increased interaction with national constituencies. 
Booz-Allen concluded that the Library would require somewhat fewer staff and financial resources to carry out this mission. Under the third mission alternative, the Library would continue and perhaps broaden its activities to serve the worldwide communities of libraries, publishers, and scholars. Collections would expand substantially with accompanying translation and processing consequences. Booz-Allen concluded that this expanded mission would require increased staff and financial resources. After determining whom the Library will serve, the next step should be to decide how the Library will serve them. Booz-Allen identified two role options: (1) independent archive/knowledge developer and (2) information/knowledge broker. Within the role of independent archive/knowledge developer, the Library would continue to develop and manage collections independently in Library and other government facilities. Traditional, original cataloging and research or development functions would be performed primarily by Library components and staff. Library collections and facility requirements would continue to expand based on collection strategy and policy. Traditional areas of Library expertise, such as acquisitions, cataloging, and preservation, would continue to grow in importance and would drive future staffing and resource requirements. Within the role of information/knowledge broker, the Library's principal role would change from being a custodian of collections with an independent operational role to that of a comprehensive broker or referral agency. The Library would initiate collaborative and cooperative relationships with other libraries and consortia. It would use information technology to tell inquirers which library in the nation or the world has the specific information. Under this scenario, the Library's collections would be selectively retained and/or transferred to other institutions with arrangements for appropriate preservation. 
Other institutions would need to demonstrate their willingness and capability to participate in such a system. Booz-Allen assessed each of these mission and role options and discussed them during focus groups with Library management, congressional staff, external customers, and others. Many focus group participants perceived a need to systematically limit and consolidate the Library's global role. On the basis of these discussions, as well as its other findings from the overall management review of the Library, Booz-Allen recommended that the Library's mission be focused within the Congress/nation alternative and that planning begin toward a future mission of serving Congress and performing the role of a national information/knowledge broker. The examination of the Library's future mission should include a thorough consideration of the appropriate role of technology in supporting the Library's operations. The Library should initiate and guide this examination and debate. Finally, at the end of the process, the mission of the Library should be affirmed by Congress, and resources should be provided at a level that would enable the Library to effectively fulfill the chosen mission. Regardless of what Congress ultimately affirms regarding the future mission of the Library, Booz-Allen also identified a number of management and operational issues that should be addressed. Booz-Allen reported that the Library's management processes could be more effective. First, it concluded that the Library should institute a more comprehensive planning and program execution process that provides for better integration of key management elements, such as strategic and operational planning, budget development, program execution, performance measurement, and evaluation. Second, Booz-Allen noted that the Library should improve its capability to make decisions and solve problems that cut across organizational lines, primarily by clarifying roles, responsibilities, and accountability. 
Third, it pointed out that the Library should reengineer its support services, particularly in the areas of information resource management, facilities, security, and human resources, to improve the capability of its infrastructure to support the mission. Additionally, Booz-Allen noted that the Library does not manage its operations from a process management approach but instead uses a functional approach. For example, the Library has different groups to acquire, catalog, preserve, and service each collection. Under this functional approach, the Library is not in a good position to routinely consider such factors as current arrearage status or requirements for preservation, cataloging, and storage when coordinating and planning for acquisitions of large collections. These factors could be considered more effectively under a process management approach, because one group would perform these functions for each collection. This approach also would permit the information technology function to support one Library-wide infrastructure rather than its current duplicative and poorly integrated systems. One major benefit of using a process management approach and integrated information technology infrastructure is that it provides a better understanding of how to control, manage, and improve how the organization delivers its products and services. Booz-Allen made a number of specific recommendations targeted directly at improving the Library's management and operational processes. It emphasized that three organization-related recommendations are key to the Library's overall success in improving its management and operations. 
Booz-Allen recommended that the Library clarify the role of the Deputy Librarian to serve as the Library's Chief Operating Officer and vest the individual occupying that position with Library-wide operational decisionmaking authority; elevate the Chief Financial Officer's position to focus greater attention on improving the Library's financial systems and controls; and establish a Chief Information Officer position to provide leadership in technology across the organization, which should help the Library function more effectively in the electronic information age. The effective allocation and use of human and financial resources are paramount to support the day-to-day activities of the Library. However, Booz-Allen found that a variety of weaknesses hamper the Library's ability to maintain the intellectual capital of its workforce and that the Library has opportunities for increasing revenue. Booz-Allen made several recommendations to improve the Library's ability to deal with these important issues. The success of the Library's mission depends heavily on its human resources. Whether the mission is to serve Congress, the nation, or the world, its ultimate achievement rests with the quality of the Library staff. However, Booz-Allen found that the human resource function at the Library has some significant problems that may hamper the Library's ability to maintain its intellectual capital. First, the Library does not have a coordinated training program. Second, the human resources unit's personnel and processes are not equipped to handle changes to recruitment, training, or selection requirements that may result from technology, changes to the Library's mission, or staff turnover. Third, the human resources services unit is not able to strategically plan for workload and staffing requirements because of poor coordination among the Library's service units. 
Fourth, ongoing problems in communications between managers and the unions inhibit their ability to plan together for future directions of the Library. Finally, the personnel management operations, particularly competitive selection and training, inhibit the Library's ability to bring on new staff members and get them trained quickly. Currently, it takes about 6 months to recruit and hire new employees. Booz-Allen recognized that improving the Library's operations would require additional funding. Thus, as part of its review, Booz-Allen looked for opportunities through which the Library could generate revenue to help offset the costs of improvements. It found that opportunities to significantly increase revenues exist in the copyright registration and cataloging areas. By fully recovering copyright registration costs, Booz-Allen estimated that the Library could receive additional revenue annually ranging from $12 million to $29 million, depending on different assumptions. The potential revenue to be generated from charging publishers a fee for cataloging could be about $7.5 million annually. Booz-Allen recognized that these potential revenue opportunities must be reviewed in light of past efforts to increase revenues and the Library's mission. For example, Congress decided in 1948 and 1989 not to recover the full cost of copyright registration, and the perception in the library community is that cataloging is at the heart of what the Library does and forms an integral part of its mission. Consequently, both of these revenue opportunities need to be considered as part of reexamining the Library's mission with a view toward better balancing its mission and available resources. For the Library to implement any revenue opportunities successfully, an appropriate support structure will be required. 
Therefore, Booz-Allen suggested that the Library needs to develop a legislative strategy that will provide it with the financial mechanisms and authority needed to implement new fee-based services. To date, Congress has not provided the Library with legislation authorizing fee-based services and all the different financial mechanisms needed to pursue a range of fee-based service opportunities. Booz-Allen also identified opportunities for the Library to reduce or eliminate products and services that are not consistent with a newly established mission. Booz-Allen interviews and focus groups identified the following Library products and services as possible candidates for reduction: selected special collections acquisitions; foreign acquisitions; selected English language acquisitions; original cataloging; and exhibits, displays, and performances. As a part of the review of the Library's management, Price Waterhouse (1) audited the Library's fiscal year 1995 consolidated statement of financial position, (2) examined assertions made by Library management concerning the effectiveness of internal controls over financial reporting, (3) reviewed compliance with selected laws and regulations, and (4) examined assertions made by Library management concerning the safeguarding of the Library's collection. This was the first financial statement audit of the Library since our audit of the Library's fiscal year 1988 financial statements. Price Waterhouse found that the Library had mixed results in implementing GAO's recommendations made in its 1991 report. The Library made improvements, including resolution of significant compliance and control problems in the Federal Library and Information Network (FEDLINK) program and implementation of a new financial management system in fiscal year 1995. Price Waterhouse also found that the Library established accounting policies and procedures to address many of the problems we found in our audit of the Library's 1988 financial statements. 
However, the Library had not supplemented that system with the processes necessary to generate complete, auditable financial statements. For example, the Library's new system had not been configured to generate the detailed trial balances necessary for an audit, and the system did not track significant account balances, including property and equipment and advances from others. Further, the Library did not record significant accounting entries, including those converting balances from the old system, in sufficient detail to permit effective audit analysis of the accounts. Price Waterhouse stated that this latter deficiency, coupled with the lack of comparable prior year information and audited opening balances, precluded it from auditing the Library's fiscal year 1995 operating statement. Price Waterhouse issued a qualified opinion on the Library's Consolidated Statement of Financial Position, stating: ". . . except for the effects of such adjustments, if any, as might have been determined to be necessary had (Price Waterhouse) been able to examine evidence regarding property and equipment balances, the Consolidated Statement of Financial Position presents fairly, in all material respects, the Library's financial position as of September 30, 1995, in conformity with the basis of accounting described in Note 1 to the Consolidated Statement of Financial Position." Price Waterhouse concluded that the Library's financial internal controls in place as of September 30, 1995, were not effective in safeguarding assets from material loss and in ensuring that there were no material misstatements in the Consolidated Statement of Financial Position. 
In addition to the material weaknesses over property and equipment that led Price Waterhouse to qualify its opinion on the Consolidated Statement of Financial Position, Price Waterhouse reported that the Library had material weaknesses in its financial reporting preparation process, reconciliations of cash accounts with the Department of the Treasury and of various general ledger balances with those in subsidiary records, and information technology security practices over its computer operations. Price Waterhouse concluded that the Library's internal controls in place on September 30, 1995, were effective in ensuring material compliance with relevant laws and regulations. However, Price Waterhouse reported that the Library continued to accumulate surpluses in certain gift funds that it operates as revolving funds, even though the Library does not have the statutory authority to do so. GAO previously reported this noncompliance in its audit of the Library's 1988 financial statements. GAO recommended that the Library obtain the statutory authority necessary to continue operating the revolving gift funds, but it has not received such authority. Also, Price Waterhouse found one instance where the Library violated 2 U.S.C. 158a, which prohibits the Library from investing or reinvesting a gift of securities offered to the Library until acceptance of the gift has been approved by the Joint Committee on the Library. The Library believes this was an isolated error and is holding the proceeds pending approval by the committee. Price Waterhouse made recommendations to the Library in the following areas: the financial report preparation process, reconciliations of accounting records, accounting for property and equipment, computer security practices, enhancing information that is provided to management, financial services staffing, controls over the general ledger and reporting system, internal self-assessment of internal controls, the computer operations disaster recovery plan, controls over cash handling and check processing, and trust fund accounting. 
Price Waterhouse concluded that the Library's management lacked reasonable assurance that the Library's internal control structure over safeguarding of collection assets against unauthorized acquisition, use, or disposition was generally effective as of September 30, 1995. Price Waterhouse found that the Library has not completed a comprehensive risk assessment and collection security plan to identify the risks to the collection, the proposed or established control activities to address the risks, the required information management needs to carry out its responsibilities, and the methods by which management could monitor the effectiveness of control procedures. Price Waterhouse concluded that without these practices and procedures, Library managers do not have reasonable assurance that the risk of unanticipated loss (theft, mutilation, destruction, or misplacement) of materials with significant market value, cultural or historical importance, or with significant information content is reduced to an acceptable level. Booz-Allen had similar findings in its review of how the Library managed security. Physical risks to the collection would be effectively controlled when the Library has procedures to periodically inventory key items in the collection; when staff are precluded from bringing personal items into storage areas; when it has reduced the number of non-emergency exits in the collections areas of the Library's buildings; when it has regular reporting, tracking, and follow-up of missing materials; when it has a coordinated approach to access by its own maintenance personnel and those of the Architect of the Capitol; and when it has sufficient surveillance cameras in areas where high-value materials are stored. Environmental risks would be effectively controlled when the Library has determined that high-value, irreplaceable items have been protected from possible fire and water damage and that its preservation program is targeting and treating its highest priority items in a timely fashion. 
Although the Library has been striving to improve the safeguarding of its collection since 1991, the findings of Price Waterhouse and Booz-Allen confirm that the Library continues to have a number of significant weaknesses in safeguarding the collection materials that the Library relies upon to serve Congress and the nation. Mr. Chairman, that concludes the overall summary of the review of the management of the Library of Congress. We would be pleased to answer any questions that you or other Members may have.
Pursuant to a congressional request, GAO discussed two independent management and financial reviews of the Library of Congress. GAO noted that: (1) the management review found that the Library's mission needed to be reassessed because its enormous growth threatens to overwhelm its staff, structure, and resources; (2) alternative missions include service to Congress exclusively, Congress and the nation, or Congress, the nation, and the world community; (3) the first two mission options would require fewer staff and resources, but the third mission would require staff and funding increases; (4) after determining its mission, the Library's service role options include being an independent archive-knowledge developer or an information-knowledge broker; (5) the review recommended that the Library focus on the Congress-nation mission alternative and become a national information-knowledge broker; (6) the review also recommended that the Library improve its human and financial resources management and operational processes; (7) the financial review found that the Library had mixed results in implementing previous GAO recommendations and certain financial management weaknesses remain; (8) except for property and equipment accounts, the Library's financial statements presented fairly, in all material respects, its financial position as of September 30, 1995; (9) the Library's internal controls were not effective in safeguarding assets, ensuring that material misstatements did not occur, and ensuring that gifts complied with applicable laws and regulations; and (10) the review made several recommendations for safeguarding assets and improving accounting processes.
Under long-standing law, the so-called Posse Comitatus Act of 1878 (18 U.S.C. 1385) prohibits the use of the Departments of the Army or the Air Force to enforce the nation's civilian laws except where specifically authorized by the Constitution or Congress. While the language of section 1385 lists only the Army and the Air Force, DOD has made the provisions of section 1385 applicable to the Department of the Navy and the U.S. Marine Corps through a DOD directive (DOD Directive 5525.5, Jan. 15, 1986). Congress has enacted various pieces of legislation authorizing a military role in supporting civilian law enforcement agencies. For example, in the Department of Defense Authorization Act for Fiscal Year 1982 (P.L. 97-86), Congress authorized the Secretary of Defense to provide certain types of assistance to civilian law enforcement agencies. This legislation also provided, however, that such U.S. military assistance does not include or permit participation in a search, seizure, arrest, or other similar activity, unless participation in such activity is otherwise authorized by law. Beginning in the early 1980s, Congress authorized an expanded military role in supporting domestic drug enforcement efforts. As part of the national counterdrug effort, for example, the U.S. military provides federal, state, and local law enforcement agencies with a wide range of services, such as air and ground transportation, communications, intelligence, and technology support. DOD counterdrug intelligence support is provided by Joint Task Force Six, which is based at Fort Bliss (El Paso, TX). This component coordinates operational intelligence in direct support of drug law enforcement agencies. Moreover, under congressional authorization that was initially provided in 1989 (32 U.S.C. 112), DOD may provide funds annually to state governors who submit plans specifying how the respective state's National Guard is to be used to support drug interdiction and counterdrug activities. 
Such operations are conducted under the command and control of the state governor rather than the U.S. military. Also, federal, state, and local law enforcement personnel may receive counterdrug training at schools managed by the National Guard in California, Florida, and Mississippi. In 1989, Congress authorized the Secretary of Defense to transfer to federal and state agencies excess DOD personal property suitable for use in counterdrug activities, without cost to the recipient agency. In 1996, Congress authorized such transfers of excess DOD personal property suitable for use in law enforcement generally and not just specifically for counterdrug efforts. This Law Enforcement Support Program is managed by the Defense Logistics Agency. Military law enforcement agencies are major consumers of forensic laboratory services. The Army operates the U.S. Army Criminal Investigation Laboratory (Fort Gillem, GA), which provides forensic support regarding questioned documents, trace evidence, firearms and tool marks, fingerprints, imaging and technical services, drug chemistry, and serology. The Navy operates two limited-service forensic laboratories, which are referred to as Naval Criminal Investigative Service Regional Forensic Laboratories (Norfolk, VA, and San Diego, CA). Both Navy laboratories provide forensic support regarding latent prints, drug chemistry, arson, and questioned documents. The Air Force is the executive agent of the DOD Computer Forensics Laboratory (Linthicum, MD), which processes digital and analog evidence for DOD counterintelligence operations and programs as well as fraud and other criminal investigations. Generally, with the exception of participating with state or local law enforcement agencies in cases with a military interest, the military laboratories do not provide support to these agencies. 
In response to our inquiries, officials at each of the DOD components we contacted told us that they did not provide grants for any purposes, including crime technology-related assistance, to state and local law enforcement agencies during fiscal years 1996 through 1998. Moreover, we found no indications of crime technology-related grant assistance provided by DOD during our review of various DOD authorization, appropriations, and budget documents. According to the General Services Administration's Catalog of Federal Domestic Assistance, DOD can provide grants for a variety of purposes to some non-law enforcement agencies. For example, some DOD grants may assist state and local agencies in working with the Army Corps of Engineers to control and eradicate nuisance vegetation in rivers and harbors. DOD direct funding--$563.3 million total appropriations for fiscal years 1996 through 1998--was provided for the National Guard Bureau's counterdrug program, which covers the following six mission areas: (1) program management, (2) technical support, (3) general support, (4) counterdrug-related training, (5) reconnaissance/observation, and (6) demand reduction support. However, we determined that, with one exception, these mission areas did not involve activities that met our definition of crime technology assistance. The one exception involved courses at two of the National Guard's three counterdrug training locations in operation during fiscal years 1996 through 1998. We considered these courses to be a "support service," and they are discussed in the following section. Regarding support services and systems, DOD's crime technology assistance to state and local law enforcement totaled an estimated $30 million for fiscal years 1996 through 1998. 
As table 2 shows, this assistance was provided by various DOD components--the Defense Security Service, the DOD Computer Forensics Laboratory, the Intelligence Systems Support Office, Joint Task Force Six, the military branch investigative agencies, National Guard Bureau counterdrug training schools, and the U.S. Army Military Police School. More details about the assistance provided by each of these components are presented in respective sections following table 2. As table 2 shows, the Defense Security Service estimated that its assistance to state and local law enforcement totaled approximately $5,200 during fiscal years 1996 through 1998. This total represents responses to 59 requests--with estimated assistance costs ranging from $75 to $100 per request (or an average of $87.50 per request)--for information from the Defense Clearance and Investigations Index. A single, automated central repository, the Defense Clearance and Investigations Index, contains information on (1) the personnel security determinations made by DOD adjudicative authorities and (2) investigations conducted by DOD investigative agencies. This database consists of an index of personal names and impersonal titles that appear as subjects, co-subjects, victims, or cross-referenced incidental subjects in investigative documents maintained by DOD criminal, counterintelligence, fraud, and personnel security investigative activities. For example, state and local law enforcement agencies may request and receive completed Defense Security Service investigations in support of criminal investigations or adverse personnel actions. The DOD Computer Forensics Laboratory (Linthicum, MD) became operational in July 1998. The laboratory is responsible for processing, analyzing, and performing diagnoses of computer-based evidence involving counterintelligence operations and programs as well as fraud and other criminal cases. 
According to DOD officials, forensic analyses can be provided to state and local law enforcement when there is a military interest or, in certain other instances, when specific criteria are met. In the last 3 months of fiscal year 1998 (July through September), according to DOD officials, the laboratory performed 84 forensic analyses, 2 of which were for law enforcement officials in the states of North Carolina and Tennessee, respectively. As table 2 shows, DOD estimated that its costs (which were based on prorated staff hours) in providing forensic assistance to the states were $14,000 (or $7,000 per analysis). For fiscal years 1996 through 1998, DOD obligated $28.1 million for the Gulf States Initiative, an interconnected communications system among the states of Alabama, Georgia, Louisiana, and Mississippi that uses law enforcement intelligence software. Included in this system are (1) specialized software for the analysis of counterdrug intelligence information, (2) a secure and reliable communications network, and (3) standardized tools to analyze and report counterdrug intelligence information. Each state operates a drug intelligence center (located in the capital city) that is connected to the hubs in other states. This system allows states to process and analyze intelligence information. At the request of a domestic law enforcement agency, DOD's Joint Task Force Six coordinates operational, technological, intelligence, and training support for counterdrug efforts within the continental United States. For fiscal years 1996 through 1998, Joint Task Force Six officials estimated that the costs of crime technology assistance provided by this DOD component to state and local law enforcement totaled $48,800. As table 2 shows, this assistance consisted of two types--communications assessments ($16,300) and intelligence architecture assessments ($32,500). 
In providing such assistance, military personnel essentially acted as technical consultants in evaluating state or local agencies' (1) existing communications systems, including their locations and the procedures for using them, and/or (2) intelligence organizations, functions, and systems. The military branch investigative agencies generally do not unilaterally provide assistance to state and local law enforcement. However, if there is a military interest, a military investigative agency may jointly conduct an investigation with state or local authorities. (See table I.1 in app. I.) During such collaborative efforts, the Army, Air Force, and Navy may provide forensic support in areas involving, for example, fingerprints, drug chemistry, and questioned documents. The cost data presented for the military branch investigative agencies in table 2 are the costs associated with (1) forensic analyses involving joint or collaborative cases and (2) other technology-related assistance, such as technical training. For example: In 1997, the Air Force enhanced the quality of an audiotape used as evidence in a homicide investigation for Prince George's County, MD. The Air Force estimated its costs to be $8,400 for this assistance. In addition to the forensic analyses conducted during fiscal years 1996 through 1998, the Navy also provided technical training to 386 state and local law enforcement personnel. Such training covered various aspects of forensic technology, such as conducting DNA analyses and using computer databases. Although it does not have a forensic laboratory, the Marine Corps Criminal Investigation Division provided state and local law enforcement agencies with other types of assistance, such as the use of dog teams to detect explosives. However, we determined that these activities did not meet our definition of crime technology assistance. 
At two of its three counterdrug training locations in operation during fiscal years 1996 through 1998, the National Guard Bureau provided state and local law enforcement with courses that met our definition of crime technology assistance. According to National Guard Bureau officials, the two locations and the relevant courses (with a prorated estimated funding total of about $281,000 for the 3 fiscal years) are as follows:

Multijurisdictional Counterdrug Task Force Training (St. Petersburg, FL): At this training location, the relevant course covered the use of technical equipment to intercept secure communications. This course accounted for about $60,000, or about 21 percent, of the $281,000 funding total.

Regional Counterdrug Training Academy (Meridian, MS): At this location, National Guard Bureau officials identified the following three relevant courses: (1) Basic Technical Service/Video Surveillance Operations, (2) Counterdrug Thermal Imagery Systems, and (3) Investigative Video Operations. These courses accounted for about $221,000, or the remaining 79 percent, of the $281,000 funding total.

The U.S. Army Military Police School (Fort Leonard Wood, MO) provided counterdrug training to state and local law enforcement agencies. Eight courses were conducted that focused on drug enforcement training for non-DOD students, including state and local law enforcement personnel. In response to our inquiry, DOD officials indicated that two of these courses--(1) Counterdrug Investigations and (2) Basic Analytical Investigative Techniques--fit our definition of crime technology assistance. For example, the Counterdrug Investigations course covered such topics as (1) criminal intelligence, (2) surveillance operations, and (3) technical surveillance equipment (audio/video). The Basic Analytical Investigative Techniques course trained law enforcement personnel in how to maintain an automated criminal intelligence system under multijurisdictional narcotics scenarios.
This course also covered such topics as (1) the analytical process, (2) sources of information, and (3) flowcharting. Regarding these 2 courses, Military Police School officials told us that training was provided to 2,121 state and local law enforcement personnel during fiscal years 1996 through 1998, at an estimated cost of over $1.4 million. During fiscal years 1996 through 1998, DOD's in-kind assistance to state and local law enforcement totaled about $95.9 million. As table 3 shows, this category of assistance was provided by two DOD components--the Defense Information Systems Agency (about $24 million in the procurement and transfer of new equipment) and the Defense Logistics Agency (about $72.0 million in the transfer of surplus equipment). More details about the in-kind assistance provided by each of these two components are presented in respective sections following table 3. The in-kind assistance (about $24 million) provided by the Defense Information Systems Agency consisted of the procurement and transfer of equipment for the following information-sharing or communications systems:

Regional Police Information System ($3 million): Arkansas, Louisiana, and Texas use this system, which (1) provides automated information capabilities for detecting and monitoring illegal drug activities within each state's jurisdiction and (2) facilitates the sharing of both strategic and tactical intelligence among participating agencies.

The Southwest Border States Anti-Drug Information System (about $21 million): This is a secure law enforcement counterdrug information-sharing system that connects intelligence databases of four southwest border states (Arizona, California, New Mexico, and Texas); the three Regional Information Sharing Systems in that area; and the El Paso Intelligence Center. This system provides for secure E-mail transmissions and includes a preestablished query system.
The system allows all participants to query the databases of all other participants and has an administrative Web site server that offers key electronic services, such as providing agency contact information and system usage statistics. Through its Law Enforcement Support Program, the Defense Logistics Agency provided about $72.0 million of crime technology-related, in-kind assistance to state and local law enforcement during fiscal years 1996 through 1998. As table 3 shows, most of this assistance consisted of the following three types of equipment or assets:

Automated data processing units, equipment, components, software, and control systems ($29.5 million);

Radio and television equipment ($20.2 million); and

Night vision equipment ($16.9 million).

Collectively, these three categories accounted for $66.6 million, or about 93 percent, of the total crime technology-related, in-kind assistance (about $72.0 million) provided to state and local law enforcement by the Defense Logistics Agency during fiscal years 1996 through 1998. In its counterterrorism and counterdrug efforts, the federal government has invested considerable funds in recent years to develop technologies for detecting explosives and narcotics. For example, in 1996, we reported that DOD had spent over $240 million since 1991 to develop nonintrusive cargo inspection systems and counterdrug technologies for the Customs Service, the Drug Enforcement Administration, and other federal agencies. Although not directly intended for state and local law enforcement agencies, some of DOD's research and development efforts have had spin-off benefits for these agencies. That is, proven technologies have resulted in crime-fighting products' becoming commercially available for purchase by all levels of law enforcement.
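As a quick sanity check on the Defense Logistics Agency figures above, the three largest equipment categories can be summed and compared with the total in-kind assistance. This is only an illustrative sketch; the category labels are shortened from the text, and the dollar amounts (in millions) are those reported.

```python
# Defense Logistics Agency in-kind assistance, FY 1996-1998
# (dollars in millions, as reported in the text).
categories = {
    "Automated data processing equipment and software": 29.5,
    "Radio and television equipment": 20.2,
    "Night vision equipment": 16.9,
}
total_in_kind = 72.0  # total DLA crime technology-related in-kind assistance

top_three = sum(categories.values())
share = top_three / total_in_kind

print(f"Top three categories: ${round(top_three, 1)} million")
print(f"Share of total: {share:.1%}")  # the report rounds 92.5% to "about 93 percent"
```

The unrounded share is 92.5 percent, which the report states as "about 93 percent."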
In citing two examples, DOD officials commented basically as follows:

A "percussion actuated neutralization disruptor"--funded by DOD's Office of Special Operations and Low-Intensity Conflict--can be used to disarm or neutralize pipebombs. Since becoming commercially available, this device has had widespread applicability in all states and municipalities.

A "temporal analysis system" has been developed under DOD's Counterdrug Technology Development Program Office. This computer-based system, which analyzes time-series and other event-related data, allows law enforcement to predict a criminal's activities and movements.

The DOD officials further commented that, while these items first became commercially available some time during fiscal years 1996 through 1998, the research and development funds associated with the items were obligated in years before 1996. We did not attempt to identify all relevant examples or to quantify the costs associated with specific products because DOD's research and development efforts primarily and directly support federal agency needs rather than those of state and local law enforcement. Also, (1) any spin-off benefits to state and local law enforcement may not occur until years after federal research and development funds are expended and (2) the acquisition of commercially available products generally is dependent on these agencies' own budgets. To identify relevant crime technology assistance programs, we reviewed, among other sources, the General Services Administration's Catalog of Federal Domestic Assistance. Also, to identify funding amounts, we contacted cognizant DOD officials and reviewed budget and other applicable documents provided by DOD components. We did not independently verify the accuracy or reliability of the components' funding data. However, to obtain an indication of the overall quality of these data, we contacted DOD officials to clarify the funding data when needed.
Appendix I presents more details about our objectives, scope, and methodology. We performed our work from May 1999 to September 1999 in accordance with generally accepted government auditing standards. On September 14, 1999, we provided DOD with a draft of this report for comment. On September 23, 1999, DOD's Office of the Inspector General orally informed us that the draft report had been reviewed by officials in relevant DOD components, and that these officials agreed with the information presented and had no comments. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report. We are sending copies of this report to Senator Orrin G. Hatch, Chairman, and Senator Patrick J. Leahy, Ranking Minority Member, Senate Committee on the Judiciary; Representative Henry J. Hyde, Chairman, and Representative John Conyers, Jr., Ranking Minority Member, House Committee on the Judiciary; the Honorable William S. Cohen, Secretary of Defense; and the Honorable Jacob Lew, Director, Office of Management and Budget. Copies will also be made available to others upon request. If you or your staff have any questions about this report, please contact me on (202) 512-8777 or Danny R. Burton on (214) 777-5700. Key contributors to this assignment are acknowledged in appendix II.
Pursuant to a congressional request, GAO reviewed the crime technology assistance provided by the Department of Defense (DOD) to state and local law enforcement agencies during fiscal years (FY) 1996 through 1998, focusing on: (1) grants or other types of direct federal funding; (2) access to support services and systems, such as counterdrug or other intelligence centers; and (3) in-kind transfers of equipment or other assets. GAO noted that: (1) DOD said it provided no crime technology-related grants to state and local law enforcement agencies during FY 1996 through FY 1998; (2) although each state's National Guard received funds for its counterdrug program, these funds did not meet GAO's definition of crime technology assistance, with one exception; (3) GAO also did not find any other type of direct funding; (4) identifiable crime technology assistance provided by DOD to state and local law enforcement agencies during FY 1996 through FY 1998 totaled an estimated $125.9 million; (5) of this amount, about $95.9 million involved in-kind transfers, representing about 76 percent of the total; (6) although not directly intended for state and local law enforcement agencies, some of DOD's research and development efforts in recent years have had spin-off benefits for these agencies--particularly DOD's efforts to develop technologies for federal use in detecting explosives and narcotics; (7) for example, proven technologies have resulted in crime-fighting products--such as bomb detection equipment--becoming commercially available for purchase by all levels of law enforcement; and (8) GAO did not attempt to identify all relevant examples nor to quantify the costs associated with specific products because: (a) DOD's research and development efforts primarily and directly support federal agency needs; and (b) the acquisition of any resulting commercially available products generally is dependent on state and local law enforcement agencies' own budgets.
PTSD can develop following exposure to combat, natural disasters, terrorist incidents, serious accidents, or violent personal assaults like rape. People who experience stressful events often relive the experience through nightmares and flashbacks, have difficulty sleeping, and feel detached or estranged. These symptoms may occur within the first 4 days after exposure to the stressful event or be delayed for months or years. Symptoms that appear within the first 4 days after exposure to a stressful event are generally diagnosed as acute stress reaction or combat stress. Symptoms that persist longer than 4 days are diagnosed as acute stress disorder. If the symptoms continue for more than 30 days and significantly disrupt an individual's daily activities, PTSD is diagnosed. PTSD may occur with other mental health conditions, such as depression and substance abuse. Clinicians offer a range of treatments to individuals diagnosed with PTSD, including individual and group therapy and medication to manage symptoms. These treatments are usually delivered in an outpatient setting, but they can include inpatient services if, for example, individuals are at risk of causing harm to themselves. DOD's screening for PTSD occurs during its post-deployment process. During this process, DOD evaluates servicemembers' current physical and mental health and identifies any psychosocial issues commonly associated with deployments, special medications taken during the deployment, and possible deployment-related occupational/environmental exposures. The post-deployment process also includes completion by the servicemember of the post-deployment screening questionnaire, the DD 2796. DOD uses the DD 2796 to assess health status, including identifying servicemembers who may be at risk for developing PTSD following deployment. 
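The diagnostic timeline described above can be read as a simple duration-based rule: symptoms within the first 4 days indicate acute stress reaction or combat stress, symptoms persisting longer than 4 days indicate acute stress disorder, and symptoms continuing past 30 days that significantly disrupt daily activities indicate PTSD. The sketch below only illustrates that description and is not a clinical tool; the function name and the boolean flag are invented for the example.

```python
def classify_stress_symptoms(days_since_onset: int,
                             disrupts_daily_activities: bool = False) -> str:
    """Illustrative mapping of symptom duration to the diagnostic
    categories described in the text (not a clinical algorithm)."""
    if days_since_onset <= 4:
        # Symptoms within the first 4 days after the stressful event
        return "acute stress reaction / combat stress"
    if days_since_onset <= 30 or not disrupts_daily_activities:
        # Symptoms persisting longer than 4 days
        return "acute stress disorder"
    # Symptoms beyond 30 days that significantly disrupt daily activities
    return "PTSD"

print(classify_stress_symptoms(2))
print(classify_stress_symptoms(10))
print(classify_stress_symptoms(45, disrupts_daily_activities=True))
```

Note that, per the text, symptoms beyond 30 days that do not significantly disrupt daily activities remain classified as acute stress disorder in this sketch.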
In addition to questions about demographics and general health, including questions about general mental health, the DD 2796 includes four questions used to screen servicemembers for PTSD. The four questions are: Have you ever had any experience that was so frightening, horrible, or upsetting that, in the past month, you

have had any nightmares about it or thought about it when you did not want to?

tried hard not to think about it or went out of your way to avoid situations that remind you of it?

were constantly on guard, watchful, or easily startled?

felt numb or detached from others, activities, or your surroundings?

The completed DD 2796 is reviewed by a DOD health care provider who conducts a face-to-face interview to discuss any deployment-related health concerns with the servicemember. Health care providers who review the DD 2796 may include physicians, physician assistants, nurse practitioners, or independent duty medical technicians--enlisted personnel who receive advanced training to provide treatment and administer medications. DOD provides guidance for health care providers using the DD 2796 and screening servicemembers' physical and mental health. The guidance gives background information to health care providers on the purpose of the various screening questions on the DD 2796 and highlights the importance of a health care provider's clinical judgment when interviewing and discussing responses to the DD 2796. Health care providers may make a referral for a further mental health or combat/operational stress reaction evaluation by indicating on the DD 2796 that this evaluation is needed. When a DOD health care provider refers an OEF/OIF servicemember for a further mental health or combat/operational stress reaction evaluation, the provider checks the appropriate evaluation box on the DD 2796 and gives the servicemember information about PTSD.
The provider does not generally arrange a mental health evaluation appointment for a servicemember who receives a referral. See figure 1 for the portion of the DD 2796 that is used to indicate that a referral for a further mental health or combat/operational stress reaction evaluation is needed. DOD's health care system, TRICARE, delivers health care services to over 9 million individuals. Health care services, which include mental health services, are provided by DOD personnel in military treatment facilities or through civilian health care providers, who may be either network providers or nonnetwork providers. A military treatment facility is a military hospital or clinic on or near a military base. Network providers have a contractual agreement with TRICARE to provide health care services and are part of the TRICARE network. Nonnetwork providers may accept TRICARE allowable charges for delivering health care services or expect the beneficiary to pay the difference between the provider's fee and TRICARE's allowable charge for services. VA's health care system includes medical facilities, community-based outpatient clinics, and Vet Centers. VA medical facilities offer services which range from primary care to complex specialty care, such as cardiac or spinal cord injury care. VA's community-based outpatient clinics are an extension of VA's medical facilities and mainly provide primary care services. Vet Centers offer readjustment and family counseling, employment services, bereavement counseling, and a range of social services to assist veterans in readjusting from wartime military service to civilian life. Vet Centers are also community points of access for many returning veterans, providing them with information and referrals to VA medical facilities. In January 2004, DOD implemented the Deployment Health Quality Assurance Program.
As part of the program, each military service branch must implement its own quality assurance program and report quarterly to DOD on the status and findings of the program. The program requires military installation site visits by DOD and military service branch officials to review individual medical records to determine, in part, whether the DD 2796 was completed. The program also requires a monthly report from the Army Medical Surveillance Activity (AMSA), which maintains a database of all servicemembers' completed DD 2796s. DOD uses the information from the military service branches, site visits, and AMSA to develop an annual report on its Deployment Health Quality Assurance Program. DOD offers an extended health care benefit to some OEF/OIF veterans for a specific period of time, and VA offers health care services that include specialized PTSD services. For some OEF/OIF veterans, DOD offers three health care benefit options through the Transitional Assistance Management Program (TAMP) under TRICARE, DOD's health care system. The three benefit options are offered for 180 days following discharge or release from active duty. In addition, OEF/OIF veterans may purchase health care benefits through DOD's Continued Health Care Benefit Program (CHCBP) for 18 months. VA also offers health care services to OEF/OIF veterans following their discharge or release from active duty. VA's health benefits include health care services, including specialized PTSD services, which are delivered by clinicians who have concentrated their clinical work in the area of PTSD treatment and who work as a team to coordinate veterans' treatment. Through TAMP, DOD provides health care benefits that allow some OEF/OIF veterans to obtain health care services, which include mental health services, for 180 days following discharge or release from active duty. This includes services for those who may be at risk for developing PTSD. 
These OEF/OIF veterans can choose one of three TRICARE health care benefit options through TAMP. While the three options have no premiums, two of the options have deductibles and copayments and allow access to a larger number of providers. The options are:

TRICARE Prime--a managed care option that allows OEF/OIF veterans to obtain, without a referral, mental health services directly from a mental health provider in the TRICARE network of providers with no cost for services.

TRICARE Extra--a preferred provider option that allows OEF/OIF veterans to obtain, without a referral, mental health services directly from a mental health provider in the TRICARE network of providers. Beneficiaries pay a deductible and a share of the cost of services.

TRICARE Standard--a fee-for-service option that allows OEF/OIF veterans to obtain, without a referral, mental health services directly from any mental health provider, including those outside the TRICARE network of providers. Beneficiaries pay a deductible and a larger share of the costs of services than under the TRICARE Extra option.

See table 1 for a description of the beneficiary costs associated with each TRICARE option. In addition, OEF/OIF veterans may purchase DOD health care benefits through CHCBP for 18 months. CHCBP began on October 1, 1994, and like TAMP, the program provides health care benefits, including mental health services, for veterans making the transition to civilian life. Although benefits under this plan are similar to those offered under TRICARE Standard, the program is administered by a TRICARE health care contractor and is not part of TRICARE. OEF/OIF veterans must purchase the extended benefit within 60 days after their 180-day TAMP benefit ends. CHCBP premiums in 2006 were $311 for individual coverage and $665 for family coverage per month.
Reserve and National Guard OEF/OIF veterans who commit to future service can extend their health care benefits after their CHCBP or TAMP benefits expire by purchasing an additional benefit through the TRICARE Reserve Select (TRS) program. As of January 1, 2006, premiums under TRS are $81 for individual coverage and $253 for family coverage per month. DOD also offers a service, Military OneSource, that provides information and counseling resources to OEF/OIF veterans for 180 days after discharge from the military. Military OneSource is a 24-hour-a-day, 7-day-a-week information and referral service provided by DOD at no cost to veterans. Military OneSource provides OEF/OIF veterans up to six free counseling sessions for each topic with a community-based counselor and also provides referrals to mental health services through TRICARE. VA also offers health care services to OEF/OIF veterans, and these services include mental health services that can be used for evaluation and treatment of PTSD. VA offers all of its health care services to OEF/OIF veterans through its health care system at no cost for 2 years following these veterans' discharge or release from active duty. VA's mental health services, which are offered on an outpatient or inpatient basis, include individual and group counseling, education, and drug therapy. For those veterans with PTSD whose condition cannot be managed in a primary care or general mental health setting, VA has specialized PTSD services at some of its medical facilities. These services are delivered by clinicians who have concentrated their clinical work in the area of PTSD treatment. The clinicians work as a team to coordinate veterans' treatment and offer expertise in a variety of disciplines, such as psychiatry, psychology, social work, counseling, and nursing. Like VA's general mental health services, VA's specialized PTSD services are available on both an outpatient and inpatient basis.
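For illustration, the premium figures above imply the following maximum premium outlays over a full 18 months of CHCBP coverage. This is a simple sketch that ignores deductibles and cost shares; the variable names are invented, and the 2006 rates are those stated in the text.

```python
# Monthly premiums cited above (2006 rates, in dollars).
chcbp_monthly = {"individual": 311, "family": 665}  # CHCBP, up to 18 months
trs_monthly = {"individual": 81, "family": 253}     # TRICARE Reserve Select

CHCBP_MONTHS = 18
for coverage, premium in chcbp_monthly.items():
    total = premium * CHCBP_MONTHS
    print(f"CHCBP {coverage}: ${premium}/month, ${total:,} over {CHCBP_MONTHS} months")
# individual: $5,598 over 18 months; family: $11,970 over 18 months
```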
Table 2 lists the various outpatient and inpatient specialized PTSD treatment programs available in VA. (See 38 U.S.C. §§ 1710(e)(1)(D), 1712A(a)(2)(B) (2000), and VHA Directive 2004-017, Establishing Combat Veteran Eligibility.) OEF/OIF veterans can receive VA health care services, including mental health services, without being subject to copayments or other costs for 2 years after discharge or release from active duty. After the 2-year benefit ends, some OEF/OIF veterans without a service-connected disability or with higher incomes may be subject to a copayment to obtain VA health care services. VA assigns veterans who apply for hospital and medical services to one of eight priority groups. Priority is generally determined by a veteran's degree of service-connected or other disability or by financial need. VA gives veterans in Priority Group 1 (50 percent or higher service-connected disability) the highest preference for services and gives lowest preference to those in Priority Group 8 (no disability and with income exceeding VA guidelines). In addition to the 2-year mental health benefit, VA's 207 Vet Centers offer counseling services to all OEF/OIF veterans with combat experience, with no time limitation or cost to the veteran for the benefit. Vet Centers are also authorized to provide counseling services to veterans' family members to the extent this is necessary for the veteran's post-war readjustment to civilian life. VA Vet Center counselors may refer a veteran to VA mental health services when appropriate. Using data provided by DOD from the DD 2796s, we found that about 5 percent of the OEF/OIF servicemembers in our review may have been at risk for developing PTSD, and over 20 percent of those received referrals for further mental health or combat/operational stress reaction evaluations. About 5 percent of the 178,664 OEF/OIF servicemembers in our review responded positively to three or four of the four PTSD screening questions on the DD 2796.
According to the clinical practice guideline jointly developed by VA and DOD, individuals who respond positively to three or four of the four PTSD screening questions may be at risk for developing PTSD. Of those OEF/OIF servicemembers who may have been at risk for PTSD, 22 percent were referred for further mental health or combat/operational stress reaction evaluations. Of the 178,664 OEF/OIF servicemembers who were deployed in support of OEF/OIF from October 1, 2001, through September 30, 2004, and were in our review, 9,145--or about 5 percent--may have been at risk for developing PTSD. These OEF/OIF servicemembers responded positively to three or four of the four PTSD screening questions on the DD 2796. Compared with OEF/OIF servicemembers in other service branches of the military, more OEF/OIF servicemembers from the Army and Marines provided positive answers to three or four of the PTSD screening questions--about 6 percent for the Army and about 4 percent for the Marines (see fig. 2). The positive response rates for the Army and Marines are consistent with research that shows that these servicemembers face a higher risk of developing PTSD because of the intensity of the conflict they experienced in Afghanistan and Iraq. We also found that OEF/OIF servicemembers who were members of the National Guard and Reserves were not more likely to be at risk for developing PTSD than other OEF/OIF servicemembers. Concerns have been raised that OEF/OIF servicemembers from the National Guard and Reserve are at particular risk for developing PTSD because they might be less prepared for the intensity of the OEF/OIF conflicts. However, the percentage of OEF/OIF servicemembers in the National Guard and Reserves who answered positively to three or four PTSD screening questions was 5.2 percent, compared to 4.9 percent for other OEF/OIF servicemembers. 
Of the 9,145 OEF/OIF servicemembers who may have been at risk for developing PTSD, we found that 2,029, or 22 percent, received a referral--that is, had a DD 2796 indicating that they needed a further mental health or combat/operational stress reaction evaluation. The Army and Air Force servicemembers had the highest rates of referral--23.0 percent and 22.6 percent, respectively (see fig. 3). Although the Marines had the second largest percentage of servicemembers who provided three or four positive responses to the PTSD screening questions (3.8 percent), the Marines had the lowest referral rate (15.3 percent) among the military service branches. During the post-deployment process, DOD relies on the clinical judgment of its health care providers to determine which servicemembers should receive referrals for further mental health or combat/operational stress reaction evaluations. Following a servicemember's completion of the DD 2796, DOD requires its health care providers to interview all servicemembers. For these interviews, DOD's guidance for health care providers using the DD 2796 advises the providers to "pay particular attention to" servicemembers who provide positive responses to three or four of the four PTSD screening questions on their DD 2796s. According to DOD officials, not all of the servicemembers with three or four positive responses to the PTSD screening questions need referrals for further evaluations. Instead, DOD instructs health care providers to interview the servicemembers, review their medical records for past medical history and, based on this information, determine which servicemembers need referrals. DOD expects its health care providers to exercise their clinical judgment in determining which servicemembers need referrals.
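The screening rule and the counts above can be reproduced in a short sketch: a servicemember counts as potentially at risk when 3 or 4 of the 4 DD 2796 PTSD screening questions are answered positively, per the VA/DOD clinical practice guideline. The function and variable names below are invented for illustration; the figures are those reported in the text.

```python
def at_risk_for_ptsd(responses):
    """Per the VA/DOD guideline described above, a servicemember may be
    at risk for PTSD when 3 or 4 of the 4 DD 2796 PTSD screening
    questions are answered positively."""
    if len(responses) != 4:
        raise ValueError("the DD 2796 PTSD screen has exactly 4 questions")
    return sum(bool(r) for r in responses) >= 3

# Counts reported in the review (deployments from Oct. 2001 through Sept. 2004)
reviewed = 178_664   # OEF/OIF servicemembers whose DD 2796s were reviewed
at_risk = 9_145      # answered 3 or 4 screening questions positively
referred = 2_029     # of the at-risk group, received referrals

print(f"at-risk rate: {at_risk / reviewed:.1%}")                 # about 5 percent
print(f"referral rate among at-risk: {referred / at_risk:.0%}")  # 22 percent
```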
DOD's guidance suggests that its health care providers consider, when exercising their clinical judgment, factors such as servicemembers' behavior, reasons for positive responses to any of the four PTSD screening questions on the DD 2796, and answers to other questions on the DD 2796. However, DOD has not identified whether these factors or other factors are used by its health care providers in making referral decisions. As a result, DOD cannot provide reasonable assurance that all OEF/OIF servicemembers who need referrals for further mental health or combat/operational stress reaction evaluations receive such referrals. DOD has a quality assurance program that, in part, monitors the completion of the DD 2796, but the program is not designed to evaluate health care providers' decisions to issue referrals for mental health and combat/operational stress reaction evaluations. As part of its review, the Deployment Health Quality Assurance Program requires DOD's military service branches to collect information from medical records on, among other things, the percentage of DD 2796s completed in each military service branch and whether referrals were made. However, the quality assurance program does not require the military service branches to link responses on the four PTSD screening questions to the likelihood of receiving a referral. Therefore, the program could not provide information on why some OEF/OIF servicemembers with three or more positive responses to the PTSD screening questions received referrals while others did not. DOD is conducting a study that is intended to evaluate the outcomes and quality of care provided by DOD's health care system. This study is part of DOD's National Quality Management Program. The study is intended to track those who responded positively to three or four PTSD screening questions on the DD 2796 and who also used the form to indicate that they had other mental health issues, such as feeling depressed.
One of the objectives of the study is to determine the percentage of those who were referred for further mental health or combat/operational stress reaction evaluations, based on their responses on the DD 2796. Many OEF/OIF servicemembers have engaged in the type of intense and prolonged combat that research has shown to be highly correlated with the risk for developing PTSD. During DOD's post-deployment process, DOD relies on its health care providers to assess the likelihood of OEF/OIF servicemembers being at risk for developing PTSD. As part of this effort, providers use their clinical judgment to identify those servicemembers whose mental health needs further evaluation. Because DOD entrusts its health care providers with screening OEF/OIF servicemembers to assess their risk for developing PTSD, the department should have confidence that these providers are issuing referrals to all servicemembers who need them. Variation among DOD's military service branches in the frequency with which their providers issued referrals to OEF/OIF servicemembers with identical results from the screening questionnaire suggests the need for more information about the decision to issue referrals. Knowing the factors upon which DOD health care providers based their clinical judgments in issuing referrals could help explain variation in the referral rates and allow DOD to provide reasonable assurance that such judgments are being exercised appropriately. However, DOD has not identified the factors its health care providers used in determining why some servicemembers received referrals while other servicemembers with the same number of positive responses to the four PTSD screening questions did not. 
We recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Health Affairs to identify the factors that DOD health care providers use in issuing referrals for further mental health or combat/operational stress reaction evaluations to explain provider variation in issuing referrals. In commenting on a draft of this report, DOD concurred with our conclusions and recommendation. DOD's comments are reprinted in appendix II. DOD noted that it plans a systematic evaluation of referral patterns for the post-deployment health assessment through the National Quality Management Program and that an ongoing validation study of the post-deployment health assessment and the post-deployment health reassessment is projected for completion in October 2006. Despite its planned implementation of our recommendation to identify the factors that its health care providers use to make referrals, DOD disagreed with our finding that it has not provided reasonable assurance that OEF/OIF servicemembers receive referrals for further mental health evaluations when needed. To support its position, DOD identified several factors in its comments that it stated may explain why some OEF/OIF servicemembers with the same number of positive responses to the four PTSD screening questions are referred while others are not. For example, DOD health care providers may employ watchful waiting instead of a referral for a further evaluation for servicemembers with three or four positive responses to the PTSD screening questions. Additionally, DOD stated in its technical comments that providers may use the referral category of "other" rather than place a mental health label on a referral by checking the further evaluation categories of mental health or combat/operational stress reaction. 
DOD also stated in its technical comments that health care providers may not place equal value on the four PTSD screening questions and may only refer servicemembers who indicate positive responses to certain questions. Although DOD identified several factors that may explain why some servicemembers are referred while others are not, DOD did not provide data on the extent to which these factors affect health care providers' clinical judgments on whether to refer OEF/OIF servicemembers with three or four positive responses to the four PTSD screening questions. Until DOD has better information on how its health care providers use these factors when applying their clinical judgment, DOD cannot reasonably assure that servicemembers who need referrals receive them. DOD's plans to develop this information should lead to reasonable assurance that servicemembers who need referrals receive them. DOD also described in its written comments its philosophy of clinical intervention for combat and operational stress reactions that could lead to PTSD. Central to its approach is the belief that attempting to diagnose normal reactions to combat and assigning too much significance to symptoms when not warranted may do more harm to a servicemember than good. While we agree that PTSD is a complex disorder that requires DOD health care providers to make difficult clinical decisions, issues relating to diagnosis and treatment are not germane to the referral issues we reviewed and were beyond the scope of our work. Instead, our work focused on the referral of servicemembers who may be at risk for PTSD because they answered three or four of the four PTSD screening questions positively, not whether they should be diagnosed and treated. Further, DOD implied that our position is that servicemembers must have a referral to access mental health care, but there are other avenues of care for servicemembers where a referral is not needed. 
We do not assume that servicemembers must have a referral in order to access these health care services. Rather, in this report we identify the health care services available to OEF/OIF servicemembers who have been discharged or released from active duty and focus on how decisions are made by DOD providers regarding referrals for servicemembers who may be at risk for PTSD. DOD also provided technical comments, which we incorporated as appropriate. VA provided comments on a draft of this report by e-mail. VA concurred with the facts in the draft report that related to VA. We are sending copies of this report to the Secretary of Veterans Affairs; the Secretary of Defense; the Secretaries of the Army, the Air Force, and the Navy; the Commandant of the Marine Corps; and appropriate congressional committees. We will also provide copies to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-7101 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix III. To describe the mental health benefits available to veterans who served in military conflicts in Afghanistan and Iraq--Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF), we reviewed the Department of Defense (DOD) health care benefits and Department of Veterans Affairs (VA) mental health services available for these veterans. We reviewed the policies, procedures, and guidance issued by DOD's TRICARE and VA's health care systems and interviewed DOD and VA officials about the benefits and services available for post-traumatic stress disorder (PTSD). 
We defined an OEF/OIF veteran as a servicemember who was deployed in support of OEF or OIF from October 1, 2001, through September 30, 2004, and had since been discharged or released from active duty status. We classified National Guard and Reserve members as veterans if they had been released from active duty status after their deployment in support of OEF/OIF. We interviewed officials in DOD's Office of Health Affairs about health care benefits, including length of coverage, offered to OEF/OIF veterans who are members of the National Guard and Reserves and have left active duty status. We attended an Air Force Reserve and National Guard training seminar in Atlanta, Georgia, for mental health providers, social workers, and clergy to obtain information on PTSD mental health services offered to National Guard and Reserve members returning from deployment. To obtain information on DOD's Military OneSource, we interviewed DOD officials and the manager of the Military OneSource contract about the services available and the procedures for referring OEF/OIF veterans for mental health services. We interviewed representatives from the Army, Air Force, Marines, and Navy about their use of Military OneSource. We interviewed VA headquarters officials, including mental health experts, to obtain information about VA's specialized PTSD services. We reviewed applicable statutes and policies and interviewed officials to identify the services offered by VA's Vet Centers for OEF/OIF veterans. In addition, to inform our understanding of the issues related to DOD's post-deployment process, we interviewed veterans' service organization representatives from The American Legion, Disabled American Veterans, and Vietnam Veterans of America. To determine the number of OEF/OIF servicemembers who may be at risk for developing PTSD and the number of these servicemembers who were referred for further mental health evaluations, we analyzed computerized DOD data. 
We worked with officials at DOD's Defense Manpower Data Center to identify the population of OEF/OIF servicemembers from the Contingency Tracking System deployment and activation data files. We then worked with officials from DOD's Army Medical Surveillance Activity (AMSA) to identify which OEF/OIF servicemembers had responded positively to one, two, three, or four of the four PTSD screening questions on the DD 2796 questionnaire. AMSA maintains a database of all servicemembers' completed DD 2796s. The DD 2796 is a questionnaire that DOD uses to identify servicemembers who may be at risk for developing PTSD after their deployment and contains the four PTSD screening questions that may identify these servicemembers. The four questions ask: Have you ever had any experience that was so frightening, horrible, or upsetting that, in the past month, you (1) have had any nightmares about it or thought about it when you did not want to; (2) tried hard not to think about it or went out of your way to avoid situations that remind you of it; (3) were constantly on guard, watchful, or easily startled; or (4) felt numb or detached from others, activities, or your surroundings? Because a servicemember may have been deployed more than once, some servicemembers' records at AMSA included more than one completed DD 2796. We obtained information from the DD 2796 that was completed following the servicemembers' most recent deployment in support of OEF/OIF. We removed from our review servicemembers who either did not have a DD 2796 on file at AMSA or completed a DD 2796 prior to DOD adding the four PTSD screening questions to the questionnaire in April 2003. In all, we reviewed DD 2796s completed by 178,664 OEF/OIF servicemembers. 
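The screening tally described above is simple enough to sketch in code. The following is an illustrative sketch only: the field names and record layout are hypothetical, and the three-or-more threshold reflects the joint VA/DOD clinical practice guideline discussed in the next paragraph.

```python
# Illustrative sketch: tally positive responses to the four DD 2796
# PTSD screening questions and flag possible risk. Question keys and
# the record layout are hypothetical; the threshold of three or more
# positive responses follows the joint VA/DOD clinical practice guideline.

PTSD_QUESTIONS = ["nightmares", "avoidance", "hypervigilance", "numbness"]
RISK_THRESHOLD = 3  # three or four positive responses

def positive_count(responses):
    """Count 'yes' answers among the four screening questions."""
    return sum(1 for q in PTSD_QUESTIONS if responses.get(q) == "yes")

def may_be_at_risk(responses):
    """True when the response pattern meets the guideline threshold."""
    return positive_count(responses) >= RISK_THRESHOLD

# Example: a servicemember answering yes to three of the four questions
record = {"nightmares": "yes", "avoidance": "yes",
          "hypervigilance": "yes", "numbness": "no"}
print(positive_count(record), may_be_at_risk(record))  # 3 True
```

Note that the tally alone does not issue a referral; as discussed elsewhere in this report, the referral decision rests on the provider's clinical judgment.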
To determine the criteria we would use to identify OEF/OIF servicemembers who may have been at risk for developing PTSD, we reviewed the clinical practice guideline for PTSD developed jointly by VA and DOD, which states that three or more positive responses to the four questions indicate a risk for developing PTSD. Further, we reviewed a retrospective study that found that individuals who provided three or four positive responses to the four PTSD screening questions were highly likely to have been given a diagnosis of PTSD prior to the screening. To determine the number of OEF/OIF servicemembers who may be at risk for developing PTSD and were referred for further mental health evaluations, we asked AMSA to identify OEF/OIF servicemembers whose DD 2796 forms indicated that they were referred for further mental health or combat/operational stress reaction evaluations by a DOD health care provider. To examine whether DOD has reasonable assurance that OEF/OIF veterans who needed further mental health evaluations received referrals, we reviewed DOD's policies and guidance, as well as policies and guidance for each of the military service branches (Army, Navy, Air Force, and Marines). Based on electronic testing of logical elements and our previous work on the completeness and accuracy of AMSA's centralized database, we concluded that the data were sufficiently reliable for the purposes of this report. The Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA) also directed us to determine the number of OEF/OIF veterans who, because of their referrals, accessed DOD or VA health care services to obtain a further mental health or combat/operational stress reaction evaluation. However, as discussed with the committees of jurisdiction, we could not use data from OEF/OIF veterans' DD 2796 forms to determine if veterans accessed DOD or VA health care services because of their mental health referrals. 
DOD officials explained that the referral checked on the DD 2796 cannot be linked to a subsequent health care visit using DOD computerized data. Therefore, we could not determine how many OEF/OIF veterans accessed DOD or VA health care services for further mental health evaluations because of their referrals. We conducted our work from December 2004 through April 2006 in accordance with generally accepted government auditing standards. In addition to the contact named above, key contributors to this report were Marcia A. Mann, Assistant Director; Mary Ann Curran, Martha A. Fisher, Krister Friday, Lori Fritz, and Martha Kelly.
Many servicemembers supporting Operation Enduring Freedom (OEF) and Operation Iraqi Freedom (OIF) have engaged in intense and prolonged combat, which research has shown to be strongly associated with the risk of developing post-traumatic stress disorder (PTSD). GAO, in response to the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, (1) describes DOD's extended health care benefit and VA's health care services for OEF/OIF veterans; (2) analyzes DOD data to determine the number of OEF/OIF servicemembers who may be at risk for PTSD and the number referred for further mental health evaluations; and (3) examines whether DOD can provide reasonable assurance that OEF/OIF servicemembers who need further mental health evaluations receive referrals. DOD offers an extended health care benefit to some OEF/OIF veterans for a specified time period, and VA offers health care services that include specialized PTSD services. DOD's benefit provides health care services, including mental health services, to some OEF/OIF veterans for 180 days following discharge or release from active duty. Additionally, some veterans may purchase extended benefits for up to 18 months. VA also offers health care services to OEF/OIF veterans following their discharge or release from active duty. VA offers health benefits for OEF/OIF veterans at no cost for 2 years following discharge or release from active duty. After their 2-year benefit expires, some OEF/OIF veterans may continue to receive care under VA's eligibility rules. Using data provided by DOD, GAO found that 9,145 or 5 percent of the 178,664 OEF/OIF servicemembers in its review may have been at risk for developing PTSD. DOD uses a questionnaire to identify those who may be at risk for developing PTSD after deployment. DOD providers interview servicemembers after they complete the questionnaire. 
A joint VA/DOD guideline states that servicemembers who respond positively to three or four of the questions may be at risk for PTSD. Further, a retrospective study found that individuals who provided three or four positive responses to the four PTSD screening questions were highly likely to have been given a diagnosis of PTSD prior to the screening. Of the 5 percent who may have been at risk, GAO found that DOD providers referred 22 percent, or 2,029, for further mental health evaluations. DOD cannot provide reasonable assurance that OEF/OIF servicemembers who need referrals receive them. According to DOD officials, not all of the servicemembers with three or four positive responses to the PTSD screening questions will need referrals for further mental health evaluations. DOD relies on providers' clinical judgment to decide who needs a referral. GAO found that DOD health care providers varied in the frequency with which they issued referrals to OEF/OIF servicemembers with three or more positive responses; the Army referred 23 percent, the Marines about 15 percent, the Navy 18 percent, and the Air Force about 23 percent. However, DOD did not identify the factors its providers used in determining which OEF/OIF servicemembers needed referrals. Knowing the factors upon which DOD health care providers based their clinical judgments in issuing referrals could help explain variation in the referral rates and allow DOD to provide reasonable assurance that such judgments are being exercised appropriately.
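As a check on the figures above, the headline percentages can be recomputed directly from the counts cited in this report (a minimal sketch using only those reported numbers):

```python
# Recompute the headline percentages from the counts in this report.
reviewed = 178_664   # OEF/OIF servicemembers with a usable DD 2796
at_risk = 9_145      # three or four positive PTSD screening responses
referred = 2_029     # at-risk servicemembers issued a referral

pct_at_risk = round(100 * at_risk / reviewed)   # about 5 percent
pct_referred = round(100 * referred / at_risk)  # about 22 percent
print(pct_at_risk, pct_referred)  # 5 22
```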
Today's Army faces an enormous challenge to balance risks and resources in order to meet its many missions. Since 1990, active Army ranks have been reduced from 770,000 to 495,000 personnel, a reduction of about 36 percent. Simultaneously, world events have dictated that forces be trained and ready to respond to potential high-intensity missions in areas such as Korea and the Persian Gulf while conducting peace enhancement operations around the world. The Army currently has 10 active combat divisions compared to the 18 it had at the start of Operation Desert Storm in 1991. Four of the 10 divisions are considered contingency divisions and would be the first to deploy in the event of a major theater war. These units are the 82nd Airborne, 101st Air Assault, 3rd Infantry, and 1st Cavalry divisions. The 2nd Infantry Division, while not a contingency force division, is already deployed in Korea. The remaining five divisions, which are the focus of my testimony, are expected to deploy in the event of a second simultaneous or nearly simultaneous major theater contingency or as reinforcements for a larger-than-expected first contingency. These units are the 1st Armored, 1st Infantry, 4th Infantry, 10th Infantry, and 25th Infantry divisions. Also, these divisions have been assigned the bulk of the recent peacekeeping missions in Bosnia and Haiti, and the 4th Infantry division over the last 2 years has been conducting the Army's advanced war-fighting experiment. Appendix I provides a list of the Army's current active divisions and the locations of each division's associated brigades. In the aggregate, the Army's later-deploying divisions were assigned 66,053, or 93 percent, of their 70,665 authorized personnel at the beginning of fiscal year 1998. However, aggregate numbers do not adequately reflect the condition that exists within individual battalions, companies, and platoons of these divisions. 
This is because excess personnel exist in some grades, ranks, and skills, while shortages exist in others. For example, while the 1st Armored Division was staffed at 94 percent in the aggregate, its combat support and service support specialties were filled at below 85 percent, and captains and majors were filled at 73 percent. In addition, a portion of each later-deploying division exists only on paper because all authorized personnel have not been assigned. All these divisions contain some squads, crews, and platoons in which no personnel or a minimum number of personnel are assigned. Assigning a minimum number of personnel to a crew means having fewer personnel than needed to fully accomplish wartime missions; for example, having five soldiers per infantry squad rather than nine, tank crews with three soldiers instead of four, or artillery crews with six soldiers rather than nine. We found significant personnel shortfalls in all the later-deploying divisions. For example: At the 10th Infantry Division, only 138 of 162 infantry squads were fully or minimally filled, and 36 of the filled squads were unqualified. At the 2nd and 3rd brigades of the 25th Infantry Division, 52 of 162 infantry squads were minimally filled or had no personnel assigned. At the 1st Brigade of the 1st Infantry Division, only 56 percent of the authorized infantry soldiers for its Bradley Fighting Vehicles were assigned, and in the 2nd Brigade, 21 of 48 infantry squads had no personnel assigned. At the 3rd Brigade of the 1st Armored Division, only 16 of 116 M1A1 tanks had full crews and were qualified, and in one of the Brigade's two armor battalions, 14 of 58 tanks had no crewmembers assigned because the personnel were deployed to Bosnia. In addition, at the Division's engineer brigade in Germany, 11 of 24 bridge teams had no personnel assigned. At the 4th Infantry Division, 13 of 54 squads in the engineer brigade had no personnel assigned or had fewer personnel assigned than required. 
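The way an aggregate staffing figure can mask category-level shortages is illustrated numerically below. The category counts in this sketch are hypothetical, chosen only to mirror the pattern in the 1st Armored Division example above (a healthy aggregate fill rate alongside a 73 percent fill of captains and majors and combat support below 85 percent):

```python
# Illustrative sketch: an aggregate fill rate can look healthy while
# individual grades and specialties are badly short. The assigned and
# authorized counts below are hypothetical.

def fill_rate(assigned, authorized):
    """Staffing fill rate as a whole percentage."""
    return round(100 * assigned / authorized)

# category: (assigned, authorized) -- hypothetical counts
unit = {
    "infantry/armor/artillery": (4_900, 5_000),
    "combat support":           (1_700, 2_000),
    "captains and majors":      (220, 300),
}

assigned = sum(a for a, _ in unit.values())
authorized = sum(n for _, n in unit.values())
print("aggregate:", fill_rate(assigned, authorized))  # aggregate: 93
for name, (a, n) in unit.items():
    print(name, fill_rate(a, n))  # category-level shortages surface here
```

In this example the unit reports 93 percent staffing in the aggregate even though captains and majors are filled at only 73 percent, which is the masking effect discussed below with respect to the readiness reporting system.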
The significance of personnel shortfalls in later-deploying divisions cannot be adequately captured solely in terms of overall numbers. The rank, grade, and experience of the personnel assigned must also be considered. For example, captains and majors are in short supply Army-wide due to drawdown initiatives undertaken in recent years. The five later-deploying divisions had only 91 percent and 78 percent of the captains and majors authorized, respectively, but 138 percent of the lieutenants authorized. The result is that unit commanders must fill leadership positions in many units with less experienced officers than Army doctrine requires. For example, in the 1st Brigade of the 1st Infantry Division, 65 percent of the key staff positions designated to be filled by captains were actually filled by lieutenants or by captains who were not graduates of the Advanced Course. We found that three of the five battalion maintenance officers, four of the six battalion supply officers, and three of the four battalion signal officers were lieutenants rather than captains. While this situation represents an excellent opportunity for the junior officers, it also represents a situation in which critical support functions are being guided by officers without the required training or experience. There is also a significant shortage of NCOs in the later-deploying divisions. Again, within the 1st Brigade, 226, or 17 percent, of the 1,450 total NCO authorizations were not filled at the time of our visit. As was the case in all the divisions, a significant shortage was at the first-line supervisor, sergeant E-5 level. At the beginning of fiscal year 1998, the 5 later-deploying divisions were short nearly 1,900 of the total 25,357 NCOs authorized, and as of February 15, 1998, this shortage had grown to almost 2,200. 
In recent years, in reports and testimony before the Congress, we discussed the Status of Resources and Training System (SORTS), which is used to measure readiness, and reported on the need for improvements. SORTS data for units in the later-deploying divisions have often reflected a high readiness level for personnel because the system uses aggregate statistics to assess personnel readiness. For example, a unit that is short 20 percent of all authorized personnel in the aggregate could still report the ability to undertake most of its wartime mission, even though up to 25 percent of the key leaders and personnel with critical skills may not be assigned. Using aggregate data to reflect personnel readiness masks the underlying personnel problems I have discussed today, such as shortages by skill level, rank, or grade. Compounding these problems are high levels of personnel turnover, incomplete squads and crews, and frequent deployments, none of which are part of the readiness calculation criteria. Yet, when considered collectively, these factors create situations in which commanders may have difficulty developing unit cohesion, accomplishing training objectives, and maintaining readiness. Judging by our analysis of selected commanders' comments submitted with their SORTS reports and other available data, the problems I have just noted are real. However, some commanders apparently do not consider them serious enough to warrant a downgrade in the reported readiness rating. For example, at one engineer battalion, the commander told us his unit had lost the ability to provide sustained engineer support to the division. His assessment appeared reasonable, since company- and battalion-level training for the past 4 months had been canceled due to the deployment of battalion leaders and personnel to operations in Bosnia. As a result of this deployment, elements of the battalion left behind had only 33 to 55 percent of their positions filled. 
The commander of this battalion, however, reported an overall readiness assessment of C-2, which was based in part on a personnel level that was over 80 percent in the aggregate. The commander also reported that he would be able to achieve a C-1 status in only 20 training days. This does not seem realistic, given the shortages we noted. We found similar disconnects between readiness conditions as reported in SORTS and actual unit conditions at other armor, infantry, and support units. Many factors have contributed to shortfalls of personnel in the Army's later-deploying divisions, including (1) the Army's priority for assigning personnel to units, commands, and agencies; (2) Army-wide shortages of some types of personnel; (3) peacekeeping operations; and (4) the assignment of soldiers to joint and other Army command, recruiting, and base management functions. The Army uses a tiered system to allocate personnel and other resources to its units. The Army gives top priority to staffing DOD agencies; major commands such as the Central Command, the European Command, and the Pacific Command; the National Training Center; and the Army Rangers and Special Forces Groups. These entities receive 98 to 100 percent of the personnel authorized for each grade and each military occupational specialty. The 2nd Infantry Division, which is deployed in Korea, and the four contingency divisions are second in priority. Although each receives 98 to 100 percent of its aggregate authorized personnel, the total personnel assigned are not required to be evenly distributed among grades or military specialties. The remaining five later-deploying divisions receive a proportionate share of the remaining forces. Unlike priority one and two forces, the later-deploying units have no minimum personnel level. Army-wide shortages of personnel add to the shortfalls of later-deploying divisions. For example, in fiscal year 1997, the Army's enlistment goal for infantrymen was 16,142. 
However, only about 11,300 of those needed were enlisted, which increased the existing shortage of infantry soldiers by an additional 4,800 soldiers. As of February 15, 1998, Army-wide shortages existed for 28 Army specialties. Many positions in squads and crews are left unfilled or minimally filled because personnel are diverted to work in key positions where they are needed more. Also, because of shortages of experienced and branch-qualified officers, the Army has instituted an Officer Distribution Plan, which distributes a "fair share" of officers by grade and specialty among the combat divisions. While this plan has helped spread the shortages across all the divisions, we noted significant shortages of officers in certain specialties at the later-deploying divisions. Since 1995, when peacekeeping operations began in Bosnia-Herzegovina, there has been a sustained increase in operations for three of the later-deploying divisions: the 1st Armored Division, the 1st Infantry Division, and the 10th Infantry Division. For example, in fiscal year 1997, the 1st Armored Division was directed 89 times to provide personnel for operations other than war and contingency operations, training exercises, and for other assignments from higher commands. More than 3,200 personnel were deployed a total of nearly 195,000 days for the assignments, 89 percent of which were for operations in Bosnia. Similarly, the average soldier in the 1st Infantry Division was deployed 254 days in fiscal year 1997, primarily in support of peacekeeping operations. Even though the 1st Armored and 1st Infantry Divisions have had 90 percent or more of their total authorized personnel assigned since they began operations in Bosnia, many combat support and service support specialties were substantially understrength, and only three-fourths of field grade officers were in place. 
As a result, the divisions took personnel from nondeploying units to fill the deploying units with the needed number and type of personnel. As a further result, the commanders of nondeploying units have squads and crews with no, or a minimal number of, personnel. Unit commanders have had to shuffle personnel among positions to compensate for shortages. For example, they assign soldiers that exist in the largest numbers--infantry, armor, and artillery--to work in maintenance, supply, and personnel administration due to personnel shortages in these technical specialties; assign soldiers to fill personnel shortages at a higher headquarters or to accomplish a mission for higher headquarters; and assign soldiers to temporary work such as driving buses, serving as lifeguards, and managing training ranges--vacancies, in some cases, which have resulted from civilian reductions on base. At the time of our visit, the 1st Brigade of the 1st Infantry Division had 372, or 87 percent, of its 428 authorized dismount infantry. However, 51 of these 372 soldiers were assigned to duties outside their specialties to fill critical technical shortages, command-directed positions, and administrative and base management activities. These reassignments lowered the actual number of soldiers available for training to 75 percent daily. In Germany, at the 2nd Brigade of the 1st Infantry Division, 21 of 48 infantry squads had no personnel assigned due to shortages. From the remaining 27 squads that were minimally filled, the equivalent of another 5 squads of the Brigade's soldiers were working in maintenance, supply, and administrative specialties to compensate for personnel shortages in those specialties. The end result is that the brigade only had 22 infantry squads with 7 soldiers each rather than 48 squads with 9 soldiers each. 
According to Army officials, the reduction of essential training, along with the cumulative impact of the shortages I just outlined, has resulted in an erosion of readiness. Readiness in the divisions responsible for peacekeeping operations in Bosnia has been especially affected because the challenges imposed by personnel shortages are compounded by frequent deployments. Universally, division officials told us that the shortage of NCOs in the later-deploying divisions is the biggest detriment to overall readiness because crews, squads, and sections are led by lower-level personnel rather than by trained and experienced sergeants. Such a situation impedes effective training because these replacement personnel become responsible for training soldiers in critical skills they themselves may not have been trained to accomplish. At one division, concern was expressed about the potential for a serious training accident because tanks, artillery, and fighting vehicles were being commanded by soldiers without the experience needed to safely coordinate the weapon systems they command. According to Army officials, the rotation of units to Bosnia has also degraded the training and readiness of the divisions providing the personnel. For example, to deploy an 800-soldier task force last year, the Commander of the 3rd Brigade Combat Team had to reassign 63 soldiers within the brigade to serve in infantry squads of the deploying unit, strip nondeploying infantry and armor units of maintenance personnel, and reassign NCOs and support personnel to the task force from throughout the brigade. These actions were detrimental to the readiness of the nondeploying units. For example, gunnery exercises for two armor battalions had to be canceled and 43 of 116 tank crews became unqualified on the weapon system, the number of combat systems out of commission increased, and contractors were hired to perform maintenance. 
According to 1st Armored and 1st Infantry division officials, this situation has reduced their divisions' readiness to the point of not being prepared to execute wartime missions without extensive training and additional personnel. If the later-deploying divisions are required to deploy to a second major theater contingency, the Army plans to fill personnel shortfalls with retired servicemembers, members of the Individual Ready Reserve, and newly trained recruits. The number of personnel to fill the later-deploying divisions could be extensive, since (1) personnel from later-deploying divisions would be transferred to fill any shortages in the contingency units that are first to deploy and (2) these divisions are already short of required personnel. The Army's plan for providing personnel under a scenario involving two major theater contingencies includes unvalidated assumptions. For example, the plan assumes that the Army's training base will be able to quadruple its output on short notice and that all reserve component units will deploy as scheduled. Army officials told us that based on past deployments, not all the assumptions in their plans will be realized, and there may not be sufficient trained personnel to fully man later-deploying divisions within their scheduled deployment times. Finally, if retired personnel or Individual Ready Reserve members are assigned to a unit, training and crew cohesion may not occur prior to deployment because Army officials expect some units to receive personnel just before deployment. Finding solutions to the personnel problems I have discussed today will not be easy, given the Army's many missions and reduced personnel. While I have described serious shortfalls of personnel in each of the later-deploying divisions, this condition is not necessarily new. What is new is the increased operating tempo, largely brought about because of peacekeeping operations, which has exacerbated the personnel shortfalls in these divisions. 
However, before any solutions can be discussed, the Army should determine whether it wants to continue to accept the current condition of its active force, that is, five fully combat-ready divisions and five less than fully combat-capable divisions. The Army has started a number of initiatives that ultimately may help alleviate some of the personnel shortfalls I have described. These initiatives include targeted recruiting goals for infantry and maintenance positions; the advanced war-fighting experiment, which may reduce the number of personnel required for a division through the use of technology; and better integration of active and reserve forces. Efforts to streamline institutional forces may also yield personnel that could be used to fill vacancies such as those noted in my testimony. If such efforts do not yield sufficient personnel or solutions to deal with the shortages we have noted in this testimony, we believe it is important that the Army, at a minimum, review its current plans for rectifying these shortfalls in the event of a second major theater war. In particular, if the Army expects to deploy fully combat-capable divisions for such a war, it should review the viability of alleviating shortfalls predominantly with reservists from the Individual Ready Reserve. This concludes my testimony. I will be happy to answer any questions you may have at this time. 1st Cavalry Division - headquarters and three brigades at Fort Hood, Tex. 3rd Infantry Division - headquarters and two brigades at Fort Stewart, Ga., and one brigade at Fort Benning, Ga. 82nd Airborne Division - headquarters and three brigades at Fort Bragg, N.C. 101st Airborne Division - headquarters and three brigades at Fort Campbell, Ky. 2nd Infantry Division - headquarters and two brigades in Korea, and one brigade at Fort Lewis, Wash. 1st Infantry Division - headquarters and two brigades in Germany, and one brigade at Fort Riley, Kans. 
1st Armored Division - headquarters and two brigades in Germany, and one brigade at Fort Riley, Kans.

4th Infantry Division - headquarters and two brigades at Fort Hood, Tex., and one brigade at Fort Carson, Colo.

10th Mountain Division - headquarters and two brigades at Fort Drum, N.Y.

25th Infantry Division - headquarters and two brigades at Schofield Barracks, Hawaii, and one brigade at Fort Lewis, Wash.
GAO discussed its preliminary findings from its ongoing evaluation of personnel readiness in the Army's five later-deploying divisions, focusing on the: (1) extent of personnel shortages in the divisions and the extent to which these shortages are reflected in readiness reports; (2) key factors contributing to personnel shortages and the impact such shortages have on readiness; (3) Army's plans for correcting such shortages should these divisions be called upon to deploy; and (4) issues to be considered in dealing with personnel shortages. GAO noted that: (1) in the aggregate, the Army's five later-deploying divisions had an average of 93 percent of their personnel on board at the time of GAO's visits; (2) however, aggregate data does not fully reflect the extent of shortages of combat troops, technical specialists, experienced officers, and noncommissioned officers (NCO) that exist in those divisions; (3) the readiness reporting system that contains the aggregate data on these divisions does not fully disclose the impact of personnel shortages on the ability of the divisions' units to accomplish critical wartime tasks; (4) as a result, there is a disconnect between the reported readiness of these forces in formal readiness reports and the actual readiness that GAO observed on its visits; (5) these disconnects exist because the unit readiness reporting system does not consider some information that has a significant impact on a unit's readiness, such as operating tempo, personnel shortfalls in key positions, and crew and squad staffing; (6) the Army's priority in assigning personnel to these divisions, Army-wide shortages of personnel, frequent deployments to peacekeeping missions, and the assignment of soldiers to other tasks outside of their specialty are the primary reasons for personnel shortfalls; (7) the impact of personnel shortages on training and readiness is exacerbated by the extent to which personnel are being used for work outside their specialties or 
units; (8) according to commanders in all the divisions, the collective impact of understaffing squads and crews, transferring NCOs to other jobs away from the crews and squads they are responsible for training, and assigning personnel to other units as fillers for exercises and operations has degraded their capability and readiness; (9) if the Army had to deploy these divisions for a high-intensity conflict, these divisions would fill their units with Individual Ready Reserve soldiers, retired servicemembers, and newly recruited soldiers; (10) however, the Army's plan for providing these personnel includes assumptions that have not been validated, and there may not be enough trained personnel to fully staff the later-deploying divisions within their scheduled deployment times; and (11) solutions, if any, will depend upon how the Army plans to use these divisions in the future.
VHA's outpatient consult process is governed by a national policy that outlines the use of an electronic system for requesting and managing consults and delineates oversight responsibilities at the national, VISN, and VAMC levels. Outpatient consults include requests by physicians or other providers for both clinical consultations and procedures. A clinical consultation is a request seeking an opinion, advice, or expertise regarding evaluation or management of a patient's specific clinical concern, whereas a procedure request is for a specialty care procedure, such as a colonoscopy. The consult process--displayed in figure 1--is governed by VHA's national consult policy, which requires VAMCs to manage consults using a national electronic consult system and to provide timely and appropriate care to veterans. Outpatient consults typically are requested by a veteran's primary care provider using VHA's electronic consult system. To send a consult request, providers log on to the system and complete an electronic consult request template that may be customized by the applicable specialty care clinic at the VAMC. The template requires the requesting provider to supply specific information, such as a diagnosis and the reason why specialty care is needed, and may require additional information as determined by the specialty care clinic. For example, a gastroenterology template for abdominal pain used at one VAMC asked the requesting provider whether the treatment should be provided in person, reminded the provider about specific lab tests to be completed, and asked for a brief history of the patient's symptoms. (See fig. 2.) This specialty care clinic had specific templates depending on the patient's symptoms. (See appendix I for examples of other templates used by the gastroenterology clinic at this VAMC.) After completing the template, the requesting provider electronically submits the consult for the specialty care provider to review.
According to VHA's guideline, the specialty care provider is to review and determine whether to accept a consult within 7 days of the request. Typically, the provider's review involves determining whether to accept the consult--that the consult is needed and appropriate--and if the consult is accepted, determining its relative urgency--a process known as triaging. When reviewing a consult request, a specialty care provider may decide not to accept it, and will send the consult back to the requesting provider. This is referred to as discontinuing the consult, which a specialty care provider may decide to do for several reasons, including that the care is not needed, the patient refuses care, or the patient is deceased. In other cases the specialty care provider may determine that additional information is needed before accepting the consult; in such cases, the specialty care provider will send the consult back to the requesting provider, who can resubmit it with the needed information. If the provider accepts the consult, an attempt is made to contact the patient and schedule an appointment. Appointments resulting from outpatient consults, like other outpatient medical appointments, are subject to VHA's scheduling policy. This policy is designed to help VAMCs meet their commitment of scheduling medical appointments with no undue waits or delays for patients. According to VHA officials, the scheduler is to take into account the relative urgency of the consult, that is, the result of the reviewing specialty provider's triage decision, when attempting to schedule the appointment. If an appointment resulting from a consult is scheduled and held, VHA's policy requires the specialty care provider to appropriately document the results in the consult system, which would then close out the consult as completed. To do so, the provider updates the consult with the results of the appointment by entering a clinical progress note in the consult system. 
If the provider does not perform this step, or does not perform it appropriately, the consult remains open in the consult system. If an appointment is not held, specialty care clinic staff members are to document why they were unable to complete the consult. According to VHA's national consult policy, VHA central office officials have overall oversight responsibility for the consult process, including the measurement and monitoring of ongoing performance. The policy also requires VISN leadership to oversee the consult processes for VAMCs in their networks, and requires each VAMC to manage individual consults consistent with VHA's timeliness guidelines. To evaluate the timeliness of resolving consults across VAMCs, in September 2012, VHA created a national consult database from the information contained in its electronic consult system. After reviewing these data, VHA determined that they were inadequate for monitoring consults, because they had not been entered in the consult system in a consistent, standard manner, among other issues. For example, in addition to requesting consults for clinical concerns, VHA found that VAMCs also were using the consult system to request and manage a variety of administrative tasks, such as arranging patient travel to appointments. Additionally, VHA could not accurately determine whether patients actually received the care they needed, or if they received the care in a timely fashion. VHA found that this was due, in part, to the fact that data in the consult system included consults for both care that was clinically appropriate to be open for more than 90 days--known as future care consults--as well as those for care that was needed within 90 days. At the time of the database's creation, according to VHA officials, approximately 2 million consults (both clinical and administrative) were unresolved for more than 90 days. 
Subsequently, in October 2012, a task force convened by VA's Under Secretary for Health began addressing several issues, including those regarding VHA's consult system. In response to the task force recommendations, in May 2013, VHA launched the consult business rules initiative to standardize aspects of the consult process and develop consistent and reliable information on consults across all VAMCs. For example, the consult business rules initiative required that VAMCs limit their use of the consult system to requesting consults for care expected within 90 days, and distinguish between administrative and clinical consults in the consult system. As part of this initiative, VAMCs were required to complete four tasks between July 1, 2013, and May 1, 2014:

Review and properly assign codes to consistently record consult requests in the consult system.

Assign distinct identifiers in the electronic consult system to differentiate between clinical and administrative consults.

Develop and implement strategies for managing requests for future care consults that are not needed within 90 days.

Conduct a clinical review, as warranted, to determine if care has been provided or is still needed for unresolved consults--those open more than 90 days.

After the initial implementation of these tasks, VHA required VAMCs to maintain adherence to the consult business rules initiative when processing consults. VHA was updating its national consult policy to incorporate aspects of the consult business rules initiative and expected to have a draft policy by September 2014. Our review of a sample of consults at five VAMCs found that veterans did not always receive outpatient specialty care in a timely manner, if at all. We found consults that were not processed in accordance with VHA timeliness guidelines--for example, consults that were not reviewed within 7 days or not completed within 90 days.
We also found consults for which veterans did not receive the outpatient specialty care requested--64 of the 150 consults in our sample (43 percent)--and those for which the requested specialty care was provided, but the consults were not properly closed in the consult system. We found that specialty care providers at the five VAMCs we examined were not always able to make their initial consult reviews within VHA's 7-day guideline. Specifically, we found that for 31 of the 150 consults in our sample (21 percent), specialty care providers did not meet the 7-day guideline, but they were able to meet the guideline for 119 of the consults (79 percent). (See table 1.) For one VAMC, nearly half the consults were not reviewed and triaged within 7 days, and for some consults, we found it took several weeks before the specialty care providers took action. Officials at this VAMC cited a shortage of providers needed to review and triage the consults in a timely manner. We also found that for the majority of the 150 consults in our sample, veterans did not receive care within the 90 days of the consult request date called for by VHA's guideline. Specifically, veterans did not receive care within 90 days for 122 of the 150 consults we examined (81 percent). (See table 2.) We also found that for the 28 consults in our sample for which VAMCs provided care to veterans within 90 days, an extended amount of time elapsed before specialty care providers completed all but 1 of them in the consult system. As a result, the consults remained open in the system, making them appear as though the requested care was not provided within 90 days. Although 1 consult remained open for only 8 days from when the care was provided, for the remaining 27 consults, it took between 29 and 149 days from the time care was provided until the consults were completed in the system.
In addition, among these 28 consults, we found that specialty care providers at one VAMC did not properly document the results of any of the 10 cardiology consults we reviewed, which is required to close them in the system. Officials from four of the five VAMCs told us that specialty care providers often do not properly document that consults are complete, which requires the selection of the correct clinical progress note that corresponds to the patient's consult. Officials attributed this ongoing issue in part to the use of medical residents, who rotate in and out of specialty care clinics after a few months and lack experience with completing consults. Officials from one VAMC told us such rotations require VAMC leadership to ensure new residents are continually trained on how to properly complete consults. To help ensure that specialty care providers consistently choose the correct clinical progress note, this VAMC activated a technical solution consisting of a prompt in its consult system that instructs providers to choose the correct clinical progress note needed to complete consults. Officials stated that this has resulted in providers more frequently choosing the correct notes needed to complete consults. Examples of consults that were not completed in 90 days, or were closed without the veterans being seen, included the following:

For 3 of 10 gastroenterology consults we examined for one VAMC, we found that between 140 and 210 days elapsed from the dates the consults were requested to when the veterans received care. For the consult that took 210 days, an appointment was not available and the veteran was placed on a waiting list before having a screening colonoscopy.

For 4 of the 10 physical therapy consults we examined for one VAMC, we found that between 108 and 152 days elapsed, with no apparent actions taken to schedule appointments for the veterans for whom consults were requested.
The veterans' medical records indicated that, due to resource constraints, the clinic was not accepting consults for non-service-connected physical therapy evaluations. For 1 of these consults, several months passed before the veteran was referred to non-VA care and was seen 252 days after the initial consult request. The other 3 consults were sent back to the requesting providers without the veterans receiving care.

For all 10 of the cardiology consults we examined for one VAMC, we found that staff initially scheduled veterans for appointments between 33 and 90 days after the request, but medical records for those patients indicated that the veterans either cancelled or did not show for their initial appointments. In several instances, medical records indicated the veterans cancelled multiple times. For 4 of the consults, VAMC staff closed the consults without the veterans being seen; for the other 6 consults, VAMC staff rescheduled the appointments for times that exceeded VHA's 90-day guideline.

VAMC officials cited increased demand for services, patient no-shows, and cancelled appointments as among the factors that hinder specialty care providers' ability to meet VHA's guideline for completing consults within 90 days. Several VAMC officials also noted a growing demand for both gastroenterology procedures, such as colonoscopies, and consultations for physical therapy evaluations, combined with difficulty in hiring and retaining specialty care providers for these two clinical areas, as causes of periodic backlogs in providing these services. Officials at these VAMCs indicated that they try to mitigate backlogs by referring veterans to non-VA providers for care.
Although officials indicated that use of non-VA care can help mitigate backlogs, several officials also indicated that this requires more coordination between the VAMC, the patient, and the non-VA provider; can require additional approvals for the care; and also may increase the amount of time it takes a VAMC specialty care provider to obtain the results (such as diagnoses, clinical findings, and treatment plans) of medical appointments or procedures. Officials acknowledged that using non-VA care does not always prevent delays in veterans receiving timely care or in specialty care providers completing consults. Additionally, we identified one consult for which the patient experienced delays in obtaining non-VA care and died prior to obtaining needed care. In this case, the patient needed endovascular surgery to repair two aneurysms--an abdominal aortic aneurysm and an iliac aneurysm. According to the patient's medical record, the timeline of events surrounding this consult was:

September 2013 - Patient was diagnosed with two aneurysms.

October 2013 - VAMC scheduled patient for surgery in November, but subsequently cancelled the scheduled surgery due to staffing issues.

December 2013 - VAMC approved non-VA care and referred the patient to a local hospital for surgery.

Late December 2013 - After the patient followed up with the specialty care clinic, it was discovered that the non-VA provider had lost the patient's information. The specialty care clinic staff resubmitted the patient's information to the non-VA provider.

February 2014 - The consult was closed because the patient died prior to the surgery scheduled by the non-VA provider.

According to VAMC officials, they conducted an investigation of this case. They found that the non-VA provider planned to perform the surgery on February 14, 2014, but the patient died the previous day.
Additionally, they stated that according to the coroner, the patient died of cardiac disease and hypertension, and that the aneurysms remained intact. Since launching the consult business rules initiative in May 2013, VHA officials reported overseeing the consult process system-wide primarily by reviewing consult reports created from its national database to monitor VAMCs' progress in meeting VHA's timeliness guidelines. However, we found limitations in VHA's system-wide oversight, as well as in the oversight provided by the five VISNs included in our review. These limitations have affected the reliability of VHA's consult data and consequently VHA's ability to effectively assess VAMC performance in managing consults. VHA and VISNs do not routinely assess VAMCs' management of consults. Although VHA officials reported using system-wide consult data to help ensure that VAMCs are meeting VHA timeliness guidelines, and the five VISNs included in our review reported using consult data to monitor the VAMCs they oversee, neither routinely assesses how VAMCs are actually managing consults. According to federal internal control standards, managers should perform ongoing monitoring, including independent assessments of performance. Such assessments are important to help VHA identify the underlying causes of delays and to help ensure that its consult data reliably reflect how many veterans are waiting for care and how long they have been waiting. VHA and VISN officials reported that they do not routinely audit consults to assess whether VAMC providers have been appropriately requesting, reviewing, and resolving consults in accordance with VHA's consult policy. Instead, VHA and VISN officials reported their oversight primarily relies on monitoring reports that track VAMCs' progress in reducing the number of consults unresolved for more than 90 days. VHA officials stated that they delegate oversight of unresolved consults to VAMCs and, as such, do not conduct assessments of individual consults.
Further, several VISN officials stated that they did not see the need for such assessments and that ongoing monitoring of consult data has been sufficient. Although VHA and the five VISNs included in our review do not routinely conduct such assessments, our work at five VAMCs found such reviews may help provide insights into the underlying causes of delays. Our examination of a sample of consults revealed several issues with VAMCs' specialty care clinics' management of consults, including delays in reviewing and scheduling consults, incorrectly discontinuing consults, and in some cases incorrectly closing a consult as complete even though care had not been provided. We discussed these issues with officials at the five VAMCs included in our review. Officials from two VAMCs stated that in responding to our questions, they researched the actions taken on each consult and learned about some of the root causes contributing to consult delays. For example, one VAMC found that its process for managing consults requested from other VAMCs was not clear to providers and needed to be improved to mitigate delays in processing such consults. Additionally, for a few of the consults for which we identified that care had not been provided, VAMC officials stated that, as a result of our findings, they contacted the veterans to schedule appointments when care was still needed. In addition, VHA officials stated that independent assessments of consults may be helpful and that they would consider conducting them in the future. By primarily relying on reviewing data and not routinely conducting an assessment of VAMCs' management of consults, VHA and VISN officials may be limited in identifying systemic issues affecting VAMCs' ability to provide veterans with timely access to care. VHA lacks documentation of how VAMCs addressed unresolved consults. One task under the consult business rules initiative required VAMCs to resolve consults that had been open for more than 90 days. 
VHA provided system-wide guidance outlining how to appropriately complete this task. VAMCs were to conduct clinical reviews of all non-administrative consults and determine whether the consult should be completed or discontinued--thus closing them in the consult system. However, VHA did not require VAMCs to document these decisions or the processes by which they were made, only to self-certify that the task had been completed. Further, VHA did not require VISNs to independently verify that the task was completed appropriately. VAMC officials told us their reviews indicated that for many of the consults, care had been provided, but an incorrect clinical progress note was used. Therefore, officials had to select the correct note that corresponded to each consult, which completed the consult in the system. In addition, officials also told us that they discontinued many other consults because they found that patients were deceased or that patients had repeatedly cancelled appointments and thus, they determined that care was no longer needed. However, none of the five VAMCs in our review were able to provide us with specific documentation of these decisions and rationales. At one VAMC, for example, we found that a specialty care clinic discontinued 18 consults the same day that a task for addressing unresolved consults was due. Three of these 18 consults were part of our random sample, and we found no indication that a clinical review was conducted prior to the consults being discontinued. The lack of documentation is not consistent with federal internal control standards, which indicate that all transactions and other significant events need to be clearly documented and stress the importance of the creation and maintenance of related records, which provide evidence of execution of these activities.
In addition to monitoring VAMC performance in completing the consult business rules initiative tasks, VHA officials told us they are continuing to monitor VAMCs' performance in addressing unresolved consults. In 2012, VHA estimated that approximately 2 million consults in its system were unresolved for more than 90 days. According to a June 2014 VHA consult tracking report, 285,877 consults were unresolved; VHA officials told us that this number changes daily, and they expect it to continue to decline as VAMCs continue to resolve consults open more than 90 days. VHA officials attributed this reduction in the number of unresolved consults to implementation of the consult business rules initiative and their continued monitoring of VAMC performance in meeting VHA's consult timeliness guideline. Given the thousands of consults that have been closed by VAMCs, the lack of documentation and independent verification of how VAMCs addressed these unresolved consults raises questions about the reliability of VHA's consult data and whether the data accurately reflect whether patients received needed care in a timely manner, if at all. VAMCs were instructed to track future care consults either by developing markers so such consults could be identified in the consult system, or by using existing mechanisms outside of the consult system, such as an electronic wait list. The electronic wait list is a component of the VistA scheduling system designed for recording, tracking, and reporting veterans waiting for medical appointments. In examining how VAMCs completed this task, we found that each of the five VAMCs initially implemented strategies for managing future care consults that were, wholly or in part, not among VHA's approved options. For example, one VAMC reported to us that initially its staff entered consult requests for future care into the consult system without the use of a future care flag, and subsequently discontinued these consults if they reached the 90-day threshold.
Discontinuing future care consults closed them in the consult system and thus prevented the consults from being monitored, which may have increased the risk of the VAMC losing track of these requests for specialty care. Further, during the course of our work, officials from three VAMCs reported revising their initial strategies for managing future care consults. (See table 3.) Some of these VAMCs continued to implement strategies that were not among VHA's approved options and could have resulted in consult data that failed to distinguish future care consults from those that were truly delayed. According to federal internal control standards, managers should perform ongoing monitoring, including independent assessments of performance. However, because VHA officials relied on self-certifications submitted by VAMCs, they were not aware of the extent to which VAMCs implemented strategies that were not among VHA's approved options, nor would they be aware of the extent to which VAMCs have since changed their strategies. As of June 2014, VHA officials told us they did not have detailed information on the various strategies VAMCs have implemented to manage future care consults, and they acknowledged that they had not conducted a system-wide review of VAMCs' strategies. Furthermore, VHA does not have a formal process by which VAMCs could share best practices system-wide. According to federal internal control standards, identifying and sharing information is an essential part of ensuring effective and efficient use of resources. We found that VAMCs may not be benefiting from the challenges and solutions other VAMCs discovered when implementing strategies for managing future care consults. For example, during our review, we found that one VAMC revised its initial strategy in a way that another VAMC had already found ineffective. Officials at that VAMC stated that they were implementing a new strategy to manage future care consults in a separate electronic system.
However, another VAMC opted not to use a similar electronic system it had piloted after finding that it confused providers and required extensive training; that VAMC opted instead to use future care markers in its consult system. A more systematic identification and sharing of best practices for managing future care consults would enable VAMCs to more efficiently implement effective strategies for managing specialty care consults. Officials from VAMCs in our review described sharing best practices with colleagues at other VAMCs in their VISN only on an ad hoc basis. VHA also lacks the standardized data needed for conducting oversight of how VAMCs handle patient no-shows and cancelled appointments. According to federal internal control standards, management is responsible for developing the detailed policies, procedures, and practices to fit their agency's operations and to ensure that they are built into, and an integral part of, operations. However, we found that VHA has not developed a detailed, system-wide policy on how to address patient no-shows and cancelled appointments, two frequently noted causes of delays in providing care. Instead, VHA policies provide general guidance stating that after a patient does not show for or cancels an appointment, the specialty care clinic staff should review the consult and determine whether or not to reschedule the appointment. VHA officials told us that they allow each VAMC to determine its own approach to managing these occurrences. However, such variations in no-show and cancellation policies are reflected in the consult data, and as a result, this variation may make it difficult to assess and compare VAMCs' performance. For example, if a specialty care clinic allows a patient to cancel multiple specialty care appointments, the consult would remain open and could inaccurately suggest delays in care where none might exist.
In contrast, if the specialty care clinic limited the number of patient cancellations, the consult would be closed after the allowed number and would not appear as a delay in care, even if a delay had occurred. (See VHA Directive 2010-027, VHA Outpatient Scheduling Processes and Procedures (June 9, 2010), and VHA Directive 2008-056, VHA Consult Policy (Sept. 16, 2008).) Each VAMC's policy for handling no-shows and cancellations varied in its requirements. Among the 150 consults in our sample, we found that specialty care providers had scheduled appointments for 127 of the consults, and that patient no-shows and cancelled appointments were among the factors contributing to delays in providing timely care for 66 of these consults (52 percent). Providing our nation's veterans with timely access to medical care, including outpatient specialty care, is a crucial responsibility of VHA. We and others have identified problems with VHA's consult process used to manage the outpatient specialty care needs of veterans. Our review of a sample of consults found that VAMCs did not always provide veterans with requested specialty care in a timely manner, if at all. In other cases, VAMCs were able to provide the needed care on a timely basis, but specialty care providers failed to properly complete or document the consults, making it appear as though care for veterans was delayed, even when it was not. Limitations in VHA's oversight of the consult process have affected the reliability of VHA's consult data and its usefulness for oversight. Although VHA officials cited VAMCs' progress in reducing the backlog of consults unresolved for more than 90 days, they have not independently verified that VAMCs appropriately closed these consults, calling into question the accuracy of these data. Due to their lack of oversight, VHA officials are not aware of the various strategies VAMCs implemented to manage future care consults, and thus, when monitoring consult data, cannot adequately determine if future care consults are distinguishable from those that are truly delayed.
Additionally, VHA has not developed a system-wide process for identifying and sharing VAMCs' best practices for managing future care and other types of consults; thus, VAMCs may be implementing strategies that others already have found ineffective or may be unaware of strategies that others have successfully implemented. Further, VHA's decentralized approach for handling patient no-shows and cancelled appointments, as well as other issues, makes it difficult to compare timeliness of providing outpatient specialty care system-wide. Ultimately, this decentralized approach may further limit the usefulness of the data and VHA's and VISNs' ability to assess VAMCs' performance in managing consults and providing timely care to our nation's veterans. To improve VHA's ability to effectively oversee the consult process and help ensure VAMCs are providing veterans with timely access to outpatient specialty care, we recommend that the Secretary of Veterans Affairs direct the Interim Under Secretary for Health to take the following six actions: Assess the extent to which specialty care providers across all VAMCs, including residents who may be serving on a temporary basis, are using the correct clinical progress notes to complete consults in a timely manner, and, as warranted, develop and implement system-wide solutions, such as technical enhancements, to ensure this is done appropriately. Enhance oversight of VAMCs by routinely conducting independent assessments of how VAMCs are managing the consult process, including whether they are appropriately resolving consults. This oversight could be accomplished, for example, by VISN officials periodically conducting reviews of a random sample of consults, as we did in our review. Require specialty care providers to clearly document in the electronic consult system their rationale for resolving a consult when care has not been provided.
Identify and assess the various strategies that all VAMCs have implemented for managing future care consults, including the potential effects these strategies may have on the reliability of consult data, and identify and implement measures for managing future care consults that will ensure the consistency of consult data. Establish a system-wide process for identifying and sharing VAMCs' best practices for managing consults that may have broader applicability throughout VHA, including future care consults. Develop a national policy for VAMCs to manage patient no-shows and cancelled appointments that will ensure the standardized data needed for effective oversight of consults. We provided VA with a draft of this report for its review and comment. VA provided written comments, which are reprinted in appendix II. In its written comments, VA concurred with all six of the report's recommendations. To implement five of the recommendations, VA indicated that the VHA Deputy Under Secretary for Health for Operations and Management will take a number of actions, such as chartering a workgroup to develop clear standard operating procedures for completing and managing consults. VA indicated that target completion dates for implementing these recommendations range from December 2014 through December 2015. For the sixth recommendation, VA indicated that, by December 2014, VHA will establish a system-wide process that facilitates identifying and disseminating VAMC best practices for managing consults. VA also provided technical comments, which we have incorporated as appropriate. As arranged with your office, unless you publicly disclose the contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the Secretary of Veterans Affairs and interested congressional committees.
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. To send a consult request, providers log on to the consult system and complete an electronic consult request template developed by the VA medical center's specialty care clinic. As shown in figures 3 and 4 below, the information requested in these templates may vary depending on the patient's symptoms. After completing the template, the requesting provider electronically submits the consult for the specialty care provider to review. In addition to the contact named above, Janina Austin, Assistant Director; Jennie F. Apter; Jacquelyn Hamilton; David Lichtenfeld; Brienne Tierney; and Ann Tynan made key contributions to this report.
There have been numerous reports of VAMCs failing to provide timely care to veterans, including specialty care. In some cases, delays have reportedly resulted in harm to patients. In 2012, VHA found that its consult data were not adequate to determine the extent to which veterans received timely outpatient specialty care. In May 2013, VHA launched an initiative to standardize aspects of the consult process at its 151 VAMCs and improve its ability to oversee consults. GAO was asked to evaluate VHA's management of the consult process. This report evaluates (1) the extent to which VHA's consult process has ensured veterans' timely access to outpatient specialty care, and (2) how VHA oversees the consult process to ensure veterans are receiving outpatient specialty care in accordance with its timeliness guidelines. GAO reviewed documents and interviewed officials from VHA and from five VAMCs that varied based on size and location. GAO also reviewed a non-generalizable sample of 150 consults requested across the five VAMCs. Based on its review of a non-generalizable sample of 150 consults requested from April 2013 through September 2013, GAO found that the Department of Veterans Affairs' (VA) Veterans Health Administration's (VHA) management of the consult process has not ensured that veterans always receive outpatient specialty care in a timely manner, if at all. Specifically, GAO found that for 122 of the 150 consults reviewed--requests for evaluation or management of a patient for a specific clinical concern--specialty care providers did not provide veterans with the requested care in accordance with VHA's 90-day timeliness guideline. For example, for 4 of the 10 physical therapy consults GAO reviewed for one VA medical center (VAMC), between 108 and 152 days elapsed with no apparent actions taken to schedule an appointment for the veteran.
VAMC officials cited increased demand for services and patient no-shows and cancelled appointments as among the factors that lead to delays and hinder their ability to meet VHA's timeliness guideline. Further, for all but 1 of the 28 consults for which VAMCs provided care within 90 days, an extended amount of time elapsed before specialty care providers properly documented in the consult system that the care was provided. As a result, the consults remained open in the system, making them appear as though the requested care was not provided within 90 days. VHA's limited oversight of consults impedes its ability to ensure VAMCs provide timely access to specialty care. VHA officials reported overseeing the consult process primarily by reviewing data on the timeliness of consults; however, GAO found limitations in VHA's oversight, including oversight of its initiative designed to standardize aspects of the consult process. Specifically: VHA does not routinely assess how VAMCs are managing their local consult processes, and thus is limited in its ability to identify systemic underlying causes of delays. As part of its consult initiative, VHA required VAMCs to review a backlog of thousands of unresolved consults--those open more than 90 days--and, if warranted, to close them. However, VHA did not require VAMCs to document their rationales for closing them. As a result, questions remain about whether VAMCs appropriately closed these consults and whether VHA's consult data accurately reflect whether veterans received the care needed in a timely manner, if at all. VHA does not have a formal process by which VAMCs can share best practices for managing consults. As a result, VAMCs may not be benefiting from the challenges and solutions other VAMCs have discovered regarding managing the consult process.
VHA lacks a detailed system-wide policy for how VAMCs should manage patient no-shows and cancelled appointments for outpatient specialty care, making it difficult to compare timeliness in providing this care system-wide. Consequently, concerns remain about the reliability of VHA's consult data, as well as VHA's oversight of the consult process. GAO recommends that VHA take actions to improve its oversight of consults, including (1) routinely assess VAMCs' local consult processes, (2) require VAMCs to document rationales for closing unresolved consults, (3) develop a formal process for VAMCs to share consult management best practices, and (4) develop a policy for managing patient no-shows and cancelled appointments. VA concurred with all of GAO's recommendations and identified actions it is taking to implement them.
Our survey of the largest sponsors of DB pension plans reveals that they have made a number of revisions to their benefit offerings over approximately the last 10 years. Generally, respondents reported that they revised benefit formulas, converted some plans to hybrid plans (such as cash balance plans), or froze some of their plans. For example, 81 percent of responding sponsors reported that they modified the formulas of one or more of their DB plans. Respondents were asked to report changes for plans or benefits that covered only nonbargaining employees, as well as to report on plans or benefits that covered bargaining unit employees. Fifty-eight percent of respondents who reported on plans for collective-bargaining employees indicated they had generally increased the generosity of their DB plan formulas between January 1997 and the time of their response (see app. I, slide 12). In contrast, 48 percent of respondents reporting on plans for their nonbargaining employees had generally decreased the generosity of their DB plan formulas since 1997. "Unpredictability or volatility of DB plan funding requirements" was the key reason cited for having changed the benefit formulas of plans covering nonbargaining employees (see app. I, slide 14). "Global or domestic competitive pressures" in their industry was the key reason cited for the changes to the plans covering collectively bargained employees (see app. I, slide 13). However, a number of the sponsors who offered reasons for changes to bargaining unit plans also volunteered an additional reason for having modified those plans: they wrote that inflation or a cost-of-living adjustment was a key reason for their increase to the formula. This suggests that such plans were flat-benefit plans that may have a benefit structure that was increased annually as part of a bargaining agreement.
Meanwhile, sponsors were far more likely to report that they had converted a DB plan covering nonbargaining unit employees to a hybrid plan design than to have converted DB plans covering collectively bargained employees. For example, 52 percent of respondents who reported on plans for nonbargaining unit employees had converted one or more of their traditional plans to a cash balance or other hybrid arrangement (see app. I, slide 15). Many cited "trends in employee demographics" as the top reason for doing so (see app. I, slide 16). Among respondents who answered the cash balance conversion question for their collectively bargained plans, 21 percent reported converting one or more of their traditional plans to a cash balance plan. Regarding plan freezes, 62 percent of the responding firms reported a freeze, or a plan amendment to limit some or all future pension accruals for some or all plan participants, for one or more of their plans (see app. I, slide 18). Looking at the respondents' plans in total, 8 percent of the plans were described as hard frozen, meaning that all current employees who participate in the plan receive no additional benefit accruals after the effective date of the freeze, and that employees hired after the freeze are ineligible to participate in the plan. Twenty percent of respondents' plans were described as being under a soft freeze, partial freeze, or "other" freeze. Although not statistically generalizable, the prevalence of freezes among the large sponsor plans in this survey is generally consistent with the prevalence of plan freezes found among large sponsors through a previous GAO survey that was statistically representative. The vast majority of respondents (90 percent) to our most recent survey also reported on their 401(k)-type DC plans. At the time of this survey, very few respondents reported having reduced employer or employee contribution rates for these plans.
The vast majority reported either an increase or no change to the employer or employee contribution rates, with generally as many reporting increases to contributions as reporting no change (see app. I, slide 21). The differences reported in contributions by bargaining status of the covered employees were not pronounced. Many (67 percent) of responding firms plan to implement or have already implemented an automatic enrollment feature to one or more of their DC plans. According to an analysis by the Congressional Research Service, many DC plans require that workers voluntarily enroll and elect contribution levels, but a growing number of DC plans automatically enroll workers. Additionally, certain DC plans with an automatic enrollment feature may gradually escalate the amount of the workers' contributions on a recurring basis. In addition, the Pension Protection Act of 2006 (PPA) provided incentives to initiate automatic enrollment for those plan sponsors that may not have already adopted an automatic enrollment feature. Seventy-two percent of respondents reported that they were using or planning to use automatic enrollment for their 401(k) plans covering nonbargaining employees, while 46 percent indicated that they were currently doing so or planning to do so for their plans covering collective-bargaining employees (see app. I, slide 22). The difference in automatic enrollment adoption by bargaining status may be due to the fact that nonbargaining employees may have greater dependence on DC benefits. That is, a few sponsors noted they currently automatically enroll employees who may no longer receive a DB plan. Alternatively, automatic enrollment policies for plans covering collective-bargaining employees may not yet have been adopted, as that plan feature may be subject to later bargaining. Health benefits are a large component of employer-offered benefits.
As changes to the employee benefits package may not be limited to pensions, we examined the provision of health benefits to active workers, as well as to current and future retirees. We asked firms to report selected nonwage compensation costs or postemployment benefit expenses for the year 2006 as a percentage of base pay. Averaging these costs among all those respondents reporting such costs, we found that health care comprised the single largest benefit cost. Active employee health plans and retiree health plans combined to represent 15 percent of base pay (see app. I, slide 24). DB and DC pension costs were also significant, representing about 14 percent of base pay. All of the respondents reporting on health benefits offered a health care plan to active employees and contributed to at least a portion of the cost. Additionally, all of these respondents provided health benefits to some current retirees, and nearly all were providing health benefits to retirees under the age of 65 and to retirees aged 65 and older. Eighty percent of respondents offered retiree health benefits to at least some future retirees (current employees who could eventually become eligible for retiree benefits), although 20 percent of respondents offered retiree health benefits that were fully paid by the retiree. Further, it appears that, for new employees among the firms in our survey, a retiree health benefit may be an increasingly unlikely offering in the future, as 46 percent of responding firms reported that retiree health care would no longer be offered to employees hired after a certain date (see app. I, slide 25). We asked respondents to report on how an employer's share of providing retiree health benefits had changed over the last 10 years or so for current retirees. Results among respondents generally did not vary by the bargaining status of the covered employees (see app. I, slide 27).
However, 27 percent of respondents reporting on their retiree health benefits for plans covering nonbargaining retirees reported increasing an employer's share of costs, while only 13 percent of respondents reporting on their retiree health benefits for retirees from collective-bargaining units indicated such an increase. Not surprisingly, respondents with health benefits covering nonbargained retirees listed "large increases in the cost of health insurance coverage for retirees" as a major reason for increasing an employer's share. This was the top reason both for these respondents as a whole and for just those respondents reporting a decrease in an employer's share of costs. Additionally, a number of respondents who mentioned "other" reasons for the decrease in costs for employers cited the implementation of predefined cost caps. Our survey also asked respondents to report on their changes to retiree health offerings for future retirees, or current workers who may eventually qualify for postretirement health benefits. As noted earlier, 46 percent of respondents reported they currently offered no retiree health benefits to active employees (i.e., current workers) hired after a certain date. Reporting on changes for the last decade, 54 percent of respondents describing their health plans for nonbargaining future retirees indicated that they had decreased or eliminated the firm's share of the cost of providing health benefits (see app. I, slide 30). A smaller percentage (41 percent) of respondents reporting on their health benefits for collectively bargained future retirees indicated a decrease or elimination of benefits. The need to "match or maintain parity with competitor's benefits package" was the key reason for making the retiree health benefit change for future retirees among respondents reporting on their collective-bargaining employees (see app. I, slide 32).
We asked respondents to report their total future liability (i.e., present value in dollars) for retiree health as of 2004. As of the end of the 2004 plan year, 29 respondents reported a total retiree health liability of $68 billion. The retiree health liability reported by our survey respondents represents 40 percent of the $174 billion in DB liabilities that we estimate for these respondents' DB plans as of 2004. According to our estimates, the DB liabilities for respondents reporting a retiree health liability were supported with $180 billion in assets as of 2004. We did not ask respondents about the assets underlying the reported $68 billion in retiree health liabilities. Nevertheless, these liabilities are unlikely to have much in the way of prefunding or supporting assets, due in large part to certain tax consequences. Although we did not ask sponsors about the relative sustainability of retiree health plans given the possible difference in the funding of these plans relative to DB plans, we did ask respondents to report the importance of offering a retiree health plan for purposes of firm recruitment and retention. Specifically, we asked about the importance of making a retiree health plan available relative to making a DB or DC pension plan available. Only a few respondents reported that offering DB or DC plans was less (or much less) important than offering a retiree health plan. Responding before October 2008--that is, before the increasingly severe downturn in the national economy--most survey respondents reported that they had no plans to revise benefit formulas, freeze or terminate plans, or convert to hybrid plans before 2012. Survey respondents were asked to consider how their firms might change specific employee benefit actions between 2007 and 2012 for all employees.
The specific benefit actions they were asked about were a change in the formula for calculating the rates of benefit accrual provided by their DB plan, a freeze of at least one DB plan, the conversion of traditional DB plans to cash balance or other hybrid designs, and the termination of at least one DB plan. For each possibility, between 60 percent and 80 percent of respondents said their firm was not planning to make the prospective change (see app. I, slide 34). When asked about how much they had been or were likely to be influenced by recent legislation or accounting rule changes, such as PPA or the adoption of Financial Accounting Standards Board (FASB) requirements to fully recognize obligations for postretirement plans in financial statements, responding firms generally indicated these were not significant factors in their decisions on benefit offerings. Despite these legislative and regulatory changes to the pension environment, most survey respondents indicated that it was unlikely or very unlikely that their firms would use assets from DB plans to fund qualified health plans; increase their employer match for DC plans; terminate at least one DB plan; amend at least one DB plan to change (either increase or decrease) rates of future benefit accruals; convert a DB plan to a cash balance or hybrid design plan; or replace a DB plan with a 401(k)-style DC plan. Additionally, most respondents indicated "no role" when asked whether PPA, FASB, or pension law and regulation prior to PPA had been a factor in their decision (see app. I, slide 35). Though the majority of these responses indicated a trend of limited action related to PPA and FASB, it is interesting to note that, among the minority of firms that reported they were likely to freeze at least one DB plan for new participants only, most indicated that PPA played a role in this decision.
Similarly, while only a few firms indicated that it was likely they would replace a DB plan with a 401(k)-style DC plan, most of these firms also indicated that both PPA and FASB played a role in that decision. There were two prospective changes that a significant number of respondents believed would be likely or very likely implemented in the future. Fifty percent of respondents indicated that adding or expanding automatic enrollment features to 401(k)-type DC plans was likely or very likely, and 43 percent indicated that PPA played a major role in this decision. This is not surprising, as PPA includes provisions aimed at encouraging automatic enrollment and was expected to increase the use of this feature. Forty-five percent of respondents indicated that changing the investment policy for at least one DB plan to increase the portion of the plan's portfolio invested in fixed income assets was likely or very likely--with 21 percent indicating that PPA and 29 percent indicating that FASB played a major or moderate role in this decision (see app. I, slide 36). Our survey did not ask about the timing of this portfolio change, so we cannot determine the extent of any reallocation that may have occurred prior to the decline in the financial markets in the last quarter of 2008. Finally, responding sponsors did not appear to be optimistic about the future of the DB system, as the majority stated there were no conditions under which they would consider forming a new DB plan. For the 26 percent of respondents that said they would consider forming a new DB plan, some indicated they could be induced by such changes as a greater scope in accounting for DB plans on corporate balance sheets and reduced unpredictability or volatility of plan funding requirements (see app. I, slide 38). Conditions less likely to cause respondents to consider a new DB plan included increased regulatory requirements of DC plans and reduced PBGC premiums (see app. I, slide 39).
Until recently, DB pension plans administered by large sponsors appeared to have largely avoided the general decline evident elsewhere in the system since the 1980s. Their relative stability has been important, as these plans represent retirement income for more than three-quarters of all participants in single-employer plans. Today, these large plans no longer appear immune to the broader trends that are eroding retirement security. While few plans have been terminated, survey results suggest that modifications in benefit formulas and plan freezes are now common among these large sponsors. This trend is most pronounced among nonbargained plans but is also apparent among bargained plans. Yet, this survey was conducted before the current economic downturn, with its accompanying market turmoil. The fall in asset values and the ensuing challenge to fund these plans place even greater stress on them today. Meanwhile, the survey findings, while predating the latest economic news, add to the mounting evidence of increasing weaknesses throughout the existing private pension system, including low contribution rates for DC plans, high account fees that eat into returns, and market losses that significantly erode the account balances of those workers near retirement. Moreover, the entire pension system still only covers about 50 percent of the workforce, and coverage rates are very modest for low-wage workers. Given these serious weaknesses in the current tax-qualified system, it may be time for policymakers to consider alternative models for retirement security. We provided a draft of this report to the Department of Labor, the Department of the Treasury, and PBGC. The Department of the Treasury and PBGC provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Labor, the Secretary of the Treasury, and the Director of the PBGC, appropriate congressional committees, and other interested parties.
In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions are listed in appendix III. This survey of sponsors of the nation's largest private sector DB plans addressed two questions: 1) What recent changes have employers made to their pension and benefit offerings? 2) What changes might employers make with respect to their pensions in the future, and how might these changes be influenced by changes in pension law and other factors? Survey results are not generalizable to all DB plan sponsors. However, the sample can serve as an important indicator of the health of the private DB system, given the sample's possible importance to the Pension Benefit Guaranty Corporation (PBGC). The 44 sponsoring firms that responded represent an estimated 25 percent (or $370 billion) of total DB system liabilities as of 2004 and 19 percent (or 6 million) of the system's DB participants (active, separated-vested, and retired as of 2004). The most common business line was manufacturing, with other key areas being finance and information. (Figure 1) These firms reported employing on average 75,000 employees in their U.S. operations in 2006. The vast majority of respondents either increased or did not change employer contributions to 401(k) plans for their NB employees. (Figure 8) Main reasons for change included a redesigned matching formula as well as compensation adjustments to attract top employees. The vast majority of respondents reported that plans covering NB employees either increased or did not change employee contributions. Main reasons among respondents reporting increased contributions included the addition of an automatic enrollment feature to one or more plans. 72 percent of large sponsors reported either using or planning to use auto enrollment for plans covering NB employees. (Figure 9)
Most sponsors either increased or did not change employer contributions to 401(k) plans for their bargaining unit employees. (Figure 8) No single reason stood out for this result. For bargaining unit employees of most sponsors, employee contributions did not change. (Figure 8) 50 percent of large sponsors with plans covering CB employees reported either not using or not planning to use auto enrollment. (Figure 9) All responding DB plan sponsors offered health insurance to active employees and contributed to the cost. (Figure 10) All responding DB plan sponsors offered health insurance to at least some current retirees--nearly all to both pre-age 65 and age 65-plus retirees. 80 percent provided health insurance to at least some active employees who become eligible for the benefit upon retirement; 20 percent provided health insurance that was fully paid by the retired employee. (Figure 11) Compared to respondents reporting on their benefits covering CB employees, respondents with NB employees reported a decrease in the employer's share of the cost of providing health benefits to current retirees. (Figure 12) Main reasons were increases in the cost of health insurance for retirees and for active employees. (Figure 13) 46 percent of plan sponsors no longer offered retiree health benefits to active employees hired after a certain date.
54 percent decreased or eliminated the firm's share of the cost of providing health benefits for future retirees who were non-bargaining employees. (Figure 14) Primary reasons cited were large cost increases in health insurance for both retirees and active employees. (Figure 15) 41 percent of sponsors with bargaining unit employees reported a decrease in or elimination of the firm's share of health care costs for future retirees (Figure 14), and 26 percent reported no change. The primary reason cited was to match or maintain parity with competitors' benefits packages. (Figure 16) Most sponsors reported that no conditions would make them definitely consider forming a new DB plan. 26 percent of sponsors reported that there were conditions under which they would have considered offering a new DB plan; the most common conditions selected were: providing sponsors with greater scope in accounting for DB plans on corporate balance sheets; DB plans becoming more effective as an employee retention tool; and reduced unpredictability or volatility in DB plan funding requirements. (Figure 17) To achieve our objectives, we conducted a survey of sponsors of large defined-benefit (DB) pension plans. For the purposes of our study, we defined "sponsors" as the listed sponsor on the 2004 Form 5500 for the largest sponsored plan (by total participants). To identify all plans for a given sponsor, we matched plans through unique sponsor identifiers. We constructed our population of DB plan sponsors from the 2004 Pension Benefit Guaranty Corporation (PBGC) Form 5500 Research Database by identifying unique sponsors listed in this database and aggregating plan-level data (for example, plan participants) for any plans associated with this sponsor. As a result of this process, we identified approximately 23,500 plan sponsors. We further limited these sponsors to the largest sponsors (by total participants in all sponsored plans) that also appeared on the Fortune 500 or Fortune Global 500 lists.
We initially attempted to administer the survey to the first 100 plans that met these criteria, but ultimately, we were only able to administer the survey to the 94 sponsoring firms for which we were able to obtain sufficient information for the firm's benefits representative. While the 94 firms we identified for the survey are an extremely small subset of the approximately 23,500 total DB plan sponsors in the research database, we estimate that these 94 sponsors represented 50 percent of the total single-employer liabilities insured by PBGC and 39 percent of the total participants (active, retired, and separated-vested) in the single-employer DB system as of 2004. The Web-based questionnaire was sent in December 2007, via e-mail, to the 94 sponsors of the largest DB pension plans (by total plan participants as of 2004) who were also part of the Fortune 500 or Fortune Global 500. This was preceded by an e-mail to notify respondents of the survey and to test our e-mail addresses for these respondents. This Web questionnaire consisted of 105 questions and covered a broad range of areas, including the status of current DB plans; the status of frozen plans (if any) and the status of the largest frozen plan (if applicable); health care for active employees and retirees; pension and other benefit practices or changes over approximately the last 10 years and the reasons for those changes (parallel questions asked for plans covering collectively bargained employees and those covering nonbargaining employees); prospective benefit plan changes; the influence of laws and accounting practices on possible prospective benefit changes; and opinions about the possible formation of a new DB plan.
The first 17 questions and the last question of the GAO Survey of Sponsors of Large Defined Benefit Pension Plans questionnaire mirrored the questions asked in a shorter mail questionnaire (Survey of DB Pension Plan Sponsors Regarding Frozen Plans) about benefit freezes that was sent to a stratified random sample of pension plan sponsors that had 100 or more participants as of 2004. As in the shorter survey, sponsors in the larger survey were asked to report only on their single-employer DB plans. To help increase our response rate, we sent four follow-up e-mails from January through November 2008. We ultimately received responses from 44 plan sponsors, representing an overall response rate of 44 percent. To pretest the questionnaires, we conducted cognitive interviews and held debriefing sessions with 11 pension plan sponsors. Three pretests were conducted in person and focused on the Web survey, and eight were conducted by telephone and focused on the mail survey. We selected respondents to represent a variety of sponsor sizes and industry types, including a law firm, an electronics company, a defense contractor, a bank, and a university medical center, among others. We conducted these pretests to determine if the questions were burdensome, understandable, and measured what we intended. On the basis of the feedback from the pretests, we modified the questions as appropriate. The practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages for the purpose of minimizing such nonsampling errors.
We took the following steps to increase the response rate: developing the questionnaire, pretesting the questionnaires with pension plan sponsors, and conducting multiple follow-ups to encourage responses to the survey. We performed computer analyses of the sample data to identify inconsistencies and other indications of error and took steps to correct inconsistencies or errors. A second, independent analyst checked all computer analyses. We initiated our audit work in April 2006. We issued results from our survey regarding frozen plans in July 2008. We completed our audit work for this report in March 2009 in accordance with all sections of GAO's Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions. Barbara D. Bovbjerg, (202) 512-7215 or [email protected]. In addition to the contact above, Joe Applebaum, Sue Bernstein, Beth Bowditch, Charles Ford, Brian Friedman, Charles Jeszeck, Isabella Johnson, Gene Kuehneman, Marietta Mayfield, Luann Moy, Mark Ramage, Ken Stockbridge, Melissa Swearingen, Walter Vance, and Craig Winslow made important contributions to this report.
The number of private defined benefit (DB) pension plans, an important source of retirement income for millions of Americans, has declined substantially over the past two decades. For example, about 92,000 single-employer DB plans existed in 1990, compared to just under 29,000 single-employer plans today. Although this decline has been concentrated among smaller plans, there is a widespread concern that large DB plans covering many participants have modified, reduced, or otherwise frozen plan benefits in recent years. GAO was asked to examine (1) what changes employers have made to their pension and benefit offerings, including to their defined contribution (DC) plans and health offerings over the last 10 years or so, and (2) what changes employers might make with respect to their pensions in the future, and how these changes might be influenced by changes in pension law and other factors. To gather information about overall changes in pension and health benefit offerings, GAO asked 94 of the nation's largest DB plan sponsors to participate in a survey; 44 of these sponsors responded. These respondents represent about one-quarter of the total liabilities in the nation's single-employer insured DB plan system as of 2004. The survey was largely completed prior to the current financial market difficulties of late 2008. GAO's survey of the largest sponsors of DB pension plans revealed that respondents have made a number of revisions to their retirement benefit offerings over the last 10 years or so. Generally speaking, they have changed benefit formulas; converted to hybrid plans (such plans are legally DB plans, but they contain certain features that resemble DC plans); or frozen some of their plans. Eighty-one percent of responding sponsors reported that they modified the formula for computing benefits for one or more of their DB plans. 
Among all plans reported by respondents, 28 percent (or 47 of 169) were under a plan freeze--an amendment to the plan to limit some or all future pension accruals for some or all plan participants. The vast majority of respondents (90 percent, or 38 of 42) reported on their 401(k)-type DC plans. Regarding these DC plans, a majority of respondents reported either an increase or no change to the employer or employee contribution rates, with roughly equal responses in both categories. About 67 percent (or 28 of 42) of responding firms plan to implement or have already implemented an automatic enrollment feature in one or more of their DC plans. With respect to health care offerings, all of the (42) responding firms offered health care to their current workers. Eighty percent (or 33 of 41 respondents) offered a retiree health care plan to at least some current workers, although 20 percent (or 8 of 41) of respondents reported that retiree health benefits were to be fully paid by retirees. Further, 46 percent (or 19 of 41) of responding firms reported that retiree health care is no longer offered to employees hired after a certain date. At the time of the survey, most sponsors reported no plans to revise plan formulas, freeze or terminate plans, or convert to hybrid plans before 2012. When asked about the influence of recent legislation or changes to the rules for pension accounting and reporting, responding firms generally indicated these were not significant factors in their benefit decisions. Finally, a minority of sponsors said they would consider forming a new DB plan. Those sponsors that would consider forming a new plan might do so if there were reduced unpredictability or volatility in DB plan funding requirements and greater scope in accounting for DB plans on corporate balance sheets. The survey results suggest that the long-time stability of larger DB plans is now vulnerable to the broader trends of eroding retirement security.
The current market turmoil appears likely to exacerbate this trend.
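As a quick arithmetic cross-check, the rounded percentages in the survey results above are consistent with their underlying counts. The small Python sketch below takes each (count, respondents, reported percent) triple from the text:

```python
# Each tuple: (count, respondents, reported percent) -- figures from the survey text.
checks = [
    (47, 169, 28),  # plans under a freeze
    (38, 42, 90),   # respondents reporting on 401(k)-type DC plans
    (28, 42, 67),   # firms with automatic enrollment implemented or planned
    (33, 41, 80),   # firms offering retiree health care to some current workers
    (8, 41, 20),    # respondents with retiree health fully paid by retirees
    (19, 41, 46),   # firms no longer offering retiree health to new hires
]
for count, total, reported in checks:
    assert round(100 * count / total) == reported, (count, total, reported)
print("all reported percentages match their counts")
```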
Established as a national program in the mid-1970s, WIC is intended to improve the health status of low-income pregnant and postpartum women, infants, and young children by providing supplemental foods and nutrition education to assist participants during critical times of growth and development. Pregnant and postpartum women, infants, and children up to age 5 are eligible for WIC if they are found to be at nutritional risk and have incomes below certain thresholds. According to USDA, research has shown that WIC helps to improve birth and dietary outcomes and contain health care costs, and USDA considers WIC to be one of the nation's most successful and cost-effective nutrition intervention programs. WIC participants typically receive food benefits--which may include infant formula--in the form of paper vouchers or checks, or through an electronic benefit transfer card, which can be used to purchase food at state-authorized retail vendors. USDA has established seven food packages that are designed for different categories and nutritional needs of WIC participants. Authorized foods must be prescribed from the food packages according to the category and nutritional needs of the participants. USDA recently revised the food packages to align with current nutrition science, largely based on recommendations of the National Academies' Institute of Medicine. Infants who are not exclusively breastfeeding can receive formula from WIC until they turn 1 year of age. While federal regulations specify the maximum amount of formula different categories of infants are authorized to receive, state and local agency staff also have some flexibility in determining precise amounts to provide, depending on an infant's nutritional needs. Staff at local WIC agencies play a critical role in determining infants' feeding categories, and they have the authority to provide them with less formula than the maximum amount allowed for each category, if nutritionally warranted.
Nutrition specialists, such as physicians or nutritionists, working at the local agency perform nutritional assessments for prospective participants as part of certification procedures. They use the nutritional assessment information to appropriately target food packages to participants. USDA's role in operating WIC is primarily to provide funding and oversight, and state and local WIC agencies are charged with carrying out most administrative and programmatic functions of the program. Specifically, USDA provides grants to state agencies, which use the funds to reimburse authorized retail vendors for the food purchased by WIC participants and to provide services. As part of its federal monitoring and oversight obligations, USDA annually reviews the state plan for each state WIC agency, which provides important information about the agency's objectives and procedures for all aspects of administering WIC for the coming fiscal year. For their part, state agencies are responsible for developing WIC policies and procedures within federal requirements, entering into agreements with local agencies to operate the program, and monitoring and overseeing its implementation by these local agencies. The WIC oversight structure is part of the program's internal controls, which are an integral component of management. Internal control is not one event, but a series of actions and activities that occur on an ongoing basis. As programs change and as agencies strive to improve operational processes and implement new technological developments, management must continually assess and evaluate its internal controls to assure that the control activities being used are effective and updated when necessary. Management should design and implement internal controls based on the related cost and benefits. 
Effective internal controls include: (1) communicating information to management and others to enable them to carry out internal control and other responsibilities and (2) assessing the risks agencies face from both external and internal sources. USDA does not have data that can be used to determine the national extent of online sales of WIC formula, and department officials told us that USDA has not conducted a comprehensive study to assess these sales. According to the officials, the department does not collect data on this issue, in part because it is not the department's responsibility to sanction WIC participants for program violations. Rather, they said, it is the responsibility of state agencies to establish procedures to prevent and address participant violations, including attempts to sell WIC food benefits. According to state officials, states' monitoring efforts have revealed some WIC formula offered for sale online. Of the officials we spoke to from 12 states, those from 5 states said that they have found WIC formula offered for sale online by participants. Officials in 3 of these states said that they have found fewer than 0.5 percent of their WIC participants attempting these sales online. Officials in 2 other states did not estimate percentages but stated that the incidence is low. Consistent with these state accounts, our own monitoring of a popular e-commerce website for 30 days in four large metropolitan areas found few posts in which individuals explicitly stated they were attempting to sell WIC-provided formula. Specifically, we identified 2,726 posts that included the term "formula," and 2 of these posts explicitly stated that the origin of the formula was WIC. In both posts, the users indicated they were selling the WIC formula because they had switched to different brands of formula.
A posting from late June 2014 included the container size in the title and stated: "I am looking to sell 5 [brand name] 12.5oz cans (NOT OPENED) because is super picky and does not want to drink it no matter what i do. will drink the kind for some reason. I told my WIC office to switch me to another brand but they say it might take 3 months. Im asking 35$ but best offer will do since the brand I buy is from so Im not looking to make a profit here if you consider each can is 16$ at the store. please text if interested!!" A posting from early July 2014 included the brand, type, and container size in the title and stated: "I have 7 powder cans of they dnt expire for another year at least just got them from my wic n we ended up switching formulas so its $65.oo for pick up all 7 cans or $70 if i have to drive." From the same e-commerce website, we also identified 481 posts, any number of which could have been advertising WIC-provided formula. However, these posts did not state that the advertised formula was from WIC, and while the formula offered for sale was generally consistent with formula provided through WIC, we could not identify it as such. Specifically, during our 30 days of monitoring formula advertisements, we applied a number of criteria to narrow the broad pool of advertisements to identify those that may have been selling WIC formula. First, because state agencies are generally required to award single-source contracts for WIC formula, we searched for posts advertising formula brands that matched the state-specific WIC-contracted brand. We found that about three-quarters (2,013 posts) fit this criterion. We then reviewed each of these posts and determined that 346 of the posts fit each of three additional criteria, which we chose because they are generally consistent with WIC formula provided to infant participants:

1. The formula type, such as soy or sensitive, advertised for sale was equivalent to one of the types provided to WIC participants in the state in which the posting was made.
2. The volume of the formula container advertised was equivalent to the volume of one of the containers provided to WIC participants in the state in which the posting was made.
3. The amount of formula advertised represented a large proportion of the maximum amount of formula authorized to be provided to fully formula-fed WIC infant participants each month, averaged across all ages.

Beyond the 346 posts that matched these three criteria, we found another 135 that met at least one, but not all, of the criteria. However, since we did not investigate any of these posts further, we do not know if any or all of these 481 posts were attempts to sell WIC formula. A posting from mid-June 2014 stated: "$10 a can! 14 -12.9 oz Cans of [brand name and type] Formula. Expiration Date is - July 1, 2015. Please take it all. I will not separate the formula! NOT FROM WIC!!! is now 14 months and no longer needs this. Email only please." Another posting from mid-June 2014 stated: " Turn A Year Already, and we Just bought her 7 Brand New Cans of . She no longer needs Formula. Selling each Can for $10. Brand New, NOT Open. 12.4 Oz. EXP. 1 March. 2016." Through our monitoring efforts, and through interviews with USDA and state and local WIC officials, we identified a number of key challenges associated with distinguishing between WIC-obtained formula sales and other sales: Each state's specific WIC-contracted formula brand is typically available for purchase at retail stores by WIC participants and non-WIC participants alike, without an indicator on the packaging that some were provided through WIC. There are a number of reasons why individuals may have excess formula.
For example, a WIC participant may obtain the infant's full monthly allotment of formula at one time; alternatively, non-WIC parents may purchase formula in bulk at a lower cost to save money. In either case, if the infant then stops drinking that type of formula, parents may attempt to sell the unused formula. Individuals posting formula for sale online are able to remain relatively anonymous, so WIC staff may not have sufficient information to link the online advertisement with a WIC participant. According to one WIC official we spoke with, staff in that state identify approximately one posting a week with sufficient detail about the seller--such as name or contact information--for staff to pursue. A WIC official from another state said that staff previously used phone numbers to identify WIC participants posting formula for sale, but they believe participants then began to list other people's phone numbers on posts. Advertisements for infant formula sales can be numerous online, and formula for sale originates from varied sources. For example, through our literature search, we found multiple news reports on stolen infant formula advertised for sale online. USDA has taken steps aimed at clarifying that the online sale of WIC benefits is a participant violation. For example, in 2013, USDA proposed regulations that would expand the definition of program violation to include offering to sell WIC benefits, specifically including sales or attempts made online. Earlier, in 2012, USDA issued guidance to WIC state agencies clarifying that the sale of, or offer to sell, WIC foods verbally, in print, or online is a participant violation. This guidance stated that, in accordance with federal regulations, USDA expects states to sanction and issue claims against participants for all program violations, but it did not provide direction on ways to prevent online sales of WIC foods, including formula. 
That same year, USDA also sent letters to four e-commerce websites--through which individuals advertise the sale of infant formula--requesting that they notify their customers that the sale of WIC benefits is prohibited, and two of the companies agreed to post such a notification. More generally, USDA has highlighted the importance of ensuring WIC program integrity through guidance issued in recent years aimed at encouraging participants to report WIC program fraud, waste, and abuse to the USDA Office of the Inspector General (OIG). For example, in 2012, USDA disseminated a poster developed by the OIG and attached it to a guidance document describing its purpose, which includes informing WIC participants and staff how to report violations of laws and regulations relating to USDA programs. The following year, USDA issued additional guidance that encouraged states to add contact information for the OIG to WIC checks or vouchers, or to their accompanying folders or sleeves. USDA indicated that both guidance documents were intended to facilitate participant reports of suspected fraud, waste, and abuse to the OIG, but neither specifically directed states to publicize the fact that attempting to sell WIC benefits, either online or elsewhere, qualifies as an activity that should be reported. Although WIC regulations require that state agencies establish procedures to control participant violations, we found that states vary in whether their required procedures include informing participants of the prohibition against selling WIC formula. The WIC regulations require that all participants (or their caretakers) be informed of their rights and responsibilities and sign a written statement of rights and obligations during the certification process. The regulations also require certain program violations to be included in the information provided on rights and responsibilities. 
However, according to USDA officials, the sale of WIC food benefits is not required to be included, nor do the regulations require participants be informed of this violation through other means. In our review of rights and responsibilities statements from 25 states' WIC policy and procedure manuals, we found that 7 did not require local agency staff to inform participants that selling WIC benefits is against program rules. Inconsistent communication to participants about this violation conflicts with federal internal control standards, and participants who are unaware of this prohibition may sell excess formula online, thus inappropriately using program resources. Based on these findings, we recommended in our December 2014 report that USDA instruct state agencies to include in the rights and responsibilities statement that participants are not allowed to sell WIC food benefits, including online. USDA agreed with this recommendation, and in April 2015, department officials reported that they intend to revise WIC regulations to require state agencies to include in participant rights and responsibilities statements the prohibition against selling WIC food benefits online. In the interim, USDA included this as a best practice in the 2016 WIC State Plan guidance it disseminated to state agencies on April 6, 2015. Department officials indicated that USDA expects states to move forward on this action and not wait for regulations. In addition, we found that states vary in the ways they identify attempted sales of WIC formula through monitoring efforts, and USDA has not collected information on states' efforts to address these sales. Of the officials that we spoke to from 12 states, those from 9 states mentioned that they regularly monitor online advertisements. However, the method of monitoring and the level of effort devoted to this activity varied across states. 
For example, officials in one state said that a number of staff within the state office, as well as a number of those in local agencies, search social media websites daily. In contrast, officials from another state said that staff spend about a half day each week monitoring online sites for attempted sales of WIC food benefits, and an official from a different state said that staff monitor for such sales only when time allows. A USDA official told us that the department would like to provide more support to states in pursuing likely cases of participant fraud related to the online sale of WIC food benefits, but it has not yet determined how to be of assistance. USDA officials indicated they believe states are monitoring attempted sales of WIC formula online to identify this participant violation; however, the department has not gathered information on the status of state efforts to address online sales. Although USDA officials review each WIC state plan annually to ensure that it is consistent with federal requirements, a state's procedures for identifying participant violations are not among the required elements for WIC state plans included in federal statute and regulations. Because USDA does not require that state agencies document their procedures for identifying participant sales of WIC foods, including online sales of infant formula, USDA does not know whether or how states are working to ensure program integrity in this area. The fact that the department does not work more directly with states on this issue is also inconsistent with federal internal control standards. We recommended in our December 2014 report that USDA require state agencies to articulate their procedures for identifying attempted sales of WIC food benefits in their WIC state plans and analyze the information to ascertain the national extent of state efforts. 
USDA agreed with this recommendation, and department officials reported in April 2015 that they intend to revise WIC regulations to require state agencies to include in state plans their procedures for identifying attempted sales of WIC food benefits. In the interim, USDA included this as a best practice in the 2016 WIC State Plan guidance it disseminated to state agencies on April 6, 2015. USDA and the states also lack information to determine cost-effective approaches for monitoring these attempted sales. According to USDA, state, and local WIC officials, because of the various challenges state WIC staff face in distinguishing between WIC-obtained formula sales and other sales, the return on investment for monitoring these sales is low. One USDA official noted that it is difficult for states to prove that participants are selling WIC food benefits, which increases the amount of time and effort state staff need to spend to address these cases. Officials from one state WIC agency and one local WIC agency we spoke to said that efforts by state and local agency staff to identify and address online WIC formula sales result in few confirmed cases and draw away scarce resources from other aspects of administering the program. One USDA official said that states that sanction a participant for attempting to sell WIC formula without sufficient evidence that it occurred will likely have the violation overturned during the administrative appeal process. These cases also appear unlikely to result in court involvement, as when we asked the 19 officials from 12 states how these cases were addressed, only one said that a couple had gone through the legal system. Federal internal control standards state that agencies should design and implement internal controls based on the related costs and benefits. 
According to USDA, because of the substantial risks associated with improper payments and fraud related to WIC vendor transactions, both USDA and the states have focused their oversight efforts in recent years on addressing vulnerabilities in the management of this area, rather than focusing on possible participant violations. However, because the use of the Internet as a marketplace has substantially increased in recent years and the national extent of online sales of WIC food benefits is unknown, USDA and the states have insufficient information to assess the benefits of oversight efforts related to this participant violation. Because of this, we recommended in our December 2014 report that USDA collect information to assess the national extent of attempted online sales of WIC formula benefits and determine cost-effective techniques states can use to monitor online classified advertisements. USDA agreed with this recommendation, and department officials reported in April 2015 that they plan to explore ways to assess the extent of online sales of WIC formula and identify and share best practices, cost-effective techniques, or new approaches for monitoring online advertisements with state agencies. To do this, they noted that they will draw on funds designated for addressing high-priority programmatic issues. We believe this approach will help states to strike the appropriate balance of costs and benefits when determining how to target their program integrity resources. Chairman Rokita, Ranking Member Fudge, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff have any questions about this statement, please contact Kay E. Brown, Director, Education, Workforce, and Income Security, at 202-512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
GAO staff who made key contributions to this statement include Sarah Cornetto, Aimee Elivert, Rachel Frisk, and Sara Pelton. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
WIC provides supplemental foods--including infant formula--and assistance to low-income pregnant and postpartum women, infants, and young children. WIC regulations prohibit participants from selling the foods they receive from the program. However, the Internet has substantially increased as a marketplace in recent years, and news reports suggest that some participants have attempted to sell WIC formula online. This testimony addresses: (1) what is known about the extent to which participants sell WIC formula online, and (2) USDA actions to prevent and address online sales of WIC formula. It is based on a December 2014 report, and includes April 2015 updates on actions USDA has taken to address the report's recommendations, which GAO obtained by analyzing USDA documents. For the 2014 report, GAO reviewed relevant federal laws, regulations, and USDA guidance; monitored online advertisements offering formula for sale in four metropolitan areas; reviewed a non-generalizable sample of policy manuals from 25 states, selected for their varied WIC caseloads and geography; and interviewed USDA and state and local officials. The U.S. Department of Agriculture (USDA) does not have data to determine the national extent of online sales of infant formula provided by the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). Nevertheless, state WIC officials and GAO's own limited monitoring suggest that some WIC participants have offered formula for sale online. Of the officials we spoke with in 12 states, those from 5 states said that they have found WIC formula offered for sale online by participants. GAO monitored one online classified advertisements website in four large metropolitan areas for 30 days and found 2 posts in which individuals attempted to sell formula specifically identified as WIC--from among 2,726 that advertised infant formula generally.
A larger number, 481 posts, advertised formula generally consistent with the formula brand, type, container volume, and amount provided to WIC participants, but these posts did not indicate the source of the formula. Because WIC participants purchase the same formula brands and types from stores as non-WIC customers, monitoring attempted online sales of WIC formula can present a challenge. State officials GAO spoke with cited other challenges to monitoring online sales, such as the fact that individuals posting formula for sale online are able to remain relatively anonymous, and their posts may contain insufficient information to allow staff to identify them as WIC participants. USDA has taken some steps toward helping states prevent and address online sales of WIC formula but has not collected information that could assist states in determining cost-effective approaches for monitoring such sales. In December 2014, GAO found that USDA had not specifically directed states to tell participants that selling WIC formula was a participant violation, which could have led to participants making these sales without realizing doing so was against program rules. GAO also found that states were not required to report their procedures for controlling participant violations--including sales of WIC benefits--to USDA, leaving the department without information on state efforts to ensure program integrity in this area. Through interviews with state and local WIC agency officials from 12 states, GAO found that states varied in the method and level of effort devoted to monitoring these sales and lacked information to determine cost-effective approaches for monitoring. Without information on the national extent of online sales of WIC benefits or effective monitoring techniques, both USDA and the states are unable to target their resources effectively to address related risks. 
As a result, GAO recommended that USDA require state agencies to inform participants of the prohibition against selling WIC formula and describe to USDA how they identify attempted sales. GAO also recommended that USDA collect information about the national extent of attempted online sales of WIC formula benefits and determine cost-effective techniques states can use to monitor them. In response, USDA issued revised guidance in April 2015 stating that it expects states to (1) inform participants that selling WIC benefits violates program rules and (2) report their procedures for monitoring attempted WIC benefit sales to USDA. Also in April 2015, USDA officials reported that although they had not yet taken action to assess the national extent of online sales and determine cost-effective techniques to monitor them, they planned to explore ways to do so. GAO recommended, in December 2014, that USDA better ensure WIC participants are aware of the prohibition against selling formula, require states to describe how they identify attempted sales, and assess online sales, including techniques for monitoring. USDA agreed, has taken some action, and plans to do more.
To obtain information on whether CPOT investigations were consistent with the mission of the HIDTA program, we reviewed the Office of National Drug Control Policy Reauthorization Act of 1998, ONDCP's appropriations statutes and accompanying committee reports, ONDCP's strategic plans and policies, and ONDCP's Web site. We also reviewed all 38 applications to ONDCP from HIDTAs that received discretionary funds for various investigation activities linked to the CPOT list in fiscal years 2002 and 2003, and compared them with the mission of the HIDTA program. At 11 selected HIDTA sites--Appalachia; Atlanta; Central Florida; Lake County, Indiana; Los Angeles; Milwaukee; Nevada; North Texas; Oregon; Rocky Mountain; and Washington-Baltimore--we interviewed HIDTA management officials and task force leaders to discuss whether their investigative activities were consistent with the HIDTA mission. We selected these 11 HIDTAs to ensure geographic spread (east coast, central, west coast) across the country. To obtain information about ONDCP's distribution of CPOT funding, we interviewed ONDCP officials and obtained statistics they provided on HIDTAs that received CPOT funding in fiscal years 2002, 2003, and 2004 (app. I). We also reviewed ONDCP documents and correspondence that described the basis for ONDCP's decision for awarding HIDTAs CPOT funding. In addition, we discussed with officials at three HIDTAs--Washington-Baltimore, North Texas, and Los Angeles--how CPOT funding was being used. We selected these three HIDTAs because they had received funds for both fiscal years 2002 and 2003 and were geographically dispersed. We also interviewed officials from 8 of the 13 HIDTAs (Appalachia, Atlanta, Central Florida, Lake County, Milwaukee, Nevada, Oregon, and Rocky Mountain) that did not apply for or applied for but did not receive CPOT funding in fiscal years 2002 and 2003. We selected these HIDTAs to reflect broad geographic segments of the country.
We determined that the data presented in appendixes I and II from ONDCP, the Organized Crime Drug Enforcement Task Force (OCDETF), the Drug Enforcement Administration (DEA), and the Federal Bureau of Investigation (FBI) are sufficiently reliable, for the purposes of this review, based on interviews with agency officials and a review of their information systems documentation. In 1988, Congress established the White House's Office of National Drug Control Policy to, among other things, coordinate the efforts of federal drug control agencies and programs and establish the HIDTA program. By fiscal year 2004, ONDCP had designated 28 high intensity drug trafficking areas (HIDTAs)--centers of illegal drug production, manufacturing, importation, or distribution within the United States--with a federally funded HIDTA program budget of about $225 million. Each HIDTA is to develop and implement an annual strategy to address the regional drug threat. The strategy's initiatives involve the active participation of federal, state, and local law enforcement agencies to enhance and assist the coordination of drug trafficking control efforts in the region. To encourage HIDTAs to conduct CPOT investigations, ONDCP utilized discretionary funding. In fiscal year 2004, ONDCP allocated about $8 million in discretionary funding to HIDTAs to support their drug initiatives that link with international drug trafficking organizations on the CPOT list. This funding is not meant to supplant or replace existing agency/program budgets intended for similar purposes, according to ONDCP guidance to the HIDTAs. OCDETF is a nationwide law enforcement task force program administered within Justice that targets major narcotic trafficking and money laundering organizations using the combined resources and expertise of its federal member agencies together with state and local investigators.
Its mission is to identify, investigate, and prosecute members of high-level drug trafficking enterprises and to dismantle or disrupt the operations of those organizations. To help carry out this mission and to focus investigative resources on major sources of supply, OCDETF member agencies developed the CPOT list of major international drug trafficking organizations. In September 2002, at the request of the U.S. Attorney General, OCDETF issued the first CPOT list, naming international drug trafficking organizations most responsible for supplying illegal drugs to the United States. OCDETF member agencies developed criteria for determining whether an international drug organization was to be placed on the CPOT list. Criteria include whether the international organization operates nationwide in multiple regions of the United States and regularly deals in substantial quantities of illegal drugs or illicit chemicals that have a demonstrable impact on the nation's drug supply. OCDETF compiles and issues the CPOT list at the beginning of each fiscal year, with the intent that federal law enforcement agencies will target their investigations on CPOT organizations. OCDETF member agencies control the CPOT list and its distribution. OCDETF also collaborates with ONDCP on reviews of CPOT funding applications by HIDTAs that link their initiatives with the CPOT list. CPOT investigations were not inconsistent with the mission of the HIDTA program because HIDTAs' targeting of local drug traffickers linked with international organizations on the CPOT list was one possible strategy for achieving the program's goal of eliminating or reducing significant sources of drug trafficking in their regions. The mission of the HIDTA program is not expressly stated in current law. However, ONDCP has developed a mission statement that reflects the legislative authority for the HIDTA program, specifically, to enhance and coordinate U.S.
drug control efforts among federal, state, and local law enforcement agencies to eliminate or reduce drug trafficking and its harmful consequences in critical regions of the United States. The primary legislative authority for the HIDTA program is the Reauthorization Act, which provides guidance on the mission of the program by setting out factors for the Director of ONDCP to consider in determining which regions to designate as HIDTAs. The factors contained in the act are the extent to which:

1. the area is a center of illegal drug production, manufacturing, importation, or distribution;
2. state and local law enforcement have shown a determination to respond aggressively to drug trafficking in the area by committing resources to respond to it;
3. drug-related activities in the area are having a harmful impact in other areas of the country; and
4. a significant increase in federal resources is necessary to respond adequately to drug-related activities in the area.

In addition, House and Senate Appropriations Committee reports on ONDCP's appropriations have stated that the program was established to provide assistance to federal, state, and local law enforcement entities operating in those areas most adversely affected by drug trafficking. The use of a portion of the HIDTA program's discretionary funds to focus on CPOT investigations is not inconsistent with ONDCP's mission statement for the program and the legislative authority on which it is based, particularly the first and third factors in the Reauthorization Act. Drug traffickers operating in a HIDTA may be linked with the CPOT list because of their role in major international drug trafficking activities, including illegal distribution in multiple regions of the United States. Given such activities, they would contribute to the HIDTA's status as a center of illegal drug importation and distribution and have a harmful impact in other regions.
Similarly, in keeping with appropriations committee statements on the purpose of the program, HIDTA involvement in CPOT investigations is one way of assisting federal, state, and local operations in areas where the significant adverse effects of drug trafficking activities are due in part to links to international criminal organizations. Thus, for HIDTAs to investigate and disrupt or dismantle regional drug traffickers that are linked with CPOT organizations is not inconsistent with the HIDTA program's stated mission and its legislative authority. ONDCP distributed discretionary funds to HIDTAs to help support their investigations of drug traffickers linked with international organizations on the CPOT list by reviewing and approving HIDTA applications for funding. In fiscal years 2002, 2003, and 2004, ONDCP distributed CPOT funds to a total of 17 of the 28 HIDTAs. A Justice official who participates in the evaluation of HIDTA applications for CPOT funding said that ONDCP encourages applications for CPOT funding where additional funds are likely to benefit an initiative and move the investigation forward. Some HIDTAs chose not to apply because they face a domestic drug threat that does not have a link to any international CPOT organization activity. Other HIDTAs that have applied for funds did not receive CPOT funding because they did not have sufficient investigative resources to uncover the link to a CPOT organization. In commenting on a draft of this report, Justice said that while this may be true in some circumstances, it was also often the case that HIDTAs may have had sufficient resources but simply had not yet taken the investigation far enough to justify the award of discretionary funds. During fiscal years 2002 and 2003, 6 HIDTAs did not apply and 7 applied but were not approved for CPOT funding. In fiscal year 2004, 17 of the 28 HIDTAs did not receive CPOT funding--10 did not apply and 7 applied but were not approved for funding. 
ONDCP and HIDTA officials mentioned several reasons why some HIDTAs might not receive funding. First, some HIDTAs were denied funding if the investigative activities in their funding applications were not consistent with the HIDTA mission and linked to a CPOT organization. Second, ONDCP did not provide clear guidance or sufficient information for HIDTAs to develop their applications for CPOT funds, although it took steps to clarify its guidance and create an opportunity for all HIDTAs to participate. Third, reducing the amount of discretionary funds available for CPOT funding in fiscal year 2004 affected the number of HIDTAs that received this funding. Fourth, HIDTAs' local priorities may not link to any CPOT organization activity. ONDCP granted CPOT funding for HIDTA investigative activities that it determined demonstrated a link to the CPOT list and were consistent with the mission of the HIDTA program. As an example, one of the applications we reviewed requested CPOT funding for overtime pay, video cameras, portable computers, and wiretaps for surveillance activities to target a complex criminal organization involved in the distribution of significant quantities of heroin and cocaine as well as related homicides, abductions, arson, assaults, fraud, and witness tampering. Surveillance of the organization indicated that it was being supplied with drugs through an affiliate of a Latin American/Caribbean-based CPOT organization. Therefore, these drug activities were linked to an organization on the CPOT list, and the investigations also were consistent with the HIDTA program's mission, in that these activities contributed to eliminating or reducing significant sources of drug trafficking within the HIDTA region. Drug investigation activities that were not consistent with the HIDTA program's mission were not to receive CPOT funds from ONDCP, even if they showed a CPOT link.
Specifically, it is inconsistent with the HIDTA program's mission to supplant funds from other sources. Rather, CPOT funds are meant to supplement funding for investigations that support the HIDTA mission. For example, in one HIDTA application, a request was made for $686,000 for the HIDTA to provide software to a cellular telephone company located in a Caribbean country to monitor the cellular telephone calls of a CPOT organization. The application also asked for travel expenses of $7,500 to send a prosecutor and two HIDTA investigators to that country to review the cellular telephone records. ONDCP officials told us that they denied funding for these activities because ONDCP guidance to the HIDTAs regarding CPOT funding states that the funds cannot be used to "supplant," or replace, existing agency/program budgets intended for similar purposes because to do so would be inconsistent with the HIDTA mission. In commenting on a draft of this report, ONDCP made the clarifying statement that CPOT funding is provided for investigations of major drug trafficking organizations affiliated with CPOTs. However, HIDTAs do not participate in international investigations, and CPOT funding cannot be used to conduct or supplement investigations in places like Colombia or Afghanistan. In another application, a request was made for $120,000 to pay for street lighting in a drug-infested crime area of a major U.S. city to aid the HIDTA surveillance task force in pursuing drug enforcement operations. ONDCP officials told us that they determined the activity was not consistent with the HIDTA mission because CPOT funding cannot be used to supplant a city's budget for street maintenance and improvements. In some cases, ONDCP's lack of clear guidance or sufficient information limited some HIDTAs' ability to apply for CPOT funding.
For example, some HIDTA officials told us that in fiscal year 2002, ONDCP did not provide clear directions in its guidance about how HIDTAs were to document the link between their investigations and the CPOT list. However, in fiscal year 2003, ONDCP's officials recognized the problem and, at quarterly meetings, discussed with HIDTAs how to document links between their investigations and the CPOT list, thus resolving the problem. In addition, ONDCP was only able to provide a partial CPOT list to officials in all HIDTAs in each of the 3 fiscal years it provided CPOT funding, even though applications were to include a link between their investigations and the CPOT list. The partial list contained some of the largest organizations in operation and ones that were most frequently targeted by law enforcement. ONDCP, in its guidance, advised HIDTAs that they could obtain the entire list from their Justice contacts. Some HIDTA officials said not having a full list available to them from ONDCP limited their ability to apply for CPOT funding. In fiscal year 2004, ONDCP created an opportunity for all HIDTAs to participate. According to OCDETF officials, access to the full CPOT list is restricted to federal law enforcement officials. Commenting on a draft of this report, Justice said these restrictions are driven by the fact that the member agencies have designated the list as "law enforcement sensitive," because disclosure of certain investigative information contained on the list might jeopardize ongoing investigations of targeted organizations. As a result, access to the full CPOT list is restricted to OCDETF-member federal law enforcement agencies. Nonparticipating federal agencies, HIDTA directors, state and local police officials, and non-law enforcement federal agencies such as ONDCP could obtain the list from U.S. Attorneys or Special Agents-in-Charge of the OCDETF member agencies on a need-to-know basis.
To facilitate the distribution of discretionary CPOT funding, however, OCDETF provided ONDCP a partial list, which contained information on some of the largest organizations and those commonly known to, and targeted by, the law enforcement community. Since HIDTA officials have said that they need to know who is on the CPOT list to determine which of their investigations qualify for CPOT funds, ONDCP, in its guidance, advised HIDTAs to obtain the full CPOT list through their Justice contacts. However, officials from 2 HIDTAs we spoke to said that they had some difficulty in obtaining the full CPOT list. We spoke with officials from 8 of the 13 HIDTAs that either did not apply or applied for and did not receive CPOT funds in either of the first 2 years (fiscal years 2002 and 2003) ONDCP awarded CPOT funds. Officials from 2 of the HIDTAs said that obtaining the full list was a problem: one HIDTA did not have the full CPOT list within the time needed to complete its application, and the other said there was no formal procedure for obtaining it. Officials from the other 6 HIDTAs said it was not a problem, however, because they were able to obtain the full CPOT list from their Justice contacts. Although these examples may not typify all HIDTAs, they nevertheless indicate that not every HIDTA was able to readily access the full CPOT list and that it would be difficult for HIDTAs to show how their investigations qualify for CPOT funds without having the full list. Although ONDCP believed the CPOT information it provided was sufficient for all HIDTAs to fairly compete for discretionary CPOT funding, an ONDCP official responsible for CPOT funding acknowledged that not receiving a full CPOT list most likely reduced opportunities for some HIDTAs to receive CPOT funding or discouraged others from applying for funds.
All HIDTAs are eligible to apply to receive CPOT funding, according to ONDCP officials, even though 13 of the 28 HIDTAs did not apply for or applied for but did not receive CPOT funding in fiscal years 2002 and 2003. In fiscal year 2004, ONDCP's guidance identified three international organizations that trafficked in illegal drugs in all HIDTAs. ONDCP officials said that this additional guidance would allow all HIDTAs to focus their limited funding on these three organizations and would allow a baseline of opportunity for all HIDTAs to apply for CPOT funding. ONDCP stated it would give preference to funding applications that had links to these three CPOT organizations. Ten of the 11 HIDTAs that received CPOT funds in fiscal year 2004 linked their applications to the three CPOTs referenced in ONDCP's guidance. Providing HIDTAs with the names of three CPOT organizations that operated in all the HIDTA regions established a baseline of opportunity for the HIDTAs to apply for funding despite receiving a limited number of CPOT organizational targets from ONDCP. Commenting on a draft of this report, Justice acknowledged that the HIDTAs did face some difficulty regarding the distribution of the CPOT list. However, through participation with ONDCP in evaluating applications for CPOT funding, Justice officials noticed that--for those HIDTAs that applied--problems associated with the limited distribution of the list appeared to be confined to fiscal year 2002, when the list was first developed. In subsequent years, law enforcement agencies, including those in the HIDTAs, were more familiar with the CPOT list and how to gain access to it. The CPOT funding amount almost tripled from fiscal year 2002 to fiscal year 2003 but was cut in half in fiscal year 2004. 
Given the reduction in discretionary funding allocated to CPOT funding, ONDCP officials said that even if HIDTAs link their investigations to the CPOT list, and do not supplant other funding sources, they are not guaranteed CPOT funding. They recognized that reduced funding affected HIDTA participation. As shown in figure 1, fiscal year 2004 funding was reduced from $16.5 million to $7.99 million. In the first year, 8 HIDTAs received funding. In the second year, 14 HIDTAs received funding, and in the third year, when funding was reduced, 11 HIDTAs received funding. Despite more than a 50 percent drop in funding in fiscal year 2004, 2 of the 11 HIDTAs received CPOT funding for the first time. While there could be multiple causes, we also noted that the number of HIDTAs that did not apply increased from 6 in prior years to 10 in fiscal year 2004. ONDCP officials said that the limited CPOT funds must be directed at those HIDTAs where, in the judgment of those officials who reviewed the CPOT applications, the supply of drugs from CPOT organizations had the best chance of being interrupted. Commenting on a draft of this report, ONDCP agreed that the reduction of CPOT funding in fiscal year 2004 affected HIDTA participation but added that this observation, while accurate, should be stated within the context of all discretionary funding activities. ONDCP consulted with Congress prior to allocating the discretionary funding, as required by the report language accompanying ONDCP's appropriations. As a result of those consultations, ONDCP decided to reduce the amount available for funding CPOT-related investigations in order to fund other activities. Thus, while the reduction in fiscal year 2004 for CPOT-related funding resulted in fewer HIDTAs receiving CPOT funding, that should not have caused a decline in applications for other discretionary funding activities. For more detailed information on the amounts funded to each HIDTA, see appendix I.
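The size of the fiscal year 2004 reduction can be checked directly from the dollar figures above (a minimal arithmetic sketch; the variable names are ours):

```python
# Discretionary CPOT funding figures cited in the text, in millions of dollars.
fy2003_funding = 16.5
fy2004_funding = 7.99

# Fractional drop from fiscal year 2003 to fiscal year 2004.
drop = (fy2003_funding - fy2004_funding) / fy2003_funding
print(f"{drop:.1%}")  # 51.6% -- consistent with "more than a 50 percent drop"
```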
Figure 2 shows the 17 HIDTAs that received CPOT funding at least once during fiscal years 2002 through 2004 and the 11 that have not received funding. Within certain HIDTAs, law enforcement tended to focus more on domestic drug enforcement than on developing links with CPOT organizations. Officials at three HIDTAs we spoke to told us that in fiscal years 2002 and 2003, they did not apply for CPOT funding because their biggest drug problems were domestic drug producers and distributors, such as those organizations involved in methamphetamine and marijuana. As a result, their strategy was to focus on these local drug traffickers that they were required by law to investigate, and those investigations did not necessarily link with CPOT organizations. In addition, according to some HIDTA law enforcement officials, local law enforcement officers in their HIDTA focused on local investigations rather than those potentially linked with CPOT organizations because they saw a direct benefit to their city or county--prosecution of local targets accompanied by drug and asset seizures. Also, HIDTA officials said that while their law enforcement officers initiated numerous investigations, they do not always have enough funds to proceed to a level that may link the HIDTA investigation to the CPOT list. Commenting on a draft of this report, ONDCP did not disagree with the facts above but emphasized that HIDTAs should be focusing on investigations of local activities that reach beyond the boundaries of the HIDTA, consistent with their designation as centers of illegal drug trafficking activities that affect other parts of the country. On December 27, 2004, we provided a draft of this report for review and comment to ONDCP and Justice.
ONDCP commented on our analysis that the use of some discretionary funding for the HIDTA program to support CPOT-related drug trafficking investigations was not inconsistent with the HIDTA mission because it was one possible strategy to eliminate or reduce significant sources of drug trafficking in their regions. Justice generally agreed with the substance of the report and provided clarifications that we also incorporated in this report where appropriate. Both agencies focused their comments and clarifications on the second objective: how ONDCP distributed discretionary funds to HIDTAs for CPOT investigations and why some HIDTAs did not receive funding. ONDCP stressed its belief that the information it provided to HIDTAs was sufficient for all HIDTAs to fairly compete for limited CPOT funding, and that although CPOT funding was reduced in fiscal year 2004, HIDTAs could still participate in other discretionary funding activities. Finally, ONDCP believes that while some HIDTAs' investigations may not link to CPOTs, HIDTAs should focus on finding that link, given their designation as centers of illegal trafficking that affect other parts of the country. Justice emphasized that its restrictions on the distribution of the CPOT list were soundly based, allowed HIDTAs to gain access to the full list, and were not intended to withhold access to the CPOT list from HIDTA personnel. Justice acknowledged that HIDTAs did face some difficulty but was confident the problem had been overcome. We incorporated their perspectives as appropriate. The full text of the ONDCP Deputy Director for State and Local Affairs' letter and the Department of Justice's Associate Deputy Attorney General's memo are presented in appendixes III and IV, respectively. We will provide copies of this report to appropriate departments and other interested congressional committees.
In addition, we will send copies to the Attorney General of the United States and the Director of the Office of National Drug Control Policy. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. Major contributors to this report are listed in appendix V. If you or your staffs have any questions concerning this report, contact me at (202) 512-8777. During fiscal year 2003, a total of 744 CPOT investigations were conducted by OCDETF member law enforcement agencies. The majority of those investigations (497, or 67 percent) were multi-agency OCDETF investigations, involving participation from DEA, FBI, ICE, IRS, and other member agencies, while the remainder were conducted individually by DEA (191, or 26 percent) or FBI (56, or 8 percent). For fiscal year 2004, the majority of CPOT investigations continued to be multi-agency OCDETF investigations. For the first 7 months of fiscal year 2004, 72 percent (548 of 761) of CPOT investigations conducted by member law enforcement agencies were designated as OCDETF investigations. OCDETF officials attributed fiscal year 2004 increases in CPOT investigations over fiscal year 2003 to OCDETF's emphasis on identifying links between targeted domestic organizations and the CPOT list. As previously mentioned, OCDETF is composed of member agencies that worked together on the 497 CPOT investigations in fiscal year 2003. Member agencies either led investigations or supported other OCDETF member agencies in these investigations. The bar chart in figure 3 shows the number of drug investigations in which each OCDETF member agency participated. For example, DEA participated in 402 CPOT investigations, the highest level of participation by any member agency. FBI participated in 320 investigations, many of which it conducted jointly with DEA along with other member agencies.
DEA and FBI are the only OCDETF member agencies that conducted separate CPOT investigations. Generally, these investigations were handled outside of OCDETF because they did not yet satisfy the criteria for OCDETF designation--that is, they were investigations conducted exclusively by foreign offices or investigations that had not yet developed to a sufficient level to be designated as OCDETF cases. For the first 7 months of fiscal year 2004, data showed that DEA separately conducted 23 percent (172 of 761) and FBI separately conducted 5 percent (41 of 761) of investigations linked to CPOTs in addition to their participation in OCDETF investigations. These two agencies conducted their CPOT investigations out of their own agencies' direct appropriations. These CPOT investigations can subsequently become eligible for OCDETF funding when OCDETF's criteria are met. For example, besides being linked to the CPOT list, DEA and FBI investigations are to involve multiple law enforcement agencies, among other things, in order to qualify as OCDETF-designated CPOT investigations. Figure 4 shows the relationship among OCDETF, DEA, and FBI in their handling of CPOT investigations and shows that DEA and the FBI conduct CPOT investigations both separately and collectively with other OCDETF member agencies. Figure 4 also shows the collaborative relationship between ONDCP and Justice. In addition to those named above, the following individuals contributed to this report: Frances Cook, Grace Coleman, David Dornisch, Michael H. Harmond, Weldon McPhail, and Ron Salo.
In fiscal year 2002, the Attorney General called upon law enforcement to target the "most wanted" international drug traffickers responsible for supplying illegal drugs to America. In September 2002, law enforcement, working through the multi-agency Organized Crime Drug Enforcement Task Force (OCDETF) Program, developed a list of these drug traffickers, known as the Consolidated Priority Organization Target List (CPOT), to aid federal law enforcement agencies in targeting their drug investigations. Also, the White House's Office of National Drug Control Policy (ONDCP) collaborated with law enforcement to encourage existing High Intensity Drug Trafficking Areas (HIDTA) to conduct CPOT investigations. According to ONDCP, the 28 HIDTAs across the nation are located in centers of illegal drug production, manufacturing, importation, or distribution. ONDCP distributed discretionary funds to supplement some HIDTAs' existing budgets beginning in fiscal year 2002 to investigate CPOT organizations. Out of concern that a CPOT emphasis on international drug investigations would detract from the HIDTA program's regional emphasis, the Senate Committee on Appropriations directed GAO to examine whether investigations of CPOT organizations are consistent with the HIDTA program's mission and how ONDCP distributes its discretionary funds to HIDTAs for CPOT investigations. The mission of the HIDTA program is to enhance and coordinate U.S. drug control efforts among federal, state, and local law enforcement agencies to eliminate or reduce drug trafficking and its harmful consequences in HIDTAs. CPOT investigations were not inconsistent with this mission because HIDTAs' targeting of local drug traffickers linked with international organizations on the CPOT list was one possible strategy for achieving the program's goal of eliminating or reducing significant sources of drug trafficking in their regions. 
GAO found that in fiscal years 2002 through 2004, ONDCP distributed discretionary funds to 17 of the 28 HIDTAs for CPOT investigations. Some HIDTA officials said they did not receive CPOT funding for several reasons, including unclear guidance, insufficient information provided to HIDTAs for developing funding applications, and local priorities that did not link with CPOT investigations. Reduced discretionary funding in fiscal year 2004 for CPOT investigations affected the number of HIDTAs that received this funding.
Three main types of pipelines--gathering, transmission, and distribution--carry hazardous liquid and natural gas from producing wells to end users (residences and businesses) and are managed by about 3,000 operators. Transmission pipelines carry these products, sometimes over hundreds of miles, to communities and large-volume users, such as factories. Transmission pipelines tend to have the largest diameters and operate at the highest pressures of any type of pipeline. PHMSA has estimated there are more than 400,000 miles of hazardous liquid and natural gas transmission pipelines across the United States. PHMSA administers two general sets of pipeline safety requirements and works with state pipeline safety offices to inspect pipelines and enforce the requirements. The first set of requirements is minimum safety standards that cover specifications for the design, construction, testing, inspection, operation, and maintenance of pipelines. The second set is part of a supplemental risk-based regulatory program termed "integrity management." Under transmission pipeline integrity management programs, operators are required to systematically identify and mitigate risks to pipeline segments that are located in highly populated or environmentally sensitive areas (called "high-consequence areas"). According to PHMSA, industry, and state officials, responding to either a hazardous liquid or natural gas pipeline incident typically includes detecting that an incident has occurred, coordinating with emergency responders, and shutting down the affected pipeline segment. Under PHMSA's minimum safety standards, operators are required to have a plan that covers these steps for all of their pipeline segments and to follow that plan during an incident. Officials from PHMSA and state pipeline safety offices perform relatively minor roles during an incident, as they rely on operators and emergency responders to take actions to mitigate the consequences of such events. 
Operators must report incidents that meet certain thresholds--including incidents that involve a fatality or injury, excessive property damage or product release, or an emergency shutdown--to the federal National Response Center. Operators must also conduct an investigation to identify the root cause and lessons learned, and report to PHMSA. Federal and state authorities may use their discretion to investigate some incidents, which can involve working with operators to determine the cause of the incident. While prior research shows that most of the fatalities and damage from an incident occur in the first few minutes following a pipeline rupture, operators can reduce some of the consequences by taking actions that include closing valves that are spaced along the pipeline to isolate segments. The amount of time it takes to close a valve depends upon the equipment installed on the pipeline. For example, valves with manual controls (referred to as "manual valves") require a person to arrive on site and either turn a wheel crank or activate a push-button actuator. Valves that can be closed without a person at the valve's location (referred to as "automated valves") include remote-control valves, which can be closed via a command from a control room, and automatic-shutoff valves, which can close without human intervention based on sensor readings. Automated valves generally take less time to close than manual valves. PHMSA's minimum safety standards dictate the spacing of all valves, regardless of type of equipment installed to close them, while integrity management regulations require that transmission pipeline operators conduct a risk assessment for pipelines in high-consequence areas that includes the consideration of automated valves. 
Multiple variables--some controllable by transmission pipeline operators--can influence the ability of operators to respond quickly to an incident, according to PHMSA officials, pipeline safety officials, and industry stakeholders and operators. Ensuring a quick response is important because, according to pipeline operators and industry stakeholders, reducing the amount of time it takes to respond to an incident can reduce the amount of property and environmental damage stemming from an incident and, in some cases, the number of fatalities and injuries. For example, several natural gas pipeline operators noted that a faster incident response time could reduce the amount of property damage from secondary fires (after an initial pipeline rupture) by allowing fire departments to extinguish the fires sooner. In addition, hazardous liquid pipeline operators told us that a faster incident response time could result in lower costs for environmental remediation efforts and less product lost. We identified five variables that can influence incident response time and are within an operator's control, and four other variables that influence a pipeline operator's ability to respond to an incident but are beyond an operator's control. The effect a given variable has on a particular incident response will vary according to the specifics of the situation. The variables within an operator's control include the type of valve installed on the pipeline, the location of qualified operator response personnel, control room management, and relationships with local first responders; the variables beyond an operator's control include weather conditions and other operators' pipelines in the same area. (See table 1 for the full list of variables and further detail.) Appendix II provides several examples of response time in past incidents; response time varied from several minutes to days depending on the presence and interaction of the variables just mentioned.
As noted, one variable that influences operators' response times to incidents is the type of valve installed on the pipeline. Research and industry stakeholders indicate that the primary advantage of installing automated valves--as opposed to other safety measures--is related to the time it takes to respond to an incident. Although automated valves cannot mitigate the fatalities, injuries, and damage that occur in an initial blast, quickly isolating the pipeline segment through automated valves can reduce subsequent damage by reducing the amount of hazardous liquid and natural gas released. Research and industry stakeholders also identified two disadvantages operators should consider when determining whether to install automated valves, related to potential accidental closures and the monetary costs of purchasing and installing the equipment. Specifically, automated valves can lead to accidental closures, which can have severe, unintended consequences, including loss of service to residences and businesses. In addition, according to operators, vendors, and contractors, the monetary costs of installing automated valves can range from tens of thousands to a million dollars per valve, which may be significant expenditures for some pipeline operators. According to operators and other industry stakeholders, considering monetary costs is important when making decisions to install automated valves because resources spent for this purpose can take away from other pipeline safety efforts. Specifically, operators and industry stakeholders told us they often would rather focus their resources on incident prevention to minimize the risk of an incident instead of focusing resources on incident response. PHMSA officials stated that they generally support the idea that pipeline operators be given some flexibility to target spending where the operator believes it will have the most safety benefit.
Research and industry stakeholders also indicate the importance of determining whether to install valves on a case-by-case basis because the advantages and disadvantages can vary considerably based on factors specific to a unique valve location. These sources indicated that the location of the valve, existing shutdown capabilities, proximity of personnel to the valve's location, the likelihood of an ignition, type of product being transported, operating pressure, topography, and pipeline diameter, among other factors, all play a role in determining the extent to which an automated valve would be advantageous. Operators we met with are using a variety of methods for determining whether to install automated valves that consider--on a case-by-case basis--whether these valves will improve response time, the potential for accidental closure, and monetary costs. For example, two natural gas pipeline operators told us that they applied a decision tree analysis to all pipeline segments in highly populated and frequented areas. They used the decision tree to guide a variety of yes-or-no questions on whether installing an automated valve would improve response time to less than an hour and provide advantages for locations where people might have difficulty evacuating quickly in the event of a pipeline incident. Other hazardous liquid pipeline operators said they used computer-based spill modeling to determine whether the amount of product release would be significantly reduced by installing an automated valve. In our report, we note that PHMSA has not developed a performance-based framework for incident response times, although some organizations in the pipeline industry have done so. We and others have recommended that the federal government move toward performance-based regulatory approaches to allow those being regulated to determine the most appropriate way to achieve desired, measurable outcomes.
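As a rough illustration only, the kind of yes-or-no decision-tree screen described above might be sketched as follows. The threshold values, question wording, and function name are hypothetical, not the operators' or PHMSA's actual criteria.

```python
# Hypothetical sketch of a decision-tree screen for automated valve
# installation, loosely modeled on the yes-or-no questions operators
# described. All thresholds and inputs here are illustrative assumptions.

def recommend_automated_valve(in_high_consequence_area: bool,
                              manual_response_minutes: float,
                              automated_response_minutes: float,
                              evacuation_difficult: bool) -> bool:
    """Return True if this sketch flags the segment for an automated valve."""
    # Only segments in highly populated or frequented areas are screened.
    if not in_high_consequence_area:
        return False
    # Would automation bring response time under one hour?
    improves_response = (manual_response_minutes > 60
                         and automated_response_minutes <= 60)
    # Also favor locations where people might have difficulty
    # evacuating quickly in the event of an incident.
    return improves_response or (evacuation_difficult
                                 and automated_response_minutes
                                 < manual_response_minutes)
```

In practice, an operator would feed each candidate segment through such a screen and then weigh the result against accidental-closure risk and installation cost, which this sketch does not model.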
According to our past work, such a framework should include: (1) national goals, (2) performance measures that are linked to those national goals, and (3) appropriate performance targets that promote accountability and allow organizations to track their progress toward goals. While PHMSA has established a national goal for incident response times, it has not linked performance measures or targets to this goal. Specifically, PHMSA directs operators to respond to certain incidents--emergencies that require an immediate response--in a "prompt and effective" manner, but neither PHMSA's regulations nor its guidance describe ways to measure progress toward meeting this goal. Without a performance measure and target for a prompt and effective incident response, PHMSA cannot quantitatively determine whether an operator meets this goal and track their performance over time. PHMSA officials told us that because pipeline incidents often have unique characteristics, developing a performance measure and associated target for incident response time would be difficult. In particular, it would be challenging to establish a performance measure using incident response time in a way that would always lead to the desired outcome of a prompt and effective response. In addition, officials stated it would be difficult to identify a single response time target for all incidents, as pipeline operators likely should respond to some incidents more quickly than others. Defining performance measures and targets for incident response can be challenging, but one possible way for PHMSA to move toward a more quantifiable, performance-based approach would be to develop strategies to improve incident response based on nationwide data. 
For example, performing an analysis of nationwide incident data--similar to PHMSA's current analyses of fatality and injury data--could help PHMSA determine response times for different types of pipelines (based on characteristics such as location, operating pressure, and diameter); identify trends; and develop strategies to improve incident response. However, we found that PHMSA does not have the reliable nationwide data on incident response times it would need to conduct such analyses. Specifically, the response time data PHMSA currently collects are unreliable for two reasons: (1) operators are not required to fill out certain time-related fields in the PHMSA incident-reporting form and (2) when operators do provide these data, they are interpreting the intended content of the data fields in different ways. Our report recommended that PHMSA improve incident response data and use these data to evaluate whether to implement a performance-based framework for incident response times. PHMSA agreed to consider this recommendation. We also found that PHMSA needs to do a better job of sharing information on ways operators can make decisions to install automated valves. For example, many of the operators we spoke with were unaware of existing PHMSA enforcement and inspection guidance that could be useful for operators in determining whether to install automated valves on transmission pipelines. In addition, while PHMSA inspectors see examples of how operators make decisions to install automated valves during integrity management inspections, they do not formally collect this information or share it with other operators. Given the variety of risk-based methods for making decisions about automated valves across the operators we spoke with, we believe that both operators and inspectors would benefit from exposure to some of the methods used by other operators to make decisions on whether to install automated valves.
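The nationwide analysis suggested above amounts to grouping incident records by pipeline characteristics and summarizing response times within each group. A minimal sketch, with hypothetical field names and sample records (not PHMSA's actual reporting-form fields), might look like this:

```python
# Illustrative sketch: deriving response-time baselines from incident
# records grouped by pipeline characteristics. Field names and the
# sample data are assumptions for demonstration, not PHMSA data.
from collections import defaultdict
from statistics import median

def response_time_baselines(incidents):
    """Median response time (minutes) per (product, location) group."""
    groups = defaultdict(list)
    for rec in incidents:
        key = (rec["product"], rec["location"])
        groups[key].append(rec["response_minutes"])
    return {key: median(times) for key, times in groups.items()}

# Hypothetical incident records.
sample = [
    {"product": "gas", "location": "high-consequence", "response_minutes": 30},
    {"product": "gas", "location": "high-consequence", "response_minutes": 50},
    {"product": "liquid", "location": "rural", "response_minutes": 240},
]
```

A regulator could compare each new incident's response time against the baseline for its group, which is one way a performance target could eventually be set per pipeline type rather than as a single nationwide number.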
Our report recommended that PHMSA share guidance and information on operators' decision-making approaches to assist operators with these determinations. PHMSA also agreed to consider this recommendation. Chairman Rockefeller, this concludes my prepared remarks. I am happy to respond to any questions that you or other Members of the Committee may have at this time. For questions about this statement, please contact Susan Fleming, Director, Physical Infrastructure, at (202) 512-3824 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement include Sara Vermillion (Assistant Director), Sarah Arnett, Melissa Bodeau, Russ Burnett, Matthew Cook, Colin Fallon, Robert Heilman, David Hooper, and Josh Ormond. GAO recently issued two reports related to the safety of certain types of pipelines. The first, GAO-12-388, reported on the safety of gathering pipelines, which currently are largely unregulated by the federal government. The second, GAO-12-389R, reported on the potential safety effects of applying less prescriptive requirements, currently levied on distribution pipelines, to low-stress natural gas transmission pipelines. Further detail on each report is provided below. For the full report text, go to www.gao.gov. Included in the nation's pipeline network are an estimated 200,000 or more miles of onshore gathering pipelines, which transport products to processing facilities and larger pipelines. Many of these pipelines have not been subject to federal regulation because they are considered less risky due to their generally rural location and low operating pressures. For example, out of the more than 200,000 estimated miles of natural gas gathering pipelines, the Pipeline and Hazardous Materials Safety Administration (PHMSA) regulates roughly 20,000 miles.
Similarly, of the 30,000 to 40,000 estimated miles of hazardous liquid gathering pipelines, PHMSA regulates about 4,000 miles. While the safety risks of onshore gathering pipelines that are not regulated by PHMSA are generally considered to be lower than for other types of pipelines, PHMSA does not collect comprehensive data to identify the safety risks of unregulated gathering pipelines. Without data on potential risk factors--such as information on construction quality, maintenance practices, location, and pipeline integrity--pipeline safety officials are unable to assess and manage safety risks associated with gathering pipelines. Further, some types of changes in pipeline operational environments could also increase safety risks for federally unregulated gathering pipelines. Specifically, land-use changes are resulting in development encroaching on existing pipelines, and the increased extraction of oil and natural gas from shale deposits is resulting in the construction of new gathering pipelines, some of which are larger in diameter and operate at higher pressure than older pipelines. As a result, PHMSA is considering collecting data on federally unregulated gathering pipelines. However, the agency's plans are preliminary, and the extent to which PHMSA will collect data sufficient to evaluate the potential safety risks associated with these pipelines is uncertain. In addition, we found that the amount of sharing of information to ensure the safety of federally unregulated pipelines among state and federal pipeline safety agencies appeared limited. For example, some state and PHMSA officials we interviewed had limited awareness of safety practices used by other states. Increased communication and information sharing about pipeline safety practices could boost the use of such practices for unregulated pipelines. 
We recommended that PHMSA should collect data on federally unregulated onshore hazardous liquid and gas gathering pipelines, subsequent to an analysis of the benefits and industry burdens associated with such data collection. Data collected should be comparable to what PHMSA collects annually from operators of regulated gathering pipelines (e.g., fatalities, injuries, property damage, location, mileage, size, operating pressure, maintenance history, and the causes of incidents and consequences). Also, we recommended that PHMSA establish an online clearinghouse or other resource for states to share information on practices that can help ensure the safety of federally unregulated onshore hazardous liquid and gas gathering pipelines. This resource could include updates on related PHMSA and industry initiatives, guidance, related PHMSA rulemakings, and other information collected or shared by states. PHMSA concurred with our recommendations and is taking steps to implement them. Gas transmission pipelines typically move natural gas across state lines and over long distances, from sources to communities. Transmission pipelines can generally operate at pressures up to 72 percent of specified minimum yield strength (SMYS). By contrast, local distribution pipelines generally operate within state boundaries to receive gas from transmission pipelines and distribute it to commercial and residential end users. Distribution pipelines typically operate well below 20 percent of SMYS. Connecting the long-distance transmission pipelines to the local distribution pipelines are lower stress transmission pipelines that may transport natural gas for several miles at pressures between 20 and 30 percent of SMYS. Applying PHMSA's distribution integrity management requirements to low-stress transmission pipelines would result in less prescriptive safety requirements for these pipelines. 
Overall, requirements for distribution pipelines are less prescriptive than requirements for transmission pipelines in part because the former operate at lower pressure and pose lower risks in general than the latter. For example, the integrity management regulations for transmission pipelines allow three types of in-depth physical inspection. In contrast, distribution pipeline operators can customize their integrity management programs to the complexity of their systems, including using a broader range of methods for physical inspection. While PHMSA officials stated that "less prescriptive" does not necessarily mean less safe, they also stated that integrity management requirements for distribution pipelines can be more difficult to enforce than integrity management requirements for transmission pipelines. In general, the effect on pipeline safety of changing PHMSA's requirements for low-stress transmission pipelines is unclear. While the consequences of a low-stress transmission pipeline failure are generally not severe because these pipelines are more likely to leak than rupture, the point at which a gas pipeline fails by rupture is uncertain and depends on a number of factors in addition to pressure, such as the size or type of defect and the materials used to construct the pipeline. In addition, the mileage and location of pipelines that would be affected by such a regulatory change are currently unknown, although PHMSA recently changed its reporting requirements to collect such information. The concern is that because distribution pipelines are located in highly populated areas, the low-stress transmission pipelines that are connected to them could also be located in highly populated areas. As a result, we considered the current regulatory approach of applying more prescriptive transmission pipeline requirements reasonable.
Operators we spoke with stated that the amount of time it takes to respond to an incident can vary depending on a number of variables (see table 2).
Pipelines are a relatively safe means of transporting natural gas and hazardous liquids; however, catastrophic incidents can and do occur. Such an incident occurred on December 11, 2012, near Sissonville, West Virginia, when a rupture of a natural gas transmission pipeline destroyed or damaged 9 homes and badly damaged a section of Interstate 77. Large-diameter transmission pipelines such as these that carry products over long distances from processing facilities to communities and large-volume users make up more than 400,000 miles of the 2.5 million mile natural gas and hazardous liquid pipeline network in the United States. The Department of Transportation's (DOT) Pipeline and Hazardous Materials Safety Administration (PHMSA), working in conjunction with state pipeline safety offices, oversees this network, which transports about 65 percent of the energy we consume. The best way to ensure the safety of pipelines, and their surrounding communities, is to minimize the possibility of an incident occurring. PHMSA's regulations require pipeline operators to take appropriate preventive measures such as corrosion control and periodic assessments of pipeline integrity. To mitigate the consequences if an incident occurs, operators are also required to develop leak detection and emergency response plans. One mitigation measure operators can take is to install automated valves that, in the event of an incident, close automatically or can be closed remotely by operators in a control room. Such valves have been the topic of several National Transportation Safety Board (NTSB) recommendations since 1971 and a PHMSA report issued in October 2012. As mandated in the Pipeline Safety, Regulatory Certainty, and Job Creation Act of 2011, we issued a January 2013 report on the ability of transmission pipeline operators to respond to a hazardous liquid or natural gas release from an existing pipeline segment. 
This statement is based on this report and addresses (1) variables that influence the ability of transmission pipeline operators to respond to incidents and (2) opportunities to improve these operators' responses to incidents. This statement also provides information from two other recent GAO reports on pipeline safety. Numerous variables--some of which are under operators' control--influence the ability of transmission pipeline operators to respond to incidents. For example, the location of response personnel and the use of manual or automated valves can affect the amount of time it takes for operators to respond to incidents. However, because the advantages and disadvantages of installing an automated valve are closely related to the specifics of the valve's location, it is appropriate that operators decide whether to install automated valves on a case-by-case basis. Several operators we spoke with have developed approaches to evaluate the advantages and disadvantages of installing automated valves, such as using spill-modeling software to estimate the potential amount of product released and extent of damage that would occur in the event of an incident. One method PHMSA could use to improve operator response to incidents is to develop a performance-based approach for incident response times. While defining performance measures and targets for incident response can be challenging, PHMSA could move toward a performance-based approach by evaluating nationwide data to determine response times for different types of pipeline (based on location, operating pressure, and pipeline diameter, among other factors). First, though, PHMSA must improve the data it collects on incident response times. These data are not reliable because operators are not required to fill out certain time-related fields in the reporting form and because operators told us they interpret these data fields in different ways. 
Furthermore, while PHMSA conducts a variety of information-sharing activities, the agency does not formally collect or share evaluation approaches used by operators to decide whether to install automated valves, and not all operators we spoke with were aware of existing PHMSA guidance designed to assist operators in making these decisions. We recommended that PHMSA should: (1) improve incident response data and use those data to explore the feasibility of developing a performance-based approach for improving operators' responses to pipeline incidents and (2) assist operators in deciding whether to install automated valves by formally collecting and sharing evaluation approaches and ensuring operators are aware of existing guidance. PHMSA agreed to consider these recommendations.
The NCR is a unique regional partnership, in that it is the only region that has a statutorily created and federally funded office devoted solely to supporting coordination and cooperation within the region. Appendix I provides more information about the region and the organizations responsible for supporting preparedness coordination. We have reported in the past on preparedness efforts for the NCR. Our past work for Congress has tracked the evolution and development of increasingly effective efforts to develop a coordinated NCR preparedness strategy, along with some opportunities for continuing improvement in strategy-related efforts. See appendix II for more information about our past NCR work. We have previously identified six characteristics of effective strategies that could be applied to the NCR. We noted that these six characteristics would help to enable its implementers to effectively shape policies, programs, priorities, resource allocations, and standards and enable relevant stakeholders to achieve intended results. These characteristics call for strategies to include (1) purpose, scope, and methodology; (2) problem definition and risk assessment; (3) goals, subordinate objectives, activities, and performance measures; (4) resources, investments, and risk management; (5) organizational roles, responsibilities, and coordination; and (6) integration and implementation. More information about the six desirable strategy characteristics and their application to a regional preparedness strategy appears in appendix III. The 2010 NCR strategy addresses why the strategy was produced, the scope of its coverage, and the process by which it was developed. The introduction to the plan specifies that it was produced to help identify the capabilities needed to strengthen the region's homeland security efforts and to define the framework for achieving those capabilities. 
The scope of the plan, as outlined in the introduction, is strategic investment in new and existing capabilities to help all localities in the NCR prepare for, prevent, protect against, respond to, and recover from all-hazards threats and events. Specifically, the plan's goals and objectives are designed to build new and expanded capabilities and to ensure maintenance of previous investments. Additionally, the aim of these capabilities, according to the plan, is to help support the localities in the NCR as they execute their operational plans in all phases of homeland security. The plan's methodology appendix specifies that the effort to produce the 2010 plan started with an NCR partner-led assessment of progress under the 2006 NCR Strategic Plan and stakeholder recommendations on how best to update the goals to reflect current priorities of the NCR. As part of this effort, subject-matter experts identified priority capabilities from the 2010 UASI Investment Justifications that serve as the foundation for the plan's goals and objectives. Additionally, the appendix outlines how the NCR partners (1) accounted for legislative, policy, and economic factors; (2) facilitated stakeholder engagement; (3) drew on capabilities-based analysis to identify priorities; and (4) designed capability initiatives to be specific and measurable. The 2010 NCR strategy generally addresses the particular problems and threats the strategy is directed towards, and the NCR has undertaken efforts to assess threats, vulnerabilities, and consequences. In our September 2006 statement on NCR strategic planning, we noted that an ongoing risk-assessment methodology is important to help ensure identification of emerging risks. 
It is not clear from the strategy how the NCR plans to update risk information, but according to responsible NCR officials, a regional risk assessment will be conducted every 2-4 years, and during this fiscal year the NCR will be making decisions about the timing and methodology for the next regional risk assessment. In addition, the officials said risk information can enter prioritization decisions as subject-matter experts bring to bear their knowledge of critical-infrastructure sector-specific risk assessments and lessons learned from regional and worldwide incidents. The 2010 NCR Strategic Plan includes a profile of the region that details how particular social, economic, and critical-infrastructure factors in the region serve to increase both the threat and consequence components of its profile. For example, the plan's profile explains that the NCR has more than 340,000 federal workers; 2,000 political, social, and humanitarian nonprofit organizations; more than 20 million tourists per year; 4,000 diplomats at more than 170 embassies; and some of the most important symbols of national sovereignty and democratic heritage. The plan notes that the region needs to be prepared for a variety of threats and challenges. The region has historically experienced, and in some cases routinely experiences, natural events such as ice, snowstorms, and flooding; special events such as international summits, inaugurations, and parades; and human-caused threats such as terrorist attacks. The plan identifies previously conducted risk-assessment efforts that, along with other information, helped inform the identification of priority goals, objectives, and activities. First, the NCR's Hazard Information and Risk Assessment, conducted in 2006, was used to identify threats and vulnerabilities and then to consider consequences of various incidents.
Second, NCRC conducted another assessment--the NCR Strategic Hazards Identification Evaluation for Leadership Decisions (SHIELD)--in 2008. NCRC developed SHIELD with input from federal, state, local, and private-sector partners and in collaboration with DHS's Office of Risk Management and Analysis. SHIELD's analysis ranks potential critical-infrastructure hazards and provides options for risk reduction, with a focus on probable scenarios for the region. The 2010 NCR strategy addresses what the strategy is trying to achieve, and steps to achieve those results in the next 3 to 5 years; however, the Performance Measurement Plan to help monitor progress toward those results is not expected to be finalized until December 31, 2011. The strategy clearly identifies updated and prioritized goals from the previous version of the strategy. Each of these four goals is accompanied by supporting objectives, which in turn, are supported by more targeted initiatives. According to the strategy, the goals, objectives, and initiatives were developed by multiple stakeholders, including emergency managers, first responders, health-care officials, and information-technology specialists, among others, and focus on developing and sustaining key capabilities in the region. (A full description of the goals, objectives, and initiatives identified in the 2010 NCR strategy appears in appendix IV.) In our work on desirable strategy characteristics, we reported that identification of priorities, milestones, and performance measures can aid implementing parties in achieving results in specific timeframes--and could enable more effective oversight and accountability. The strategy states that a Performance Measurement Plan will guide monitoring of the strategy's implementation to evaluate progress in achieving its goals and objectives. NCR provided us with a draft copy of the Performance Measurement Plan, which is currently under development.
Our review of this draft showed that the NCR has begun efforts to develop measures. While the 2010 plan states that the initiatives it defines are intended to be attained during the next 3 to 5 years, the strategy does not currently communicate specific milestones for achieving the plan's objectives and initiatives. However, according to NCR officials, with the annual planning and implementation cycle beginning in January 2012, they plan to enter into a new phase of their strategy efforts, designed to make the strategy process more data-driven and project-management focused. According to the officials, this phase entails each objective being assigned a designated leader, who will be responsible for setting milestones and monitoring project plans for achieving his or her objective across the region. The Performance Measurement Plan template information for each initiative includes (1) the strategic goal and objective the initiative supports; (2) a scale to track progress toward achieving the initiative; (3) the initiative's relationship to DHS's Target Capabilities List; (4) applicable national standards; and (5) multiple metrics for each initiative to be tracked separately for Maryland, Virginia, and Washington, D.C. For example, in the draft plan, the NCR initiative to "catalog all critical infrastructure and key resources in the NCR and conduct consequence-of-loss analysis" ties in with three separate DHS Target Capabilities and is based on the DHS National Infrastructure Protection Plan's definition of Tier-2 Critical Assets. It then provides five separate metrics to monitor the identification and documentation of assets, as well as the completion of consequence and loss analyses. A senior official in the NCR said that subject-matter experts are currently completing progress reports on the metrics for each of the initiatives in the strategy.
The 2010 NCR strategy contains information and processes designed to help address what the strategy will cost, the sources and types of resources and investments needed, and where resources and investments should be targeted based on balancing risk reductions with costs. According to the strategic plan, its implementation will be guided by investment plans that define the activities required to achieve the goals and objectives, and an annual work plan will lay out grant-funded projects needed to complete the investment plans. We have reviewed draft copies of 16 investment plans, which are out for NCR partner comment until December 22, 2011. Our review of the draft investment plans shows that they specify their relationship to the strategic objective they are designed to support, but we did not evaluate how well the specific content of each investment plan is designed to achieve those objectives. In our work on desirable strategy characteristics, we reported that, ideally, a strategy would identify appropriate mechanisms to allocate resources, such as grants, in-kind services, loans, and user fees, based on identified needs. The strategic plan notes that the UASI grant program provides a key source of funding for achieving the priority capabilities in the NCR's Strategic Plan. The strategic plan's methodology appendix states that the 2010 UASI Investment Justifications serve as the foundation for the strategic plan's goals and objectives. In previous NCR work, we raised concerns about NCR's singular focus on UASI resources. The strategic plan states that the NCR draws upon federal grant programs outside of those provided by DHS, such as public health-related grants from the Department of Health and Human Services and Department of Justice. However, it is not clear that NCR has a systematic process for identifying and allocating funding other than UASI to help achieve priority objectives.
According to responsible officials, NCR officials coordinate with local, state, and federal jurisdictions to help ensure UASI investments do not duplicate existing federal, state, and local assets. These officials also said the new Management Review Process, set to begin in January 2012, is to help with the identification and documentation of available resources. Similarly, the plan does not identify nonfinancial resources--such as Department of Defense (DOD) NORTHCOM or National Guard Bureau resources--that potentially could support priority objectives. The federal government has an array of resources that can be made available, upon request, to assist state and local response. For example, DOD has significant capabilities to augment a federal chemical, biological, radiological, nuclear, and high-yield explosive (CBRNE) response, like those identified in the strategic plan, and also contributes to the organization, training, and equipping of state-controlled military units focused on consequence management. According to the 2010 strategic plan's methodology appendix, the region's priorities are informed by risk assessments--specifically SHIELD--gap analyses, after-action reports, and other studies. According to NCR officials, NCR and its jurisdictions coordinate with various DOD organizations to ensure the availability of CBRNE assets. Moreover, they said that subject-matter experts also bring their knowledge of other resources and capabilities to bear during efforts to identify gaps and prioritize resources. However, they acknowledged they have not systematically considered how existing federal capabilities--like DOD resources--relate to efforts to build the capabilities within their priority objectives, but are considering how they might further enhance coordination in the future. We will continue to monitor this issue as we conduct future work on NCR preparedness.

The 2010 NCR strategy addresses the roles and responsibilities of the various NCR organizations.
We previously reported that identifying which organizations will implement the strategy, their roles and responsibilities, and mechanisms for coordinating their efforts helps answer the fundamental question about who is in charge, not only during times of crisis, but also during all phases of preparedness efforts: prevention, vulnerability reduction, and response and recovery. The NCR has responsibility for coordinating information and resources from multiple jurisdictions at the federal, state, and local levels to ensure that strategic goals are met. According to the 2010 NCR strategy, NCR stakeholders have constructed the strategy to complement state and local operational plans. Operational plans remain the responsibility of state and local emergency-management agencies, and state and local emergency-operations plans describe how each jurisdiction will coordinate its response to an event regionally. The Governance appendix to the NCR strategic plan details the various organizations involved in preparedness for all-hazards disasters in the region and their roles and responsibilities. For example, the Emergency Preparedness Council is described as the body that provides oversight of the Regional Emergency Coordination Plan and the NCR Strategic Plan to identify and address gaps in readiness in the NCR, among other responsibilities. Additionally, the appendix lays out the Regional Emergency Support Function committees for functions most frequently used to provide support for disasters and emergencies in the region. According to the plan, representatives from various sectors work together toward building capabilities within each support function and the chairs of the committees provide leadership in identifying gaps in regional capabilities in the committee's areas of responsibility and identify the need for UASI funds or other resources to address those gaps. 
An example of a Regional Emergency Support Function committee is the Agriculture and Natural Resources Committee, which focuses on nutrition assistance, animal and plant disease and pest response, food safety and security, as well as the safety and well-being of household pets. Finally, the appendix highlights the Regional Programmatic Working Groups, which consist of practitioners, policymakers, and representatives from the government, civic, and private sectors. The groups serve to fill gaps, coordinate across the Regional Emergency Support Functions, and provide more focused attention on high-priority areas. For example, the Exercise and Training Operations Panel Working Group supports training and exercises for all Regional Emergency Support Functions.

The 2010 NCR strategy addresses how the plan is intended to integrate with the goals, objectives, and activities of the NCR jurisdictions' strategies and with their plans to implement the strategy. An appendix dedicated to the plan's alignment with national and state strategic plans lays out how the NCR's strategic plan aligns with related federal, state, and local strategies, programs and budgets, and emergency plans. The appendix states that the aim of the NCR strategic plan is to align regional strategic planning efforts with federal, state, and local planning efforts by identifying common goals, objectives, and initiatives to be implemented by the region. In addition, it says the strategic plan provides a framework by which state and local entities can plan, resource, and track priority homeland security-related programs and budgets.

The NCR faces a significant challenge coordinating federal, state, local, and regional authorities for domestic preparedness activities.
Due to the size and complexity of the NCR, coordination with relevant jurisdictions may confront challenges related to, among other things, different organizational cultures, varying procedures and work patterns among organizations, and a lack of communication between departments and agencies. A well-defined, comprehensive homeland security strategic plan for the NCR is essential for effectively coordinating investments in capabilities to address the risks that the region faces, and our preliminary observations are that the 2010 Strategic Plan was comprehensively developed. However, we have previously noted that strategies themselves are not endpoints, but rather, starting points. As with any strategic planning effort, implementation is the key. The ultimate measure of value for a strategy is how useful it is as guidance for policymakers and decisionmakers in allocating resources and balancing priorities. The extent to which the plan will be implemented effectively remains to be seen. We will continue to monitor this as part of our ongoing work.

Chairmen Akaka and Pryor, Ranking Members Johnson and Paul, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee may have at this time.

For further information about this statement, please contact William O. Jenkins, Jr., Director, Homeland Security and Justice Issues, at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contact named above, the following individuals from GAO's Homeland Security and Justice Team also made key contributions to this testimony: Chris Keisling, Assistant Director; Kathryn Godfrey, Susana Kuebler, David Lysy, Linda Miller, and Tracey King.
The National Capital Region (NCR) is a complex multijurisdictional area comprising the District of Columbia and surrounding counties and cities in the states of Maryland and Virginia (as shown in figure 1) and is home to the federal government, many national landmarks, and military installations. In addition to being the headquarters of all three branches of the federal government, the NCR receives more than 20 million tourists each year. The NCR is the fourth-largest metropolitan area in the United States and is also close to other densely populated areas, including Baltimore and Philadelphia. Those living and working in the NCR rely on a variety of critical infrastructure and key resources including transportation, energy, and water. The transportation system contains the nation's second-largest rail transit and fifth-largest bus systems. The intricate network of major highways and bridges serves the region's commuters and businesses, and the NCR also has two major airports within its borders. These attributes both heighten the threat and raise the consequences to the region in the event of human-caused incidents. An incident caused by any hazard could result in catastrophic human, political, and economic harm to the region, as well as the entire nation.

The Homeland Security Act established the Office of National Capital Region Coordination (NCRC) within the Department of Homeland Security. The NCRC is responsible for overseeing and coordinating federal programs for and relationships with state, local, and regional authorities in the NCR and for assessing and advocating for the resources needed by state, local, and regional authorities in the NCR to implement efforts to secure the homeland, among other things.
One of the NCRC mandates is to coordinate with federal, state, local, and regional agencies and the private sector in the NCR to ensure adequate planning, information sharing, training, and execution of domestic preparedness activities among these agencies and entities. Figure 2, below, depicts the NCR organizational structure.

GAO product: Homeland Security: Management of First Responder Grants in the National Capital Region Reflects the Need for Coordinated Planning and Performance Goals, GAO-04-433 (Washington, D.C.: May 28, 2004)

Findings and recommendations: NCR faced several challenges organizing and implementing efficient and effective regional preparedness programs. These challenges included the lack of a coordinated strategic plan, performance standards, and reliable, centrally sourced data on funds available and the purposes for which they were spent. We concluded that, without these basic elements, it would be difficult to assess first-responder capacities, identify first-responder funding priorities, and evaluate the effective use of federal funds to enhance first-responder capacities and preparedness. We recommended, for example, that the Secretary of Homeland Security work with local National Capital Region (NCR) jurisdictions to develop a coordinated strategic plan to establish goals and priorities. The Department of Homeland Security (DHS) generally agreed with our recommendations, and NCR finalized its first strategic plan in 2006.

GAO products: Homeland Security: Effective Regional Coordination Can Enhance Emergency Preparedness, GAO-04-1009 (Washington, D.C.: Sept. 15, 2004); Homeland Security: Managing First Responder Grants to Enhance Emergency Preparedness in the National Capital Region, GAO-05-889T (Washington, D.C.: July 14, 2005)

Findings and recommendations: The characteristics of effective regional coordination we previously identified were applicable to the NCR's efforts to coordinate emergency preparedness.
We noted that, if implemented as planned and as observed in its early stage, the NCR's Urban Area Security Initiative (UASI) program would include a collaborative regional organization. While we remained concerned that the NCR did not include a full array of homeland-security grants in its planning, we reported that the NCR's UASI program planned to address those issues by identifying non-UASI funding sources and collecting information about the funding allocations, expenditures, and purposes, as well as data on spending by NCR jurisdiction. NCR is currently planning to implement a process to help ensure identification of other funding resources.

In this statement, we reported on the implementation of the recommendations from our May 2004 report. DHS was working with the NCR jurisdictions to develop a coordinated strategic plan. At that time, we identified the need for NCR to gather data regarding the funding available and used for implementing the plan and enhancing first-responder capabilities in the NCR--data that were not routinely available. We reported that such data would allow DHS to implement and monitor the future plan, identify and address preparedness gaps, and evaluate the effectiveness of expenditures by conducting assessments based on established guidelines and standards. We remained concerned that no systematic gap analysis had been completed for the region. We noted that the NCR planned to complete an effort to use the Emergency Management Accreditation Program (EMAP) as a means of conducting a gap analysis and assessing NCR jurisdictions against EMAP's national preparedness standards. Since we last reported, the District of Columbia has received its EMAP accreditation.

GAO product: Homeland Security: The Status of Strategic Planning in the National Capital Region, GAO-06-559T (Washington, D.C.: Mar. 29, 2006)

Findings and recommendations: At the time of this report, a completed NCR strategic plan was not yet available.
We identified five areas that would be important for the NCR as it completed a strategic plan. Specifically, we reported that a well-defined, comprehensive strategic plan for the NCR was essential for assuring that the region is prepared for the risks it faces and that the NCR could focus on strengthening (1) initiatives that will accomplish objectives under the NCR strategic goals, (2) performance measures and targets that indicate how the initiatives will accomplish identified strategic goals, (3) milestones or time frames for initiative accomplishment, (4) information on resources and investments for each initiative, and (5) organizational roles, responsibilities, and coordination and integration and implementation plans.

GAO product: Homeland Security: Assessment of the National Capital Region Strategic Plan, GAO-06-1096T (Washington, D.C.: Sept. 28, 2006)

Findings and recommendations: We concluded that the 2006 NCR strategic plan included all six characteristics we consider desirable for a regional homeland-security strategy. To illustrate, the plan includes regional priorities and presents the rationale for the goals and related objectives and initiatives. However, we noted that the substance of the information within these six characteristics could be improved to guide decision makers.

We previously outlined a set of desirable characteristics for strategies involving complex endeavors that require coordination and collaboration among multiple entities. The desirable characteristics are presented in table 1, along with a brief description and the benefit of each characteristic.

Goal: Ensure Interoperable Communications Capabilities
Ensure response partners have the ability to transmit and receive voice, data, and video communications.
Initiatives:
Increase access to voice systems capable of transmitting and receiving voice information to and from National Capital Region (NCR) response partners.
Ensure response partners can communicate and share necessary, appropriate data in all environments and on a day-to-day basis.
Develop and maintain secure data communications governed by common standards and operating procedures.
Share Computer Aided Dispatch data between jurisdictions and other related data systems to streamline the process of capturing 911 information and responding to incidents.
Share Geographic Information System data between jurisdictions and other related data systems.
Ensure response partners can communicate and share necessary, appropriate video information in all environments on a day-to-day basis.
Increase access to video systems capable of transmitting and receiving video information to and from NCR response partners.

Goal: Enhance Information Sharing and Situational Awareness
Ensure NCR partners share the information needed to make informed and timely decisions; take appropriate actions; and communicate accurate, timely information with the public.
Ensure the public has all information necessary to make appropriate decisions and take protective actions.
Improve the dissemination of accurate, timely information to the public using multiple venues, including social media outlets, to ensure that the content of emergency messages and alerts is easily accessible and available to the public.
Define, obtain, and share appropriate situational information with NCR partners so that they have the necessary information to make informed decisions.
Define essential elements of data and information for situational awareness for each discipline and all partners in the NCR. Then develop, maintain, and utilize business practices and common technical standards for situational awareness in order to make informed decisions.
Improve the NCR's ability to collect, analyze, share, and integrate intelligence and law enforcement information so that NCR partners receive appropriate information.
Ensure all NCR fusion centers share information through secure and open systems, produce relevant and standardized analytical products, and share information in a timely manner with appropriate NCR partners.
Ensure NCR partners have the systems, processes, security clearances, tools, and procedures to access, gather, and share appropriate intelligence, law enforcement, and classified data.

Goal: Enhance Critical Infrastructure Protection
Enhance the protection and resilience of critical infrastructure and key resources (CI/KR) in the NCR to reduce their vulnerability to disruption from all-hazards events.
Objectives:
Understand and prioritize risks to CI/KR.
Catalog all CI/KR in the NCR and conduct consequence-of-loss analysis.
Conduct a comprehensive risk analysis of the NCR CI/KR, including a review of the critical systems upon which they depend and the interdependencies of those systems.
Develop and implement a plan for sharing CI/KR information among public and private entities throughout the NCR.
Reduce vulnerabilities and enhance resiliency of CI/KR.
Develop and implement sector vulnerability-reduction plans.
Conduct a technology-feasibility assessment and develop a plan for technology investments for CI/KR.
Develop and implement a cybersecurity plan for NCR critical systems.
Ensure continuity of critical services required during emergencies and disaster recovery.
Identify key facilities throughout the NCR that require backup critical services.
Assess facilities' plans for loss of critical services.
Promote broad participation in CI/KR community outreach and protection programs.
Develop a community-awareness training and education program.
Develop a strategy for using CI/KR data to inform law enforcement.
Establish a regional business information-sharing committee.
Monitor Critical Infrastructure to provide situational awareness and to promote rapid response.
Develop and implement a plan for a comprehensive CI/KR monitoring program.
Develop and implement a plan that integrates CI/KR monitoring information into response operations.

Goal: Ensure Development and Maintenance of Regional Core Capabilities
Develop and maintain the basic building blocks of preparedness and response by ensuring the NCR develops a baseline of capabilities including: Mass Casualty, Health Care System Surge, and Mass Prophylaxis; Mass Care and Evacuation; Citizen Participation, Alert, and Public Information; Chemical, Biological, Radiological, Nuclear, and Explosive Detection and Response; and Planning, Training, and Exercises.
Initiatives:
Ensure that private health care, federal, state, and local public health, and EMS programs and providers in the NCR can increase surge capacity to respond to mass-casualty incidents and events requiring mass prophylaxis.
Establish a regional monitoring and response system that allows for health and medical-response partners to track patients, hospital bed availability, alerts, and EMS/hospital activity in a shared, secure environment.
Ensure the ability to track patients from the start of pre-hospital care to discharge from the health-care system during both daily operations and mass-casualty incidents.
Improve the region's capacity to evacuate and provide mass care for the public, including special needs individuals, when impacted by an all-hazards event.
Develop, coordinate, and integrate local and state evacuation plans so that evacuation policies and routes complement each other to ensure the NCR's ability to coordinate evacuation across the region.
Ensure the NCR's ability to provide sheltering and feeding for the first 72 hours following an incident for individuals in the general population, persons with special needs, persons with special medical needs, and pets.
Strengthen individual, community, and workplace preparedness for emergency events through public engagement and citizen participation designed to reach the general population and special needs citizens in response to and recovery from all-hazards events.
Sustain the NCR's ability to alert and warn residents, businesses, and visitors using multiple methods including social media.
Bolster recruitment, management, and retention of volunteers through Community Emergency Response Team, other citizen corps programs, Volunteer Organizations Active in Disaster member agencies, the Medical Reserve Corps, and registration in Emergency System for Advance Registration of Volunteer Health Professionals programs.
Initiatives:
Ensure post-incident human services and recovery assistance throughout the NCR including case management, emergency housing, behavioral health, spiritual care, and family reunification.
Ensure the NCR has region-wide capacity to detect, respond, and recover in a timely manner from CBRNE events and other attacks requiring tactical response and technical rescue.
Enhance the NCR's ability to detect chemical, biological, radiological, and other types of contamination.
Ensure region-wide access to Type 1 hazardous material (HazMat), bomb response/Explosive Ordnance Device units, and tactical teams and ensure each unit/team is able to respond in a reasonable amount of time.
Ensure all responders in the NCR have access to Personal Protective Equipment, equipment, and apparatus that match the identified capability needs.
Establish a regional monitoring and response system that provides health and medical-response partners with central access to biosurveillance.
Improve capacity to develop and coordinate plans among all NCR partners and ensure the availability of region-wide training and exercise programs to strengthen preparedness, response, and recovery efforts from all-hazards events.
Develop and exercise key regional emergency response and recovery plans.
Ensure regional procedures, memoranda of understanding, and mutual-aid agreements are in place to allow for rapid coordination of resources including health assets across jurisdictional boundaries.
Develop and update a matrix of training and exercises that meet Homeland Security Exercise and Evaluation Program standards needed to maintain core regional capabilities. This matrix should address new and emerging threats and concerns raised in gap analyses and after-action reports from events and exercises.

Although the specific elements needed for situational awareness vary according to the field and area of expertise, the term "situational awareness" in the 2010 strategic plan refers to the ability to identify, monitor, and process important information, understand the interrelatedness of that information and its implications, and apply that understanding to make critical decisions in the present and near future. For example, if the region is threatened by a hurricane, awareness of the status of roads, shelters, traffic, available medical resources, power outages, and the like is important in making decisions about what type of assistance is needed and where it is needed. To coordinate an effective response, NCR partners need to share their information and have access to the information of others.

The NCR fusion centers include the Maryland Coordination and Analysis Center, the Washington Regional Threat and Analysis Center, the NCR Intelligence Center, and the Virginia Fusion Center. A fusion center is a physical location where data can be collected from a variety of sources, including but not limited to police departments, fire departments, health departments, and the private sector. Experts analyze the incoming information and create intelligence products, which can be used to maximize resources, streamline operations, and improve the ability to address all-hazards incidents and threats.
Fusion centers help to prevent terrorism and criminal activities and support preparedness for man-made and natural hazards, enabling quick and effective response to all-hazards events.

Critical services are defined as life-sustainment services during an emergency and include energy (electric power and gas), water supply, transportation, food, and communications. These are all supplied routinely by the CI/KR sectors. During a disaster, providing critical life-sustaining services ensures that government and private health, safety, and emergency services continue, and that plans are in place to compensate for losses among interdependent systems.
This testimony discusses the status of efforts to enhance emergency preparedness in the National Capital Region (NCR). The NCR is a partnership among the District of Columbia, the State of Maryland, the Commonwealth of Virginia, area local governments, the Department of Homeland Security's (DHS) Office for National Capital Region Coordination (NCRC) within the Federal Emergency Management Agency (FEMA), and nonprofit organizations and private sector interests. The partnership aims to help the region prepare for, prevent, protect against, respond to, and recover from "all-hazards" threats or events. Gridlock and hazardous conditions during recent events like the January 26, 2011, snow and ice storm and the August 23, 2011, earthquake demonstrate the importance of regional communication and coordination in the NCR and that challenges remain. Well-crafted and executed operational plans are critical for effective response to emergencies, but sound strategic planning is also important. A coordinated strategy to establish and monitor the achievement of regional goals and priorities is fundamental to enhancing emergency preparedness and response capabilities in the NCR. We reported on this issue repeatedly from 2004 through 2006. This testimony focuses on the extent to which strategic planning for NCR preparedness is consistent with characteristics we have previously identified as desirable for strategies for complex undertakings, such as NCR preparedness. This statement is based on work we recently completed for Congress. The 2010 NCR strategic plan, when accompanied by its supporting documents--investment plans, work plans, and a Performance Management Plan--collectively referred to in this statement as the NCR strategy, is largely consistent with the six characteristics of a strategy that we advocated for complex homeland-security undertakings where multiple organizations must act together to achieve goals and objectives. 
However, neither the Performance Management Plan nor the investment plans have yet been finalized; decisions remain regarding how the NCR will conduct future regional risk assessments; and it is not clear that NCR has systematic processes in place to identify the full range of resources available to support its goals. Finally, it is important to keep in mind that strategies themselves are not endpoints, but rather, starting points. As with any strategic planning effort, implementation is the key. The ultimate measure of the 2010 NCR strategy's value is how useful it is as guidance for policymakers and decisionmakers in allocating resources and balancing priorities.
In 1986, the United States, the FSM, and the RMI entered into the original Compact of Free Association. The compact provided a framework for the United States to work toward achieving its three main goals: (1) to secure self-government for the FSM and the RMI, (2) to ensure certain national security rights for all of the parties, and (3) to assist the FSM and the RMI in their efforts to advance economic development and self-sufficiency. Under the original compact, the FSM and RMI also benefited from numerous U.S. federal programs, while citizens of both nations exercised their right under the compact to live and work in the United States as "nonimmigrants" and to stay for long periods of time. Although the first and second goals of the original compact were met, economic self-sufficiency was not achieved under the first compact. The FSM and the RMI became independent nations in 1978 and 1979, respectively, and the three countries established key defense rights, including securing U.S. access to military facilities on Kwajalein Atoll in the RMI through 2016. The compact's third goal was to be accomplished primarily through U.S. direct financial assistance to the FSM and the RMI that totaled $2.1 billion from 1987 through 2003. However, estimated FSM and RMI per capita GDP levels at the close of the compact did not exceed, in real terms, those in the early 1990s, although U.S. assistance had maintained income levels that were higher than the two countries could have achieved without support. In addition, we found that the U.S., FSM, and RMI governments provided little accountability over compact expenditures and that many compact-funded projects experienced problems because of poor planning and management, inadequate construction and maintenance, or misuse of funds. In 2003, the United States approved separate amended compacts with the FSM and RMI that (1) continue the defense relationship, including a new agreement providing U.S. 
military access to Kwajalein Atoll in the RMI through 2086; (2) strengthen immigration provisions; and (3) provide an estimated $3.6 billion in financial assistance to both nations from 2004 through 2023, including about $1.5 billion to the RMI (see app. I). The amended compacts identify the additional 20 years of grant assistance as intended to assist the FSM and RMI governments in their efforts to promote the economic advancement and budgetary self-reliance of their people. Financial assistance is provided in the form of annual sector grants and contributions to each nation's trust fund. The amended compacts and their subsidiary agreements, along with the countries' development plans, target the grant assistance to six sectors--education, health, public infrastructure, the environment, public sector capacity building, and private sector development--prioritizing two sectors, education and health. To provide increasing U.S. contributions to the FSM's and the RMI's trust funds, grant funding decreases annually and will likely result in falling per capita grant assistance over the funding period and relative to the original compact (see fig. 1). For example, in 2004 U.S. dollar terms, FSM per capita grant assistance will fall from around $1,352 in 1987 to around $562 in 2023, and RMI per capita assistance will fall from around $1,170 in 1987 to around $317 in 2023. Under the amended compacts, annual grant assistance is to be made available in accordance with an implementation framework that has several components (see app. II). For example, prior to the annual awarding of compact funds, the countries must submit development plans that identify goals and performance objectives for each sector. The FSM and RMI governments are also required to monitor day-to-day operations of sector grants and activities, submit periodic financial and performance reports for the tracking of progress against goals and objectives, and ensure annual financial and compliance audits. 
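The erosion of per capita assistance described above follows from three compounding mechanics: a fixed annual decrement in grant funding, only partial inflation indexation, and population growth. The sketch below illustrates the arithmetic; every input value is a hypothetical assumption chosen for illustration, not an actual compact amount.

```python
# Illustrative sketch of why real per capita compact assistance falls over
# the funding period. All input values are hypothetical assumptions, not
# actual compact figures.

def project_assistance(base_grant, decrement, inflation, adj_share,
                       population, pop_growth, years):
    """Project real per capita grant assistance year by year.

    The nominal grant falls by a fixed decrement each year and is only
    partially adjusted for inflation (adj_share of the inflation rate),
    so its real value erodes even faster than the decrement alone implies.
    """
    path = []
    nominal, price_level, pop = base_grant, 1.0, population
    for _ in range(years):
        path.append(round(nominal / price_level / pop, 2))
        nominal = nominal * (1 + adj_share * inflation) - decrement
        price_level *= 1 + inflation
        pop *= 1 + pop_growth
    return path

# Hypothetical inputs: a $35 million grant with a $0.5 million annual
# decrement, 3% inflation with a two-thirds partial adjustment, and a
# population of 55,000 growing 1.5% per year, over a 20-year horizon.
path = project_assistance(35e6, 0.5e6, 0.03, 2 / 3, 55_000, 0.015, 20)
print(path[0], path[-1])  # real per capita assistance declines every year
```

Under these assumptions the real per capita figure falls in every year of the horizon, because the partial inflation adjustment alone leaves nominal growth below the combined growth of prices and population.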
In addition, the U.S. and FSM Joint Economic Management Committee (JEMCO) and the U.S. and RMI Joint Economic Management and Financial Accountability Committee (JEMFAC) are to approve annual sector grants and evaluate the countries' management of the grants and their progress toward compact goals. The amended compacts also provide for the formation of FSM and RMI trust fund committees to, among other things, hire money managers, oversee the respective funds' operation and investment, and provide annual reports on the effectiveness of the funds. The RMI economy shows limited potential for developing sustainable income sources other than foreign assistance to offset the annual decline in U.S. compact grant assistance. In addition, the RMI has not enacted economic policy reforms needed to improve its growth prospects. The RMI's economy shows continued dependence on government spending of foreign assistance and limited potential for expanded private sector and remittance income. Since 2000, the estimated public sector share of GDP has grown, with public sector expenditure in 2005--about two-thirds of which is funded by external grants--accounting for about 60 percent of GDP. The RMI's government budget is characterized by limited tax revenue paired with growing government payrolls. For example, RMI taxes have consistently provided less than 30 percent of total government revenue; however, payroll expenditures have roughly doubled, from around $17 million in 2000 to around $30 million in 2005. The RMI development plan identifies fishing and tourism as key potential private sector growth industries. However, the two industries combined currently provide less than 5 percent of employment, and both industries face significant constraints to growth that stem from structural barriers and a costly business environment. 
According to economic experts, growth in these industries is limited by factors such as geographic isolation, lack of tourism infrastructure, inadequate interisland shipping, a limited pool of skilled labor, and a growing threat of overfishing. Although remittances from emigrants could provide increasing monetary support to the RMI, evidence suggests that RMI emigrants are currently limited in their income-earning opportunities abroad owing to inadequate education and vocational skills. For example, the 2003 U.S. census of RMI migrants in Hawaii, Guam, and the Commonwealth of the Northern Mariana Islands reveals that only 7 percent of those 25 years and older had a college degree and almost half of RMI emigrants lived below the poverty line. Although the RMI has undertaken efforts aimed at economic policy reform, it has made limited progress in implementing key tax, land, foreign investment, and public sector reforms that are needed to improve its growth prospects. For example: The RMI government and economic experts have recognized for several years that the RMI tax system is complex and regressive, taxing on a gross rather than net basis and having weak collection and administrative capacity. Although the RMI has focused on improving tax administration and has raised some penalties and tax levels, legislation for income tax reform has failed and needed changes in government import tax exemptions have not been addressed. In attempts to modernize a complex land tenure system, the RMI has established land registration offices. However, such offices have lacked a systematic method for registering parcels, instead waiting for landowners to voluntarily initiate the process. For example, only five parcels of land in the RMI had been, or were currently being, registered as of June 2006. Continued uncertainties over land ownership and land values create costly disputes, disincentives for investment, and problems regarding the use of land as an asset. 
Economic experts and private sector representatives describe the overall climate for foreign investment in the RMI as complex and nontransparent. Despite attempts to streamline the process, foreign investment regulations remain relatively burdensome, with reported administrative delays and difficulties in obtaining permits for foreign workers. The RMI government has endorsed public sector reform; however, efforts to reduce public sector employment have generally failed, and the government continues to conduct a wide array of commercial enterprises that require subsidies and compete with private enterprises. As of June 2006, the RMI had not prepared a comprehensive policy for public sector enterprise reform. Although the RMI development plan includes objectives for economic reform, until August 2006--two years into the amended compact--JEMFAC did not address the country's slow progress in implementing these reforms. The RMI has allocated funds to priority sectors, although several factors have hindered its use of the funds to meet long-term development needs. Further, despite actions taken to effectively implement compact grants, administrative challenges have limited its ability to ensure use of the grants for its long-term goals. In addition, although OIA has monitored early compact activities, it has also faced capacity constraints. The RMI allocated compact funds largely to priority sectors for 2004-2006. The RMI allocated about 33 percent, 40 percent, and 20 percent of funds to education, infrastructure, and health, respectively (see fig. 2). The education allocation included funding for nine new school construction projects, initiated in October 2003 through July 2006. However, various factors, such as land use issues and inadequate needs assessments, have limited the government's use of compact funds to meet long-term development needs. For example: Management and land use issues. 
The RMI government and Kwajalein landowners have been disputing the management of public entities and government use of leased land on the atoll. Such tensions have negatively affected the construction of schools and other community development initiatives. For example, the government and landowners disagreed about the management of the entity designated to use the compact funds set aside for Ebeye special needs; consequently, about $3.3 million of the $5.8 million allocated for this purpose had not been released for the community's benefit until after September 2006. In addition, although the RMI has completed some infrastructure projects where land titles were clear and long-term leases were available, continuing uncertainty regarding land titles may delay future projects. Lack of planning for declining U.S. assistance. Despite the goal of budgetary self-reliance, the RMI lacks concrete plans for addressing the annual decrement in compact funding, which could limit its ability to sustain current levels of government services in the future. RMI officials told us that they can compensate for the decrement in various ways, such as through the yearly partial adjustment for inflation provided for in the amended compacts or through improved tax collection. However, the partial nature of the adjustment causes the value of the grant to fall in real terms, independent of the decrement, thereby reducing the government's ability to pay over time for imports, such as energy, pharmaceutical products, and medical equipment. Additionally, the RMI's slow progress in implementing tax reform will limit its ability to augment tax revenues. The RMI has taken steps to effectively implement compact assistance, but administrative challenges have hindered its ability to ensure use of the funds for its long-term development goals. The RMI established development plans that include strategic goals and objectives for the sectors receiving compact funds. 
Further, in addition to establishing JEMFAC, the RMI designated the Ministry of Foreign Affairs as its official contact point for compact policy and grant implementation issues. However, data deficiencies, report shortcomings, capacity constraints, and inadequate communication have limited the RMI and U.S. governments' ability to consistently ensure the effective use of grant funds, measure progress, and monitor day-to-day activities. Data deficiencies. Although the RMI established performance measurement indicators, a lack of complete and reliable data has prevented the use of these indicators to assess progress. For example, the RMI submitted data to JEMFAC for only 15 of the 20 required education performance indicators in 2005, repeating the submission in 2006 without updating the data. Also, in 2005, the RMI government reported difficulty in comparing the health ministry's 2004 and 2005 performance owing to gaps in reported data--for instance, limited data were available in 2004 for the outer island health care system. Report shortcomings. The usefulness of the RMI's quarterly performance reports has also been limited by incomplete and inaccurate information. For example, the RMI Ministry of Health's 2005 fourth-quarter report contained incorrect outpatient numbers for the first three quarters, according to a hospital administrator. Additionally, we found several errors in basic statistics in the RMI quarterly reports for education, and RMI Ministry of Education officials and officials in other sectors told us that they had not been given the opportunity to review the final performance reports compiled by the statistics office prior to submission. Capacity constraints. Staff and skill limitations have constrained the RMI's ability to provide day-to-day monitoring of sector grant operations. However, the RMI has submitted its single audits on time. 
In addition, although the single audit reports for 2004 and 2005 indicated weaknesses in the RMI's financial statements and compliance with requirements of major federal programs, the government has developed corrective action plans to address the 2005 findings related to such compliance. Lack of communication. Our interviews with U.S. and RMI department officials, private sector representatives, NGOs, and economic experts revealed a lack of communication and dissemination of information by the U.S. and RMI governments on issues such as JEMFAC decisions, departmental budgets, economic reforms, legislative decisions, and fiscal positions of public enterprises. Such lack of information about government activities creates uncertainty for public, private, and community leaders, which can inhibit grant performance and improvement of social and economic conditions. As administrator of the amended compact grants, OIA monitored sector grant and fiscal performance, assessed RMI compliance with compact conditions, and took action to correct persistent shortcomings. For example, since 2004, OIA has provided technical advice and assistance to help the RMI improve the quality of its financial statements and develop controls to resolve audit findings and prevent recurrences. However, OIA has been constrained in its oversight role owing to staffing challenges and time-consuming demands associated with early compact implementation challenges in the FSM. Market volatility and choice of investment strategy could lead to a wide range of RMI trust fund balances in 2023 (see app. III) and potentially prevent trust fund disbursements in some years. Although the RMI has supplemented its trust fund balance with additional contributions, other sources of income are uncertain or entail risks. Furthermore, the RMI's trust fund committee has faced challenges in effectively managing the fund's investment. 
Market volatility and investment strategy could have a considerable impact on projected trust fund balances in 2023. Our analysis indicates that, under various scenarios, the RMI's trust fund could fall short of the maximum allowed disbursement level--an amount equal to the inflation-adjusted compact grants in 2023--after compact grants end, with the probability of shortfalls increasing over time (see fig. 3). For example, under a moderate investment strategy, there is only around a 10 percent probability that the fund's income will fall short of the maximum distribution by 2031. However, this probability rises to almost 40 percent by 2050. Additionally, our analysis indicates a positive probability that the fund will yield no disbursement in some years; under a moderate investment strategy the probability is around 10 percent by 2050. Despite the impact of market volatility and investment strategy, the trust fund committee's reports have not yet assessed the fund's potential adequacy for meeting the RMI's long-term economic goals. RMI trust fund income could be supplemented from several sources, although this potential is uncertain. For example, the RMI received a commitment from Taiwan to contribute $40 million over 20 years to the RMI trust fund, which improved the RMI fund's likely capacity for disbursements after 2023. However, the RMI's limited development prospects constrain its ability to raise tax revenues to supplement the fund's income. Securitization--issuing bonds against future U.S. contributions--could increase the fund's earning potential by raising its balances through bond sales. However, securitization could also lead to lower balances and reduced fund income if interest owed on the bonds exceeds investment returns. The RMI trust fund committee has experienced management challenges in establishing the trust fund to maximize earnings. 
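GAO's projections were produced with a simulation model (see app. III) whose specifics are not reproduced in this excerpt. The sketch below is therefore a generic Monte Carlo illustration, under assumed return, volatility, contribution, and disbursement figures, of how market volatility turns a single investment strategy into a distribution of outcomes and a shortfall probability; it is not GAO's actual model.

```python
import random

def shortfall_probability(balance, contribution, target_disbursement,
                          mean_return, volatility, years,
                          trials=10_000, seed=42):
    """Estimate the probability that annual fund income falls short of a
    target disbursement once contributions end.

    Generic Monte Carlo sketch with normally distributed annual returns;
    all parameters are illustrative assumptions, not compact figures.
    """
    rng = random.Random(seed)
    shortfalls = 0
    for _ in range(trials):
        bal = balance
        for _ in range(years):
            bal *= 1 + rng.gauss(mean_return, volatility)
            bal += contribution  # annual contribution during the grant period
        # After the accumulation phase, does expected income cover the target?
        if bal * mean_return < target_disbursement:
            shortfalls += 1
    return shortfalls / trials

# Hypothetical inputs: $30M starting balance, $15M contributed annually for
# 17 years, a $25M disbursement target, 6% mean return, 12% volatility.
p = shortfall_probability(30e6, 15e6, 25e6, 0.06, 0.12, 17)
print(f"Estimated shortfall probability: {p:.0%}")
```

Raising the assumed volatility (a more aggressive strategy) widens the distribution of terminal balances, which is why strategy choice alone can move the shortfall probability substantially even with the same mean return.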
Contributions to the trust fund were initially placed in a low-interest savings account and were not invested until 16 months after the initial contribution. As of June 2007, the RMI trust fund committee had not appointed an independent auditor or a money manager to invest the fund according to the proposed investment strategy. U.S. government officials suggested that contractual delays and committee processes for reaching consensus and obtaining administrative support contributed to the time taken to establish and invest funds. As of May 2007, the committee had not yet taken steps to improve these processes. Since enactment of the amended compacts, the U.S. and RMI governments have made efforts to meet new requirements for implementation, performance measurement, and oversight. However, the RMI faces significant challenges in working toward the compact goals of economic advancement and budgetary self-reliance as the compact grants decrease. Largely dependent on government spending of foreign aid, the RMI has limited potential for private sector growth, and its government has made little progress in implementing reforms needed to increase investment opportunities and tax income. In addition, JEMFAC did not address the pace of reform during the first 2 years of compact implementation. Further, both the U.S. and RMI governments have faced significant capacity constraints in ensuring effective implementation of grant funding. The RMI government and JEMFAC have also shown limited commitment to strategically planning for the long-term, effective use of grant assistance or for the budgetary pressure the government will face as compact grants decline. Because the trust fund's earnings are intended as a main source of U.S. assistance to the RMI after compact grants end, the fund's potential inadequacy to provide sustainable income in some years could impact the RMI's ability to provide government services. 
However, the RMI trust fund committee has not assessed the potential status of the fund as an ongoing source of revenue after compact grants end in 2023. Our prior reports on the amended compacts include recommendations that the Secretary of the Interior direct the Deputy Assistant Secretary for Insular Affairs, as chair of the RMI management and trust fund committees, to, among other things, ensure that JEMFAC address the lack of RMI progress in implementing reforms to increase investment and tax income; coordinate with other U.S. agencies on JEMFAC to work with the RMI to establish plans to minimize the impact of declining assistance; coordinate with other U.S. agencies on JEMFAC to work with the RMI to fully develop a reliable mechanism for measuring progress toward compact goals; and ensure the RMI trust fund committee's assessment and timely reporting of the fund's likely status as a source of revenue after 2023. Interior generally concurred with our recommendations and has taken actions in response to several of them. For example, in August 2006, JEMFAC discussed the RMI's slow progress in implementing economic reforms. Additionally, the trust fund committee decided in June 2007 to create a position for handling the administrative duties of the fund. Regarding planning for declining assistance and measuring progress toward compact goals, JEMFAC has not held an annual meeting since the December 2006 publication of the report containing those recommendations. Mr. Chairman and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For future contacts regarding this testimony, please call David Gootnick at (202) 512-3149 or [email protected]. Individuals making key contributions to this testimony included Emil Friberg, Jr., Ming Chen, Tracy Guerrero, Julie Hirshen, Leslie Holen, Reid Lowe, Mary Moutsos, Kendall Schaefer, and Eddie Uyekawa. 
[Table: FSM and RMI annual grant and trust fund amounts, by compact section (sections 211, 212, 215, and 216); footnotes below.] For both the FSM and the RMI, annual grant amounts include $200,000 to be provided directly by the Secretary of the Interior to the Department of Homeland Security, Federal Emergency Management Agency, for disaster and emergency assistance purposes. The grant amounts do not include the annual audit grant, capped at $500,000, that will be provided to both countries. These dollar amounts shall be adjusted each fiscal year for inflation by the percentage that equals two-thirds of the percentage change in the U.S. gross domestic product implicit price deflator, or 5 percent, whichever is less in any one year, using the beginning of 2004 as a base. Grant funding can be fully adjusted for inflation after 2014, under certain U.S. inflation conditions. "Kwajalein Impact" funding is provided to the RMI government, which in turn compensates Kwajalein Atoll landowners for U.S. access to the atoll for military purposes. [Figure: compact grant implementation framework (app. II). Recoverable content: the FSM and RMI propose grant budgets for each sector that include performance goals and specific performance indicators, breakdowns of personnel expenditures and other costs, and information on U.S. federal programs and other donors; the United States evaluates the proposed sector grant budgets for consistency with funding requirements in the compact and related agreements; the countries monitor grant operations to ensure compliance with grant conditions, identify events that accelerate performance outcomes and problems encountered and their impact on grant activities, and submit annual reports to the United States.] This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2003, the U.S. government extended its economic assistance to the Republic of the Marshall Islands (RMI) through an Amended Compact of Free Association. From 2004 to 2023, the United States will provide an estimated $1.5 billion to the RMI, with annually decreasing grants as well as increasing contributions to a trust fund. The assistance, targeting six sectors, is aimed at assisting the country's efforts to promote economic advancement and budgetary self-reliance. The trust fund is to be invested and provide income for the RMI after compact grants end. The Department of the Interior (Interior) administers and oversees this assistance. Drawing on prior GAO reports (GAO-05-633, GAO-06-590, GAO-07-163, GAO-07-513, GAO-07-514R), this testimony discusses (1) the RMI's economic prospects, (2) implementation of the amended compact to meet long-term goals, and (3) potential trust fund earnings. In conducting its prior work, GAO visited the RMI, reviewed reports, interviewed officials and experts, and used a simulation model to project the trust fund's income. Prior GAO reports recommended, among other things, that Interior work with the RMI to address lack of progress in implementing reforms; plan for declining grants; reliably measure progress; and ensure timely reporting on the fund's likely status as a source of revenue after 2023. Interior agreed with GAO's recommendations. The RMI has limited prospects for achieving its long-term development goals and has not enacted policy reforms needed to achieve economic growth. The RMI economy depends on public sector spending of foreign assistance rather than on private sector or remittance income. At the same time, the two private sector industries identified as having growth potential--fisheries and tourism--face significant barriers to expansion because of a costly business environment. RMI emigrants also lack marketable skills needed to increase revenue from remittances. 
Despite declining grants under the compact, RMI progress in implementing key policy reforms to improve the private sector environment, such as tax or land reform, has been slow. In August 2006, the RMI's compact management committee began to address the country's slow progress in implementing reforms. Although the RMI has made progress in implementing compact assistance, it faces several challenges in allocating and using this assistance to support its long-term development goals. RMI grant allocations have reflected compact priorities by targeting health, education, and infrastructure. However, political disagreement over land use and management of public entities has negatively affected infrastructure projects. The RMI also has not planned for long-term sustainability of services that takes into account declining compact assistance. Inadequate baseline data and incomplete performance reports have further limited the RMI's ability to adequately measure progress. Although single-audit reporting has been timely, insufficient staff and skills have limited the RMI's ability to monitor day-to-day sector grant operations. Interior's Office of Insular Affairs (OIA) has conducted administrative oversight of the sector grants but has been constrained by competing oversight priorities. The RMI trust fund may not provide sustainable income for the country after compact grants end. Market volatility and the choice of investment strategy could cause the RMI trust fund balance to vary widely, and there is increasing probability that in some years the trust fund will not reach the maximum disbursement level allowed--an amount equal to the inflation-adjusted compact grants in 2023--or be able to disburse any income. In addition, although the RMI has supplemented its trust fund income with a contribution from Taiwan, other sources of income are uncertain or entail risk. 
Trust fund management processes have also been problematic; as of June 2007, the RMI trust fund committee had not appointed an independent auditor or a money manager to invest the fund according to the proposed investment strategy.
As stated in IRS's fiscal year 2016 collection program letter, the collection program's mission is to collect delinquent taxes and secure delinquent tax returns through the fair and equitable application of the tax laws, including the use of enforcement tools when appropriate and providing education to taxpayers to facilitate future compliance. As we have previously reported, IRS's collection program largely uses automated processes to categorize unpaid tax or unfiled tax return cases and send them to a collection phase to be potentially selected for collection activities. The automated Inventory Delivery System (IDS) categorizes and routes cases based on many factors, such as type of tax and amount owed. As shown in figure 1, IDS analyzes cases to identify and filter out cases that should not be pursued further (shelved) and determine whether cases should be sent to either the telephone phase (the Automated Collection System, or ACS) or the in-person phase (Field Collection) for potential selection. Through IDS routing, the Field Collection program generally makes the first effort to enforce filing and payment requirements for higher-priority cases that are not resolved by sending notices. The Field Collection program is also used to enforce compliance for lower-priority cases left unresolved by ACS's efforts. The Field Collection program is organized to make direct contact with individuals and business officials to enforce tax filing and payment requirements. The program divides the United States into seven areas. Each area is run by an area director who reports to the Director of Field Collection. Each area is typically divided into six to eight territories, each headed by a territory manager. Each territory, on average, contains six groups that are run by group managers. Group managers directly oversee an average of eight revenue officers. 
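The IDS routing described above can be summarized as a simple decision function. The field names and dollar threshold below are hypothetical placeholders for illustration; the actual system weighs many more factors, such as type of tax, amount owed, and case history.

```python
# Simplified sketch of the Inventory Delivery System's routing decision.
# Field names and the dollar threshold are hypothetical; they are not
# IRS's actual criteria.

def route_case(case: dict) -> str:
    """Return the collection phase a delinquency case is sent to."""
    if not case.get("collectible", True):
        return "shelved"           # filtered out; not pursued further
    if case["balance_due"] >= 100_000 or case.get("high_priority"):
        return "field_collection"  # in-person contact by a revenue officer
    return "acs"                   # automated telephone contact first

assert route_case({"balance_due": 250_000}) == "field_collection"
assert route_case({"balance_due": 5_000}) == "acs"
assert route_case({"balance_due": 5_000, "collectible": False}) == "shelved"
```

The sketch mirrors the report's three outcomes: shelving, routing to ACS for telephone contact, and routing higher-priority cases to Field Collection.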
Cases sent to the Field Collection program for potential selection are generally identified by the taxpayer's ZIP code and aligned with Field Collection program groups around the nation, each of which works cases in a set of ZIP codes in its geographic proximity. Group managers select and assign collection cases to revenue officers for resolution. Revenue officers are generally assigned to work cases in designated ZIP codes handled by the group. Cases are removed from Field Collection's inventory of cases for potential selection when they are assigned to a revenue officer for resolution, are shelved, or expire under statute of limitations laws. Unless cases sent to the Field Collection program are assigned to a revenue officer for collection work, delinquent taxpayers may not receive contact from IRS to attempt to resolve the delinquency aside from annual reminder notices. Since 2010, Field Collection staff have been reduced by 50 percent from a 2010 high of 7,268 full-time equivalents (FTE), as shown in figure 2. Field Collection revenue officers have consistently closed fewer cases each year since a high in fiscal year 2011, as shown in figure 3. In fiscal year 2015, more than 40 percent of closed cases were closed by shelving rather than a revenue officer working the case. The figure also shows that the year-end Field Collection inventory and queue has generally remained stable in recent years. Automated systems classify collection cases into a hierarchy of five priority levels, as shown in figure 4. The priority levels are divided into two files--the group hold file and the group queue. Collection cases in the group hold file are generally considered the highest priority and are the first cases group managers evaluate for assignment. 
These cases are considered mandatory because group managers typically are required to evaluate whether to assign these cases within 45 days to the first available and qualified revenue officer or document why the cases were not assigned by the deadline or were removed from the hold file. Unlike collection cases in other priority levels, group hold file cases require immediate evaluation for assignment or an explanation if they are not assigned. Group managers must even assign some mandatory cases in less than 45 days. For example, collection cases involving missed or lower-than-expected employment tax payments--known as federal tax deposit alerts within IRS--should be assigned within 7 days. Other mandatory collection cases include those involving IRS employees, transfers from other areas within Field Collection, and current cases where additional delinquent taxes have been assessed. The group queue contains the other four priority levels' collection cases-- accelerated high, high, medium, and low. The automated system assigns these priorities based on a number of criteria including the balance due amount, return type, tax year of the case, and last return amount. Accelerated high priority collection cases--second priority in selection consideration--are cases that IRS has determined are among the most important to pursue and group managers are generally expected to assign them from the queue first. Characteristics of cases in this category might include those with balances due greater than a selected high-dollar amount or individual delinquent taxpayers with income greater than a selected high amount. Non-accelerated high priority cases are third priority in selection consideration. Characteristics of these cases may include businesses with recent unpaid employment tax liabilities and those with balances due that fall into a range of selected high-dollar amounts. 
Characteristics of collection cases designated medium and low priority may include balances due within, or less than, a range of relatively moderate dollar amounts (in comparison to high priority cases) and certain case age parameters that IRS views as lower priority. Characteristics of low-priority cases include remaining cases that do not meet the criteria of higher-priority levels. IRS's automated systems send new cases weekly to group managers' hold files and queues. Group managers we met with explained that they sequentially review the hold file and queue cases at each priority level to take into account several case selection considerations. These considerations can include revenue officers' availability, including their geographic proximity to the taxpayer's location, since Field Collection activities often involve face-to-face interaction. Group managers also consider the characteristics of the cases available for assignment, such as whether a business is still active or operating thus increasing the potential for collectability (see figure 5). The automated systems determine the anticipated difficulty and appropriate category of revenue officer that can be assigned to a case based on the queue priority level and other characteristics of the case, such as complexity. These categories are based on the revenue officer's pay scale, which is aligned with the federal General Schedule (GS) pay system. Revenue officers in the Field Collection program generally are GS-9, 11, 12, or 13. This approach generally ensures that higher paid revenue officers with more experience are assigned the more challenging or complex cases. In most instances, group hold file and accelerated high-priority cases all must be assigned as soon as a revenue officer with the appropriate characteristics is available. However, IRS guidance provides group managers discretion to pass over these cases and select lower-priority cases when there are justifiable reasons or business needs. 
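The selection order described above, the group hold file first, then the queue by descending priority level, with discretion to bypass a case for a documented reason, can be sketched as follows. The data structures and bypass predicate are hypothetical illustrations, not IRS code.

```python
# Hypothetical sketch of the priority hierarchy used when a group manager
# picks the next case to assign; not IRS's actual implementation.

PRIORITY_ORDER = ["group_hold", "accelerated_high", "high", "medium", "low"]

def next_case(inventory, bypass=None):
    """Pick the next case to assign, honoring the priority hierarchy.

    inventory maps priority level -> list of case IDs; bypass is an
    optional predicate letting a manager skip a case for a justifiable,
    documented business reason.
    """
    bypass = bypass or (lambda case: False)
    for level in PRIORITY_ORDER:
        for case in inventory.get(level, []):
            if not bypass(case):
                return level, case
    return None  # nothing assignable at any priority level

inv = {"group_hold": [], "accelerated_high": ["A-1"], "high": ["H-1", "H-2"]}
assert next_case(inv) == ("accelerated_high", "A-1")
# A manager may pass over an accelerated case (e.g., assignment would be
# too burdensome), falling through to the next priority level.
assert next_case(inv, bypass=lambda c: c == "A-1") == ("high", "H-1")
```

The empty hold file in the example reflects that hold file cases, when present, would be evaluated before anything in the queue.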
For example, a group manager can bypass an accelerated high-priority case when, in the group manager's judgment, assignment of that case at the time would be too burdensome based on the size and complexity of the revenue officer's current caseload or when a revenue officer's current caseload has reached inventory levels prescribed in the Internal Revenue Manual. On March 10, 2016, when we received a snapshot of all assigned and unassigned cases in IRS's inventory management system, the majority of cases group managers had selected and assigned to revenue officers were accelerated high- and high-priority cases (see table 1). Likewise, the majority of unassigned cases were medium- and low-priority cases. Although IRS officials did not have historical data readily available to analyze and confirm, they agreed that the mix of cases that we observed on March 10, 2016, is likely typical, as the case selection process is geared toward selecting higher priority cases. The primary weakness we identified in our analysis of Field Collection case selection processes is a lack of clearly defined and measurable objectives that support the collection program's mission. According to federal internal control standards, objectives defined in clear and measurable terms are a foundation for improving accountability and providing necessary assurance that a program's mission will be achieved. The lack of clearly defined and communicated objectives also negatively impacts other aspects of Field Collection case selection processes that we believe are most relevant to assuring mission achievement. Specifically, the lack of clearly defined objectives directly impacts IRS's ability to effectively measure Field Collection performance, assess risks to the achievement of objectives, and assess the continued effectiveness of automated processes. Finally, we identified the lack of adequate procedures to guide group managers' use of judgment in selecting cases.
These deficiencies increase the risk that Field Collection case selections may not contribute to the program's mission as well as they otherwise could. Having program objectives clearly defined in measurable terms is a foundation that allows managers to take steps to assure a program achieves its mission, according to federal internal control standards. This includes selecting appropriate methods to communicate internally the necessary quality information to achieve program objectives. IRS guides Field Collection employees through a number of different channels, including: the Internal Revenue Manual (IRM), which is IRS's official compendium of personnel guidance; annual program letters; and occasional memos and e-mails. However, none of the communications we reviewed clearly defined the collection program or case selection objectives. For example, the IRM does not state the objectives of the Field Collection program or what role case selection plays in supporting achievement of those objectives. Similarly, although annual collection program letters to staff stated the program mission and listed distinct activities and case types to focus on in the fiscal year grouped under IRS strategic goals, they did not present clearly defined program or case selection objectives sufficient for purposes of internal control. The objectives are unclear in part because the terms are so general that they do not enable management to assess risks, establish control procedures, or link to related performance measures. An August 2013 email from the Director of Field Collection stated that group managers should select cases so that the mix of assigned cases mirrors what is available in the inventory. This guidance suggests a program objective but neither the e-mail nor any other guidance identifies it as such. The only IRS communication we obtained that identified program and case selection objectives was a document IRS provided to us in March 2016. 
According to IRS officials, the Collection program developed the document in response to prior recommendations we made in reviewing other aspects of collection case selection processes. However, as shown in table 2, our analysis of the document shows that it does not fully document and communicate program objectives, as recommended by federal internal control standards. The lack of clear and consistently communicated objectives was also evident in our focus group discussions with Field Collection managers. We asked managers to describe the objectives in choosing which case to assign to a given revenue officer. Participants provided a range of responses. For example, many participants identified an objective of assigning revenue officers a mix of cases that reflects the current inventory. IRS officials explained that the mix of cases refers to the ratios between cases where the taxpayer has a balance due versus those where the taxpayer has not filed a tax return. This case selection objective can also mean balancing the ratio of individual and business taxpayer cases so that the mix of assigned cases mirrors what is available for assignment. This principle reflects the guidance provided in the August 2013 email from the Director of Field Collection. Focus group participants also described productivity, or resource use, as an objective. For example, one participant said, "I look at cases that are going to be more productive rather than assigning old, inactive cases. The more productive cases are those cases that have come to Field Collection more recently or have more recent [collection assessments or unfiled returns]. The older cases are stale." In contrast, several focus group participants said that the program's automated prioritization system sometimes gives higher priority levels to cases that are older and may not be collectable, such as cases that have been assigned to ACS for a long time and have not been resolved.
Some participants also stated that balancing the revenue officer's workload was an objective. According to these participants, this involves looking at the number and complexity of the current assigned workload of a given revenue officer to ensure that the next case assigned does not overburden the officer. In a March 2016 email to staff, the Director of Collection defined fairness in the program as having three components: (1) fairness to the taxpaying public by pursuing those who fail to voluntarily comply, (2) an equitable process to select cases expected to best promote voluntary compliance and other apparent Collection goals or objectives, and (3) respect and adherence to policies and procedures that safeguard relevant taxpayer rights in the collection process. This effort to define fairness came in response to recommendations we made in reviewing other aspects of IRS's collection selection processes. While the effort demonstrates progress, our analysis of this email shows that it still does not meet applicable standards for clearly defining objectives and communicating them with methods appropriate for use in internal control, as detailed in table 3. Because of the shortcomings identified in table 3, IRS risks that employees implementing control procedures may not understand how fairness applies to their work. For example, territory and group managers in our focus groups offered a variety of opinions and perspectives of how to assure fairness in case selection. 
Specifically, when we asked focus group participants what fairness means to them and how they apply fairness in case selection, managers' responses included: avoiding conflicts of interest, such as cases where the group manager or revenue officer has a prior relationship with an individual or business; selecting cases with consideration of geography, such as to ensure there are no areas where taxpayers are in a "tax free zone;" and diversifying selections by type of business, selecting cases so that the Field Collection program provides broad coverage, cases selected are representative, and no one group of taxpayers is selected more than others. Our focus group discussions also showed that managers had inconsistent views on the meaning of fairness in case selection and that some may not fully understand how to apply fairness or believe the selection process precludes unfair selection. In half of the group manager focus groups, at least one participant said he did not know what the role of fairness is in case selection or did not consider fairness in assigning cases. Some also said that choosing any case for assignment would be fair because all of the cases represent noncompliance and the automated selection process fairly prioritizes cases for potential selection. According to IRS officials, they have not clearly defined Field Collection program and case selection objectives and fairness because they believe their efforts to define them in the document and email described above were sufficient. However, without clearly defined and clearly understood objectives aligned to the Field Collection mission, program management lacks reasonable assurance that case selection processes support achievement of IRS's mission, including applying tax law with integrity and fairness to all. 
The lack of clear and consistent objectives also impacts IRS's ability to measure program performance, assess risks to the program mission, and determine whether the automated processes used are still appropriate. We found that the Field Collection program tracks some case assignment and closure data. Specifically, Field Collection management compares open case inventory to a portion of the case inventory awaiting assignment. IRS officials, including managers in all eight of our focus groups, noted that they use case mix data to monitor or adjust case selections on a monthly basis to achieve this balance. Our analysis of Field Collection case data suggests that, overall at the national level, the program's mix of assigned cases is aligned--to some degree--with the available inventory by noncompliance type and taxpayer type, as shown in table 4. However, because the Field Collection program has not yet established clearly defined objectives and does not have related performance measures, it lacks a way to measure program performance effectively over time. Federal internal control standards state that measurable objectives allow management to assess program performance in achieving them. For example, if one of Field Collection's objectives was to achieve fairness and it defined fairness to include ensuring broad coverage of the taxpayer population in collection status, then the Field Collection program would need to establish measures to assess its achievement of this objective. Similarly, if a case selection objective was to assign cases so that the mix assigned to revenue officers reflects the Field Collection group inventory, then IRS would need to clearly link this objective to related performance measures to which staff were held accountable. We identified a number of potential data elements in the case selection system that could be helpful to IRS in developing such performance measures, as shown in table 5.
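One of the potential measures discussed above--how closely the mix of assigned cases mirrors the available inventory--could be computed along the following lines. The categories and the "share difference" formulation are our illustration, not an IRS measure.

```python
# Illustrative case-mix measure: for each category (e.g., taxpayer
# type or noncompliance type), the difference between its share of
# assigned cases and its share of the available inventory. A value
# near zero means the assigned mix mirrors the inventory.
from collections import Counter

def mix_deviation(assigned, available):
    """Return, per category, the assigned share minus the available
    share. Inputs are lists of category labels, one per case;
    the labels used here are hypothetical."""
    a, b = Counter(assigned), Counter(available)
    total_a, total_b = sum(a.values()), sum(b.values())
    return {c: round(a[c] / total_a - b[c] / total_b, 3)
            for c in set(a) | set(b)}
```

A performance measure built this way would still need a defined objective and tolerance (for example, "no category deviates by more than some stated share") before staff could be held accountable to it.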
We found that IRS currently has two approaches for assessing risks within the agency. These approaches are: Internal controls framework. The procedures in IRM 1.4.2 govern IRS's processes for monitoring and improving internal controls, which include identifying and mitigating risks. Managers are expected to understand the risks associated with their operations and ensure that controls are in place and operating properly to mitigate those risks. Enterprise Risk Management (ERM). ERM is broader in scope than internal controls, focusing on service-wide risks. ERM is intended to help the agency consider risk when setting strategy and determine how much risk it is willing to accept. IRS implemented ERM in February 2014 to alert IRS management to IRS-wide risks and to serve as an early-warning system to identify emerging challenges and address them before they affect operations. However, in order to use both of these approaches effectively to identify, analyze, and manage risk, IRS needs to have clearly defined, measurable objectives. Federal internal control standards state that effectively managing a program to achieve its mission involves comprehensively considering and assessing potential risks in the program's internal and external operating environments and establishing risk tolerances (the acceptable level of variation in performance relative to the achievement of objectives). Such tolerances are often stated in terms of performance measures, which allow performance assessment toward achieving objectives. Lacking clearly defined objectives and associated performance measures therefore hinders the Field Collection program's ability to effectively assess, identify, and address risks to the achievement of its mission. Without clearly defined objectives, risks to achieving those objectives cannot be identified and analyzed, nor can risk tolerances be determined.
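The internal control standards' definition of a risk tolerance--the acceptable level of variation in performance relative to an objective--reduces to a simple check once a measure and target exist. The values below are illustrative only.

```python
# Illustrative risk-tolerance check: performance is acceptable when
# the measured value stays within the stated tolerance of the target.
# The target and tolerance values used here are hypothetical.
def within_tolerance(actual, target, tolerance):
    """Return True if actual performance deviates from the target
    objective by no more than the tolerance."""
    return abs(actual - target) <= tolerance
```

The point of the standard is that without a defined objective there is no target, and without a target no tolerance can be set or monitored.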
According to IRS officials, the Field Collection program has not assessed risks posed by case selection processes because selection processes are well designed. However, unless Field Collection management identifies and understands the significance of the risks to achieving identified objectives, IRS lacks sufficient assurance that the program's case selection processes support achievement of objectives and respond to the identified risks within acceptable tolerances. The Field Collection program's automated prioritization and decision support systems are control procedures that are intended to help guide staff to reduce risks in making decisions. For example, the priority levels may help guide group managers to generally select the types of cases management considers higher priority, such as those that could yield more revenue or other positive compliance results, which potentially reduces the risk of using resources inefficiently. However, because Field Collection lacks program and case selection objectives, it is not clear what objectives the automated processes support or which specific risks they are intended to address. According to federal internal control standards, periodic reviews of controls assure procedures continue to work as intended. Monitoring internal control design and effectiveness, and revising control procedures as needed provides sufficient evidence that the controls continue to be effective in addressing risks (which can change over time) and support achievement of program objectives. Although IRS occasionally makes and documents ad hoc changes to these automated processes to improve results, Field Collection lacks documented procedures to periodically review automated case selection policies, procedures, and related activities, such as the case characteristics and thresholds used to classify cases by priority level. 
IRS established the queue priority categories in 2000 and modified them in 2001, but did not have available documentation of periodic assessments to assure they continued to be effective in the intervening 15 years. According to IRS officials, Field Collection lacks documented procedures for periodic assessments because selection processes are well designed. However, without periodic reviews IRS lacks reasonable assurance that the case selection processes are still effective in working toward achieving the program's mission, including fairness to all taxpayers. Management is responsible for establishing operating procedures and communicating them to staff to ensure they are followed so that objectives are achieved. Establishing and communicating guidance--such as documenting procedures--provides necessary assurance that the staff responsible for implementing procedures understand and apply them to effectively achieve program objectives. The Field Collection program has established and communicated operating procedures to guide automated aspects of case selection. However, Field Collection has provided insufficient guidance to group managers on the use of professional judgment when manually selecting cases. As we noted earlier, we learned about the judgment group managers exercise in selecting cases by talking with Field Collection officials. For example, during the focus groups, managers described how professional judgment factors into the case selection process. Some group managers said they may choose to select a given case because of its geographic proximity to other cases assigned to the revenue officer. Similarly, several group managers discussed how they used professional judgment based on previous experience to assess a case's potential productivity for resulting in collection. This is consistent with the findings of a September 2014 report from the Treasury Inspector General for Tax Administration (TIGTA).
Although group managers use professional judgment when selecting cases for assignment--resulting in the commitment of revenue officer resources and some cases being selected over others--IRS has limited guidance on how to exercise such judgment. IRS's official guidance--the Internal Revenue Manual (IRM)--does not guide group managers on how to exercise judgment, such as by listing the factors that ought to be taken into account to help ensure that Field Collection program and case selection objectives are achieved. The only place the IRM acknowledges professional judgment is in a note that states, "There are many considerations when assigning work such as: risk level, case grade, current inventory, geographical issues, etc." The only other program-wide guidance we identified was the August 2013 email from the Director of Field Collection stating that cases should be selected so that the mix of assigned cases mirrors what is available in the queue. According to IRS officials, Field Collection has not developed and documented guidance for how group managers are to exercise professional judgment in case selection because they consider current procedures sufficient, such as relying on group managers to understand local conditions, relying on their previous experience as revenue officers, or gaining necessary experience on the job. However, the use of professional judgment without sufficient guidance presents risks and results in Field Collection management not having sufficient assurance that the case selection decisions group managers make support achievement of the program's mission of applying the tax law with integrity and fairness to all. The Field Collection program's automated systems and the decisions made by group managers determine if some collection cases are pursued sooner, later, or at all. Case selections can affect federal spending, revenue collected, and taxpayer confidence in the tax system's fairness, which can affect overall voluntary compliance.
Therefore, it is important that the Field Collection program select and pursue collection cases that are most likely to produce results in support of IRS's mission, including applying tax laws with integrity and fairness to all. Without clearly defined and measurable objectives, the Field Collection program cannot know, or provide taxpayers assurance that, its case selection procedures are effectively supporting its mission. Further, without objectives and other controls, IRS will not be able to monitor performance; identify, assess, and manage risks; or ensure that its automated processes are still effective. Moreover, while the use of professional judgment is to be expected in the selection of cases for assignment, without guidance for managers, IRS will not have assurance that selections are being made consistently across its regional offices. To ensure that Field Collection program case selection processes support IRS's and the Collection program's mission, including applying tax laws with integrity and fairness to all, we recommend that the Commissioner of Internal Revenue take the following five actions. Develop, document, and communicate Field Collection program and case selection objectives, including the role of fairness, in clear and measurable terms sufficient for use in internal control. Develop, document, and implement performance measures clearly linked to the Field Collection program and case selection objectives. Incorporate program and case selection objectives into existing risk management systems or use other approaches to identify and analyze potential risks to achieving those objectives so that Field Collection can establish risk tolerances and appropriate control procedures to address risks. Develop, document, and communicate control procedures guidance for group managers to exercise professional judgment in the Field Collection program case selection process to achieve fairness and other program and collection case selection objectives.
Develop, document, and implement procedures to periodically monitor and assess the design and operational effectiveness of both automated and manual control procedures for collection case selection to assure their continued effectiveness in achieving program objectives. We provided a draft of this report to the Commissioner of Internal Revenue for review and comment. The Deputy Commissioner for Service and Enforcement provided written comments on August 25, 2016, which are reprinted in appendix II. IRS agreed with our recommendations and described actions it plans to take to address each of them. IRS stated that it appreciates GAO's support and guidance as it continues to seek opportunities to improve Field Collection case selection controls and case selection throughout IRS. IRS states that our report does not identify any instances where the selection of a case was considered inappropriate or unfair. However, as described in our scope and methodology, we did not design our study to look for cases of inappropriate selection but rather to assess the internal controls that help safeguard the case selection processes. By evaluating the Field Collection program's internal control framework for selection, we were able to determine whether IRS has processes in place that provide reasonable assurance of fair case selection. IRS outlines planned actions to address each of our recommendations. However, it is not clear that these actions will be fully responsive to the first recommendation that IRS develop, document, and communicate Field Collection program and case selection objectives, including the role of fairness, in clear and measurable terms. IRS states that the Small Business/Self-Employed Division (SB/SE) will develop fiscal year 2017 program objectives that align with the mission of SB/SE and that the Collection program will develop and document specific Field Collection and case selection activities that will support SB/SE objectives. 
Our concern is that it is not clear how these efforts will address our recommendation to establish Field Collection (not division-level) program and case selection objectives. As described in this report, listing distinct activities or case types to focus on in a fiscal year does not meet the internal control standard of clearly defining and communicating program objectives in specific and measurable terms. Since it is not clear that the actions IRS described will result in Field Collection program and case selection objectives sufficient for internal control purposes, IRS's ability to address our related recommendations to establish performance measures, assess program risks, and monitor control procedure effectiveness may be limited. Clearly defining objectives is the foundation for effective implementation of internal control standards, including assurance that program operations effectively address risks to program objectives and support the achievement of objectives over time. In response to our recommendation to develop, document, and communicate control procedures guidance for Field Collection group managers to exercise professional judgment in case selection, IRS stated it would review current procedures and guidance and make changes if necessary. Given that we found little documented guidance on the appropriate use of professional judgment, IRS lacks sufficient assurance that case selections support achievement of the program's mission of applying the tax law with integrity and fairness to all. IRS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Chairmen and Ranking Members of other Senate and House committees and subcommittees that have appropriation, authorization, and oversight responsibilities for IRS. We will also send copies of the report to the Secretary of the Treasury, Commissioner of Internal Revenue, and other interested parties.
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our objectives were to (1) describe the Field Collection program's processes (automated and manual) for prioritizing and selecting cases and (2) assess how well Field Collection case selection processes support the collection program's mission, including applying tax laws "with integrity and fairness to all." To describe the case selection processes, we reviewed program documents and interviewed knowledgeable IRS officials, including officials in the Small Business and Self-Employed Division Collection and Field Collection offices. Our document review included guidance in the Internal Revenue Manual (IRM) and automated system manuals. Our analysis included both automated and manual processes that may involve IRS staff. We analyzed these processes to outline and graphically depict systems and processes IRS uses to prioritize and select cases. To provide information on the assigned and unassigned case inventory, we analyzed data from IRS Field Collection's main inventory management and case selection information system, ENTITY. The data included characteristics such as the dollars due on selectable and assigned cases, the age of the cases, and the priority levels of the cases as determined in the prioritization process. These data describe a one-time snapshot of IRS Field Collection case inventory characteristics on March 10, 2016. The data were only available as a snapshot because, according to IRS officials, ENTITY is the only source for data on the priority level of each case and the data on priority levels are updated frequently and are not stored.
To assess the reliability of the ENTITY March 10, 2016, snapshot data we present in the report tables, we interviewed knowledgeable IRS officials and manually tested the data for missing data, outliers, or obvious errors. We also reviewed relevant documentation on management reports and case routing data. In addition, we received another snapshot of the case inventories for May 25, 2016, and compared the data to the March 10 snapshot. We analyzed the data to determine if it changed significantly between the two points in time--which, for the purposes of our analysis, we determined would be a greater than 10 percent change--and found no significant changes. We found the data sufficiently reliable for the analysis that we conducted in this review. To evaluate how well the case selection processes support program goals, we compared the selection process and procedures to selected standards in Standards for Internal Control in the Federal Government, to include the standard that managers define program objectives, assess risks to the objectives, and design controls to support the achievement of the objectives and address the risks. We selected the standards by assessing which are among the most relevant to ensuring the selection processes support mission achievement given our objectives and the program context. These standards include to define program objectives in clear and measurable terms, which is an internal control foundation for other selected standards to assess risks and establish risk tolerances; to design and implement control procedures to guide operations and address risks; and to establish performance measures and procedures for assessing control procedures to assess program performance in achieving objectives and ensure that controls effectively address risks and support achievement of objectives over time. 
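The significance test applied to the two ENTITY snapshots--flagging any inventory field that changed by more than 10 percent between March 10 and May 25, 2016--amounts to a simple per-field comparison, sketched below. The field names are illustrative, not actual ENTITY data elements.

```python
# Illustrative version of the snapshot comparison: flag fields whose
# value changed by more than the threshold (10 percent) between two
# points in time. Field names and counts are hypothetical.
def significant_change(old, new, threshold=0.10):
    """Return {field: relative change} for fields whose relative
    change exceeds the threshold. `old` and `new` map a field name
    to its count in each snapshot."""
    flagged = {}
    for field, base in old.items():
        change = abs(new.get(field, 0) - base) / base if base else 0.0
        if change > threshold:
            flagged[field] = round(change, 3)
    return flagged
```

An empty result corresponds to the report's finding of no significant changes between the two snapshots.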
Our review of the design of controls included the IRM and other Field Collection program documents that we used to describe the case selection process in objective one. We conducted eight focus groups with a non-generalizable, nation-wide random sample of IRS Field Collection managers--two focus groups with territory managers and six with group managers--to collect evidence on the implementation of the case selection process. We received a list of all Field Collection group and territory managers from IRS. To ensure that managers selected had sufficient experience in their respective positions to actively contribute to the focus groups, we removed managers who were "acting" or had less than two years of experience in their position. We arranged the list of managers in a random order. Managers were assigned a focus group date and time in order of their random selection, controlling for their time zones, and were given the option to participate in the focus group or not. Forty-three of the 46 group managers that agreed to participate in the focus groups actually did; all 16 territory managers participated. All of the focus groups were conducted by phone in the week of March 28, 2016. We asked all eight focus groups questions about internal controls in the Field Collection case selection process, including the program objectives of the case selection process and the case characteristics managers consider when making case selections and assignments. We documented the responses from the focus group participants and categorized the responses into themes. We analyzed the themes for their frequency and pervasiveness through the focus groups. We looked for patterns or trends across all eight focus groups and for differences between the group and territory manager focus groups. We conducted this performance audit from August 2015 to September 2016 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. James R. McTigue, Jr. (202) 512-9110 or [email protected]. In addition to the above named contact, Brian James (Assistant Director), David Dornisch, Steven Flint, Travis Hill, Ted Hu, Ronald W. Jones, Kay Kuhlman, Donna Miller, Justin Riordan, and Andrew J. Stephens made key contributions to this report.
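The focus group sampling steps described in the methodology above--drop acting managers and those with under two years in the position, then place the rest in random order for scheduling--can be sketched as follows. The record fields are illustrative only.

```python
# Illustrative sketch of the focus-group sampling procedure: filter
# out ineligible managers, then randomize the order of the remainder.
# The record fields ("acting", "years_in_position") are hypothetical.
import random

def select_focus_group_order(managers, seed=0):
    """Return eligible managers in a reproducible random order.
    Eligible managers are non-acting with at least two years of
    experience in the position."""
    eligible = [m for m in managers
                if not m["acting"] and m["years_in_position"] >= 2]
    rng = random.Random(seed)  # fixed seed for reproducibility
    rng.shuffle(eligible)
    return eligible
```

In the actual study, the randomized order then drove focus group scheduling, subject to time zones and each manager's agreement to participate.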
IRS's Field Collection program is where IRS revenue officers make in-person contact with noncompliant individuals and business officials to enforce tax return filing and payment requirements. Sound processes for selecting cases are critical to maintain taxpayer confidence in the tax system and use federal resources efficiently. GAO was asked to review the processes IRS uses to select collection cases for potential enforcement action. This report (1) describes the Field Collection program's automated and manual processes for prioritizing and selecting cases and (2) assesses how well Field Collection case selection processes support the collection program's mission, including applying tax laws "with integrity and fairness to all." To address these objectives, GAO reviewed IRS documents and conducted interviews with IRS officials knowledgeable about the case selection processes, including a series of focus groups with IRS Field Collection managers. GAO evaluated how well the processes adhere to relevant federal standards for internal control. The Internal Revenue Service (IRS) uses automated processes to prioritize cases to be potentially selected for in-person contact to resolve a tax collection issue (see figure), but group managers in the Field Collection program manually select the cases to assign to revenue officers. For example, when reviewing cases, group managers consider characteristics of the revenue officer available--such as current workload--and case characteristics--such as potential collectability--when deciding whether to assign a case. GAO found weaknesses in the Field Collection program's internal controls for case selection, including: Program objectives are not clearly defined and communicated. IRS has not sufficiently developed and communicated specific and measurable program objectives, including fairness. GAO heard different interpretations of program objectives and the role of fairness from focus group participants. 
Without clearly defined and clearly understood objectives aligned to its mission, Field Collection management does not have reasonable assurance that case selection processes support achievement of that mission. Further, the lack of clearly articulated objectives undercuts the effectiveness of Field Collection management's efforts to measure performance and assess risks. Documentation and assessment of case selection risks are inadequate. The Field Collection program's automated prioritization and decision support systems are control procedures that may guide staff to reduce risks. However, the Field Collection program does not have documented procedures for periodically reviewing automated aspects of case selection. Further, the Field Collection program lacks sufficient guidance for group managers to exercise judgment in case selection. These deficiencies limit Field Collection management's ability to provide reasonable assurance that selection decisions effectively support achievement of IRS's mission. GAO is making five recommendations, including that IRS: develop and document objectives in clear and measurable terms, including fairness; provide guidance for group managers' use of judgment in selecting cases; and develop procedures to assess automated and manual processes. IRS agreed with the recommendations and outlined planned steps to address them.
NEDCTP's mission is to deter and detect the introduction of explosive devices into the transportation system. As of June 2014, NEDCTP had deployed 802 of the 985 canine teams it is able to fund across the transportation system. Table 1 identifies the number of canine teams of each type for which funding is available, as well as describes their roles, responsibilities, and costs to TSA. There are four types of LEO teams: aviation, mass transit, maritime, and multimodal, and three types of TSI teams: air cargo, multimodal, and PSC. Since our January 2013 report, TSA has taken steps to analyze key data on the performance of its canine teams to better identify program trends, as we recommended. In January 2013, we reported that TSA collected and used key canine program data in its Canine Website System (CWS), a central management database, but it could better analyze these data to identify program trends. Table 2 highlights some of the key data elements included in the CWS. In January 2013, we found that NEDCTP was using CWS data to track and monitor canine teams' performance. Specifically, field canine coordinators reviewed CWS data to determine how many training and utilization minutes canine teams conducted on a monthly basis. NEDCTP management used CWS data to determine, for example, how many canine teams were certified in detecting explosive odors, as well as the number of teams that passed short notice assessments. However, in our January 2013 report, we also found that TSA had not fully analyzed the performance data it collected in CWS to identify program trends and areas that were working well or in need of corrective action. For example: Training minutes: TSA tracked the number of training minutes canine teams conducted on a monthly basis, as well as the types of explosives and search areas used when training, to ensure teams maintained their proficiency in detecting explosive training aids. 
However, we found that TSA did not analyze training minute data over time (from month to month) and therefore was unable to determine trends related to canine teams' compliance with the requirement. On the basis of our analysis of TSA's data, we determined that some canine teams were repeatedly not in compliance with TSA's 240-minute training requirement, in some cases for 6 months or more in a 1-year time period. Utilization minutes: We found that TSA collected and analyzed data monthly on the amount of cargo TSI air cargo canine teams screened in accordance with the agency's requirement. However, it was unclear how the agency used this information to identify trends to guide longer-term future program efforts and activities, since our analysis of TSA's cargo screening data from September 2011 through July 2012 showed that TSI air cargo teams nationwide generally exceeded their monthly requirement. We concluded that TSA could increase the percentage of cargo it required TSI canine teams to screen. Certification rates: We found that TSA tracked the number of certified and decertified canine teams, but was unable to analyze these data to identify trends in certification rates because these data were not consistently tracked and recorded prior to 2011. Specifically, we could not determine what, if any, variances existed in the certification rates among LEO and TSI teams over time because CTES reported it was unable to provide certification rates by type of canine team for calendar years 2008 through 2010. According to CTES, the agency recognized the deficiency and was in the process of implementing procedures to address data collection, tracking, and record-keeping issues. Short notice assessments (covert tests): We found that when TSA was performing short notice assessments (prior to their suspension in May 2012), it was not analyzing the results beyond the pass and fail rates. 
We concluded that without conducting the assessments and analyzing the results of these tests to determine if there were any search areas or types of explosives in which canine teams were more effective compared with others, and what, if any, training may have been needed to mitigate deficiencies, TSA was missing an opportunity to fully utilize the results. Final canine responses: Our analysis of final canine responses and data on corresponding swab samples used to verify the presence of explosives odor revealed that canine teams were not submitting swab samples to NEDCTP's Canine Explosives Unit (CEU). Specifically, we determined that the number of swab samples sent by canine handlers to CEU for scientific review was far lower than the number of final canine responses recorded in CWS. We concluded that without the swab samples, TSA was not able to more accurately determine the extent to which canine teams were effectively detecting explosive materials in real-world scenarios. In January 2013, we recommended that TSA regularly analyze available data to identify program trends and areas that are working well and those in need of corrective action to guide program resources and activities. These analyses could include, but not be limited to, analyzing and documenting trends in proficiency training minutes, canine utilization, results of short notice assessments and final canine responses, performance differences between LEO and TSI canine teams, as well as an assessment of the optimum location and number of canine teams that should be deployed to secure the U.S. transportation system. TSA concurred with our recommendation and has taken actions to address it. Specifically, TSA is monitoring canine teams' training minutes over time by producing annual reports. 
TSA also reinstated short notice assessments in July 2013, and in the event a team fails, the field canine coordinator completes a report that includes an analysis of the team's training records to identify an explanation for the failure. In April 2013, TSA reminded canine handlers of the requirement to submit swab samples of their canines' final responses, and reported that the number of samples submitted that month increased by 450 percent compared with sample submissions in April 2012. CEU is producing reports on the results of its analysis of the swab samples for the presence of explosives odor. In June 2014, TSA officials told us that in March 2014, NEDCTP stood up a new office, known as the Performance Measurement Section, to perform analyses of canine team data. We believe that these actions address the intent of our recommendation and could better position TSA to identify program trends to better target resources and activities based on what is working well and what may be in need of corrective action. In our January 2013 report, we found that TSA began deploying PSC teams in April 2011 prior to determining the teams' operational effectiveness. However, in June 2012, the DHS Science and Technology Directorate (S&T) and TSA began conducting effectiveness assessments to help demonstrate the effectiveness of PSC teams. On the basis of these assessments, DHS S&T and TSA's NEDCTP recommended that the assessment team conduct additional testing and that additional training and guidance be provided to canine teams. See the hyperlink in the note for figure 2 for videos of training exercises at one airport showing instances when PSC teams detected, and failed to detect, explosives odor. In January 2013, we concluded that TSA could have benefited from completing effectiveness assessments of PSCs before deploying them on a nationwide basis to determine whether they are an effective method of screening passengers in the U.S. airport environment. 
We also reported in January 2013 that TSA had not completed an assessment to determine where within the airport PSC teams would be most effectively utilized, but rather TSA leadership focused on initially deploying PSC teams to a single location within the airport--the sterile area--because it thought it would be the best way to foster stakeholders', specifically airport operators' and law enforcement agencies', acceptance of the teams. Stakeholders were resistant to the deployment of PSC teams because they have civilian handlers, and TSA's response resolution protocols do not require the teams to be accompanied by a law enforcement officer. According to TSA's Assistant Administrator for the Office of Security Operations, to alleviate airport stakeholders' concerns regarding TSA's response resolution protocols, the agency initially deployed PSC teams to the sterile areas, thereby enabling TSA to gather data on the value of PSC teams in the airport environment while reducing the likelihood of a final response from a PSC, since an individual has already passed through several layers of screening when entering the sterile area. However, aviation stakeholders we interviewed raised concerns about this deployment strategy, stating that PSC teams would be more effectively utilized in non-sterile areas of the airport, such as curbside or in the lobby areas. TSA subsequently deployed PSC teams to the passenger screening checkpoints. However, DHS S&T did not plan to assess the effectiveness of PSCs on the public side, beyond the checkpoint, since TSA was not planning to deploy PSCs to the public side of the airport when DHS S&T designed its test plan. In January 2013, we concluded that comprehensive effectiveness assessments that include a comparison of PSC teams in both the sterile and public areas of the airport could help TSA determine if it is beneficial to deploy PSCs to the public side of airports, in addition to or in lieu of the sterile area and checkpoint. 
During the June 2012 assessment of PSC teams' effectiveness, TSA conducted one of the search exercises with three conventional canine teams. Although this assessment was not intended to be included as part of DHS S&T's and TSA's formal assessment of PSC effectiveness, the results of the assessment suggested, and TSA officials and DHS S&T's Canine Explosives Detection Project Manager agreed, that a systematic assessment of PSCs with conventional canines could provide TSA with information to determine whether PSCs provide an enhanced security benefit compared with conventional LEO aviation canine teams that have already been deployed to airport terminals. In January 2013, we concluded that an assessment would help clarify whether additional investments for PSC training are warranted. We also concluded that since PSC teams are trained in both conventional and passenger screening methods, TSA could decide to convert existing PSC teams to conventional canine teams, thereby limiting the additional resource investments associated with training and maintaining the new PSC teams. We recommended that TSA expand and complete testing, in conjunction with DHS S&T, to assess the effectiveness of PSCs and conventional canines in all airport areas deemed appropriate prior to making additional PSC deployments to help (1) determine whether PSCs are effective at screening passengers, and resource expenditures for PSC training are warranted, and (2) inform decisions regarding the type of canine team to deploy and where to optimally deploy such teams within airports. TSA concurred and has taken some actions to address our recommendation, but further action is needed to fully address it. 
Specifically, in June 2014, TSA reported that through its PSC Focused Training and Assessment Initiative, a two-cycle assessment to establish airport-specific optimal working areas, assess team performance, and train teams on best practices, it had assessed PSC teams deployed to 27 airports, culminating in a total of 1,048 tests. On the basis of these tests, TSA determined that PSC teams are effective and should be deployed at the checkpoint queue. In February 2014, TSA launched a third PSC assessment cycle to determine how PSCs' effectiveness changes over time in order to determine their optimal duration time when working the checkpoint queue (i.e., how many minutes they can work and continue to be effective). Although TSA has taken steps to determine whether PSC teams are effective and where in the airport environment to optimally deploy such teams, as of June 2014, TSA has not compared the effectiveness of PSCs and conventional canines in order to determine if the greater cost of training canines in the passenger screening method is warranted. According to TSA, the agency does not plan to include conventional canine teams in PSC assessments because conventional canines have not been through the process used with PSC canines to assess their temperament and behavior when working in proximity to people. While we recognize TSA's position that half of deployed conventional canines are of a breed not accepted for use in the PSC program, other conventional canines are suitable breeds, and have been paired with LEO aviation handlers working in proximity with people since they patrol airport terminals, including ticket counters and curbside areas. We continue to believe that TSA should conduct an assessment to determine whether conventional canines are as effective at detecting explosives odor on passengers when compared with PSC teams working in the checkpoint queue. 
As we reported, since PSC teams are trained in both conventional and passenger screening methods, TSA could decide to convert existing PSC teams to conventional canine teams, thereby limiting the additional resource investments associated with training and maintaining PSC teams. In our January 2013 report, we found that TSA's 2012 Strategic Framework calls for the deployment of PSC teams based on risk; however, airport stakeholder concerns about the appropriateness of TSA's response resolution protocols for these teams resulted in PSC teams not being deployed to the highest-risk airports. TSA officials stated that PSC teams were not deployed to the highest-risk airports for various reasons, including concerns from an airport law enforcement association about TSA's decision to deploy PSC teams with civilian TSI handlers and the appropriateness of TSA's response resolution protocols. These protocols require the canine handler to be accompanied by two additional personnel who may, but do not always, include a law enforcement officer. According to representatives from an airport law enforcement association, these protocols are not appropriate for a suicide bombing attempt requiring an immediate law enforcement response. TSA's decision to deploy PSC teams only to airports where they would be willingly accepted by stakeholders resulted in PSC teams not being deployed to the highest-risk airports on its high-risk list. Moreover, PSC teams that were deployed to high-risk airports, specifically two airports we visited, were not being used for passenger screening because TSA and the local law enforcement agencies had not reached agreement on the PSC response resolution protocols. 
We recommended that if PSCs are determined to provide an enhanced security benefit, TSA should coordinate with airport stakeholders to deploy future PSC teams to the highest-risk airports, and ensure that deployed PSC teams are utilized as intended, consistent with its statutory authority to provide for the screening of passengers and their property. TSA concurred with our recommendation, and has taken action to address it. Specifically, as of June 2014, the PSC teams for which TSA had funding and not already deployed to a specific airport at the time our report was issued have been deployed to, or allocated to, the highest-risk airports. According to TSA, it was successful in deploying PSC teams to airports where they were previously declined by aviation stakeholders for various reasons. For example, TSA officials explained that stakeholders have realized that PSCs are an effective means for detecting explosives odor, and no checkpoints have closed because of a nonproductive response. PSCs also help reduce wait times at airport checkpoints because PSC teams are one method by which TSA can operate Managed Inclusion--a tool that allows passengers who have not, for example, enrolled in TSA Pre✓™ to access TSA Pre✓™ screening lanes. According to TSA, PSC teams provide an added layer of security, making it possible for TSA to provide expedited screening to passengers who have not enrolled in TSA Pre✓™ and therefore have not had a background check. In November 2013, TSA also reported it was making progress in working with stakeholders to allow PSC teams to work at checkpoints at airports where PSC teams were not previously performing passenger screening, but rather were training and screening air cargo. In June 2014, TSA officials reported that of all the airports where PSC teams had been deployed, all but one airport agreed to allow TSA to conduct screening of individuals at passenger screening checkpoint queues. 
We believe that these actions address the intent of our recommendation, contingent upon TSA comparing PSC teams with conventional canine teams. Chairman Hudson, Ranking Member Richmond, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For questions about this statement, please contact Jennifer Grover at (202) 512-7141 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Chris Ferencik (Assistant Director), Chuck Bausell, Lisa Canini, Josh Diosomito, Michele Fejfar, Eric Hauswirth, Richard Hung, Thomas Lombardi, Jessica Orr, and Michelle Woods. Key contributors to the previous work that this testimony is based on are listed in the report. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
TSA has implemented a multilayered system composed of people, processes, and technology to protect the nation's transportation system. One of TSA's security layers is NEDCTP, composed of over 800 deployed explosives detection canine teams, including PSC teams trained to detect explosives on passengers. This testimony addresses the extent to which TSA has (1) regularly analyzed data to identify program trends and areas working well or in need of corrective action, and (2) comprehensively assessed the effectiveness of PSCs, and coordinated with stakeholders to deploy PSC teams to the highest-risk airports and utilize them as intended. This statement is based on a report GAO issued in January 2013 and selected updates obtained from October 2013 through June 2014. For the selected updates, GAO reviewed TSA documentation, including the results of PSC effectiveness assessments, and interviewed agency officials on the status of implementing GAO's recommendations. In January 2013, GAO reported that the Transportation Security Administration (TSA) collected and used key canine program data in support of its National Explosives Detection Canine Team Program (NEDCTP), but could better analyze these data to identify program trends. For example, GAO found that in reviewing short notice assessments (covert tests), TSA did not analyze the results beyond the pass and fail rates. Therefore, TSA was missing an opportunity to determine if there were any search areas or types of explosives in which canine teams were more effective compared with others, and what, if any, training may be needed to mitigate deficiencies. GAO recommended that TSA regularly analyze available data to identify program trends and areas that are working well and those in need of corrective action to guide program resources and activities. TSA concurred and has taken actions that address the intent of GAO's recommendation. 
For example, in the event a team fails a short notice assessment, TSA now requires that canine team supervisors complete an analysis of the team's training records to identify an explanation for the failure. In January 2013, GAO found that TSA began deploying passenger screening canine (PSC) teams--teams of canines trained to detect explosives being carried or worn on a person--in April 2011 prior to determining the teams' operational effectiveness and where within an airport PSC teams would be most effectively utilized. GAO recommended that TSA expand and complete testing to assess the effectiveness of PSCs and conventional canines (trained to detect explosives in stationary objects) in all airport areas deemed appropriate prior to making additional PSC deployments. This would help (1) determine whether PSCs are effective at screening passengers, and resource expenditures for PSC training are warranted, and (2) inform decisions regarding the type of canine team to deploy and where to optimally deploy such teams. TSA concurred and has taken steps to address the recommendation, but additional action is needed. Specifically, TSA launched a PSC training and assessment initiative and determined PSCs to be most effective when working at the airport checkpoint, but TSA does not plan to conduct a comparison of PSC teams with conventional canine teams as GAO recommended. In January 2013, GAO also found that TSA's 2012 Strategic Framework calls for the deployment of PSC teams based on risk; however, airport stakeholder concerns related to the composition and capabilities of PSC teams resulted in the teams not being deployed to the highest-risk airports. GAO recommended that if PSCs are determined to provide an enhanced security benefit compared with conventional canine teams, TSA should coordinate with airport stakeholders to deploy future PSC teams to the highest-risk airports. TSA concurred and has taken steps to address the recommendation. 
Specifically, the PSC teams for which TSA had funding and not already deployed to a specific airport at the time GAO's report was issued have been deployed to, or allocated to, the highest-risk airports. GAO is making no new recommendations in this statement.
As we move further into the 21st century, it becomes increasingly important for the Congress, OMB, and executive agencies to face two overriding questions: What is the proper role for the federal government? How should the federal government do business? GPRA serves as a bridge between these two questions by linking results that the federal government seeks to achieve to the program approaches and resources that are necessary to achieve those results. The performance information produced by GPRA's planning and reporting infrastructure can help build a government that is better equipped to deliver economical, efficient, and effective programs that can help address the challenges facing the federal government. Among the major challenges are instilling a results orientation, ensuring that daily operations contribute to results, understanding the performance consequences of budget decisions, coordinating crosscutting programs, and building the capacity to gather and use performance information. The cornerstone of federal efforts to successfully meet current and emerging public demands is to adopt a results orientation; that is, to develop a clear sense of the results an agency wants to achieve as opposed to the products and services (outputs) an agency produces and the processes used to produce them. Adopting a results orientation requires transforming organizational cultures to improve decisionmaking, maximize performance, and assure accountability--it entails new ways of thinking and doing business. This transformation is not an easy one and requires investments of time and resources as well as sustained leadership commitment and attention. Based on the results of our governmentwide survey in 2000 of managers at 28 federal agencies, many agencies face significant challenges in instilling a results orientation throughout the agency, as the following examples illustrate. 
At 11 agencies, less than half of the managers perceived, to at least a great extent, that a strong top leadership commitment to achieving results existed. At 26 agencies, less than half of the managers perceived, to at least a great extent, that employees received positive recognition for helping the agency accomplish its strategic goals. At 22 agencies, at least half of the managers reported that they were held accountable for the results of their programs to at least a great extent, but at only 1 agency did more than half of the managers report that they had the decisionmaking authority they needed to help the agency accomplish its strategic goals to a comparable extent. Additionally, in 2000, significantly more managers overall (84 percent) reported having performance measures for the programs they were involved with than the 76 percent who reported that in 1997, when we first surveyed federal managers regarding governmentwide implementation of GPRA. However, at no more than 7 of the 28 agencies did 50 percent or more of the managers respond that they used performance information to a great or very great extent for any of the key management activities we asked about. As I mentioned earlier, we are now moving to a more difficult but more important phase of GPRA--using results-oriented performance information on a routine basis as a part of agencies' day-to-day management and for congressional and executive branch decisionmaking. GPRA is helping to ensure that agencies are focused squarely on results and have the capabilities to achieve those results. GPRA is also showing itself to be an important tool in helping the Congress and the executive branch understand how the agencies' daily activities contribute to results that benefit the American people. 
To build leadership commitment and help ensure that managing for results becomes the standard way of doing business, some agencies are using performance agreements to define accountability for specific goals, monitor progress, and evaluate results. The Congress has recognized the role that performance agreements can play in holding organizations and executives accountable for results. For example, in 1998, the Congress chartered the Office of Student Financial Assistance as a performance-based organization, and required it to implement performance agreements. In our October 2000 report on agencies' use of performance agreements, we found that although each agency developed and implemented agreements that reflected its specific organizational priorities, structure, and culture, our work identified five common emerging benefits from agencies' use of results-oriented performance agreements (see fig. 1): strengthening alignment of results-oriented goals with daily operations; fostering collaboration across organizational boundaries; enhancing opportunities to discuss and routinely use performance information to make program improvements; providing a results-oriented basis for individual accountability; and maintaining continuity of program goals during leadership transitions. Performance agreements can be effective mechanisms to define accountability for specific goals and to align daily activities with results. For example, at the Veterans Health Administration (VHA), each Veterans Integrated Service Network (VISN) director's agreement includes performance goals and specific targets that the VISN is responsible for accomplishing during the next year. The goals in the performance agreements are aligned with VHA's, and subsequently the Department of Veterans Affairs' (VA), overall mission and goals. A VHA official indicated that including corresponding goals in the performance agreements of VISN directors contributed to improvements in VA's goals. 
For example, from fiscal years 1997 through 1999, VHA reported that its performance on the Prevention Index had improved from 69 to 81 percent. A goal requiring VISNs to produce measurable increases in the Prevention Index has been included in the directors' performance agreements each year from 1997 through 1999. The Office of Personnel Management recently amended its regulations for members of the Senior Executive Service requiring agencies to appraise senior executive performance using measures that balance organizational results with customer, employee, and other perspectives in their next appraisal cycles. The regulations also place increased emphasis on using performance results as a basis for personnel decisions, such as pay, awards, and removal. We are planning to review agencies' implementation of the amended regulations. Program evaluations are important for assessing the contributions that programs are making to results, determining factors affecting performance, and identifying opportunities for improvement. The Department of Agriculture's Animal and Plant Health Inspection Service (APHIS) provides an example of how program evaluations can be used to help improve performance by identifying the relationships between an agency's efforts and results. Specifically, APHIS used program evaluation to identify causes of a sudden outbreak of Mediterranean Fruit Flies along the Mexico-Guatemala border. The Department of Agriculture's fiscal year 1999 performance report described the emergency program eradication activities initiated in response to the evaluation's findings and recommendations, and linked the continuing decrease in the number of infestations during the fiscal year to these activities. However, our work has shown that agencies typically do not make full use of program evaluations as a tool for performance measurement and improvement. 
After a decade of government downsizing and curtailed investment, it is becoming increasingly clear that today's human capital strategies are not appropriately constituted to adequately meet current and emerging needs of the government and its citizens in the most efficient, effective, and economical manner possible. Attention to strategic human capital management is important because building agency employees' skills, knowledge, and individual performance must be a cornerstone of any serious effort to maximize the performance and ensure the accountability of the federal government. GPRA, with its explicit focus on program results, can serve as a tool for examining the programmatic implications of an agency's strategic human capital management challenges. However, we reported in April 2001 that, overall, agencies' fiscal year 2001 performance plans reflected different levels of attention to strategic human capital issues. When viewed collectively, we found that there is a need to increase the breadth, depth, and specificity of many related human capital goals and strategies and to better link them to the agencies' strategic and programmatic planning. Very few of the agencies' plans addressed succession planning to ensure reasonable continuity of leadership; performance agreements to align leaders' performance expectations with the agency's mission and goals; competitive compensation systems to help the agency attract, motivate, retain, and reward the people it needs; workforce deployment to support the agency's goals and strategies; performance management systems, including pay and other meaningful incentives, to link performance to results; alignment of performance expectations with competencies to steer the workforce towards effectively pursuing the agency's goals and strategies; and employee and labor relations grounded in a mutual effort on the strategies to achieve the agency's goals and to resolve problems and conflicts fairly and effectively. 
In a recent report, we concluded that a substantial portion of the federal workforce will become eligible to retire or will retire over the next 5 years, and that workforce planning is critical for assuring that agencies have sufficient and appropriate staff considering these expected increases in retirements. OMB recently instructed executive branch agencies and departments to submit workforce analyses by June 29, 2001. These analyses are to address areas such as the skills of the workforce necessary to accomplish the agency's goals and objectives; the agency's recruitment, training, and retention strategies; and the expected skill imbalances due to retirements over the next 5 years. OMB also noted that this is the initial phase of implementing the President's initiative to have agencies restructure their workforces to streamline their organizations. These actions indicate OMB's growing interest in working with agencies to ensure that they have the human capital capabilities needed to achieve their strategic goals and accomplish their missions. Major management challenges and program risks confronting agencies continue to undermine the economy, efficiency, and effectiveness of federal programs. As you know, Mr. Chairman, this past January, we updated our High-Risk Series and issued our 21-volume Performance and Accountability Series and governmentwide perspective that outlines the major management challenges and program risks that federal agencies continue to face. This series is intended to help the Congress and the administration consider the actions needed to support the transition to a more results-oriented and accountable federal government. GPRA is a vehicle for ensuring that agencies have the internal management capabilities needed to achieve results. OMB has required that agencies' annual performance plans include performance goals for resolving their major management problems. 
Such goals should be included particularly for problems whose resolution is mission-critical, or which could potentially impede achievement of performance goals. This guidance should help agencies address critical management problems to achieve their strategic goals and accomplish their missions. OMB's attention to such issues is important because we have found that agencies are not consistently using GPRA to show how they plan to address major management issues. A key objective of GPRA is to help the Congress, OMB, and executive agencies develop a clearer understanding of what is being achieved in relation to what is being spent. Linking planned performance with budget requests and financial reports is an essential step in building a culture of performance management. Such an alignment infuses performance concerns into budgetary deliberations, prompting agencies to reassess their performance goals and strategies and to more clearly understand the cost of performance. For the fiscal year 2002 budget process, OMB called for agencies to prepare an integrated annual performance plan and budget and asked the agencies to report on the progress they had made in better understanding the relationship between budgetary resources and performance results and on their plans for further improvement. In the 4 years since the governmentwide implementation of GPRA, we have seen more agencies make more explicit links between their annual performance plans and budgets. Although these links have varied substantially and reflect agencies' goals and organizational structures, the connections between performance and budgeting have become more specific and thus more informative. We have also noted progress in agencies' ability to reflect the cost of performance in the statements of net cost presented in annual financial statements. Again, there is substantial variation in the presentation of these statements, but agencies are developing ways to better capture the cost of performance. 
Virtually all of the results that the federal government strives to achieve require the concerted and coordinated efforts of two or more agencies. There are over 40 program areas across the government, related to a dozen federal mission areas, in which our work has shown that mission fragmentation and program overlap are widespread, and that crosscutting federal program efforts are not well coordinated. To illustrate, in a November 2000 report, and in several recent testimonies, we noted that overall federal efforts to combat terrorism were fragmented. These efforts are inherently difficult to lead and manage because the policy, strategy, programs, and activities to combat terrorism cut across more than 40 agencies. As we have repeatedly stated, there needs to be a comprehensive national strategy on combating terrorism that has clearly defined outcomes. For example, the national strategy should include a goal to improve state and local response capabilities. Desired outcomes should be linked to a level of preparedness that response teams should achieve. We believe that, without this type of specificity in a national strategy, the nation will continue to miss opportunities to focus and shape the various federal programs combating terrorism. Crosscutting program areas that are not effectively coordinated waste scarce funds, confuse and frustrate program customers, and undercut the overall effectiveness of the federal effort. GPRA offers a structured and governmentwide means for rationalizing these crosscutting efforts. The strategic, annual, and governmentwide performance planning processes under GPRA provide opportunities for agencies to work together to ensure that agency goals for crosscutting programs complement those of other agencies; program strategies are mutually reinforcing; and, as appropriate, common performance measures are used. 
If GPRA is effectively implemented, the governmentwide performance plan and the agencies' annual performance plans and reports should provide the Congress with new information on agencies and programs addressing similar results. Once these programs are identified, the Congress can consider the associated policy, management, and performance implications of crosscutting programs as part of its oversight of the executive branch. Credible performance information is essential for the Congress and the executive branch to accurately assess agencies' progress towards achieving their goals. However, limited confidence in the credibility of performance information is one of the major continuing weaknesses with GPRA implementation. The federal government provides services in many areas through state and local governments; thus, both program management and accountability responsibilities often rest with those governments. In an intergovernmental environment, agencies are challenged to collect accurate, timely, and consistent national performance data because they rely on data from the states. For example, earlier this spring, the Environmental Protection Agency identified, in its fiscal year 2000 performance report, data limitations in its Safe Drinking Water Information System due to recurring reports of discrepancies between national and state databases, as well as specific misidentifications reported by individual utilities. Also, the Department of Transportation could not show actual fiscal year 2000 performance information for measures associated with its outcome of less highway congestion. Because such data would not be available until after September 2001, Transportation used projected data. According to the department, the data were not available because they are provided by the states, and the states' reporting cycles for these data do not match the department's annual performance reporting cycle.
Discussing data credibility and related issues in performance reports can provide important contextual information to the Congress. The Congress can use this discussion, for example, to raise questions about the problems agencies are having in collecting needed results-oriented information and the cost and data quality trade-offs associated with various collection strategies.
This testimony discusses the Government Performance and Results Act (GPRA) of 1993. During the last decade, Congress, the Office of Management and Budget, and executive agencies have worked to implement a statutory framework to improve the performance and accountability of the executive branch and to enhance executive branch and congressional decisionmaking. The core of this framework includes financial management legislation, especially GPRA. As a result of this framework, there has been substantial progress in the last few years in establishing the basic infrastructure needed to create high-performing federal organizations. The issuance of agencies' fiscal year 2000 performance reports, in addition to updated strategic plans, annual performance plans, and the governmentwide performance plans, completes two full cycles of annual performance planning and reporting under GPRA. However, much work remains before this framework is effectively implemented across the government, including transforming agencies' organizational cultures to improve decisionmaking and strengthen performance and accountability.
Established in 1934, Ex-Im operates as an independent agency of the U.S. government and is the official export credit agency of the United States. In 1983, Congress required Ex-Im to make available for fiscal year 1986 and thereafter not less than 10 percent of its aggregate loan, guarantee, and insurance authority for financing exports by small businesses. In 2002, Congress established several new requirements for Ex-Im relating to small business, including increasing from 10 to 20 percent the proportion of Ex-Im's aggregate loan, guarantee, and insurance authority that must be made available for the direct benefit of small businesses. When reauthorizing the bank's charter in 2006, Congress again established new requirements for Ex-Im, including a small business division with an office of financing for socially and economically disadvantaged small business concerns and small business concerns owned by women, designating small business specialists in all divisions, creating a small business committee to advise the bank president, and defining standards to measure the bank's success in financing small business. Ex-Im has taken steps to meet these requirements. Ex-Im uses the Small Business Administration methodology to determine whether a company qualifies as a small business. To apply this methodology, Ex-Im obtains company information through its application process. Ex-Im also subscribes to Dun and Bradstreet, a commercial information vendor, which provides information about companies, including Standard Industrial Classification (SIC) codes. Ex-Im uses the SIC codes provided by Dun and Bradstreet to determine a company's small business standing by obtaining the corresponding North American Industry Classification System (NAICS) code through the Small Business Administration website. Ex-Im offers a variety of financing instruments, including loan guarantees, export credit insurance, and working capital guarantees.
Ex-Im provides its insurance either directly to exporters (non-bank-held insurance) or to banks which in turn finance U.S. exporters (bank-held insurance). For the bank-held insurance policies, Ex-Im authorizes the policy for the bank, which does not know at the time it applies for the financing which exporters will eventually use the export credit insurance. Between fiscal years 2002 and 2007, Ex-Im increased the percentage of its financing for small businesses and continued to finance most small business transactions through insurance or working capital guarantees. Ex-Im met the congressional requirement to make available not less than 20 percent of its financing authority for small businesses in 2006 and 2007. In fiscal year 2006, Ex-Im's small business financing was 26.2 percent of its total financing, and in fiscal year 2007 it increased to 26.7 percent. In fiscal years 2002 through 2005, Ex-Im did not reach the goal, with its small business financing share ranging from 16.9 percent to 19.7 percent. (See fig. 1.) The percentage of Ex-Im financing directly benefiting small business depends on the value of small business financing relative to the value of non-small business financing. (See fig. 2.) While the small business financing value slowly increased between fiscal years 2001 and 2007, the value for non-small business financing was noticeably lower in 2006 and 2007, compared to 2005. Ex-Im has primarily used three types of tools to finance small business transactions: non-bank-held insurance, working capital guarantees, and bank-held insurance (see fig. 3). In 2007, each tool was used to finance about 30 percent of the $3.4 billion Ex-Im made available for small business transactions. The remaining 8 percent of small business financing was through medium- and long-term loans and guarantees. This pattern contrasts with non-small business financing, where the largest share is through medium- and long-term loans and guarantees.
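The share computation Ex-Im reports against the 20 percent requirement reduces to simple division. A minimal sketch, assuming illustrative dollar values rather than Ex-Im's actual totals:

```python
def small_business_share(small_value, non_small_value):
    """Share of total financing directly benefiting small business."""
    return small_value / (small_value + non_small_value)

# Illustrative values in billions of dollars (hypothetical, not Ex-Im data).
# A lower non-small business total raises the share even if small business
# financing itself grows only slowly, as noted in the text.
share = small_business_share(small_value=3.4, non_small_value=9.3)
meets_requirement = share >= 0.20
```

Note the denominator effect: the share can cross the 20 percent threshold partly because non-small business financing falls, not only because small business financing grows.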
Ex-Im's use of bank-held insurance has posed some challenges for accurately calculating the small business financing share, in part because Ex-Im does not know who the exporter will be prior to authorizing the bank-held insurance transaction and therefore cannot make a small business designation at that time. For bank-held insurance and credit guarantee facilities, Ex-Im estimates the share of the financing benefiting small business based on data regarding previous shipments under those types of transactions. These estimates of the small business share of authorized transactions can differ significantly from the small business amounts actually shipped under the authorizations. For example, in 2005 Ex-Im authorized a $10 million short-term insurance policy under which no shipments had been reported prior to our March 2006 report. In contrast, in 2005 Ex-Im also authorized a $50 million short-term insurance policy where shipments under the policy exceeded $87 million for a 6-month period (or $174 million on an annualized basis). In our 2006 report, we found weaknesses in Ex-Im's data and data systems for tracking small business financing and made recommendations for improvement, and Ex-Im has taken steps to address those weaknesses. We reported that, while Ex-Im generally classified companies' small business status correctly, weaknesses in its data and data systems limited its ability to accurately determine its small business financing amounts and share. In implementing "Ex-Im Online" and certain internal control measures, Ex-Im has improved its ability to accurately measure small business financing. Based on our review of independent data and Ex-Im's paper transaction files, GAO reported in 2006 that Ex-Im's classification of companies' small business status was generally correct.
From our review of Ex-Im's electronic databases and Dun and Bradstreet data on companies' sales and employment, we estimated that, 83 percent of the time, Ex-Im's small business designation matched the designation based on Dun and Bradstreet data. Based on a review of Ex-Im's official paper transaction files in instances where Ex-Im and Dun and Bradstreet's designations differed, we determined that Ex-Im's designation was justified in most instances. In our 2006 report, we identified weaknesses in Ex-Im's process for calculating its small business financing and made some corresponding recommendations for improvement. The weaknesses ranged from internal control weaknesses that may affect only a few transactions a year to more significant weaknesses in Ex-Im's system for estimating about one-third of its small business support. We reported two internal control weaknesses in Ex-Im data systems used to calculate and report on Ex-Im's small business financing; by implementing its interactive database, Ex-Im Online, the bank has largely addressed those weaknesses. First, we found that Ex-Im's electronic data systems used to calculate its small business support did not contain complete or up-to-date information on companies' small business status. As a result, to obtain the most current information for these companies, Ex-Im officials needed to identify and locate paper transaction files. While Ex-Im's paper files generally supported its small business designation, we found a significant number of discrepancies between Ex-Im's paper and electronic files. Second, we found that Ex-Im's data systems sometimes contained conflicting information for the same company. Ex-Im maintained information about insurance transactions and participants in one data system and information about loans and guarantee transactions and participants in another data system. 
According to Ex-Im, updating information in a company's record (including its small business designation) in one database did not update the company's record in the other database. As a result, the two databases could, and in some cases did, have conflicting information about the same company. GAO recommended that Ex-Im improve the completeness, accuracy, and consistency of its transaction data. Since the issuance of the GAO report, Ex-Im Bank has implemented a number of controls to enhance and reinforce the bank's methodology for capturing relevant information for reporting small business statistics. Most notably, Ex-Im replaced its previous data systems with Ex-Im Online, an interactive, web-based process that allows exporters, brokers, and financial institutions to transact with Ex-Im electronically. According to Ex-Im, more than 75 percent of all applications are now submitted online, eliminating the need to transfer information from paper copies to the bank's electronic files. Ex-Im officials stated that Ex-Im Online also includes a direct feed from Dun and Bradstreet, which provides current demographic information about a company so that Ex-Im can make an accurate assessment of the company's small business status. In addition to initiating Ex-Im Online, Ex-Im changed its internal procedures to require documented dual signoff on the small business determination for each transaction. We reported two weaknesses in Ex-Im's system for estimating small business financing when the exporter is not known at the time Ex-Im authorizes the transaction, which applied to about one-third of Ex-Im's total small business financing for fiscal year 2004. First, we found that Ex-Im's estimates might not accurately reflect the amount of small business financing under bank-held insurance policies because of large differences between the amount of financing authorized and the amount of financing used to actually ship goods.
For both fiscal years 2004 and 2005, the value of shipments under bank-held insurance policies was a fraction of the total authorized value of the bank-held insurance policies. For example, according to Ex-Im records, it authorized $3.4 billion of bank-held insurance transactions for fiscal year 2004, but there were only $280 million in shipments under bank-held insurance policies in the first 6 months of the fiscal year. Ex-Im applied its estimate of the small business share of transactions, based on these shipments, to the $3.4 billion of bank-held insurance policies it authorized during the year, and determined that about $720 million of the authorized value of bank-held insurance policies during the year directly benefited small business. Thus, the method resulted in estimates of small business shares for the authorized value of these types of transactions based on a very small share (about 8 percent) of the total authorized value. Also, we found that Ex-Im classified the small business status of a significant portion of the companies making shipments as "unknown" and excluded them from its calculation of the estimate of its small business support. Of the $280 million of shipments under bank-held insurance for 2004, for example, an Ex-Im official classified about $128 million (or nearly half) as shipments by companies whose small business status was "unknown" and excluded these shipments from its calculation of total shipments. GAO recommended that Ex-Im improve its system for estimating the value and proportion of direct small business support for those transactions where the exporter is not known at the time Ex-Im authorizes the transaction. According to Ex-Im, its implementation of Ex-Im Online improves these estimates because borrowers can now enter their shipment reports directly into Ex-Im Online. According to Ex-Im, two-thirds of shipment reports are now being entered in this manner.
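The estimating method GAO critiqued can be sketched as follows. The authorized and shipment totals are the fiscal year 2004 figures from the text; the small/non-small split of the known shipments is hypothetical, and the function name is ours, not Ex-Im's:

```python
def estimate_small_business_financing(authorized_value, shipments):
    """Estimate small business financing under bank-held insurance.

    'shipments' maps exporter status -> shipped value. As in the method
    GAO reviewed, shipments by exporters of "unknown" status are simply
    excluded from the share calculation.
    """
    known = shipments["small"] + shipments["non_small"]  # unknowns dropped
    small_share = shipments["small"] / known
    return small_share * authorized_value

# Fiscal year 2004 figures from the testimony, in millions of dollars;
# the 32/120 split of the $152 million in known shipments is hypothetical.
shipments = {"small": 32.0, "non_small": 120.0, "unknown": 128.0}
estimate = estimate_small_business_financing(3400.0, shipments)
```

Because the $280 million of shipments is only about 8 percent of the $3.4 billion authorized, and nearly half of it is dropped as "unknown," small swings in reported shipments translate into large swings in the estimate.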
Ex-Im officials stated that such automated submission of shipment information has significantly reduced the amount of shipments by exporters whose small business status is unknown. They stated that only 3 percent of the fiscal year 2007 shipments under bank-held insurance were by exporters whose small business status was unknown. They also stated that, for credit guarantee facilities, no shipments were recorded by exporters whose small business status was unknown. GAO also recommended that Ex-Im engage an external auditor to audit its annual, legislatively mandated report on its direct support for small business. Ex-Im engaged Mayer Hoffman McCann P.C., its internal auditor, to perform the audit. With respect to credit guarantee facilities, bank-held policies, and non-bank-held insurance (i.e., single buyer/multi-buyer) policies, the auditors found that Ex-Im's process to obtain and calculate eligible small business counts operates in accordance with its policy and approved methodology. However, the auditors found exceptions to stated policy during their review of the working capital guarantee and non-credit guarantee facilities programs. For example, in the working capital guarantee program, the auditors noted a number of exceptions related to the completion of data fields that would have "flagged" these accounts as small business. The auditors stated that they believed that Ex-Im management was taking action to strengthen supervisory edit controls over these processes. Ex-Im is statutorily required to report on the number of its authorized transactions that directly benefit small business; in our 2006 report we found that Ex-Im's method of determining this number included some transactions that did not directly benefit small business. Ex-Im has frequently reported that about 85 percent of its authorized transactions directly benefit small business. 
For instance, in fiscal year 2004, it reported that 2,572 (or 83 percent) of its authorized transactions directly supported small businesses. This count was based on crediting all 698 bank-held insurance policies as directly benefiting small business. We reported that while many of these transactions directly benefit small business, they may not all directly benefit small business, as evidenced by the fact that Ex-Im's own estimate showed that about 20 percent of the value of bank-held insurance policies directly benefited small business during 2004. GAO recommended that Ex-Im more accurately determine and clearly report the number of transactions that directly benefit small business; however, Ex-Im officials disagreed with this recommendation and have not changed their methodology. Ex-Im officials stated that they reviewed their process and believe that it is appropriate. A senior official also noted that since the methodology has been used for a number of years, the bank can confidently report trends. The bank also believes that its methodology provides a conservative estimate. Since GAO's last report on small business financing in March 2006, Ex-Im has made a number of changes. It also surpassed the target of allocating 20 percent of its financing to small business for both 2006 and 2007. While this is partly due to a drop in the overall level of financing provided to other customers by the bank, Ex-Im has shown increases in the level of business with small firms over several years. In addition, Ex-Im has made changes in its data systems which allow Congress to have a greater level of confidence in its reporting on small business and other matters, and it has instituted new internal controls to further increase accuracy in categorizing firms' small business status. Managing its resources going forward to respond to ongoing congressional interest in the composition of Ex-Im's financing will, undoubtedly, entail new challenges for the bank.
We look forward to working with Ex-Im further on issues related to evaluation of its small business financing efforts, including those directed at businesses owned by disadvantaged individuals and minorities, as mandated by the Congress with the strong support of this Committee. Madam Chairwoman, this concludes my prepared remarks. I would be pleased to respond to any questions you or other members of the committee may have at this time. Should you have any questions about this testimony, please contact Loren Yager at (202) 512-4347 or [email protected]. Celia Thomas, Miriam A. Carroll and Jason Bair also made major contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Export-Import Bank (Ex-Im) provides loans, loan guarantees, and insurance to support U.S. exports. Its level of support for small business has been a long-standing issue of congressional interest. In 2002, Congress increased the proportion of financing Ex-Im must make available for small business to 20 percent. In 2006, Congress directed Ex-Im to make organizational changes related to small business and to better evaluate its small business efforts. This statement discusses (1) trends in Ex-Im's small business financing since fiscal year 2000 and (2) the weaknesses GAO found in the tracking and reporting of Ex-Im's small business financing and the steps Ex-Im has taken to address them. This testimony is based primarily on GAO's March 2006 report (GAO-06-351) concerning Ex-Im's small business program. In that report, we recommended that Ex-Im (1) improve the data it maintains on its customers with regard to their small business status; (2) improve its system for estimating the value and proportion of direct small business support for those transactions where the exporter is not known at the time of authorization; (3) more accurately determine and clearly report the number of transactions that directly benefit small business; and (4) have its auditor audit Ex-Im's reporting of its direct support for small business. Ex-Im agreed with three of the four recommendations. We discuss the actions Ex-Im has taken to implement our suggestions in this statement. The share of Ex-Im financing directly benefiting small business has increased over recent years, surpassing the required 20 percent in 2006 and 2007. The percentage increase reflects a slow increase in Ex-Im financing for small businesses, while financing for non-small businesses was noticeably lower in 2006 and 2007 compared to 2005. Ex-Im continues to finance most small business transactions through insurance or working capital guarantees. 
In our 2006 report, we found weaknesses in Ex-Im's data and data systems for tracking small business financing and made recommendations for improvement, and Ex-Im has taken steps to address those weaknesses. We reported that while Ex-Im generally classified companies' small business status correctly, weaknesses limited its ability to accurately determine small business financing values. For transactions where Ex-Im can identify the exporter at the time it authorizes the transaction, we found that internal control weaknesses in Ex-Im's data systems limited its ability to accurately determine small business financing amounts and share. For transactions where Ex-Im cannot identify the exporter up-front, we found that weaknesses in its system for estimating small business financing also limited its ability to accurately measure and report on such financing. We also reported some limitations in Ex-Im's calculation of the number--as opposed to the value--of transactions benefiting small business. GAO made four recommendations. Ex-Im has taken several steps in response to those recommendations. Most notably, Ex-Im replaced its previous data systems with "Ex-Im Online," an interactive, web-based process that allows exporters, brokers, and financial institutions to transact with Ex-Im electronically. According to Ex-Im, this has resulted in more timely and accurate information on Ex-Im's financing.
FNS' quality control system measures the states' performance in accurately determining food stamp eligibility and calculating benefits. Under this system, the states calculate their payment errors by annually drawing a statistically valid sample of at least 300 and as many as 1,200 active cases, depending on the average monthly caseload; by reviewing the case information; and by making home visits to determine whether households were eligible for benefits and received the correct benefit payment. FNS regional offices validate the results by reviewing a subset of each state's sample to determine its accuracy, making adjustments to the state's overpayment and underpayment errors as necessary. To determine each state's combined payment error rate, FNS adds overpayments and underpayments, then divides the sum by total food stamp benefit payments. As shown in figure 1, the national combined payment error rate for the Food Stamp Program was consistently above 9 percent from fiscal year 1993 through fiscal year 1999. About 70 percent of the food stamp payment errors resulted in overpayments to recipients, while about 30 percent resulted in underpayments. FNS' payment error statistics do not account for the states' efforts to recover overpayments; in fiscal year 1999, the states collected $213 million in overpayments. (See app. II for information about states' error rates and collections of overpayments.) Errors in food stamp payments occur for a variety of reasons. For example, food stamp caseworkers may miscalculate a household's eligibility and benefits because of the program's complex rules for determining who are members of the household, whether the value of a household's assets (mainly vehicles and bank accounts) is less than the maximum allowable, and the amount of a household's earned and unearned income and deductible expenses.
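The combined error rate computation described above can be expressed directly; the dollar figures in the example are hypothetical, chosen to mirror the roughly 70/30 overpayment/underpayment split:

```python
def combined_payment_error_rate(overpayments, underpayments, total_benefits):
    """FNS method: add overpayment and underpayment errors, then divide
    by total benefit payments. Recovered overpayments are not netted out."""
    return (overpayments + underpayments) / total_benefits

# Hypothetical state totals in millions of dollars.
rate = combined_payment_error_rate(overpayments=63.0,
                                   underpayments=27.0,
                                   total_benefits=1000.0)
print(f"{rate:.1%}")  # prints 9.0%
```

Because underpayments add to the rate rather than offsetting overpayments, a state cannot lower its measured rate by having errors in both directions cancel out.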
Concerning the latter, food stamp rules require caseworkers to determine a household's gross monthly income and then calculate a net monthly income by determining the applicability of six allowable deductions: a standard deduction, an earned income deduction, a dependent care deduction, a medical deduction, a child support deduction, and an excess shelter cost deduction. (See app. III for the factors that state caseworkers consider in calculating a household's excess shelter cost deduction.) The net income, along with other factors such as family size, becomes the basis for determining benefits. Other payment errors occur after benefits have been determined primarily because households do not always report changes in income that can affect their benefits and the states do not always act on reported changes, as required by food stamp law. To reduce the likelihood of payment errors, FNS regulations require that states certify household eligibility at least annually, and establish requirements for households to report changes that occur after certification. In certifying households, states are required to conduct face- to-face interviews, typically with the head of the household, and obtain pertinent documentation at least annually. In establishing reporting requirements, the states have the option of requiring households to use either (1) monthly reporting, in which households with earned income file a report on their income and other relevant information each month; or (2) change reporting, in which all households report certain changes, including income fluctuations of $25 or more, within 10 days of the change. According to FNS, many states have shifted from monthly reporting to change reporting because of the high costs associated with administering a monthly reporting system. However, change reporting is error-prone because households do not always report changes and the states do not always act on them in a timely fashion, if at all. 
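The gross-to-net income step described above can be sketched as below. The deduction amounts are treated as already-computed inputs, since the actual percentages and caps come from program rules not detailed here; all figures are hypothetical:

```python
def net_monthly_income(gross_income, deductions):
    """Net income = gross monthly income minus applicable deductions.

    'deductions' may contain any of the six allowable deductions named
    in the rules: standard, earned income, dependent care, medical,
    child support, and excess shelter cost.
    """
    allowed = ("standard", "earned_income", "dependent_care",
               "medical", "child_support", "excess_shelter")
    total = sum(deductions.get(name, 0.0) for name in allowed)
    return max(gross_income - total, 0.0)

# Hypothetical household with three applicable deductions.
net = net_monthly_income(1200.0, {"standard": 134.0,
                                  "earned_income": 240.0,
                                  "excess_shelter": 150.0})
```

The net income then feeds, along with family size and other factors, into the benefit determination; a misjudged deduction at this step propagates directly into a payment error.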
Each of the 28 states we contacted has taken many actions to reduce payment error rates. Further, 80 percent of the states took each of five actions: (1) case file reviews by supervisors or special teams to verify the accuracy of food stamp benefit payments, (2) special training for caseworkers, (3) analyses of quality control data to identify causes of payment errors, (4) electronic database matching to identify ineligible participants and verify income and assets, and (5) use of computer software programs to assist caseworkers in determining benefits. It is difficult to link a specific state action to its effect on error rates because other factors also affect error rates. However, almost all state food stamp officials cited case file reviews by supervisors and others as being one of their most effective tools for reducing error rates. Additionally, state officials most often cited the competing pressure of implementing welfare reform as the primary challenge to reducing food stamp payment errors in recent years. The following subsection summarizes our findings on state actions to reduce payment errors. Case file reviews to verify payment accuracy: In 26 of the 28 states we contacted, supervisors or special teams reviewed case files to verify the accuracy of benefit calculations and to correct any mistakes before the state's quality control system identified them as errors. Supervisory reviews, used by 22 states, typically require that supervisors examine a minimum number of files compiled by each caseworker. For example, Alaska requires monthly supervisory review of five cases for each experienced caseworker and all cases for each new caseworker. Furthermore, 20 states, including many of the states using supervisory review, use special teams to conduct more extensive reviews designed to identify problems in specific offices, counties, or regions. 
Reviewers correct mistakes before they are detected as quality control errors, where possible; identify the reasons for the mistakes; and prescribe corrective actions to prevent future errors. For example, in Genesee County, Michigan, the teams read about 2,800 case files, corrected errors in nearly 1,800, and provided countywide training in such problem areas as shelter expenses and earned income. In Massachusetts, caseworkers reviewed all case files in fiscal year 2000 because of concerns that the state's error rate would exceed the national average and that FNS would impose financial sanctions. Massachusetts corrected errors in about 13 percent of the case files reviewed; these would have been defined as payment errors had they been identified in a quality control review. Special training for caseworkers: In addition to the training provided to new caseworkers, 27 states provided a range of training for new and experienced caseworkers aimed at reducing payment errors. For example, these states conducted training specifically targeted to calculating benefits for certain categories of food stamp households, such as those with earned income or those with legal noncitizens, for which rules are more likely to be misapplied. Many states also conducted training to update caseworkers and supervisors on food stamp policy changes that affect how benefits are calculated; new policies often introduce new calculation errors because caseworkers are unfamiliar with the revised rules for calculating benefits, according to several state officials. Analysis of quality control data: Twenty-five states conducted special analyses of their quality control databases to identify common types of errors made in counties or local offices for use in targeting corrective actions. For example, California created a quality control database for the 19 largest of its 54 counties and generated monthly reports for each of the 19 counties to use. 
Georgia assigned a staff member to review each identified quality control error and work with the appropriate supervisor or worker to determine why the error occurred and how it could be prevented in the future. With this process, officials said, counties are much more aware of their error cases, and now perceive quality control as a tool for reducing errors. In Michigan, an analysis of quality control data revealed that caseworkers were misinterpreting a policy that specified when to include adults living with a parent in the same household, and changes were made to clarify the policy. Electronic database matching: All 28 states matched their food stamp rolls against other state and federal computer databases to identify ineligible participants and to verify participants' income and asset information. For example, all states are required to match their food stamp rolls with state and local prisoner rolls. In addition, most states routinely match their food stamp participants with one or more of the following: (1) their department of revenue's "new hires" database (a listing of recently employed individuals in the state) to verify income, (2) the food stamp rolls of neighboring states to identify possible fraud, and (3) their department of motor vehicle records to verify assets. Officials in four states said the "new hires" match reduced payment errors by allowing caseworkers to independently identify a change in employment status that a household had not reported and that would likely affect its benefits. Mississippi food stamp officials said the vehicle match helped reduce payment errors because caseworkers verified the value of applicants' vehicles as part of determining eligibility. Computer assistance in calculating benefits: Twenty-three states had developed computer software for caseworkers to use in determining an applicant's eligibility and/or in calculating food stamp benefit amounts. 
Twenty-two of the states have software that determines eligibility and calculates benefits based on information caseworkers enter; the remaining state's software is limited to calculating benefits after the caseworker has determined eligibility. These programs may also cross-check information to correct data entry errors; provide automated alerts that, for example, a household member is employed; and generate reminders for households, for example, to schedule an office visit. The most advanced software programs had online interview capabilities, which simplified the application process. Some states had automated case management systems that integrated Food Stamp Program records with their Medicaid and other assistance programs, which facilitated the administration of these programs. Some states took other actions to reduce their payment errors. For example, even though FNS regulations only require that food stamp households be recertified annually, 16 states increased the frequency with which certain types of food stamp households must provide pertinent documentation for recertifying their eligibility for food stamp benefits. In particular, 12 of the 16 states now require households with earned income to be recertified quarterly because their incomes tend to fluctuate, increasing the likelihood of payment errors. More frequent certification enables caseworkers to verify the accuracy of household income and other information, allowing caseworkers to make appropriate adjustments to the household's benefits and possibly avoid a payment error. However, more frequent certification can also inhibit program participation because it creates additional reporting burdens for food stamp recipients. In addition to more frequent certification, five states reported that they access credit reports and public records to determine eligibility and benefits.
Seven states have formed change reporting units in food stamp offices serving certain metropolitan areas, so that participants notify these centralized units, instead of caseworkers, about starting a new job or other reportable changes. Food stamp officials in 20 of the 28 states told us that they have primarily relied on case file reviews by supervisors and others to verify payment accuracy and reduce payment errors. For example, Georgia officials noted one county's percentage of payment errors dropped by more than half as a result of the state's requirement that management staff in 10 urban counties re-examine files after a supervisor's review. In each of the past 3 years, Ohio food stamp administrators have reviewed up to 100 cases per county per year and have awarded additional state funding to counties with low error rates. In fiscal year 1999, the counties used $2.5 million in state funds primarily for payment accuracy initiatives. There was less consensus about the relative usefulness of other initiatives in reducing payment errors. Specifically, food stamp officials in 13 states told us that special training for caseworkers was one of their primary initiatives; officials in 8 states cited recertifying households more frequently; officials in 6 states identified the use of computer software to determine eligibility and/or benefits; officials in 5 states identified computer database matches; and officials in 4 states cited analyses of quality control data. Food stamp officials in 22 of the states we contacted cited their states' implementation of welfare reform as a challenge to reducing error rates in recent years. In particular, implementing welfare reform programs and policy took precedence over administering the Food Stamp Program in many states--these programs competed for management attention and resources. 
In Connecticut, for example, caseworkers were directed to help participants find employment; therefore, the accuracy of food stamp payments was deemphasized, according to state officials. Similarly, Hawaii officials said agency leadership emphasized helping recipients to find employment and instituted various programs to accomplish this, which resulted in less attention to payment accuracy. Furthermore, officials from 14 states said welfare reform led to an increase in the number of working poor. This increased the possibility of errors because the income of these households is more likely to fluctuate than income of other food stamp households. State food stamp officials cited three other impediments to their efforts to reduce payment errors, although far less frequently. First, officials in 12 states cited a lack of resources, such as a shortage of caseworkers to manage food stamp caseloads, as a challenge to reducing error rates. Georgia, Mississippi, and Texas officials said caseworker turnover was high, and New Hampshire officials said they currently have a freeze on hiring new caseworkers. Second, officials in 10 states cited problems associated with either using, or making the transition from, outdated automated systems as challenges to reducing payment errors. For example, New Hampshire officials found that their error rate increased from 10.2 percent in fiscal year 1998 to 12.9 percent in fiscal year 1999 after they began to use a new computer system. In addition, Connecticut and Maryland officials noted that incorporating rules changes into automated systems is difficult and often results in error-prone manual workarounds until the changes are incorporated. Finally, officials in nine states told us that food stamp eligibility revisions in recent years, particularly for legal noncitizens, have increased the likelihood of errors. 
To encourage the states to reduce error rates, FNS has employed financial sanctions and incentives, approved waivers of reporting requirements for certain households, and promoted initiatives to improve payment accuracy through the exchange of information among the states. However, state food stamp officials told us the single most useful change for reducing error rates would be for FNS to propose legislation to simplify requirements for determining Food Stamp Program eligibility and benefits. Simplifying food stamp rules would not necessarily alter the total amount of food stamp benefits given to participants, but it may reduce the program's administrative costs (the states spent $4.1 billion to provide $15.8 billion in food stamp benefits in fiscal year 1999). FNS officials and others expressed concern, however, that some simplification options may reduce FNS' ability to precisely target benefits to each individual household's needs. The three principal methods FNS has used to reduce payment errors in the states are discussed in the following subsections. As required by law, FNS imposes financial sanctions on states whose error rates exceed the national average. These states are required to either pay the sanction or provide additional state funds--beyond their normal share of administrative costs--to be reinvested in error-reduction efforts, such as additional training in calculating benefits for certain households. FNS imposed $30.6 million in sanctions on 16 states with payment error rates above the national average in fiscal year 1999 and $78.2 million in sanctions on 22 states in fiscal year 1998--all of which were reinvested in error- reduction efforts. (See app. IV.) Food stamp officials in 22 states reported that their agencies had increased their commitment to reducing payment errors in recent years; officials in 14 states stated that financial sanctions, or the threat of sanctions, was the primary reason for their increased commitment. 
For example, when the Texas Department of Human Services requested money to cover sanctions prior to 1995, the Texas legislature required the department to report quarterly on its progress in reducing its payment error rate. Officials in Texas, which has received enhanced funding for the past 2 fiscal years, cited the department's commitment and accountability to the Texas legislature as primary reasons for reducing the error rate over the years and for maintaining their focus on payment accuracy. FNS also rewards states primarily on the basis of their combined payment error rate being less than or equal to 5.9 percent--well below the national average. FNS awarded $39.2 million in enhanced funding to six states in fiscal year 1999 and $27.4 million to six states in fiscal year 1998. In the past 5 years, 16 states have received enhanced funding at least once. Officials in one state told us that the enhanced funding remained in the state's general fund, while officials in four states said the enhanced funding supplemented the state's appropriation for use by the Food Stamp Program and other assistance programs. For example, in Arkansas, the food stamp agency used its enhanced funding for training, systems development, and equipment. Arkansas officials told us that enhanced funding was a major motivator for their agency, and they have seen an increase in efforts to reduce payment errors as a direct result. In July 1999, FNS announced that it would expand the availability of waivers of certain reporting requirements placed on food stamp households. FNS was concerned that the increase in employment among food stamp households would result in larger and more frequent income fluctuations, which would increase the risk of payment errors. FNS also was concerned that the states' reporting requirements were particularly burdensome for the working poor and may, in effect, act as an obstacle to their participation in the program. 
This is because eligible households may not view food stamp benefits as worth the time and effort it takes to obtain them. As of November 2000, FNS had granted reporting waivers to 43 states, primarily for households with earned income. (See app. V.) The three principal types of waivers are explained below: The threshold reporting waiver raises the earned income changes that households must report to more than $100 per month. (Households still must report if a member gains or loses a job.) Without this waiver, households would be required to report any wage or salary change of $25 or more per month. Ohio uses this type of waiver (with a smaller $80-per-month threshold) specifically for self-employed households. Ohio credits the use of this and other types of reporting waivers with the decrease in its error rate from 11.2 percent in 1997 to 8.4 percent in 1999. The status reporting waiver limits the changes that households must report to three key events: (1) gaining or losing a job, (2) moving from part-time to full-time employment or vice versa, and (3) experiencing a change in wage rate or salary. This waiver eliminates the need for households to report fluctuations in the number of hours worked, except if a member moves from part-time to full-time employment. Texas officials cited the implementation of the status reporting waiver in 1994 as a primary reason that their error rate dropped by nearly 3 percentage points (from over 12 percent) in 1995. Texas' error rate reached a low of about 4.6 percent in 1999. The quarterly reporting waiver eliminates the need for households with earned income to report any changes during a 3-month period, provided the household provides required documentation at the end of the period. The waiver reduces payment errors because any changes that occurred during a quarter were not considered to be errors and households more readily understood requirements for reporting changes.
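The effect of the threshold reporting waiver on what a household must report can be sketched as a simple comparison of the two reporting rules described above. The sketch deliberately ignores the separate requirement to report gaining or losing a job, which applies with or without the waiver.

```python
def must_report_income_change(monthly_change, threshold_waiver=False):
    """Simplified change-reporting rule for earned income.
    Default rule: wage or salary changes of $25 or more per month must
    be reported.  Under a threshold reporting waiver: only changes of
    more than $100 per month.  (Job gains and losses must be reported
    in either case; that event is not modeled here.)"""
    if threshold_waiver:
        return abs(monthly_change) > 100
    return abs(monthly_change) >= 25

# A $30/month raise: reportable under the default rule, not under the waiver.
print(must_report_income_change(30))                         # True
print(must_report_income_change(30, threshold_waiver=True))  # False
```

The comparison makes the waiver's mechanism visible: small fluctuations that previously generated reports, processing work, and opportunities for error simply drop out of scope.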
Food stamp officials in Arkansas, which implemented a quarterly reporting waiver in 1995, believe that their quarterly reporting waiver is a primary reason for their subsequent stable error rate. FNS expects that reporting waivers will reduce the number of payment errors made because households are more likely to report changes and, with fewer reports to process, the states will be able to process changes accurately and within required time frames. However, the lower payment error rates that result from these waivers are primarily caused by a redefinition of a payment error, without reducing the Food Stamp Program's benefit costs. For example, a pay increase of $110 per month that is not reported until the end of the 3-month period is not a payment error under Arkansas' quarterly reporting waiver, while it would be an error if there were no waiver. As a result, the quarterly reporting waiver may reduce FNS' estimate of overpayments and underpayments. FNS estimated, in July 1999, that the quarterly waiver would increase food stamp benefit costs by $80 million per year, assuming that 90 percent of the states applied for the waiver. Of the 10 states that do not have a reporting waiver, 7 require monthly reporting for households with earned income. The advantage of monthly reporting is that benefits are issued on the basis of what has already occurred and been documented. In addition, regular contact with food stamp households allows caseworkers to quickly detect changes in the household's situation. However, monthly reporting is more costly to administer and potentially can exacerbate a state's error rate, particularly if it cannot keep up with the volume of work. A Hawaii food stamp official told us that monthly reporting contributed to recent increases in Hawaii's error rate because caseworkers have not processed earned income changes on time, while Connecticut officials said their food stamp workers were making mistakes by rushing to meet deadlines. 
As part of the food stamp quality control program, FNS' seven regional offices have assembled teams of federal and state food stamp officials to identify the causes of payment errors and ways to improve payment accuracy. Each region also has held periodic conferences in which states from other regions were invited to highlight their successes and to respond to questions about implementing their initiatives. Examples of topics at recent conferences in FNS' northeastern region included best payment accuracy practices and targeting agency-caused errors. FNS' regional offices also have made funds available for states to send representatives to other states to learn first-hand about initiatives to reduce payment errors. Since 1996, FNS has compiled catalogs of states' payment accuracy practices that provide information designed to help other states develop and implement similar initiatives. Food stamp officials in all 28 states we contacted called for simplifying complex Food Stamp Program rules, and most of these states would like to see FNS involved in advocating simplification. In supporting simplification, the state officials generally cited caseworkers' difficulty in correctly applying food stamp rules to determine eligibility and calculate benefits. For example, Maryland's online manual for determining a household's food stamp benefits is more than 300 pages long. Specifically, the state officials cited the need to simplify requirements for (1) determining a household's deduction for excess shelter costs and (2) calculating a household's earned and unearned income. Food stamp officials in 20 of the 28 states we contacted said simplifying the rules for determining a household's allowable shelter deduction would be one of the best ways to reduce payment errors. The Food Stamp Program generally provides for a shelter deduction when a household's monthly shelter costs exceed 50 percent of income after other deductions have been allowed. 
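The basic excess shelter rule just stated (a deduction when monthly shelter costs exceed 50 percent of income after other deductions) can be sketched as follows. The statutory ceiling that limits this deduction for most households is not given in the report, so it is left as an assumed parameter rather than a fixed value.

```python
def excess_shelter_deduction(monthly_shelter_costs,
                             income_after_other_deductions, cap=None):
    """Simplified sketch: the deduction equals shelter costs above
    50 percent of the income remaining after other deductions.
    `cap` stands in for the statutory ceiling on the deduction that
    applies to most households; its value is assumed, not sourced."""
    excess = max(monthly_shelter_costs
                 - 0.5 * income_after_other_deductions, 0)
    return min(excess, cap) if cap is not None else excess

# Hypothetical household: $600 shelter costs, $800 income after deductions.
print(excess_shelter_deduction(600, 800))  # 200.0
```

Even this skeletal version hints at the error-proneness the state officials describe: the deduction depends on correctly establishing both the household's shelter costs and its post-deduction income before the 50-percent test can be applied.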
Allowable deductions include rent or mortgage payments, property taxes, homeowner's insurance, and utility expenses. Several state officials told us that determining a household's shelter deduction is prone to errors because, for example, caseworkers often need to (1) determine whether to pro-rate the shelter deduction if members of a food stamp household share expenses with others, (2) determine whether to use a standard utility allowance rather than actual expenses, and (3) verify shelter expenses, even though landlords may refuse to provide required documentation. Food stamp officials in 18 states told us that simplifying the rules for earned income would be one of the best options for reducing payment errors because earned income is both the most common and the costliest source of payment errors. Generally, determining earned income is prone to errors because caseworkers must use current earnings as a predictor of future earnings and the working poor do not have consistent employment and earnings. Similarly, officials in six states told us that simplifying the rules for unearned income would help reduce payment errors. In particular, state officials cited the difficulty caseworkers have in estimating child support payments that will be received during the certification period because payments are often intermittent and unpredictable. Because households are responsible for reporting changes in unearned income of $25 or more, differences between estimated and actual child support payments often result in a payment error. FNS officials and advocates for food stamp participants, however, have expressed concern about some possible options for simplifying the rules for determining eligibility and calculating benefits. 
For example, in determining a household's allowable shelter deduction, if a single standard deduction were used for an entire state, households in rural areas would likely receive greater benefits than they would have using actual expenses, while households in urban areas would likely receive smaller benefits. In this case, simplification may reduce FNS' ability to precisely target benefits to each individual household's needs. FNS officials also pointed out that likely reductions in states' payment error rates would reflect changes to the rules for calculating food stamp benefits rather than improved performance by the states. FNS has begun to examine alternatives for improving the Food Stamp Program, including options for simplifying requirements for determining benefits, as part of its preparations for the program's upcoming reauthorization. More specifically, FNS hosted a series of public forums, known as the National Food Stamp Conversation 2000, in seven cities attended by program participants, caseworkers, elected officials, antihunger advocates, emergency food providers, health and nutrition specialists, food retailers, law enforcement officials, and researchers. Simplification of the Food Stamp Program was one of the issues discussed at these sessions as part of a broad-based dialogue among stakeholders about aspects of the program that have contributed to its success and features that should be strengthened to better achieve program goals. FNS is currently developing a variety of background materials that will integrate the issues and options raised in these forums. FNS has not yet begun to develop proposed legislation for congressional consideration in reauthorizing the Food Stamp Program. FNS and the states have taken actions aimed at reducing food stamp payment errors, which currently stand at about 10 percent of the program's total benefits. 
Financial sanctions and enhanced funding have been at least partially successful in focusing states' attention on minimizing errors. However, this "carrot and stick" approach can only accomplish so much, because food stamp regulations for determining eligibility and benefits are extremely complex and their application inherently error-prone and costly to administer. Furthermore, this approach, carried to extremes, can create incentives for states to take actions that may inhibit achievement of one of the agency's basic missions--providing food assistance to those who are in need. For example, increasing the frequency that recipients must report income changes could decrease errors, but it could also have the unintended effect of discouraging participation by the eligible working poor. This would run counter not only to FNS' basic mission but also to an overall objective of welfare reform--helping people move successfully from public assistance into the workforce. Simplifying the Food Stamp Program's rules and regulations offers an opportunity to, among other things, reduce payment error rates and promote program participation by eligible recipients. FNS has taken initial steps in examining options for simplification through its forums with stakeholders. However, it is unclear to what extent FNS will build on these ideas to (1) systematically develop and analyze the advantages and disadvantages of various simplification options, and (2) if warranted, submit the legislative changes needed to implement simplification proposals.
To help ease program administration and potentially reduce payment errors, we recommend that the Secretary of Agriculture direct the Administrator of the Food and Nutrition Service to (1) develop and analyze options for simplifying requirements for determining program eligibility and benefits; (2) discuss the strengths and weaknesses of these options with representatives of the congressional authorizing committees; and (3) if warranted, submit legislative proposals to simplify the program. The analysis of these options should include, among other things, estimating expected program costs, effects on program participation, and the extent to which the distribution of benefits among recipients could change. We provided the U.S. Department of Agriculture with a draft of this report for review and comment. We met with Agriculture officials, including the Director of the Program Development Division within the Food and Nutrition Service's Food Stamp Program. Department officials generally agreed with the information presented in the report and provided technical clarifications, which we incorporated as appropriate. Department officials also agreed with the thrust of our recommendations. However, they expressed reservations about the mechanics of implementing our recommendation that they discuss simplification options with representatives of the congressional authorizing committees. In particular, they noted the importance of integrating consultation on policy options with the process for developing the President's annual budget request. In addition, they urged a broader emphasis on consideration of policy options that meet the full range of program objectives, including, for example, ending hunger, improving nutrition, and supporting work. We agree that simplification options should be discussed in the larger context of achieving program objectives. 
However, we believe that an early dialogue about the advantages and disadvantages of simplification options will facilitate the congressional debate on one of the most important and controversial issues for reauthorizing the Food Stamp Program. Copies of this report will be sent to the congressional committees and subcommittees responsible for the Food Stamp Program; the Honorable Jacob Lew, Director, Office of Management and Budget; and other interested parties. We will also make copies available upon request. Please contact me at (202) 512-5138 if you or your staff have any questions about this report. Key contributors to this report are listed in appendix VI. To examine states' efforts to minimize food stamp payment errors, we analyzed information obtained through structured telephone interviews with state food stamp officials in 28 states. We selected the 28 states to include states with the lowest payment error rates, states with the highest error rates, and the 10 states with the most food stamp participants in fiscal year 1999. Overall, the states we interviewed included 14 states with payment error rates below the national average and 14 states with error rates above the national average. They delivered about 74 percent of all food stamp benefits in fiscal year 1999. We supplemented the structured interviews with information obtained from visits to Maryland, Massachusetts, Michigan, and Texas. To examine what the Department of Agriculture's Food and Nutrition Service (FNS) has done and could do to help states reduce food stamp payment errors, we relied in part on information obtained from our telephone interviews, as well as on information obtained from discussions with officials at FNS' headquarters and each of its seven regional offices. We also analyzed FNS documents and data from its quality control system. exceeding 130 percent of the monthly poverty income guideline for its household size.
To qualify for this option, a state must have a certification period of 6 months or more. The threshold reporting waiver raises the earned income changes that households must report to more than $100 per month. (Households still must report if a member gains or loses a job.) Without this waiver, households would be required to report any wage or salary change of $25 or more per month. The status reporting waiver limits the income changes that households must report to three key events: (1) gaining or losing a job, (2) moving from part-time to full-time employment or vice versa, and (3) a change in the wage rate or salary. The quarterly reporting waiver eliminates the need for households with earned income to report any changes during a 3-month period, provided the household provides required documentation at the end of the period. The 5-hour reporting waiver limits changes that households must report to three key events: (1) gaining or losing a job; (2) a change in wage rate or salary; and (3) a change in hours worked of more than 5 hours per week, if this change is expected to continue for more than a month. In addition to those named above, Christine Frye, Debra Prescott, and Michelle Zapata made key contributions to this report.
To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. Web site: http://www.gao.gov/fraudnet/fraudnet.htm e-mail: [email protected] 1-800-424-5454 (automated answering system)
In fiscal year 2000, the Department of Agriculture's Food Stamp Program, administered jointly by the Food and Nutrition Service (FNS) and the states, provided $15 billion in benefits to an average of 17.2 million low-income persons each month. FNS, which pays the full cost of food stamp benefits and half of the states' administrative costs, promulgates program regulations and oversees program implementation. The states run the program, determining whether households meet eligibility requirements, calculating monthly benefits the households should receive, and issuing benefits to participants. FNS assesses the accuracy of states' efforts to determine eligibility and benefit levels. Because of concerns about the integrity of Food Stamp Program payments, GAO examined the states' efforts to minimize food stamp payment errors and what FNS has done and could do to encourage and assist the states in reducing such errors. GAO found that all 28 states it examined had taken steps to reduce payment errors. These steps included verifying the accuracy of benefit payments calculated through supervisory and other types of casefile reviews, providing specialized training for food stamp workers, analyzing quality control data to determine causes of errors and developing corrective actions, matching food stamp rolls with other federal and state computer databases to identify ineligible participants, and using computer software to assist caseworkers in determining benefits. To reduce payment errors, FNS has imposed financial sanctions on states with high error rates and has waived some reporting requirements.
The Department of the Interior (Interior), created by the Congress in 1849, oversees and manages the nation's publicly owned natural resources, including parks, wildlife habitat, and crude oil and natural gas resources on over 500 million acres onshore and in the waters of the Outer Continental Shelf. In this capacity, Interior is authorized to lease federal oil and gas resources and to collect the royalties associated with their production. Onshore, Interior's Bureau of Land Management is responsible for leasing federal oil and natural gas resources, whereas offshore, MMS has leasing authority. To lease lands or waters for oil and gas exploration, companies generally must first pay the federal government a sum of money that is determined through a competitive auction. This money is called a bonus bid. After the lease is awarded and production begins, the companies must also pay royalties to MMS based on a percentage of the cash value of the oil and natural gas produced and sold. Royalty rates for onshore leases are generally 12 and a half percent, whereas offshore, they range from 12 and a half percent for water depths greater than 400 meters to 16 and two-thirds percent for water depths less than 400 meters. However, the Secretary of the Interior recently announced plans to raise the royalty rate to 16 and two-thirds percent for most future leases issued in waters deeper than 400 meters. MMS also has the option of taking a percentage of the actual oil and natural gas produced, referred to as "taking royalties in kind," and selling it or using it for other purposes, such as filling the nation's Strategic Petroleum Reserve. Based on our work to date, the Deep Water Royalty Relief Act (DWRRA) will likely cost the federal government billions of dollars in forgone royalties, but precise estimates of the costs are not possible at this time for several reasons. 
First, the failure of MMS to include price thresholds in the 1998 and 1999 leases and current attempts to renegotiate these leases have created uncertainty about which leases will ultimately receive relief. Second, a recent lawsuit is questioning whether MMS has the authority to set price thresholds for the leases issued from 1996 through 2000. The outcome of this litigation could dramatically affect the amount of forgone revenues. Finally, assessing the ultimate fiscal impact of royalty relief is an inherently complex task, involving uncertainty about future production and prices. In October 2004, MMS preliminarily estimated that the total costs of royalty relief for deep water leases issued under the act could be as high as $80 billion, depending on which leases ultimately received relief. MMS made assumptions about several conditions when generating this estimate, and these assumptions need to be updated in 2007 to more accurately portray potential losses. In addition, the costs of forgone royalties need to be measured against any potential benefits of royalty relief, including accelerated drilling and production of oil and gas resources, increased oil and gas production, and increased fees that companies are willing to pay through bonus bids for these leases. The Congress passed DWRRA in 1995, when oil and gas prices were low and production was declining both onshore and in the shallow waters of the Gulf of Mexico. The act contains provisions to encourage the exploration and development of oil and gas resources in waters deeper than 200 meters lying largely in the western and central planning areas of the Gulf of Mexico. The act mandates that royalty relief apply to leases issued in these waters during the five years following the act's passage--from November 28, 1995 through November 28, 2000. As a safeguard against giving away all royalties, two mechanisms are commonly used to ensure that royalty relief is limited and available only under certain conditions. 
The first mechanism limits royalty relief to specified volumes of oil and gas production called "royalty suspension volumes," which are dependent upon water depth. Royalty suspension volumes establish production thresholds above which royalty relief no longer applies. That is, once total production for a lease reaches the suspension volume, the lessee must begin paying royalties. Royalty suspension volumes are expressed in barrels of oil equivalent, which is a term that allows oil and gas companies to combine oil and gas volumes into a single measure, based on the relative amounts of energy they contain. The royalty suspension volumes applicable under DWRRA are as follows: (1) not less than 17.5 million barrels of oil equivalent for leases in waters of 200 to 400 meters, (2) not less than 52.5 million barrels of oil equivalent for leases in waters of 400 to 800 meters, and (3) not less than 87.5 million barrels of oil equivalent for leases in waters greater than 800 meters. Hence, there are incentives to drill in increasingly deeper waters. Before 1994, companies drilled few wells in waters deeper than 500 meters. MMS attributes additional leasing and drilling in deep waters to the passage of these incentives but also cites other factors for increased activity, including improved three-dimensional seismic surveys, some key deep water discoveries, high deep water production rates, and the evolution of deep water development technology. After the passage of DWRRA, uncertainty existed as to how royalty suspension volumes would apply. Interior officials employed with the department when DWRRA was passed said that they recommended to the Congress that the act should state that royalty suspension volumes apply to the production volume from an entire field. However, oil and gas companies paying royalties under the act interpreted the royalty suspension volumes as applying to individual leases within a field. 
This is important because an oil and gas field commonly consists of more than one lease, meaning that if royalty suspension volumes are set for each lease within a field rather than for the entire field, companies are likely to owe fewer royalties. For example, if a royalty suspension volume is based on an entire field composed of three leases, a company producing oil and gas from a 210-million-barrel oil field--where the royalty suspension volume is set at 100 million barrels--would be obligated to pay royalties on 110 million barrels (210 minus 100). However, if the same 210-million-barrel field had the same suspension volume of 100 million barrels applied to each of the three leases, and 70 million barrels were produced from each of the three leases, no royalties would be due because no lease would have exceeded its royalty suspension volume. After passage of the act, MMS implemented royalty relief on a field basis and was sued by the industry. Interior lost the case in the Fifth Circuit Court of Appeals. In October 2004, MMS estimated that this decision will cost the federal government up to $10 billion in forgone future royalty revenues. A second mechanism that can be used to limit royalty relief and safeguard against giving away all royalties is the price threshold. A price threshold is the price of oil or gas above which royalty relief no longer applies. Hence, royalty relief is allowed only so long as oil and gas prices remain below a certain specified price. At the time of the passage of DWRRA, oil and gas prices were low--West Texas Intermediate, a key benchmark for domestic oil, was about $18 per barrel, and the average U.S. wellhead price for natural gas was about $1.60 per million British thermal units. In an attempt to balance the desire to encourage production and ensure a fair return to the American people, MMS relied on a provision in the act which states that royalties may be suspended based on the price of production from the lease. 
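The lease-versus-field arithmetic in the 210-million-barrel example above can be sketched in a few lines of code. The figures are the hypothetical ones from the example, not actual lease data:

```python
def royalties_owed_field_basis(field_production, suspension_volume):
    """Royalty-bearing volume when one suspension volume applies to the whole field."""
    return max(field_production - suspension_volume, 0)

def royalties_owed_lease_basis(lease_productions, suspension_volume):
    """Royalty-bearing volume when each lease gets its own suspension volume."""
    return sum(max(p - suspension_volume, 0) for p in lease_productions)

# A 210-million-barrel field split evenly across three leases,
# with a 100-million-barrel royalty suspension volume.
field_total = 210          # million barrels
per_lease = [70, 70, 70]   # million barrels per lease
suspension = 100           # million barrels

print(royalties_owed_field_basis(field_total, suspension))  # 110 -- field basis
print(royalties_owed_lease_basis(per_lease, suspension))    # 0 -- no lease exceeds 100
```

The lease-basis result of zero is why Interior's loss in the Fifth Circuit mattered: the same field owes royalties on 110 million barrels under one interpretation and on nothing under the other.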
MMS then established price thresholds of $28 per barrel for oil and $3.50 per million British thermal units for gas, with adjustments each year since 1994 for inflation, that were to be applied to leases issued under DWRRA. As with the application of royalty suspension volumes, problems arose with the application of these price thresholds. From 1996 through 2000-- the five years after passage of DWRRA--MMS issued 3,401 leases under authority of the act. MMS included price thresholds in 2,370 leases issued in 1996, 1997, and 2000 but did not include price thresholds in 1,031 leases issued in 1998 and 1999. This failure to include price thresholds has been the subject of congressional hearings and investigations by Interior's Office of the Inspector General. In October 2004, MMS estimated that the cost of not including price thresholds on the 1998 and 1999 leases could be as high as $10 billion. MMS also estimated that through 2006, about $1 billion had already been lost. To stem further losses, MMS is currently attempting to renegotiate the leases issued in 1998 and 1999 with the oil and gas companies that hold them. To date, MMS has announced successful negotiations with five of the companies holding these leases and has either not negotiated or not successfully negotiated with 50 other companies. In addition to forgone royalty revenues from leases issued in 1998 and 1999, leases issued under DWRRA in the other three years--1996, 1997, and 2000--are subject to losing royalty revenues due to legal challenges regarding price thresholds. In 2006, Kerr McGee Corporation sued MMS over the application of price thresholds to leases issued between November 28, 1995 and November 28, 2000, claiming that the act did not authorize Interior to apply price thresholds to those leases. MMS estimated in October 2004 that if price thresholds are disallowed for the leases it issued in 1996, 1997, and 2000, an additional $60 billion in royalty revenue could be lost. 
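The price-threshold mechanism described above amounts to a comparison against an inflation-adjusted ceiling. The sketch below assumes a single constant inflation rate for illustration; MMS's actual annual adjustments follow published inflation indices, and the 2 percent rate used here is purely a placeholder:

```python
def threshold_in_year(base_threshold, base_year, year, annual_inflation):
    """Inflation-adjust a price threshold (simplified: one constant rate;
    the real adjustment tracks published inflation indices)."""
    return base_threshold * (1 + annual_inflation) ** (year - base_year)

def relief_applies(market_price, base_threshold, base_year, year,
                   annual_inflation=0.02):
    """Royalty relief applies only while the market price stays below
    the inflation-adjusted threshold."""
    return market_price < threshold_in_year(
        base_threshold, base_year, year, annual_inflation)

# $28/bbl oil threshold set in 1994, checked against a hypothetical
# $60/bbl market price in 2006.
print(relief_applies(60.0, 28.0, 1994, 2006))  # False -- price exceeds the threshold
```

This is why rising prices after 2000 made the missing thresholds on the 1998 and 1999 leases so costly: with no threshold to exceed, relief continued regardless of price.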
Trying to predict the fiscal impacts of royalty relief is a complex and time-consuming task involving considerable uncertainty. We reviewed MMS's 2004 estimates and concluded that they had followed standard engineering and financial practices and had generated the estimates in good faith. However, any analysis of forgone royalties involves estimating how much oil and gas will be produced in the future, when it will be produced, and at what prices. While there are standard engineering techniques for predicting oil and gas volumes that will eventually be recovered from a lease that is already producing, there is always some level of uncertainty involved. Predicting how much oil and gas will be recovered from leases that are capable of producing but not yet connected to production infrastructure is more challenging but certainly possible. Predicting production from leases not yet drilled is the most challenging aspect of such an analysis, but there are standard geological, engineering, and statistical methods that can shed light on what reasonably could be expected from the inventory of 1996 through 2000 leases. Overall, the volume of oil and gas that will ultimately be produced is highly dependent upon price and technology, with higher prices and better technology inducing greater exploration, and ultimately production, from the remaining leases. Future oil prices, however, are highly uncertain, as witnessed by the rapidly increasing oil and gas prices over the past several years. It is therefore prudent to assess anticipated royalty losses using a range of oil and gas prices rather than a single assumed price, as was used in the MMS estimate. Given the degree of uncertainty in predicting future royalty revenues from deepwater oil and gas leases, we are using current data to carefully examine MMS's 2004 estimate that up to $80 billion in future royalty revenues could be lost. 
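The kind of scenario analysis described above--valuing forgone royalties over a range of prices, discounted to present value--can be sketched as follows. All volumes, prices, and rates in this sketch are hypothetical placeholders, not MMS figures:

```python
def forgone_royalty_pv(annual_volumes, price, royalty_rate, discount_rate):
    """Present value of forgone royalties for one price scenario.
    annual_volumes: projected royalty-exempt production per year (barrels)."""
    return sum(v * price * royalty_rate / (1 + discount_rate) ** t
               for t, v in enumerate(annual_volumes, start=1))

# Hypothetical lease: 10M, 8M, and 6M exempt barrels over three years,
# evaluated over a range of oil prices rather than a single assumed price.
volumes = [10e6, 8e6, 6e6]
for price in (40, 60, 80):  # $/bbl scenarios
    pv = forgone_royalty_pv(volumes, price,
                            royalty_rate=0.125, discount_rate=0.05)
    print(f"${price}/bbl: ${pv / 1e6:.0f} million forgone")
```

Running a range of price scenarios, as GAO suggests, shows how sensitive the loss estimate is to the price assumption: forgone royalties scale linearly with price, so doubling the assumed price doubles the estimated loss.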
There are now two additional years of production data for these leases, which will greatly improve the accuracy of estimating future production and its timing. We are also examining the impact of several variables, including changing oil and gas prices, revised estimates of the amount of oil and gas that these leases were originally expected to produce, the availability of deep water rigs to drill untested leases, and the present value of royalty payments. To fully evaluate the impacts of royalty relief, one must consider the potential benefits in addition to the costs of lost royalty revenue. For example, a potential benefit of royalty relief is that it may encourage oil and gas exploration that might not otherwise occur. Successful exploration could result in the production of additional oil and gas, which would benefit the country by increasing domestic supplies and creating employment. While GAO has not assessed the potential benefits of royalty relief, others have, including the Congressional Budget Office (CBO) in 1994, and consultants under contract with MMS in 2004. The CBO analysis was theoretical and forward-looking and concluded that the likely impact of royalty relief on new production would be very small and that the overall impact on federal royalty revenues was also likely to be small. However, CBO cautioned that the government could experience significant net losses if royalty relief was granted on leases that would have produced without the relief. The consultant's 2004 study stated that potential benefits could include increases in the number of leases sold, increases in the number of wells drilled and fields discovered, and increases in bonus bids--the amount of money that companies are willing to pay the federal government for acquiring leases. However, questions remain about the extent to which such benefits would offset the cost of lost royalty revenues. 
Although leases are no longer issued under the Deep Water Royalty Relief Act of 1995, royalty relief can be provided under two existing authorities: (1) the Secretary of the Interior's discretionary authority and (2) the Energy Policy Act of 2005. The Outer Continental Shelf Lands Act of 1953, as amended, granted the Secretary of the Interior the discretionary authority to reduce or eliminate royalties for leases issued in the Gulf of Mexico in order to promote increased production. The Secretary's exercising of this authority can effectively relieve the oil and gas producer from paying royalties. MMS administers several royalty relief programs in the Gulf of Mexico under this discretionary authority. MMS intends for these discretionary programs to provide royalty relief for leases in deep waters that were issued after 2000, deep gas wells located in shallow waters, wells nearing the end of their productive lives, and special cases not covered by other programs. The Congress also authorized additional royalty relief under the Energy Policy Act of 2005, which mandates relief for leases issued in the Gulf of Mexico during the five years following the act's passage, provides relief for some wells that would not have previously qualified for royalty relief, and addresses relief in certain areas of Alaska. Under discretionary authority, MMS administers a deep-water royalty relief program for leases that it issued after 2000. This program is similar to the program that DWRRA mandated for leases issued during the five years following its passage (1996 through 2000) in that royalty relief is dependent upon water depth and applicable royalty suspension volumes. However, this current program is implemented solely under the discretion of MMS on a sale-by-sale basis. Unlike under DWRRA, the price thresholds and the water depths to which royalty relief applies vary somewhat by lease sale. 
For example, price thresholds for leases issued in 2001 were $28 per barrel for oil and $3.50 per million British thermal units for natural gas, with adjustments for inflation since 2000. As of March 2006, MMS reported that it issued 1,897 leases with royalty relief under this discretionary authority, but only 9 of these leases were producing. To encourage the drilling of deep gas wells in the shallow waters of the Gulf of Mexico, MMS implements another program, the "deep gas in shallow water" program, under final regulations it promulgated in January 2004. MMS initiated this program to encourage additional production after noting that gas production had been steadily declining since 1997. To qualify for royalty relief, wells must be drilled in less than 200 meters of water and must produce gas from intervals below 15,000 feet. The program exempts from royalties between 15 and 25 billion cubic feet of gas per well. According to MMS's analysis, these gas volumes approximate the smallest reservoirs that could be economically developed without the benefit of an existing platform and under full royalty rates. In 2001, MMS reported that the average size of 95 percent of the gas reservoirs below 15,000 feet was 15.7 billion cubic feet, effectively making nearly all of this production exempt from royalties had it been eligible for royalty relief at that time. This program also specifies a price threshold for natural gas of $9.91 per million British thermal units in 2006, substantially exceeding the average NYMEX futures price of $6.98 for 2006, and ensuring that all gas production is exempt from royalties in 2006. Finally, MMS administers two additional royalty relief programs in the Gulf of Mexico under its discretionary authority. One program applies to leases nearing the end of their productive lives. MMS intends that its provisions will encourage the production of low volumes of oil and gas that would not be economical without royalty relief. 
Lessees must apply for this program under existing regulations. MMS administers another program for special situations not covered by the other programs. Lessees who believe that other more formal programs do not provide adequate encouragement to increase production or development can request royalty relief by making their case and submitting the appropriate data. As of March 2006, no leases were receiving royalty relief under the "end of productive life" program, and only three leases were receiving royalty relief under the "special situations" program. The Congress authorized additional royalty relief under the Energy Policy Act of 2005. Royalty relief provisions are contained in three specific sections of the act, which in effect: (1) mandate royalty relief for deep water leases sold in the Gulf of Mexico during the five years following passage of the act, (2) extend royalty relief in the Gulf of Mexico to deep gas produced in waters of more than 200 meters and less than 400 meters, and (3) specify that royalty relief also applies to certain areas off the shore of Alaska. In the first two situations, the act specifies the amount of oil and/or gas production that would qualify for royalty relief and provides that the Secretary may make royalty relief dependent upon market prices. Section 345 of the Energy Policy Act of 2005 mandates royalty relief for leases located in deep waters in the central and western Gulf of Mexico sold during the five years after the act's passage. Similar to provisions in DWRRA, specific amounts of oil and gas are exempt from royalties due to royalty suspension volumes corresponding to the depth of water in which the leases are located. However, production volumes are smaller than those authorized under DWRRA, and this specific section of the Energy Policy Act clearly states that the Secretary may place limitations on royalty relief based on market prices. 
For the three sales that MMS conducted since the passage of the act, MMS included price thresholds establishing the prices above which royalty relief would no longer apply. These price thresholds were $39 per barrel for oil and $6.50 per million British thermal units for gas, adjusted upward for inflation that has occurred since 2004. The royalty-free amounts, referred to as royalty suspension volumes, are as follows: 5 million barrels of oil equivalent per lease between 400 and 800 meters; 9 million barrels of oil equivalent per lease between 800 and 1,600 meters; 12 million barrels of oil equivalent per lease between 1,600 and 2,000 meters; and 16 million barrels of oil equivalent per lease in water greater than 2,000 meters. MMS has already issued 1,105 leases under this section of the act. Section 344 of the Energy Policy Act of 2005 contains provisions that authorize royalty relief for deep gas wells in additional waters of the Gulf of Mexico, effectively expanding the existing royalty-relief program for "deep gas in shallow water" that MMS administers under pre-existing regulations. The existing program has now expanded from waters less than 200 meters to waters less than 400 meters. A provision within the act exempts from royalties gas that is produced from intervals in a well below 15,000 feet so long as the well is located in waters of the specified depth. Although the act does not specifically cite the amount of gas to be exempt from royalties, it provides that this amount should not be less than the existing program, which currently ranges from 15 to 25 billion cubic feet. The act also contains an additional incentive that could encourage deeper drilling--royalty relief is authorized on not less than 35 billion cubic feet of gas produced from intervals in wells greater than 20,000 feet deep. The act also states that the Secretary may place limitations on royalty relief based on market prices. 
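The section 345 depth bands listed above lend themselves to a simple lookup. The sketch below treats each band as a half-open interval, which is an assumption on our part--the testimony does not specify how leases sitting exactly on a boundary depth are assigned:

```python
def epact_suspension_volume(water_depth_m):
    """Royalty suspension volume (million barrels of oil equivalent)
    per lease under section 345 of the Energy Policy Act of 2005,
    keyed to water depth. Boundary handling here is assumed."""
    if water_depth_m >= 2000:
        return 16
    if water_depth_m >= 1600:
        return 12
    if water_depth_m >= 800:
        return 9
    if water_depth_m >= 400:
        return 5
    return 0  # waters shallower than 400 meters get no section 345 volume

print(epact_suspension_volume(1000))  # 9
```

As with DWRRA's 17.5/52.5/87.5 schedule, the volumes step up with depth, preserving the incentive to drill in deeper water, though at much smaller exempt amounts.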
Finally, the Energy Policy Act of 2005 contains provisions addressing royalty relief in Alaska that MMS is already providing. Section 346 of the act amends the Outer Continental Shelf Lands Act of 1953 by authorizing royalty relief for oil and gas produced off the shore of Alaska. MMS has previously included royalty relief provisions within notices for sales in the Beaufort Sea of Alaska in 2003 and 2005. All of these sales offered royalty relief for anywhere from 10 million to 45 million barrels of oil, depending on the size of the lease and the depth of water. Whether leases will be eligible for royalty relief and the amount of this royalty relief is also dependent on the price of oil. There currently is no production in the Beaufort Sea. Although there have been no sales to date under this provision of the act, MMS is proposing royalty relief for a sale in the Beaufort Sea in 2007. Section 347 of the Energy Policy Act also states that the Secretary may reduce the royalty on leases within the Naval Petroleum Reserve of Alaska in order to encourage the greatest ultimate recovery of oil or gas or in the interest of conservation. Although this authority already exists under the Naval Petroleum Reserves Production Act of 1976, as amended, the Secretary must now consult with the State of Alaska, the North Slope Borough, and any Regional Corporation whose lands may be affected. In order to meet U.S. energy demands, environmentally responsible development of our nation's oil and gas resources should be part of any national energy plan. Development, however, should not mean that the American people forgo a reasonable rate of return for the extraction and sale of these resources, especially in light of the current and long-range fiscal challenges facing our nation, high oil and gas prices, and record industry profits. 
Striking a balance between encouraging domestic production in order to meet the nation's increasing energy needs and ensuring a fair rate of return for the American people will be challenging. Given the record of legal challenges and mistakes made in implementing royalty relief to date, we believe this balance must be struck in careful consideration of both the costs and benefits of all royalty relief. As the Congress continues its oversight of these important issues, GAO looks forward to supporting its efforts with additional information and analysis on royalty relief and related issues. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee may have at this time. For further information about this testimony, please contact me, Mark Gaffigan, at 202-512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Contributors to this testimony include Dan Haas, Assistant Director; Ron Belak; John Delicath; Glenn Fischer; Frank Rusco; and Barbara Timmerman. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Oil and gas production from federal lands and waters is vital to meeting the nation's energy needs. As such, oil and gas companies lease federal lands and waters and pay royalties to the federal government based on a percentage of the oil and gas that they produce. The Minerals Management Service (MMS), an agency in the Department of the Interior, is responsible for collecting royalties from these leases. In order to promote oil and gas production, the federal government at times and in specific cases has provided "royalty relief," waiving or reducing the royalties that companies must pay. However, as production from these leases grows and oil and gas prices have risen since a major 1995 royalty relief act, questions have emerged about the financial impacts of royalty relief. Based on our work to date, GAO's statement addresses (1) the likely fiscal impacts of royalty relief on leases issued under the Outer Continental Shelf Deep Water Royalty Relief Act of 1995 and (2) other authority for granting royalty relief that could further impact future royalty revenue. To address these issues, our ongoing work has included, among other things, analyses of key production data maintained by MMS and reviews of appropriate portions of the Outer Continental Shelf Deep Water Royalty Relief Act of 1995, the Energy Policy Act of 2005, and Interior's regulations on royalty relief. While precise estimates remain elusive at this time, our work to date shows that royalty relief under the Outer Continental Shelf Deep Water Royalty Relief Act of 1995 will likely cost billions of dollars in forgone royalty revenue--at least $1 billion of which has already been lost. In October 2004, MMS estimated that forgone royalties on deep water leases issued under the act from 1996 through 2000 could be as high as $80 billion. However, there is much uncertainty in these estimates. 
This uncertainty stems from ongoing legal challenges and other factors that make it unclear how many leases will ultimately receive royalty relief and the inherent complexity in forecasting future royalties. We are currently assessing MMS's estimate in light of changing oil and gas prices, revised estimates of future oil and gas production, and other factors. Additional royalty relief that can further impact future royalty revenues is currently provided under the Secretary of the Interior's discretionary authority and the Energy Policy Act of 2005. Discretionary programs include royalty relief for certain deep water leases issued after 2000, certain deep gas wells drilled in shallow waters, and wells nearing the end of their productive lives. The Energy Policy Act of 2005 mandates relief for leases issued in the Gulf of Mexico during the five years following the act's passage, provides relief for some gas wells that would not have previously qualified for royalty relief, and addresses relief in certain areas of Alaska.
Cargo containers are an important segment of maritime commerce. Approximately 90 percent of the world's cargo moves by container. Each year, approximately 16 million oceangoing cargo containers enter the U.S., carried aboard thousands of container vessels. In 2002, approximately 7 million containers arrived at U.S. seaports, carrying more than 95 percent of the nation's non-North American trade by weight and 75 percent by value. Many experts on terrorism--including those at the Federal Bureau of Investigation and academic, think tank, and business organizations--have concluded that the movement of oceangoing cargo containers is vulnerable to some form of terrorist action. A terrorist incident at a seaport, in addition to killing people and causing physical damage, could have serious economic consequences. In a 2002 simulation of a terrorist attack involving cargo containers, every seaport in the United States was shut down, resulting in a loss of $58 billion in revenue to the U.S. economy, including spoilage, loss of sales, and manufacturing slowdowns and halts in production. CBP is responsible for preventing terrorists and weapons of mass destruction from entering the United States. As part of this responsibility, it has the mission to address the potential threat posed by the movement of oceangoing containers. To perform this mission, CBP has inspectors at the ports of entry into the United States. While most of the inspectors assigned to seaports perform physical inspections of goods entering the country, some are "targeters"--they review documents and intelligence reports and determine which cargo containers should undergo additional documentary reviews and/or physical inspections. These determinations are based not just on concerns about terrorism but also on concerns about illegal narcotics and/or other contraband. 
The CBP Commissioner said that the large volume of imports and its limited resources make it impossible to physically inspect all oceangoing containers without disrupting the flow of commerce. The Commissioner also said it is unrealistic to expect that all containers warrant such inspection because each container poses a different level of risk based on a number of factors including the exporter, the transportation providers, and the importer. These concerns led to CBP implementing a layered approach that attempts to focus resources on potentially risky cargo containers while allowing other cargo containers to proceed without disrupting commerce. As part of its layered approach, CBP employs its Automated Targeting System (ATS) computer model to review documentation on all arriving containers and help select or "target" containers for additional documentary review and/or physical inspection. The ATS was originally designed to help identify illegal narcotics in cargo containers. ATS automatically matches its targeting rules against the manifest and other available data for every arriving container, and assigns a level of risk (i.e., low, medium, high) to each container. At the port level, inspectors use ATS, as well as other data (e.g., intelligence reports), to determine whether to inspect a particular container. In addition, CBP has a program, called the Supply Chain Stratified Examination, which supplements the ATS by randomly selecting additional containers to be physically examined. The results of the random inspection program are to be compared to the results of ATS inspections to improve targeting. If CBP officials decide to inspect a particular container, they might first use equipment such as the Vehicle and Cargo Inspection System (VACIS) that takes a gamma-ray image of the container so inspectors can see any visual anomalies. With or without VACIS, inspectors can open a container and physically examine its contents. 
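The general pattern behind rule-based targeting systems such as ATS--matching weighted rules against manifest data, summing a score, and binning it into low, medium, or high--can be illustrated as follows. CBP's actual rules, weights, and cutoffs are not public; everything in this sketch, including the field names, is hypothetical:

```python
# Illustrative targeting rules: (description, predicate over a manifest
# dict, weight). None of these reflect actual ATS rules or weights.
RULES = [
    ("unknown shipper",   lambda m: m.get("shipper_history", 0) == 0, 40),
    ("vague cargo desc.", lambda m: m.get("description") in ("FAK", "general cargo"), 30),
    ("many transshipments", lambda m: m.get("transshipment_ports", 0) > 2, 20),
]

def score_container(manifest):
    """Sum the weights of all rules the manifest data trips."""
    return sum(weight for _, predicate, weight in RULES if predicate(manifest))

def risk_level(score):
    """Bin a raw score into the low/medium/high characterization."""
    if score >= 60:
        return "high"
    if score >= 30:
        return "medium"
    return "low"

m = {"shipper_history": 0, "description": "FAK", "transshipment_ports": 1}
print(risk_level(score_container(m)))  # "high" (40 + 30 = 70)
```

The random inspections of the Supply Chain Stratified Examination then serve as a check on such rules: containers the model scores low are still sampled, and the results feed back into rule refinement.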
Other components of the layered approach include the Container Security Initiative (CSI) and the Customs-Trade Partnership Against Terrorism (C-TPAT). CSI is an initiative whereby CBP places staff at designated foreign seaports to work with foreign counterparts to identify and inspect high-risk containers for weapons of mass destruction before they are shipped to the United States. C-TPAT is a cooperative program between CBP and members of the international trade community in which private companies agree to improve the security of their supply chains in return for a reduced likelihood that their containers will be inspected. Risk management is a systematic process to analyze threats, vulnerabilities, and the criticality (or relative importance) of assets to better support key decisions linking resources with prioritized efforts for results. Risk management is used by many organizations in both government and the private sector. In recent years, we have consistently advocated the use of a risk management approach to help implement and assess responses to various national security and terrorism issues. We have concluded that without a risk management approach that provides insights about the present threat and vulnerabilities as well as the organizational and technical requirements necessary to achieve a program's goals, there is little assurance that programs to combat terrorism are prioritized and properly focused. Risk management could help to more effectively and efficiently prepare defenses against acts of terrorism and other threats. Key elements of a risk management approach are listed below. Threat assessment: A threat assessment identifies adverse events that can affect an entity, which may be present at the global, national, or local level. Vulnerability assessment: A vulnerability assessment identifies weaknesses in physical structures, personnel protection systems, processes, or other areas that may be exploited by terrorists.
Criticality assessment: A criticality assessment identifies and evaluates an entity's assets or operations based on a variety of factors, including importance of an asset or function. Risk assessment: A risk assessment qualitatively and/or quantitatively determines the likelihood of an adverse event occurring and the severity, or impact, of its consequences. Risk characterization: Risk characterization involves designating risk on a scale, for example, low, medium, or high. Risk characterization forms the basis for deciding which actions are best suited to mitigate risk. Risk mitigation: Risk mitigation is the implementation of mitigating actions, taking into account risk, costs, and other implementation factors. Systems Approach: An integrated systems approach to risk management encompasses taking action in all organizational areas, including personnel, processes, technology, infrastructure, and governance. Monitoring and evaluation: Monitoring and evaluation is a continuous repetitive assessment process to keep risk management current and relevant. It includes external peer review, testing, and validation. Modeling can be an important part of a risk management approach. To assess modeling practices related to ATS, we interviewed terrorism experts and representatives of the international trade community who were familiar with modeling related to terrorism and/or ATS and reviewed relevant literature. There are at least four recognized modeling practices that are applicable to ATS as a decision-support tool. Conducting external peer review: External peer review is a process that includes an assessment of the model by independent and qualified external peers. While external peer reviews cannot ensure the success of a model, they can increase the probability of success by improving the technical quality of projects and the credibility of the decision- making process. 
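Under the simplifying assumption that the threat, vulnerability, and criticality assessments each produce a 1-to-5 ordinal rating, the way a risk assessment feeds a risk characterization can be sketched as follows. The multiplicative combination and the thresholds are illustrative conventions, not a formula prescribed by this framework.

```python
def risk_score(threat, vulnerability, criticality):
    """Risk assessment: combine three 1-5 ordinal ratings into one
    score. Multiplication is one common convention; weighted sums are
    equally defensible."""
    for rating in (threat, vulnerability, criticality):
        if not 1 <= rating <= 5:
            raise ValueError("ratings use a 1-5 ordinal scale")
    return threat * vulnerability * criticality  # range 1..125

def characterize(score, medium=20, high=60):
    """Risk characterization: designate the score on a low/medium/high
    scale, which then forms the basis for mitigation decisions."""
    if score >= high:
        return "high"
    return "medium" if score >= medium else "low"
```

Separate likelihood and consequence scores would fit the same framework; the essential point is that the characterization step yields a defensible low/medium/high designation to drive mitigation choices.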
Incorporating additional types of information: To identify documentary inconsistencies, targeting models need to incorporate various types of information to perform complex "linkage" analyses. Using only one type of information will not be sufficient to yield reliable targeting results. Testing and validating through simulated terrorist events: A model needs to be tested by staging simulated events to validate it as a targeting tool. Simulated events could include "red teams" that devise and deploy tactics in an attempt to define a system's weaknesses, and "blue teams" that devise ways to mitigate the resulting vulnerabilities identified by the red team. Using random inspections to supplement targeting: A random selection process can not only help identify and mitigate residual risk (i.e., the risk remaining after the model-generated inspections have been done) but also help evaluate the performance of the model relative to other approaches. CBP has taken several positive steps to address the terrorism risks posed by oceangoing cargo containers. For example, CBP established the National Targeting Center to serve as the national focal point for targeting imported cargo containers and distributing periodic intelligence alerts to the ports. CBP also modified its ATS, which was originally designed to identify narcotics contraband, to include targeting rules for terrorism that could identify high-risk containers for possible physical screening and inspection. In addition, CBP developed a training course for staff responsible for targeting cargo containers. Further, CBP promulgated regulations aimed at improving the quality and timeliness of transmitted cargo manifest data for use in the targeting system.
However, while its strategy incorporates some elements of risk management, CBP has not performed a comprehensive set of threat, criticality, vulnerability, and risk assessments that experts said are vital for determining levels of risk for each container and the types of responses necessary to mitigate that risk. Regarding recognized modeling practices, CBP has not subjected ATS to external peer review or testing as recommended by the experts we contacted. Further, CBP has implemented a random inspection program designed to improve its targeting rules, but officials at ports can waive the inspections. CBP has recognized the potential threat posed by oceangoing cargo containers and has reviewed and updated some aspects of its layered targeting strategy. According to CBP officials, several of the steps that CBP has taken to improve its targeting strategy have resulted in more focused targeting of cargo containers that may hold weapons of mass destruction. CBP officials told us that, given the urgency to take steps to protect against terrorism after the September 11, 2001, terrorist attacks, they had to take an "implement and amend" approach. That is, they had to immediately implement targeting activities with the knowledge they would have to amend them later. Steps taken by CBP include the following: In November 2001, the U.S. Customs Service established the National Targeting Center to serve as the national focal point for targeting imported cargo for inspection. Among other things, the National Targeting Center interacts with the intelligence community and distributes to the ports any intelligence alerts it receives. The National Targeting Center also assists targeters in conducting research on incoming cargo, attempts to improve the targeting of cargo, and manages a national targeting training program for CBP targeters. In August 2002, CBP modified the ATS as an anti-terrorism tool by developing terrorism-related targeting rules and implementing them nationally.
According to CBP officials responsible for ATS, these targeting rules were developed in consultation with selected intelligence agencies, foreign governments, and companies. CBP is now in the process of enhancing the ATS terrorism-related rules. The newest version of the ATS rules, which is still being tested, gives added risk points when certain rules apply collectively to the same container. CBP refers to this as the "bundling" of rules. In these circumstances, CBP would assume an elevated level of risk for the cargo. Related to this, CBP is currently developing and implementing further enhancements--known as the "findings module"--to capture additional information related to individual inspections of cargo containers, such as whether an inspection resulted in the discovery of contraband. In 2002, CBP also developed a 2-week national training course to train staff in targeting techniques. The course is intended to help ensure that seaport targeters have the necessary knowledge and ability to conduct effective targeting. The course is voluntary and is conducted periodically during the year at the Los Angeles, Long Beach, and Miami ports, and soon it will be conducted at the National Targeting Center. In fiscal year 2003, approximately 442 inspectors completed the formal training, and CBP plans to train an additional 374 inspectors in fiscal year 2004. In February 2003, CBP began enforcing new regulations about cargo manifests--called the "24-hour rule"--which require the submission of complete and accurate manifest information 24 hours before a container is loaded on a ship at a foreign port. Penalties for noncompliance can include a CBP order not to load a container on a ship at the port of origin or monetary fines. The rule is intended to improve the quality and timeliness of the manifest information submitted to CBP, which is important because CBP relies extensively on manifest information for targeting.
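The "bundling" enhancement described above--extra risk points when certain rules fire together on the same container--can be illustrated with a short sketch. The rule names, weights, and bonus values are invented for illustration only.

```python
# Hypothetical sketch of rule "bundling": when every rule in a bundle
# fires on the same container, bonus points are added on top of the
# individual rule weights. All names and numbers are invented.

RULE_WEIGHTS = {"rule_a": 10, "rule_b": 15, "rule_c": 20}

BUNDLES = [
    # (rules that must fire together, bonus points)
    ({"rule_a", "rule_b"}, 25),
    ({"rule_a", "rule_b", "rule_c"}, 40),
]

def bundled_score(fired):
    """Sum individual rule weights, then add a bonus for each bundle
    whose rules all fired on this container."""
    fired = set(fired)
    base = sum(RULE_WEIGHTS[r] for r in fired)
    bonus = sum(b for bundle, b in BUNDLES if bundle <= fired)
    return base + bonus
```

With these numbers, the rules firing individually score 10, 15, and 20 points, but all three firing on one container scores 110 rather than 45, reflecting the elevated level of risk assumed when rules apply collectively.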
According to CBP officials we contacted, although no formal evaluations have been done, the 24-hour rule is beginning to improve both the quality and timeliness of manifest information. CBP officials acknowledged, however, that although improved, manifest information still is not always accurate or reliable for targeting purposes. While CBP's targeting strategy incorporates some elements of risk management, our discussions with terrorism experts and our comparison of CBP's targeting system to recognized risk management practices showed that the strategy does not fully incorporate all key elements of a risk management framework. Elements not fully incorporated are discussed below. CBP has not performed a comprehensive set of assessments for cargo containers. CBP has attempted to assess the threat of cargo containers through contact with governmental and non-governmental sources. However, it has not assessed the vulnerability of cargo containers to tampering or exploitation throughout the supply chain, nor has it assessed which port assets and operations are the most critical in relation to their mission and function. These assessments, in addition to threat assessments, are needed to understand and identify actions to mitigate risk. CBP has not conducted a risk characterization for different forms of cargo, or for the different modes of transportation used to import cargo. CBP has made some efforts in this regard by characterizing the risk of each oceangoing cargo container as low, medium, or high. But CBP has not performed a risk characterization to assess the overall risk of cargo containers, or determined how this overall risk characterization of cargo containers compares with sea cargo arriving in other forms, such as bulk cargo (e.g., petroleum and chemical gas shipments) or break-bulk cargo (e.g., steel and wood shipments).
Additionally, CBP has not conducted risk characterization to compare the risk of cargo containers arriving by sea with the risk of cargo containers (or other cargo) arriving by other modes, such as truck or rail. These characterizations would enable CBP to better assess and prioritize the risks posed by oceangoing cargo containers and incorporate mitigation activities in an overall strategy. CBP actions at the ports to mitigate risk are not part of an integrated systems approach. Risk mitigation encompasses taking action in all organizational areas, including personnel, processes, technology, infrastructure, and governance. An integrated approach would help assure that taking action in one or more areas would not create unintended consequences in another. For example, taking action in the areas of personnel and technology--adding inspectors and scanning equipment at a port--without at the same time ensuring that the port's infrastructure is appropriately reconfigured to accept these additions and their potential impact (e.g., more physical examinations of containers), could add to already crowded conditions at that port and ultimately defeat the purpose of the original actions. We recognize that CBP implemented the ATS terrorist targeting rules in August 2002 due to the pressing need to utilize a targeting strategy to protect cargo containers against terrorism, and that CBP intends to amend the strategy as necessary. However, implementing a comprehensive risk management framework would help to ensure that information is available to management to make choices about the best use of limited resources. This type of information would help CBP obtain optimal results and would identify potential enhancements that are well-conceived, cost-effective, and work in tandem with other system components. Thus, it is important for CBP to amend its targeting strategy within a risk management framework that takes into account all of the system's components and their vital linkages. 
Terrorism experts and representatives from the international trade community who are familiar with CBP's targeting strategy and/or terrorism modeling told us that the ATS is not fully consistent with recognized modeling practices. Challenges exist in each of the four recognized modeling practice areas that these individuals identified: external peer review, incorporating different types of information, testing and validating through simulated events, and using random inspections to supplement targeting. With respect to external review, CBP consulted primarily with in-house subject matter experts when developing the ATS rules related to terrorism. CBP officials told us that they considered these consultations to be an extensive process of internal, or governmental, review that helped adapt ATS to meet the terrorist threat. With a few exceptions, CBP did not solicit input from the extended international trade community or from external terrorism and modeling experts. With respect to the sources and types of information, ATS relies on the manifest as its principal data input, and CBP does not mandate the transmission of additional types of information before a container's risk level is assigned. Terrorism experts, members of the international trade community, and CBP inspectors at the ports we visited characterized the ship's manifest as one of the least reliable or useful types of information for targeting purposes. In this regard, one expert cautioned that even if ATS were an otherwise competent targeting model, there is no compensating for poor input data. Accordingly, if the input data are poor, the outputs (i.e., the risk-assessed targets) are not likely to be of high quality. Another problem with manifests is that shippers can revise them up to 60 days after the arrival of the cargo container.
According to CBP officials, about one third of these manifest revisions resulted in higher risk scores by ATS--but by the time these revisions were received, the cargo container may already have left the port. These problems with manifest data increase the potential value of additional types of information. With respect to testing and validation, CBP has not attempted to test and validate ATS through simulated events. The National Targeting Center Director told us that 30 "events" (either real or simulated) are needed to properly test and validate the system, yet CBP has not conducted such simulations. Without testing and validation, CBP will not know whether ATS is a statistically valid model and the extent to which it can identify high-risk containers with reasonable assurance. The only two known instances of simulated tests of the targeting system were conducted without CBP's approval or knowledge by American Broadcasting Company (ABC) News in 2002 and 2003. In an attempt to simulate terrorists smuggling highly enriched uranium into the United States, ABC News sealed depleted uranium into a lead-lined pipe that was placed into a suitcase and later put into a cargo container. In both instances, CBP targeted the container that ABC News used to import the uranium, but it did not detect a visual anomaly from the lead-lined pipe using the VACIS and therefore did not open the container. With respect to instituting random inspections, CBP has a process to randomly select and examine containers regardless of risk. The program--the Supply Chain Stratified Examination--originally measured compliance with trade laws; CBP has refocused it to also measure border security compliance. One aspect of this refocused program is random inspections. However, CBP guidance states that port officials may waive the random inspections if available resources are needed to conduct inspections called for by ATS targeting or intelligence tips.
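A small simulation illustrates the statistical consequence of such waivers: if waivers are concentrated at busy ports, the containers actually inspected under the random program stop mirroring the overall traffic mix. The port names, traffic shares, and waiver rates below are invented for illustration.

```python
import random

random.seed(0)  # fixed seed for a reproducible illustration

# (share of national container traffic, probability a random
#  inspection at that port is waived) -- hypothetical values
PORTS = {
    "busy_port":  (0.6, 0.8),
    "mid_port":   (0.3, 0.3),
    "small_port": (0.1, 0.0),
}

def simulate(n=100_000):
    """Randomly select n containers in proportion to traffic, then
    drop the selections whose inspections are waived."""
    names = list(PORTS)
    weights = [share for share, _ in PORTS.values()]
    selected, inspected = [], []
    for _ in range(n):
        port = random.choices(names, weights=weights)[0]
        selected.append(port)
        if random.random() >= PORTS[port][1]:  # inspection not waived
            inspected.append(port)
    return selected, inspected

def share(sample, port):
    """Fraction of the sample drawn from the given port."""
    return sample.count(port) / len(sample)
```

With these figures the busy port accounts for about 60 percent of random selections but only roughly 28 percent of completed inspections, so conclusions drawn from the inspected set would understate that port's traffic and weaken any feedback into the targeting rules.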
Accordingly, although the containers targeted for inspection may be randomly selected, the containers actually inspected under the program may not be a representative random sample. Therefore, CBP may not be able to learn all possible lessons from the program and, by extension, may not be in a position to use the program to improve the ATS rules. Our visits to six seaports found that the implementation of CBP's targeting strategy faces a number of challenges. Specifically, CBP does not have a uniform national system for reporting and analyzing inspection statistics by risk category that could be used for program management and oversight. We also found that the targeters at ports who completed the national training program were not tested and certified, so there is no assurance that they have the necessary skills to perform targeting functions. Further, we found that space limitations and safety concerns constrain the ports in their utilization of screening equipment, which can affect the efficiency of examinations. A CBP official told us that CBP does not have a national system for reporting and analyzing inspection statistics by risk category. While officials at all the ports provided us with inspection data, the data from some ports were generally not available by risk level, were not uniformly reported, were difficult to interpret, and were not complete. In addition, we had to contact ports several times to obtain these data, indicating that basic data on inspections were not readily available. All five ports that gave information on sources of data said they had extracted data from the national Port Tracking System. However, this system did not include information on the number of non-intrusive examinations or physical examinations conducted, according to risk category. Moreover, a CBP headquarters official stated that the data in the Port Tracking System are error prone, including some errors that result from double counting.
One port official told us that the Port Tracking System was not suitable for extracting the examination information we had requested, so they had developed a local report to track and report statistics. Our findings are consistent with a March 2003 Treasury Department Inspector General report, which found, among other things, that inspection results were not documented in a consistent manner among the ports and that examination statistics did not accurately reflect inspection activities. A CBP official said that they are in the process of developing a replacement for the Port Tracking System to better capture enforcement statistics, but this new system is still in its infancy. Separately, CBP officials said that they are trying to capture the results of cargo inspections through an enhancement to ATS called the findings module. A National Targeting Center official stated that the findings module would allow for more consistency in capturing standardized inspection results and would also serve as a management control tool. National Targeting Center officials said that the module would be able to categorize examination results according to the level of risk. A CBP official told us the module was being implemented nationwide in late November 2003. While the ATS findings module shows potential as a useful tool for capturing inspection results, it is too soon to tell whether it will provide CBP management with consistent, complete inspection data for analyzing and improving the targeting strategy. While over 400 targeters have completed the new national targeting training, CBP has no mechanism to test or certify their competence. These targeters play a crucial role because they are responsible for making informed decisions about which cargo containers will be inspected and which containers will be released. According to National Targeting Center officials, the goal is for each U.S.
seaport to have at least one targeter who has completed national targeting training so that the knowledge and skills gained at the training course can be shared with other targeters at their port of duty. To train other staff, however, the targeter who took the training must have attained a thorough understanding of course contents and their application at the ports. Because the targeters who complete the training are not tested or certified on course materials, CBP has little assurance that the targeters can perform their duties effectively or that they could train others to perform effectively. CBP could have better assurance that staff can perform well if CBP tested or certified their proficiency after they have completed the national targeting training. This would also increase the likelihood that course participants are in a position to effectively perform targeting duties and could train others at the ports on how to target potentially suspicious cargo. Further, it would lessen the likelihood that those who did not do well in class are placed in these important positions. Such testing and certification of targeting proficiency would demonstrate CBP's intent to ensure that those responsible for making decisions about whether and how to inspect containers have the knowledge and skills necessary to perform their jobs well. One of the key components of the CBP targeting and inspection process is the use of non-intrusive inspection equipment. CBP uses inspection equipment, including VACIS gamma-ray imaging technology, to screen selected cargo containers and to help inspectors decide which containers to further examine. A number of factors constrain the use of non-intrusive inspection equipment, including crowded port terminals, mechanical breakdowns, inclement weather conditions, and the safety concerns of longshoremen at some ports. Some of these constraints, such as space limitations and inclement weather conditions, are difficult if not impossible to avoid.
According to CBP and union officials we contacted, concern about the safety of VACIS is a constraint to using inspection equipment. Union officials representing longshoremen at some ports expressed concerns about the safety of driving cargo containers through the VACIS because it emits gamma rays when taking an image of the inside of the cargo container. Towing cargo containers through a stationary VACIS unit reportedly takes less time and physical space than moving the VACIS equipment over stationary cargo containers that have been staged for inspection purposes. As a result of these continuing safety concerns, some longshoremen are unwilling to drive containers through the VACIS. CBP's response to these longshoremen's concerns has been to stage containers away from the dock, arraying containers in rows at port terminals so that the VACIS can be driven over a group of containers for scanning purposes. However, as seaports and port terminals are often crowded, and there is often limited space to expand operations, it can be space-intensive and time consuming to stage containers. Not all longshoremen's unions have safety concerns regarding VACIS inspections. For example, at the Port of New York/New Jersey, longshoremen's concerns over the safety of operating the VACIS were addressed after the union contacted a consultant and received assurances about the safety of the equipment. Similar efforts by CBP to convince longshoremen's unions about the safety of VACIS have not been successful at some of the other ports we visited. In closing, as part of a program to prevent terrorists from smuggling weapons of mass destruction into the United States, CBP has taken a number of positive steps to target cargo containers for inspection. However, we found several aspects of their targeting strategy are not consistent with recognized risk management and modeling practices. 
CBP faces a number of other challenges in implementing its strategy to identify and inspect suspicious cargo containers. We are now in the process of working with CBP to discuss our preliminary findings and to develop potential recommendations to resolve them. We plan to provide the subcommittee with our final report early next year. This concludes my statement. I would now be pleased to answer any questions for the subcommittee. For further information about this testimony, please contact me at (202) 512-8816. Seto Bagdoyan, Stephen L. Caldwell, Kathi Ebert, Jim Russell, Brian Sklar, Keith Rhodes, and Katherine Davis also made key contributions to this statement. To assess whether the CBP's development of its targeting strategy is consistent with recognized risk management and modeling practices, we compiled a risk management framework and recognized modeling practices, drawn from an extensive review of relevant public and private sector work, prior GAO work on risk management, and our interviews with terrorism experts. We selected these individuals based on their involvement with issues related to terrorism, specifically concerning containerized cargo, the ATS, and modeling. Several of the individuals that we interviewed were referred from within the expert community, while others were chosen from public texts on the record. We did not assess ATS's hardware or software, the quality of the threat assessments that CBP has received from the intelligence community, or the appropriateness or risk weighting of its targeting rules. To assess how well the targeting strategy has been implemented at selected seaports in the country, we visited various CBP facilities and the Miami, Los Angeles-Long Beach, Philadelphia, New York-New Jersey, New Orleans, and Seattle seaports. These seaports were selected based on the number of cargo containers processed and their geographic dispersion. 
At these locations, we observed targeting and inspection operations; met with CBP management and inspectors to discuss issues related to targeting and the subsequent physical inspection of containers; and reviewed relevant documents, including training and operational manuals, and statistical reports of targeted and inspected containers. At the seaports, we also met with representatives of shipping lines, operators of private cargo terminals, the local port authorities, and Coast Guard personnel responsible for the ports' physical security. We also met with terrorism experts and representatives from the international trade community to obtain a better understanding of the potential threat posed by cargo containers and possible approaches to countering the threat, such as risk management. We conducted our work from January to November 2003 in accordance with generally accepted government auditing standards. Maritime Security: Progress Made in Implementing Maritime Transportation Security Act, but Concerns Remain. GAO-03-1155T. Washington, D.C.: September 9, 2003. Container Security: Expansion of Key Customs Programs Will Require Greater Attention to Critical Success Factors. GAO-03-770. Washington, D.C.: July 25, 2003. Homeland Security: Challenges Facing the Department of Homeland Security in Balancing its Border Security and Trade Facilitation Missions. GAO-03-902T. Washington, D.C.: June 16, 2003. Container Security: Current Efforts to Detect Nuclear Material, New Initiatives, and Challenges. GAO-03-297T. Washington, D.C.: November 18, 2002. Customs Service: Acquisition and Deployment of Radiation Detection Equipment. GAO-03-235T. Washington, D.C.: October 17, 2002. Port Security: Nation Faces Formidable Challenges in Making New Initiatives Successful. GAO-02-993T. Washington, D.C.: August 5, 2002. Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001. 
Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001. Federal Research: Peer Review Practices at Federal Science Agencies Vary. GAO/RCED-99-99. Washington, D.C.: March 17, 1999. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
After the attacks of September 11, 2001, concerns intensified that terrorists would attempt to smuggle a weapon of mass destruction into the United States. One possible method for terrorists to smuggle such a weapon is to use one of the 7 million cargo containers that arrive at our nation's seaports each year. The Department of Homeland Security's U.S. Customs and Border Protection (CBP) is responsible for addressing the potential threat posed by the movement of oceangoing cargo containers. Since CBP cannot inspect all arriving cargo containers, it uses a targeting strategy, which includes an automated targeting system. This system targets some containers for inspection based on a perceived level of risk. In this testimony, GAO provides preliminary findings on its assessment of (1) whether CBP's development of its targeting strategy is consistent with recognized key risk management and computer modeling practices and (2) how well the targeting strategy has been implemented at selected seaports around the country. CBP has taken steps to address the terrorism risks posed by oceangoing cargo containers. These include establishing a National Targeting Center, refining its automated targeting system, instituting a national training program for its personnel that perform targeting, and promulgating regulations to improve the quality and timeliness of data on cargo containers. However, while CBP's strategy incorporates some elements of risk management, it does not include other key elements, such as a comprehensive set of criticality, vulnerability and risk assessments that experts told GAO are necessary to determine risk and the types of responses necessary to mitigate that risk. Also, CBP's targeting system does not include a number of recognized modeling practices, such as subjecting the system to peer review, testing and validation. 
By incorporating the missing elements of a risk management framework and following certain recognized modeling practices, CBP will be in a better position to protect against terrorist attempts to smuggle weapons of mass destruction into the United States. CBP faces a number of challenges at the six ports we visited. CBP does not have a national system for reporting and analyzing inspection statistics, and the data provided to us by ports were generally not available by risk level, were not uniformly reported, were difficult to interpret, and were incomplete. CBP officials told us they have just implemented a new module for their targeting system, but it is too soon to tell whether it will provide consistent, complete inspection data for analyzing and improving the targeting strategy. In addition, CBP staff who received the national targeting training were not tested or certified to ensure that they had learned the basic skills needed to provide effective targeting. Further, space limitations and safety concerns about inspection equipment constrained the ports in their utilization of screening equipment, which has affected the efficiency of examinations.
Discharge permits establish limits on the amounts and types of pollutants that can be released into waterways. Under the Clean Water Act, concentrated animal feeding operations that discharge pollutants to surface waters must obtain permits from EPA or authorized states. However, unlike municipal and most industrial facilities that are allowed to discharge some waste, concentrated animal feeding operations are required to construct and operate facilities that do not release any waste to surface waters, except in extraordinary circumstances. Under EPA's prior regulations, animal feeding operations could be defined as CAFOs and require discharge permits if they, among other things, had more than 1,000 animal units; had more than 300 animal units and either discharged through a man-made device into navigable waters or discharged directly into waters of the United States that originate outside the facility; or were of any size but had been determined by EPA or the state permitting authority to contribute significantly to water pollution. Under these regulations, a large animal feeding operation did not need a permit if it discharged only during a 25-year, 24-hour storm event--the amount of rainfall during a 24-hour period that occurs on average once every 25 years or more. In addition, the regulations did not generally require permits for chicken operations that use dry manure-handling systems--that is, systems that do not use water to handle their waste. Further, animal wastes that were applied to crop and pastureland were generally not regulated. EPA has authorized 44 states and the U.S. Virgin Islands to administer the discharge permit program for CAFOs. To become an authorized state, a state must have discharge permit requirements that are at least as stringent as those imposed under the federal program and must include several key provisions.
These provisions include allowing for public participation in issuing permits; issuing permits that must be renewed every 5 years; including authority for EPA and authorized states to take enforcement action against those who violate permit conditions; and providing for public participation in the state enforcement process by either allowing the public to participate in any civil or administrative action or by providing assurance that the state will investigate citizen complaints. According to EPA, public participation in the permitting and enforcement process is critical because it allows the public to express its views on the proposed operations and to assist EPA and state authorities in ensuring that permitted operations remain in compliance. The CAFO program has had two major shortcomings that have led to inconsistent and inadequate implementation by the authorized states. These shortcomings include (1) exemptions in EPA's regulations that have allowed as many as 60 percent of the largest animal feeding operations to avoid obtaining permits and (2) minimal oversight of state CAFO programs by EPA. Although EPA maintains that it has limited tools to compel states to properly implement the CAFO program, it recently has had limited success in persuading some authorized states to begin issuing discharge permits that include all program requirements. Two exemptions in CAFO regulations have allowed large numbers of animal feeding operations to avoid obtaining discharge permits. However, EPA believes that many of these operations may degrade water quality. The first exemption allowed operations to avoid obtaining discharge permits if they discharge waste only during 25-year, 24-hour rainstorm events. However, based on its compliance and enforcement experience, EPA believes that many of the operations using this exemption should, in fact, have a discharge permit because they are likely discharging more frequently. 
For example, when EPA proposed changes to the CAFO regulations, it stated that operations using this exemption were not taking into consideration discharges that may occur as a result of overfilling the waste storage facility, accidental spills, or improper land application of manure and wastewater. The second exemption allowed about 3,000 confined chicken operations that use dry manure-handling systems to avoid obtaining permits. EPA believes that chicken operations using dry manure-handling systems should obtain permits because EPA and state water quality assessments found that nutrients from confined chicken operations, similar to other large livestock operations, contaminate waters through improper storage, accidental spills, and land application. As a result of these exemptions, we estimate that only about 40 percent (4,500 of 11,500) of confined animal feeding operations currently have discharge permits. In addition, EPA believes about 4,000 smaller animal feeding operations may threaten water quality and may also need to be permitted. According to EPA and state officials, these smaller operations are generally not permitted because federal and state programs have historically focused their limited CAFO resources on regulating only the largest operations. EPA's limited oversight of the states has contributed to inconsistent and inadequate implementation by the authorized states. In particular, our surveys show that 11 authorized states--with a total of more than 1,000 large animal feeding operations--do not properly issue discharge permits. Although eight of these states issue some type of permit to CAFOs, the permits do not meet all EPA requirements, such as including provisions for public participation in issuing permits. The remaining three states do not issue any type of permit to CAFOs, thereby leaving facilities and their wastes essentially unregulated.
EPA officials believe that most large operations either discharge or have a potential to discharge animal waste to surface waters and should have discharge permits. The two states that lead the nation in swine production illustrate how programs can meet some EPA permit requirements but not others. For example, while Iowa's permits for uncovered operations (see fig. 1) meet all program requirements, its permits for covered operations (see fig. 2) do not. Contrary to EPA requirements that permits be renewed every 5 years, Iowa issues these permits for indefinite periods of time. While North Carolina issues permits to both covered and uncovered animal feeding operations, these permits do not include all EPA requirements, such as provisions for public participation or allowing for EPA enforcement of the state permit. Michigan and Wisconsin also illustrate how two authorized states with a similar number of animal feeding operations differ in program implementation. According to USDA estimates, both states have over 100 operations with more than 1,000 animal units that could be defined as CAFOs. While Wisconsin had issued 110 permits to these operations, Michigan had not issued any, according to our survey. As a result, waste discharges from facilities in Michigan remained unregulated under the CAFO program. EPA officials acknowledged that until the mid-1990s the agency had placed little emphasis on and directed few resources to the CAFO program and that this inattention has contributed to inconsistent and inadequate implementation by authorized states. Instead, the agency gave higher priority and devoted greater resources to its permit program for the more traditional point sources of pollution--industrial and municipal waste treatment facilities.
However, as EPA's and the states' efforts have reduced pollution from these sources, concerns grew in the 1990s that the increasing number of large concentrated animal feeding operations could potentially threaten surface water quality. In response, EPA began placing more emphasis and directing more resources to the CAFO program. As a result, some states that had not previously issued discharge permits began to do so. As shown in figure 3, EPA has historically assigned significantly more personnel resources to the industrial and municipal portions of the NPDES permit program. In the four regions we reviewed, the number of full-time equivalent positions dedicated to the CAFO program has increased since 1997--from 1 to 6 percent--but this increase has, for the most part, been at the expense of the industrial and municipal portions of the permit program. EPA officials told us that due to budget constraints, any increase in resources in one program area requires the reduction of resources in others. In addition to resource constraints, EPA officials say that the agency has little leverage to compel states to issue permits with all required elements because the agency's primary recourses in such situations are to either (1) withhold grant funding it provides to states for program operations or (2) withdraw the states' authority to run the entire NPDES permit program, including the regulation of industrial and municipal waste treatment facilities. EPA has been reluctant to use these tools because it maintains that withholding grant funding would further weaken the states' ability to properly implement the program and EPA does not have the resources to directly implement the permit program in additional states. To date, EPA has never withheld grants or withdrawn a state's authority. However, EPA has had limited success in persuading some authorized states to begin issuing discharge permits with all EPA requirements. 
For example, Michigan has been an authorized state since 1973, but only agreed in 2002 to begin issuing discharge permits. This agreement followed an EPA investigation that revealed several unpermitted CAFOs. Similarly, EPA recently persuaded Iowa to increase the issuance of discharge permits to uncovered feedlots. However, to date the agency has not been able to convince the state to issue permits to its covered operations, even though EPA believes these types of operations should also have permits. In 2002, EPA was also successful in persuading three other authorized states--Florida, North Carolina, and South Carolina--to begin issuing discharge permits that meet all program requirements. According to our surveys of the regions and states, EPA's revised regulations--eliminating the 25-year, 24-hour storm exemption; explicitly including dry-manure chicken operations; and extending permit coverage to include the land application areas under the control of the CAFO--address some key problems of the CAFO program. However, they will also increase EPA's oversight responsibility and require authorized states to increase their permitting, inspection, and enforcement activities. Furthermore, neither EPA nor the states have planned how they will face these challenges or implement the revised program. EPA's decision to eliminate regulatory exemptions should strengthen the permit program because the revised regulations will extend coverage to more animal feeding operations that have the potential to contaminate waterways. As previously mentioned, the 25-year, 24-hour storm exemption has proven particularly problematic for EPA and the states because it allowed CAFO operators to bypass permitting altogether. By eliminating this exemption, we estimate that an additional 4,000 large animal feeding operations will require permits. According to our survey results, the elimination of this exemption could significantly improve the program.
In addition, EPA's decision to also explicitly require permits for large dry-manure chicken operations will increase the number of permitted facilities by another 3,000. Lastly, CAFO operators are, for the first time, required to either (1) apply for a permit or (2) provide evidence to demonstrate that they have no potential to discharge to surface waters. In addition to eliminating regulatory exemptions, EPA also extended permit coverage to include the application of animal waste to crop and pastureland controlled by the CAFO. Specifically, CAFO operators who apply manure to their land will be required to develop and implement nutrient management plans that, among other things, specify how much manure can be applied to crop and pastureland to minimize potential adverse effects on the environment. CAFO operators will need to maintain the plan on site and, upon request, make it available to the state permit authority for review. Although EPA believes that the revised regulations will improve the CAFO program, the changes will create resource and administrative challenges for the authorized states. We estimate that the revised regulations could increase the number of operations required to obtain permits by an estimated 7,000--from about 4,500 permits currently issued, to about 11,500. States will therefore need to increase their efforts to identify, permit, and inspect animal feeding operations and, most likely, will have to increase their enforcement actions. However, many states have not yet identified and permitted CAFOs that EPA believes should already have been covered by the CAFO program. Therefore, increased permitting requirements could prove to be a daunting task. For example, Iowa has only permitted 32 operations out of more than 1,000 of its animal feeding operations that have more than 1,000 animal units. 
Furthermore, states may need to identify and permit an estimated 4,000 operations with fewer than 1,000 animal units that EPA believes may be discharging. Finally, when states inspect CAFOs, they will need to determine if the operation's nutrient management plan is being properly implemented. According to state officials, meeting these demands will require additional personnel. However, most of the states we visited cannot hire additional staff and would have to redeploy personnel from other programs. For example, Iowa and North Carolina, two states with a large number of potential CAFOs, each have less than one full-time employee working in the CAFO program. While the burden of implementing the revised regulations will fall primarily on the states, EPA will need to increase its oversight of state programs to ensure that the states properly adopt and implement the new requirements. This oversight effort will be especially important in light of the large number of animal feeding operations that will need permits under the revised regulations. Although most of the regions have not determined precisely what additional resources they will need to adequately carry out their increased responsibilities, EPA officials told us that, like the states, they will have to redeploy resources from other programs. Despite the challenges that EPA and the states will face in implementing the revised CAFO program, they have not yet prepared for their additional responsibilities. According to our survey of 10 EPA regions, the regions and states have not estimated the resources they will need to implement the revised CAFO program. EPA, for its part, has not developed a plan for how it intends to carry out its increased oversight responsibilities under the revised regulations, such as ensuring that authorized states properly permit and inspect CAFOs and take appropriate enforcement action. 
EPA and state officials told us they intend to wait until the revised regulations are issued before they begin planning for their implementation. EPA did not formally consult with USDA when it was developing the proposed CAFO regulations published in January 2001, but the department has played a greater role in providing input for the revised regulations. EPA and USDA developed a joint animal feeding operation strategy in 1998 to address the adverse environmental and public health effects of animal feeding operations. However, USDA's involvement in developing the proposed CAFO regulations was generally limited to responding to EPA requests for data. USDA officials told us that they were asked to provide substantive comments only after the Office of Management and Budget suggested that EPA solicit USDA's views. However, USDA officials maintained that they did not have sufficient time to fully assess the proposed regulations and discuss their concerns with EPA before the proposed regulations were published in January 2001. In June 2001, to address USDA concerns, EPA and USDA established an interagency workgroup on the proposed revisions to the CAFO regulations. Under this arrangement, USDA provided technical information that identified how the proposed regulations could adversely affect the livestock industry and suggested alternative approaches that would mitigate these effects. For example, through this interagency workgroup, USDA suggested that EPA consider allowing states greater flexibility in regulating smaller operations. USDA also raised concerns that EPA's proposed nutrient management plan was not entirely consistent with USDA's existing comprehensive nutrient management plan and would be confusing to operators. EPA agreed to take these concerns into consideration when it prepared the final revisions to the regulations.
In July 2001, to further strengthen the cooperative process, EPA and USDA developed Principles of Collaboration to ensure that the perspectives of both organizations are reflected. In essence, the principles recognize that USDA and EPA have clear and distinct missions, authorities, and expertise, yet can work in partnership on issues of mutual concern. To ensure that both EPA and USDA work together constructively, the principles call for EPA and USDA to establish mutually agreeable time frames for joint efforts and provide adequate opportunities to review and comment on materials developed in collaboration prior to public release. According to USDA and EPA officials, this new arrangement has improved the agencies' working relationship. Although EPA has historically given the CAFO program relatively low priority, it has recently placed greater attention on it as a result of the 1989 lawsuit and the growing recognition of animal feeding operations' contributions to water quality impairment. The implementation of the CAFO program has been uneven because of regulatory exemptions and the lower priority EPA and the states have assigned to it. Although EPA has had some recent success in persuading states to begin issuing discharge permits that include all program requirements, agency officials say that their ability to compel states to do so is limited. While the revised regulations will help address the regulatory problems, they will also increase states' burdens for permitting, inspecting, and taking enforcement actions. Because several states have yet to fully implement the previous, more limited, program, EPA will need to increase its oversight of state programs in order to ensure that the new requirements are properly adopted and carried out by the states. EPA and the states have not identified what they will need to do--or the required resources--to carry out these increased responsibilities.
For example, they have not determined how they intend to accomplish their expanded roles and responsibilities within current staff levels. To help ensure that the potential benefits of the revised CAFO program are realized, we recommend that the Administrator, EPA, (1) develop and implement a comprehensive tactical plan that identifies how the agency will carry out its increased oversight responsibilities under the revised program and (2) work with authorized states to develop and implement their own plans that identify how they intend to carry out their increased permitting, inspection, and enforcement responsibilities within specified time frames. Specifically, the agency's plan should address what steps it will take to ensure that authorized states are properly permitting and inspecting CAFOs and taking appropriate enforcement actions against those in noncompliance. Both the agency's and the states' plans should also identify what, if any, additional resources will be needed to carry them out and how these resources will be obtained. We provided EPA and USDA with a draft of this report for review and comment. The Director of Animal Husbandry and Clean Water Programs, along with other USDA officials, provided oral comments for USDA. EPA provided written comments. Both agencies expressed agreement with the findings and recommendations in the report. EPA and USDA also provided technical comments, which we incorporated into the report as appropriate. EPA's written comments are presented in appendix II. We are sending copies of this report to the Administrator of the Environmental Protection Agency, the Secretary of Agriculture, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request.
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841. Key contributors to this report are listed in appendix III. To determine the problems EPA faced in administering the CAFO program and the potential challenges the states and EPA may face when implementing revisions to its CAFO regulations, we surveyed all 10 EPA regional offices. Our survey asked regional officials to provide information on program management and oversight of authorized states' CAFO programs, resources dedicated to the program, problems EPA has faced administering the program, and the potential challenges the states and EPA might face in implementing revisions to the CAFO program. In addition, we interviewed EPA officials in 4 of the 10 regions. We judgmentally selected the 4 regions that represent 23 states with an estimated 70 percent of large animal feeding operations that could be designated as CAFOs under the revised regulations. Because EPA and most states do not know precisely how many animal feeding operations should have discharge permits, we used USDA's estimate of the number of potential CAFOs based on livestock type and the number of animals on the farm from the 1997 Census of Agriculture. These regions and their represented states are Region 3-Philadelphia: Delaware, Maryland, Pennsylvania, and Virginia; Region 4-Atlanta: Alabama, Florida, Georgia, Kentucky, Mississippi, North Carolina, South Carolina, and Tennessee; Region 5-Chicago: Illinois, Indiana, Michigan, Minnesota, and Ohio; and Region 7-Kansas City: Iowa, Kansas, Missouri, and Nebraska. To determine how the 44 authorized states and the U.S. Virgin Islands administer the program and to obtain their views on the challenges they might encounter in implementing the revised regulations, we interviewed program officials in four authorized states--Iowa, North Carolina, Pennsylvania, and Wisconsin.
We judgmentally selected these states from among the four regions we visited because they have large numbers of confined poultry, swine, and dairy and beef cattle operations. We did not evaluate how EPA directly administers the program in the states and territories not authorized to implement the CAFO program because these states contained less than 5 percent of large CAFOs. EPA administers the program directly because these states have not asked for authority to administer the program. To examine the extent of USDA's involvement in developing the proposed revisions to EPA's CAFO regulations, we interviewed officials in USDA's Natural Resources Conservation Service and EPA. We also observed an EPA and USDA Working Group Meeting on Concentrated Animal Feeding Operations. We conducted our review from January 2002 through October 2002 in accordance with generally accepted government auditing standards. In addition to the individual named above, Mary Denigan-Macauley, Oliver Easterwood, Lynn Musser, Paul Pansini, and John C. Smith made key contributions to this report.
Congress is concerned that waste from animal feeding operations continues to threaten water quality. In light of this concern, GAO was asked to review the Environmental Protection Agency's (EPA) administration of its regulatory program for animal feeding operations and to determine the potential challenges states and EPA may face when they begin to implement the revisions to this program. GAO surveyed all EPA regional offices and four states with large numbers of animal feeding operations that may be subject to EPA regulations. Until the mid-1990s, EPA placed little emphasis on and had directed few resources to its animal feeding operations permit program because it gave higher priority to other sources of water pollution. In addition, regulatory exemptions have allowed many large operations to avoid regulation. As a result of these problems, many operations that EPA believes are polluting the nation's waters remain unregulated. Implementation of the revised regulations raises management and resource challenges for the states and the agency. For example, because the number of animal feeding operations subject to the regulations will increase dramatically, states will need to increase their efforts to identify, permit, and inspect facilities and take appropriate enforcement actions against those in noncompliance. For its part, EPA will need to increase its oversight of state programs to ensure that the new requirements are adopted and implemented. Neither the states nor EPA has determined how they will meet these challenges.
To determine why the Erie WSO was spun down before completion of the Secretary of Commerce's report on 32 areas of concern, we analyzed documents that described the spin-down and reviewed the Secretary's report. We also discussed the timeline of these events with NWS officials. To determine what weather services were provided before and after the Erie office was spun down, we reviewed NWS site implementation plans for the Cleveland, Pittsburgh, and Central Pennsylvania weather offices, and interviewed former employees of the Erie WSO and officials at each of the three WFOs. We also discussed the services provided and concerns raised about the quality and types of services with (1) members of Save Our Station, a group dedicated to saving the Erie WSO, (2) Erie television station meteorologists, (3) the National Air Traffic Controllers Association safety representative at Erie International Airport, (4) officials at Presque Isle State Park, Erie, (5) the officer in charge of the U.S. Coast Guard Station in Erie, and (6) emergency management officials and representatives of emergency volunteer organizations, such as Skywarn, in each of the nine counties that constituted the Erie WSO warning area. We reviewed NWS' responses to concerns raised. We identified safety concerns raised regarding the weather services provided at the Erie airport and obtained NWS' responses to these concerns through interviews with the National Air Traffic Controllers Association safety representative at Erie International Airport, the manager of the Aviation Weather Requirements Division, the Federal Aviation Administration (FAA), and NWS officials. To identify concerns raised about small-craft advisories on Lake Erie, we interviewed (1) officials at Presque Isle State Park, (2) the officer in charge of the U.S. Coast Guard station in Erie, (3) the commander of the Greater Erie Boating Association, and (4) members of Save Our Station. 
We reviewed NWS documents relating to aviation weather and the small-craft advisories on Lake Erie and obtained NWS' responses to safety concerns. To determine if reliable statistical or other evidence existed that addressed degradation of service, we reviewed NWS verification statistics for severe weather events in the nine counties included in the Erie WSO county warning area prior to and after spin-down of the Erie office. We discussed the methodology and process used to develop these statistics, and their reliability, with NWS officials. In addition, we discussed NWS verification statistics and studies with a professor emeritus and an associate professor of meteorology at Pennsylvania State University and also with the chairperson of the Modernization Transition Committee. Further, we reviewed available NWS lake-effect snow study reports. We interviewed the NWS Eastern region team responsible for the lake-effect snow study and the director of the Office of Meteorology at NWS headquarters. In discussions with representatives of Save Our Station, county emergency management directors, and volunteer organizations, we obtained specific examples of weather events that these individuals believed demonstrated evidence of degradation of service. In addition, we reviewed the National Research Council (NRC) report on NWS modernization and the Secretary's report on 32 areas of concern, with specific reference to radar coverage. 
To understand the ability of NWS' new radars and other data tools available to forecasters to provide adequate coverage for severe weather event warnings and lake-effect snow, we discussed this topic with NWS officials and the study director of NRC, the chairperson of the Modernization Transition Committee, a member of the Secretary's report team who was the acknowledged expert on NWS radar, the former chairperson of NRC's Modernization Committee (who is also a professor emeritus of meteorology), and an associate professor of meteorology at Pennsylvania State University. We performed our work at NWS headquarters in Silver Spring, Maryland; at the NWS Eastern region in Bohemia, New York; at the Cleveland, Pittsburgh, and Central Pennsylvania WFOs; and at the Erie WSO. In addition, we conducted telephone interviews with emergency management officials and emergency volunteers in the Erie WSO county warning area. We performed our work from April to August 1997, in accordance with generally accepted government auditing standards. As agreed with your offices, we did not assess the adequacy of the NWS responses to identified concerns, and we did not assess the adequacy of reports discussed in this report. The Secretary of Commerce provided written comments on a draft of this report. These comments are discussed at the end of this report and are reprinted in appendix II. NWS began a nationwide modernization program in the 1980s to upgrade observing systems, such as satellites and radars, and design and develop advanced forecaster computer workstations. The goals of the modernization are to achieve more uniform weather services across the nation, improve forecasts, provide better detection and prediction of severe weather and flooding, permit more cost-effective operations through staff and office reductions, and achieve higher productivity. 
As part of its modernization program, NWS plans to shift its field office structure from 52 Weather Service Forecast Offices and 204 WSOs to one with 119 WFOs. NWS field offices provide basic weather services such as forecasts, severe weather warnings, warning preparedness, and--where applicable--aviation and marine forecasts. Warnings include "short-fused"--events such as tornadoes, flash floods, and severe storms--and "long-fused"--events such as gales and heavy snow. NWS broadcasts forecasts and warnings over the National Oceanic and Atmospheric Administration's (NOAA) Weather Radio. NWS offices transmit hourly weather updates and severe weather warnings as they are issued on hundreds of NOAA Weather Radio stations around the country. Warning preparedness includes coordinating with local emergency management, law enforcement agencies, and the media on notification of and response to severe weather events, and training volunteer weather observers to collect and report data under a program commonly called Skywarn. NWS relies heavily on supplemental data provided by Skywarn volunteers' reports on severe weather events. Under NWS' restructuring plan, the Erie WSO is slated for closure and has been spun down operationally. When fully functioning, this office's primary role was to provide severe weather warnings to nine counties in northwestern Pennsylvania, operate an on-site radar, and take surface-condition weather observations. Under the NWS field office restructuring, responsibility for Erie's nine counties is divided among three WFOs: Erie and Crawford counties are served by the Cleveland WFO; Venango and Forest counties are served by the Pittsburgh WFO; and Cameron, Elk, McKean, Potter, and Warren counties are served by the Central Pennsylvania WFO (located at State College, Pennsylvania). Figures 1 and 2 present maps of the premodernized and modernized office structures for the northwestern Pennsylvania area.
Under the field office restructuring, the three offices assuming coverage responsibility for Erie's nine counties have been in the process of installing new systems and equipment, such as new radars, and training staff in using the new technologies. In addition, each office taking on part of Erie's former responsibilities communicated modernization and restructuring changes to the newly-assumed counties' emergency response community, volunteer weather observers, the media, and the public. Once sufficient systems and staff were in place, the three WFOs--Cleveland, Pittsburgh, and Central Pennsylvania--began assuming responsibility for their respective counties. Erie gradually phased out its routine radar operation; it was responsible for augmenting the Automated Surface Observing System (ASOS) until October 1996, when FAA took over responsibility for this function. Two other NWS changes affected the Erie area, but were not part of the spin-down or required for consideration in making an office closure certification; these changes affected the number and type of forecasts issued and the area covered by the forecasts. First, in both the premodernized and modernized environments, the 2-day forecast is broken into four 12-hour periods. However, with access to improved, real-time data from new technology--primarily the new radars implemented as part of the modernization--NWS in 1994 added a short-term forecast, called the Nowcast, which is a 6-hour forecast. The second change NWS implemented during modernization was a reduction in the area covered by its zone forecast. Before modernization, forecast zones (i.e., the areas for which a particular forecast was issued) could include several counties as well as specific localized forecasts for high-population areas. In October 1993, NWS reduced the size of its zones to single counties to allow forecasters to take advantage of improved data and make more specific forecasts and warnings.
Because of this ability to be more specific, most NWS areas discontinued the localized forecasts for high-population areas. The Weather Service Modernization Act requires that before any office may be closed, the Secretary of Commerce must certify to the Congress that closing the field office will not degrade service to the affected area. This certification must include (1) a description of local weather characteristics and weather-related concerns that affect the weather services provided within the service area, (2) a detailed comparison of the services provided within the service area and the services to be provided after such action, (3) a description of recent or expected modernization of NWS operations that will enhance services in the area, (4) identification of areas within a state that will not receive coverage (at an elevation of 10,000 feet or below) by the modernized radar network, (5) evidence, based upon a demonstration of modernized NWS operations, used to conclude that services will not be degraded from such action, and (6) any report of the Modernization Transition Committee that evaluates the proposed certification. In response to concerns from members of the Congress, the Department of Commerce agreed to take several steps to identify community concerns regarding modernization changes, such as office closures, and study the potential for degradation of service. First, the Department published a notice in the Federal Register in November 1994, requesting comments on service areas where it was believed that premodernized weather services may be degraded with planned modernization changes. Next, the Department contracted with NRC to conduct an independent scientific assessment of proposed modernized radar coverage and consolidation of field offices in terms of the no degradation of service requirement. In addition, NRC established criteria for identifying service areas where the elimination of older radars could degrade services. 
Finally, the Secretary of Commerce applied the NRC criteria to identified areas of concern to determine whether a degradation of service is likely to occur. The resulting report, Secretary's Report to Congress on Adequacy of NEXRAD Coverage and Degradation of Weather Services Under National Weather Service Modernization for 32 Areas of Concern, was issued in October 1995. NWS started spinning down the Erie WSO by transferring warning responsibilities to the three assuming WFOs in August 1994 before the Department of Commerce began its review of areas of concern. However, Erie community members raised questions in June 1994, several months before Erie was identified as one of the areas of concern through the Federal Register process. NWS continued with its plans to spin down the office because officials believed they would be providing the best service to the area by relying on modernized radars in other offices. Erie continued surface observations and radar operations until October 1996 and March 1997, respectively. The starting point for the Department of Commerce study of areas of concern was the November 1994 Federal Register announcement soliciting concerns about NWS modernization and restructuring plans. In February 1995, Erie was identified as 1 of 32 areas of concern. The Department of Commerce reviewed the 32 areas between June and August 1995, and issued its report in October 1995. The report concluded that with the exception of lake-effect snow, the assuming WFOs will be able to detect severe weather phenomena over northwestern Pennsylvania. In addition, the report recommended that NWS (1) compare the adequacy of the assuming WFOs' new radars and other data sources with Erie's old radar in identifying lake-effect snow over a 2-year period and (2) transmit data from Erie's radar to nearby WFOs to support the lake-effect snow study and facilitate the continued spin-down of the Erie office. 
The three weather offices that assumed responsibility for the counties formerly served by the Erie WSO provide generally the same types of services that the Erie office had provided, with the exception of the general public's local or toll-free telephone access to NWS personnel. The general public in the nine counties must now call long-distance to contact the Cleveland, Central Pennsylvania, and Pittsburgh WFOs. Services for Erie and Crawford counties are now provided entirely by the Cleveland WFO. There are few changes to the services that were provided by the Erie WSO. The primary changes are the discontinuance of the localized forecast for the city of Erie and the addition of the Nowcast. As noted before, localized forecasts were discontinued because of changes in the size and detail of zone forecasts. Another significant change is the transfer of ASOS augmentation to FAA. This relieves NWS of maintaining staff on-site to take observations. Table 1 presents a detailed comparison of the services provided to Erie and Crawford counties before and after spin-down. The Pittsburgh WFO now provides all services to Venango and Forest counties with the exception of issuing NOAA weather radio reports and updates. Changes in services to these counties are minimal as Pittsburgh was already providing many services to these areas. The only significant change is the addition of the short-term forecast--the Nowcast--which was not provided in premodernization. Table 2 presents a detailed comparison of services provided before and after spin-down. Services for Cameron, Elk, McKean, Potter, and Warren counties are now provided mostly by the Central Pennsylvania WFO. Since this office is not yet fully staffed, forecasting and long-fused warning services are still provided by Pittsburgh. Again, with the exception of the Nowcast, no major changes have occurred for these counties. Since many of these counties are mountainous, NOAA Weather Radio service does not reach all areas. 
NWS believes service will be improved when additional transmitters are installed in fiscal year 1998. The Central Pennsylvania and Pittsburgh WFOs will program these transmitters. Table 3 presents a detailed comparison of services provided before and after spin-down. Many concerns have been raised about the specific services being provided by NWS, as well as the quality of those services. Most concerns had been brought to NWS' attention, and NWS provided responses to them. Other concerns brought to our attention either had not been reported to NWS or NWS had not officially responded. We discussed these concerns with NWS officials and received their responses. The most common concern--voiced by almost every individual we spoke with--was with the ability of distant radars to detect all types of weather phenomena. Table 4 presents concerns raised by users in Erie and Crawford counties and NWS' responses. The primary concern voiced in five of the seven counties now served by the Central Pennsylvania and Pittsburgh WFOs was the ability of distant radars to provide adequate coverage for severe weather phenomena in order to issue accurate and timely forecasts and warnings. Some users in counties at the fringes of radar coverage questioned NWS' ability to track approaching severe weather outside the range of an office's radar. NWS' responses to these concerns were to assure county officials and residents that the new radars and other components of the modernization, such as satellites and improved weather models, would enable NWS to provide better service to their areas. Furthermore, WFOs can access radar data from nearby WFOs. For example, if a severe storm was moving eastward into northwestern Pennsylvania, Central Pennsylvania and Pittsburgh staff would likely access data from Cleveland's radar to help determine the path and intensity of the event.
One individual expressed concern that during severe weather events, there may not be sufficient staff to operate the amateur radio equipment, which is used to communicate with Skywarn volunteers. According to NWS, there are licensed amateur radio operators on staff. However, if licensed staff are not available during severe events, NWS can call on volunteers to help operate the equipment. These concerns seemed to have been allayed as most officials told us that service provided by the new offices is at least equal to the service provided before modernization. A few concerns have been raised regarding weather services provided at the Erie International Airport and the timeliness of small-craft advisories for Lake Erie. The most commonly cited concern was with ASOS, which has been the subject of much scrutiny since its nationwide deployment. We reported on several ASOS issues in 1995, such as specific sensor problems and the system's difficulty reporting actual, prevailing conditions in rapidly changing or patchy weather conditions. NWS has implemented modifications to address sensor problems and, in some places, including Erie, added sensors to better report representative observations. In addition, since ASOS does not replace all human observations, human observers must continue to take manual observations at airports such as Erie to supplement the system (this process is called augmentation) and correct the system when it is not accurately reporting current conditions. Under an NWS/FAA interagency agreement, FAA accepted augmentation responsibility for the Erie ASOS in October 1996. At that point, NWS weather observers were discontinued at Erie and air traffic controllers became responsible for augmenting ASOS observations and correcting the system when it reported inaccurate conditions. Concerns surround the issue of whether this ASOS augmentation responsibility is too much for air traffic controllers. 
FAA recognizes these concerns and has sponsored an independent study of the impact of ASOS augmentation. According to the manager of FAA's Aviation Weather Requirements Division, a report is expected in the fall of 1997. Table 5 presents specific safety concerns raised and NWS responses. There are several sources of evidence that address whether a degradation of service has occurred in the Erie area. NWS' statistical verification program collects performance data on the issuance of forecasts and warnings and provides information necessary to compare "premodernized" and "modernized" performance. Overall, data for the former nine-county Erie WSO area show an improvement in service under the three WFOs. Studies by NRC and the Department of Commerce analyzed the ability of the new radars and other components of the modernization to detect certain weather phenomena and assessed the potential for degradation of weather services in the Erie area. NRC concluded that the ability to detect three severe weather phenomena, including lake-effect snow, was questionable. The Department of Commerce's study expanded on NRC's work and concluded that lake-effect snow was the only phenomenon that remained a concern. NWS is completing a 3-year study of its ability to detect and predict lake-effect snow in the Great Lakes area, which includes northwestern Pennsylvania. Since the 1980s, NWS has assessed the accuracy and timeliness of its severe weather warnings and public and aviation forecasts through a statistical verification program. The verification process includes determining the accuracy of the forecast elements of maximum and minimum temperature and probability of precipitation. Several elements of the aviation forecasts are likewise verified. Severe weather warnings are verified by determining whether an event for which a warning was issued occurred.
The elements calculated for warning verification are probability of detection (i.e., NWS' ability to detect weather events--the higher the probability, the better the performance), false alarm rate, and lead time. If a warning was issued but a severe weather event did not occur, a higher false alarm rate results. If a severe weather event occurred without a warning, the probability of detection goes down. Warning and forecast verification statistics historically have been used to help weather office managers determine trends in performance and identify areas needing improvement. With modernization, the statistics are included in the certification package as support either for or against a determination of degradation of service. NWS officials stressed, however, that verification statistics are not the most important component of the no-degradation assessment. Rather, they said, they rely most heavily on feedback from users to determine satisfaction with the level of service being provided and whether degradation has occurred. The verification statistics for the nine former Erie office counties show an overall improvement to the area in warning service. Appendix I presents the warning verification data for the nine-county area. The statistics also show slight improvement for public forecast service. The aviation forecast verification statistics show a negligible decline from .33 to .32, on a scale from 0 to 1 with 1 being the best performance. NWS officials cautioned that there are limitations to the verification program and resulting data. For example, since the number and type of weather events vary from year to year, it is impossible to directly compare performance from one year to another. In addition, it is more difficult to verify events in sparsely populated areas. Finally, NWS officials acknowledged that severe weather warning verification procedures vary across offices. 
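The verification elements described above follow standard definitions: probability of detection is the fraction of observed events that were warned, and the false alarm rate is the fraction of warnings that verified no event. The Python sketch below is a minimal illustration of that arithmetic, not NWS' actual verification software; the function name and record layout are assumptions.

```python
# Minimal sketch of the warning verification metrics the report describes:
# probability of detection (POD), false alarm rate (FAR), and average lead
# time. Record layout is hypothetical, not an NWS data format.

def verify_warnings(records):
    """records: list of dicts with keys 'warned' (bool), 'occurred' (bool),
    and 'lead_minutes' (minutes of warning lead time, or None)."""
    hits = sum(1 for r in records if r["warned"] and r["occurred"])
    misses = sum(1 for r in records if r["occurred"] and not r["warned"])
    false_alarms = sum(1 for r in records if r["warned"] and not r["occurred"])

    # An unwarned event lowers POD; a warning with no event raises FAR.
    pod = hits / (hits + misses) if hits + misses else None
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else None

    leads = [r["lead_minutes"] for r in records
             if r["warned"] and r["occurred"] and r["lead_minutes"] is not None]
    avg_lead = sum(leads) / len(leads) if leads else None
    return pod, far, avg_lead
```

For example, two warned events (12- and 18-minute lead times), one unwarned event, and one false alarm yield a POD of 2/3, a FAR of 1/3, and a 15-minute average lead time.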
In August 1994, the Department of Commerce contracted with NRC to study NWS' modernized radar network coverage and identify any gaps that could result in a degradation of weather service. In addition, NRC was to develop criteria for the Department to use in determining the potential for degradation of service in those areas of concern identified through the public comment process. In June 1995, NRC issued its report, Toward a New National Weather Service: Assessment of NEXRAD Coverage and Associated Weather Services. Overall, NRC concluded that weather services on a national basis would be improved substantially under the new radar network. For example, compared with the old radar network, the modernized radar network will cover a much broader area of the contiguous United States and provide greater coverage for detecting specific severe weather phenomena, such as supercells, mini-supercells, and macrobursts. NRC also noted that the new radars are just one element in a composite weather system that includes satellites, automated surface observing equipment, wind profilers, improved numerical forecast models, and cooperative networks of human observers and spotters. NRC cautioned, however, that at old radar sites where radar coverage is to be provided by a new radar some distance away, there is the potential for degradation in radar-detection coverage capability. In particular, northwestern Pennsylvania was one such area with degraded radar coverage for macrobursts, mini-supercells, and lake-effect snow. NRC recommended NWS study the area to determine whether the degraded radar coverage would result in a degradation of weather service. Figure 3 shows the approximate gap in radar coverage for lake-effect snow over northwestern Pennsylvania. 
As agreed with concerned members of the Congress, the Department of Commerce used NRC's criteria to evaluate the potential for degradation in the 32 areas identified via the Federal Register process and assessed the potential for degradation of service for the radar gaps identified in NRC's report. The Secretary's team conducted additional research into the capabilities of the new radars and found that the effective range of detection was greater than estimated by NRC. Specifically, the team concluded that the new radars serving the former Erie WSO area would be able to detect macrobursts and mini-supercells for northwestern Pennsylvania. It was still clear, however, that the radars could not adequately detect some lake-effect snow events in the Erie area. Therefore, the Secretary's team recommended that NWS compare the adequacy of the assuming WFOs' new radars and other data sources with Erie's old radar in identifying lake-effect snow over a 2-year period to determine how well the composite weather system could help detect and predict lake-effect snow over the area in question. In addition, the report recommended that NWS keep the Erie radar (an older vintage) operational until the results of the study were compiled, which was done. NWS began a lake-effect snow study in November 1994, 1 year before the Secretary's team recommended that a similar assessment be done. NWS initiated the study to improve its ability to detect and predict lake-effect snow, as well as in response to concerns raised by congressional staff and residents of northern Indiana and northwestern Pennsylvania; these areas were scheduled to lose old radars and, instead, receive coverage from more distant but modernized radars. The goal of the study was to find ways of improving the warning and forecast services associated with lake-effect snow events. 
In response to the Secretary's team's recommendation, however, another goal was added to this study--to determine whether lake-effect snow detection would be degraded over northwestern Pennsylvania, if the Erie radar and office were shut down. Data on lake-effect snow were collected over the three winter seasons between 1994 and 1997. While the broad study area included all areas in New York, Pennsylvania, Ohio, and Indiana that experience lake-effect snow, a seven-county area was established surrounding Erie on which more specific analysis would be performed. After each winter season, a data report was issued by NWS. These reports conclude that NWS has made significant progress in improving its ability to detect and forecast lake-effect snow; however, there are still questions about the level of this service being provided to northwestern Pennsylvania. For example, NWS' Eastern Region reported that for about 35 percent of lake-effect snow events, the composite weather system will be insufficient to compensate for the degradation in radar coverage over northwestern Pennsylvania. In addition, this report stated that NWS is not able to provide detailed, short-term forecasts (Nowcasts) during lake-effect snow events like it can for other areas that have better radar coverage. The Eastern Region's report and the director of NWS' Office of Meteorology point out, however, that this problem does not constitute a degradation of service because the probability of detection for lake-effect snow in the seven-county study area has improved since 1993. Even though degradation has not occurred, according to the Eastern Region report and the director, this level of service is still unacceptable because lake-effect snow is the Erie area's most severe weather condition and the community does not receive the same level of service that other lake communities receive.
As a result, the Eastern Region report recommended that a radar be installed to provide better coverage for this severe weather phenomenon in northwestern Pennsylvania. The director of the Office of Meteorology agrees with this recommendation, but points out that since data from this new radar would be transmitted to existing WFOs, an additional weather office is not needed in the Erie area. NWS' final report of the lake-effect snow study is expected this fall. Any conclusions and recommendations from the lake-effect snow study will be reviewed by the Secretary's team, which will make recommendations to the Secretary regarding specific actions to be taken. Once the results of the lake-effect snow study are finalized and actions taken to address degradation concerns, if any, NWS officials told us they will pursue closure certification for the Erie office. In commenting on a draft of this report, the Department of Commerce took no exceptions to the information presented and acknowledged that we had conducted thorough work in researching the issues and preparing the report. The Department reiterated that, after NOAA presents the Secretary's team with the results of the lake-effect snow study, it will review and evaluate the findings, conclusions, and recommendations and determine the need for a radar in northwestern Pennsylvania. The Department's written response is reprinted in appendix II. As agreed with your offices, unless you publicly announce the contents of this report earlier, we will not distribute it until 10 days from the date of this letter. At that time we will send copies to the Ranking Minority Member, House Committee on Science, and the Chairmen and Ranking Minority Members of the Senate Committee on Commerce, Science, and Transportation; House and Senate Committees on Appropriations; House Committee on Government Reform and Oversight; and Senate Committee on Governmental Affairs; and to the Director, Office of Management and Budget. 
We are also sending copies to Senators Arlen Specter and Rick Santorum; Congressman John Peterson; the Secretary of Commerce; the Administrator, National Oceanic and Atmospheric Administration; and the Acting Director of the National Weather Service. Copies will be made available to others upon request. Please contact me at (202) 512-6408 if you or your staffs have any questions concerning this report. I can also be reached by e-mail at [email protected]. Major contributors to this report are listed in appendix III. Keith A. Rhodes, Technical Director Mark E. Heatwole, Assistant Director Patricia J. Macauley, Information Systems Analyst-in-Charge J. Michael Resser, Business Process Analyst Michael P. Fruitman, Communications Analyst
Pursuant to a congressional request, GAO examined how the National Weather Service (NWS) had implemented modernization and restructuring activities in northwestern Pennsylvania, focusing on identifying: (1) why the Erie, Pennsylvania, weather service office (WSO) was spun down prior to the Department of Commerce's October 1995 report on 32 areas of concern; (2) what types of services were provided to the counties served by the Erie office before and after office spin-down, as well as what public concerns have been raised, and how NWS responded to them; (3) what safety concerns have been raised to weather services at the Erie airport and to the timeliness of small-craft advisories for Lake Erie, including how NWS responded to public concerns about these issues; and (4) whether any reliable statistical or other evidence exists that addresses whether a degradation of service in the Erie area has occurred as a result of the modernization and office restructuring. GAO noted that: (1) NWS started spinning down the Erie WSO by transferring warning responsibilities to the three assuming Weather Forecast Offices (WFO) in August 1994 before the Department of Commerce began its review of the 32 areas of concern in June 1995; (2) concerns about the Erie office closure, however, were made known as early as June 1994; (3) NWS continued with its plans to spin down the office because officials believed that they would be providing the best service to the area by relying on modernized radars in other offices; (4) the three WFOs that assumed responsibility for the counties formerly served by the Erie WSO provide generally the same types of services that the Erie office had provided, with the exception of the general public's local or toll-free telephone access to NWS personnel; (5) the major concerns surrounding the transfer of responsibilities relate to whether radar coverage over the counties formerly served by Erie would be adequate, and whether forecasts and warnings are at 
least equal in accuracy and timeliness to those previously issued by Erie; (6) NWS responses to such concerns include analyzing its ability to detect severe weather phenomena over northwestern Pennsylvania, as well as providing data on how well the assuming offices are issuing forecasts and warnings; (7) a few concerns also have been raised regarding NWS service to the Erie airport and the timeliness of small-craft advisories for Lake Erie; (8) the most commonly voiced concern regarded an automated surface observing system (ASOS) and requirements for air traffic controllers to augment it with human observations; (9) the Federal Aviation Administration (FAA) has sponsored a study of the impact of its augmentation responsibilities at airports such as Erie and will be issuing a report in the fall of 1997; (10) several studies present evidence that a degradation in service has not occurred in northwestern Pennsylvania; however, the ability to detect and predict lake-effect snow remains a concern; (11) NWS is completing a lake-effect snow study to determine the effectiveness of the modernized weather system in detecting and forecasting lake-effect snow; (12) the Director of NWS' Office of Meteorology told GAO that he will recommend a radar for the Erie area; and (13) however, NWS has not yet taken a position on the need for a radar, and the Secretary of Commerce is scheduled to make the final decision on any action to be taken in northwestern Pennsylvania.
Available studies and credit reporting industry data disagree on the extent of errors in credit reports. The limited literature on credit report accuracy indicated high rates of errors in credit report data. In contrast, the major CRAs and CDIA stated that they did not track errors specifically but that the data the credit industry maintained suggested much lower rates of errors. Both the literature and the data provided by the credit industry had serious limitations that restricted our ability to assess the overall level of credit reporting accuracy. Yet, all of the studies identified similar types and causes of errors. While data provided by the credit industry did not address the types and causes of errors, representatives from the three major CRAs and CDIA cited types and causes similar to those cited in the literature. The credit industry has developed and implemented procedures to help ensure accuracy of credit report data, although no one has assessed the efficacy of these procedures. Moreover, FTC tracks consumer disputes regarding the accuracy of information in credit reports and has taken eight enforcement actions directly or indirectly involving credit report accuracy since 1996. We identified three studies completed after the 1996 FCRA amendments that directly addressed credit report accuracy, and one that indirectly addressed the topic. One of these reports, published in December 2002 by the Consumer Federation of America, presents the frequency and types of errors drawn from files requested by mortgage lenders on behalf of consumers actively seeking mortgages. The Consumer Federation of America initially reviewed 1,704 credit files representing consumers from 22 states and subsequently re-examined a sample of 51 three-agency merged files. In this sample of merged files, the study found wide variation in the information maintained by the CRAs, and that errors of omission were common in credit reports.
For example, the report stated that about: 78 percent of credit files omitted a revolving account in good standing; 33 percent of credit files were missing a mortgage account that had never been late; 67 percent of credit files omitted other types of installment accounts that had never been late; 82 percent of the credit files had inconsistencies regarding the balance on revolving accounts or collections; and 96 percent of the credit files had inconsistencies regarding an account's credit limit. A March 1998 U.S. Public Interest Research Group (U.S. PIRG) study found similar frequencies of errors in 133 credit files representing 88 individual consumers. U.S. PIRG reported that 70 percent of the files reviewed contained some form of error. The errors ranged in severity from those unlikely to have negative repercussions to those likely to cause a denial of credit. For example, the report found: 41 percent of the credit files contained personal identifying information that was long-outdated, belonged to someone else, was misspelled, or was otherwise incorrect; 29 percent of the credit files contained an error--accounts incorrectly marked as delinquent, credit accounts that belonged to someone else, or public records or judgments that belonged to someone else--that U.S. PIRG stated could possibly result in a denial of credit; and 20 percent of the credit files were missing a major credit card account, loan, mortgage, or other account that demonstrated the creditworthiness of the consumer. Similar to the U.S. PIRG study, a 2000 survey conducted by Consumers Union and published by Consumer Reports asked 25 Consumers Union staffers and their family members to apply for their credit reports and then review them. In all, Consumers Union staff and family members received and evaluated 63 credit reports and in more than half of the reports, they found inaccuracies that they reported as having the potential to derail a loan or deflect an offer for the lowest-interest credit card.
The inaccuracies identified were similar to those reported by the Consumer Federation of America and U.S. PIRG--inclusion of information belonging to other consumers, inappropriately attributed debts, inaccurate demographic information, and inconsistencies between the credit reports provided by the three major CRAs regarding the same consumer. While not specifically assessing the accuracy of credit reports, a Federal Reserve Bulletin article found that credit reports contained inconsistencies and cited certain types of data furnishers, including collection agencies and public entities, as a primary source for some of the inconsistencies found. Among the study's findings: approximately 70 percent of the consumers in the study's sample had a missing credit limit on one or more of their revolving accounts; approximately 8 percent of all accounts showed positive balances but were not up to date; between 1 and 2 percent of the files were supplied by creditors that reported negative information only; and public records inconsistently reported actions such as bankruptcies and collections. An important aspect of the Federal Reserve study was that it used a statistically valid and representative sample of credit reports, and received access to this sample with the cooperation of one of the three major CRAs. However, because the sample came from one CRA only, the findings of the study may not be representative of other CRAs. Representatives of the three major CRAs and CDIA told us that they do not maintain data on the frequency of errors in credit reports. However, the industry does maintain data that suggest errors are infrequent in cases of an adverse action. CDIA stated that the three major CRAs provided or disclosed approximately 16 million credit reports, out of approximately 2 billion reports sold annually in the marketplace.
According to CDIA data, 84 percent of the disclosures followed an adverse action and only 5 percent of disclosures went to people who requested their reports out of curiosity. Out of these disclosures, CRA officials stated that an extremely small percentage of people identified an error. An Arthur Andersen study, conducted in 1992, found a similarly infrequent rate of errors arising from adverse actions. Commissioned by the Associated Credit Bureaus (now CDIA), the study reportedly found that only 36 consumers--out of a sample of 15,703 people denied credit--disputed erroneous information that resulted in a reversal of the original negative credit decision. Similarly, in an attempt to respond to our data request, CDIA produced data gathered by a reseller over a two-week period that indicated that out of 189 mortgage consumers, only 2 consumers (1 percent) had a report that contained an inaccuracy. In our conversations with data furnishers, we discovered that two of them conduct internal audits on the accuracy of the information they provide to the CRAs. These data furnishers indicated that the information they provide and the CRAs maintain is accurate 99.8 percent of the time. While consumer disputes do not provide a reliable measure of credit report accuracy, CRA representatives told us that disputes provide an indicator of what people perceive as errors when reviewing their credit files. A CDIA official stated that five types of disputes comprise about 90 percent of all consumer disputes received by the three major CRAs. These five dispute types are described as: claims that the account has been closed; disputes of present or previous account status; disputes of payment history; disputes related to the disposition of an account included in or excluded from a bankruptcy; and "not my account." Although CDIA could not provide a definitive ranking for all five types of disputes, it did state that "not my account" was the most frequently received dispute.
After receiving a consumer's dispute, FCRA requires a CRA to conduct a reinvestigation. The purpose of the reinvestigation is either to verify the accuracy of the disputed information or to confirm and remove an error. CDIA provided data on the disposition of dispute reinvestigations received by the three major CRAs in 2002, grouped into four categories. CRA officials explained that the data represent the first 3 quarters of 2002, and that each CRA reported data on a different quarter. CDIA declined to provide the total number of consumer disputes. Table 1 shows the frequency of these four disposition categories. Specifically, the table indicates that over half of all disputes required the CRA to modify a credit report in some way, though not necessarily to remove an error. It is important to emphasize that not every dispute leads to identifying an error. Indeed, many disputes, as the table indicates, resulted in a verification of accuracy or an update of existing information. Additionally, CRA and CDIA representatives stated that many disputes resulted in the CRA clarifying or explaining why a piece of information was included in the credit report. For example, if recently married consumers obtained a copy of their files, they might not see their married names on file. In such cases, the files still accurately reflected the most current information provided to the CRA, but the consumer may have perceived the less-than-current information as an error while the CRA would not. The CRA representative cited another example of a consumer seeing an account listed with a creditor he or she did not recognize. However, the account in question was with a retailer that subsequently outsourced its lending to another company. In this case, the information was correct but the consumer was not aware of the outsourcing. One CRA representative indicated that over 50 percent of the calls they received resulted in what they consider "consumer education."
We cannot determine the frequency of errors in credit reports based on the Consumer Federation of America, U.S. PIRG, and Consumers Union studies. Two of the studies did not use a statistically representative methodology because they examined only the credit files of their employees who verified the accuracy of the information, and it was not clear if the sampling methodology in the third study was statistically projectable. Moreover, all three studies counted any inaccuracy as an error regardless of the potential impact. Similarly, the studies used varying definitions in identifying errors, and provided sometimes obscure explanations of how they carried out their work. Because of this, the findings may not represent the total population of credit reports maintained by the CRAs. Moreover, none of these groups developed their findings in consultation with members of the credit reporting industry, who, according to a CDIA representative, could have verified or refuted some of the claimed errors. Beyond these limitations, a CDIA official stated that these studies misrepresented the frequency of errors because they assessed missing information as an error. According to CRA officials errors of omission may be mitigated in certain instances because certain lenders tend to use merged credit report files in making lending decisions, such as mortgage lenders and increasingly credit card lenders. CRA officials explained that while complete and current data are necessary for a wholly accurate credit file, both are not always available to them. For instance, credit-reporting cycles, which dictate when CRAs receive data updates from data furnishers, may affect the timeliness of data. CRAs rely on these updates, which may come daily, weekly, or monthly depending on the data furnisher's reporting cycle. If a data furnisher provided information on a monthly basis there would be a lag between a consumer's payment, for example, and the change in credit file information. 
Likewise, if a data furnisher reported to one CRA but not to another, the two reports would differ in content and could produce different credit scores. It is important to note that reporting information to the CRAs is voluntary on the part of data furnishers. While the Federal Reserve Bulletin article noted inconsistencies as an area of concern, it recognized that all credit reports would not contain identical information. Along with misrepresenting error frequency by counting omitted information, industry officials believed that the literature misrepresented the frequency of errors because the literature defined errors differently than the credit industry. The CRAs and CDIA stated that they consider only those errors that could have a meaningful impact on a person's creditworthiness as real errors. This distinction is critical to assessing accuracy, as, according to the CDIA, a mistake in a consumer's name might literally be an inaccuracy, but may ultimately have no impact on the consumer. The data provided by CDIA and the CRAs have serious limitations as well. For example, neither CDIA nor CRA officials explained the methodology behind the data CDIA provided or the assessments the CRAs cited. Moreover, because these data related primarily to those errors that consumers disputed after an adverse action, they excluded a potentially large population of errors. Specifically, these data excluded errors that would cause a credit grantor to offer less favorable terms on a loan rather than deny the loan application. The data also excluded errors in cases where consumers were not necessarily seeking a loan and therefore did not have a need to review their credit reports. Additionally, as stated earlier, only a small percentage of consumers requested credit reports simply out of curiosity.
While the CDIA representatives felt that these data were useful for assessing a level of accuracy, they agreed that by focusing on these data only, the industry did not consider a potentially large set of errors. While both the literature and credit industry representatives cited similar types and causes of errors, neither the literature nor the credit industry data identified one particular type or cause of error as the most common. All respondents stated that error type could range from wrong names and incorrect addresses to inaccurate account balances and erroneous information from public records. Based on the literature we reviewed and on our discussions with CRA and data furnisher officials, we could not identify any one cause or source most responsible for errors. However, the Consumer Federation 2002 study, the Federal Reserve Bulletin article, and a representative from the National Foundation for Credit Counseling stated they felt data furnishers often caused more errors than did CRAs or consumers. According to several respondents, this was particularly true for data furnishers, such as collection agencies and public entities that did not rely on accurate credit reports for lending decisions. For example, while a bank needs accurate information in assessing lending risk, and thus attempts to report accurate information, a collection agency does not rely on credit reports for business decisions, and therefore has less of an incentive to report fully accurate information. Data furnishers told us that they did not consider CRAs as a significant cause of errors, but stated that difficulty in matching consumer identification information might cause some errors. Data furnishers also stated that the quality control efforts among data furnishers might vary due to the extent of data integrity procedures in place. They explained that some smaller data furnishers might not have sophisticated quality control procedures because implementing such a system was expensive. 
On the other hand, errors might occur at any step in the credit reporting process. Consumers could provide inaccurate names or addresses to a data furnisher. A data furnisher might introduce inaccuracies while processing information, performing data entry, or passing information on to the CRAs. And, CRAs might process data erroneously. Figure 1 shows some common causes for errors that might occur during the credit reporting process. CRAs and data furnishers also cited other causes of errors. For example, collection agencies and public records on bankruptcies, tax liens, and judgments were cited as major sources of errors. CRA officials and data furnishers said the growing number of fraudulent credit "repair" clinics that coach consumers to make frivolous reinvestigation requests in an effort to get accurate, though negative, information off the credit report also might cause errors, as disputed information a CRA cannot verify within 30 days is deleted from the consumer's credit report. File segregation, a tactic in which a consumer with a negative credit history tries to create a new credit file by applying for credit using consistent but inaccurate information, was another reported cause for inaccurate credit data. The credit industry has been working on systems to help ensure accuracy since the "reasonable procedures" standard took effect under FCRA in 1970. Within the last decade, CDIA has led efforts to implement industry systems and processes to increase the accuracy of credit reports. In commenting upon accuracy, representatives from CDIA, the CRAs, the Federal Reserve, and the data furnishers stated that credit score models were highly calibrated and accurate and, on the aggregate level, credit reports and scores were highly predictive of credit risk. During the 1970s, the Associated Credit Bureaus (now CDIA) attempted to increase report accuracy by introducing Metro 1, a method of standardizing report formats. 
The goals of Metro 1 were to create consistency in reporting rules and impose a data template on the industry. In conjunction with the industry, in 1996 CDIA created Metro 2, an enhancement of the Metro 1 format that enables finer distinctions in reporting information. For example, Metro 2 allowed CDIA to implement an "Active Military Code" to protect the credit reports of troops serving overseas. Since active military personnel are legally entitled to longer periods to make credit payments without penalty, this new code ensured that data furnishers did not incorrectly report accounts as delinquent. While use of the Metro format is voluntary, CRAs currently receive over 99 percent of the volume of credit data--30,000 furnishers providing a total of 2 billion records per month--in either Metro 1 or Metro 2 format, with over 50 percent sent in Metro 2. One data furnisher who recently switched from Metro 1 to Metro 2 found that data accuracy improved overall, as evidenced by reductions in the number of data rejections by the CRAs and in dispute volume. Those data furnishers that do not use the Metro formats provide data on compact disc, diskette, tape, or other types of electronic media. While the use of standardized reporting formats ensures more consistent reporting of information, the industry has never conducted a study to set a baseline level of error frequency in credit reports and does not currently collect such data, so no one knows the extent to which these systems have improved accuracy. FTC has taken eight formal enforcement actions since the passage of the 1996 FCRA amendments against CRAs, data furnishers, and resellers that directly or indirectly relate to credit report accuracy. FTC receives and tracks FCRA complaint data against CRAs by violation type and uses these data to identify areas that may warrant an enforcement action.
While these data cannot provide the number of violations or frequency of errors in credit reports, since each complaint does not necessarily correspond to a violation, they can give a sense of the relative frequency of complaints surrounding CRAs. We discuss complaint data in more detail in the next section. According to FTC staff, accuracy in the context of FCRA means more than the requirement that CRAs establish "reasonable procedures to assure maximum possible accuracy of their reports." They explained that the statute also seeks to improve accuracy of credit reports by a "self-help" process in which the different participants comply with duties imposed by FCRA. First, creditors and others that furnish information are responsible for accuracy. Second, credit bureaus must take reasonable steps to ensure accuracy. Finally, users of credit reports must notify consumers (provide adverse action notices) about denials of a loan, insurance, job, or other services because of something in their credit report. FTC staff stated that it is crucial that consumers receive adverse action notices so that they can obtain their credit reports and dispute any inaccurate information. For that reason, the Commission has made enforcement in this area a priority. FTC staff stated that their primary enforcement mechanism is to pursue action against a CRA or data furnisher that showed a pattern of repeated violations of the law identified through consumer complaints. According to FTC staff, the Commission has taken eight enforcement actions against CRAs, furnishers, or lenders, since 1996 that directly or indirectly addressed credit report accuracy. One case pertained to a furnisher providing inaccurate information to a CRA, two cases pertained to a furnisher or CRA failing to investigate a consumer dispute, and two actions were taken against lenders that did not provide adverse action notices as required by statute. 
The remaining three cases were against the major CRAs for blocking consumer calls and having excessive hold times for consumers calling to dispute information on their credit reports. In addition to enforcing FCRA, FTC also provides consumer educational materials and advises consumers on their rights (such as the right to sue a CRA or data furnisher for damages and recoup legal expenses). To date, no comprehensive assessments have addressed the impact of the 1996 FCRA credit report accuracy amendments or the potential effects inaccuracies have had on consumers. In addition, because it has not conducted surveys, FTC was not able to provide overall trend data on the frequency of errors in credit reports. Industry officials as well as two studies we reviewed suggest that errors and inaccuracies in credit reports have the potential to both help and hurt individual consumers, while in some instances errors or inaccuracies may have no effect on the consumer's credit score. The impact of any particular error or inaccuracy in a particular credit report will be dependent on the unique and specific circumstances of the consumer. Data on the impact of the 1996 FCRA amendments on credit report accuracy was not available. For instance, we could not identify impact information from the literature we reviewed and industry officials with whom we spoke said they did not collect such data. Furthermore, FTC could not provide overall trend data but did provide FCRA-related consumer complaint data involving CRAs. FTC staff could not say what the trend in the frequency of errors in credit reports has been since the 1996 amendments because that data is not available. However, FTC officials provided consumer complaint data that shows from 1997 through 2002, the number of FCRA complaints involving CRAs received annually by FTC increased from 1,300 to almost 12,000. 
The most common complaints cited against CRAs in 2002 pertained to the violations listed below: provided inaccurate information (5,956 complaints); failed to reinvestigate disputed information (2,300 complaints); provided inadequate phone help (1,291 complaints); disclosed incomplete/improper credit file to customer (1,033 complaints); and improperly conducted reinvestigation of disputed item (771 complaints). Consumer complaint data involving CRAs and FCRA provisions represent 3.1 percent of the total complaints FTC received directly from consumers on all matters in 2002. The FTC staff explained that their knowledge was limited to complaints that came into the agency and that they did not conduct general examinations or evaluations that would enable them to project trends. FTC staff cautioned that it would not be appropriate to conclude that since the complaints against CRAs were on the rise, the accuracy of credit reports was deteriorating. They stated that the increase in the number of complaints could be due to greater consumer awareness of FTC's role with respect to credit reporting, as well as a general trend toward increased consumer awareness of credit reporting and scoring. CRAs and the literature suggest that credit-reporting errors could have both a positive and a negative effect on consumers. One CRA stated that errors occur randomly and may result in an increase, a decrease, or no change in a credit score. Another CRA stated that information erroneously omitted from a credit report, such as a delinquency, judgment, or bankruptcy filing, would tend to raise a credit score, while that same information erroneously posted to the report would tend to lower the score. The Consumer Federation of America study cited earlier also analyzed 258 files to determine whether inconsistencies were likely to raise or lower credit scores.
In approximately half the files reviewed (146 files, or 57 percent), the study could not clearly identify whether inconsistencies in credit reports were resulting in a higher or lower score. The study determined that in the remaining 112 files there was an even split between files that would result in a higher or lower score. The Federal Reserve Bulletin article previously mentioned also concluded that limitations in consumer reporting agency records have the potential to both help and hurt individual consumers. The article further stated that consumers who were hurt by ambiguities, duplications, and omissions in their files had an incentive to correct them, but consumers who were helped by such problems did not. Industry officials and the literature we reviewed suggested that the impact of an error in a consumer's credit report was dependent on the specific circumstance of the information contained in a credit file. CRA and data furnisher officials further pointed out that a variety of factors such as those identified by Fair Isaac, a private software firm that produces credit score models, might impact a credit score. According to the Fair Isaac Web site, their credit score model considers five main categories of information along with their general level of importance to arrive at a score. These categories and their respective weights in determining a credit score include payment history (35 percent), amounts owed (30 percent), length of credit history (15 percent), types of credit in use (10 percent) and new credit (10 percent). As such, no one piece of information or factor alone determines a credit score. For one person, a given factor might be more important than for someone else with a different credit history. In addition, as the information in a credit report changes, so does the importance of any factor in determining a credit score. 
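The weighted-category structure described above can be sketched in a few lines. This is a toy illustration only: the five weights come from the Fair Isaac description cited in this testimony, but the 0-100 subscores, the `composite` helper, and the rounding are hypothetical; the actual FICO model is proprietary and far more complex.

```python
# Toy illustration of a weighted-category score. The category weights are
# those cited from Fair Isaac's public description; the 0-100 subscores
# below are hypothetical inputs, not real model internals.

WEIGHTS = {
    "payment_history": 0.35,
    "amounts_owed": 0.30,
    "length_of_history": 0.15,
    "types_of_credit": 0.10,
    "new_credit": 0.10,
}

def composite(subscores):
    """Combine per-category subscores (0-100) into one weighted value."""
    return round(sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS), 2)

# Two otherwise identical consumers who differ only in payment history:
# because that category carries the largest weight (35 percent), an
# error of a given size moves the composite most when it lands there.
clean = {"payment_history": 90, "amounts_owed": 80,
         "length_of_history": 70, "types_of_credit": 60, "new_credit": 60}
with_error = dict(clean, payment_history=60)  # e.g., a delinquency posted in error

print(composite(clean))       # 78.0
print(composite(with_error))  # 67.5
```

The same 30-point subscore error placed in "new credit" instead would shift the composite by only 3 points, which is one way to read the testimony's point that no single piece of information determines a score and that an error's impact depends on where it falls.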
Fully understanding the impact of errors on consumers' credit scores would require access to consumer credit reports, discussions with consumers to identify errors, and discussions with data furnishers to determine what impact, if any, correction of errors might have on decisions made based on the content of a credit report. The lack of comprehensive information regarding the accuracy of consumer credit reports inhibits any meaningful discussion of what more could or should be done to improve credit report accuracy. Available studies suggest that accuracy could be a problem, but no study has been performed that is representative of the universe of credit reports. Furthermore, any such study would entail the cooperation of the CRAs, data furnishers, and consumers to fully assess the impact of errors on credit scores and underwriting decisions. Because of the importance of accurate credit reports to the fairness of our national credit system, it would be useful to perform an independent assessment of the accuracy of credit reports. Such an assessment could be conducted by FTC or paid for by the industry. The assessment would then form the basis for a more complete and productive discussion of the costs and benefits of making changes to the current system of credit reporting to improve credit report accuracy. Another option for improving the accuracy of credit reports would be to create the opportunity for more reviews of credit reports by consumers. One way this could be accomplished would be to expand the definition of what constitutes an adverse action. Currently, consumers are only entitled to receive a free copy of their credit reports when they receive adverse action notices for credit denials or if they believe that they have been the victim of identity theft. When consumers see their credit reports, they have a chance to identify errors and ask for corrections to ensure the accuracy of their credit reports.
Expanding the criteria for adverse actions to include loan offers with less than the most favorable rates and terms would likely increase the review of credit files by consumers. Such added review of credit files would in all likelihood help to further ensure the overall accuracy of consumer credit reports. However, the associated costs to the industry would also need to be considered against the anticipated benefits of increasing consumer access to credit reports. For further information regarding this testimony, please contact Harry Medina at (415) 904-2000. Individuals making key contributions to this statement include Janet Fong, Jeff R. Pokras, Mitchell B. Rachlis, and Peter E. Rumble. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Accurate credit reports are critical to the credit process--for consumers attempting to obtain credit and to lending institutions making decisions about extending credit. In today's sophisticated and highly calibrated credit markets, credit report errors can have significant monetary implications to consumers and credit granters. In recognition of the importance of this issue, the Senate Committee on Banking, Housing, and Urban Affairs asked GAO to (1) provide information on the frequency, type, and cause of credit report errors, and (2) describe the impact of the 1996 amendments to the Fair Credit Reporting Act (FCRA) on credit report accuracy and potential implications of reporting errors for consumers. Information on the frequency, type, and cause of credit report errors is limited to the point that a comprehensive assessment of overall credit report accuracy using currently available information is not possible. Moreover, available literature and the credit reporting industry strongly disagree about the frequency of errors in consumer credit reports, and lack a common definition for "inaccuracy." The literature and industry do identify similar types of errors and similar causes of errors. Specifically, several officials and reports cited collection agencies and governmental agencies that provide information on bankruptcies, liens, collections, and other actions noted in public records as major sources of errors. Because credit report accuracy is essential to the business activities of consumer reporting agencies and credit granters, the credit industry has developed and implemented procedures to help ensure accuracy. However, no study has measured the extent to which these procedures have improved accuracy. While the Federal Trade Commission (FTC) tracks consumer complaints on FCRA violations, these data are not a reliable measure of credit report accuracy. 
Additionally, FTC has taken eight formal enforcement actions directly or indirectly related to credit report accuracy since Congress enacted the 1996 FCRA amendments. Neither the impact of the 1996 FCRA amendments on credit report accuracy nor the potential implications of errors for consumers is known. Specifically, because comprehensive or statistically valid data on credit report errors before and after the passage of the 1996 FCRA amendments have not been collected, GAO could not identify a trend associated with error rates. Industry officials and studies indicated that credit report errors could either help or hurt individual consumers depending on the nature of the error and the consumer's personal circumstances. To adequately assess the impact of errors in consumer reports would require access to the consumer's credit score and the ability to determine how changes in the score affected the decision to extend credit or the terms of the credit granted. Ultimately, a meaningful independent review in cooperation with the credit industry would be necessary to assess the frequency of errors and the implications of errors for individual consumers.
DOD has a mandate to deliver high-quality products to warfighters when they need them and at a price the country can afford. Quality and timeliness are especially critical to maintain DOD's superiority over others, to counter quickly changing threats, and to better protect and enable the warfighter. U.S. weapons are the best in the world, but the programs to acquire them frequently take significantly longer and cost more money than promised and often deliver fewer quantities and capabilities than planned. It is not unusual for time and money to be underestimated by 20 to 50 percent. Considering that DOD is investing $1.4 trillion to acquire over 75 major weapon systems as of March 2015, cost increases of this magnitude have sizeable effects. Typically, when costs and schedules increase, the buying power of the defense dollar is reduced. Consequences associated with this history of acquisition include: the warfighter gets less capability than promised; weapons perform well, but not as well as planned, and are harder to support than expected; and trade-offs made to pay for cost increases--in effect, opportunity costs--are not explicit. This state of weapon acquisition is not the result of inattention. Many reforms have been instituted over the past several decades, but the above outcomes persist. DOD is in the midst of a series of "Better Buying Power" initiatives begun in June 2010 that have resulted in some improvements, but it is too early to assess their long-term impact. The decision to start a new program is the most highly leveraged point in the product development process. Establishing a sound business case for individual programs depends on disciplined requirements and funding processes.
A solid, executable business case provides credible evidence that (1) the warfighter's needs are valid and that they can best be met with the chosen concept, and (2) the chosen concept can be developed and produced within existing resources--that is, proven technologies, design knowledge, adequate funding, and adequate time to deliver the product when it is needed. A program should not go forward into product development unless a sound business case can be made. If the business case measures up, the organization commits to the development of the product, including making the financial investment. At the heart of a business case is a knowledge-based approach to product development that is both a best practice among leading commercial firms and the approach reflected in DOD's acquisition regulations. For a program to deliver a successful product within available resources, managers should demonstrate high levels of knowledge before significant commitments are made. In essence, knowledge supplants risk over time. Establishing a business case calls for a realistic assessment of risks and costs; doing otherwise undermines the intent of the business case and invites failure. This process requires the user and developer to negotiate whatever trade-offs are needed to achieve a match between the user's requirements and the developer's resources before system development begins. Key enablers of a good business case include: Firm, Feasible Requirements: requirements should be clearly defined, affordable, and clearly informed--thus tempered--by systems engineering; once programs begin, requirements should not change without assessing their potential disruption to the program. Mature Technology: science and technology organizations should shoulder the technology development burden, proving technologies can work as intended before they are included in a weapon system program. 
The principle here is not to avoid technical risk but rather take risk early and resolve it ahead of program start. Incremental, Knowledge-based Acquisition Strategy: rigorous systems engineering coupled with more achievable requirements are essential to achieve faster delivery of needed capability to the warfighter. Building on mature technologies, such a strategy provides time, money, and other resources for a stable design, building and testing of prototypes, and demonstration of mature production processes. Realistic Cost Estimate: sound cost estimates depend on a knowledge-based acquisition strategy, independent assessments, and sound methodologies. An oft-cited quote of David Packard, former Deputy Secretary of Defense, is: "We all know what needs to be done. The question is why aren't we doing it?" We need to look differently at the familiar outcomes of weapon systems acquisition--such as cost growth, schedule delays, large support burdens, and reduced buying power. Some of these undesirable outcomes are clearly due to honest mistakes and unforeseen obstacles. However, they also occur not because they are inadvertent but because they are encouraged by the incentive structure. It is not sufficient to define the problem as an objective process that is broken. Rather, it is more accurate to view the problem as a sophisticated process whose consistent results are indicative of its being in equilibrium. The rules and policies are clear about what to do, but other incentives force compromises. The persistence of undesirable outcomes such as cost growth and schedule delays suggests that these are consequences that participants in the process have been willing to accept. These undesirable outcomes share a common origin: decisions are made to move forward with programs before the knowledge needed to reduce risk and make those decisions is sufficient. 
There are strong incentives within the acquisition culture to overpromise a prospective weapon's performance while understating its likely cost and schedule demands. Thus, a successful business case--one that enables the program to gain approval--is not necessarily the same as a sound one. Incentive to overpromise: The weapon system acquisition culture in general rewards programs for moving forward with unrealistic business cases. Strong incentives encourage deviations from sound acquisition practices. In the commercial marketplace, investment in a new product represents an expense. Company funds must be expended and will not provide a return until the product is developed, produced, and sold. In DOD, new products represent revenue, in the form of a budget line. A program's return on investment occurs as soon as the funding decision is made. Competition with other programs vying for defense dollars puts pressure on program sponsors to project unprecedented levels of performance (often by counting on unproven technologies) while promising low cost and short schedules. These incentives, coupled with a marketplace that is characterized by a single buyer (DOD), low volume, and limited number of major sources, create a culture in weapon system acquisition that encourages undue optimism about program risks and costs. Program and Funding Decisions: Budget requests, Congressional authorizations, and Congressional appropriations are often made well in advance of major program decisions, such as the decision to approve the start of a program. At the time these funding decisions are made, less verifiable knowledge is available about a program's cost, schedule, and technical challenges. This creates a vacuum for optimism to fill. When the programmatic decision point arrives, money is already on the table, which creates pressure to make a "go" decision prematurely, regardless of the risks now known to be at hand. 
Budgets to support major program commitments must be approved well ahead of when the information needed to support the decision is available. Take, for example, a decision to start a new program scheduled for August 2016. The new program would have to be included in the Fiscal Year 2016 budget. This budget request would be submitted to Congress in February 2015--18 months before the program decision review is actually held. It is likely that the requirements, technologies, and cost estimates for the new program--essential to successful execution--may not be very solid at the time of funding decisions. Once the hard-fought budget debates result in funds being appropriated for the program, it is very hard to take it away later, when the actual program decision point is reached. To be sure, this is not to suggest that the acquisition process is foiled by bad actors. Rather, program sponsors and other participants act rationally within the system to achieve goals they believe in. Competitive pressures for funding simply favor optimism in setting cost, schedule, technical, and other estimates. Insufficient Business Cases Are Sanctioned by Funding Approvals: To the extent Congress approves funds for such programs as requested, it sanctions--and thus rewards--optimism and unexecutable business cases. Funding approval--authorizing programs and appropriating funds--is one of the most powerful oversight tools Congress has. The reality is once funding starts, other tools of oversight are relatively weak--they are no match for the incentives to overpromise. So, if funding is approved for a program despite having an unrealistic schedule or requirements, that decision reinforces those characteristics instead of sound acquisition practices. 
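The 18-month gap in the example above follows from simple date arithmetic. As an illustrative check only (this sketch is not part of the testimony; the dates are the ones cited in the example), the lead time between the February 2015 budget submission and the August 2016 program decision review can be computed as:

```python
from datetime import date

def months_between(start: date, end: date) -> int:
    """Whole months from start to end (day-of-month ignored)."""
    return (end.year - start.year) * 12 + (end.month - start.month)

# GAO's example: a budget request submitted to Congress in February 2015
# for a program decision review scheduled for August 2016.
budget_submission = date(2015, 2, 1)
program_decision = date(2016, 8, 1)

lead_time = months_between(budget_submission, program_decision)
print(lead_time)  # 18 -- months between funding request and program decision
```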
Pressure to make exceptions for programs that do not measure up is rationalized in a number of ways: an urgent threat needs to be met; a production capability needs to be preserved; despite shortfalls, the new system is more capable than the one it is replacing; and the new system's problems will be fixed in the future. It is the funding approvals that ultimately define acquisition policy. Recently, I testified before the Senate Armed Services Committee on the Ford Class Aircraft Carrier. We reported in 2007 that ship construction was potentially underestimated by 22 percent, critical technologies were immature, and schedules were likely to slip. In other words, the carrier did not have a good business case. Nonetheless, funding was approved as requested. Today, predicted cost increases have occurred, technology maturation has slipped by nearly 5 years, and the program schedule has been delayed. Notably, the carrier represents a typical program without a good business case, and its outcomes of cost increases and schedule delays are not unique. Funding approvals rewarded the unrealistic business case, reinforcing its success rather than that of a sound business case. Since 1990, GAO has identified a number of reforms aimed at improving acquisition outcomes. Several of those are particularly relevant to changing the acquisition culture and will take the joint efforts of Congress and DOD. Reinforce desirable principles at the start of new programs: The principles and practices programs embrace are determined not by policy, but by decisions. These decisions involve more than the program at hand: they send signals on what is acceptable. If programs that do not abide by sound acquisition principles receive favorable funding decisions, then seeds of poor outcomes are planted. The challenge for decision makers is to treat individual program decisions as more than the case at hand. 
They must weigh and be accountable for the broader implications of what is acceptable or "what will work" and be willing to say no to programs that run counter to best practices. The greatest point of leverage is at the start of a new program. Decision makers must ensure that new programs exhibit desirable principles before funding is approved. Programs that present well-informed acquisition strategies with reasonable and incremental requirements and reasonable assumptions about available funds should be given credit for a good business case. Every year, there is what one could consider a "freshman" class of new acquisitions. This is where DOD and Congress must ensure that they embody the right principles and practices, and make funding decisions accordingly. Identify significant program risks upfront and resource them: Weapon acquisition programs by their nature involve risks, some much more than others. The desired state is not zero risk or elimination of all cost growth. But we can do better than we do now. The primary consequences of risk are often more time and money and unplanned--or latent--concurrency in development, testing, and production. Yet, when significant risks are taken, they are often taken under the guise that they are manageable and that risk mitigation plans are in place. Such plans do not set aside time and money to account for the risks taken. Yet in today's climate, it is understandable--any sign of weakness in a program can doom its funding. Unresourced risk, then, is the "spackle" of the acquisition system that enables the system to operate. This needs to change. If programs are to take significant risks, whether they are technical in nature or related to an accelerated schedule, these risks should be declared and the resource consequences acknowledged and provided. Less risky options and potential off-ramps should be presented as alternatives. 
Decisions can then be made with full information, including decisions to accept the risks identified. If the risks are acknowledged and accepted by DOD and Congress, the program should be supported. More closely align budget decisions and program decisions: Requesting funding for programs 18 or so months ahead of when they will need it stems from a budgeting and planning process intended to make sure money is available in the future. Ensuring that programs are thus affordable is a sound practice. But, DOD and Congress need to explore ways to bring funding decisions closer in alignment with program decisions. This will require more thought and study. The alternative is that DOD and Congress will have to hold programs accountable for sound business cases at the time funding is approved, even if it is 18 months in advance of the program decision. Separate Technology Development from Product Development: Leading commercial companies minimize problems in product development by separating technology development from product development and fully developing technologies before introducing them into the design of a system. These companies develop technology to a high level of maturity in a science and technology environment which is more conducive to the ups and downs normally associated with the discovery process. This affords the opportunity to gain significant knowledge before committing to product development and has helped companies reduce costs and time from product launch to fielding. Although DOD's science and technology enterprise is engaged in developing technology, there are organizational, budgetary, and process impediments which make it difficult to bring technologies into acquisition programs. For example, it is easier to move immature technologies into weapon system programs because they tend to attract bigger budgets than science and technology projects. 
Stronger and more uniform incentives are needed to encourage the development of technologies in the right environment to reduce the cost of later changes, and encourage the technology and acquisition communities to work more closely together to deliver the right technologies at the right time. Develop system engineering and program manager capacity: Systems engineering expertise is essential throughout the acquisition cycle, but especially early, when the feasibility of requirements is being determined, the technical and engineering demands of a design are being understood, and an acquisition strategy for conducting product development is laid out. DOD has fallen short in its attempts to fill systems engineering positions. These positions should be filled and their occupants involved and empowered early to help get programs on a good footing--i.e., a good business case--from the start. Program managers are essential to the success of any program. Program managers handed a program with a poor business case are not put in a position to succeed. Even with a good business case, program managers must have the skill set, business acumen, tenure, and career path to make programs succeed and be rewarded professionally. DOD has struggled to create this environment for program managers. Describing the current acquisition process as "broken" is an oversimplification, because it implies that it can merely be "fixed". The current process, along with its outcomes, has been held in place by a set of incentives--a culture--that has been resistant to reforms and fixes. Seen instead as a process in equilibrium, it is clear that changing it requires a harder, long-term effort by both DOD and Congress. There have been a number of recent reforms directed at DOD. 
Congress shares responsibility for the success of these reforms in the actions it takes on funding programs, specifically by creating enablers for sound business cases, and creating disincentives for programs that do not measure up. Chairman Thornberry, Ranking Member Smith, and Members of the Committee, this concludes my statement and I would be happy to answer any questions. If you or your staff has any questions about this statement, please contact Paul L. Francis at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are David Best, Assistant Director; R. Eli DeVan; Laura Greifner; and Alyssa Weir. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DOD's acquisition of major weapon systems has been on GAO's high risk list since 1990. Over the years, Congress and DOD have continually explored ways to improve acquisition outcomes, including reforms that have championed sound management practices, such as realistic cost estimating, prototyping, and systems engineering. Too often, GAO reports on the same kinds of problems today that it did over 20 years ago. This testimony discusses (1) the performance of the current acquisition system; (2) the role of a sound business case in getting better acquisition outcomes; (3) systemic reasons for persistent problems; and (4) thoughts on actions DOD and Congress can take to get better outcomes from the acquisition process. This statement draws from GAO's extensive body of work on DOD's acquisition of weapon systems and the numerous recommendations GAO has made on both individual weapons and systemic improvements to the acquisition process. U.S. weapon acquisition programs often take significantly longer, cost more than promised and deliver fewer quantities and capabilities than planned. It is not unusual for time and money to be underestimated by 20 to 50 percent. As the Department of Defense (DOD) is investing $1.4 trillion to acquire over 75 major weapon systems as of March 2015, cost increases of this magnitude have sizeable effects. When costs and schedules increase, the buying power of the defense dollar is reduced. Beyond the resource impact, consequences include the warfighter receiving less capability than promised, weapons performing not as well as planned and being harder to support, and trade-offs made to pay for cost increases--in effect, opportunity costs--not being made explicit. GAO's work shows that establishing a sound business case is essential to achieving better program outcomes. A program should not go forward without a sound business case. 
A solid, executable business case provides credible evidence that (1) the warfighter's needs are valid and that they can best be met with the chosen concept, and (2) the chosen concept can be developed and produced within existing resources--such as technologies, design knowledge, funding, and time. Establishing a sound business case for individual programs depends on disciplined requirements and funding processes, and calls for a realistic assessment of risks and costs; doing otherwise undermines the intent of the business case and makes the above consequences likely. Yet, business cases for many new programs are deficient. This is because there are strong incentives within the acquisition culture to overpromise a prospective weapon's performance while understating its likely cost and schedule demands. Thus, a successful business case is not necessarily the same as a sound one. Competition with other programs for funding creates pressures to overpromise. This culture is held in place by a set of incentives that are more powerful than policies to follow best practices. Moreover, the budget process calls for funding decisions before sufficient knowledge is available to make key decisions. Complementing these incentives is a marketplace characterized by a single buyer, low volume, and limited number of major sources. Thus, while it is tempting to describe the acquisition process as broken, it is more instructive to view it as in equilibrium: one in which competing forces consistently lead to starting programs with slim chances of being delivered on time and within cost. Over the years, GAO has identified a number of reforms aimed at improving acquisition outcomes. Several of those are particularly relevant to changing the acquisition culture and will take the joint efforts of Congress and DOD: Ensure that new programs exhibit desirable principles before funding is approved. Identify significant program risks up front and allot sufficient resources. 
More closely align budget and program decisions. Mature technology before including it in product development. Develop system engineering and program manager capacity--sufficient personnel with appropriate expertise and skills.
As we reported in April 2011, ICE CTCEU investigates and arrests a small portion of the estimated in-country overstay population due to, among other things, ICE's competing priorities; however, these efforts could be enhanced by improved planning and performance management. CTCEU, the primary federal entity responsible for taking enforcement action to address in-country overstays, identifies leads for overstay cases; takes steps to verify the accuracy of the leads it identifies by, for example, checking leads against multiple databases; and prioritizes leads to focus on those the unit identifies as being most likely to pose a threat to national security or public safety. CTCEU then requires field offices to initiate investigations on all priority, high-risk leads it identifies. According to CTCEU data, as of October 2010, ICE field offices had closed about 34,700 overstay investigations that CTCEU headquarters assigned to them from fiscal year 2004 through 2010. These cases resulted in approximately 8,100 arrests (about 23 percent of the 34,700 investigations), relative to a total estimated overstay population of 4 million to 5.5 million. About 26,700 of those investigations (or 77 percent) resulted in one of these three outcomes: (1) evidence is uncovered indicating that the suspected overstay has departed the United States; (2) evidence is uncovered indicating that the subject of the investigation is in-status (e.g., the subject filed a timely application with the United States Citizenship and Immigration Services (USCIS) to change his or her status and/or extend his or her authorized period of admission in the United States); or (3) CTCEU investigators exhaust all investigative leads and cannot locate the suspected overstay. 
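The outcome shares cited above can be verified from the reported counts. As an illustrative check only (all figures are the ones stated in the testimony; the script itself is not part of the GAO product), the arrest and non-arrest shares work out as reported:

```python
# Illustrative check of the CTCEU investigation figures cited above
# (investigations closed by ICE field offices, fiscal years 2004-2010).
total_investigations = 34_700
arrests = 8_100
other_outcomes = 26_700  # departed, in-status, or all leads exhausted

arrest_share = arrests / total_investigations
other_share = other_outcomes / total_investigations

print(f"{arrest_share:.0%}")  # ~23 percent of closed investigations
print(f"{other_share:.0%}")   # ~77 percent of closed investigations
```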
Of the approximately 34,700 overstay investigations assigned by CTCEU headquarters that ICE field offices closed from fiscal year 2004 through 2010, ICE officials generally attributed the significant portion that resulted in a departure finding, an in-status finding, or exhausted leads to difficulties in locating suspected overstays and to the timeliness and completeness of data in DHS's systems used to identify overstays. Further, ICE reported allocating a small percentage of its resources in terms of investigative work hours to overstay investigations since fiscal year 2006, but the agency expressed an intention to augment the resources it dedicates to overstay enforcement efforts moving forward. Specifically, from fiscal years 2006 through 2010, ICE reported devoting from 3.1 to 3.4 percent of its total field office investigative hours to CTCEU overstay investigations. ICE attributed the small percentage of investigative resources it reported allocating to overstay enforcement efforts primarily to competing enforcement priorities. According to the ICE Assistant Secretary, ICE has resources to remove 400,000 aliens per year, or less than 4 percent of the estimated removable alien population in the United States. In June 2010, the Assistant Secretary stated that ICE must prioritize the use of its resources to ensure that its efforts to remove aliens reflect the agency's highest priorities, namely nonimmigrants, including suspected overstays, who are identified as high risk in terms of being most likely to pose a risk to national security or public safety. As a result, ICE dedicated its limited resources to addressing overstays it identified as most likely to pose a potential threat to national security or public safety and did not generally allocate resources to address suspected overstays that it assessed as noncriminal and low risk. 
ICE indicated that it may allocate more resources to overstay enforcement efforts moving forward and that it planned to focus primarily on suspected overstays whom ICE has identified as high risk or who recently overstayed their authorized periods of admission. ICE was considering assigning some responsibility for noncriminal overstay enforcement to its Enforcement and Removal Operations (ERO) directorate, which has responsibility for apprehending and removing aliens who do not have lawful immigration status from the United States. However, ERO did not plan to assume this responsibility until ICE assessed the funding and resources doing so would require. ICE had not established a time frame for completing this assessment. We reported in April 2011 that by developing such a time frame and utilizing the assessment findings, as appropriate, ICE could strengthen its planning efforts and be better positioned to hold staff accountable for completing the assessment. We recommended that ICE establish a target time frame for assessing the funding and resources ERO would require in order to assume responsibility for civil overstay enforcement and use the results of that assessment. DHS officials agreed with our recommendation and stated that ICE planned to identify resources needed to transition this responsibility to ERO as part of its fiscal year 2013 resource-planning process. Moreover, although CTCEU established an output program goal and target, and tracked various performance measures, it did not have a mechanism in place to assess the outcomes of its efforts, particularly the extent to which the program was meeting its mission as it relates to overstays--to prevent terrorists and other criminals from exploiting the nation's immigration system. 
CTCEU's program goal is to prevent criminals and terrorists from exploiting the immigration system by proactively developing cases for investigation, and its performance target is to send 100 percent of verified priority leads to field offices as cases. CTCEU also tracks a variety of output measures, such as the number of cases completed and their associated results (i.e., arrested, departed, in-status, or all leads exhausted) and the average hours spent to complete an investigation. While CTCEU's performance target permits it to assess an output internal to the program--the percentage of verified priority leads it sends to field offices for investigation--it does not provide program officials with a means to assess the impact of the program in terms of preventing terrorists and other criminals from exploiting the immigration system. We reported that by establishing such mechanisms, CTCEU could better ensure that managers have information to assist in making decisions for strengthening overstay enforcement efforts and assessing performance against CTCEU's goals. In our April 2011 report, we recommended that ICE develop outcome-based performance measures--or proxy measures if program outcomes cannot be captured--and associated targets on CTCEU's progress in preventing terrorists and other criminals from exploiting the nation's immigration system. DHS officials agreed with our recommendation and stated that ICE planned to work with DHS's national security partners to determine if measures could be implemented. In addition to ICE's overstay enforcement activities, in April 2011 we reported that the Department of State and CBP are responsible for, respectively, preventing ineligible violators from obtaining a new visa or being admitted to the country at a port of entry. 
According to Department of State data, the department denied about 52,800 nonimmigrant visa applications and about 114,200 immigrant visa applications from fiscal year 2005 through fiscal year 2010 due, at least in part, to applicants having previously been unlawfully present in the United States for more than 180 days, according to statute. Similarly, CBP reported that it refused admission to about 5,000 foreign nationals applying for admission to the United States from fiscal year 2005 through 2010 (an average of about 830 per year) specifically because of the applicants' previous status as unlawfully present in the United States for more than 180 days. DHS has not yet implemented a comprehensive biometric system to match available information provided by foreign nationals upon their arrival and departure from the United States. In August 2007, we reported that while US-VISIT biometric entry capabilities were operating at air, sea, and land ports of entry, exit capabilities were not, and that DHS did not have a comprehensive plan or a complete schedule for biometric exit implementation. In addition, we reported that DHS continued to propose spending tens of millions of dollars on US-VISIT exit projects that were not well-defined, planned, or justified on the basis of costs, benefits, and risks. Moreover, in November 2009, we reported that DHS had not adopted an integrated approach to scheduling, executing, and tracking the work that needed to be accomplished to deliver a comprehensive exit solution as part of the US-VISIT program. We concluded that, without a master schedule that was integrated and derived in accordance with relevant guidance, DHS could not reliably commit to when and how it would deliver a comprehensive exit solution or adequately monitor and manage its progress toward this end. We recommended that DHS ensure that an integrated master schedule be developed and maintained. 
DHS concurred and reported, as of July 2011, that the documentation of schedule practices and procedures is ongoing, and that an updated schedule standard, management plan, and management process that are compliant with schedule guidelines are under review. More specifically, with regard to a biometric exit capability at land ports of entry, we reported in December 2006 that US-VISIT officials concluded that, for various reasons, a biometric US-VISIT exit capability could not be implemented without incurring a major impact on land facilities. In December 2009, DHS initiated a land exit pilot to collect departure information from temporary workers traveling through two Arizona land ports of entry. Under this pilot, temporary workers who entered the United States at these ports of entry were required to register their final departure by providing biometric and biographic information at exit kiosks located at the ports of entry. DHS planned to use the results of this pilot to help inform future decisions on the pedestrian component of the long-term land exit component of a comprehensive exit system. With regard to air and sea ports of entry, in April 2008, DHS announced its intention to implement biometric exit verification at air and sea ports of entry in a Notice of Proposed Rule Making. Under this notice, commercial air and sea carriers would be responsible for developing and deploying the capability to collect biometric information from departing travelers and transmit it to DHS. DHS received comments on the notice and has not yet published a final rule. Subsequent to the rule making notice, on September 30, 2008, the Consolidated Security, Disaster Assistance, and Continuing Appropriations Act, 2009, was enacted, which directed DHS to test two scenarios for an air exit solution: (1) airline collection and transmission of biometric exit data, as proposed in the rule making notice and (2) CBP collection of such information at the departure gate. 
DHS conducted two pilots in 2009, and we reported on them in August 2010. Specifically, we reported that the pilots addressed one statutory requirement for a CBP scenario to collect information on exiting foreign nationals. However, DHS was unable to address the statutory requirement for an airline scenario because no airline was willing to participate. We also reported on limitations in the pilots' scope and approach, including deviations not defined in the pilot evaluation plan, such as suspending exit screening at departure gates to avoid flight delays; these limitations curtailed the pilots' ability to inform a decision on a long-term air exit solution and pointed to the need for additional sources of information on air exit's operational impacts. We recommended that the Secretary of Homeland Security identify additional sources of information beyond the pilots, such as comments from the Notice of Proposed Rule Making, to inform an air exit solution decision. DHS agreed with the recommendation and stated that the pilots it conducted would not serve as the sole source of information to inform an air exit solution decision. In July 2011, DHS stated that it continues to examine all options in connection with a final biometric air exit solution and has recently given consideration to using its authority to establish an advisory committee to study and provide recommendations to DHS and Congress on implementing an air exit program. In the absence of a comprehensive biometric entry and exit system for identifying and tracking overstays, US-VISIT and CTCEU primarily analyze biographic entry and exit data collected at land, air, and sea ports of entry to identify overstays. In April 2011, we reported that DHS's efforts to identify and report on visa overstays were hindered by unreliable data. 
Specifically, CBP does not inspect travelers exiting the United States through land ports of entry, including collecting their biometric information, and CBP did not provide a standard mechanism for nonimmigrants departing the United States through land ports of entry to remit their arrival and departure forms. Nonimmigrants departing the United States through land ports of entry turn in their forms on their own initiative. According to CBP officials, at some ports of entry, CBP provides a box for nonimmigrants to drop off their forms, while at other ports of entry departing nonimmigrants may park their cars, enter the port of entry facility, and provide their forms to a CBP officer. These forms contain information, such as arrival and departure dates, used by DHS to identify overstays. If the benefits outweigh the costs, a mechanism to provide nonimmigrants with a way to turn in their arrival and departure forms could help DHS obtain more complete and reliable departure data for identifying overstays. We recommended that the Commissioner of CBP analyze the costs and benefits of developing a standard mechanism for collecting these forms at land ports of entry, and develop a standard mechanism to collect them, to the extent that benefits outweigh the costs. CBP agreed with our recommendation and stated it planned to complete a cost-effective independent evaluation. Further, we previously reported on weaknesses in DHS processes for collecting departure data, and how these weaknesses impact the determination of overstay rates. The Implementing Recommendations of the 9/11 Commission Act required that DHS certify that a system is in place that can verify the departure of not less than 97 percent of foreign nationals who depart through U.S. airports in order for DHS to expand the Visa Waiver Program. 
In September 2008, we reported that DHS's methodology for comparing arrivals and departures for the purpose of departure verification would not inform overall or country-specific overstay rates because DHS's methodology did not begin with arrival records to determine if those foreign nationals departed or remained in the United States beyond their authorized periods of admission. Rather, DHS's methodology started with departure records and matched them to arrival records. As a result, DHS's methodology counted overstays who left the country, but did not identify overstays who have not departed the United States and appear to have no intention of leaving. We recommended that DHS explore cost-effective actions necessary to further improve the reliability of overstay data. DHS reported that it is taking steps to improve the accuracy and reliability of the overstay data, through efforts such as continuing to audit carrier performance and working with airlines to improve the accuracy and completeness of data collection. Moreover, by statute, DHS is required to submit an annual report to Congress providing numerical estimates of the number of aliens from each country in each nonimmigrant classification who overstayed an authorized period of admission that expired during the fiscal year prior to the year for which the report is made. DHS officials stated that the department has not provided Congress annual overstay estimates regularly since 1994 because officials do not have sufficient confidence in the quality of the department's overstay data--which is maintained and generated by US-VISIT. As a result, DHS officials stated that the department cannot reliably report overstay rates in accordance with the statute.
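The difference between the two matching directions described above can be illustrated with a small sketch. This is not DHS or US-VISIT code; the record sets and traveler IDs are hypothetical, and the point is only that a match that starts from departure records can never surface a traveler who has no departure record at all:

```python
from datetime import date

# Hypothetical arrival records: traveler ID -> end of authorized period of admission
arrivals = {
    "A1": date(2010, 3, 1),  # departed on time
    "A2": date(2010, 3, 1),  # departed after the authorized period (overstay who left)
    "A3": date(2010, 3, 1),  # no departure record (suspected in-country overstay)
}

# Hypothetical departure records: traveler ID -> actual departure date
departures = {
    "A1": date(2010, 2, 15),
    "A2": date(2010, 5, 1),
}

as_of = date(2010, 6, 1)  # date the analysis is run

# Methodology described in the report: start with departure records and
# match them back to arrivals -- this counts only overstays who left.
departed_overstays = {t for t, d in departures.items() if d > arrivals[t]}

# Starting with arrival records instead also surfaces travelers with no
# departure record whose authorized period has lapsed.
all_overstays = {
    t for t, end in arrivals.items()
    if (t in departures and departures[t] > end) or (t not in departures and as_of > end)
}

print(sorted(departed_overstays))  # ['A2']
print(sorted(all_overstays))       # ['A2', 'A3']
```

The departure-first pass misses "A3" entirely, which mirrors the finding that DHS's methodology could not identify overstays who remained in the country.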
In addition, in April 2011 we reported that DHS took several steps to provide its component entities and other federal agencies with information to identify and take enforcement action on overstays, including creating biometric and biographic lookouts--or electronic alerts--on the records of overstay subjects that are recorded in databases. However, DHS did not create lookouts for the following two categories of overstays: (1) temporary visitors who were admitted to the United States using nonimmigrant business and pleasure visas and subsequently overstayed by 90 days or less; and (2) suspected in-country overstays who CTCEU deemed not to be a priority for investigation in terms of being most likely to pose a threat to national security or public safety. Broadening the scope of electronic lookouts in federal information systems could enhance overstay information sharing. In April 2011, we recommended that the Secretary of Homeland Security direct the Commissioner of Customs and Border Protection, the Under Secretary of the National Protection and Programs Directorate, and the Assistant Secretary of Immigration and Customs Enforcement to assess the costs and benefits of creating biometric and biographic lookouts for these two categories of overstays. Agency officials agreed with our recommendation and have actions under way to address it. For example, agency officials stated that they have met to assess the costs and benefits of creating lookouts for those categories of overstays. As we reported in March 2011, the Visa Security Program faces several key challenges in implementing operations at overseas posts. For example, we reported that Visa Security Program agents' advising and training of consular officers, as mandated by section 428 of the Homeland Security Act, varied from post to post, and some posts provided no training to consular officers. 
We contacted consular sections at 13 overseas posts, and officials from 5 of the 13 consular sections we interviewed stated that they had received no training from the Visa Security Program agents in the last year, and none of the agents we interviewed reported providing training on specific security threats. At posts where Visa Security Program agents provided training for consular officers, topics covered included fraudulent documents, immigration law, human smuggling, and interviewing techniques. In March 2011, we recommended that DHS issue guidance requiring Visa Security Program agents to provide training for consular officers as mandated by section 428 of the Homeland Security Act. DHS concurred with our recommendation and has actions under way to address it. Further, in March 2011 we reported that Visa Security Program agents performed a variety of investigative and administrative functions beyond their visa security responsibilities, including criminal investigations, attache functions, and regional responsibilities. According to ICE officials, Visa Security Program agents perform non-program functions only after completing their visa security screening and vetting workload. However, both agents and Department of State officials at some posts told us that these other investigative and administrative functions sometimes slowed or limited Visa Security Program agents' visa security-related activities. We recommended that DHS develop a mechanism to track the amount of time spent by Visa Security Program agents on visa security activities and other investigations, in order to determine appropriate staffing levels and resource needs for Visa Security Program operations at posts overseas to ensure visa security operations are not limited. DHS did not concur with our recommendation, stating that ICE currently tracks case investigation hours through its data system, and that adding the metric to the Visa Security Program tracking system would be redundant. 
However, DHS's response did not address our finding that ICE does not have a mechanism that allows the agency to track the amount of time agents spend on both investigations and visa security activities. Therefore, we continue to believe the recommendation has merit and should be implemented. Moreover, we found that ICE's use of 30-day temporary duty assignments to fill Visa Security Program positions at posts created challenges and affected continuity of operations at some posts. Consular officers we interviewed at 3 of 13 posts discussed challenges caused by this use of temporary duty agents. The Visa Security Program's 5-year plan also identified recruitment of qualified personnel as a challenge and recommended incentives for Visa Security Program agents as critical to the program's mission, stating, "These assignments present significant attendant lifestyle difficulties. If the mission is to be accomplished, ICE, like State, needs a way to provide incentives for qualified personnel to accept these hardship assignments." However, according to ICE officials, ICE had not provided incentives to facilitate recruitment for hardship posts. ICE officials stated that they have had difficulty attracting agents to Saudi Arabia, and ICE agents at post told us they have little incentive to volunteer for Visa Security Program assignments. Thus, we recommended that DHS develop a plan to provide Visa Security Program coverage at high-risk posts where the possibility of deploying agents may be limited. DHS agreed with our recommendation and is taking steps to implement it. In addition, ICE developed a plan to expand the Visa Security Program to additional high-risk visa-issuing posts, but ICE had not fully adhered to the plan or kept it up to date.
The program's 5-year expansion plan, developed in 2007, identified 14 posts for expansion between 2009 and 2010, but 9 of these locations had not been established at the time of our March 2011 report, and ICE had not updated the plan to reflect the current situation. Furthermore, ICE had not fully addressed remaining visa risk in high-risk posts that did not have a Visa Security Program presence. ICE, with input from the Department of State, developed a list of worldwide visa-issuing posts that are ranked according to visa risk. Although the expansion plan stated that risk analysis is the primary input to Visa Security Program site selection and that the expansion plan represented an effort to address visa risk, ICE had not expanded the Visa Security Program to some high-risk posts. For example, 11 of the top 20 high-risk posts identified by ICE and the Department of State were not covered by the Visa Security Program at the time of our review. The expansion of the Visa Security Program may be limited by a number of factors--including budget limitations and objections from Department of State officials at some posts--and ICE had not identified possible alternatives that would provide the additional security of Visa Security Program review at those posts that do not have a program presence. As noted above, we recommended that DHS develop a plan to provide Visa Security Program coverage at high-risk posts where the possibility of deploying agents may be limited; DHS concurred and noted actions under way to address the recommendation, such as enhancing information technology systems to allow for screening and reviewing of visa applicants at posts worldwide. As we reported in May 2011, DHS implemented the Electronic System for Travel Authorization (ESTA) to meet a statutory requirement intended to enhance Visa Waiver Program security and took steps to minimize the burden on travelers to the United States added by the new requirement.
However, DHS had not fully evaluated security risks related to the small percentage of Visa Waiver Program travelers without verified ESTA approval. DHS developed ESTA to collect passenger data and complete security checks on the data before passengers board a U.S.-bound carrier. DHS requires applicants for Visa Waiver Program travel to submit biographical information and answers to eligibility questions through ESTA prior to travel. Travelers whose ESTA applications are denied can apply for a U.S. visa. In developing and implementing ESTA, DHS took several steps to minimize the burden associated with ESTA use. For example, ESTA reduced the requirement that passengers provide biographical information to DHS officials from every trip to once every 2 years. In addition, because of ESTA, DHS has informed passengers who do not qualify for Visa Waiver Program travel that they need to apply for a visa before they travel to the United States. Moreover, most travel industry officials we interviewed in six Visa Waiver Program countries praised DHS's widespread ESTA outreach efforts, reasonable implementation time frames, and responsiveness to feedback but expressed dissatisfaction over ESTA fees paid by ESTA applicants. In 2010, airlines complied with the requirement to verify ESTA approval for almost 98 percent of the Visa Waiver Program passengers prior to boarding, but the remaining 2 percent--about 364,000 travelers--traveled under the Visa Waiver Program without verified ESTA approval. In addition, about 650 of these passengers traveled to the United States with a denied ESTA. As we reported in May 2011, DHS had not yet completed a review of these cases to know to what extent they pose a risk to the program.
DHS officials told us that, although there was no official agency plan for monitoring and oversight of ESTA, the ESTA office was undertaking a review of each case of a carrier's boarding a Visa Waiver Program traveler without an approved ESTA application; however, DHS had not established a target date for completing this review. DHS tracked some data on passengers who travel under the Visa Waiver Program without verified ESTA approval but did not track other data that would help officials know the extent to which noncompliance poses a risk to the program. Without a completed analysis of noncompliance with ESTA requirements, DHS was unable to determine the level of risk that noncompliance poses to Visa Waiver Program security and to identify improvements needed to minimize noncompliance. In addition, without analysis of data on travelers who were admitted to the United States without a visa after being denied by ESTA, DHS cannot determine the extent to which ESTA is accurately identifying individuals who should be denied travel under the program. In May 2011, we recommended that DHS establish time frames for the regular review and documentation of cases of Visa Waiver Program passengers traveling to a U.S. port of entry without verified ESTA approval. DHS concurred with our recommendation and committed to establish procedures to review quarterly a representative sample of noncompliant passengers to evaluate, identify, and mitigate potential security risks associated with the ESTA program. Further, in May 2011 we reported that to meet certain statutory requirements, DHS requires that Visa Waiver Program countries enter into three information-sharing agreements with the United States; however, only half of the countries had fully complied with this requirement and many of the signed agreements had not been implemented.
Half of the countries entered into agreements to share watchlist information about known or suspected terrorists and to provide access to biographical, biometric, and criminal history data. By contrast, almost all of the 36 Visa Waiver Program countries entered into an agreement to report lost and stolen passports. DHS, with the support of interagency partners, established a compliance schedule requiring the last of the Visa Waiver Program countries to finalize these agreements by June 2012. Although termination from the Visa Waiver Program is one potential consequence for countries not complying with the information-sharing agreement requirement, U.S. officials have described it as undesirable. DHS, in coordination with the Departments of State and Justice, developed measures short of termination that could be applied to countries not meeting their compliance date. In addition, as of May 2011, DHS had not completed half of the most recent biennial reports on Visa Waiver Program countries' security risks in a timely manner. In 2002, Congress mandated that, at least once every 2 years, DHS evaluate the effect of each country's continued participation in the program on the security, law enforcement, and immigration interests of the United States. The mandate also directed DHS to determine based on the evaluation whether each Visa Waiver Program country's designation should continue or be terminated and to submit a written report on that determination to select congressional committees. According to officials, DHS assesses, among other things, counterterrorism capabilities and immigration programs. However, DHS had not completed the latest biennial reports for 18 of the 36 Visa Waiver Program countries in a timely manner, and over half of these reports were more than 1 year overdue. Further, in the case of 2 countries, DHS was unable to demonstrate that it had completed reports in the last 4 years. DHS cited a number of reasons for the reporting delays.
For example, DHS officials said that they intentionally delayed report completion because they frequently did not receive mandated intelligence assessments in a timely manner and needed to review these before completing Visa Waiver Program country biennial reports. We recommended that DHS take steps to address delays in the biennial country review process so that the mandated country reports can be completed on time. DHS concurred with our recommendation and reported that it would consider process changes to address our concerns with the timeliness of continuing Visa Waiver Program reports. This concludes my prepared statement. I would be pleased to respond to any questions that members of the Subcommittee may have. For further information regarding this testimony, please contact Richard M. Stana at (202) 512-8777 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Rebecca Gambler, Assistant Director; Jeffrey Baldwin-Bott; Frances Cook; David Hinchman; Jeremy Manion; Taylor Matheson; Jeff Miller; Anthony Moran; Jessica Orr; Zane Seals; and Joshua Wiener. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The attempted bombing of an airline on December 25, 2009, by a Nigerian citizen with a valid U.S. visa renewed concerns about the security of the visa process. Further, unauthorized immigrants who entered the country legally on a temporary basis but then overstayed their authorized periods of admission--overstays--could pose homeland security risks. The Department of Homeland Security (DHS) has certain responsibilities for security in the visa process and for addressing overstays. DHS staff review visa applications at certain Department of State overseas posts under the Visa Security Program. DHS also manages the Visa Waiver Program through which eligible nationals from certain countries can travel to the United States without a visa. This testimony is based on GAO products issued in November 2009, August 2010, and from March to May 2011. As requested, this testimony addresses the following issues: (1) overstay enforcement efforts, (2) efforts to implement a biometric exit system and challenges with the reliability of overstay data, and (3) challenges in the Visa Security and Visa Waiver programs. Federal agencies take actions against a small portion of the estimated overstay population, but strengthening planning and assessment of overstay efforts could improve enforcement. Within DHS, U.S. Immigration and Customs Enforcement's (ICE) Counterterrorism and Criminal Exploitation Unit (CTCEU) is the lead agency responsible for overstay enforcement. CTCEU arrests a small portion of the estimated overstay population in the United States because of, among other things, ICE's competing priorities, but ICE expressed an intention to augment its overstay enforcement resources. From fiscal years 2006 through 2010, ICE reported devoting about 3 percent of its total field office investigative hours to CTCEU overstay investigations. 
ICE was considering assigning some responsibility for noncriminal overstay enforcement to its Enforcement and Removal Operations directorate, which apprehends and removes aliens subject to removal from the United States. In April 2011, GAO reported that by developing a time frame for assessing needed resources and using the assessment findings, as appropriate, ICE could strengthen its planning efforts. Moreover, in April 2011, GAO reported that CTCEU tracked various performance measures, but did not have a mechanism to assess the outcomes of its efforts. GAO reported that by establishing such a mechanism, CTCEU could better ensure that managers have information to assist in making decisions. DHS has not yet implemented a comprehensive biometric system to match available information (e.g., fingerprints) provided by foreign nationals upon their arrival and departure from the United States and faces reliability issues with data used to identify overstays. GAO reported that while the United States Visitor and Immigrant Status Indicator Technology Program's biometric entry capabilities were operating at ports of entry, exit capabilities were not, and DHS did not have a comprehensive plan for biometric exit implementation. DHS conducted pilots to test two scenarios for an air exit solution in 2009, and in August 2010, GAO concluded that the pilots' limitations--including constraints not defined in the pilot evaluation plan, such as suspending exit screening at departure gates to avoid flight delays--curtailed DHS's ability to inform a decision for a long-term exit solution. Further, in April 2011, GAO reported that there was no standard mechanism for nonimmigrants departing the United States through land ports of entry to remit their arrival and departure forms. Such a mechanism could help DHS obtain more complete departure data for identifying overstays.
GAO identified various challenges in the Visa Security and Visa Waiver programs related to planning and assessment efforts. For example, in March 2011, GAO found that ICE developed a plan to expand the Visa Security Program to additional high-risk posts, but ICE had not fully adhered to the plan or kept it up to date. Further, ICE had not identified possible alternatives that would provide the additional security of Visa Security Program review at those high-risk posts that do not have a program presence. In addition, DHS implemented the Electronic System for Travel Authorization (ESTA) to meet a statutory requirement intended to enhance Visa Waiver Program security and took steps to minimize the burden on travelers to the United States added by the new requirement. However, DHS had not fully evaluated security risks related to the small percentage of Visa Waiver Program travelers without verified ESTA approval. GAO has made recommendations in prior reports that, among other things, call for DHS to strengthen management of overstay enforcement efforts, mechanisms for collecting data from foreign nationals departing the United States, and planning for addressing certain Visa Security and Visa Waiver programs' risks. DHS generally concurred with these recommendations and has actions planned or underway to address them.
Chattanooga is located in VA's Mid South Healthcare Network, which comprises Tennessee and portions of nine other states. For CARES purposes, the Mid South Network designated a 75-county area as a health care delivery market--referred to as the Central Market. In fiscal year 2001, 78,656 enrolled veterans resided in this market. As figure 1 shows, Chattanooga, Tennessee, is located in the southeastern part of the Central Market, which serves veterans residing in the central portion of Tennessee, as well as veterans in southern Kentucky and northern Georgia. Within this market, VA currently operates hospitals located in Murfreesboro and Nashville, Tennessee, and six community-based clinics (including one located in Chattanooga). Although VA does not operate a hospital in the Chattanooga area, a broad range of non-VA medical services and providers is available in the Chattanooga area, including 16 hospitals. Of 5 hospitals located in the city itself, the largest is the Erlanger Medical Center--a tertiary care referral center and the region's only Level One trauma center. In addition, there is a wide variety of specialty care, such as cardiology and rheumatology, provided by non-VA physicians in the Chattanooga area. Imaging, diagnostic, and laboratory services, such as endoscopy, colonoscopy, or nuclear medicine scanning, are also available. The range of inpatient medicine and surgery services available at Chattanooga-area hospitals is comparable to services provided at VA hospitals in Nashville and Murfreesboro, according to VA Mid South Network officials. For purposes of our study, we defined the Chattanooga area as Hamilton County, which includes the City of Chattanooga, and 17 surrounding counties. In fiscal year 2001, 21 percent (16,379 enrolled veterans) of all enrolled veterans in the Central Market resided in this area. Figure 2 highlights the 18-county Chattanooga area. 
As figure 3 shows, VA estimates that the veteran population in the Chattanooga area will decline by about 25,600 veterans from fiscal year 2001 through fiscal year 2022--a decrease of almost 27 percent. During that same period, however, VA projects that Chattanooga-area veterans enrolled in VA's health care system will rise by about 5,000--an increase of more than 30 percent. Moreover, within the Central Market, VA expects the enrolled veterans' workload for inpatient hospital and outpatient primary and specialty care to double through fiscal year 2022, in large part, as a result of the projected growth in the Chattanooga-area enrolled population as well as the aging of that population. For example, 43 percent of the 16,379 enrolled veterans were 65 years of age or older as of September 2001. Almost all Chattanooga-area veterans faced travel times that exceeded VA's travel time guidelines for accessing inpatient hospital care. Also, about half faced travel times that exceeded VA's guideline for outpatient primary care. In addition, appointment waiting times for initial outpatient primary care and specialty care consultations exceeded VA's guidelines, although VA officials recently have taken several steps to shorten appointment waiting times. Almost all (99 percent) of the 16,379 Chattanooga-area enrolled veterans, as of September 2001, faced travel times that exceeded VA guidelines for travel to the nearest VA hospitals in Murfreesboro and Nashville. Almost two-thirds of Chattanooga-area veterans whose travel times exceeded VA guidelines lived in five urban counties to which the 60-minute guideline applies--Hamilton and Bradley counties in Tennessee and Catoosa, Walker, and Whitfield counties in Georgia. The rest (36 percent) lived in rural counties to which the 90-minute guideline applies. As figure 4 shows, Chattanooga is about 120 minutes by car from Murfreesboro, the nearest VA hospital. 
Therefore, those veterans residing in the five urban counties faced travel times to Murfreesboro or Nashville that were double VA's 60-minute urban travel guideline; veterans living in most of the 13 rural counties also faced travel times well beyond VA's 90-minute rural guideline. Moreover, VA provided over 95 percent of its inpatient hospital workload for Chattanooga-area veterans at VA hospitals in Murfreesboro and Nashville during fiscal year 2002, with less than 5 percent provided by non-VA hospitals in Chattanooga. During that fiscal year, Chattanooga-area veterans had a total of 685 admissions that resulted in a total workload of 7,213 bed days of care. Of these admissions, 580 (6,895 bed days of care) were to the VA hospitals in Murfreesboro or Nashville; the remaining 105 admissions (318 bed days of care) were to Chattanooga hospitals, primarily the Erlanger Medical Center. Local admissions were few, in part, because Mid South Network officials imposed restrictions on the VA Chattanooga clinic's referral practices. For example, when purchasing care on a fee-for-service basis, providers were to refer veterans to local hospitals only when care was not available at VA hospitals in Murfreesboro or Nashville or the veterans' medical conditions precluded travel to those sites. Also, in implementing a contract with the Erlanger Medical Center, network officials instructed VA clinic providers to limit referrals to Erlanger to only veterans with less severe medical conditions, such as those who did not require surgery or hospital stays longer than 5 days. Network officials stated that restrictions were not related to the availability of local care, in that the array of services available at Chattanooga-area hospitals was comparable to services provided at VA hospitals in Murfreesboro and Nashville.
Rather, they said that such restrictions were necessary to manage resources effectively, as well as to ensure the patient workload needed to support medical education activities at VA's Murfreesboro hospital. We estimate that during fiscal year 2002, these referral restrictions applied to 246 admission decisions that were recommended by Chattanooga clinic providers. Of these admissions, almost 60 percent were to VA hospitals in Murfreesboro or Nashville rather than non-VA hospitals in Chattanooga and were generally consistent with the restrictions imposed by the Mid South Network. The remaining 40 percent (101 admissions) were to non-VA hospitals in Chattanooga, with about two-thirds financed on a fee-for-service basis and the rest through the VA-Erlanger contract. In fiscal year 2001, more than half (about 8,400) of the 16,379 Chattanooga-area enrolled veterans faced travel times that exceeded VA's 30-minute travel guideline for accessing care at VA's nearest primary care clinic. The remaining 8,000 Chattanooga-area enrolled veterans lived within 30 minutes of VA community-based clinics in Chattanooga, Tullahoma, or Knoxville. Although VA also operates outpatient primary care clinics in its hospitals in Murfreesboro and Nashville, these clinics are all considerably farther than 30 minutes' travel time from the Chattanooga-area veterans' residences. Of the 8,400 enrolled veterans who faced travel times to a VA primary care clinic that were longer than 30 minutes, about 3,375 (40 percent) were in four counties, each of which had from 775 to 884 such enrolled veterans. The remaining 5,030 enrolled veterans were in 14 other Chattanooga-area counties, each of which had from 117 to 608 enrolled veterans who faced travel times that exceeded VA's guideline. As figure 5 shows, 4 counties had fewer than 250 such veterans.
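The fiscal year 2002 workload figures cited above are internally consistent, as a quick arithmetic check shows. This is an illustrative check using only numbers from the text, not part of GAO's analysis:

```python
# Inpatient workload for Chattanooga-area veterans, fiscal year 2002 (figures from the text)
total_admissions, va_admissions, local_admissions = 685, 580, 105
total_bed_days, va_bed_days, local_bed_days = 7213, 6895, 318

assert va_admissions + local_admissions == total_admissions
assert va_bed_days + local_bed_days == total_bed_days

# "less than 5 percent provided by non-VA hospitals in Chattanooga"
print(f"local share of bed days: {local_bed_days / total_bed_days:.1%}")

# Of 246 restricted admission decisions, 101 (the "remaining 40 percent")
# went to non-VA hospitals in Chattanooga.
restricted_decisions, local_restricted = 246, 101
print(f"local share of restricted decisions: {local_restricted / restricted_decisions:.0%}")
```

The local share of bed days works out to about 4.4 percent, matching the "less than 5 percent" figure, and 101 of 246 decisions is about 41 percent, consistent with the report's "remaining 40 percent."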
Of 1,858 Chattanooga-area veterans awaiting initial visits with Chattanooga clinic outpatient primary care providers during fiscal year 2002, fewer than 7 percent (126) received appointments within VA's appointment waiting time guideline of 30 days or less from the time of the request. Chattanooga clinic officials explained that these scheduling delays were exacerbated by increased requests for outpatient primary care initial appointments--averaging 50 per week. In response, Chattanooga clinic officials have taken a variety of actions to expedite the scheduling of initial outpatient primary care appointments. For example, they have increased the number of providers and necessary support personnel and extended the clinic's hours of operation to include Saturdays and evenings. Also, they made arrangements for a provider at VA's Tullahoma, Tennessee, clinic to see some Chattanooga-area enrolled veterans for initial outpatient primary care appointments, with subsequent outpatient primary care appointments scheduled with Chattanooga clinic providers. As a result of these efforts, waiting times for many Chattanooga-area veterans were shorter than they otherwise would have been, although they continued to exceed VA's 30-day guideline. For example, in the first quarter of fiscal year 2002, 99 percent of veterans seeking initial primary care appointments waited longer than 6 months; by the fourth quarter of fiscal year 2002, 66 percent waited 6 months or longer. Moreover, Chattanooga clinic officials told us that appointments for enrolled veterans seeking initial outpatient primary care visits, as of July 2003, were generally scheduled within 60 days--a significant improvement but still twice as long as VA's 30-day appointment waiting time guideline. Clinic officials said that given the challenges involved in hiring providers and support staff at the clinic and the increasing workload, further waiting time reductions will be difficult to achieve. 
Waiting times for outpatient specialty care appointments that exceed VA's 30-day guideline have been a long-standing problem for Chattanooga-area veterans. For example, using data from VA's 1999 IG report on Chattanooga veterans' care, we found that for veterans served at the Chattanooga clinic, only 9 percent of 353 sampled outpatient specialty consultation requests were scheduled within 30 days. Moreover, 45 percent of Chattanooga-area veterans seeking outpatient specialty care appointments waited more than 60 days, including 16 percent who waited longer than 90 days. Similarly, our analysis of 468 requests for outpatient specialty care appointments made by Chattanooga clinic providers during October 2002 found long waiting times. For example, 21 percent of these specialty care appointments took more than 90 days to be scheduled, compared to 16 percent in 1999, based on data from the IG report. A slightly higher percentage of the October 2002 requests for appointments, however, were scheduled within 30 days--13 percent compared to 9 percent, based on the IG's data. However, during fiscal year 2003, VA officials took several steps--such as expanded use of non-VA specialists in the Chattanooga area--that they said significantly shortened the long waiting times that enrolled veterans previously experienced to obtain outpatient specialty care appointments. Chattanooga clinic officials informed us that as of July 2003, providers' requests for outpatient specialty care appointments--with the exception of dermatology, neurology, and urology appointments--were generally scheduled within VA's 30-day waiting time guideline. Chattanooga clinic officials attributed the fiscal year 2003 reduction in the time necessary to obtain an outpatient specialty care appointment primarily to the expanded use of local specialists on a fee-for-service basis.
Other steps that VA officials took to reduce the time necessary to obtain outpatient specialty care appointments included increased use of telemedicine--a system that allows patients and providers physically located in a specially equipped Chattanooga clinic exam room to consult with VA specialists in Murfreesboro and Nashville without actually traveling to those locations. Also, support staff in the Chattanooga clinic was increased, including the addition of an administrator to coordinate the scheduling of local fee-basis specialty care. To emphasize to clinic staff the importance of VA's 30-day appointment waiting time guideline and the flexibility of obtaining care locally, the clinic manager said that when one provider could not schedule an appointment within 30 days, the manager contacted other local providers to determine who could meet the time frame, so that VA's waiting time guideline could be met as often as possible. VA's draft CARES plan includes a proposal to shorten Chattanooga-area veterans' travel times by purchasing inpatient care from non-VA hospitals in Chattanooga. However, it also proposes to shift inpatient workload from VA's Murfreesboro hospital to VA's Nashville hospital, which would lengthen travel times for Chattanooga-area veterans who are unable to receive care locally and who would have otherwise been served at the Murfreesboro hospital. Regarding outpatient care, the draft CARES plan calls for a range of actions, including opening new community-based clinics, that could shorten both travel and appointment waiting times for initial outpatient primary care and specialty care appointments. As a result of the draft CARES plan, travel times for inpatient care would decrease for some veterans while increasing for others.
The plan proposes increased purchasing of inpatient medicine and surgery from non-VA hospitals in Chattanooga, as well as shifting to VA's Nashville hospital the inpatient surgery and medicine workload at the Murfreesboro facility that is not necessary to support the needs of its long-term psychiatry and nursing home patients. The plan, however, does not describe the extent to which these changes could affect veterans in the 18-county Chattanooga area. To assess the potential impact of the proposed changes, we compared VA's workload data for Chattanooga-area veterans during fiscal year 2002 and Mid South Network officials' estimates of Chattanooga-area veterans' workload to be provided in Murfreesboro, Nashville, and non-VA hospitals as a result of the proposed workload shifts. During fiscal year 2002, about 5 percent of Chattanooga-area veterans' workload was purchased locally and 95 percent was provided in VA hospitals in Murfreesboro and Nashville. The draft national CARES plan does not quantify the extent to which VA plans to contract locally for the inpatient medicine and surgery workload in Chattanooga. Based on our analysis of workload projections contained in the plan's supporting documents, we estimate that local purchases would amount to 29 percent of the inpatient medicine and surgery workload from the 18 Chattanooga-area counties, compared to the 5 percent that VA purchased in fiscal year 2002--a roughly fivefold increase. While this represents a significant improvement, it nonetheless means that over 70 percent of the inpatient medicine and surgery workload generated by Chattanooga-area veterans would continue to be served at the VA hospitals in Murfreesboro or Nashville. Furthermore, three-quarters of all local purchases are expected to benefit enrolled veterans in Hamilton and Bradley counties, primarily because these two counties have the largest enrolled populations.
Mid South Network officials told us that as in the past, the inpatient workload to be purchased from non-VA hospitals in Chattanooga would be based on the severity of veterans' medical conditions. Chattanooga-area veterans with less severe conditions would be served in Chattanooga; those with more severe conditions would continue to travel to Nashville to receive inpatient care. However, VA expects to place fewer restrictions on local purchases of hospital care than under the VA-Erlanger contract. For example, under the draft CARES plan, inpatient surgeries would be performed locally, whereas during fiscal year 2002 all such surgeries were routinely referred to VA hospitals in Murfreesboro or Nashville. Also, we estimate that shifting inpatient workload from the VA hospital in Murfreesboro to Nashville would result in lengthened travel times for Chattanooga-area veterans who do not have care purchased locally and who otherwise would have been served at the Murfreesboro hospital. We estimate that 14 percent of the Chattanooga-area veterans' workload would be affected by the shift, given that an estimated 54 percent of the total workload would be handled in Nashville, compared to 40 percent in fiscal year 2002. Affected veterans would experience diminished access to inpatient care, in that their travel times, which already exceed VA's travel time guidelines, would be about 20 minutes longer than the travel times they would experience if care were provided in Murfreesboro. The draft CARES plan calls for opening new community-based clinics and other changes that would reduce travel and waiting times for enrolled veterans residing in the 18-county Chattanooga area. In fiscal year 2001, about 8,400 Chattanooga-area enrolled veterans faced travel times for primary care that exceeded VA's 30-minute guideline.
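The 14 percent estimate above follows from simple share arithmetic: the workload affected by the shift is the growth in Nashville's share of the total. A minimal sketch, in which the Murfreesboro shares are inferred as the remainder of the percentages given in the text rather than taken from VA's underlying data:

```python
# Workload shares for Chattanooga-area veterans' inpatient medicine and surgery.
# Local-purchase and Nashville shares come from the report; the Murfreesboro
# share is inferred as the remainder and is therefore an assumption.
fy2002 = {"local": 0.05, "nashville": 0.40, "murfreesboro": 0.55}
projected = {"local": 0.29, "nashville": 0.54, "murfreesboro": 0.17}

# Workload affected by the Murfreesboro-to-Nashville shift: the growth in
# Nashville's share of the total.
shifted_to_nashville = projected["nashville"] - fy2002["nashville"]
print(f"{shifted_to_nashville:.0%} of workload shifts to Nashville")
```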
The proposed clinics, to be located in McMinn, Roane, and Warren counties in Tennessee and Whitfield County in Georgia, would reduce travel times for about 2,700 (one-third) of those enrolled veterans so that they would be within the 30-minute guideline. The remaining 5,700 enrolled veterans would continue to face travel times longer than VA's 30-minute guideline. Figure 6 shows the distribution by county of those Chattanooga-area enrolled veterans who, as of September 2001, would have lived more than 30 minutes from a VA primary care clinic had the four proposed clinics been operational in that year. The draft CARES plan does not provide a target date for opening the Chattanooga-area clinics because VA did not classify them as the highest national priorities, and as such, did not include them on the list of clinics to be opened by the end of fiscal year 2010. To be considered the highest priority, the number of enrolled veterans who do not meet access guidelines would have to be greater than 7,000 enrollees per clinic. The four proposed clinics are significantly smaller in that they are expected to provide 30-minute access for a total of about 2,700 additional Chattanooga-area enrolled veterans. Mid South Network officials expect that, if opened, the four new community-based clinics would shift a portion of the outpatient primary and specialty care workload away from the Chattanooga clinic. Redistributing workload in this way would likely benefit many veterans whose outpatient primary and specialty care appointment waiting times exceed VA's guidelines. Moreover, these new clinics would be expected to complement other actions that could enhance outpatient primary and specialty care access, including reduced appointment waiting times for Chattanooga-area veterans. For example, the draft CARES plan proposes to expand capacity at existing community-based clinics and increase the use of telemedicine and purchases of specialty outpatient services from non-VA providers.
The plan does not provide specifics or time frames for what, where, or when such actions would occur. We recognize that, in making nationwide CARES decisions, the Secretary of Veterans Affairs will need to make trade-offs regarding the costs and benefits of alternatives for better aligning VA's capital assets and services. As part of this process, the Secretary will need to decide whether additional improvements to access, beyond those in the draft national CARES plan, are warranted in the Chattanooga area. Although the draft CARES plan proposes actions that could enhance Chattanooga-area veterans' access to VA health care, the majority of Chattanooga-area veterans are expected to continue to face travel times for inpatient medicine and surgery services that far exceed VA's inpatient travel guidelines, even if VA purchases an estimated 29 percent of inpatient workload from non-VA, Chattanooga-area providers as the draft CARES plan proposes. Moreover, access to hospital care for some Chattanooga-area veterans could actually worsen because the proposed transfer of inpatient workload from VA's Murfreesboro hospital to its Nashville hospital would require some veterans previously served in Murfreesboro to drive farther for inpatient care, affecting an estimated 14 percent of Chattanooga-area veterans' workload. Given that the non-VA hospitals in Chattanooga can provide an array of inpatient medicine and surgery services comparable to VA's hospitals in Murfreesboro and Nashville, it seems possible that VA could purchase more than 29 percent of Chattanooga-area veterans' inpatient workload locally. Moreover, even though the draft CARES plan proposes opening four community-based clinics, these clinics would likely not be opened before fiscal year 2011.
Although they would enhance outpatient access for 2,700 Chattanooga-area veterans, about 5,700 enrolled veterans would continue to face travel times for outpatient primary care that exceed VA's guideline because existing and proposed clinics are more than 30 minutes from where they live. We recommend that as part of his deliberations concerning whether additional access improvements for Chattanooga-area veterans beyond those contained in the draft CARES plan are warranted, the Secretary of Veterans Affairs explore alternatives such as purchasing inpatient care locally for a larger proportion of Chattanooga-area veterans' workload, particularly focusing on those veterans who may experience longer travel times as a result of the proposed shift of inpatient workload from Murfreesboro to Nashville; expediting the opening of the four proposed community-based clinics; and providing primary care locally for more of those veterans whose access will remain outside VA's travel guideline, despite the opening of the four clinics. In written comments on a draft of this report, VA's Under Secretary for Health thanked us for our recommendations and stated that he will provide them to the Secretary for consideration during his review of the CARES Commission's report and ask that he consider them in the final CARES decision-making process. VA also provided technical comments that we included, where appropriate, to clarify or expand our discussion. We are sending copies of this report to the Secretary of Veterans Affairs and other interested parties. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. We will also make copies available to others upon request. If you or your staff have any questions about this report, call me at (202) 512-7101. Other GAO staff who contributed to this report are listed in appendix II.
Our objectives were to (1) assess how Chattanooga-area veterans' access to inpatient hospital and outpatient primary and specialty care compared to the Department of Veterans Affairs' (VA) established travel time and appointment waiting time guidelines and (2) determine how VA's draft Capital Asset Realignment for Enhanced Services (CARES) plan could affect Chattanooga-area veterans' access to such care. For purposes of our work, Chattanooga-area veterans comprise those residing in 18 counties--Hamilton County, which includes the city of Chattanooga, and 17 surrounding counties; the 18 counties are all closer (as measured by travel time) to the VA clinic and non-VA hospitals in Chattanooga than to VA hospitals and clinics in Murfreesboro and Nashville. We obtained information from and interviewed officials at VA's Mid South Network and its Chattanooga clinic; VA headquarters, including the CARES National Program Office; the Erlanger Medical Center in Chattanooga, Tennessee; and the VA Inspector General's Office of Healthcare Inspections. Regarding travel times, we examined how Chattanooga-area veterans' access to VA health care compared to VA guidelines by using a model developed by the Department of Energy to calculate the time needed for enrolled veterans to travel from their residences to the nearest VA hospitals and clinics. This model takes into account key variables affecting travel times, including speed limits attainable on different types of roads, such as rural roads or interstate highways. We evaluated its methodology and assumptions and found them to be sufficiently accurate for our purposes. We used VA's CARES databases for demographic and workload information for the 16,379 veterans from those 18 counties who were enrolled in VA's health care system as of fiscal year 2001.
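The driving-time model described above can be pictured as summing segment travel times, where each road segment contributes its length divided by the attainable speed for that road type. The sketch below is a minimal illustration under assumed speeds and a hypothetical route; the actual Department of Energy model incorporates many more variables:

```python
# Hypothetical attainable speeds (mph) by road type; the real model's values differ.
SPEED_MPH = {"rural": 45, "interstate": 65, "urban": 30}

def travel_minutes(segments):
    """Estimate door-to-facility driving time from (road_type, miles) segments."""
    return sum(miles / SPEED_MPH[road_type] * 60 for road_type, miles in segments)

# Hypothetical route from a veteran's residence to the nearest VA clinic:
# 10 miles of rural road, then 20 miles of interstate.
route = [("rural", 10), ("interstate", 20)]
minutes = travel_minutes(route)
exceeds_primary_care_guideline = minutes > 30  # VA's 30-minute guideline
```

Applying such an estimate to every enrolled veteran's residence, and taking the nearest facility, yields the county-level percentages of enrollees within and outside the access guidelines.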
We compared these results with the inpatient and outpatient primary care travel time guidelines that VA used in its CARES planning to determine the percentage of enrollees, by county, who lived within the inpatient and outpatient access guidelines. We did not analyze travel times for outpatient specialty care because VA did not have guidelines for such care. In addition, we determined Chattanooga veterans' access to inpatient care at non-VA Chattanooga hospitals by obtaining inpatient admissions data and other information from officials of the Mid South Network; the VA Chattanooga clinic; the Erlanger Medical Center in Chattanooga; and VA's network data service centers in Atlanta, Georgia, Chicago, Illinois, Tuscaloosa, Alabama, and Durham, North Carolina. We used VA's Computerized Patient Record System to extract data from 60 of 580 medical records to compile a generalizable profile of all fiscal year 2002 admissions of Chattanooga-area veterans to VA hospitals in Murfreesboro and Nashville. To evaluate information contained in the VA-Erlanger inpatient contract, we reviewed contract documents and conducted interviews with VA's clinic staff and network officials, including those in the network's business office, as well as legal and other officials from the Erlanger Medical Center. Regarding waiting times, we interviewed Mid South Network and Chattanooga clinic staff and analyzed workload data compiled by clinic staff. For example, we analyzed the clinic's fiscal year 2002 waiting lists to identify the number of veterans who enrolled for primary care and the number of days they waited for their first appointment with a primary care provider. We compared these results to VA's 30-day appointment waiting time guideline. In addition, using automated medical records and clinic data, we collected information on Chattanooga clinic providers' requests for specialty consultations. 
We used this information to determine the number of days needed to obtain an appointment with a specialist. In May 2003, we reviewed all such requests made by clinic providers in October 2002, selecting this time frame to ensure that VA staff had sufficient time to schedule the requested appointments by the time we conducted our review. We then analyzed the results from this review and compared these results to VA's 30-day waiting time guidelines and also to the waiting times reported by VA's Inspector General in his office's 1999 performance review of the Chattanooga clinic. To determine how VA's draft CARES plan could affect Chattanooga-area veterans' access to VA inpatient health care services, we examined the draft national CARES plan; the Mid South Network's CARES planning documents; and workload data produced by VA's CARES Program Office, the Mid South Network office, and the Chattanooga clinic. We also held discussions with VA officials. To evaluate effects of the CARES proposal to shift inpatient workload from VA's Murfreesboro hospital to Nashville and non-VA hospitals in Chattanooga, we analyzed Mid South Network data for Chattanooga-area veterans' inpatient workload at those locations during fiscal year 2002 and estimated the workload that would be served at those locations if the CARES proposal were implemented. In addition, we used the Department of Energy driving time model to analyze the extent to which access would change if VA opened the additional primary care clinics proposed in the national draft CARES plan. Also, we analyzed the reliability of key databases to ensure that there were no material errors or inconsistencies. For example, we used information obtained through our medical record review to cross-check inpatient workload data regarding admissions to Murfreesboro and Nashville during fiscal year 2002 and found those data to be sufficiently reliable. 
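The waiting-time comparisons described above reduce to counting the days between an appointment request and the scheduled appointment, then checking each count against VA's 30-day guideline. A minimal sketch with invented dates (not actual Chattanooga clinic data):

```python
from datetime import date

# (request date, scheduled appointment date) pairs -- invented for illustration.
requests = [
    (date(2002, 10, 1), date(2002, 10, 24)),
    (date(2002, 10, 3), date(2003, 1, 15)),
    (date(2002, 10, 7), date(2002, 11, 2)),
]

waits = [(appt - req).days for req, appt in requests]
within_guideline = sum(d <= 30 for d in waits)   # VA's 30-day guideline
share_within = within_guideline / len(waits)
```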
Also, we compared outpatient specialty consultation information with appointment scheduling information contained in VA's computerized record system. Lastly, we compared CARES demographic data on Chattanooga-area veterans with data in VA's national enrollment data file for fiscal year 2002. Lisa Gardner, Julian Klazkin, John Mingus, Daniel Montinez, Keith Steck, and Paul Reynolds made major contributions to this report.
Veterans residing in Chattanooga, Tennessee, have had difficulty accessing Department of Veterans Affairs (VA) health care. In response, VA has acted to reduce travel times to medical facilities and waiting times for appointments with primary and specialty care physicians. Recently, VA released a draft national plan for restructuring its health care system as part of a planning initiative known as Capital Asset Realignment for Enhanced Services (CARES). GAO was asked to assess Chattanooga-area veterans' access to inpatient hospital and outpatient primary and specialty care against VA's guidelines for travel times and appointment waiting times and to determine how the draft CARES plan would affect Chattanooga-area veterans' access to such care. Almost all (99 percent) of the 16,379 enrolled veterans in the 18-county Chattanooga area, as of September 2001, faced travel times that exceeded VA's guidelines for accessing inpatient hospital care. During fiscal year 2002, only a few Chattanooga-area veterans were admitted to non-VA hospitals in Chattanooga--constituting about 5 percent of inpatient workload. In addition, over half (8,400) of Chattanooga-area enrolled veterans faced travel times that exceeded VA's 30-minute guideline for outpatient primary care. Also, waiting times for scheduling initial outpatient primary and specialty care appointments frequently exceeded VA's 30-day guideline. VA's draft CARES plan would shorten travel times for some Chattanooga-area veterans but lengthen travel times for others. Under the plan, the amount of inpatient care VA purchases from non-VA hospitals in Chattanooga would increase from 5 percent to 29 percent, thereby reducing those veterans' travel times to within VA's guidelines. The plan also proposes to shift some inpatient workload from VA's Murfreesboro hospital to its Nashville hospital. 
As a result, an estimated 54 percent of inpatient workload for Chattanooga-area enrolled veterans will be provided in Nashville compared to 40 percent in fiscal year 2002, thereby lengthening some veterans' travel times by about 20 minutes. The plan also proposes opening four new community-based clinics, which would bring about 2,700 more Chattanooga-area enrolled veterans within VA's 30-minute travel guideline for primary care, leaving about 5,700 enrolled veterans with travel times for such care that exceed VA's guideline. These clinics likely would not open before fiscal year 2011, given priorities specified in the plan.
In the United States, commercial motor carriers account for less than 5 percent of all highway crashes, but these crashes result in about 13 percent of all highway deaths, or about 5,500 of the approximately 43,000 nationwide highway fatalities that occur annually. In addition, about 160,000 of the approximately 3.2 million highway injuries per year involve motor carriers. While the fatality rate for trucks has generally decreased over the past 30 years, it has been fairly stable since 2002. (See fig. 1.) The fatality rate for buses decreased slightly from 1975 to 2005, but it has more annual variability than the fatality rate for trucks because buses account for far fewer total vehicle miles traveled. FMCSA's primary mission is to reduce the number and severity of crashes involving large trucks and buses. FMCSA relies heavily on the results of compliance reviews to determine whether carriers are operating safely and, if not, to take enforcement action against them. FMCSA conducts these on-site reviews to determine carriers' compliance with safety regulations that address areas such as alcohol and drug testing of drivers, driver qualifications, driver hours of service, vehicle maintenance and inspections, and transportation of hazardous materials. FMCSA uses a data-driven analysis model called SafeStat to assess carriers' risks relative to all other carriers based on safety indicators, such as their crash rates and safety violations identified during roadside inspections and prior compliance reviews. A carrier's score is calculated based on its performance in four safety evaluation areas: accidents and driver, vehicle, and safety management violations. (See fig. 2.) SafeStat identifies many carriers that pose a high risk for crashes and is about twice as effective as randomly selecting carriers for compliance reviews (an 83 percent improvement). As a result, it has value for improving motor carrier safety.
However, two enhancements that we analyzed could lead to FMCSA identifying carriers that pose greater crash risks overall. These approaches entail giving more weight to crashes than the current SafeStat model does. FMCSA has concerns about these approaches, in part, because placing more emphasis on accidents would require it to place less emphasis on other types of problems. FMCSA recognizes that SafeStat can be improved, and as part of its Comprehensive Safety Analysis 2010 reform initiative--which is aimed at improving its processes for identifying and dealing with unsafe carriers and drivers--the agency is considering replacing SafeStat by 2010. In June 2007, we reported that FMCSA could improve SafeStat's ability to identify carriers that pose high crash risks if it applied a statistical approach, called the negative binomial regression model, to the four SafeStat safety evaluation areas instead of its current approach. We used this approach to determine whether systematic analyses of data through regression modeling offered improved results in identifying carriers that pose high crash risks over FMCSA's model, which uses expert judgment and professional experience to apply weights to each of the safety evaluation areas. The negative binomial model results in a rank order listing of carriers by crash risk and the predicted number of crashes. This differs from SafeStat's current approach, which gives the highest priority to carriers that are deficient in three or more safety evaluation areas or that score over a certain amount--SafeStat categories A and B. (See table 1.) The other enhancement that we analyzed--the results of which are preliminary--utilized the existing SafeStat overall design but examined the effect of providing greater priority to carriers that scored among the worst 5 percent of carriers in the accident safety evaluation area (SafeStat category D). 
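The contrast between SafeStat's expert-judgment weights and a data-derived weighting can be sketched as ranking carriers by a weighted sum of their four evaluation-area scores. Everything below is hypothetical: the carriers, scores, and both weight vectors are invented, and an actual implementation would derive the second set of weights by fitting a negative binomial regression of historical crash counts on the four evaluation areas, as described above.

```python
# Hypothetical carrier scores (0-100, higher = worse) in SafeStat's four safety
# evaluation areas: accident, driver, vehicle, and safety management.
carriers = {
    "Carrier A": (95, 40, 30, 20),   # high accident score, otherwise clean
    "Carrier B": (50, 90, 85, 80),   # broad compliance problems
    "Carrier C": (80, 75, 20, 60),
}

# Invented weight vectors: fixed expert-judgment weights versus weights a
# regression might assign, loading more heavily on the accident area, which
# correlates most with future crash risk.
expert_weights = (0.25, 0.25, 0.25, 0.25)
regression_weights = (0.55, 0.20, 0.15, 0.10)

def rank_by_risk(weights):
    """Return carriers ordered from highest to lowest weighted risk score."""
    score = lambda areas: sum(w * a for w, a in zip(weights, areas))
    return sorted(carriers, key=lambda c: score(carriers[c]), reverse=True)

print(rank_by_risk(expert_weights))
print(rank_by_risk(regression_weights))
```

With equal weights, the broadly noncompliant Carrier B tops the list; shifting weight toward the accident area moves high-crash carriers up the ranking, which is the effect attributed to both enhancements discussed in this statement.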
We chose this approach because we found that while the driver, vehicle, and safety management evaluation areas are correlated with the future crash risk of a carrier, the accident evaluation area correlates most with future crash risk. This approach would retain the overall SafeStat framework and categorization--categories A through G for carriers with safety problems--but would substitute carriers in category D (the accident category) for carriers in categories A and B that have either (1) lower overall SafeStat scores or (2) lower accident area scores. We compared the performance of our regression model approach and placing greater weight on carriers that scored among the worst 5 percent of carriers in SafeStat category D to the current SafeStat model. The comparison showed that both these approaches performed better than the current SafeStat approach. (See table 2.) For example, the regression model approach identified carriers with an average of 111 crashes per 1,000 vehicles over an 18-month period compared with the current SafeStat approach, which identified carriers for compliance reviews with an average of 102 crashes per 1,000 vehicles. This 9 percent improvement would have enabled FMCSA to identify carriers with almost twice as many crashes in the following 18 months as those carriers identified in its current approach (19,580 v. 10,076). Placing greater emphasis on carriers in category D provided superior results to the current SafeStat approach both in terms of identifying carriers with higher crash rates (from 6 to 9 percent higher) and greater numbers of crashes (from about 600 to 800 more). In addition, the regression approach performed at least as well as placing greater emphasis on carriers in category D in terms of identifying carriers with the highest crash rates and much better in identifying carriers with the greatest number of crashes. 
Because both the approaches that we analyzed would identify a larger number of carriers that pose high crash risks, FMCSA would choose the number of carriers to review based on the resources available to it, much as it currently does. We believe that our statistically based regression model is preferable to placing greater weight on carriers in category D because it provides for a systematic assessment of the relative contributions of accidents and driver, vehicle, and safety management violations. We recommended that FMCSA adopt such an approach. By its very nature the regression approach looks for the "best fit" in identifying the degree to which prior accidents and driver, vehicle, and safety management violations identify the likelihood of carriers having crashes in the future, compared to the current SafeStat approach, in which the relationship among the four evaluation areas is based on expert judgment. In addition, because the regression model could be run monthly--as is the current SafeStat model--any change in the degree to which accidents and driver, vehicle, and safety management violations better identify future crashes will be automatically considered as different weights to the four evaluation areas are assigned. This is not the case with the current SafeStat model, in which the evaluation area weights generally remain constant over time. FMCSA agreed that use of a negative binomial regression model looks promising but officials said that the agency believes that placing more emphasis on the accident area would be counterproductive. First, FMCSA is concerned that this would require placing correspondingly less emphasis on the types of problems the compliance review is designed to address so that crashes can be reduced (i.e., the lack of compliance with safety regulations related to drivers, vehicles, and safety management that is captured in the other evaluation areas). 
Along this line, FMCSA said that compliance reviews of carriers in SafeStat category D have historically resulted in fewer serious violations than compliance reviews of carriers in SafeStat category A or B. We agree with FMCSA that the use of the approaches that we are discussing here today could tilt enforcement heavily toward carriers with high crash rates and away from carriers with compliance issues. We disagree, however, that this would be counterproductive. We found that while driver, vehicle, and safety management evaluation area scores are correlated with the future crash risk of a carrier, high crash rates are a stronger predictor of future crashes than poor compliance with safety regulations. FMCSA's mission--as well as the ultimate purpose of compliance reviews--is to reduce the number and severity of truck and bus crashes. Second, FMCSA officials said that placing more emphasis on the accident evaluation area would increase emphasis on the least reliable type of data used by SafeStat--crash data--and in so doing, it would increase the sensitivity of the results to crash data quality issues. However, in June 2007 we reported that FMCSA has made considerable efforts to improve the reliability of crash data. The report also concluded that as FMCSA continues its efforts to have states improve crash data, any sensitivity of results from our statistically based model to crash data quality issues should diminish. As part of its Comprehensive Safety Analysis 2010, a reform initiative aimed at improving its processes for identifying and dealing with unsafe carriers and drivers, FMCSA is considering replacing SafeStat with a new tool by 2010. The new tool could take on greater importance in FMCSA's safety oversight framework because the agency is considering using the tool's assessments of carriers' safety to determine whether carriers are fit to continue operating. 
In contrast, SafeStat is primarily used now to prioritize carriers for compliance reviews, and determinations of operational fitness are made only after compliance reviews are completed. FMCSA also plans to develop a tool to assess the safety status of individual drivers, along with tools for dealing with unsafe drivers. Even though FMCSA is considering replacing SafeStat, we believe that implementing either of the approaches discussed in this statement would be worthwhile because it would be relatively easy to do and result in immediate safety benefits that could save lives. Our preliminary assessment is that FMCSA manages its compliance reviews in a way that meets our standards for internal control, thereby promoting thoroughness and consistency in the reviews. It does so by establishing compliance review policies and procedures through an electronic manual and training, using an information system to document the results of its compliance reviews, and monitoring performance. We also found that compliance reviews cover most of the major areas of the agency's safety regulations. FMCSA's communication of its policies and procedures related to conducting compliance reviews meets our standards for internal control. These standards state that an organization's policies and procedures should be recorded and communicated to management and others within the entity who need it and in a form (that is, for example, clearly written and provided as a paper or electronic manual) and within a time frame that enables them to carry out their responsibilities. FMCSA records and communicates its policies and procedures electronically through its Field Operations Training Manual, which it provides to all federal and state investigators and their managers. 
The manual includes guidance on how to prepare for a compliance review (for example, by reviewing information on the carrier's accidents, drivers, and inspections), and it explains how this information can help the investigator focus the compliance review. It also specifies the minimum number of driver and vehicle maintenance records to be examined and the minimum number of vehicle inspections to be conducted during a compliance review. FMCSA posts updates to the manual that automatically download to investigators and managers when they connect to the Internet. In addition to the manual, FMCSA provides classroom training to investigators and requires that investigators successfully complete that training and examinations before they conduct a compliance review. According to FMCSA officials, investigators then receive on-the-job training, in which they accompany an experienced investigator during compliance reviews. Investigators can also take additional classroom training on specialized topics throughout their careers. FMCSA's documentation of compliance reviews meets our standards for internal control. These standards state that all transactions and other significant events should be clearly and promptly documented, and the documentation should be readily available for examination. FMCSA and state investigators use an information system to document the results of their compliance reviews, including information on crashes and any violations of the safety regulations that they identify. This documentation is readily available to FMCSA managers, who told us that they review it to help ensure completeness and accuracy. FMCSA officials told us that the information system also helps ensure thoroughness and consistency by prompting investigators to follow FMCSA's policies and procedures, such as requirements to meet a minimum sample size. 
The information system also includes checks for consistency and reasonableness and prompts investigators when the information they enter appears to be inaccurate. FMCSA said managers may assess an investigator's thoroughness by comparing the rate of violations the investigator identified over the course of several compliance reviews to the average rate for investigators in their division office; a rate that is substantially below the average suggests insufficient thoroughness. FMCSA's performance measurement and monitoring of its compliance review activities meet our standards for internal control. These standards state that managers should compare actual performance to planned or expected results and analyze significant differences. According to FMCSA and state managers and investigators, the managers review all compliance reviews in each division office and state to ensure thoroughness and consistency across investigators and across compliance reviews. The investigators we spoke with generally found these reviews to be helpful, and several investigators said that the reviews helped them learn policies and procedures and ultimately perform better compliance reviews. In addition to assessing the performance of individual investigators, FMCSA periodically assesses the performance of FMCSA division offices and state agencies and conducted an agencywide review of its compliance review program in 2002. According to officials at one of FMCSA's service centers, the service centers lead triennial reviews of the compliance review and enforcement activities of each division office and its state partner. These reviews assess whether the division offices and state partners are following FMCSA policies and procedures, and they include an assessment of performance data for items such as the number of compliance reviews conducted, rate of violations identified, and number of enforcement actions taken. 
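The thoroughness check described above can be sketched as follows. The 50-percent threshold and the data are assumptions for illustration only; the statement does not specify what FMCSA managers treat as "substantially below" the division-office average.

```python
# Hypothetical sketch of the thoroughness check described above: flag
# investigators whose violation-identification rate over several compliance
# reviews falls well below their division office's average. The threshold
# is an assumption, not an FMCSA criterion.

def flag_low_thoroughness(rates_by_investigator, threshold=0.5):
    """Return investigators whose rate is below threshold * division average."""
    avg = sum(rates_by_investigator.values()) / len(rates_by_investigator)
    return sorted(name for name, rate in rates_by_investigator.items()
                  if rate < threshold * avg)

# Violations identified per compliance review, by investigator (made-up data).
division = {"Ortiz": 4.2, "Lee": 3.8, "Patel": 1.1, "Nguyen": 4.5}
print(flag_low_thoroughness(division))  # Patel is far below the office average
```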
The officials said that some reviews identify instances in which division offices have deviated from FMCSA's compliance review policies but that only minor adjustments by the division offices are needed. The officials also said that the service centers compile best practices identified during the reviews and share these among the division offices and state partners. FMCSA's 2002 agencywide review also concluded that most investigators were not following FMCSA's policy requiring them to perform vehicle inspections as part of a compliance review if the carrier had not already received the required number of roadside vehicle inspections. Since conducting its 2002 review, FMCSA changed its policy so that inspecting a minimum number of vehicles is no longer a strict requirement--if an investigator is unable to inspect the minimum number of vehicles, he or she must explain why in the compliance review report. From fiscal year 2001 through fiscal year 2006, each of the nine major applicable areas of the safety regulations was consistently covered by most of the approximately 76,000 compliance reviews conducted by FMCSA and the states. (See table 3.) For the most part, 95 percent or more of the compliance reviews covered each major applicable area in the agency's safety regulations. An FMCSA official told us that not every compliance review is required to cover these nine areas. For example, follow-up compliance reviews of carriers rated unsatisfactory or conditional are sometimes streamlined to cover only the area or areas of the regulations in which the carrier had violations. As another example, minimum insurance coverage regulations apply only to for-hire carriers and private carriers of hazardous materials; they do not apply to private passenger and nonhazardous materials carriers.

However, according to an FMCSA official, the area of these regulations that had the lowest rate of coverage--vehicle parts and accessories necessary for safe operation--is required for all compliance reviews except streamlined reviews. Vehicle inspections are supposed to be a key investigative technique for assessing compliance with this area, and an FMCSA official said that the lower rate of coverage for the parts and accessories area likely reflects the small number of vehicle inspections that FMCSA and the states conduct during compliance reviews. Our preliminary assessment is that FMCSA placed many carriers rated unsatisfactory in fiscal year 2005 out of service and followed up with nearly all of the rest to determine whether they had improved. In addition, FMCSA monitors carriers to identify those that are violating out-of-service orders. However, it does not take additional action against many violators of out-of-service orders that it identifies. Furthermore, FMCSA does not assess maximum fines against all carriers, as we believe the law requires, partly because FMCSA does not distinguish between carriers with a pattern of serious safety violations and those that repeat a serious violation. FMCSA followed up with at least 1,189 of 1,196 carriers (99 percent) that received a proposed safety rating of unsatisfactory following compliance reviews completed in fiscal year 2005. These follow-ups resulted in either upgraded safety ratings or the carriers being placed out of service. Specifically: Based on follow-up compliance reviews, FMCSA upgraded the final safety ratings of 658 carriers (325 to satisfactory and 333 to conditional). FMCSA assigned a final rating of unsatisfactory to 309 carriers. FMCSA issued out-of-service orders to 306 of these carriers.
An FMCSA official told us that it did not issue out-of-service orders to the remaining three carriers either because the agency could not locate them or because the carrier was still subject to an out-of-service order that FMCSA issued several years prior to the 2005 compliance review. After FMCSA reviewed evidence of corrective action submitted by carriers, it upgraded the final safety ratings of 214 carriers (23 to satisfactory and 191 to conditional). Due to an error in assigning the proposed safety rating to one carrier, FMCSA upgraded its final safety rating to conditional. For the remaining 14 carriers, FMCSA did not (1) provide us information on whether and how it followed up with 7 carriers in time for us to incorporate it in this statement and (2) respond to our request to clarify its follow-up approach for another 7 carriers in time for us to incorporate it in this statement. Under its policies, FMCSA is generally required to assign the carrier a final rating of unsatisfactory and to issue it an out-of-service order after either 45 or 60 days, depending on the nature of the carrier's business. Of the about 300 out-of-service orders that FMCSA issued to carriers rated unsatisfactory following compliance reviews conducted in fiscal year 2005, FMCSA told us that 89 percent were issued on time, 9 percent were issued between 1 and 10 days late, and 2 percent were issued more than 10 days late. We are working with FMCSA to verify these numbers. An FMCSA official told us that in the few instances where an out-of-service order was issued more than 1 week late, the primary reason for the delay was that the responsible FMCSA division office had difficulty scheduling follow-up compliance reviews and thus held off on issuing the orders. FMCSA uses two primary means to try to ensure that carriers that have been placed out of service do not continue to operate. 
First, FMCSA partners with states to help them suspend, revoke, or deny vehicle registration to carriers that have been placed out of service. FMCSA refers to these partnerships as the Performance and Registration Information Systems Management program (PRISM). PRISM links FMCSA databases with state motor vehicle registration systems and roadside inspection personnel to help identify vehicles operated by carriers that have been issued out-of-service orders. As of January 2007, 45 states had been awarded PRISM grants and 27 states were operating with PRISM capabilities. Second, FMCSA monitors carriers for indicators--such as roadside inspections, moving violations, and crashes--that they may be violating an out-of-service order and visits some of the suspect carriers to examine their records to determine whether they did indeed violate the order. FMCSA told us it is difficult to detect carriers operating in violation of out-of-service orders because its resources do not allow it to visit each carrier or conduct roadside inspections on all vehicles, and we agree. In fiscal years 2005 and 2006, 768 of 1,996 carriers (38 percent) that were subject to an out-of-service order had a roadside inspection or crash; FMCSA cited only 26 of these 768 carriers for violating an out-of-service order. An FMCSA official told us that some of these carriers, such as carriers that were operating intrastate or that had leased their vehicles to other carriers, may not have been violating the out-of-service order. He said that FMCSA did not have enough resources to determine whether each of the carriers was violating an out-of-service order. From August 2006 through February 2007, FMCSA data indicate that the agency performed compliance reviews on 1,136 of the 2,220 (51 percent) carriers that were covered by its mandatory compliance review policy.
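The indicator-based monitoring described above amounts to flagging carriers with safety events dated after their out-of-service order took effect. A minimal sketch, with made-up data:

```python
# Illustrative sketch of screening for possible out-of-service (OOS) order
# violations: a carrier with a roadside inspection, moving violation, or
# crash dated after its OOS order took effect is flagged for follow-up.
# As the text notes, an event is only an indicator (e.g., intrastate
# operation may be lawful), so a records visit is still needed.
# All carrier names and dates below are hypothetical.

from datetime import date

def suspect_carriers(oos_orders, events):
    """oos_orders: carrier -> OOS effective date; events: (carrier, event date)."""
    return sorted({c for c, d in events
                   if c in oos_orders and d > oos_orders[c]})

oos = {"C1": date(2005, 3, 1), "C2": date(2005, 6, 15)}
evts = [("C1", date(2005, 5, 9)),   # inspection after C1's order -> flagged
        ("C2", date(2005, 6, 1)),   # before C2's order -> not flagged
        ("C3", date(2005, 7, 4))]   # no OOS order on file -> not flagged
print(suspect_carriers(oos, evts))  # ['C1']
```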
The Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users requires that FMCSA conduct compliance reviews on carriers rated as SafeStat category A or B for 2 consecutive months. In response to this requirement, FMCSA implemented a policy in June 2006 requiring a compliance review within 6 months for any such carrier unless the carrier had received a compliance review within the previous 12 months. An FMCSA official told us that the agency did not have enough resources to conduct compliance reviews on all of the 2,220 carriers within 6 months. In April 2007, FMCSA revised the policy because it believes that it required compliance reviews for some carriers that did not need them, leaving FMCSA with insufficient resources to conduct compliance reviews on other carriers that did need them. Specifically, FMCSA believes that carriers that had already had a compliance review were targeted unnecessarily after they had corrected identified violations, but these violations continued to adversely affect their SafeStat rating because SafeStat penalizes carriers for violations regardless of whether they have been corrected. The new policy requires compliance reviews within 6 months for carriers that have been in SafeStat category A or B for 2 consecutive months and received their last compliance review 2 or more years ago (or have never received a compliance review) and offers some discretion to FMCSA division offices. For example, division offices can decide not to conduct a compliance review if a carrier's SafeStat score is based largely on violations that have been corrected or on accidents that occurred prior to the carrier's last compliance review. We believe that these changes are consistent with the act's requirement and give FMCSA appropriate discretion in allocating its compliance review resources. FMCSA does not assess the maximum fines against all carriers as we believe the law requires.
The law requires FMCSA to assess the maximum allowable fine for each serious violation by a carrier that is found (1) to have committed a pattern of such violations (pattern requirement) or (2) to have previously committed the same or a related serious violation (repeat requirement). However, FMCSA's policy on maximum fines does not fully meet these requirements. FMCSA enforces both requirements using what is known as the "three-strikes rule," applying the maximum allowable fine when it finds that a motor carrier has violated the same regulation three times within a 6-year period. FMCSA officials said they interpret both parts of the act's requirements to refer to repeat violations, and because they believe that having two distinct policies on repeat violations would confuse motor carriers, it has chosen to address both requirements with its single three-strikes policy. FMCSA's interpretation does not carry out the statutory mandate to impose maximum fines in two different cases. In contrast to FMCSA, we read the statute's use of the distinct terms "a pattern of violations" and "previously committed the same or a related violation" as requiring FMCSA to implement two distinct policies. A basic principle of statutory interpretation is that distinct terms should be read as having distinct meanings. In this case, the statute not only uses different language to refer to the violations for which maximum fines must be imposed but also sets them out separately and makes either type of violation subject to the maximum penalties. Therefore, one carrier may commit a variety of serious violations and another carrier may commit the same or a substantially similar serious violation as a previous violation; the language on its face requires FMCSA to assess the maximum allowable fine in both situations--patterns of violations as well as repeat offenses. FMCSA could define a pattern of serious violations in numerous ways that are consistent with the act's pattern requirement. 
Our assessment of eight potential definitions shows that the number of carriers that would be subject to maximum fines depends greatly on the definition. (See table 4.) For example, a definition calling for two or more serious violations in each of at least four different regulatory areas during a compliance review would have made 38 carriers subject to maximum fines in fiscal year 2006. In contrast, a definition calling for one or more serious violations in each of at least three different regulatory areas would have made 1,529 carriers subject to maximum fines during that time. We also interpret the statutory language for the repeat requirement as calling for a "two-strikes" rule as opposed to FMCSA's three-strikes rule interpretation. FMCSA's interpretation imposes the maximum fine only after a carrier has twice previously committed such violations. The language of the statute does not allow FMCSA's interpretation; rather, it requires FMCSA to assess the maximum allowable fine for each serious violation against a carrier that has previously committed the same serious violation. In fiscal years 2004 through 2006, more than four times as many carriers had a serious violation that constituted a second strike as had a third strike. (See table 5.) For example, in fiscal year 2006, 1,320 carriers had a serious violation that constituted a second strike, whereas 280 carriers had a third strike. Carriers that commit a pattern of violations may also commit a second strike violation. For example, three of the seven carriers that had two or more serious violations in each of at least five different regulatory areas also had a second strike in fiscal year 2006. Were FMCSA to make policy changes along the lines discussed here, we believe that the new policies should address how to deal with carriers with serious violations that both are part of a pattern and repeat the same or similar previous violations. Mr. Chairman, this concludes my prepared statement.
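The difference between the two readings of the repeat requirement can be shown with a small sketch. The counting logic below is an illustration of the two rules as described in this statement, not FMCSA's enforcement system, and the violation history is hypothetical.

```python
# Sketch of a "two-strikes" rule versus FMCSA's "three-strikes" rule:
# under two strikes, the maximum fine applies to each serious violation of
# a regulation the carrier has violated at least once before; under three
# strikes, only from the third instance on. History below is made up.

def strikes_subject_to_max_fine(history, strikes_required):
    """Count violations that are at least the Nth instance of the same regulation."""
    counts, subject = {}, 0
    for regulation in history:                 # violations in chronological order
        counts[regulation] = counts.get(regulation, 0) + 1
        if counts[regulation] >= strikes_required:
            subject += 1
    return subject

history = ["hours-of-service", "hours-of-service", "drug-testing",
           "hours-of-service", "drug-testing"]
print(strikes_subject_to_max_fine(history, 2))  # two-strikes rule
print(strikes_subject_to_max_fine(history, 3))  # three-strikes rule
```

On this history the two-strikes reading subjects three violations to the maximum fine while the three-strikes rule subjects only one, consistent with the statement's finding that far more carriers had a second strike than a third.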
I would be pleased to respond to any questions that you or other Members of the Subcommittee might have. For further information on this statement, please contact Susan Fleming at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony were David Goldstein, Eric Hudson, and James Ratzenberger. Motor Carrier Safety: A Statistical Approach Will Better Identify Commercial Carriers That Pose High Crash Risks Than Does the Current Federal Approach. GAO-07-585. Washington, D.C.: June 11, 2007. Unified Motor Carrier Fee System: Progress Made but Challenges to Implementing New System Remain. GAO-07-771R. Washington, D.C.: May 25, 2007. Consumer Protection: Some Improvements in Federal Oversight of Household Goods Moving Industry Since 2001, but More Action Needed to Better Protect Individual Consumers. GAO-07-586. Washington, D.C.: May 16, 2007. Transportation Security: DHS Efforts to Eliminate Redundant Background Check Investigations. GAO-07-756. Washington, D.C.: April 26, 2007. Truck Safety: Share the Road Safely Pilot Initiative Showed Promise, but the Program's Future Success Is Uncertain. GAO-06-916. Washington, D.C.: September 8, 2006. Federal Motor Carrier Safety Administration: Education and Outreach Programs Target Safety and Consumer Issues, but Gaps in Planning and Evaluation Remain. GAO-06-103. Washington, D.C.: December 19, 2005. Large Truck Safety: Federal Enforcement Efforts Have Been Stronger Since 2000, but Oversight of State Grants Needs Improvement. GAO-06-156. Washington, D.C.: December 15, 2005. Highway Safety: Further Opportunities Exist to Improve Data on Crashes Involving Commercial Motor Vehicles. GAO-06-102. Washington, D.C.: November 18, 2005. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Federal Motor Carrier Safety Administration (FMCSA) has the primary federal responsibility for reducing crashes involving large trucks and buses. FMCSA uses its "SafeStat" tool to select carriers for reviews for compliance with its safety regulations based on the carriers' crash rates and prior safety violations. FMCSA then conducts these compliance reviews and can place carriers out of service if they are found to be operating unsafely. This statement is based on a recent report (GAO-07-585) and other nearly completed work. GAO assessed (1) the extent to which FMCSA identifies carriers that subsequently have high crash rates, (2) how FMCSA ensures that its compliance reviews are conducted thoroughly and consistently, and (3) the extent to which FMCSA follows up with carriers with serious safety violations. GAO's work was based on a review of laws, program guidance, and analyses of data from 2004 through early 2006. FMCSA generally does a good job in identifying carriers that pose high crash risks for subsequent compliance reviews, ensuring the thoroughness and consistency of those reviews, and following up with high-risk carriers. SafeStat is nearly twice as effective (83 percent) as random selection in identifying carriers that pose high crash risks. However, its effectiveness could be improved by using a statistical approach (negative binomial regression), which provides for a systematic assessment to apply weights to the four SafeStat safety evaluation areas (accidents and driver, vehicle, and safety management violations) rather than FMCSA's approach, which relies on expert judgment. The regression approach identified carriers that had twice as many crashes in the subsequent 18 months as did the carriers identified by the current SafeStat approach. FMCSA is concerned that adopting this approach would result in it placing more emphasis on crashes and less emphasis on compliance with its safety management, vehicle, and driver regulations. 
GAO believes that because (1) the ultimate purpose of compliance reviews is to reduce the number and severity of truck and bus crashes and (2) GAO's and others' research has shown that crash rates are stronger predictors of future crashes than is poor compliance with FMCSA's safety regulations, the regression approach would improve safety. GAO's preliminary assessment is that FMCSA promotes thoroughness and consistency in its compliance reviews through its management processes, which meet GAO's standards for internal controls. For example, FMCSA uses an electronic manual to record and communicate its compliance review policies and procedures and teaches proper compliance review procedures through both classroom and on-the-job training. Furthermore, investigators use an information system to document their compliance reviews, and managers review these data, helping to ensure thoroughness and consistency between investigators. For the most part, FMCSA and state investigators cover the nine major applicable areas of the safety regulations (e.g., driver qualifications and vehicle condition) in 95 percent or more of compliance reviews, demonstrating thoroughness and consistency. GAO's preliminary assessment is that FMCSA follows up with almost all carriers with serious safety violations, but it does not assess the maximum fines against all serious violators that GAO believes the law requires. FMCSA followed up with at least 1,189 of 1,196 carriers (99 percent) that received proposed unsatisfactory safety ratings from compliance reviews completed in fiscal year 2005. For example, FMCSA found that 873 of these carriers made safety improvements and it placed 306 other carriers out of service. 
GAO also found that FMCSA (1) assesses maximum fines against carriers for the third instance of a violation, whereas GAO reads the statute as requiring FMCSA to do so for the second violation and (2) does not always assess maximum fines against carriers with a pattern of varied serious violations, as GAO believes the law requires.
The National Flood Insurance Act of 1968 established NFIP as an alternative to providing direct assistance after floods. NFIP, which provides government-guaranteed flood insurance to homeowners and businesses, was intended to reduce the federal government's escalating costs for repairing flood damage after disasters. FEMA, which is within the Department of Homeland Security (DHS), is responsible for the oversight and management of NFIP. Since NFIP's inception, Congress has enacted several pieces of legislation to strengthen the program. The Flood Disaster Protection Act of 1973 made flood insurance mandatory for owners of properties in vulnerable areas who had mortgages from federally regulated lenders and provided additional incentives for communities to join the program. The National Flood Insurance Reform Act of 1994 strengthened the mandatory purchase requirements for owners of properties located in special flood hazard areas (SFHA) with mortgages from federally regulated lenders. Finally, the Bunning-Bereuter-Blumenauer Flood Insurance Reform Act of 2004 authorized grant programs to mitigate properties that experienced repetitive flooding losses. Owners of these repetitive loss properties who do not mitigate may face higher premiums. To participate in NFIP, communities agree to enforce regulations for land use and new construction in high-risk flood zones and to adopt and enforce state and community floodplain management regulations to reduce future flood damage. Currently, more than 20,000 communities participate in NFIP. NFIP has mapped flood risks across the country, assigning flood zone designations based on risk levels, and these designations are a factor in determining premium rates. NFIP offers two types of flood insurance premiums: subsidized and full risk. The National Flood Insurance Act of 1968 authorizes NFIP to offer subsidized premiums to owners of certain properties. 
These subsidized premium rates, which represent about 40 to 45 percent of the cost of covering the full risk of flood damage to the properties, apply to about 22 percent of all NFIP policies. To help reduce or eliminate the long-term risk of flood damage to buildings and other structures insured by NFIP, FEMA has used a variety of mitigation efforts, such as elevation, relocation, and demolition. Despite these efforts, the inventories of repetitive loss properties--generally, as defined by FEMA, those that have had two or more flood insurance claims payments of $1,000 or more over 10 years--and policies with subsidized premium rates have continued to grow. In response to the magnitude and severity of the losses from the 2005 hurricanes, Congress increased NFIP's borrowing authority from the Treasury to $20.8 billion. We have previously identified four public policy goals for evaluating the federal role in providing natural catastrophe insurance: charging premium rates that fully reflect actual risks, limiting costs to taxpayers before and after a disaster, encouraging broad participation in natural catastrophe insurance, and encouraging private markets to provide natural catastrophe insurance. Taking action to achieve these goals would benefit both NFIP and the taxpayers who fund the program but would require trade-offs. I will discuss the key areas that need to be addressed, actions that can be taken to help achieve these goals, and the trade-offs that would be required. As I have noted, NFIP currently does not charge all program participants rates that reflect the full risk of flooding to their properties. First, the act requires FEMA to charge many policyholders less than full-risk rates to encourage program participation. While the percentage of subsidized properties was expected to decline as new construction replaced subsidized properties, today nearly one out of four NFIP policies is based on a subsidized rate.
Second, FEMA may "grandfather" properties that are already in the program when new flood maps place them in higher-risk zones, allowing some property owners to pay premium rates that apply to the previous lower-risk zone. FEMA officials told us that they made the decision to allow grandfathering because of external pressure to reduce the effects of rate increases, and considerations of equity, ease of administration, and the goals of promoting floodplain management. Similarly, FEMA recently introduced a new rating option called the Preferred Risk Policy (PRP) Eligibility Extension that in effect equals a temporary grandfathering of premium rates. While these policies typically would have to be converted to more expensive policies when they were renewed after a new flood map came into effect, FEMA has extended eligibility for these lower rates. Finally, we have also raised questions about whether NFIP's full-risk rates reflect actual flood risks. Because many premium rates charged by NFIP do not reflect the full risk of loss, the program is less likely to be able to pay claims in years with catastrophic losses, as occurred in 2005, and may need to borrow from Treasury to pay claims in those years. Increasing premium rates to fully reflect the risk of loss--including the risk of catastrophic loss--would generally require reducing or eliminating subsidized and grandfathered rates and offers several advantages. Specifically, increasing rates could: result in premium rates that more fully reflected the actual risk of loss; decrease costs for taxpayers by reducing costs associated with postdisaster borrowing to pay claims; and encourage private market participation, because the rates would more closely approximate those that would be charged by private insurers. However, eliminating subsidized and grandfathered rates and increasing rates overall would increase costs to some homeowners, who might then cancel their flood policies or elect not to buy them at all. 
According to FEMA, subsidized premium rates are generally 40 to 45 percent of rates that would reflect the full risk of loss. For example, the projected average annual subsidized premium was $1,121 as of October 2010, discounted from the $2,500 to $2,800 that would be required to cover the full risk of loss. In a 2009 report, we also analyzed the possibility of creating a catastrophic loss fund within NFIP (one way to help pay for catastrophic losses). Our analysis found that in order to create a fund equal to 1 percent of NFIP's total exposure by 2020, the average subsidized premium--which typically is in one of the highest-risk zones--would need to increase from $840 to around $2,696, while the average full-risk premium would increase from around $358 to $1,149. Such steep increases could reduce participation, either because homeowners could no longer afford their policies or simply deemed them too costly, and increase taxpayer costs for postdisaster assistance to property owners who no longer had flood insurance. However, a variety of actions could be taken to mitigate these disadvantages. For example, subsidized rates could be phased out over time or not transferred with the property when it is sold. Moreover, as we noted in our past work, targeted assistance could be offered to those most in need to help them pay increased NFIP premiums. This assistance could take several forms, including direct assistance through NFIP, tax credits, or grants. In addition, to the extent that those who might forego coverage were actually required to purchase it, additional actions could be taken to better ensure that they purchased policies. According to RAND Corporation, in SFHAs, where property owners with loans from federally insured or regulated lenders are required to purchase flood insurance, as few as 50 percent of the properties had flood insurance in 2006. 
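The premium figures above are internally consistent, which a back-of-the-envelope check makes visible. The dollar figures are taken from the text; only the arithmetic is ours.

```python
# Consistency check on the premium figures cited above (figures from the
# text; the ratios are computed here for illustration).

subsidized_2010 = 1121          # average annual subsidized premium, Oct. 2010
full_risk_range = (2500, 2800)  # stated full-risk equivalent range

share_low = subsidized_2010 / full_risk_range[1]
share_high = subsidized_2010 / full_risk_range[0]
# Matches the stated "40 to 45 percent of the cost of covering the full risk."
print(f"subsidy share of full risk: {share_low:.0%} to {share_high:.0%}")

# 2009 catastrophic-fund scenario: both average premiums rise by roughly
# the same multiple.
print(f"subsidized premium multiple: x{2696 / 840:.1f}")  # $840 -> $2,696
print(f"full-risk premium multiple:  x{1149 / 358:.1f}")  # $358 -> $1,149
```

Both scenario premiums rise by about the same factor (roughly 3.2x), underscoring why the statement describes the increases as steep enough to risk reducing participation.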
In order to reduce expenses to taxpayers that can result when NFIP borrows from Treasury, NFIP needs to be able to generate enough in premiums to pay its claims, even in years with catastrophic losses--a goal that is closely tied to that of eliminating subsidies and other reduced rates. Since the program's inception, NFIP premiums have come close to covering claims in average loss years but not in years of catastrophic flooding, particularly 2005. Unlike private insurance companies, NFIP does not purchase reinsurance to cover catastrophic losses. As a result, NFIP has funded such losses after the fact by borrowing from Treasury. As we have seen, such borrowing exposes taxpayers to the risk of loss. NFIP still owes approximately $17.8 billion of the amount it borrowed from Treasury for losses incurred during the 2005 hurricane season. The high cost of servicing this debt means that it may never be repaid, could in fact increase, and will continue to affect the program's solvency and be a burden to taxpayers. Another way to limit costs to taxpayers is to decrease the risk of losses by undertaking mitigation efforts that could reduce the extent of damage from flooding. FEMA has taken steps to help homeowners and communities mitigate properties by making improvements designed to reduce flood damage--for example, elevation, relocation, and demolition. As we have reported, from fiscal year 1997 through fiscal year 2007, nearly 30,000 properties were mitigated using FEMA funds. Increasing mitigation efforts could further reduce flood damage to properties and communities, helping to put NFIP on a firmer financial footing and reducing taxpayers' exposure. FEMA has made particular efforts to address the issue of repetitive loss properties through mitigation. These properties account for just 1 percent of NFIP's insured properties but are responsible for 25 to 30 percent of claims. 
Despite FEMA's efforts, the number of repetitive loss properties increased from 76,202 in 1997 to 132,100 in March 2011, or by about 73 percent. FEMA also has some authority to raise premium rates for property owners who refuse mitigation offers in connection with the Severe Repetitive Loss Pilot Grant Program. In these situations, FEMA can initially increase premiums to up to 150 percent of their current amount and may raise them again (by up to the same amount) on properties that incur a claim of more than $1,500. However, FEMA cannot increase premiums on property owners who pay the full-risk rate but refuse a mitigation offer, and in no case can rate increases exceed the full-risk rate for the structure. In addition, FEMA is not allowed to discontinue coverage for those who refuse mitigation offers. As a result, FEMA is limited in its ability to compel owners of repetitive loss properties to undertake flood mitigation efforts. Mitigation offers significant advantages. As I have noted, mitigated properties are less likely to be at a high risk for flood damage, making it easier for NFIP to charge them full-risk rates that cover actual losses. Allowing NFIP to deny coverage to owners of repetitive loss properties who refused to undertake mitigation efforts could further reduce costs to the program and ultimately to taxpayers. One disadvantage of increased mitigation efforts is that they can impose up-front costs on homeowners and communities required to undertake them and could raise taxpayers' costs if the federal government elected to provide additional mitigation assistance. Those costs could increase still further if property owners who were dropped from the program for refusing to mitigate later received federal postdisaster assistance. These trade-offs are not insignificant, although certain actions could be taken to reduce them. 
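The rate-increase limits described above amount to a simple capping rule. The sketch below is a hypothetical illustration of that rule; the function and parameter names are invented, not FEMA's, and it assumes "up to 150 percent of the current amount" means a ceiling of 1.5 times the current premium.

```python
def capped_increase(current_premium, full_risk_rate,
                    refused_mitigation, pays_full_risk):
    """Illustrative sketch (not FEMA's actual method) of the premium-increase
    limits described for the Severe Repetitive Loss Pilot Grant Program."""
    # FEMA cannot raise rates on owners already paying the full-risk rate,
    # and the authority applies only after a refused mitigation offer.
    if not refused_mitigation or pays_full_risk:
        return current_premium
    # Increase to up to 150 percent of the current amount, but in no case
    # above the full-risk rate for the structure.
    return min(current_premium * 1.5, full_risk_rate)

# A hypothetical $1,000 subsidized premium could rise to $1,500 after a
# refused mitigation offer...
print(capped_increase(1000, 2600, True, False))   # 1500.0
# ...while a later increase after a qualifying claim is capped at full risk.
print(capped_increase(1500, 2000, True, False))   # 2000
```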
For example, federal assistance such as low-cost loans, grants, or tax credits could be provided to help property owners pay for the up-front costs of mitigation efforts. Any reform efforts could explore ways to improve mitigation efforts to help ensure maximum effectiveness. For example, FEMA has three separate flood mitigation programs. Having multiple programs may not be the most cost-efficient and effective way to promote mitigation and may unnecessarily complicate mitigation efforts. Increasing participation in NFIP, and thus the size of the risk pool, would help ensure that losses from flood damage did not become the responsibility of the taxpayer. Participation rates have been estimated to be as low as 50 percent in SFHAs, where property owners with loans from federally insured and regulated lenders are required to purchase flood insurance, and participation in lower-risk areas is significantly lower. For example, participation rates outside of SFHAs have been found to be as low as 1 percent, leaving significant room to increase participation. Expanding participation in NFIP would have a number of advantages. As a growing number of participants shared the risks of flooding, premium rates could be lower than they would be with fewer participants. Currently, NFIP must take all applicants for flood insurance, unlike private insurers, and thus is limited in its ability to manage its risk exposure. To the extent that properties added to the program were in geographic areas where participation had historically been low and in low- and medium-risk areas, the increased diversity could lower rates as the overall risk to the program decreased. Further, increased program participation could reduce taxpayer costs by reducing the number of property owners who might draw on federally funded postdisaster assistance. However, efforts to expand participation in NFIP would have to be carefully implemented, for several reasons. 
First, as we have noted, NFIP cannot reject applicants on the basis of risk. As a result, if participation increased only in SFHAs, the program could see its concentration of high-risk properties grow significantly and face the prospect of more severe losses. Second, a similar scenario could emerge if mandatory purchase requirements were expanded and newly covered properties were in communities that did not participate in NFIP and thus did not meet standards--such as building codes--that could reduce flood losses. As a result, some of the newly enrolled properties might be eligible for subsidized premium rates or, because of restrictions on how much FEMA can charge in premiums, might not pay rates that covered the actual risk of flooding. Finally, historically FEMA has attempted to encourage participation by charging lower rates. However, doing so results in rates that do not fully reflect the risks of flooding and exposes taxpayers to increased risk. Moderating the challenges associated with expanding participation could take a variety of forms. Newly added properties could be required to pay full-risk rates, and low-income property owners could be offered some type of assistance to help them pay their premiums. Outreach efforts would need to include areas with low and moderate flood risks to help ensure that the risk pool remained diversified. For example, FEMA's goals for NFIP include increasing penetration in low-risk flood zones, among homeowners without federally related mortgages in all zones, and in geographic areas with repetitive losses and low penetration rates. Currently, the private market provides only a limited amount of flood insurance coverage. In 2009, we reported that while aggregate information was not available on the precise size of the private flood insurance market, it was considered relatively small. The 2006 RAND study estimated that 180,000 to 260,000 insurance policies for both primary and gap coverage were in effect. 
We also reported that private flood insurance policies are generally purchased in conjunction with NFIP policies, with the NFIP policy covering the deductible on the private policy. Finally, we reported that NFIP premiums were generally less expensive than premiums for private flood insurance for similar coverage. For example, one insurer told us that for a specified amount of coverage for flood damage to a structure, an NFIP policy might be as low as $500, while a private policy might be as high as $900. Similar coverage for flood damage to contents might be $350 for an NFIP policy but around $600 for a private policy. Given the limited nature of private sector participation, encouraging private market participation could transfer some or all of the federal government's risk exposure to the private markets and away from taxpayers. However, identifying ways to achieve that end has generally been elusive. In 2007, we evaluated the trade-offs of a mandatory all-perils policy that would include flood risks. For example, such a policy would alleviate uncertainty about the types of natural events homeowners insurance covered, such as those that emerged following Hurricane Katrina. However, at the time the industry was generally opposed to an all-perils policy because of the large potential losses a mandatory policy would entail. Increased private market participation is also not without potential disadvantages. First, if the private markets provided coverage for only the lowest-risk properties currently in NFIP, the percentage of high-risk properties in the program would increase. This scenario could result in higher rates as the amount needed to cover the full risk of flooding increased. Without higher rates, however, the federal government would face further exposure to loss. Second, private insurers, who are able to charge according to risk, would likely charge higher rates than NFIP has been charging unless they received support from the federal government. 
As we have seen, such increases could create affordability concerns for low-income policyholders. Strategies to help mitigate these disadvantages could include requiring private market coverage for all property owners--not just those in high-risk areas--and, as described earlier, providing targeted assistance to help low-income property owners pay for their flood coverage. In addition, Congress could provide options to private insurers to help lower the cost of such coverage, including tax incentives or federal reinsurance. As Congress weighs NFIP's various financial challenges in its efforts to reform the program, it must also consider a number of operational and management issues that may limit efforts to meet program goals and impair NFIP's stability. For the past 35 years, we have highlighted challenges with NFIP and its administration and operations. For example, most recently we have identified a number of issues impairing the program's effectiveness in areas that include the reasonableness of payments to Write-Your-Own (WYO) insurers, the adequacy of financial controls over the WYO program, and the adequacy of oversight of non-WYO contractors. In our ongoing work examining FEMA's management of NFIP--covering areas including strategic planning, human capital planning, intra-agency collaboration, records management, acquisition management, and information technology--some similar issues are emerging. 
For example, preliminary results of our ongoing work show that FEMA: does not have a strategic plan specific to NFIP with goals, objectives, and performance measures for guiding and measuring the program; lacks a strategic human capital plan that addresses the critical competencies required for its workforce; does not have effective collaborative practices that would improve the functioning of program and support offices; lacks a centralized electronic document management system that would allow its various offices to easily access and store documents; has only recently implemented or is still developing efforts to improve some acquisition management functions, making it difficult to assess the effects of these actions; and does not have an effective system to manage flood insurance policy and claims data, despite having invested roughly 7 years and $40 million on a new system whose development has been halted. While FEMA has begun to acknowledge and address some of these management challenges, additional work remains to be done to address these issues. Our final report will include recommendations to address them. Congressional action is needed to increase the financial stability of NFIP and limit taxpayer exposure. GAO previously identified four public policy goals that can provide a framework for crafting or evaluating proposals to reform NFIP. First, any congressional reform effort should include measures for charging premium rates that accurately reflect the risk of loss, including catastrophic losses. Meeting this goal would require changing the law governing NFIP to reduce or eliminate subsidized rates, limits on annual rate increases, and grandfathered or other rates that did not fully reflect the risk of loss. In taking such a step, Congress may choose to provide assistance to certain property owners, and should consider providing appropriate authorization and funding of such incentives to ensure transparency. 
Second, because of the potentially high costs of individual and community mitigation efforts, which can reduce the frequency and extent of flood damage, Congress may need to provide funding or access to funds for such efforts and consider ways to improve the efficiency of existing mitigation programs. Moreover, if Congress wished to allow NFIP to deny coverage to owners of properties with repetitive losses who refused mitigation efforts, it would need to give FEMA appropriate authority. Third, Congress could encourage FEMA to continue to increase participation in the program by expanding targeted outreach efforts and limiting postdisaster assistance to those individuals who choose not to mitigate in moderate- and high-risk areas. And finally, to address the goal of encouraging private sector participation, Congress could encourage FEMA to explore private sector alternatives to providing flood insurance or for sharing insurance risks, provided such efforts do not increase taxpayers' exposure. For its part, FEMA needs to take action to address a number of fundamental operational and managerial issues that also threaten the stability of NFIP and have contributed to its remaining on GAO's high-risk list. These include improving its strategic planning, human capital planning, intra-agency collaboration, records management, acquisition management, and information technology. While FEMA continues to make some progress in some areas, fully addressing these issues is vital to its long-term operational efficiency and financial stability. Chairman Biggert, Ranking Member Gutierrez, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any of the questions you or other members of the Subcommittee may have at this time. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
For further information about this testimony, please contact Orice Williams Brown at (202) 512-8678 or [email protected]. This statement was prepared under the direction of Patrick Ward. Key contributors were Tania Calhoun, Emily Chalmers, Nima Patel Edwards, and Christopher Forys. FEMA Flood Maps: Some Standards and Processes in Place to Promote Map Accuracy and Outreach, but Opportunities Exist to Address Implementation Challenges. GAO-11-17. Washington, D.C.: December 2, 2010. National Flood Insurance Program: Continued Actions Needed to Address Financial and Operational Issues. GAO-10-1063T. Washington, D.C.: September 22, 2010. National Flood Insurance Program: Continued Actions Needed to Address Financial and Operational Issues. GAO-10-631T. Washington, D.C.: April 21, 2010. Financial Management: Improvements Needed in National Flood Insurance Program's Financial Controls and Oversight. GAO-10-66. Washington, D.C.: December 22, 2009. Flood Insurance: Opportunities Exist to Improve Oversight of the WYO Program. GAO-09-455. Washington, D.C.: August 21, 2009. Information on Proposed Changes to the National Flood Insurance Program. GAO-09-420R. Washington, D.C.: February 27, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009. Flood Insurance: Options for Addressing the Financial Impact of Subsidized Premium Rates on the National Flood Insurance Program. GAO-09-20. Washington, D.C.: November 14, 2008. Flood Insurance: FEMA's Rate-Setting Process Warrants Attention. GAO-09-12. Washington, D.C.: October 31, 2008. National Flood Insurance Program: Financial Challenges Underscore Need for Improved Oversight of Mitigation Programs and Key Contracts. GAO-08-437. Washington, D.C.: June 16, 2008. Natural Catastrophe Insurance: Analysis of a Proposed Combined Federal Flood and Wind Insurance Program. GAO-08-504. Washington, D.C.: April 25, 2008. 
National Flood Insurance Program: Greater Transparency and Oversight of Wind and Flood Damage Determinations Are Needed. GAO-08-28. Washington, D.C.: December 28, 2007. Natural Disasters: Public Policy Options for Changing the Federal Role in Natural Catastrophe Insurance. GAO-08-7. Washington, D.C.: November 26, 2007. Federal Emergency Management Agency: Ongoing Challenges Facing the National Flood Insurance Program. GAO-08-118T. Washington, D.C.: October 2, 2007. National Flood Insurance Program: FEMA's Management and Oversight of Payments for Insurance Company Services Should Be Improved. GAO-07-1078. Washington, D.C.: September 5, 2007. National Flood Insurance Program: Preliminary Views on FEMA's Ability to Ensure Accurate Payments on Hurricane-Damaged Properties. GAO-07-991T. Washington, D.C.: June 12, 2007. Coastal Barrier Resources System: Status of Development That Has Occurred and Financial Assistance Provided by Federal Agencies. GAO-07-356. Washington, D.C.: March 19, 2007. Budget Issues: FEMA Needs Adequate Data, Plans, and Systems to Effectively Manage Resources for Day-to-Day Operations. GAO-07-139. Washington, D.C.: January 19, 2007. National Flood Insurance Program: New Processes Aided Hurricane Katrina Claims Handling, but FEMA's Oversight Should Be Improved. GAO-07-169. Washington, D.C.: December 15, 2006. GAO's High-Risk Program. GAO-06-497T. Washington, D.C.: March 15, 2006. Federal Emergency Management Agency: Challenges for the National Flood Insurance Program. GAO-06-335T. Washington, D.C.: January 25, 2006. Federal Emergency Management Agency: Improvements Needed to Enhance Oversight and Management of the National Flood Insurance Program. GAO-06-119. Washington, D.C.: October 18, 2005. Determining Performance and Accountability Challenges and High Risks. GAO-01-159SP. Washington, D.C.: November 2000. Standards for Internal Control in the Federal Government. GAO/AIMD-00-21.3.1. Washington, D.C.: November 1999. 
Budget Issues: Budgeting for Federal Insurance Programs. GAO/T-AIMD-98-147. Washington, D.C.: April 23, 1998. Budget Issues: Budgeting for Federal Insurance Programs. GAO/AIMD-97-16. Washington, D.C.: September 30, 1997. National Flood Insurance Program: Major Changes Needed If It Is To Operate Without A Federal Subsidy. GAO/RCED-83-53. Washington, D.C.: January 3, 1983. Formidable Administrative Problems Challenge Achieving National Flood Insurance Program Objectives. RED-76-94. Washington, D.C.: April 22, 1976. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Flood Insurance Program (NFIP) has been on GAO's high-risk list since 2006, when the program had to borrow from the U.S. Treasury to cover losses from the 2005 hurricanes. The outstanding debt is $17.8 billion as of March 2011. This sizeable debt, plus operational and management challenges that GAO has identified at the Federal Emergency Management Agency (FEMA), which administers NFIP, have combined to keep the program on the high-risk list. NFIP's need to borrow to cover claims in years of catastrophic flooding has raised concerns about the program's long-term financial solvency. This testimony (1) discusses ways to place NFIP on a sounder financial footing in light of public policy goals for federal involvement in natural catastrophe insurance and (2) highlights operational and management challenges at FEMA that affect the program. In preparing this statement, GAO relied on its past work on NFIP and on its ongoing review of FEMA's management of NFIP, which focuses on its planning, policies, processes, and systems. The management review includes areas such as strategic and human capital planning, acquisition management, and intra-agency collaboration. Congressional action is needed to increase the financial stability of NFIP and limit taxpayer exposure. GAO previously identified four public policy goals that can provide a framework for crafting or evaluating proposals to reform NFIP. These goals are: (1) charging premium rates that fully reflect risks, (2) limiting costs to taxpayers before and after a disaster, (3) encouraging broad participation in the program, and (4) encouraging private markets to provide flood insurance. Successfully reforming NFIP would require trade-offs among these often competing goals. For example, currently nearly one in four policyholders does not pay full-risk rates, and many pay a lower subsidized or "grandfathered" rate. 
Reducing or eliminating less than full-risk rates would decrease costs to taxpayers but substantially increase costs for many policyholders, some of whom might leave the program, potentially increasing postdisaster federal assistance. However, these trade-offs could be mitigated by providing assistance only to those who needed it, limiting postdisaster assistance for flooding, and phasing in premium rates that fully reflected risks. Increasing mitigation efforts to reduce the probability and severity of flood damage would also reduce flood claims in the long term but would have significant up-front costs that might require federal assistance. One way to address this trade-off would be to better ensure that current mitigation programs were effective and efficient. Encouraging broad participation in the program could be achieved by expanding mandatory purchase requirements or increasing targeted outreach to help diversify the risk pool. Such efforts could help keep rates relatively low and reduce NFIP's exposure but would have to be effectively managed to help ensure that outreach efforts were broadly based. Encouraging private markets is the most difficult challenge because virtually no private market for flood insurance exists for most residential and commercial properties. FEMA's ongoing efforts to explore alternative structures may provide ideas that could be evaluated and considered. Several operational and management issues also limit FEMA's progress in addressing NFIP's challenges, and continued action by FEMA will be needed to help ensure the stability of the program. For example, in previous reports GAO has identified weaknesses in areas that include financial controls and oversight of private insurers and contractors, and has made many recommendations to address them. 
While FEMA has made progress in addressing some areas, preliminary findings from GAO's ongoing work indicate that these issues persist and need to be addressed as Congress works to more broadly reform NFIP. GAO has made numerous recommendations aimed at improving financial controls and oversight of private insurers and contractors, among others.
The United States experienced heavy aircraft and aircrew losses to enemy air defenses during the Vietnam War. Since then, the services have recognized air defense suppression as a necessary component of air operations. Consequently, when a crisis arises, suppression aircraft are among the first to be called in and the last to leave. Radar is the primary means used by enemy forces to detect, track, and target U.S. aircraft with missiles and guns. Hence, U.S. suppression aircraft focus on trying to neutralize, degrade, or destroy the enemy's air defense radar equipment. U.S. suppression aircraft, using missiles and jammers, generally begin suppressing enemy air defenses after they begin emitting radio-frequency signals. Also, in some cases, aircraft launch antiradiation missiles that can search for and destroy enemy radars if they are turned on. At some risk to the aircraft and aircrews, suppression aircraft must be in the vicinity of the enemy air defenses to complete their mission. Enemy radars in the past were usually fixed in position, operated independently of each other, and turned on for lengthy periods of time--all of which made them relatively easy to find and suppress through electronic warfare or physical attack. Such was the case in Operation Desert Storm, when suppression aircraft such as the EA-6B and the now-retired EF-111 and F-4G played a vital role in protecting other U.S. aircraft from radar-guided missile systems. In fact, strike aircraft were normally not permitted to conduct air operations unless protected by these suppression aircraft. The EA-6B and EF-111 were equipped with transmitters to disrupt or "jam" radar equipment used by enemy surface-to-air missiles or antiaircraft artillery systems. The F-4G, F/A-18, and EA-6B used antiradiation missiles that homed in on enemy radar systems to destroy them. The Air Force replaced the F-4G with a less capable aircraft, the F-16CG, but did not upgrade or replace the EF-111. 
According to DOD, countries have sought to make their air defenses more resistant to suppression. These efforts include increasing the mobility of their surface-to-air missiles and radar equipment, connecting radars together into integrated air defense systems, and adding sophisticated capabilities so that the radar can detect aircraft while turned on for a shorter period of time. These defenses use various means to track and target aircraft, including modern telecommunications equipment and computers to create networks of early warning radar, missile system radar, and passive detection systems that pick up aircraft communications or heat from aircraft engines. Integrated networks provide air defense operators with the ability to track and target aircraft even if individual radar elements of the network are jammed or destroyed. Since the end of Desert Storm in 1991, U.S. suppression aircraft have been continuously deployed to protect fighter aircraft maintaining the no-fly zones over Iraq. More recently, these aircraft have been deployed to Yugoslavia and Afghanistan. In 1999, during Operation Allied Force in Yugoslavia and Kosovo, these aircraft were extremely important for protecting strike aircraft from enemy radar-guided missiles. However, according to the Defense Intelligence Agency, these aircraft were unable to destroy the Yugoslav integrated air defense system because Yugoslav forces often engaged in elaborate efforts to protect their air defense assets. These efforts reduced Yugoslav opportunities to engage U.S. and coalition aircraft because their air defense assets could not be used and protected simultaneously. Nevertheless, in two separate incidents, Yugoslav forces managed to shoot down an F-117 stealth fighter and an F-16CG. In addition to the two losses, the inability of the United States to counter Yugoslav air defenses that included radar- and infrared-guided missiles made it necessary for U.S. 
forces to (1) fly thousands of dedicated suppression missions, pushing suppression forces in Europe to their limits, and (2) raise their strike missions to higher altitudes or keep low-flying aircraft such as the Army's Apache attack helicopters out of combat to reduce risk from infrared missile threats. DOD now primarily uses Navy and Marine Corps EA-6Bs for radar jamming and Air Force EC-130s for communications jamming. Recently, EA-6Bs and EC-130s saw combat in Operation Enduring Freedom in Afghanistan. Air defenses there were relatively weak compared to those faced by U.S. aircraft in Yugoslavia, placing fewer demands on suppression aircraft to jam air defense systems. This gave the EA-6B an opportunity to exploit new techniques to jam ground communications by working with the EC-130 and other electronic intelligence gathering aircraft. Since our January 2001 report, the services have had some success in improving their suppression capabilities, but they have not reached a level needed to counter future threats. When the Air Force retired the EF-111 without a replacement, the Navy's EA-6B became DOD's primary airborne radar jammer, providing suppression support for all the services. High demand for the aircraft has exacerbated current wing and engine problems, and the Navy has been unable to meet its overall requirements. Efforts are underway to address the EA-6B's problems and improve its suppression equipment, but the Navy projects that the declining EA-6B inventory will be insufficient to meet DOD's needs beyond 2009. The Air Force's F-16CJ fleet has grown and the aircraft's capabilities are being improved, but it still lacks some of the capabilities of the F-4G, the aircraft it replaced. Also, the Air Force and the Navy have improvements underway for other systems such as the EC-130 and antiradiation missiles but face funding challenges. Finally, to the extent there are gaps in suppression capabilities, U.S. 
fighter aircraft and helicopters must rely on self-protection equipment to suppress enemy air defenses, but some of this equipment has proven unreliable. The services have some programs underway to improve this self-protection equipment, such as developing new towed decoys, but, as discussed below, these programs have been hampered by technical and funding issues. The Navy does not have enough EA-6Bs to meet DOD's suppression needs due to wing fatigue and engine problems that have grounded aircraft; downtime required for routinely scheduled depot-level maintenance; and, in the future, downtime to install major capability upgrades in the aircraft. Because of its limited numbers and high rate of use by the warfighting commanders, DOD designated the EA-6B as a "low density, high demand" asset to support worldwide joint military operations. EA-6Bs are included in all aircraft carrier deployments and support the Air Force's Aerospace Expeditionary Forces. To meet a requirement to field 104 aircraft out of a total inventory of 124 (with an average age of 19 years), the Navy refurbished 20 retired EA-6Bs. Subsequently, in 2001, 2 EA-6Bs crashed, reducing the total inventory to 122 aircraft. Also in that year, the Navy planned to raise the requirement to 108 aircraft and establish an additional EA-6B squadron, but that has been delayed until March 2004. In February 2002, the Navy had only 91 EA-6Bs available for operations instead of the 104 required. As a result, while the Navy has been able to meet operational commitments, it has been unable to meet some of its training and exercise requirements. The Navy is currently taking action to remedy EA-6B wing fatigue and engine failures, and flight restrictions have been put in place. However, because wing fatigue has continued to grow, the Navy may have to ground additional aircraft. 
The Navy plans to replace a total of 67 wing center sections to remedy the problem; the fiscal year 2002 budget funds replacements for 17 aircraft at $4.4 million each. In addition, DOD's 2002 supplemental funds covered 8 additional wing replacements, and the Navy is programming funds for 10 more wing replacements for each year in the Future Years Defense Plan. In 2001, the Navy also began experiencing problems with the EA-6B's engines. Premature failure of certain engine bearings caused some engines to fail and may have caused the crash of two aircraft in 2001. The Navy grounded over 50 engines until they could be overhauled, but it expects to have them back in service by late this year. The constant deployment of this "low density" EA-6B fleet for contingency operations has contributed to its deterioration and to other maintenance-related problems. For example, to maintain the readiness of squadrons deployed to Kosovo and other ongoing commitments, the Navy took spare parts and personnel from nondeployed squadrons and subjected the EA-6B to above-average cannibalization of parts. This impaired the ability of nondeployed units to train and maintain aircrew proficiency. The constant deployments also added to personnel problems in terms of quality of life. EA-6B crews, for example, are often away from home for extended periods of time, creating hardships for their families. Given the EA-6B's age and high rate of use, the Navy says that even if the EA-6B fleet's problems are remedied, it will be unable to meet force structure requirements in 2009, and all EA-6B aircraft will be out of the force by 2015. Therefore, the Navy says it needs a replacement aircraft to begin entering the force by 2009 if requirements are to be met. The Navy has been upgrading its EA-6B electronic warfare equipment over the years, and it is currently modifying its radar signal receiver and related equipment. 
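Taken together, the wing-replacement figures above imply a multiyear schedule. The arithmetic below is a rough sketch based solely on the numbers cited, assuming the Future Years Defense Plan rate of 10 replacements per year holds.

```python
# Wing center section replacements (figures cited in the testimony).
total_needed = 67
fy2002_budget = 17          # funded in the fiscal year 2002 budget
fy2002_supplemental = 8     # covered by DOD's 2002 supplemental funds
per_year_fydp = 10          # programmed per year in the Future Years Defense Plan
cost_each_millions = 4.4

# Cost of the fiscal year 2002 replacements, in millions of dollars.
print(round(fy2002_budget * cost_each_millions, 1))  # 74.8

# Remaining wings and years to finish at the programmed rate (ceiling division).
remaining = total_needed - fy2002_budget - fy2002_supplemental
print(remaining, -(-remaining // per_year_fydp))  # 42 wings, about 5 years
```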
The modification program, known as the Improved Capability Program (ICAP) III, provides improved radar locating and jamming capabilities to counter modern enemy air defense threats. As of January 2002, according to DOD, ICAP III engineering and manufacturing development was about 94 percent complete, and the modification began testing on the first aircraft in November 2001. The Navy expects ICAP III to reach initial operational capability in 2005 and to be installed on all EA-6Bs by 2010, about the time when the aircraft begins to reach the end of its service life. The Navy is considering using a modified version of the ICAP III equipment on whatever follow-on suppression aircraft are developed and fielded, and is also upgrading the EA-6B jammer pods to increase the number of frequencies that can be jammed. The Air Force is procuring 30 additional F-16CJ suppression aircraft to meet force structure requirements for the Air Force's Aerospace Expeditionary Forces. In all, 219 F-16CJ aircraft will be available. To fully implement its concept of operations for the Expeditionary Forces, the Air Force also plans to increase the capability of the latest model F-16C/Ds (block 40) and the F-16CJs (block 50) to be used for both attack and suppression missions. To accomplish this, the F-16C/Ds will be modified to carry the HARM Targeting System, and the F-16CJs will be modified to carry the Advanced Target Pod. The HARM Targeting System will provide situational awareness to the F-16C/Ds and targeting information to the HARM missile to permit them to perform the suppression mission. The Advanced Target Pod will enable the F-16CJs to deliver precision-guided munitions. The Air Force recently upgraded the HARM Targeting System and is procuring additional systems. The upgrade (known as R-6) provides better and faster targeting information to the missile, but even with this pod the F-16CJ still lacks some of the capabilities of the retired F-4G. 
The Air Force completed the R-6 upgrade on fielded systems in December 2001, and systems subsequently produced will have it. Once 31 additional systems are delivered in 2002, the F-16CJs will have a total inventory of 202 systems, short of the Air Force's original goal of 1.1 systems per aircraft, or about 240 systems. Also, the Air Force has partially funded additional upgrades (called R-7) for the HARM Targeting System in 2003 and plans to fully fund the upgrade in the 2004 budget cycle, according to Air Force operational requirements officials. These officials also stated that they are considering funding for additional R-7 HARM Targeting Systems for F-16CJs and F-16C/Ds in the 2004 budget submission. The Air Force is also upgrading the capabilities of the EC-130 Compass Call aircraft, which perform primarily communications jamming missions. The upgrades are intended to improve the aircraft's jamming capabilities, reliability, and maintainability. The EC-130 is another "low density, high demand" asset, with a total of only 13 operational aircraft, of which 11 are being funded for upgrade. Gaps in the services' air defense suppression aircraft make it essential that other aircraft have the ability to protect themselves from enemy defenses. The services have already identified serious reliability problems with current self-protection systems on U.S. combat aircraft, including jammers, radar warning receivers, and countermeasures dispensers. Most of the current systems use older technology and have logistics support problems due to obsolescence. Also, as we reported last year, the self-protection systems on strike aircraft may have more problems than the services estimate. In reviewing test results using the new Joint Service Electronic Combat System Tester, we found that aircraft the services believed to be mission capable were not, because of faults in their electronic combat systems that were undetected by older test equipment. 
The faults ranged from parts needing to be replaced inside the electronic combat systems to faults in the wiring, antennas, and control units that connect the systems to the aircraft. For example, 41 of 44 F-15C aircraft and 10 of 10 F-18C aircraft previously believed to be fully mission capable were subsequently found to have one or more faults in their self-protection systems, and 1 F-18C had 12 such faults. Coupled with the problems in the suppression aircraft, these shortcomings could create survivability problems for the aircraft should they encounter significant enemy air defense capabilities in some future conflict. The services have some programs underway to improve self-protection capabilities, such as the joint Navy and Air Force Integrated Defensive Electronic Countermeasures (IDECM) system and the Precision Location and Identification (PLAID) system. The IDECM system will provide the F-15, F/A-18E/F, and B-1B aircraft with improved self-protection through jammers and towed decoys. The system has experienced some delays in engineering and development, and the estimated procurement cost has doubled. The PLAID system will provide aircrews with accurate location and identification of enemy air defense systems. The services expect to field both systems in 2004. The services have initiated additional research and development efforts to improve their ability to suppress enemy air defenses, but they face technology challenges and/or a lack of funding priority for many of these programs. The Miniature Air Launched Decoy (MALD), which an Air Force analysis has shown could make a significant contribution to aircraft survivability, illustrates this problem. MALD is supposed to mimic an aircraft and draw enemy air defenses away from the real aircraft. After a recently completed Advanced Concept Technology Demonstration, the Air Force had funded MALD for an initial small procurement of 300 decoys, with potential for further procurement. 
According to the Air Force, after experiencing technical problems, MALD did not meet user needs, and its procurement cost estimates increased. Thus, the Air Force canceled the procurement and restructured MALD to address deficiencies highlighted in the demonstration. The Navy has been developing its own decoy, the Improved Tactical Air Launched Decoy (ITALD), but it has procured only part of its inventory objective. Despite recurring congressional increases for the past several fiscal years, the Navy has not submitted budget requests for ITALDs or procured units to complete its inventory objective because of competing priorities. Also, the Navy is upgrading the HARM missile used to attack shipborne and ground-based radars. The first phase of the upgrade improves missile accuracy by incorporating global positioning and inertial navigation systems into the missile. A second upgrade, the Advanced Anti-Radiation Guided Missile, will add millimeter wave capability to allow the missile to target radars that have stopped emitting. While the Air Force employs the HARM missile as well, it is not involved in the HARM upgrade program. DOD has acknowledged the gap in U.S. air defense suppression capabilities for some time and has conducted several studies to identify solutions, but it has had little success in closing the gap. Our past work and the work of others have cited the need for DOD to establish some coordinating entity to develop a comprehensive strategy that addresses this capability gap. In response to our previous report, DOD stated that its Airborne Electronic Attack Analysis of Alternatives would provide the basis for such a strategy. However, the analysis was limited to assessing options for replacing the EA-6B rather than assessing the needs of the overall suppression mission. 
Upon completion of the analysis, the Navy and the Air Force proposed options for replacing EA-6B capabilities, and DOD is currently evaluating these proposals for consideration in the 2004 budget submission. In fiscal year 2000, Congress expressed concerns that DOD did not have a serious plan for a successor to the EA-6B aircraft and directed DOD to conduct the Airborne Electronic Attack Analysis of Alternatives for replacing the EA-6B. DOD indicated in its response to our January 2001 report that the analysis would lead to a DOD-wide strategy and balanced set of acquisition programs to address the overall gaps between suppression needs and capabilities. However, it was only intended to address the airborne electronic attack aspect of the suppression mission and therefore did not address the acknowledged problems with aircraft self-protection systems or the technical and funding challenges of other service programs such as the Navy's ITALD program, the Air Force's MALD program, and the Air Force's EC-130 modifications. The Navy took the lead on the joint analysis with participation by all the services. The analysis, completed in December 2001, concluded that the services needed a standoff system or a combination of systems to operate at a distance from enemy targets and a stand-in system that would provide close-in suppression protection for attacking aircraft where the threat is too great for the standoff systems. The analysis established the capabilities of the EA-6B upgraded with ICAP III as the foundation for any future system. It presented the Navy and the Air Force with detailed models of estimated costs and capabilities of 27 mixes of new and/or upgraded aircraft to consider for follow-on electronic attack capabilities but did not recommend any particular option. These options ranged in estimated 20-year life cycle costs from $20 billion to $80 billion. 
In conjunction with the analysis, the services formed a Joint Requirements Coordination and Oversight Group to coordinate operational requirements for airborne electronic attack, review ongoing and planned production programs for the mission, and exchange information among the services to avoid unnecessary duplication. A key activity of the group is to coordinate Navy and Air Force proposals for replacing the EA-6B. According to group members, this mechanism will help address airborne electronic attack needs through the coordination of complementary systems agreed to by the services. In June 2002, the services presented their proposals for follow-on capabilities to the Office of the Secretary of Defense. According to the services, the Navy proposed to replace the EA-6B with an electronic attack version of its new F/A-18E/F fighter and attack aircraft. The Air Force proposed adapting the B-52H bomber for standoff suppression by adding jamming pods to it, plus a stand-in suppression capability provided by a MALD-type decoy with jamming capabilities or an unmanned aerial vehicle equipped with jammers. The services see these proposals as a coordinated, effective solution to the near- and far-term needs for airborne electronic attack. DOD is currently conducting an additional analysis of the proposals, and the Secretary will decide later this year what proposals to include in the fiscal year 2004 budget submission. The development of systems to replace the EA-6B will help close the gap between DOD's suppression capabilities and needs. However, the service proposals that are currently being considered by DOD do not provide an integrated, comprehensive solution to the overall suppression needs. In addition, while the Joint Requirements Coordination and Oversight Group provides a mechanism to coordinate the services' efforts, it has not been directed to develop a comprehensive strategy and monitor its implementation. 
Other assessments have also pointed to the lack of a coordinated approach to addressing the gap in air defense suppression capabilities. At DOD's request, the Institute for Defense Analyses studied problems in acquiring electronic warfare systems. The Institute found several causes for the problems, including uncertainties in characterizing rapidly changing threats and systems requirements, lack of adequate and stable funding, complexity of electronic warfare hardware and software, challenges in integrating the hardware and software on platforms, and difficulties in getting and keeping experienced electronic warfare personnel. Among other things, the Institute recommended that DOD establish central offices for electronic warfare matters in the Joint Chiefs of Staff and in each service, create a senior oversight panel, and prepare an annual electronic warfare roadmap to help correct some of the problems DOD faces in electronic warfare acquisition programs. While DOD has not established a coordinating entity to provide leadership for the suppression mission, it has recognized the need for such entities in other cross-service initiatives, such as the development and fielding of unmanned aerial vehicles. In October 2001, the Under Secretary of Defense for Acquisition, Technology and Logistics established a joint unmanned aerial vehicles planning task force that will develop and coordinate road maps, recommend priorities for development and procurement efforts, and prepare implementing guidance to the services on common programs and functions. The air defense suppression mission continues to be essential for maintaining air superiority. Over the past several years, however, the quantity and quality of the services' suppression equipment have declined while enemy air defense tactics and equipment have improved. DOD has recognized that a gap exists in suppression capabilities but has made little progress in closing it. 
In our view, progress in improving capabilities has been hampered by the lack of a comprehensive strategy, cross-service coordination, and funding commitments that address the overall suppression needs. DOD relies on individual service programs to fill the void, but these programs have not historically received a high priority, resulting in the capability gap that now exists. We continue to believe that a formal coordinating entity needs to be established to bring the services together to develop an integrated, cost-effective strategy for addressing overall joint air defense suppression needs. A strategy is needed to identify mission objectives and guide efforts to develop effective and integrated solutions for improving suppression capabilities. To close the gap between enemy air defense suppression needs and capabilities, we recommend that the Secretary of Defense establish a coordinating entity and a joint comprehensive strategy to address the gaps that need to be filled in the enemy air defense suppression mission. The strategy should provide the means to identify and prioritize promising technologies; determine the funding, time frames, and responsibilities needed to develop and acquire systems; and establish evaluation mechanisms to track progress in achieving objectives. In written comments on a draft of this report, DOD concurred with our recommendations and supported the need for a mechanism to coordinate electronic warfare strategy and systems acquisition. DOD stated that the Office of the Secretary of Defense (Acquisition, Technology and Logistics) is currently restructuring its staff to address cross-cutting issues, including the creation of an Assistant Director of Systems Integration for Electronic Warfare and an Integrated Product Team process to formulate a comprehensive approach to the electronic warfare mission area, including defense suppression. We believe this is a good step forward. 
DOD also stated that we were overly critical in our characterization of individual defense suppression systems and failed to acknowledge its full range of capabilities to suppress air defenses. We recognize that the services have substantial capabilities but remain concerned because there are insufficient aircraft to meet overall requirements and improvements have not kept pace with evolving threats. Several service-specific attempts have been made to remedy the acknowledged gap in capabilities, but they have faltered in competition for funding. In some cases, Congress intervened with guidance and increases to services' budget requests for defense suppression to ensure that DOD addresses the capabilities gap. We believe that creation of a comprehensive strategy and effective coordinating entity would strengthen DOD's ability to compete for funding and address the gap. DOD's comments are reprinted in appendix II. In addition, DOD provided technical comments that we incorporated into the report where appropriate. To assess the condition of DOD's suppression capabilities and DOD's progress in developing a strategy for closing the gap in suppression capabilities, we interviewed Office of the Secretary of Defense, Joint Chiefs of Staff, Defense Advanced Research Program Agency, Air Force, Army, Navy, and Marine Corps officials responsible for electronic warfare requirements and programs. We also interviewed service program managers for the EA-6B, EC-130, F-16CJ, HARM, aircraft self-protection systems, and programs under development. We also met with officials from selected EA-6B squadrons and an EA-6B maintenance depot. We interviewed Defense Intelligence Agency officials and reviewed related intelligence documents to ascertain the capabilities of current and future enemy air defense systems. 
We also discussed air defense suppression programs and issues with various DOD contractors, including RAND Corporation, Northrop Grumman Corporation, General Atomics Aeronautical Systems, Incorporated, and Raytheon Systems Company. We reviewed pertinent DOD, service, and contractor documents addressing the status of suppression capabilities, plans for maintaining them, and potential solutions for closing the gap in capabilities. Specific locations we visited are listed in appendix I. We performed our review from October 2001 through August 2002 in accordance with generally accepted government auditing standards. As you know, the head of a federal agency is required under 31 U.S.C. 720 to submit a written statement of actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform not later than 60 days after the date of the report and to the House and Senate Committees on Appropriations with the agency's first request for appropriations made more than 60 days after the date of the report. We are sending copies of this report to the Secretaries of the Army, Air Force, and Navy; the Commandant of the Marine Corps; and interested congressional committees. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-4841. Major contributors to this report were Michael Aiken, Gaines Hensley, John Oppenheim, Terry Parker, Robert Pelletier, and Robert Swierczek.

Office of the Secretary of Defense, Washington, D.C.
Joint Chiefs of Staff, Washington, D.C.
Headquarters Elements, Air Force, Army, Marine Corps, and Navy, Washington, D.C.
Defense Intelligence Agency, Washington, D.C.
U.S. military aircraft are often at great risk from enemy air defenses, and the services use specialized aircraft to neutralize or destroy them. In January 2001, GAO reported that a gap existed between the services' suppression capabilities and their needs and recommended that a comprehensive strategy was needed to fix the situation. In response to GAO's report, DOD emphasized that a major study underway at the time would provide the basis for a Department-wide strategy and lead to a balanced set of acquisition programs among the services. This report updates our previous work and assesses actions that DOD has taken to improve its suppression capabilities. The Department of Defense continues to face a gap between its need to suppress enemy air defenses and its capabilities to do so, despite some progress in upgrading its capabilities. There are not enough existing suppression aircraft to meet overall requirements, some aircraft are experiencing wing and engine problems, and improvements are needed to counter evolving threats. DOD's primary suppression aircraft, the EA-6B, is also reaching the end of its life cycle, and a replacement is needed as early as 2009. Furthermore, some aircraft self-protection equipment, which provides additional suppression capabilities, has also been found to be unreliable. DOD has not yet developed an integrated, comprehensive approach to the U.S. air defense suppression mission but has recently completed an Analysis of Alternatives that presented the services with 27 options for replacing the aging EA-6B. The services formed a coordinating group to assess the options and in June 2002 presented service-specific proposals to the Office of the Secretary of Defense for analysis and consideration in the 2004 budget. However, the Analysis of Alternatives did not provide the basis for a comprehensive strategy to address the department's overall suppression needs. 
It only analyzed the airborne electronic attack portion of the mission and did not address needed improvements in aircraft self-protection systems or the technical and funding challenges of other service programs such as the Navy's and Air Force's air-launched decoy programs.
Farming has always been a risky endeavor, and farmers have always had to manage risk as a part of doing business. Over the years, the federal government has played an active role in several ways to help mitigate the effects of production losses and low prices on farm income. For example, USDA's Risk Management Agency (RMA) administers the federal crop insurance program to protect farmers against major production losses. Under this program, RMA subsidizes the federal multiple-peril crop insurance program, which allows insured farmers to receive an indemnity payment if production falls below a certain level. In addition, to help protect farmers against the risk of low crop prices, USDA's Farm Service Agency administered price- and income-support programs for farmers who grew certain crops--corn, wheat, grain sorghum, barley, oats, cotton, and rice. The 1996 farm bill changed the government's role. It replaced the income- support programs with "production flexibility contracts" that provide for fixed but declining annual payments to participating farmers from 1996 through 2002. These government payments--known as transition payments--are not tied to market prices, and participating farmers are not restricted with regard to the type or amount of crops that they plant, as they were in the earlier programs. Furthermore, unlike the deficiency payments of the last 6 decades, the transition payments do not rise in years when crop prices are low, nor do they fall in years when prices are high. As shown in table 1, the 1996 farm bill specified that transition payments would total about $36 billion over the 7-year period, declining from about $5.6 billion in 1996 to about $4 billion in 2002. 
By giving farmers increased flexibility in deciding which crops to plant, the 1996 farm bill allows them to choose the particular crop or combination of crops that they believe offers the best chance to maximize their profits and offset the decline in income resulting from lower government payments. However, the increased flexibility in planting decisions brings other risks. For example, small increases in expected profits may lead many farmers to decide to increase the acreage devoted to a particular crop. This, in turn, could result in the increased production of the crop nationwide and ultimately in lower prices as a result of the greater supply. Section 192 of the 1996 farm bill required that USDA, in consultation with the Commodity Futures Trading Commission (CFTC), educate farmers in managing the financial risks inherent in producing and marketing agricultural commodities. The act specified that, as a part of such education activities, USDA may develop and implement programs to assist and train farmers in using (1) forward contracts, which enable farmers to lock in a price for their crop or livestock production prior to harvest or slaughter, (2) crop insurance, which ensures compensation if crop yields are substantially lower than expected, and (3) hedging--buying or selling futures or options contracts on a commodity exchange, such as the Chicago Board of Trade--which reduces the risk of receiving lower prices for crops or livestock. The act authorized USDA to use its existing research and extension authorities and resources to implement this provision. In March 1997, the Secretary of Agriculture organized a steering committee to direct the government's education activities for managing agricultural risk. The steering committee is chaired by RMA's administrator and includes a CFTC commissioner; the administrator of USDA's Cooperative State Research, Education, and Extension Service (CSREES); and the director of USDA's National Office of Outreach. 
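The risk-reducing arithmetic behind these three tools can be sketched briefly. The example below is purely illustrative: the prices, yields, coverage level, and the `revenue` function itself are hypothetical assumptions for exposition, not drawn from USDA or CFTC program rules. It shows why locking in a forward price for part of a crop and insuring a yield floor narrow the spread between a good year and a bad year:

```python
# Illustrative sketch only: how a forward contract and a crop insurance
# indemnity reduce revenue risk. All figures are hypothetical examples,
# not USDA program data.

def revenue(yield_bu, spot_price, forward_bu=0, forward_price=0.0,
            insured_yield=0, indemnity_price=0.0):
    """Crop revenue in dollars, given optional hedges.

    forward_bu bushels are sold at the locked-in forward_price; the
    remainder sells at the harvest-time spot price. If the actual yield
    falls below the insured level, insurance pays an indemnity on the
    shortfall at the indemnity price.
    """
    spot_bu = max(yield_bu - forward_bu, 0)
    income = forward_bu * forward_price + spot_bu * spot_price
    shortfall = max(insured_yield - yield_bu, 0)
    return income + shortfall * indemnity_price

# Unhedged: a poor harvest at a low price is a double hit.
bad = revenue(yield_bu=6_000, spot_price=2.00)    # 12000.0
good = revenue(yield_bu=10_000, spot_price=2.50)  # 25000.0

# Hedged: forward-sell 5,000 bu at $2.40 and insure 8,000 bu at $2.20.
bad_h = revenue(6_000, 2.00, forward_bu=5_000, forward_price=2.40,
                insured_yield=8_000, indemnity_price=2.20)   # 18400.0
good_h = revenue(10_000, 2.50, forward_bu=5_000, forward_price=2.40,
                 insured_yield=8_000, indemnity_price=2.20)  # 24500.0
```

In this hypothetical, the unhedged revenues are $12,000 and $25,000, while the hedged revenues are $18,400 and $24,500; the much smaller spread is the income-smoothing effect that risk management education is intended to teach farmers to achieve.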
These agencies have different responsibilities. RMA primarily administers the federal crop insurance program; the 1996 farm bill expanded its authority to include a broader risk management perspective. CFTC, which regulates commodity futures and options trading in the United States, also develops and maintains research and informational programs concerning futures and options trading for farmers, commodity market users, and the general public. CSREES develops and conducts agricultural research, higher education, and extension programs to provide education and technical assistance to farmers and the general public. USDA's National Office of Outreach is responsible for ensuring that information, technical assistance, and training are available to all USDA customers, with an emphasis on underserved populations. USDA's 1996 Agricultural Resource Management Study (Phase 3), based on a statistical sample of farmers, found that about 42 percent of the nation's 2 million farmers used at least one of the risk management tools--forward contracts, crop insurance, or hedging--to manage their income risk. In 1996, a substantially greater percentage of farmers with agricultural sales of at least $100,000 (large-scale farmers) used each risk management tool than did farmers whose agricultural sales were less than $100,000 (small-scale farmers). Similarly, a greater percentage of farmers whose primary crops were corn, wheat, or cotton purchased crop insurance and used forward contracts than did farmers who grew other field crops. (App. II provides detailed data on farmers' use of risk management tools by sales level, commodity, geographic region, and the receipt of USDA transition payments.) Table 2 shows that, among all U.S. farmers, a substantially greater percentage of large-scale farmers used each risk management tool than did small-scale farmers in 1996. 
Among large-scale farmers, at least 52 percent purchased crop insurance, at least 55 percent used forward contracts, and at least 32 percent engaged in hedging. In contrast, no more than 16 percent of small-scale farmers purchased crop insurance, no more than 29 percent used forward contracts, and no more than 22 percent engaged in hedging. Available data were insufficient to determine whether large-scale farmers hedged with futures or options contracts to a greater extent than small-scale farmers in 1996. Table 3 shows that at least 70 percent of those large-scale farmers who received transition payments purchased crop insurance, at least 66 percent used forward contracts, and at least 34 percent engaged in hedging in 1996. However, the minimum extent of usage was even greater among farmers who had more than $500,000 in sales and received transition payments--at least 73 percent purchased crop insurance, at least 78 percent used forward contracts, and at least 50 percent engaged in hedging in 1996. As table 4 shows, among all U.S. farmers, a greater percentage of those whose primary crop was corn, wheat, or cotton purchased crop insurance and engaged in forward contracting than did farmers who grew other field crops or raised livestock in 1996. Among farmers who primarily grew corn, wheat, and cotton, at least 54 percent purchased crop insurance and at least 50 percent used forward contracts. In contrast, among farmers who primarily raised other field crops, 43 percent at most purchased crop insurance and 45 percent at most used forward contracts. In addition, hedging was used by at least 35 percent of cotton farmers, which was a higher percentage than for farmers who grew other field crops in 1996. However, available data were insufficient to determine whether corn and wheat farmers engaged in hedging with futures or options contracts to a greater extent than did farmers who primarily raised other crops or livestock. 
Table 5 shows that, among corn farmers who received transition payments, at least 54 percent purchased crop insurance, at least 61 percent used forward contracts, and at least 31 percent engaged in hedging in 1996. Among wheat farmers who received transition payments, at least 81 percent purchased crop insurance, at least 46 percent used forward contracts, and at least 15 percent engaged in hedging. Among cotton farmers who received transition payments, at least 88 percent purchased crop insurance, at least 59 percent used forward contracts, and at least 25 percent engaged in hedging. To prepare farmers for managing their risks, USDA has focused primarily on developing regional or state partnerships of government, university, and private organizations to foster a risk management educational program. The university partners developed and implemented a series of regional and local risk management conferences targeted initially at groups that influence farmers--bankers, crop insurance agents, grain elevator operators, and agricultural educators. USDA expects that these individuals will provide farmers with specific information for using risk management tools as the program continues. During fiscal year 1998, USDA also awarded 17 grants for risk management education projects, provided funding to land grant universities to promote additional risk management education efforts, and funded the development of an electronic risk management education library. In fiscal year 1998, USDA obligated $5 million of RMA's $10 million for crop insurance research to RMA's risk management education initiatives-- amounting to about $2.50 per U.S. farmer. These funds were the predominant source of risk management education funding within USDA. In comparison, a CSREES official told us that CSREES typically obligates only about $100,000 per year, primarily for specific risk management education projects. 
The official noted that land grant universities may also use a portion of their general CSREES education funding to support risk management education projects; however, the amount that universities spent in fiscal year 1998 is not known. For fiscal year 1999, USDA has allocated $1 million of RMA's $3.5 million for crop insurance research to risk management education. In response to the 1996 farm bill's requirement that it educate farmers about managing their production and marketing risks, USDA used a September 1997 national risk management education summit to initiate a series of 20 national and regional risk management education conferences. USDA's conferences focused on developing partnerships with "third-party influencers" in an effort to leverage the available government funds to train those who are in a position to educate farmers on risk management tools. According to USDA's director of risk management education, the training would enable third-party influencers to demonstrate to farmers how the various tools fit together in an overall risk management and marketing plan. These individuals interact frequently with farmers and are in a position to influence the risk management decisions farmers make. For example, land grant college or extension service educators provide various training and advisory services to farmers on both the production and business aspects of farming. Crop insurance agents meet with farmers several times during the year as the farmers decide on insurance coverage levels and provide the agents with information on acres planted and final crop production levels. The bank or farm credit services loan officers meet with farmers to discuss business plans and arrange for operating loans. Commodity brokers interact with farmers who choose to engage in hedging with futures or options. Farmers interact with grain elevator operators when they sell their crops on either a cash or forward contract basis. 
According to RMA, the conferences helped participants to gain information and knowledge about areas outside their own expertise. For example, commodity brokers learned more about crop insurance, and crop insurance agents learned more about the futures market. As of December 1998, USDA's major conferences had reached a relatively small percentage of the target groups' members. Table 6 shows that 335 (2 percent) of about 15,000 crop insurance agents in the United States had attended a USDA-sponsored risk management conference. Similarly, only 251 bankers and 96 grain elevator operators had attended the conferences, although there are about 3,200 agricultural banks and about 10,000 grain elevators in the United States. About 20 percent of the conference attendees were USDA or other government agency employees, rather than members of the groups influencing farmers. Conference speakers generally presented broad, overview information about a number of farm management areas without providing detailed information addressing specific problems in any single area. According to RMA officials, providing overview information was appropriate because it enabled participants to appreciate how their specialty area interacts with other areas for the benefit of farmers. USDA also expanded the scope of the conferences to discuss more than the two risk areas that the 1996 farm bill had identified--producing and marketing agricultural commodities. Sections of the conferences also addressed tools for reducing financial risks, legal risks, and human resource risks, in addition to tools for reducing production and marketing risks. RMA officials noted that financial, legal, and human resource risks are also significant concerns for farmers. RMA officials consider the risk management conferences to be a first step in developing regional and state partnerships with USDA, universities, and private organizations to provide risk management education to farmers. 
USDA has designated five land grant university educators as regional coordinators of its risk management education program. (App. III identifies, for each region, the coordinator's university affiliation, the associated RMA regional service offices, and the states covered.) The regional coordinators are responsible for (1) working with private sector partners, including bankers, crop insurance company representatives, and farmer organizations, to develop regional and local conferences, meetings, and other training efforts and (2) serving as a focal point for providing information about the risk management education opportunities in each region. State and local educational activities, training sessions, and events sponsored by these partnerships have begun to reach additional farmers and individuals who influence farmers' decisions. In fiscal year 1998, USDA spent $1.5 million to support the risk management conferences and initiate regional partnerships, including about $300,000 for the conferences, $250,000 for publications and materials, $133,000 for the regional coordinating offices, and $45,000 for an evaluation project. USDA also spent about $350,000 for special outreach projects designed to enhance the risk management skills of small and minority producers in areas described as underserved by traditional risk management tools and $50,000 to sponsor a Future Farmers of America essay contest on risk management. In addition to sponsoring conferences and developing regional partnerships, USDA awarded a series of risk management education and research grants totaling $3 million. In February 1998, USDA issued a request for proposals in the Federal Register. Subsequently, a peer review team, working under the risk management education steering committee, evaluated 107 proposals requesting over $19 million. In June 1998, USDA awarded 17 risk management education grants, ranging from $19,172 to $250,000, and averaging about $178,000. 
USDA awarded 12 grants to land grant colleges and universities, 3 to other educational entities, 1 to a crop insurance industry organization, and 1 to a grain elevator industry organization. Most of the grants included additional public and private sector partners who agreed to participate in the projects with the primary grantees. With expected project completion dates ranging from the summer of 1999 through the fall of 2001, the projects are currently ongoing, and thus, in many cases, the training phase has not begun. The grant projects target diverse audiences and serve diverse purposes--audiences ranging from farmers with limited resources, farmers growing specific commodities in individual states or regions, and dairy farmers, to crop insurance agents and grain elevator operators across the country. For example, the grantees focused on different geographic coverages: seven planned national coverage, four targeted regional audiences, and six directed their efforts to a single state. Similarly, some of the grantees focused on particular groups: four targeted limited resource or minority farmers, one focused on the risk management needs of citrus farmers, and one focused on dairy farmers. Typically, the projects focused on training, including a curriculum development phase, a "train the trainer" phase, and a series of seminars or workshops. However, two grants provided for research about farmers' use of and need for risk management tools. (App. IV provides information about the grantees, grant amount, and objectives for each of the 17 grants.) As a third element of its risk management education initiative in fiscal year 1998, USDA provided $362,000, divided among 96 land grant colleges and universities, to promote and augment their risk management education programs. 
According to USDA, these funds enabled the cooperative extension system to reach farmers during the winter of 1998-99 with a substantial risk management curriculum, including (1) regional video teleconferences, (2) small producer workshops at the local level, and (3) fact sheets, teaching guides, and classroom visual aids adapted to agricultural conditions in a particular state. In the fourth part of its response to the legislative mandate, USDA entered into a $200,000 contract with the University of Minnesota to develop an Internet website that provides an electronic library of risk management education materials. As of January 1999, the website contained over 700 risk management publications, presentations, decision aids, and other materials either resident on the site or linked to it. This information is useful to farmers as well as to the groups that influence them. On average, about 60 individuals per day made use of the website in January 1999. We provided the U.S. Department of Agriculture with a draft of this report for review and comment. We met with Agriculture officials, including the Administrator of the Risk Management Agency, who stated that the agency agreed with the report and that the report was balanced and accurate. However, the Department believed that the report should (1) provide more detailed information on how the $5 million for risk management education initiatives was spent, (2) discuss the Risk Management Agency's regional and local risk management conferences in the context of its broader effort to establish public and private partnerships, and (3) discuss the Risk Management Agency's efforts to provide risk management education through land grant universities as a separate initiative. 
We revised the report to more fully identify the various education initiatives that the Risk Management Agency has funded, explain that one of the purposes of the agency's conferences was to foster public-private partnerships, and identify the support for the outreach efforts of land grant universities as a separate initiative. In addition, the Department provided comments to improve the report's technical accuracy, which we incorporated as appropriate. To determine the extent to which various groups of farmers have used risk management tools, we obtained national agricultural survey data from USDA's Agricultural Resource Management Study (Phase 3) for 1996-- formerly called the Farm Costs and Returns Survey. The 1996 survey, based on a statistical sample, provides the most current, comprehensive data on farmers' use of risk management tools. About 7,300 farmers responded to the risk management questions. The 1997 study did not include specific questions about risk management strategies because it was designed to accommodate questions required by the 1997 agricultural census. USDA's Economic Research Service, which recently published an analysis of the 1996 survey data, provided the statistical data for this report. To identify education programs and projects USDA has directed or initiated to prepare farmers for managing risk, we interviewed and obtained documentation from USDA headquarters and regional officials, as well as from regional risk management coordinators. To determine the groups or individuals who have participated in or been served by these programs, we interviewed and obtained documentation from cognizant USDA officials, academicians, and other private sector organizations involved in planning and carrying out risk management seminars and other educational and research efforts. We also interviewed representatives of farmer organizations about RMA's approach. 
We performed our work from June 1998 through February 1999 in accordance with generally accepted government auditing standards. We did not, however, independently verify data obtained from USDA officials and documents. USDA's Agricultural Resource Management Study data are the only comprehensive data available that examine farmers' use of risk management tools. We are sending copies of this report to Representative Larry Combest, Chairman, House Committee on Agriculture, and appropriate congressional committees. We are also sending copies to the Honorable Dan Glickman, the Secretary of Agriculture; the Honorable Jacob Lew, Director, Office of Management and Budget; and other interested parties. We will also make copies available upon request. Please contact me at (202) 512-5138 if you or your staff have any questions about this report. Major contributors to this report are Richard Cheston, Mary Kenney, Renee McGhee-Lenart, and Robert R. Seely, Jr. The following are brief explanations of the three risk management tools discussed in our report: Crop insurance: Protects participating farmers against the financial losses caused by events such as droughts, floods, hurricanes, and other natural disasters. Federal crop insurance offers farmers two primary types of insurance coverage. The first--called catastrophic insurance-- provides protection against the extreme losses of crops for the payment of a $60 processing fee, whereas the second--called buyup insurance-- provides protection against the more typical smaller losses of crops in exchange for a premium paid by the farmer. Forward contract: A cash market transaction in which two parties agree to buy or sell a commodity or asset under agreed-upon conditions. For example, a farmer or rancher agrees to sell, and a local grain elevator or packing plant agrees to buy, the commodity or livestock at a specific future time for an agreed-upon price or on the basis of an agreed on pricing mechanism. 
With this agreement, a farmer locks in a final price for a commodity prior to harvest or slaughter. Hedging: The purchase or sale of a futures contract or an option on an organized exchange, such as the Chicago Board of Trade. A hedge is a temporary substitute for an intended subsequent transaction in the cash market to minimize the risk of an adverse price change. For example, corn farmers interested in locking in the sale price of all or part of their crops would sell corn futures as a temporary substitute for the cash market sale they intend to make at a later date. The sales transaction is carried out through a commodity broker. More specifically: Futures contract: An agreement for the purchase or sale of a standardized amount of a commodity, of standardized quality grades, during a specific month, on an organized exchange and subject to all terms and conditions included in the rules of that exchange. Option: The right, but not the obligation, to buy or sell a specified number of underlying futures contracts or a specified amount of a commodity, currency, index, or financial instrument at an agreed-upon price on or before a given future date. Other tools are also available to help farmers manage their risks. For a brief discussion of these tools, see "Risk Management: Farmers Sharpen Tools to Confront Business Risks," Agricultural Outlook, March 1999. This appendix provides detailed information that we obtained from the U.S. Department of Agriculture's (USDA) Economic Research Service concerning farmers' use of risk management strategies. This information is based on the 1996 Agricultural Resource Management Study; about 7,300 farm operators responded to the risk management questions. Using the data the Service provided, we calculated confidence intervals. The Economic Research Service's estimates and associated confidence intervals are presented in tables II.1 through II.12. 
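The short futures hedge described in appendix I can be sketched with hypothetical numbers. The prices below are illustrative only; a real hedge would also involve basis (the gap between local cash and futures prices), brokerage commissions, and margin requirements, all of which this simplified sketch ignores:

```python
def hedged_net_price(futures_sold_at: float,
                     futures_bought_back_at: float,
                     cash_sale_price: float) -> float:
    """Net price per bushel for a short futures hedge.

    The gain (or loss) on the futures position offsets the change in
    the cash price. Assumes cash and futures prices converge at
    harvest (zero basis) -- a simplification of the real market.
    """
    futures_gain = futures_sold_at - futures_bought_back_at
    return cash_sale_price + futures_gain

# Hypothetical corn prices (dollars per bushel). In June the farmer
# sells December futures at $3.00 as a temporary substitute for the
# intended cash sale at harvest.
# Case 1: by harvest the market has fallen to $2.50.
falling = hedged_net_price(3.00, 2.50, 2.50)  # futures gain offsets low cash price
# Case 2: the market has risen to $3.40 instead.
rising = hedged_net_price(3.00, 3.40, 3.40)   # futures loss offsets high cash price

# Either way, the farmer nets roughly the $3.00 locked in at planting.
assert abs(falling - 3.00) < 1e-9
assert abs(rising - 3.00) < 1e-9
```

The symmetry is the point of the tool: the farmer gives up the benefit of a price rise in exchange for protection against a price fall, which is why the report pairs hedging with crop insurance and forward contracts as complementary risk management strategies.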
[Tables II.1 through II.12 are not reproduced here. They present the Economic Research Service's estimates, with associated confidence intervals, of the number of farmers who used each risk management tool and the value of those farmers' agricultural sales, broken out by farm category, by principal commodity, and by region. The commodity tables include Table II.6: Percentage of Farmers Who Used Each Risk Management Tool, by Principal Commodity, 1996; Table II.7: Number of Farmers Who Received Transition Payments and the Value of Their Agricultural Sales, by Principal Commodity, 1996; and Table II.8: Percentage of Farmers Who Used Each Risk Management Tool Among Those Who Received Transition Payments, by Principal Commodity, 1996.

The tables' notes define the farm categories as follows: the operator has household income under $20,000, farm assets under $150,000, and gross farm sales under $100,000; the operator's primary occupation is retired; the operator's primary occupation is "other"--neither farming nor retired; and the farm is operated by nonfamily corporations, cooperatives, or hired managers. The regional tables group the states as follows: Kansas, Nebraska, North Dakota, and South Dakota; Illinois, Indiana, Iowa, Missouri, and Ohio; Michigan, Minnesota, and Wisconsin; Arizona, Colorado, Idaho, Montana, Nevada, New Mexico, Utah, and Wyoming; California, Oregon, and Washington; Kentucky, North Carolina, Tennessee, Virginia, and West Virginia; Alabama, Florida, Georgia, and South Carolina; Arkansas, Louisiana, and Mississippi; Connecticut, Delaware, Maine, Maryland, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont; and Oklahoma and Texas. Two caveats apply throughout: USDA is required to protect the privacy of respondents by withholding data if it receives too few responses in a particular category, and some confidence interval calculations are not exact because of the small sample size or other characteristics of the sample results.]
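The confidence intervals reported in appendix II derive from the survey's sample design. As a simplified illustration only, the sketch below computes a normal-approximation interval for a proportion under a simple-random-sample assumption; it ignores the Agricultural Resource Management Study's stratification and weighting, so while the 42-percent figure and the roughly 7,300 respondents come from this report, the computed interval is not one of USDA's published estimates:

```python
import math

def proportion_ci(p_hat: float, n: int, z: float = 1.96) -> tuple:
    """Approximate 95% confidence interval for a sample proportion.

    Uses the normal approximation for a simple random sample. The
    survey's actual intervals reflect its complex design, so this
    only illustrates the general idea.
    """
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)  # standard error of p_hat
    return (p_hat - z * se, p_hat + z * se)

# Illustrative: about 42 percent of roughly 7,300 respondents
# reported using one or more risk management tools.
low, high = proportion_ci(0.42, 7300)
print(f"{low:.3f} to {high:.3f}")
```

A complex survey design can make the true interval wider or narrower than this naive bound, which is why the report relies on the Economic Research Service's own calculations rather than recomputing them.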
Integrated Risk Management Education ($248,461) Grantee: South Central Technical College (North Mankato, Minnesota) The objective of this project is to develop an integrated risk management education curriculum and deliver it via educational programs for farmers in Minnesota, North Dakota, and South Dakota. The project will develop local educational teams of agricultural professionals. Understanding Farmer Risk Management Decision Making and Educational Needs ($243,388) Grantee: Mississippi State University The objective of this project is to develop the knowledge base to guide the design and implementation of effective risk management programs for agricultural producers. The project will identify the risk management objectives of diverse agricultural producers, investigate perceptions and understanding of risk management tools and strategies, examine the factors influencing choices of risk management strategy, and study how information and analysis influence producers' perceptions and risk management choices. Risk Management Education With Focus on Producers and Lender Stakeholders ($250,000) Grantee: Pennsylvania State University The objective of this project is to help farmers and lenders manage risks and expand the understanding of risk management with a focus on farmer liquidity constraints. The project will develop and distribute a risk management curriculum to farmers, provide training and workshops, improve risk management financial expertise with workshop applications tailored to lenders, and use computers and telecommunications in risk management education. Managing Risks and Profits for the National Grain Industry: A Whole-Farm Approach ($72,180) Grantee: Ohio State University Extension Service The objective of this project is to create and deliver information and analytical tools to help grain farmers and agribusinesses manage their risks and profits for entire farms. 
The project will create and revise risk management programs for whole-farm assessment, analyze profit levels and cash-flow risks, create a risk management center at Iowa State University, measure the risk tolerance of farm operators, and analyze the effectiveness of innovative information delivery systems. National Program for Integrated Dairy Risk Management Education and Research ($129,600) Grantee: Ohio State University The objective of this project is to focus public and private expertise on generating understandable, useful, and results-oriented knowledge and tools for the dairy industry. The project will develop a risk management educational curriculum for dairy producers, conduct symposia and regional training workshops, develop relevant computer software, and distribute information electronically. Optimal Grain Marketing: Integrated Approach to Balance Risks and Revenues ($232,800) Grantee: National Grain and Feed Foundation The objective of this project is to develop information on commonly available risk management tools coupled with an assessment of how such tools can be expected to perform. The project will reach 500 elevator operators and 20,000 farmer customers with a standardized methodology for evaluating new products, with an emphasis on the use of cash contracts. Agricultural Risk Management Education for Small and Socially Disadvantaged Farmers ($229,808) Grantee: Virginia State University Cooperative Extension Service The objective of this project is to create risk management educational materials and help socially disadvantaged and limited-resource farmers in Virginia, Maryland, Delaware, and North Carolina understand how to manage risk. This project will nurture a partnership between the private crop insurance industry and certain land-grant colleges in the four states, providing a model for similar efforts elsewhere. 
The project will also integrate risk management education into outreach, training, and technical assistance programs for small-scale farmers. Delivery of Agricultural Risk Management Education to Extension Officers and Small-Scale Farmers ($150,000) Grantee: Alcorn State University The objective of this project is to develop and implement risk management education for students, extension agents, small-scale farmers, limited- resource cooperatives, industry groups, and community-based organizations within 28 Mississippi counties. It will help small-scale farmers limit their exposure to marketing, financial, and legal risks. Georgia Agricultural Risk Management Education Program ($250,000) Grantee: Georgia Department of Education The objective of this project is to train producers and agribusinesses in risk management. The project will train young farmers to provide risk management assistance and provide instructional material and technology to increase managerial skills in agricultural operations. It will provide risk management training for minority, limited-resource farmers, and migrant workers in 134 Georgia counties and establish a certified risk management program for farm workers. Pacific Northwest Risk Management Education Project ($236,339) Grantee: Washington State University The objective of this project is to help Pacific Northwest cereal grain producers improve and apply risk management skills. The project will develop a research-based educational curriculum to increase understanding of risk management tools and integrate areas of risk management in a decision-making process for small grain producers. The project will deliver a producer-oriented risk management program to more than 1,000 grain producers. 
Risk Management Research and Education for the Florida Citrus Industry ($19,172) Grantee: University of Florida Cooperative Extension Service The objective of this project is to develop appropriate risk management tools and strategies for citrus growers in 32 southern Florida counties. This project will help growers to understand their increased exposure to risk and to use risk management tools and strategies. Risk Management Education: A Risk-Management Club Approach ($150,000) Grantee: Kansas State University The objective of this project is to extend applied risk management information to agricultural producers and agricultural businesses in Kansas. The project will establish local risk management clubs and survey club members to determine risk perceptions, risk management skill levels, and educational needs. It will plan and conduct educational meetings, and carry out follow-up evaluations to measure the effectiveness of the risk management club approach. Leveraging Risk Management Education Using Crop Insurance Agents ($166,500) Grantee: National Crop Insurance Services The objective of this project is to broaden the understanding of risk management principles among more than 15,000 crop insurance agents nationwide. The project will train crop insurance agents in risk management and foster a partnership involving extension specialists, crop insurance agents, and socially disadvantaged and limited-resource farmers. The project will begin a conference series on risk management modeled after one in North Dakota. Economic Performance and Producer Use of Market Advisory Service Products ($250,000) Grantee: University of Illinois Cooperative Extension Service The objective of this project is to provide producers of corn, soybeans, and wheat with an objective, comprehensive evaluation of the economic performance of crop market advisory services. 
It will describe subscribers' use of market advisory services, current risk management practices, and the educational needs of crop producers. Comprehensive Risk and Business Planning: A Case Plan Approach ($106,841) Grantee: University of Nebraska The objective of this project is to help producers and others in risk management consulting and educational efforts understand comprehensive business planning. Participants will learn to prepare business plans for each commodity to address various situations. The project will encourage producer groups to develop comprehensive risk management and business plans, and will create and maintain an online forum on risk and financial management. Develop AgRisk 2000 ($206,150) Grantee: University of Illinois Cooperative Extension Service The objective of this project is to develop and provide a comprehensive risk management tool that can be used by farmers, lenders, and service providers to evaluate pre-harvest risk management strategies. The project is targeted at producers located in the Corn Belt, Wheat Belt, Delta Region, and Southern States. Risk Management Education for Limited-Resource Latino Family Farmers in California's Central Coast ($85,000) Grantee: Association for Community Based Education The objective of this project is to improve the risk management skills of limited-resource Latino family farmers in California's central coast. The project will improve the farmers' capacity to understand the risk associated with their business, analyze risks and use information in problem-solving and decision-making, and incorporate risk management education into a small-farm production and management curriculum. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. 
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Department of Agriculture's (USDA) efforts to educate farmers about risk management, focusing on: (1) the extent of farmers' use of risk management tools; and (2) educational programs and projects USDA has directed or initiated to prepare farmers for managing risks and determining the groups or individuals who have participated in or been served by these programs. GAO noted that: (1) in 1996, about 42 percent of the nation's 2 million farmers used one or more risk management tools to limit potential income losses resulting from falling market prices or production failures, according to USDA estimates; (2) the use of these tools varied by farmers' level of sales and primary commodity (crop or livestock); (3) the use of crop insurance and forward contracts to reduce risk was more prevalent among farmers: (a) with at least $100,000 in annual sales of agricultural products than among those with annual sales under $100,000; and (b) whose primary crops were corn, wheat, and cotton than among those who primarily grew other crops; (4) of those farmers who received USDA transition payments and had sales of at least $100,000, at least 70 percent purchased crop insurance, at least 66 percent used forward contracts, and at least 34 percent engaged in hedging in 1996; (5) in fiscal year 1998, USDA obligated $5 million for four educational initiatives to prepare farmers for managing risk; (6) to develop government and private sector partnerships to foster risk management education, USDA sponsored a series of risk management conferences targeted at bankers, agricultural educators, crop insurance agents, commodity brokers, and grain elevator operators; (7) however, these initial conferences reached only a relatively small percentage of these target groups' members; (8) USDA intends to use partnerships with private- sector organizations to further expand its educational outreach activities; (9) USDA awarded 17 risk management 
education and research grants that are primarily designed to develop risk management education curriculums for training such diverse groups as farmers with less than $20,000 in annual income, farmers who grow specific crops in individual states or regions, crop insurance agents, and grain elevator operators across the country; (10) the expected completion dates for these projects range from the summer of 1999 through the fall of 2001; (11) USDA provided funding to supplement land grant universities' risk management education efforts; and (12) USDA contracted with the University of Minnesota to develop an Internet library that, as of January 1999, contained over 700 risk management publications and other education materials for farmers.
The Howard M. Metzenbaum Multiethnic Placement Act of 1994 is one of several recent congressional initiatives to address concerns that children remain in foster care too long. As originally enacted, the law provided that the placement of children in foster or adoptive homes could not be denied or delayed solely because of the race, color, or national origin of the child or of the prospective foster or adoptive parents. However, the act expressly permitted consideration of the racial, ethnic, or cultural background of the child and the capacity of prospective parents to meet the child's needs in these areas when making placement decisions--if such a consideration was one of a number of factors used to determine the best interests of a child. Furthermore, it required states to undertake efforts to recruit foster and adoptive families that reflect the racial and ethnic diversity of children in need of care. The 1996 amendment clarified that race, color, or national origin may be considered only in rare circumstances when making placement decisions. As amended, the act states that placement cannot be denied or delayed because of race, color, or national origin. Furthermore, the amendment removed language that allowed routine consideration of these factors in assessing both the best interests of the child and the capacity of prospective foster or adoptive parents to meet the needs of a child. An agency making a placement decision that uses race, color, or national origin would need to prove to the courts that the decision was justified by a compelling government interest and necessary to the accomplishment of a legitimate state purpose--in this case, the best interests of a child. Thus, under the law, the "best interests of a child" is defined on a narrow, case-specific basis, whereas child welfare agencies have historically assumed that same-race placements are in the best interests of all children. 
The amendment also added an enforcement provision that penalizes states that violate the amended act. The penalties range from 2 percent to 5 percent of the federal title IV-E funds the state would have received, depending upon whether the violation is the first or a subsequent one in the fiscal year. HHS estimates that the maximum penalty for a state with a large foster care population could be as high as $10 million in one year. Any agency, private or public, is subject to the provisions of the amended act if it receives federal funds. Agencies that receive funds indirectly, as a subrecipient of another agency, must also comply with the act. Such funds include but are not limited to foster care funds for programs under title IV-E of the Social Security Act, block grant funds, and discretionary grants. Before placements can be made, a child welfare agency must have an available pool of prospective foster and adoptive parents. In order to become foster or adoptive parents in California, applicants undergo a process that requires them to open all aspects of their home and personal life to scrutiny. Typically, these prospective parents attend an orientation and are fingerprinted and interviewed. They then attend mandatory training that can last up to 10 weeks. If they meet the minimum qualifications--such as a background free from certain types of criminal convictions--their personal life is then reviewed in detail by caseworkers. This review is called a homestudy. According to one county, 20 percent or fewer applicants reach this milestone. A homestudy addresses the financial situation, current and previous relationships, and life experiences of the applicant. It also addresses the abilities and desires of the applicant to parent certain types of children--including children of particular races--and other issues. 
An applicant is approved as a foster or adoptive parent only when the homestudy process is complete, a written report of its findings has been approved by a child welfare agency, and the home has been found to meet safety standards. Caseworkers may then consider whether a prospective foster or adoptive parent would be an appropriate caregiver for a particular foster child. Social work practice uses the best interests of the child as its guiding principle in placement decisions. Caseworkers exercise professional judgment to balance the many factors that historically have been included when defining that principle. When considering what is in the best interests of the child, both physical and emotional well-being factors such as the safety, security, stability, nurturance, and permanence for the child are taken into consideration. In social work practice, the need for security and stability has included maintaining cultural heritage. The caseworker's placement decision may also be affected by the administrative procedures used in an agency, the size of the pool of potential foster and adoptive parents, and, in some cases, individual caseworkers' beliefs. An agency may have a centralized system for providing caseworkers with information on available homes, or it may be left to the caseworker to seek out an available foster home. Depending on the size of the pool of potential foster or adoptive parents and the needs of the child, a caseworker may have few or many homes to consider when making a placement decision. In any case, good casework practice includes making individualized, needs-based placements reflecting the best interests of a child. While the thrust of the act, as amended, is toward race-blind foster care and adoption placement decisions, other federal policies that guide placement decisions inherently tend toward placing children with parents of the same race. 
The Indian Child Welfare Act of 1978 grants Native American tribes exclusive jurisdiction over specific Native American child welfare issues. The Multiethnic Placement Act does not affect the application of tribal jurisdiction. Section 505 of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 amended section 471(a) of the Social Security Act to require states to consider giving priority to relatives of foster children when making placement decisions. Some states, such as California, require that caseworkers first try to place a child with relatives--known as kinship caregivers--before considering other types of placement. Consequently, the Multiethnic Placement Act affects about one-half of the California foster care caseload--those foster and adoptive children who are not under tribal jurisdiction or cared for by relatives. HHS, the state of California, and foster care and adoption agencies in the two counties we reviewed took actions to inform agencies and caseworkers about the passage of the 1994 act. HHS also provided technical assistance to states, including working with states to ensure that state laws were consistent with the act. California changed state law and regulations, and the two counties we reviewed also changed policies to conform to the new law. In addition, the two counties provided training on the act to caseworkers responsible for making placement decisions. HHS recognized the significance of the change in casework practice that the 1994 law would require of child welfare agencies by restricting the use of race in placement decisions. In response, HHS launched a major effort to provide policy guidance and technical assistance. The underpinning for HHS' actions was coordination among its units that do not customarily issue joint policies--such as the Children's Bureau and the Office for Civil Rights--to ensure that the agency provided consistent guidance. 
These two units have the responsibility within HHS for implementing the act. The Children's Bureau administers programs of federal financial assistance to child welfare agencies and has responsibility for enforcing compliance with the laws authorizing that assistance. The Office for Civil Rights has the responsibility for enforcing compliance with civil rights laws. HHS officials told us that this internal coordination was also essential because the agency itself needed to undergo cultural changes. For example, in order to provide joint guidance, officials in the Office for Civil Rights needed to understand a social work perspective on the role of race in making placement decisions, and officials in the Children's Bureau needed to understand civil rights principles in the context of their programs. Officials told us that they also notified agency grantees of the act and reviewed selected documents to see that they were consistent with it. Within 6 weeks of enactment of the new law, HHS issued a memorandum to states that summarized the act and provided its text. About 5 months later--and 6 months before the act went into effect--HHS issued its policy guidance. (See app. III for the text of the guidance.) The guidance, jointly issued by the Children's Bureau and the Office for Civil Rights, was based on legal principles in title VI of the Civil Rights Act of 1964. The guidance introduced key legal concepts and identified certain illegal practices, such as the use of a time period during which a search would occur only for foster or adoptive parents of the same race as the foster child. Some states believed that HHS' guidance regarding the use of race in placement decisions was more restrictive than provided for in the act. However, HHS maintained that its guidance accurately reflected the statutory and constitutional civil rights principles involved. 
To assist states in understanding what they must do to comply with the act, officials from the Children's Bureau and the Office for Civil Rights jointly provided training to state officials and discussed the new law with state child welfare directors in at least 10 states. In addition, HHS contracted with a National Resource Center for a monograph on the new law; the monograph was released at the time the act went into effect and provided additional guidance for states' use when implementing the act. Finally, HHS made other information and resources available to states from its contracted Resource Centers, including assistance to individual states. To ensure that state laws were consistent with the act, the Office for Civil Rights reviewed each state's statutes, regulations, and policies. It then worked with states whose laws did not conform to initiate corrective action. The review found that the statutes, rules, or policies of 28 states and the District of Columbia did not conform. All of them completed changes to comply with the 1994 law. Furthermore, as part of its ongoing efforts to determine whether agency policies and caseworker actions comply with civil rights law, including the act, the Office for Civil Rights continued to investigate complaints of discrimination that were filed with the agency. Past complaints have consisted, for example, of charges brought by foster parents who were not allowed to adopt a child who had been in their care; the denial of the opportunity to adopt the child was allegedly because the child was of a different race than the foster parents. Implementation of the 1994 act required changes to law and regulations at the state level and to policies at the county level. The state of California began its implementation efforts in August 1995 by issuing an informational memorandum to alert counties to the act before it went into effect. 
In addition, state officials began a collaborative effort with an association of county child welfare officials to devise an implementation strategy. The state also began the process of amending its state law to comply with the federal statute. When amended, the state law eliminated a discriminatory requirement that same-race placements be sought for 90 days before transracial placements could be made. The state also revised its adoption regulations after the state law was passed. State officials told us that it was not necessary to revise the foster care regulations because they were already consistent with the act. Although the change in state law eliminated the requirement to seek same-race placements, that provision had not previously been included in the foster care regulations. In addition, state officials believe that the act focused primarily on adoption issues. Thus, adoption regulations required revision, whereas foster care regulations did not. In the counties we reviewed, one county finished revision of its foster care and adoption policies in February 1996. The other county issued a memorandum to its staff in January 1996 to alert them to the new law. However, that county has not formally revised its foster care or adoption policies in over 20 years, according to one county official. The state and counties planned training on the 1994 law, but only the counties actually conducted any. The state planned to roll out training but suspended it when the act was amended in August 1996. State officials told us that they needed to revise the training to reflect the amendment. The two counties, however, developed their own training programs by relying on information they obtained from the county child welfare association. In both counties, supervisors in the adoption unit took the lead in developing and presenting one-time training sessions to foster care and adoption caseworkers. 
Most, if not all, foster care and adoption caseworkers in the two counties received training. Both counties also incorporated training on the 1994 act into their curriculums for new caseworkers. Following amendment of the act, HHS was slower to revise its policy guidance and provided less technical assistance to states than it did after the passage of the 1994 act. While California informed its counties of the change in federal law, it did not do so until 3 months after HHS issued its policy guidance on the amended act. Although HHS did not repeat its technical assistance effort to assist states in understanding the amended law, the state and counties we reviewed provided some training on the amended act to staff. HHS did not notify states of the change in the law until 3 months after its passage and did not issue policy guidance on the amendment until 6 months after the notification. (See app. IV for the text of the guidance.) As was the case with the policy guidance on the original act, HHS' revised guidance was issued jointly by the Children's Bureau and the Office for Civil Rights. The policy guidance noted changes in the language of the law, such as the elimination of the provision that explicitly permitted race to be considered as one of a number of factors. The guidance also described the penalties for violating the amended act and emphasized civil rights principles and key legal concepts that were included in the earlier guidance on the original act. The new guidance expressed HHS' view that the amended act was consistent with the constitutional and civil rights principles that HHS used in preparing its original guidance. However, it was not until May 1998, when we submitted a set of questions based on concerns that county officials and caseworkers raised with us, that HHS issued guidance answering practical questions about changes in social work practice needed to make casework consistent with the amended act. (See app. 
V for a list of the questions and answers.) The guidance on social work practice issues clarified, for example, that public agencies cannot use race to differentiate between otherwise acceptable foster placements even if such a consideration does not delay or deny a child's placement. The agency did not repeat the joint outreach and training to state officials that it provided for the 1994 act. While the technical assistance provided by the Resource Centers is ongoing, the monograph on the act has not yet been updated to reflect the amendment. The Office for Civil Rights took several actions to ensure that state actions were consistent with the amended act. It addressed case-by-case complaints of violations and, in 1997, began reviews in selected locations. Officials told us that it was not necessary to conduct another comprehensive review of state statutes because they said they would work with states on a case-by-case basis. In addition, officials explored the use of AFCARS to monitor foster care and adoption placements. HHS officials who work with AFCARS confirmed that neither the historical data needed to determine placement patterns related to race that may have existed before the 1994 act's effective date nor the current information on most states' foster children--including California's--was sufficiently complete or adequate to allow its consideration in determining whether placement decisions included use of race-based criteria. Passage of the amendment in 1996 again required changes in state law, regulations, and policy. A bill was introduced in the California legislature in February 1998 to make California state law consistent with the federal amendment. The bill originally contained language to delete a nonconforming provision in state law that explicitly allows consideration of race as one of a number of factors in a placement decision. However, state officials told us the bill has been stalled in the legislative process and its passage is uncertain. 
Although federal law takes precedence over state law when such situations arise, an HHS Office for Civil Rights official told us that HHS encourages states to pass conforming legislation. Furthermore, state officials told us that state regulations on adoption and foster care placement cannot be changed until this bill becomes law. Therefore, California regulations continue to reflect only the 1994 law. In September 1997, the state notified its counties of the amendment to the act. Although counties can change their own policies without state actions, in the two counties we visited, only one has begun that process: in that county, the adoption unit has begun to update its regulations, but the foster care unit has not done so. Despite the lack of a change in state law, the state resumed its training activities in February 1998, when it offered its first training seminar on the amended act. A limited number of county workers in the southern portion of the state attended that seminar, which included 3 hours of training. The state held two additional training sessions in the state and plans to include training on the amended act at two other seminars. To date, the state has targeted the training to licensing and recruitment staff--who work with potential foster and adoptive parents--and not to caseworkers or supervisors who place children in foster and adoptive homes. But it is these latter staff who are most directly responsible for placement decisions and thus for complying with the amended act's provisions. Finally, one of the two counties we visited is now developing written training material to reflect the 1996 amendment and has provided formal training on it to some workers. The other county charged its supervisors with training their staff one-on-one. Officials at all levels of government face a diverse set of challenges as they continue to implement the amended act. 
Major issues that remain include changing caseworkers' and practitioners' beliefs about the importance of race-based placement decisions, developing a shared understanding at all levels of government about allowable placement practices, and developing an effective federal compliance monitoring system. The belief that race or cultural heritage is central to a child's best interests when making a placement is so inherent in social work theory and practice that a policy statement of the National Association of Social Workers still reflects this tenet, despite changes in the federal law. Matching the race of a child and parent in foster care placements and public agency adoptions was customary and required in many areas for the last 20 years. The practice was based on the belief that children who are removed from their homes will adapt to their changed circumstances more successfully if they resemble their foster or adoptive families and if they maintain ties to their cultural heritage. In this context, the children's needs were often considered more compelling than the rights of adults to foster or adopt children. One state official made this point directly, stating that her purpose is to find families for children, not children for prospective parents. Officials' and caseworkers' personal acceptance of the value of the act and the 1996 amendment varies. Some told us that they welcomed the removal of routine race-matching from the child welfare definition of best interests of a child and from placement decisions. Those who held this belief said the act and the 1996 amendment made placement decisions easier. Others spoke of the need for children--particularly minority children--always to be placed in homes that will support a child's racial identity. For those individuals, that meant a home with same-race parents. 
Furthermore, some who value the inclusion of race in placement decisions told us that they do not believe that the past use of race in the decision-making process delayed or denied placements for children. State program officials in California are struggling to understand the amended act in the context of casework practice issues. They are waiting for the HHS Children's Bureau or the federal National Resource Centers to assist them in making the necessary changes in day-to-day casework practices. In particular, the use of different definitions by caseworkers and attorneys of what constitutes actions in a child's best interests makes application of the act and the amendment to casework practice difficult. State officials characterized the federal policy guidance as "too legalistic." Furthermore, although officials from the Office for Civil Rights have provided training to state officials and continue to be available to conduct training, these state officials do not consider Office for Civil Rights officials capable of providing the desired guidance on how to conduct casework practice consistent with the amended act; as a result, state officials are hesitant to request such guidance from the Office for Civil Rights. The officials in the two counties we visited said their implementation efforts were hampered by the lack of guidance and information available to them from federal and state sources. The questions on casework practice that we submitted to HHS arose in the course of our discussions with county officials and caseworkers. County officials stressed that they began their implementation efforts with little federal and state technical assistance to help them understand the implications of the act for making foster care and adoption placement decisions; they relied instead on an association of county child welfare officials to obtain the information they needed. 
Despite the counties' efforts to independently obtain information to proceed with implementation, documents we reviewed in both counties reflected a lack of understanding of the provisions of the amended act. For example, in one county, a draft informational document that was being prepared to inform caseworkers about the amended act included permission for caseworkers to consider the ethnic background of a child as one of a number of factors in a placement decision, even though the 1996 amendment removed similar wording from federal law. In addition, while the caseworkers we interviewed were aware that the act and the 1996 amendment do not allow denial or delay of placements related to race, color, or national origin, some caseworkers were unsure how and when they are allowed to consider such factors in making placement decisions. The need for clear guidance on practical casework issues was demonstrated in a state-sponsored training session we attended in February 1998. The training consisted of presentations from four panelists: an attorney from the HHS Office for Civil Rights, an attorney from a National Resource Center, and two representatives from private agencies that recruit minority foster and adoptive parents for the state of California. While the panelists' presentations noted that placements could not be denied or delayed for race-based reasons, they offered contradictory views of permissible activities under the law. For example, the panelists were asked if race could be used to choose a placement when two available families are equally suitable to meet the needs of a child but one family is of the same race as the child. The attorney from the Office for Civil Rights advised that race could not be used as the determining factor in that example, whereas the attorney from the Resource Center said that a case could be made for considering race in that circumstance. The state has since modified the training session to provide a more consistent presentation. 
However, the paucity of practical guidance contributes to continued uncertainty about allowable actions under the amended act. For example, although the act and the 1996 amendment apply equally to foster and adoption placements, some state and county officials told us that they believe the law applies primarily to adoption placements. Federal officials will need to seek new ways to identify appropriate data and documentation that will allow them to effectively determine whether placement decisions conform to the provisions of the amended act. Federal AFCARS information is the primary source of federal administrative data about foster care and adoption. It allows HHS to perform research on and evaluate state foster care and adoption programs, and it assists HHS in targeting technical assistance efforts, among other uses. However, AFCARS data are not sufficient to determine placement patterns related to race that may have existed before the 1994 act's effective date. Our examination of AFCARS indicated that the future use of this database for monitoring changes in placement patterns directly related to the amended act is unlikely. For example, the database lacks sufficient information on the racial identity of foster and adoptive children and their foster parents to conduct the type of detailed analysis of foster care and adoption patterns that would likely be needed to identify discriminatory racial patterns. Analysis of any administrative data will be hampered by difficulties in interpreting the results. Data showing a change in the percentage of same-race placements would not, alone, indicate whether the amended act was effective in restricting race-based placement practices. For example, an increase in the percentage of same-race placements for black foster children could indicate that the amended act is not being followed. 
Conversely, the same increase could mean that the amended act is being followed but more black foster and adoptive parents are available to care for children because of successful recruitment efforts. If relevant information on changes in the pool of foster and adoptive parents is not available for analysis--as is the case with AFCARS data--then it would not be possible to rule out the success of recruitment efforts as a contributor to an increase in same-race placements. While case files are another source of information about placement decisions, and such files are used in one type of review periodically performed by HHS, reviewing those files may provide little documentation to assist in determining whether placement decisions are consistent with the amended act's restrictions on the use of race-based factors. In the two counties we visited, the processes caseworkers described for making placement decisions generally lacked a provision for documenting the factors considered, the placement options available, or the reason a particular placement was chosen. Our review of a very limited number of case files in one county, and our experience reading case files for other foster care studies, confirmed that it is unlikely the content of placement decisions can be reconstructed from the case files. The Multiethnic Placement Act, as amended, has been difficult for agencies to implement. Successful implementation requires changing state laws, policies, and regulations; organizational and personal beliefs in the value of race as a significant factor in making foster and adoptive placements; and casework practices so that they incorporate civil rights principles into the definition of a child's best interests. The federal and state agencies we reviewed began the administrative portion of this task immediately after enactment in 1994. But this prompt action was not sustained after the act was amended. 
Furthermore, our discussions with California state officials, and our observation of state-sponsored training sessions, suggest that federal policy guidance was not sufficiently practice-oriented to allow caseworkers to understand how to apply the law to the placement decisions they make. Because foster care and adoption placement decisions are largely dependent upon the actions of individual caseworkers, their willingness to accept a redefinition of what is in the best interests of a child is critical to the successful implementation of this legislation. While some caseworkers welcomed the new law, others frankly discussed with us their concerns about eliminating almost all racial considerations from placement decisions. HHS and the state of California face the challenge of better explaining to practitioners how to integrate social work and legal perspectives on the role of race in making decisions that are in a child's best interests. Because these perspectives are not compatible, tension between them is inevitable. Without a resolution to that tension, full implementation of the amended act may be elusive. We provided HHS, the state of California, and the two counties in California that we reviewed with the opportunity to comment on a draft of this report. We received comments from HHS, the state of California, and San Diego County. In commenting on a draft of the report, HHS expanded on two topics addressed in the report: technical assistance, including training; and monitoring for compliance with the act and its amendment. In discussing technical assistance, HHS reiterated its implementation efforts as described in our report, provided information on related actions it has taken in states other than California, and noted that it expects to publish the updated monograph on the amended act in the fall of 1998. 
In commenting on the challenge of developing a compliance monitoring system, HHS described its pilot efforts to integrate monitoring of compliance with the amended act into its overall monitoring of child welfare outcomes and noted that it expects to publish a notice of its proposed monitoring processes in the Federal Register in October 1998. We agree that an integrated approach to compliance monitoring of child welfare issues could be an effective one. However, because we have not seen HHS' proposal, we cannot assess whether the proposed monitoring will be sufficient to ensure that foster care and adoption placements are consistent with the requirements of the amended act. In this regard, HHS agreed that AFCARS data have limited utility in tracking state compliance with the amended act. HHS also made technical comments, which we incorporated where appropriate. The full text of HHS' comments is contained in appendix VI. The state of California and San Diego County provided technical comments, which we incorporated where appropriate. As agreed with your office, we will make no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the Secretary of Health and Human Services and program officials in California. We will also make copies available to others on request. Please contact me on (202) 512-7215 if you or your staff have any questions. Other GAO contacts and staff acknowledgments are listed in appendix VII. In addition to those named above, Patricia Elston led the federal fieldwork and coauthored the draft, and Anndrea Ewertsen led the California fieldwork and coauthored the draft. 
Pursuant to a congressional request, GAO provided information on the implementation of the Multiethnic Placement Act of 1994, as amended, at the federal level and in states with large and ethnically diverse foster care caseloads, focusing on: (1) efforts by federal, state, and local agencies to implement the 1994 act in the areas of assistance; (2) efforts by federal, state, and local agencies in these same areas to implement the 1996 amendment to the act; and (3) the challenges all levels of government face to change placement practices. GAO noted that: (1) the Department of Health and Human Services (HHS) and California initiated collaborative, multipronged efforts to inform agencies and caseworkers about the Multiethnic Placement Act of 1994; (2) HHS program officials recognized that the act requires child welfare agencies to undergo a historic change in how foster care and adoption placement decisions are made by limiting the use of race as a factor; (3) within 6 weeks of the act's passage, HHS took the first step in a comprehensive approach to implementation that involved issuing policy guidance and providing technical assistance; (4) some states believed that HHS' policy was more restrictive regarding the use of race in placement decisions than provided for in the act; (5) after enactment of the 1996 amendment, HHS did not update its policy guidance for 9 months, and it has done little to address casework practice issues; (6) California has yet to conform its state laws and regulations to the amended act; (7) the state provided training to some county staff, but the training was not targeted toward staff who have primary responsibility for placing children in foster or adoptive homes; (8) both counties have provided some training to caseworkers on the 1996 amendment, either through formal training sessions or one-on-one training by supervisors, however, only one county has begun to revise its policies; (9) changing long-standing social work practices, 
translating legal principles into practical advice for caseworkers, and developing compliance monitoring systems are among the challenges remaining for officials at all levels of government in changing placement decisionmaking; (10) the implementation of this amended act predominantly relies on the understanding and willingness of caseworkers to eliminate race from the placement decisions they make; (11) while agency officials and caseworkers understand that this legislation prohibits them from delaying or denying placements on the basis of race, not all believe that eliminating race will result in placements that are in the best interests of children; (12) state and local officials and caseworkers demonstrated lingering confusion about allowable actions under the law; (13) the state training session GAO attended on the amended act showed that neither the state nor HHS has provided clear guidance to caseworkers to apply the law to casework practice; and (14) federal efforts to determine whether placement decisions are consistent with the amended act's restrictions on the use of race-based factors will be hampered by difficulties in identifying data that are complete and sufficient.
GPRA is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. Since federal agencies began implementing GPRA, they have provided new and valuable information on their plans, goals, and strategies. Under GPRA, annual performance plans are to clearly inform the Congress and the public of (1) the annual performance goals for agencies' major programs and activities, (2) the measures that will be used to gauge performance, (3) the strategies and resources required to achieve the performance goals, and (4) the procedures that will be used to verify and validate performance information. These annual plans, issued soon after transmittal of the president's budget, provide a direct linkage between an agency's longer-term goals and mission and day-to-day activities. Annual performance reports are to subsequently report on the degree to which performance goals were met. The issuance of the agencies' performance reports, due by March 31, represents a new and potentially more substantive phase in the implementation of GPRA--the opportunity to assess federal agencies' actual performance for the prior fiscal year and to consider what steps are needed to improve performance and reduce costs in the future. VA's mission reflects the nation's historic commitment to care for veterans, their families, and their survivors. VA administers a variety of programs, including one of the world's largest health care systems. VA estimates that, in fiscal year 2000, it spent about $42 billion--more than 80 percent of its total budget--to provide health care services to 3.6 million veterans and to pay disability compensation and pensions to over 3.2 million veterans and their families and survivors.
This section discusses our analysis of VA's performance in achieving its selected key outcomes and the strategies VA has in place, including strategic human capital management and information technology, for accomplishing these outcomes. In discussing these outcomes, we have also provided information drawn from our prior work on the extent to which VA provided assurance that the performance information it is reporting is credible. Overall, VA reported making good progress toward achieving its key outcome of providing quality health care to veterans at a reasonable cost to the government in fiscal year 2000. For example, VA reported that its average cost per patient was 2 percent less than last year. VA also reported that performance improved for most of its key measures compared to last year's performance. However, VA reported a decline in performance for two key measures (see table 1). VA's performance report, in general, demonstrated progress toward achieving its key performance goals. The key goals--the goals VA's senior management considers most important--show how well VA is doing in providing quality health care to veterans at a reasonable cost. For each key measure, VA provided a discussion of the extent to which it met its fiscal year 2000 goal. In addition, VA provided baseline and performance data, where available, to show the extent to which performance has changed over several fiscal years. For most of the key goals that VA did not achieve, it explained why the goals were not achieved. Also, VA provided supplementary information to show either that the performance deficiency was not significant or that VA's performance improved in fiscal year 2000. For example, VA reported that while it did not meet its patient satisfaction goals, its performance on other patient satisfaction surveys showed that VA patients were more satisfied than patients of private-sector health care providers.
Also, VA noted that the differences between its planned and actual performance on the patient satisfaction measures were not significant because they were within the margin of error for its annual patient satisfaction survey. VA could have done a better job, however, of explaining in its performance report why some key performance goals were not met. For example, VA did not explain why it did not meet its goal to have at least 75 percent of patients with scheduled appointments see a provider within 20 minutes of their scheduled appointment time. It provided a partial explanation of why it did not obtain at least 3.7 percent of its medical care funding from alternative revenue streams. VA's performance plan, however, cited factors contributing to the decline in collections. For example, the plan noted that more veterans are enrolling in managed care organizations from which VA cannot typically collect because it is not a participating provider. In addition, VA's performance report included two key health care performance measures that VA has not yet quantified. These measures, based in part on previous GAO reports and recommendations, are for the percentages of patients who are able to obtain initial appointments within 30 days for primary or specialty care. Also, VA is in the process of improving its ability to collect the necessary data to measure its performance. It plans to use fiscal year 2001 data as its baseline for setting future annual performance goals. VA's performance report shows VA's continuing efforts to address deficiencies in the quality of its performance data. For most--though not all--of its key health care performance measures, VA identified the sources of performance data and how data quality is assured. VA's data quality initiatives include, for example, hiring a full-time Data Quality Coordinator and revising coding procedures and training to help improve the collection of clinical data.
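For illustration only, a margin-of-error comparison of the kind described above can be sketched as follows. The figures, sample size, and formula here are hypothetical assumptions for a simple random sample, not VA's actual survey methodology:

```python
import math

def within_margin_of_error(planned, actual, n, z=1.96):
    """Return True if the planned-vs-actual gap falls inside the survey's
    sampling margin of error (95 percent confidence by default).

    planned, actual: satisfaction rates as proportions (0..1)
    n: survey sample size (hypothetical)
    """
    # Margin of error for a proportion from a simple random sample.
    moe = z * math.sqrt(actual * (1 - actual) / n)
    return abs(planned - actual) <= moe

# Hypothetical numbers: a 72 percent goal, 70 percent actual, 2,000 respondents.
print(within_margin_of_error(0.72, 0.70, 2000))  # prints True: gap is within MOE
```

With these assumed numbers the two-point shortfall is smaller than the roughly two-point margin of error, so the difference would not be considered significant; a five-point shortfall would fall outside it.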
Also, due in part to the IG's recommendations, VA implemented edit checks of its system data to improve the quality of the data used to report the number of unique VA patients. In addition to health care performance data, VA also needs quality financial data. VA received an unqualified opinion on its fiscal year 2000 financial audit report. However, VA continues to experience problems with its financial management systems, including information security and integrated financial management systems weaknesses. Further, VA is unable to accumulate cost data at the activity level. Reliable cost information is needed for VA to assess its operating performance. VA's performance report generally provides clear and reasonable descriptions of its strategies for correcting performance deficiencies and improving future performance on its key performance measures, even in those areas where VA met its fiscal year 2000 performance goal. For example, while VA met its goal for the Chronic Disease Care Index, which is a significant quality indicator, it provided strategies to continue to improve its performance in the future, including initiatives to improve patient safety and provide clinical training to medical staff. VA also identified numerous strategies and initiatives for improving performance in areas where performance goals were not met, such as enhancing provider/patient communications, expanding access to VA health care through increased use of community-based outpatient clinics and use of short-term contracts with non-VA specialists, expanding the use of clinical guidelines, and educating patients and staff on prevention programs. VA identified human capital strategies to improve patient access and appointment timeliness. For example, VA plans to hire additional clinical staff to improve access and appointment timeliness and to add specialists to its primary care teams so that veterans can obtain a greater variety of services, even at some community-based clinics.
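Edit checks of the kind VA applied to its unique-patient data can be sketched as follows. This is a hedged illustration only; the field names, identifier format, and rules are hypothetical assumptions, not VA's actual system:

```python
import re

def edit_check(record):
    """Return a list of data-quality problems found in one patient record.

    The field names and rules below are illustrative only; they are not
    VA's actual edit checks.
    """
    problems = []
    # A reliable unique-patient count depends on a well-formed identifier.
    if not re.fullmatch(r"\d{9}", record.get("patient_id", "")):
        problems.append("patient_id must be a 9-digit identifier")
    # Visit dates must be present and plausibly formatted (YYYY-MM-DD).
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", record.get("visit_date", "")):
        problems.append("visit_date must use YYYY-MM-DD format")
    return problems

def count_unique_patients(records):
    """Count unique patients among records that pass all edit checks."""
    return len({r["patient_id"] for r in records if not edit_check(r)})

# Hypothetical records: one duplicate patient, one malformed identifier.
records = [
    {"patient_id": "123456789", "visit_date": "2000-03-14"},
    {"patient_id": "123456789", "visit_date": "2000-09-02"},
    {"patient_id": "12AB", "visit_date": "2000-05-01"},
]
print(count_unique_patients(records))  # prints 1: the malformed record is excluded
```

The point of such checks is that records failing validation are flagged or excluded before counts are reported, so that duplicates and malformed identifiers do not inflate or deflate the unique-patient figure.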
VA's discussions included some information technology strategies regarding health care. For example, VA is integrating telemedicine technologies into ambulatory care delivery systems to increase patient access and efficiency of health care delivery. VA noted that its facilities are equipped with compatible video-conferencing technology for facilitating geographically remote clinical consultations and patient examinations. VA reported making little progress toward achieving its key outcome of processing compensation and pension benefit claims timely and accurately in fiscal year 2000. Although VA did not meet any of its fiscal year 2000 key performance goals for this outcome, VA reported some improvement in the time required to resolve appeals of claims. However, VA reported that performance declined from fiscal year 1999 to 2000 with respect to the other key measures (see table 2). In its performance report, VA provided a clear discussion of the extent to which it met each of its key performance goals for fiscal year 2000. In addition, VA provided baseline and performance data, where available, to show performance over several fiscal years. In the discussion of its performance, VA noted that it expected timeliness to worsen in fiscal year 2001 because of the effect additional legislative and regulatory requirements will likely have on claims-processing time. For two of the key goals, VA explained why the goals were not achieved. Some of the reasons VA cited for the shortfall in claims-processing timeliness and/or the national accuracy rate included VA (1) underestimating how long it would take to realize the positive impact of initiatives such as increased staffing, improved quality reviews, and training directed at specific deficiencies; (2) using a more rigorous quality review system than in the past; and (3) having to address complex regulatory changes affecting the manner in which claims are processed.
Based, in part, on previous GAO reports and recommendations on claims processing, VA is strengthening its system for reviewing claims accuracy--the Statistical Technical Accuracy Review--by collecting more specific data on deficiencies concerning incorrect decisions in those regional offices that have accuracy problems. In addition, VA is evaluating and disseminating information on regional office practices that hold promise for improving performance nationwide, according to VA officials. While VA explained why it did not achieve its accuracy and timeliness goals for disability rating-related claims, it did not explain why it did not meet the timeliness goal for appeals resolution. VA noted improvement in its appeals resolution timeliness although it did not meet its established goal. VA's performance report provides increasing assurance that its performance information is credible. For example, VA is conducting independent reviews of a sample of claims to assess accuracy rates and weekly assessments of transactions to identify questionable timeliness data from regional offices. VA provided clear and reasonable discussions of strategies, including information technology initiatives, for improving future performance on key claims-processing goals. For example, VA is rewriting claims-processing manuals in an easy-to-understand format to enable employees to find information quickly. In addition, VA has implemented the Veterans On-Line APPlications (VONAPP) that allows veterans to electronically complete and submit applications for compensation, pension, and other benefits. However, we recently testified that VONAPP faces potential security vulnerabilities as a result of weaknesses in general support systems and operating subsystems access controls that affect the department's overall computer operation.
Also, VA is developing the Veterans Service Network's Compensation and Pension Benefits Replacement System, which is expected to provide greater access to claimant information through a state-of-the-art automated environment. We testified that this project has suffered from numerous problems and schedule delays, which threaten the overall success of the initiative. In addition, VA is piloting, testing, or enhancing the operational capability for (1) the Compensation and Pension Record Interchange to provide enhanced accessibility to VHA records, (2) the Personnel Information Exchange System to allow for electronic exchange of military personnel records with DOD, and (3) the Virtual VA, to create a work environment for electronic claims processing. VA's performance report discusses human capital strategies for dealing with the fact that one-fourth of its claims-processing staff will become eligible to retire over the next 5 years. VA's succession planning strategy includes recruiting new staff, redirecting staff from other offices, and providing training. VA hired over 450 new claims-processing staff during fiscal year 2000. In addition, VA plans to redirect 200 existing staff to claims-processing positions and hire nearly 250 new staff during fiscal year 2001. Although VA identified human capital strategies for hiring, redirecting, and training staff, its performance plan does not identify performance goals and measures that are linked to the planned program improvements. VA is continuing to develop computer-assisted training modules and other materials on claims processing under the Training and Performance Support System to train the large wave of new hires and current employees who will replace prospective retirees. VA reported making good progress toward achieving its key outcome of assisting disabled veterans in acquiring and maintaining suitable employment. For the second year in a row, VA reported exceeding its key performance goal for this outcome.
VA reported that 65 percent of the veterans who exited the VR&E program returned to work in fiscal year 2000--more than its goal of 60 percent. Also, VA reported its performance improved by 12 percentage points over its fiscal year 1999 performance. VA's performance report clearly explains the initiatives it believes were responsible for exceeding the goal. For example, VA refocused the program to make the primary goal obtaining suitable employment, improved the assessment of veterans' work skills transferable to the civilian labor market, and increased the number of placements in suitable jobs. To improve the credibility of VR&E's performance information, VA continues to have regional office staff regularly review a sample of cases for quality and VA headquarters staff evaluate data for validity and reliability. VA provides reasonable and clear discussions of strategies to continue to place veterans in suitable employment. As part of these strategies, VA is changing the skill mix of its staff from vocational rehabilitation specialists to employment specialists, and from counseling psychologists to vocational rehabilitation counselors. In addition, VA has established a Blue Ribbon Panel to review the program's policies and practices and evaluate them against best practices of other organizations. Although VA is responsible for VR&E, it partners with the Department of Labor's (DOL) Veterans' Employment and Training Service (VETS), which also helps veterans obtain training and employment. VA conducts joint training with DOL for VETS-funded state and local training and job placement staff. We have reported that VETS does not have clear goals and strategies for targeting veterans for employment assistance. We have made several recommendations to improve VETS, including that DOL clearly define the program's target populations so that staff know where to place their priorities.
For the selected key outcomes, this section describes improvements or remaining weaknesses in VA's (1) fiscal year 2000 performance report in comparison with its fiscal year 1999 report, and (2) fiscal year 2002 performance plan in comparison with its fiscal year 2001 plan. It also discusses the degree to which VA's fiscal year 2000 report and fiscal year 2002 plan address concerns and recommendations by the Congress, GAO, the Inspectors General, and others. VA made improvements to its performance report. For example, VA improved its discussion of major management challenges by adding a section describing its efforts to address the challenges identified by GAO and VA's IG. The Office of Inspector General, for each management challenge it identified, described the challenge and identified recommendations that VA has, and has not, implemented. For example, regarding inappropriate benefit payments, the IG noted that VA has implemented its recommendation to enter into a matching agreement with the Social Security Administration for prison records. However, the IG noted that VA has not yet implemented recommendations to identify and adjust the benefits of incarcerated veterans and dependents, recover overpayments to veterans who have been released from prison, and establish a method to ensure that regional offices properly adjust benefits for incarcerated veterans and dependents in a timely manner. Another improvement made by VA included reporting, for the first time, obligations by strategic goal. In addition, VA's fiscal year 2000 performance report continues to provide reasonable discussions of its (1) progress in meeting key performance goals, (2) strategies for improving performance in the future, and (3) efforts to improve quality of performance data. We discussed these items previously under the key outcomes.
Finally, VA provided a clearer understanding of its compensation and pension claims-processing timeliness in its fiscal year 2000 performance report compared to last year's report. Although VA continued to report the combined performance of compensation and pension, in this year's report VA also presented the performance data separately for each. VA made several improvements to its fiscal year 2002 performance plan. For example, VA has identified additional key measures it believes are important to assessing how well it is meeting the needs of veterans and their families. These include additional measures of patient safety, health care cost-effectiveness, and customer satisfaction with VA services. In general, VA continues to provide adequate discussions of strategies for improving future performance and to update performance goals based on past performance. Also, VA provides additional information on (1) costs associated with meeting strategic goals and objectives, (2) efforts to improve data quality, and (3) ways to address major management challenges. VA's fiscal year 2002 performance plan represents a significant change in the way VA measures its performance toward achieving its key outcome of providing quality health care to veterans at a reasonable cost. Starting with fiscal year 2001, VA will no longer have key measures for the percentage increase in the number of unique patients; the percentage decrease in per-patient costs; the decrease in the percentage of health care funding from alternative revenue streams; and the percentage of medical residents trained in primary care. VA is adding several new key performance measures to better assess progress toward achieving this outcome. For example, VA has designated the following as key measures: A measure related to patient safety--the percentage of root-cause analyses not correctly completed within 45 days of an adverse patient event.
This is a quality measure, based on VA's system for continuously improving patient safety at its medical facilities. When medical errors occur, VA medical staffs are required to prepare root-cause analyses to identify the reasons for these errors. This information, in turn, can be used to identify corrective actions. VA also designated as key measures two indexes of overall VA medical care that include elements of quality, patient access, customer satisfaction, and cost. According to VA, these measures represent more sophisticated ways to measure the efficiency of its medical care than the former key measure of cost per patient, because they measure not just efficiency in providing care, but efficiency in providing high-quality and accessible care that meets patients' needs. These indexes include six other key goals in the fiscal year 2002 performance plan--the revised Chronic Disease Care and Prevention Indexes; the three appointment timeliness measures; and the inpatient and outpatient customer satisfaction measures--plus per-patient costs. VA reported that it, in general, makes changes to key measures (1) when actual performance has met or exceeded original strategic goals, (2) when further performance improvements are unlikely or unreasonable, (3) to ensure that measures are consistent with its strategic plan, and (4) when it develops better ways to measure its performance. VA continues to provide clear and reasonable discussions of strategies for improving performance and continues to revise its performance goals based on past performance. As previously discussed for each key outcome, VA provided strategies for how it expects to achieve its goals. VA described additional strategies in its plan that do not appear in the performance report. For example, VA will expand its initiative to process claims from active-duty service members awaiting discharge from military service.
In addition, VA adjusted performance goals based on its fiscal year 2000 performance as well as external factors, such as new duty-to-assist legislation. For example, VA increased its fiscal year 2001 timeliness goal for processing of disability rating-related claims from 142 days to 202 days and established a goal of 273 days for fiscal year 2002. VA also revised its fiscal year 2001 claims-processing accuracy goal from 85 percent to 72 percent, and established a goal of 75 percent for fiscal year 2002. In its fiscal year 2002 performance plan, VA provides additional information on the estimated costs of meeting its fiscal year 2002 performance goals. The fiscal year 2001 performance plan included VA's estimates of obligations needed to meet each of its strategic goals. VA's fiscal year 2002 performance plan also provided estimated obligations by strategic objective. Because each of VA's four main strategic goals covers multiple objectives related to different VA programs, presenting cost data by objective provides a clearer linkage of funding to achievement of performance goals. Meanwhile, VA continues to work with the Office of Management and Budget (OMB) on a plan to restructure its budget accounts, so VA's budget presentations to the Congress can better link proposed funding with specific levels of performance. VA's fiscal year 2002 performance plan includes a more detailed discussion of its efforts to improve the quality of its performance data. For example, the Veterans Health Administration identifies in more detail its initiatives to improve the quality of its data on patient care developed by its health care facilities. It has initiatives to improve the quality of coding at facilities to ensure that the care provided to veterans is being correctly recorded. The Veterans Benefits Administration also provided more detailed information on its data quality efforts.
It created a Data Management Office to work with its program offices to identify strategies and initiatives to address the collection, processing, and storage of quality data. In the fiscal year 2002 performance plan, VA restructured its discussion of major management challenges to mirror the challenges identified by GAO in our January 2001 Performance and Accountability Series report on VA, and the challenges identified by VA's IG in November 2000. For each of these challenges, VA provided information on the nature of the challenge, and the status of its efforts to resolve it. However, VA could have provided more specific discussions of its plans to address its major management challenges. For example, in its discussion of the challenges we identified for VA's health care program, VA generally restated findings from our January 2001 report to describe its current status and future plans. VA addressed all six of the major management challenges identified by GAO, and generally described goals or actions that VA is taking or plans to take in response to them. GAO has identified two governmentwide high-risk areas: strategic human capital management and information security. VA has established strategies for achieving strategic goals and objectives for human capital management and information security. VA has established a performance goal and identified milestones for implementing certain strategies to address information security. However, VA has not identified performance goals and measures for human capital management linked to achieving programmatic results. In addition, GAO has identified four major management challenges facing VA. We found that VA's performance report discussed the agency's progress in resolving all of its challenges.
Of the six major management challenges identified by GAO, its performance plan had (1) goals and measures that were directly related to four of the challenges, (2) goals and measures that were indirectly related to one of the challenges, and (3) no goals and measures related to one of the challenges, though it discussed strategies to address it. Appendix I provides information on how VA addressed these challenges. As agreed, our evaluation was generally based on the requirements of GPRA, the Reports Consolidation Act of 2000, guidance to agencies from OMB for developing performance plans and reports (OMB Circular A-11, Part 2), previous reports and evaluations by us and others, our knowledge of VA's operations and programs, GAO identification of best practices concerning performance planning and reporting, and our observations on VA's other GPRA-related efforts. We also discussed our review with agency officials in the Office of Assistant Secretary for Management and with the VA Office of Inspector General. The agency outcomes that were used as the basis for our review were identified by the Ranking Minority Member of the Senate Governmental Affairs Committee as important mission areas for the agency and generally reflect the outcomes for all of VA's programs or activities. The major management challenges confronting VA, including the governmentwide high-risk areas of strategic human capital management and information security, were identified by GAO in our January 2001 performance and accountability series and high-risk update, and were identified by VA's IG in December 2000. We did not independently verify the information contained in the performance report and plan, although we did draw from other GAO work in assessing the validity, reliability, and timeliness of VA's performance data. We conducted our review from April 2001 through June 2001 in accordance with generally accepted government auditing standards.
VA generally agrees with the information presented in our report. However, VA was concerned that our report suggested that the Department's performance plan was inadequate because, in some cases, it does not have performance goals and measures linked to each of the major management challenges contained in appendix I. For example, VA cited our statement that it has not identified performance goals and measures for human capital management linked to achieving programmatic results. VA believes that it is not necessary to develop and track quantifiable performance goals and measures for management challenges that are not strategic in nature. In these cases, VA believes that it is appropriate and sufficient to have a mitigation plan including milestones for completing remedial actions. (App. II contains VA's written comments.) As we reported, VA's performance plan identified actions for resolving each of its major management challenges, even when quantifiable goals and measures were not included. However, OMB Circular No. A-11 states, "Performance goals for management problems should be included in the annual plan, particularly for problems whose resolution is mission-critical...." In particular, the annual plan should include a performance goal(s) covering the major human resources strategies, such as recruitment, retention, and skill development and training, according to OMB guidance. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to appropriate congressional committees; the Secretary of Veterans Affairs; and the Director, Office of Management and Budget. We will also make copies available to others upon request. If you or your staff have any questions, please call me at (202) 512-7101.
Key contributors to this report were Shelia Drake, Paul Wright, Walter Gembacz, Greg Whitney, John Borrelli, Valerie Melvin, J. Michael Resser, Mary J. Dorsey, Alana Stanfield, Steve Morris, and Bonnie McEwan. The following table identifies the major management challenges confronting the Department of Veterans Affairs (VA), which include the governmentwide high-risk areas of strategic human capital management and information security. The first column lists the challenges identified by our office and/or VA's Inspector General (IG). The second column discusses the progress VA made in resolving its challenges, as discussed in its fiscal year 2000 performance report. The third column discusses the extent to which VA's fiscal year 2002 performance plan includes performance goals and measures to address the challenges that we and the VA's IG identified. We found that VA's performance report discussed the agency's progress in resolving all its challenges. Of the 16 major management challenges, its performance plan had (1) goals and measures that were directly related to 7 of the challenges, (2) goals and measures that were indirectly related to 2 of the challenges, and (3) no goals and measures related to 7 of the challenges, though it discussed strategies to address them. Major management challenge (GAO-designated governmentwide high risk): Strategic human capital management. GAO has identified shortcomings at multiple agencies involving key elements of modern human capital management, including strategic human capital planning and organizational alignment; leadership continuity and succession planning; acquiring and developing staffs whose size, skills, and deployment meet agency needs; and creating results-oriented organizational cultures. In its report, VA recognizes that a comprehensive workforce planning initiative is essential for VA to remain a provider of quality services to America's veterans.
An anticipated upswing in retirements, rapid changes in technology, an increasingly diverse labor and beneficiary pool, and different expectations of younger workers are forces that strongly suggest the need for new recruitment and retention practices to meet program goals. VA states it has established a workforce planning process, and is in the beginning stages of developing and implementing a workforce forecasting system. VA has a strategic goal, strategic objectives, and strategies to address human capital. However, they are not directly linked to program performance. The plan identifies improved workforce planning and enhancing accountability for performance as initiatives that will permit the agency to deliver "world-class" service to veterans and their families. VA has developed a workforce planning model, secured VA senior leadership approval of the model, and worked with its administrations to pilot the model. VA's performance report noted that its succession planning strategy includes recruiting new staff, redirecting staff from other offices, and providing training. For example, VA hired over 450 new claims-processing staff during fiscal year 2000. In addition, VA plans to redirect 200 existing staff to claims-processing positions and hire nearly 250 new staff during fiscal year 2001. Major management challenge (GAO-designated governmentwide high risk): Information security. Our January 2001 high-risk update noted that agencies' and governmentwide efforts to strengthen information security have gained momentum and expanded. Nevertheless, recent audits continue to show federal computer systems are riddled with weaknesses that make them highly vulnerable to computer-based attacks and place a broad range of critical operations and assets at risk of fraud, misuse, and disruption.
Progress in resolving major management challenge as discussed in the fiscal year 2000 performance report: VA has acknowledged the security weaknesses in its systems and data and reported information security controls as a material weakness in its Federal Managers' Financial Integrity Act report for 2000. To address the department's information security control issues, VA noted in its performance report that it had established a centrally managed, agency-wide security program. In addition, the department issued a revised information security plan in October 2000 that identified a number of security enhancements that were being accelerated to improve agency-wide information security. These included enhancements to (1) security awareness, (2) risk assessments, (3) security policies, (4) security officer training, and (5) system certification.

Applicable goals and measures in the fiscal year 2002 performance plan: Yes. VA has developed corrective action plans to address the information security weaknesses. These plans were in various stages of implementation. VA's performance plan noted that VA established a performance indicator to measure progress in implementing its information security program. The department targeted this program to be 20 percent complete by fiscal year 2001 and 80 percent complete by fiscal year 2002. However, this measurement gauges implementation rather than the effectiveness of VA's security, which would be a more meaningful measure of program success. As we have previously reported, the VA information security management plan generally includes the key elements of an effective security management program. However, the success of VA's efforts to improve the department's computer security will depend largely on adequate program resources and commitment throughout the department.
The Chief Information Officers Council, in coordination with the National Institute of Standards and Technology and the Office of Management and Budget, has developed a framework for agencies to use in determining the current status of information systems controls and, where necessary, to establish a target for improvement. VA could use this framework as a means for measuring progress in improving its information security program.

Several other challenges are discussed under outcomes in the report; for these, the plan includes applicable goals and measures, including measures for alternative revenues and for conducting studies to assess and realign its health care system.

Budget: VA's budget systems need to be aligned. Not addressed in the report. Applicable goals and measures in the fiscal year 2002 performance plan: None.

Performance-based budgeting: VA and OMB staff jointly developed a proposal to restructure VA's budget accounts to facilitate charging each program's budget account for all of the significant resources used to operate and produce its outcomes and outputs. VA is continuing to work with major stakeholders on implementation issues.

Financial management: Weaknesses remain despite an unqualified audit opinion.

Information technology (IT): Technology to help serve veterans needs improvement. VA implemented a capital investment process that the department uses to select, control, and evaluate IT investments. The department reviews IT projects that exceed planned expenditures by 10 percent to determine whether to change the scope of funding or terminate the project.
The report did not address progress VA made in developing a departmentwide IT architecture, business process reengineering, or the need to obtain a full-time chief information officer.

Financial management: VA described a number of information security enhancements, as described under the Information Security challenge.

Information technology: VA has many initiatives planned or in progress. For example, VA is taking steps to develop an architecture that will promote departmentwide interoperability and data sharing. VA stated that it has completed the technical component of this architecture and is in the process of developing the logical component. In addition, VA stated that efforts are underway to improve its capital investment process in response to GAO recommendations. Also, VA stated that it is reevaluating its previous decision to leave business process reengineering at the administration level. However, VA provided little information on its effort to obtain a full-time chief information officer or when one would be appointed.

Financial management: VA has developed corrective action plans to address information security weaknesses, which are in various stages of implementation. Discussed under outcomes in the report.

VA has initiated changes to its resource allocation method to correct resource and infrastructure imbalances, has given VA managers authority to reduce physician levels in overstaffed specialties, and is implementing a cost-based data system to provide more useful performance measurement information on resources and clinical and administrative workloads. Yes. Discussed under outcomes in the report.

None. Resource allocation continues to be a major public policy issue. VA management is addressing staffing and other resource allocation disparities as part of various initiatives to restructure its health care system. VA is implementing IG recommendations regarding Decision Support System standardization.
Claims processing and appeals processing: Discussed under outcomes in the report. Applicable goals and measures in the fiscal year 2002 performance plan: Yes for claims processing and appeals processing; discussed under outcomes in the report.

VA has implemented its IG's recommendations aimed at ensuring that VHA performs complete medical examinations. VA is also evaluating the use of contract examinations. VA has several initiatives in various stages of implementation that address inappropriate benefit payments. For example, VA asked the IG to identify internal control weaknesses that might facilitate or contribute to fraud in the compensation and pension program. The IG found vulnerabilities involving numerous technical, procedural, and policy issues; VA has agreed to initiate corrective actions. VA reports that it has completed audits of the quality of data used to compute three of the current key performance measures, including, among others, rating-related claims-processing timeliness. Audits are underway regarding the measures for the Prevention Index and the Chronic Disease Care Index. VA reports taking corrective actions on deficiencies identified. However, VA notes that it continues to find significant problems with data input and weaknesses in information security, which limit VA's confidence in the quality of the data.

None for timeliness and quality of medical examinations. However, VA has established standards of performance for reducing the number of incomplete examinations for its field offices.

None. The plan discusses initiatives to identify and adjust payments to, among others, veterans who receive dual compensation, underreport income, or are incarcerated or deceased. For example, VA is starting or continuing a variety of computer matches with other agencies' records to identify inappropriate payments.
However, VA has still been unable to offset disability compensation against military reserve pay for all persons who receive both payments. Procedures established between DOD and VA have not been effective or fully implemented. DOD is having difficulties obtaining from the military services the accurate data that VA needs to carry out the offsets.

None. VA began taking action to correct deficiencies in its data. Management officials continue to refine procedures for compiling performance data. Performance data are receiving greater scrutiny within the department, and procedures are being developed to enhance data validation.

VA reported that it has developed corrective action plans for the information security control weaknesses, with corrective actions to be completed by 2002. While VA established a system to track the resolution of identified security weaknesses, as we have previously reported, the department does not have a process to ensure that corrective actions were effective. Applicable goals and measures in the fiscal year 2002 performance plan: Yes. Discussed under the Information Security challenge.

VA consolidated financial statements: Material internal control weaknesses exist related to information security, housing credit assistance, and fund balances with Treasury reconciliations. Also discussed under the Information Security challenge. VA resolved two of the three weaknesses. However, the information security weakness remains unresolved. For additional actions, see Information Security above.

None. The report acknowledges the debt management weakness. VA has initiated actions, such as a one-time review of all open/active cases, to correct fraud and abuse, but the report did not identify the extent to which improvements have been made.
VA does not have goals or measures directly applicable to resolving the material weaknesses reported in its financial statement audit report. However, the plan has a performance measure that indirectly addresses the information security material weakness. VA has developed corrective action plans for the information security and control issues and expects to complete corrective actions in 2002.

None. VA identified actions that it expects will result in a significant improvement in collections, such as installing computer software to facilitate referral of debt to the Department of the Treasury's Offset Program.

None. VA recently completed a one-time review of all open/active cases. VA identified 255 cases as potentially fraudulent. VA is implementing other programs to prevent or identify fraud in the future, such as identifying workers' compensation claimants who are also receiving VA compensation and pension benefits to prevent dual payments.

The report identified savings of $13 million in fiscal year 2000 attributed to aggressive use of the governmentwide purchase card. However, the IG identified significant vulnerabilities regarding its use, including circumvention of competition requirements and payment of excessive prices. Applicable goals and measures in the fiscal year 2002 performance plan: None. VA is conducting business reviews of all acquisition and materiel management functions at VA facilities to resolve problems in this area.
This report reviews the Department of Veterans Affairs (VA) fiscal year 2000 performance report and fiscal year 2002 performance plan required by the Government Performance and Results Act of 1993 to assess VA's progress in achieving selected key outcomes that are important to its mission. VA reported making mixed progress toward achieving its key outcomes. For example, VA reported that it made good progress in providing high-quality care to patients, but it did not achieve its goal of processing veterans' benefits claims in a timely manner. GAO found that VA made several improvements to its fiscal year 2000 performance report and fiscal year 2002 performance plan. These improvements resulted in clearer discussions of VA's management challenges and additional performance measures for assessing program achievement. Furthermore, VA addressed all six of the major management challenges previously identified by GAO and generally described goals or actions that VA is taking or plans to take in response to them. VA has established strategies for achieving strategic goals and objectives for two of these challenges: human capital management and information security. VA has established a performance goal and identified milestones for implementing certain strategies to address information security. However, VA has not identified performance goals and measures for human capital management linked to achieving programmatic results.
Several federal laws and policies--predominantly the Federal Information Security Modernization Act of 2014 and its predecessor, the Federal Information Security Management Act of 2002 (both referred to as FISMA)--provide a framework for protecting federal information and IT assets. The purpose of both laws is to provide a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. The laws establish responsibilities for implementing the framework and assign those responsibilities to specific officials and agencies: The Director of the Office of Management and Budget (OMB) is responsible for developing and overseeing implementation of policies, principles, standards, and guidelines on information security in federal agencies, except with regard to national security systems. Since 2003, OMB has issued policies and guidance to agencies on many information security issues, including providing annual instructions to agencies and inspectors general for reporting on the effectiveness of agency security programs. More recently, OMB issued the Cybersecurity Strategy and Implementation Plan for the Federal Civilian Government in October 2015, which aims to strengthen federal civilian cybersecurity by (1) identifying and protecting high-value information and assets, (2) detecting and responding to cyber incidents in a timely manner, (3) recovering rapidly from incidents when they occur and accelerating the adoption of lessons learned, (4) recruiting and retaining a highly qualified cybersecurity workforce, and (5) efficiently acquiring and deploying existing and emerging technology. OMB also recently updated its Circular A-130 on managing federal information resources to address the protection and management of federal information resources, including PII.
The head of each federal agency has overall responsibility for providing appropriate information security protections for the agency's information and information systems, including those collected, maintained, operated or used by others on the agency's behalf. In addition, the head of each agency is required to ensure that senior agency officials provide information security for the information and systems supporting the operations and assets under their control, and the agency chief information officer (CIO) is delegated the authority to ensure compliance with the law's requirements. The assignment of information security responsibilities to senior agency officials is noteworthy because it reinforces the concept that information security is a business function as well as an IT function. Each agency is also required to develop, document, and implement an agency-wide information security program that involves an ongoing cycle of activity including (1) assessing risks, (2) developing and implementing risk-based policies and procedures for cost-effectively reducing information security risk to an acceptable level, (3) providing awareness training to personnel and specialized training to those with significant security responsibilities, (4) testing and evaluating effectiveness of security controls, (5) remedying known weaknesses, and (6) detecting, reporting, and responding to security incidents. As discussed later, our work has shown that agencies have not fully or effectively implemented these programs and activities on a consistent basis. FISMA requires the National Institute of Standards and Technology (NIST) to develop information security standards and guidelines for agencies. 
To this end, NIST has developed and published federal information processing standards that require agencies to categorize their information and information systems according to the impact or magnitude of harm that could result if they are compromised and specify minimum security requirements for federal information and information systems. NIST has also issued numerous special publications that provide detailed guidelines to agencies for securing their information and information systems. The 2014 FISMA established oversight responsibilities for the Department of Homeland Security (DHS), including (1) assisting OMB with oversight and monitoring of agencies' information security programs, (2) operating the federal information security incident center, and (3) providing agencies with operational and technical assistance. Other cybersecurity-related laws were recently enacted, which include the following: The National Cybersecurity Protection Act of 2014 codifies the role of DHS's National Cybersecurity and Communications Integration Center as the federal civilian interface for sharing information about cybersecurity risks, incidents, analysis, and warnings for federal and non-federal entities, including owners and operators of systems supporting critical infrastructure. The Cybersecurity Enhancement Act of 2014, among other things, authorizes NIST to facilitate and support the development of voluntary standards to reduce cyber risks to critical infrastructure and, in coordination with OMB, to develop and encourage a strategy for the adoption of cloud computing services by the federal government.
The Cybersecurity Act of 2015, among other things, sets forth authority for enhancing the sharing of cybersecurity-related information among federal and non-federal entities, gives DHS's National Cybersecurity and Communications Integration Center responsibility for implementing these mechanisms, requires DHS to make intrusion and detection capabilities available to any federal agency, and calls for agencies to assess their cyber-related workforce. Our work has identified the need for improvements in the federal government's approach to cybersecurity. While the administration and agencies have acted to improve the protections over their information and information systems, additional actions are needed. Federal agencies need to effectively implement risk-based, entity-wide information security programs consistently over time. Since FISMA was enacted in 2002, agencies have been challenged to fully and effectively develop, document, and implement agency-wide programs to secure the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency or contractor. For example, in fiscal year 2015, 19 of the 24 major federal agencies covered by the Chief Financial Officers Act of 1990 reported that information security control deficiencies were either a material weakness or significant deficiency in internal controls over financial reporting. In addition, inspectors general at 22 of the 24 agencies cited information security as a major management challenge for their agency. The following actions will assist agencies in implementing their information security programs. Enhance capabilities to effectively identify cyber threats to agency systems and information. A key activity for assessing cybersecurity risk and selecting appropriate mitigating controls is the identification of cyber threats to computer networks, systems, and information.
In 2016, we reported on several factors that agencies identified as impairing their ability to identify these threats to a great or moderate extent. The impairments included an inability to recruit and retain personnel with the appropriate skills, rapidly changing threats, continuous changes in technology, and a lack of government-wide information-sharing mechanisms. Addressing these impairments will enhance the ability of agencies to identify the threats to their systems and information and be in a better position to select and implement appropriate countermeasures. Implement sustainable processes for securely configuring operating systems, applications, workstations, servers, and network devices. We routinely determine that agencies do not enable key information security capabilities of their operating systems, applications, workstations, servers, and network devices. Agencies were not always aware of the insecure settings that introduced risk to the computing environment. Establishing strong configuration standards and implementing sustainable processes for monitoring and enabling configuration settings will strengthen the security posture of federal agencies. Patch vulnerable systems and replace unsupported software. Federal agencies consistently fail to apply critical security patches in a timely manner on their systems, sometimes years after the patch is available. We also consistently identify instances where agencies use software that is no longer supported by their vendors. These shortcomings often place agency systems and information at significant risk of compromise since many successful cyberattacks exploit known vulnerabilities associated with software products. Using vendor-supported and patched software will help to reduce this risk. Develop comprehensive security test and evaluation procedures and conduct examinations on a regular and recurring basis. 
The information security assessments performed for agency systems were sometimes based on interviews and document reviews, limited in scope, and did not identify many of the security vulnerabilities that our examinations identified. Conducting in-depth security evaluations that examine the effectiveness of security processes and technical controls is essential for effectively identifying system vulnerabilities that place agency systems and information at risk. Strengthen oversight of contractors providing IT services. As demonstrated by the Office of Personnel Management data breach of 2015, cyber attackers can sometimes gain entree to agency systems and information through the agency's contractors or business partners. Accordingly, agencies need to ensure that their contractors and partners are adequately protecting the agency's information and systems. In August 2014, we reported that five of six selected agencies were inconsistent in overseeing the execution and review of security assessments that were intended to determine the effectiveness of contractor implementation of security controls, resulting in security lapses. In 2016, agency chief information security officers (CISO) we surveyed reported that they were challenged to a large or moderate extent in overseeing their IT contractors and receiving security data from the contractors, thereby diminishing the CISOs' ability to assess how well agency information maintained by the contractors is protected. Effectively overseeing and reviewing the security controls implemented by contractors and other parties is essential to ensuring that the organization's information is properly safeguarded. The federal government needs to improve its cyber incident detection, response, and mitigation capabilities. 
Even agencies or organizations with strong security can fall victim to information security incidents due to previously unknown vulnerabilities that are exploited by attackers to intrude into an agency's information systems. Accordingly, agencies need to have effective mechanisms for detecting, responding to, and recovering from such incidents. The following actions will assist the federal government in building its capabilities for detecting, responding to, and recovering from security incidents. DHS needs to expand capabilities, improve planning, and support wider adoption of its government-wide intrusion detection and prevention system. In January 2016, we reported that DHS's National Cybersecurity Protection System (NCPS) had limited capabilities for detecting and preventing intrusions, conducting analytics, and sharing information. In addition, adoption of these capabilities at federal agencies was limited. Expanding NCPS's capabilities for detecting and preventing malicious traffic, defining requirements for future capabilities, and developing network routing guidance would increase assurance of the system's effectiveness in detecting and preventing computer intrusions and support wider adoption by agencies. Improve cyber incident response practices at federal agencies. In April 2014 we reported that 24 major federal agencies did not consistently demonstrate that they had effectively responded to cyber incidents. For example, agencies did not determine the impact of incidents or take actions to prevent their recurrence. By developing complete policies, plans, and procedures for responding to incidents and effectively overseeing response activities, agencies will have increased assurance that they will effectively respond to cyber incidents. Update federal guidance on reporting data breaches and develop consistent responses to breaches of personally identifiable information (PII). 
As we reported in December 2013, eight selected agencies did not consistently implement policies and procedures for responding to breaches of PII. For example, none of the agencies documented the evaluation of incidents and lessons learned. In addition, OMB's guidance to agencies to report each PII-related incident--even those with inherently low risk to the individuals affected--within 1 hour of discovery may cause agencies to expend resources to meet reporting requirements that provide little value and divert time and attention from responding to breaches. Updating guidance and consistently implementing breach response practices will improve the effectiveness of government-wide and agency-level data breach response programs. The federal government needs to expand its cyber workforce planning and training efforts. Ensuring that the government has a sufficient number of cybersecurity professionals with the right skills and that its overall workforce is aware of information security responsibilities remains an ongoing challenge. These actions can help meet this challenge: Enhance efforts for recruiting and retaining a qualified cybersecurity workforce. This has been a long-standing dilemma for the federal government. In 2012, agency chief information officers and experts we surveyed cited weaknesses in education, awareness, and workforce planning as a root cause in hindering improvements in the nation's cybersecurity posture. Several experts also noted that the cybersecurity workforce was inadequate, both in numbers and training. They cited challenges such as the lack of role-based qualification standards and difficulties in retaining cyber professionals. In 2016, agency CISOs we surveyed reported that difficulties related to having sufficient staff; recruiting, hiring, and retaining security personnel; and ensuring security personnel have appropriate skills and expertise pose challenges to their abilities to carry out their responsibilities effectively. 
Improve cybersecurity workforce planning activities at federal agencies. In November 2011, we reported that only five of eight selected agencies had developed workforce plans that addressed cybersecurity. Further, agencies reported challenges with filling cybersecurity positions, and only three of the eight had a department-wide training program for their cybersecurity workforce. In summary, federal law and policy set forth a framework for addressing cybersecurity risks to federal systems. However, implementation of this framework has been inconsistent, and additional action is needed to address ongoing challenges. Specifically, agencies need to address control deficiencies and fully implement organization-wide information security programs, cyber incident response and mitigation efforts need to be improved across the government, and establishing and maintaining a qualified cybersecurity workforce needs to be a priority. Chairman Donilon, Vice Chair Palmisano, and distinguished members of the Commission, this concludes my prepared statement. I would be happy to answer any questions you have. If you have any questions about this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected]. Other staff members who contributed to this statement include Larry Crosland and Michael Gilmore (assistant directors), Chris Businsky, Franklin Jackson, Kenneth A. Johnson, Lee McCracken, Scott Pettis, and Adam Vodraska. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The dependence of federal agencies on computerized information systems and electronic data makes them potentially vulnerable to a wide and evolving array of cyber-based threats. Securing these systems and data is vital to the nation's safety, prosperity, and well-being. Because of the significance of these risks and long-standing challenges in effectively implementing information security protections, GAO has designated federal information security as a government-wide high-risk area since 1997. In 2003 this area was expanded to include computerized systems supporting the nation's critical infrastructure, and again in February 2015 to include protecting the privacy of personally identifiable information collected, maintained, and shared by both federal and nonfederal entities. GAO was asked to provide a statement on laws and policies shaping the federal IT security landscape and actions needed for addressing long-standing challenges to improving the nation's cybersecurity posture. In preparing this statement, GAO relied on previously published work. Over the past several years, GAO has made about 2,500 recommendations to federal agencies to enhance their information security programs and controls. As of September 16, 2016, about 1,000 have not been implemented. Cyber incidents affecting federal agencies have continued to grow, increasing about 1,300 percent from fiscal year 2006 to fiscal year 2015. Several laws and policies establish a framework for the federal government's information security and assign implementation and oversight responsibilities to key federal entities, including the Office of Management and Budget, executive branch agencies, and the Department of Homeland Security (DHS). However, implementation of this framework has been inconsistent, and additional actions are needed: Effectively implement risk-based information security programs. Agencies have been challenged to fully and effectively establish and implement information security programs. 
They need to enhance capabilities to identify cyber threats, implement sustainable processes for securely configuring their computer assets, patch vulnerable systems and replace unsupported software, ensure comprehensive testing and evaluation of their security on a regular basis, and strengthen oversight of IT contractors. Improve capabilities for detecting, responding to, and mitigating cyber incidents. Even with strong security, organizations can continue to be victimized by attacks exploiting previously unknown vulnerabilities. To address this, DHS needs to expand the capabilities and adoption of its intrusion detection and prevention system, and agencies need to improve their practices for responding to cyber incidents and data breaches. Expand cyber workforce and training efforts. Ensuring that the government has a sufficient cybersecurity workforce with the right skills and training remains an ongoing challenge. Government-wide efforts are needed to better recruit and retain a qualified cybersecurity workforce and to improve workforce planning activities at agencies.
The Navy's UCLASS system will be the first unmanned aircraft system deployed on an aircraft carrier. Efforts to develop an unmanned combat air system for the Navy can be traced back to 2003 when DOD established a joint Navy and Air Force program called the Joint Unmanned Combat Air System (J-UCAS). This joint effort drew on knowledge that the Air Force had gained through early development of the Unmanned Combat Air Vehicle, an effort that began in the late 1990s. The J-UCAS program was canceled in late 2005. The following year, the Navy initiated the Unmanned Combat Air System Demonstration (UCAS-D) program--the immediate predecessor to UCLASS--with the intent to design, develop, integrate, test, and demonstrate the technical feasibility of operating unmanned air combat systems from an aircraft carrier. In 2013, the Navy successfully launched and landed a UCAS-D on an aircraft carrier. In total the Navy invested more than $1.4 billion in the UCAS-D program. In 2011, as UCAS-D efforts were ongoing, the Navy received approval from DOD to begin planning for the UCLASS acquisition program. In our past work examining weapon acquisition and best practices for product development, we found that leading commercial firms and successful DOD programs pursue an acquisition approach that is anchored in knowledge, whereby high levels of knowledge are demonstrated at critical junctures. Specifically, there are three critical junctures--knowledge points--in an acquisition program at which decision makers must have adequate knowledge to make large investment decisions. If the knowledge attained at each juncture does not confirm the business case on which the acquisition was originally justified, the program does not go forward. At the first knowledge point, a match must be made between the customers' needs and the available resources--technical and engineering knowledge, time, and funding--before a system development program is started.
At the second knowledge point, about midway through development, the developer must demonstrate that the system's design is stable and that it can meet performance requirements. At the third knowledge point, the developer must show that the system can be manufactured within cost, schedule, and quality targets and that it is reliable before beginning production. The first knowledge point is the most critical point of the three. At that point programs should present their business case for review and approval, which establishes an acquisition program baseline. This baseline describes the cost, quantity, schedule, and performance goals of a program and provides a framework for effective oversight and accountability. This first knowledge point typically coincides with a substantial financial commitment. DOD's acquisition policy and guidance encourage the use of a knowledge-based acquisition approach, in which major decision reviews are aligned with the start of key acquisition phases, including technology development, system development--referred to as engineering and manufacturing development--and production. Figure 1 aligns the knowledge points with key decision points in DOD's acquisition process. According to DOD acquisition policy, the purpose of the technology development phase is to reduce technology risk, determine and mature the appropriate set of technologies to be integrated into a full system, and to demonstrate critical technology elements on prototypes. A system level preliminary design review is to be held during the technology development phase to inform requirements trades; improve cost estimation; and identify remaining design, integration, and manufacturing risks. The results of the preliminary design review are to be reported to decision makers at Milestone B--the decision review in DOD's process that corresponds with knowledge point 1 and initiates system development. 
The purpose of system development is to develop a system or an increment of capability, complete full system integration, develop an affordable and executable manufacturing process, and demonstrate system integration, interoperability, safety, and utility, among other things. System development provides a critical opportunity for objective oversight before beginning production. At Milestone B, major defense acquisition programs are required by DOD policy to have approved requirements, an independent cost estimate, and an acquisition program baseline; begin tracking unit cost changes and report unit cost growth against Nunn-McCurdy statutory thresholds; and periodically report to Congress on the cost, schedule, and performance status of the program in Selected Acquisition Reports. At that time, major defense acquisition programs are also required by statute to present a business case analysis and certify on the basis of that analysis that the program is affordable, has reasonable lifecycle cost and schedule estimates, and that technologies have been demonstrated in a relevant environment, among other things. Taken together, these requirements form the basic oversight framework to ensure that Congress and DOD decision makers are adequately informed about the program's cost, schedule, and performance progress. In addition, the information is valuable for identifying areas of program risk and their causes, and helps to ensure that decision makers consider the full financial commitment before initiating a new development program. Once initiated at Milestone B, major defense acquisition programs are required to measure program performance against the program's baseline estimate. Changes to the baseline are only authorized under certain conditions, including a program restructure that is approved by the milestone decision authority, or a breach of the critical Nunn-McCurdy statutory threshold where DOD certifies continuation of the program to Congress.
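The knowledge-point logic described above is essentially a sequence of gates: a program advances only while the evidence at each juncture confirms the business case. A minimal, illustrative sketch of that gating (the names and criteria strings are hypothetical, chosen to mirror the three knowledge points):

```python
from dataclasses import dataclass

@dataclass
class KnowledgePoint:
    """One of the three critical junctures in a knowledge-based acquisition."""
    name: str
    criterion: str
    demonstrated: bool  # was adequate knowledge shown at this juncture?

def may_proceed(points: list[KnowledgePoint]) -> bool:
    """A program goes forward only if every gate reached so far is satisfied."""
    return all(kp.demonstrated for kp in points)

gates = [
    KnowledgePoint("KP1 / Milestone B", "needs matched to available resources", True),
    KnowledgePoint("KP2", "design stable and meets performance requirements", True),
    KnowledgePoint("KP3 / Milestone C", "producible within cost, schedule, quality", False),
]

print(may_proceed(gates))       # production gate not yet satisfied → False
print(may_proceed(gates[:2]))   # development gates satisfied → True
```

The point of the sketch is simply that oversight is cumulative: failing any gate stops forward progress, which is the discipline a deferred Milestone B review removes.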
In fiscal year 2014, the Navy plans to commit to investing an estimated $3.7 billion to develop, produce, and field from 6 to 24 aircraft and modify 1 to 4 aircraft carriers as an initial increment of UCLASS capability--referred to as an early operational capability. The Navy plans to manage UCLASS as if it were a technology development program, although its strategy encompasses activities commensurate with a program in system development and early production. Accordingly, it is not planning to hold a Milestone B review to formally initiate a system development program--which would trigger key oversight mechanisms--until after the initial capability is fielded in fiscal year 2020. This strategy means the program will not be subject to these oversight mechanisms, including an acquisition program baseline; Nunn-McCurdy unit cost growth thresholds; and periodic reporting of the program's cost, schedule, and performance progress. This strategy will likely limit Congress's ability to oversee this 6-year, multibillion-dollar program. Navy officials believe that their approach effectively utilizes the flexibility in DOD's acquisition policy to ensure that UCLASS requirements and concept of operations are well understood and achievable before formally beginning a system development program. Yet they emphasize that by fiscal year 2020 they may have accumulated enough knowledge to allow them to bypass a formal development program and proceed directly to production at Milestone C. Figure 2 illustrates the Navy's strategy. As indicated above, the Navy plans to award four firm fixed-price contracts in fiscal year 2013 to competing contractors to develop preliminary designs for the UCLASS air vehicle. The following year, the Navy plans to review those preliminary designs, conduct a full and open competition, and award a contract to develop and deliver the UCLASS air vehicles, effectively ending competition within the air vehicle segment.
A review of the full system level preliminary design--including the air vehicle, carrier, and control segments--is scheduled for fiscal year 2015. DOD policy and best practices indicate that around this review point a program would typically be expected to hold a Milestone B review and transition from technology development to system development. Figure 3 illustrates the later point in the process in which the Navy plans to establish the UCLASS acquisition program baseline and formally initiate a development program. Although the Navy does not plan to hold a Milestone B review until 2020, if at all, it is effectively committing to system development and early production in fiscal year 2015. According to the Navy's strategy, system development and early production activities, including system integration and air vehicle fabrication, will begin in fiscal year 2015 around the time of the system-level preliminary design review. The Navy also expects to increase annual funding for the UCLASS system from $146.7 million to $522.5 million between fiscal years 2014 and 2015. Testing to demonstrate the system's capabilities is scheduled to take place from fiscal year 2017--scheduled first flight--through fiscal year 2020, when an early operational capability is expected to be achieved. If the program proceeds according to the Navy's plan, by 2020, it will have completed many of the activities typically authorized by a Milestone B decision. Moreover, since enough quantities of UCLASS are expected to be delivered for operational use on one or more aircraft carriers, the strategy could also be seen as having begun early production before a Milestone C decision is held. In a March 2007 report we identified oversight challenges presented by an acquisition strategy that calls for proceeding into system development, demonstration, manufacturing, and fielding without the benefit of a Milestone B decision. 
A framework of laws makes major defense acquisition programs accountable for their planned outcomes and cost, gives decision makers a means to conduct oversight, and ensures some level of independent program review. The application of these acquisition laws is typically triggered by a program's entry into system development. While the activities the UCLASS program plans to undertake demonstrate that the program is entering system development, these laws will not be triggered because the program is not holding a Milestone B review and formally initiating a development program. Therefore, the UCLASS program will not be accountable for establishing a program baseline or for reporting any cost growth to that baseline to DOD and Congress. The UCLASS system faces several risks related to cost, schedule, and program management that, if not addressed, could lead to additional cost and significant schedule delays for the system. The Navy recognizes that many of these risks exist and has mitigation plans in place to address them. UCLASS cost estimates are uncertain and could exceed available funding: Preliminary cost estimates completed by the Navy indicate that the development and fielding of the initial UCLASS system through fiscal year 2020 could cost between $3.7 and $5.9 billion, all of which is expected to be development funding. However, the Navy has only projected funding of $3.2 billion for the system through fiscal year 2020. The variability in the cost estimates is due largely to cost estimating ground rules and assumptions. For example, Navy officials stated that the $3.7 billion cost estimate reflects an assumed savings of 15 to 20 percent that they believe is achievable since competing contractors' preliminary designs will be relatively mature. Navy and DOD officials we spoke with emphasized that no true sense of cost will be known until after the air vehicle segment preliminary design reviews have been completed and a single contractor has been selected.
If the preliminary designs are less mature than assumed, costs could increase significantly, further exceeding budgeted resources. Source selection schedule is compressed: After the four competing contractors have completed their preliminary air vehicle designs, the Navy plans to conduct a full and open competition before awarding the air vehicle segment contract. The Navy's strategy allows for about 8 months between the time that it issues its request for air vehicle proposals and the time it awards the contract. According to OSD officials, this type of contract award process typically takes approximately 12 months. UCLASS is dependent on development and delivery of other systems: The Navy identifies the delivery of the Common Control System software as a risk and notes that if it is delayed, alternative control system software would be needed to achieve the established deployment timeline. Using alternative software would increase integration costs and extend the testing timeline, resulting in duplicated development, integration, and testing once the common control system software is delivered. The Navy expects this risk to be mitigated over time as individual segments of the control system software are built, delivered, integrated, and tested. UCLASS is also critically dependent on the development and fielding of the Joint Precision Approach and Landing System (JPALS), which is a global positioning system-based aircraft landing system that guides the aircraft to make a safe landing on the aircraft carrier deck. However, in a March 2013 report, we found that the JPALS program has experienced significant schedule delays. Additional JPALS delays would likely affect the Navy's UCLASS schedule, in which case the Navy may need to identify an alternative landing system for UCLASS, thus increasing the cost and delaying delivery of the capability. The Navy recognizes this risk. 
The program office holds weekly integrated master schedule reviews with the JPALS program and plans to mitigate risk through JPALS testing, initial deployments, and continued communication with the JPALS program and other Navy offices. UCLASS system integration will be challenging: The Navy plans to act as the lead systems integrator for all three segments through the development and fielding of the initial UCLASS system. The Navy will have three separate but interrelated segments to manage, the timing and alignment of which are crucial to success of the overall system. The system is reliant on 22 existing government systems, such as JPALS. The Navy recognizes that there is risk associated with its role as the lead systems integrator, as it does not routinely act in this capacity. Therefore, the Navy plans to manage this risk through interaction with industry and regular system level reviews. According to program officials, this integration effort will require the number of full time equivalent staff in the program office to double from its current level of 150 staff to around 300 staff. While the Navy has not yet established a business case or acquisition program baseline, the UCLASS strategy reflects aspects of a knowledge-based approach. Some of these aspects are discussed in more detail below: Leveraging significant knowledge gained from prior technology development efforts: The Navy is planning to maximize the use of technologies for carrier-based unmanned aircraft systems operations that have been developed under other efforts like the UCAS-D program, which recently demonstrated the feasibility of launching and landing an unmanned aircraft on an aircraft carrier. Navy officials note that they plan to leverage navigation and control technologies, among other things, from the demonstration program.
By effectively leveraging these types of previous investments, along with other existing systems and technologies, the Navy could reduce cost and schedule for the UCLASS system and promote affordability. Incorporating an open systems design approach: We reported in July 2013 that the Navy is planning to use an open systems approach for the UCLASS system. The Navy has identified key system interfaces and, according to program officials, plans to require contractors to comply with particular open system standards, which it believes will reduce acquisition costs and simplify integration. The Navy also plans to incorporate an open systems architecture developed by OSD for the UCLASS system control segment. This architecture implements a common framework, user interfaces, software applications, and services, and is designed to be common across unmanned aircraft systems. DOD estimates that the open architecture will reduce costs and allow for rapid integration of payloads. Matching requirements to available resources: In 2012, the Joint Requirements Oversight Council issued a memorandum that required the Navy to reduce its UCLASS requirements because at that time they were deemed unaffordable. The Joint Requirements Oversight Council specifically noted that the Navy's requirements should focus on achieving an affordable, adaptable platform that supports a wide range of missions within 3 to 6 years. As a result, the Navy scaled down the UCLASS requirements and updated its analysis of alternatives to include requirements that are more affordable and feasible. Our prior work has found that matching requirements with resources before beginning a system development program increases the likelihood that the program will meet cost and schedule objectives. Holding competition for preliminary designs: In fiscal year 2013, the Navy plans to award four firm fixed-price contracts to competing contractors to develop and deliver preliminary air vehicle designs. 
The Navy then plans to review those preliminary designs, conduct a full and open competition, and award a single air vehicle segment contract. The Navy believes that this competition will drive efficiencies and ultimately result in cost savings across the system's life cycle. This strategy reflects recent DOD initiatives that emphasize the importance of competition, which, as we have noted in the past, can help reduce program costs. The Navy plans to manage UCLASS as a technology development program, although its strategy encompasses activities commensurate with system development and early production. The Navy believes the strategy provides considerable latitude to manage UCLASS development and to demonstrate significant knowledge before the Milestone B decision. Indeed, we have often reported that programs tend to move forward with Milestone B and system development before they have demonstrated enough knowledge. But the Navy's plan to develop, manufacture, and field operational UCLASS systems on up to four aircraft carriers before holding a Milestone B decision would defer the decision and mechanisms that would otherwise enable oversight of these very program activities until after they are complete. Without a program baseline and regular reporting on progress, it will be difficult for Congress to hold the Navy accountable for achieving UCLASS cost, schedule, and performance goals. As we have noted, these kinds of risks are present in the program and warrant such oversight. Looking ahead to fiscal year 2020, when the UCLASS system is already being delivered, Congress may have few options other than to continue authorizing funding for UCLASS manufacturing and fielding. If the UCLASS program can be executed according to the Navy's strategy, it would be consistent with the normal DOD acquisition process that applies to most weapon system programs, with the exception of the deferral of the Milestone B review.
In fact, the timing of the Milestone B review notwithstanding, the actual program activities planned are consistent with a knowledge-based acquisition approach. For example, the Navy is leveraging knowledge gained from prior technology development programs, incorporating an open systems design, matching resources with requirements, and utilizing competition. Given the competitive preliminary design process planned and subsequent competitive contract award, it seems reasonable that a Milestone B decision could be held following the competition and before the beginning of system development, providing a solid oversight framework with little or no change to the strategy's schedule. To enhance program oversight and accountability given that the Navy does not plan to modify its acquisition strategy and hold a Milestone B decision review for the UCLASS system following the system level preliminary design review in fiscal year 2015, Congress should consider directing the Navy to hold a Milestone B review for the system after the system level preliminary design review is complete. If the Navy does not comply, Congress should consider limiting the amount of funding available for the UCLASS system until the Navy provides the basic elements of an acquisition program baseline, such as development and production cost estimates, unit costs, quantities, schedules, annual funding profiles, and key performance parameters needed for such a large investment. The Navy should also be required to periodically report the program's status against the baseline. In order to provide for increased congressional oversight and program accountability, we recommend that the Secretary of Defense direct the Secretary of the Navy to hold a Milestone B decision review for the UCLASS system following the system level preliminary design review-- which is currently scheduled in fiscal year 2015. The Navy provided us with written comments on a draft of this report. 
The Navy's comments are reprinted in appendix II. The Navy also provided technical comments, which were incorporated as appropriate. The Navy did not concur with our recommendation to hold a Milestone B decision review for the UCLASS system following its planned system level preliminary design review in 2015. The Navy stated that the Under Secretary of Defense for Acquisition, Technology, and Logistics approved its UCLASS acquisition strategy in 2013 and certified that the strategy was compliant with the Weapon Systems Acquisition Reform Act of 2009, the amendments made to that Act, and DOD policy. The Navy pointed out that DOD's policy defines the technology development phase as an "iterative process designed to assess the viability of technologies while simultaneously refining user requirements." The Navy went on to state that the UCLASS user requirements and Concept of Operations will be refined during the early operational capability fleet exercises currently scheduled to begin in fiscal year 2020 and that, at that time, the Navy plans to request approval to hold a Milestone B review to continue development of the UCLASS capability. While the Navy's UCLASS acquisition strategy may be compliant with laws and DOD policy, the development, production, and fielding of an operational system before holding a Milestone B review will limit congressional oversight of a significant investment in weapon system development. An estimated development cost of $3.7 billion makes this UCLASS investment larger than the majority of DOD's current major weapon system development programs. We agree that the technology development phase of an acquisition program is intended to assess the viability of technologies while refining requirements. 
However, the system development and early production activities included in the Navy's UCLASS acquisition strategy go well beyond technology development and requirements refinement, and thus warrant oversight commensurate with a major weapon system development program. Thus, we continue to believe that our recommendation is valid and are making two matters for congressional consideration to ensure Congress has information available to oversee the UCLASS system and to hold the Navy accountable for achieving UCLASS cost, schedule, and performance goals. We are sending copies of this report to the Secretary of Defense, the Secretary of the Navy, and interested congressional committees. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The National Defense Authorization Act for Fiscal Year 2012 mandated that GAO evaluate the Unmanned Carrier-Launched Airborne Surveillance and Strike (UCLASS) system acquisition strategy. This report (1) assesses the Navy's UCLASS acquisition strategy, (2) identifies key areas of risk facing the UCLASS system, and (3) notes areas where the Navy's strategy contains good practices. In order to assess the Navy's UCLASS acquisition strategy, we collected, reviewed, and compared the UCLASS acquisition strategy with best practice standards for using knowledge to support key program investment decisions. These standards are based on GAO's extensive body of work in this area. Additionally we compared the Navy's strategy against DOD acquisition policy. 
In order to identify any key areas of risk facing the UCLASS system and note areas where the Navy's strategy contains good practices, we collected and reviewed additional UCLASS documentation, such as the analysis of alternatives, capabilities development document, and other relevant Navy management documents. We discussed the Navy's UCLASS acquisition strategy with officials from the UCLASS system program office, the Naval Air Systems Command, the Chief of Naval Operations, and organizations within the Office of the Secretary of Defense (OSD) including the Director of OSD Cost Assessment and Program Evaluation, the Deputy Assistant Secretary of Defense for Systems Engineering, and the Under Secretary of Defense for Acquisition, Technology, and Logistics. We conducted this performance audit from July 2013 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our finding based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Michael J. Sullivan, (202) 512-4841 or [email protected]. In addition to the contact named above, key contributors to this report were Travis Masters, Assistant Director; Laura Greifner; Julie Hadley; Kristine Hassinger; Laura Jezewski; Matt Lea; John Pendleton; Dr. Timothy M. Persons; and Roxanna Sun.
The Navy estimates that it will need $3.7 billion from fiscal year 2014 through fiscal year 2020 to develop and field an initial UCLASS system. The National Defense Authorization Act for Fiscal Year 2012 mandated that GAO evaluate the UCLASS system acquisition strategy. This report (1) assesses the UCLASS acquisition strategy, (2) identifies key areas of risk facing the system, and (3) notes areas where the Navy's strategy contains good practices. To do this work, GAO reviewed the Navy's acquisition strategy and compared it to DOD's acquisition policy, among other criteria; and reviewed Navy acquisition documents and spoke with Navy and Office of the Secretary of Defense officials. In fiscal year 2014, the Navy plans to commit to investing an estimated $3.7 billion to develop, build, and field from 6 to 24 aircraft as an initial increment of Unmanned Carrier-Launched Airborne Surveillance and Strike (UCLASS) capability. However, it is not planning to hold a Milestone B review--a key decision that formally initiates a system development program and triggers key oversight mechanisms--until after the initial UCLASS capability has been developed and fielded in fiscal year 2020. The Navy views UCLASS as a technology development program, although it encompasses activities commensurate with system development, including system integration and demonstration. Because the initial UCLASS system is to be developed, produced, and fielded before a Milestone B decision, Congress's ability to oversee the program and hold it accountable for meeting cost, schedule, and performance goals will likely be limited. Specifically, the program will operate outside the basic oversight framework provided by mechanisms like a formal cost and schedule baseline, statutory unit cost tracking, and regular reports to Congress on cost, schedule, and performance progress. 
The Navy believes its approach effectively utilizes the flexibility in the Department of Defense's (DOD) acquisition policy to gain knowledge needed to ensure a successful UCLASS system development program starting in fiscal year 2020. Yet the Navy expects to review preliminary designs, conduct a full and open competition, and award a contract for UCLASS development in fiscal year 2014, a point at which DOD policy and best practices indicate that a program would be expected to hold a Milestone B review to initiate a system development program. Apart from deferring Milestone B, the Navy's plan would be consistent with the knowledge-based acquisition process reflected in DOD policy. UCLASS faces several programmatic risks going forward. First, the UCLASS cost estimate of $3.7 billion exceeds the level of funding that the Navy expects to budget for the system through fiscal year 2020. Second, the Navy has scheduled 8 months between the time it issues its request for air vehicle design proposals and the time it awards the air vehicle contract, a process that DOD officials note typically takes 12 months to complete. Third, the UCLASS system is heavily reliant on the successful development and delivery of other systems and software, which creates additional schedule risk. Fourth, the Navy will be challenged to effectively manage and act as the lead integrator for three separate but interrelated segments--air vehicle, carrier, and control system--and 22 other government systems, such as the aircraft landing system, the timing and alignment of which are crucial to achieving the desired UCLASS capability. While the Navy recognizes many of these risks and has mitigation plans in place, they could lead to cost increases and schedule delays if not effectively addressed. The Navy's UCLASS acquisition strategy includes some good acquisition practices that reflect aspects of a knowledge-based approach. 
For example, the Navy is leveraging significant knowledge gained from prior technology development efforts, incorporating an open systems design approach, working to match the system's requirements with available resources, and reviewing preliminary designs for the air vehicle before conducting a competition to select a single contractor to develop and deliver the air vehicle segment. Congress should consider directing the Navy to hold a Milestone B review for the UCLASS system after the system level preliminary design review is complete. If the Navy does not comply, Congress should consider limiting the amount of funding available for the UCLASS system until an acquisition program baseline is provided. GAO included these matters for consideration because the Navy does not plan to make changes as a result of GAO's recommendation to hold a Milestone B review following the system level preliminary design review--which is currently scheduled in fiscal year 2015. The Navy did not concur with the recommendation, and believes that its approved strategy is compliant with acquisition regulations and laws. GAO continues to believe that its recommendation is valid as discussed in this report.
FMD is a highly contagious animal disease. It affects cloven-hoofed animals such as cattle, sheep, goats, and pigs, and has occurred in most countries of the world at some point during the past century. It has 7 types and over 80 subtypes. Immunity to, or vaccination for, one type of the virus does not protect animals against infection from the other types. FMD-infected animals usually develop blister-like lesions in the mouth, on the tongue and lips, on the teats, or between the hooves. They salivate excessively or become lame. Other symptoms include fever, reduced feed consumption, and miscarriages. Cattle and pigs, which are very sensitive to the virus, show disease symptoms after a short incubation period of 3 to 5 days. The incubation period in sheep is considerably longer, about 10 to 14 days, and the clinical signs of the disease are usually mild and may be masked by other diseases, thereby allowing FMD to go unnoticed. The mortality rate for young animals infected with FMD varies and depends on the species and strain of the virus; in contrast, adult animals usually recover once the disease has run its course. However, because the disease leaves them severely debilitated, meat-producing animals do not normally regain their lost weight for many months, and dairy cows seldom produce milk at their former rate. Therefore, the disease can cause severe losses in the production of meat and milk. The FMD virus is easily transmitted and spreads rapidly. Before and during the appearance of clinical signs, infected animals release the virus into the environment through respiration, milk, semen, blood, saliva, and feces. The virus may become airborne and spread quickly if pigs become infected because pigs prolifically produce and excrete large amounts of the virus into the air. Animals, people, or materials that are exposed to the virus can also spread FMD by bringing it into contact with susceptible animals. 
For example, the virus can spread when susceptible animals come in contact with contaminated animals; animal products, such as meat, milk, hides, skins, and manure; transport vehicles and equipment; clothes or shoes worn by people; and hay, feedstuffs, or veterinary biologics. FMD virus is the most infectious animal disease-causing virus. It has been determined that for certain strains, the dose required to infect cattle or sheep through inhalation is about 10 organisms (10^1 TCID50). Infected pigs produce immense amounts of airborne virus. An infected pig exhales 400 million organisms per day (10^8.6 TCID50). The sensitivity of cattle to infection and the high levels of airborne virus produced by infected pigs illustrate that the airborne spread of infection is another important factor in FMD outbreaks. FMD occurs throughout much of the world, and although some countries have been free of FMD for some time, its wide host range and rapid spread represent cause for international concern. After World War II, the disease was widely distributed across the globe. In 1996, endemic areas included Asia, Africa, and parts of South America. In North America, the last outbreaks of FMD for the United States, Canada, and Mexico occurred in 1929, 1952, and 1953, respectively. North America, Australia, and Japan have been free of FMD for many years. New Zealand has never had a case of FMD. Most European countries have been recognized as disease free, and countries belonging to the European Union have stopped FMD vaccination. Plum Island is a federally owned 840-acre island off the northeastern tip of Long Island, New York. Scientists working at the facility are responsible for protecting U.S. livestock against foreign animal diseases that could be accidentally or deliberately introduced into the United States. Plum Island's research and diagnostic activities stem from its mission to protect U.S.
animal industries and exports from accidental or deliberate introduction of foreign animal diseases. Plum Island's scientists identify the pathogens that cause foreign animal diseases and work to develop vaccines to protect U.S. livestock. The primary research and diagnostic focus at Plum Island is foreign or exotic diseases that could affect livestock, including cattle, pigs, and sheep. In addition to FMD and classical swine fever, other types of livestock diseases that have been studied at Plum Island include African swine fever, rinderpest, and various pox viruses, such as sheep and goat pox. Some of the pathogens maintained at Plum Island are highly contagious; therefore, research on these pathogens is conducted in a biocontainment area that has special safety features designed to contain them. If accidentally released, these pathogens could cause catastrophic economic losses in the agricultural sector. The biocontainment area includes 40 rooms for livestock and is the only place in the United States that is equipped to permit the study of certain contagious foreign animal diseases in large animals. USDA uses this biocontainment area for basic research, for diagnostic work, and for the clinical training of veterinarians in the recognition of foreign animal diseases. DHS now shares bench space with USDA in the biocontainment area for its applied research. The North American Foot-and-Mouth Disease Vaccine Bank is also located on Plum Island. USDA was responsible for Plum Island until June 1, 2003, when provisions of the Homeland Security Act of 2002 were implemented that transferred Plum Island, including all its assets and liabilities, to DHS. This action shifted overall responsibility for Plum Island to DHS, including all the costs associated with the facility's maintenance, operations, and security. 
The Act specified that USDA would continue to have access to Plum Island to conduct diagnostic and research work on foreign animal diseases, and it authorized the President to transfer funds from USDA to DHS to operate Plum Island. Plum Island is now operated as part of a broader joint strategy developed by DHS and USDA to protect against the intentional or accidental introduction of foreign animal diseases. Under the direction of DHS's Science and Technology Directorate, the strategy for protecting livestock also includes work at DHS's National Center for Food Protection and Defense and at its National Center for Foreign Animal and Zoonotic Disease Defense, as well as at other centers within the DHS homeland security biodefense complex. These include the National Biodefense Analysis and Countermeasures Center and the Lawrence Livermore National Laboratory. The strategy calls for building on the strengths of each agency's assets to develop comprehensive preparedness and response capabilities. Homeland Security Presidential Directive 9 tasks the Secretary of Agriculture and the Secretary of Homeland Security to develop a plan to provide safe, secure, and state-of-the-art agriculture biocontainment laboratories for the research and development of diagnostic capabilities for foreign animal and zoonotic diseases. To partially meet these obligations, DHS has asked the Congress to appropriate funds to construct NBAF, a new facility. This facility would house high-containment laboratories able to handle the pathogens currently under investigation at PIADC, as well as other pathogens of interest. DHS selected five potential sites for NBAF in July 2007 and must prepare an environmental impact statement (EIS) for each site. According to DHS, although not included in the competitive selection process, the DHS- owned PIADC will now be considered as a potential NBAF site, and DHS will also prepare an EIS for Plum Island. (See table 1.) 
DHS has asked for public comment on the selection process. Following completion of the environmental impact statements and public hearings, DHS expects to choose a site by October 2008 and to open NBAF in 2014. According to DHS officials, the final construction cost will depend on the site's location and may exceed the currently projected $451 million. Additional expenses, such as equipping the new facility and relocating existing personnel and programs, may reach $100 million. DHS has not yet determined what action to take with respect to PIADC when construction of NBAF has been completed. We found that DHS has neither conducted nor commissioned any study to determine whether FMD work can be done safely on the U.S. mainland. Instead, DHS relied on a study that USDA commissioned and a contractor conducted in May 2002 that examined a different question: whether it is technically feasible to conduct exotic disease research and diagnostics, including FMD and rinderpest, on the U.S. mainland with adequate biosafety and biosecurity to protect U.S. agriculture. This approach fails to recognize the distinction between what is technically feasible and what can be expected in practice, given the potential for human error. DHS told us that this study has allowed it to conclude that it is safe to conduct FMD work on the U.S. mainland. In addition to a number of other methodological problems with the study, we found that it was selective in what it considered in order to reach its findings. In particular, the study (1) did not assess the history of releases of FMD virus or other dangerous pathogens, (2) did not address in detail the issues related to large animal work in BSL-3 Ag facilities, and (3) was inaccurate in comparing other countries' FMD work experience with that of the United States. A comprehensive analysis to determine if FMD work could be conducted safely on the U.S. mainland would have considered these points, at a minimum.
DHS did not identify or remedy these deficiencies before using the USDA study to support its conclusions. Consequently, we believe DHS does not have evidence to conclude that FMD work can be done safely on the U.S. mainland. We found no evidence that the study examined data from past releases of FMD--particularly the release of FMD on Plum Island in 1978--or the history of internal releases at PIADC. The study did not assess the general history of accidents within biocontainment laboratories, and it did not consider the lessons that can be learned from a survey of the causes of such accidents. Such a survey would show that technology and operating procedures alone cannot ensure against a release, since human error can never be completely eliminated and since a lack of commitment to the proper maintenance of biocontainment facilities and their associated technology--as the Pirbright facility showed--can cause releases. The study panel members we interviewed said that no data on past accidents with or releases of either FMD or other pathogens was systematically presented or discussed. Rather, the panel members recalled that they relied on their own knowledge of and experience with the history of releases in a general discussion. The release of FMD virus from facilities is very rare. In fact, the incidence of the release of any dangerous pathogen from modern containment facilities is quite low. The vast majority of the time, such facilities operate safely. Some releases have occurred, however. Table 2 lists known and attributed releases of FMD virus from laboratories worldwide, including those that produce vaccines. A particular deficiency in the 2002 USDA study was the omission of any explicit analysis of the release of FMD virus from Plum Island itself in 1978. In September of that year, FMD virus was found to have infected clean animals being held outside the laboratory compound in the quarantined animal supply area of PIADC.
The exact route by which the virus escaped from containment and subsequently infected the animal supply was never definitely ascertained. An internal investigation concluded that the most probable routes of escape of the virus from containment were (1) faulty air balance of the incinerator area, (2) leakage through inadequately maintained air filter and vent systems, and (3) seepage of water under or through a construction barrier near the incinerator area. Animal care workers then most likely carried the disease back to the animal supply area on the island, where it infected clean animals being held for future work. (See table 3.) An analysis of the deficiencies underlying these probable routes of escape, noted during the investigation, shows that all were related to human error and that none were related to insufficient containment technology. Any one of these deficiencies could happen in a modern facility, since they were not a function of the technology or its sophistication, procedures or their completeness, or even, primarily, the age of the facility. The deficiencies were errors in human judgment or execution and, as such, could occur today as easily as they did in 1978. In addition, a number of incidents at PIADC have resulted in internal releases such that animals within the laboratory compound inadvertently became infected, although no FMD virus was released outside the facility. These incidents show that technology sometimes fails, facilities age, and humans make mistakes. Table 4 lists known internal releases of FMD virus at PIADC since 1971. These incidents involved human error, lack of proper maintenance, equipment failure, and deviation from standard operating procedures. Many were not a function of the age of the facility or the lack of technology and could happen in any facility today.
While these incidents did not directly result in any external release, they could have been useful in the 2002 study in illustrating the variety of ways in which internal controls--especially in large animal biocontainment facilities--can be compromised. Given the rarity of the release of FMD virus from laboratories, and how relevant its release is to the question of moving FMD work off its present island location, we believe that the 2002 study was remiss in not more explicitly considering this matter. In fact, members of the panel we spoke with could recall little, if any, discussion of incidents of release at Plum Island. Beyond the history of incidents at Plum Island, we found no evidence that the study considered the history of accidents in or releases from biocontainment facilities generally. Had the study considered this history, it would have shown that no facility for handling dangerous pathogens can ever be completely safe and that no technology can be totally relied on to ensure safety. The study found that "today's technology is adequate to contain any biosafety risks at any site." While we agree that technology-- biocontainment facilities, filtration technologies, and the like--has come a long way and is a critical component of biosafety, we believe that it is inadequate by itself in containing biosafety risks. A comprehensive biosafety program involves a combination of biocontainment technology, proper procedures, and properly trained people. The study also concurred that "biosafety is only as effective as the individual who practices it." Even with a proper biosafety program, human error can never be completely eliminated. Many experts told us that the human component accounts for the majority of accidents in high-containment laboratories. This risk persists, even in the most modern facilities and with the latest technology. 
The 2002 study, in fact, acknowledged this, although it did not elaborate on the critical role that people play in keeping biocontainment laboratories safe when it stated that "biosafety is only as effective as the individual who practices it." The study's summary conclusion that "biocontainment technology allows safe research" is, therefore, disingenuous. Finally, as we have reported previously, the maintenance of any biocontainment facility or technology plays a critical role in biosafety. For example, the lack of proper maintenance was one of the probable routes of escape in the 1978 release at Plum Island. High-containment laboratories are highly sophisticated facilities that require specialized expertise to design, construct, operate, and maintain. Because they are intended to contain dangerous microorganisms, usually in liquid or aerosol form, even minor structural defects--such as cracks in the wall, leaky pipes, or improper sealing around doors--can often have severe consequences. For example, a leaking drainage pipe was determined to be the likely cause of the FMD outbreak at Pirbright in 2007. According to the experts we talked with, failure to budget for and conduct regular inspections and maintenance of biocontainment facilities is a risk to which even the most modern facilities are susceptible. All the experts we talked with, including the panel members who contributed to the 2002 study, emphasized the importance of effective maintenance and the need to protect maintenance budgets from being used for other purposes. One official told us, for example, that as his containment facility ages, he is spending more and more of his operating budget on maintenance and that, in fact, he must offset the rise in maintenance costs with funds from other categories within his overall budget.
The 2002 study did not address in detail the issues of containment related to large animals like cattle and pigs, which present problems very different from those of laboratory animals like rats, mice, and guinea pigs. It did not address the unique risks associated with the special containment spaces required for large animals or the impact of highly concentrated virus loads on such things as the air filtration systems. Large animals cannot be kept in containers. They must be allowed sufficient space to move around in. Handling large animals within confined spaces--a full-grown cow can weigh up to 1,430 pounds--can present special dangers for the scientists as well as the animal handlers. Moving carcasses from contained areas to necropsy or incineration poses additional risks. For example, one of the internal releases of FMD virus at PIADC happened in transporting large animal carcasses from contained rooms through to incineration. Although it could not have been known to the study group in 2002, transferring FMD work to NBAF is to be accompanied by an increase in both scope and complexity over the current activities at PIADC. These increases in scope and complexity would mean an increase in the risk associated with work at the new facility. For example, the proposed BSL-3 Ag space at the new NBAF is projected to be almost twice the size of the space currently at PIADC and is to accommodate many more large animals. USDA's Agricultural Research Service animal holding area requirements at PIADC specify space for 90 cattle, 154 swine, or 176 sheep (or combinations thereof). Translational studies will involve clinical trials with aerosolized FMD virus challenging groups of 30 to 45 animals and lasting 3 to 6 months. By contrast, PIADC can process only about 16 large animals today. Moreover, unique risks are associated with BSL-3 Ag facilities, where the facility itself is considered the primary containment area.
In a standard BSL-3 laboratory, in contrast, work is done within a biological safety cabinet, which provides the primary level of containment, eliminating direct contact between the human operator and infected material. The outer parts of the facility walls thus provide a secondary barrier. Because large animals cannot be handled within a biological safety cabinet, they are free to move around in a BSL-3 Ag laboratory, where the laboratory walls provide the primary containment. An important difference between a standard BSL-3 laboratory, such as those used with human pathogens, and a BSL-3 Ag laboratory therefore is that in the latter there is extensive direct contact between the human operator and the infected animal and, consequently, the virus. Because the virus can be carried in a person's lungs, nostrils, or other body parts, the human becomes a potential avenue by which the virus can escape the facility. Special biosafety procedures are needed--for example, a full shower upon exiting containment, accompanied by expectorating to clear the throat and blowing through the nose to clear the nasal passages. Additionally, a 5-to-7-day quarantine period is usually imposed on any person who has been within containment where FMD virus is present, a tacit acknowledgment that humans can carry the disease out with them even after these additional procedures. Although the study mentioned these matters, it gave no indication that these unique risks associated with working in large animal biocontainment facilities informed the study's eventual findings. We also found that the study did not consider other safety issues specific to FMD. For example, the study did not look at the likely loads that air filtration systems have to deal with, especially in the case of pigs infected with FMD virus--which, through normal expiration, excrete very large amounts of virus-laden aerosols. 
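The scale of that aerosol challenge relative to nominal filter performance can be illustrated with rough numbers. The sketch below is a hypothetical back-of-envelope calculation, not part of any cited study: it assumes the figures quoted earlier in this statement (an infected pig excretes about 10^8.6 TCID50 per day, and roughly 10^1 TCID50 can infect cattle or sheep by inhalation) plus the nominal 99.97 percent single-pass capture efficiency of a HEPA filter, and it ignores dilution, viral decay, and air-handling dynamics.

```python
# Hypothetical back-of-envelope sketch of HEPA filtration margins for
# aerosolized FMD virus. All figures are assumptions taken from the
# surrounding text, not measured values for any particular facility.

daily_output = 10 ** 8.6        # TCID50 excreted per infected pig per day (~4e8)
infective_dose = 10 ** 1        # TCID50 needed to infect cattle/sheep by inhalation
penetration = 1 - 0.9997        # fraction passing a single HEPA filter (0.03%)

past_one_filter = daily_output * penetration        # TCID50 past one filter
past_two_filters = daily_output * penetration ** 2  # two filters in series

print(f"past one filter:  ~{past_one_filter:,.0f} TCID50 "
      f"(~{past_one_filter / infective_dose:,.0f} infective doses)")
print(f"past two filters: ~{past_two_filters:.0f} TCID50 "
      f"(~{past_two_filters / infective_dose:.1f} infective doses)")
```

Even on these idealized assumptions, the virus load reaching a single filter stage per pig per day is equivalent to many infective doses, while a second stage in series cuts the pass-through by more than three further orders of magnitude, which is why series filtration, prefiltration, and rigorous filter maintenance all matter.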
Properly fitted and maintained high-efficiency particulate air (HEPA) filters are a key factor in all modern biocontainment facilities and have a record of being highly effective in keeping aerosolized pathogens, including viruses, contained. Nevertheless, they do not represent an absolute barrier. The typical standard for such filters is that they must operate with an efficiency of at least 99.97 percent. Often the highest-level containment laboratories use two HEPA filters in series, in addition to prefiltration systems, to gain increased efficiency. However, we found no indication that the study examined specific filtration issues with the FMD virus or that it questioned the efficiency of such systems specifically in relation to a high-volume challenge of virus, a concern that, while remote, should not have been dismissed, given the very low dose of FMD virus required for animals to become infected. The study cited the experience of three countries around the world in working with FMD--Australia, Canada, and the United Kingdom. While the study cited Australia as a foreign precedent, it noted that Australia has not conducted any FMD work on the mainland. In fact, Australia--by law--does not allow any FMD work on the mainland. In this respect, it is even more restrictive than the United States. Australia maintains a ban on live virus FMD work at all its laboratories, whether on mainland, island, or peninsula, including the laboratory at Geelong--considered by many to be the premier laboratory in the world in terms of state-of-the-art animal containment technology. Australia mitigates the risk FMD poses to its livestock by outsourcing its FMD work to other countries. The Canadian laboratory at Winnipeg was not in operation at the time of the 2002 study and is not appropriately compared to the U.S. situation. Canada has decided to conduct FMD work on the mainland.
However, it is in a downtown location where there is little likelihood that susceptible animals will be in the immediate neighborhood. In addition, its scope of work for FMD is smaller than the present FMD work at the PIADC facility or the proposed facility. The proposed U.S. sites are potentially more likely to pose a risk, given their closer proximity to susceptible animal populations. The 2002 study used the U.K. Pirbright facility as an example of a precedent for allowing FMD work on the mainland. The study participants could not have known in 2002, however, that an accidental release of FMD virus at the Pirbright facility in 2007 led directly to eight separate outbreaks of FMD on farms surrounding the Pirbright laboratory. This fact highlights the risks of release from a laboratory that is in close proximity to susceptible animals and provides the best evidence in favor of an island location. Finally, the study did not consider the German and Danish situations. For example, all FMD work with large animals in Germany is restricted to Riems, an island just off the northeastern coast of Germany in the Baltic Sea. FMD work in Germany was originally restricted to the island in the 1910s. During the post-World War II period, when Riems was controlled by East Germany, West Germany maintained a separate mainland facility for its FMD research, but after reunification, Germany again decided to restrict all FMD research to Riems and disestablished the mainland facility. Construction is currently under way to expand the facility on the island at Riems. Similarly, Denmark restricts all FMD work to the National Veterinary Institute Department of Virology, on the island of Lindholm. The Danish government has recently made a further commitment to Lindholm and has rebuilt a new BSL-3 Ag laboratory exclusively for FMD work on the island.
While location confers no advantage in preventing a release, location can help prevent the spread of FMD virus and a resulting disease outbreak, if there is a release. An island location can help prevent the spread of FMD virus along terrestrial routes, such as by vehicles splashed with contaminated mud or other material. An examination of the empirical evidence of past FMD releases from research facilities shows that an island location can help keep a release from becoming a more general outbreak. Another benefit of an island location is that it provides a permanent geographical barrier that may not be impregnable but that can more easily allow the Office International des Epizooties (OIE) to declare the rest of the U.S. mainland disease-free from FMD if there happened to be a release on the island. Experts we spoke with--including a number of the expert panel members from the 2002 study--agreed that an island location provides additional protection. They agreed that all other factors being equal, FMD research can be conducted more safely on an island than in a mainland location. A comparison of the releases at Plum Island in 1978 and Pirbright in 2007 provides evidence that an island location can help keep a release from becoming a more general outbreak. In September 1978, FMD virus was found to have been released from containment at PIADC. The exact route of escape was never definitely ascertained, but clean animals held on the island in the animal supply area outside the laboratory compound became infected with FMD. However, no virus was ever found off the island. In fact, when the subsequent investigation by USDA's Animal and Plant Health Inspection Service found no spread of FMD on the mainland of Long Island, OIE--in consideration of PIADC's island location--continued to officially consider the United States as a whole free from FMD. This was a significant declaration that allowed the continued unrestricted export of U.S.
animal products from the mainland. In summarizing the 1978 FMD virus release, the PIADC Safety Investigation Committee identified three main PIADC lines of defense that stood as barriers against the escape of disease agents: (1) the design, construction, and operation of its laboratory buildings; (2) its restrictions on the movement of personnel, materials, supplies, and equipment; and (3) the island location. This internal investigation concluded that although the first two barriers had been breached, probably by human error, the final line of defense--the island location--succeeded in containing the release from becoming a wider outbreak beyond PIADC itself. The 1978 release at Plum Island can be compared to the release at Pirbright in the summer of 2007. Pirbright is located on the mainland of Great Britain in Surrey, a semi-agricultural area just southwest of London. The U.K. Institute for Animal Health and Merial, a commercial vaccine production plant, are collocated there, and both work with FMD virus. The site is surrounded by a number of "hobby farms," on some of which 40 to 50 cattle are bred and raised. In summer 2007, cattle on farms near the Pirbright facility became infected with FMD. Subsequent investigations concluded that the likely source of the release was a leaking drainage pipe at the facility that carried waste from the contained areas to an effluent treatment plant. The virus was then spread onto local farms by the splashing of contaminated mud onto vehicles that had unrestricted access to the contaminated area and could easily drive onto and off the site. The investigations determined that there had been a failure to properly maintain the site's infrastructure. In all, eight separate outbreaks occurred over a 2-month period. A key difference, of course, between the Pirbright incident in 2007 and the incident at Plum Island in 1978 is that virus did not spread off Plum Island.
Similarly, escapes in 1968 in Denmark from the Lindholm facility and in the 1970s in Germany from the Riems facility, when compared to Pirbright in 2007, also demonstrate the benefit of an island location in containing a release. Since 1996, OIE has provided a procedure for officially recognizing the sanitary status of countries with regard to particular animal diseases, including FMD. A country can apply for and be granted disease-free status if it can prove that a disease is not present in the country. Ad hoc groups of international experts examine countries' applications for official recognition of sanitary status. An elected Specialist Commission reviews the recommendations of these groups and either accepts or rejects them. If an outbreak does occur, procedures exist for countries to regain their disease-free status. This offers significant economic benefit, because export bans can exist for countries not considered disease-free. In 2002, GAO reported that an export ban on U.S. livestock products because of an FMD outbreak in the United States, similar to the 2001 outbreak in the United Kingdom, could result in losses of $6 billion to $10 billion a year while the nation eradicated the disease and regained disease-free status. Instead of revoking the U.S. disease-free status in response to the 1978 release at Plum Island, OIE continued to consider the United States as a whole free from FMD. This was because of the facility's island location. This status from OIE allowed the United States to continue exporting animal products from the mainland after the release was identified. However, OIE officials told us that if a similar release were to occur from a facility on the U.S. mainland, OIE would most likely not be able to declare the United States disease-free.
In their view, the island location provides a natural "zoning" ability that, under OIE's rules, more easily allows the country to prove the compartmentalization that is necessary for retaining "disease-free" status. Although humans cannot become infected with FMD through contact with infected animals or through eating products of diseased animals, the disease can nevertheless have severe economic consequences, as recent outbreaks in the United Kingdom have demonstrated. Although estimates vary, experts agree that the economic consequences of an FMD outbreak on the U.S. mainland could be significant, especially for red meat producers whose animals would be at risk for the disease, depending on how and where such an outbreak occurred. According to a study by the U.K. National Audit Office, the direct cost of the 2001 FMD outbreak to the public sector was estimated at over $5.71 billion and the cost to the private sector was estimated at over $9.51 billion. By the time the disease was eradicated, in September 2001, more than six million animals had been slaughtered: over four million for disease control purposes and two million for welfare reasons. Compensation and other payments to farmers were expected to total nearly $2.66 billion. Direct costs of measures to deal with the epidemic, including the purchase of goods and services to eradicate the disease, were expected to amount to nearly $2.47 billion. Other public sector costs were estimated at $0.57 billion. In the private sector, agriculture and the food chain and supporting services incurred net costs of $1.14 billion. Tourism and supporting industries lost revenues eight times that level--$8.56 billion to $10.27 billion, when the movement of people in the countryside was restricted. The Treasury had estimated that the net economic effect of the outbreak was less than 0.2 percent of gross domestic product, equivalent to less than $3.8 billion.
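The U.K. cost figures above are easier to follow when the components are tallied against the stated totals. The short sketch below is just that arithmetic, in billions of U.S. dollars, using the rounded figures quoted in this statement; small discrepancies with the stated totals reflect rounding.

```python
# Tally of the quoted 2001 U.K. FMD outbreak costs, in billions of USD.
public_components = {
    "compensation and other payments to farmers": 2.66,
    "direct eradication measures": 2.47,
    "other public sector costs": 0.57,
}
public_total = sum(public_components.values())  # ~5.70, vs. "over $5.71 billion"

agriculture_net = 1.14                   # agriculture, food chain, supporting services
tourism_low, tourism_high = 8.56, 10.27  # lost tourism revenue range
private_low = agriculture_net + tourism_low    # ~9.70, vs. "over $9.51 billion"
private_high = agriculture_net + tourism_high  # ~11.41

print(f"public sector total:  ~${public_total:.2f}B")
print(f"private sector total: ~${private_low:.2f}B to ~${private_high:.2f}B")
```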
The possibility of the introduction of FMD into the United States is of concern because this country has the largest fed-cattle industry in the world, and it is the world's largest producer of beef, primarily high-quality, grain-fed beef for export and domestic use. Although estimates of the losses vary, experts agree that the economic consequences of an FMD outbreak on the U.S. mainland could mean significant losses, especially for red meat producers, whose animals would be at risk for disease, depending on how and where an outbreak occurred. Current estimates of U.S. livestock inventories are 97 million cattle and calves, 7 million sheep, and 59 million hogs and pigs, all susceptible to an FMD outbreak. The total value of the cash receipts for U.S. livestock in 2007 was $141.4 billion. The total export value of red meat in 2007 was $6.4 billion. These values represent the upper bound of estimated losses. Direct costs to the government would include the costs of disease control and eradication, such as the maintenance of animal movement controls, control areas, and intensified border inspections; the destruction and disposal of infected animals; vaccines; and compensation to producers for the costs of disease containment. However, government compensation programs might not cover 100 percent of producers' costs. As a result, producers would also bear direct costs for disinfection and for the value of any slaughtered animals not covered by government compensation. According to the available studies, the direct costs of controlling and eradicating a U.S. outbreak of FMD could vary significantly, depending on many factors including the extent of the outbreak and the control strategy employed. Indirect costs of an FMD outbreak would include costs affecting consumers, ancillary agricultural industries, and other sectors of the economy.
For example, if large numbers of animals were destroyed as part of a control and eradication effort, then ancillary industries such as meat processing facilities and feed suppliers would be likely to lose revenue. Furthermore, an FMD outbreak could have adverse effects such as unemployment, loss of income (to the extent that government compensation would not fully reimburse producers), and decreased economic activity, which could ripple through other sectors of the economy as well. However, our analyses show that these effects would likely be local or regional and limited in scope. The economic effects of an FMD outbreak would depend on the characteristics of the outbreak and how producers, consumers, and the government responded to it. The scale of the outbreak would depend on the time elapsed before detection and the number of animals exposed, among other factors. Costs to producers of addressing the disease outbreak and taking steps to recover would similarly vary. The responses of consumers in the domestic market would depend on their perceptions of safety, as well as changes in the relative prices of substitutes for the affected meat products, as supply adjusted to the FMD disruption. In overseas markets, consumers' responses would be mediated by the actions their governments would take or not take to restrict imports from the United States. Because an overall estimate of effects depends heavily on the assumptions made about these variables, it is not possible to settle on a single economic assessment of the cost to the United States of an FMD outbreak. We have reviewed literature that considers only a few of the many possible scenarios, in order to illustrate cost components and possible market reactions rather than to predict any particular outcome. DHS believes that modern technology, combined with biosafety practices, can provide for a facility's safe operation on the U.S. mainland.
Most experts we talked with believe that technology has made laboratory operations safer over the years. However, accidents, while rare, still occur because of human or technical errors. Given the non-zero risk of a release from any biocontainment facility, most of the experts we spoke with told us that an island location can provide additional protection. DHS has not conducted any studies to determine whether FMD work can be done safely on the mainland. Instead, in proposing to move FMD virus to the mainland, DHS relied on a 2002 USDA study that addressed a different question. That study does not clearly support the conclusion that FMD work can be done safely on the mainland. An island location can help prevent the spread of FMD virus along terrestrial routes, such as by vehicles splashed with contaminated mud, and may also reduce airborne transmission. Historically, the United States and other countries as well have seen the benefit of an island location, with its combination of remoteness from susceptible species and a permanent water barrier. Although FMD has no human-health implications, recent outbreaks in the United Kingdom have demonstrated its economic consequences. Estimates for the United States vary but would depend on the characteristics of the outbreak and how producers, consumers, and the government responded to it. For further information regarding this statement, please contact Nancy Kingsbury, Ph.D., at (202) 512-2700 or [email protected], or Sushil K. Sharma, Ph.D., Dr.PH, at (202) 512-3460 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. William Carrigg, Jack Melling, Penny Pickett, and Elaine Vaurio made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
DHS is proposing to move foot-and-mouth disease (FMD) research from its current location at the Plum Island Animal Disease Center--located on a federally owned island off the northern tip of Long Island, New York--and potentially onto the U.S. mainland. FMD is the most highly infectious animal disease known. Nearly 100 percent of exposed animals become infected. A single outbreak of FMD on the U.S. mainland could have significant economic consequences. Concerns have been raised about moving FMD research off its island location and onto the U.S. mainland--where it would be in closer proximity to susceptible animal populations--as opposed to building a new facility on the island. GAO was asked to evaluate the evidence DHS used to support its decision that FMD work can be done safely on the U.S. mainland, whether an island location provides any additional protection over and above that provided by modern high-containment laboratories on the mainland, and the economic consequences of an FMD outbreak on the U.S. mainland. In preparing this testimony, GAO interviewed officials from DHS and USDA, talked with experts in FMD and high-containment laboratories worldwide, and reviewed studies on FMD, high-containment laboratories, and the economic consequences of FMD outbreaks. GAO also visited the Plum Island Animal Disease Center and animal biocontainment laboratories in other countries. GAO found that the Department of Homeland Security (DHS) has neither conducted nor commissioned any study to determine whether work on foot-and-mouth disease (FMD) can be done safely on the U.S. mainland. Instead, in deciding that work with FMD can be done safely on the mainland, DHS relied on a 2002 U.S. Department of Agriculture (USDA) study that addressed a different question. The study did not assess the past history of releases of FMD virus or other dangerous pathogens in the United States or elsewhere.
It did not address in detail the issues of containment related to large animal work in BSL-3 Ag facilities. It was inaccurate in comparing other countries' FMD work experience with that of the United States. Therefore, GAO believes DHS does not have evidence to conclude that FMD work can be done safely on the U.S. mainland. While location, in general, confers no advantage in preventing a release, location can help prevent the spread of pathogens and, thus, a resulting disease outbreak if there is a release. Given that there is always some risk of a release from any biocontainment facility, most experts GAO spoke with said that an island location can provide additional protection. An island location can help prevent the spread of FMD virus along terrestrial routes, such as from vehicles splashed with contaminated mud, and may also reduce airborne transmission. Other countries besides the United States have historically seen the benefit of an island location, with its remoteness from susceptible species and permanent water barriers. A recent release from the Pirbright facility--located in a farming community on the mainland of the United Kingdom--highlights the risks of a release from a laboratory that is in close proximity to susceptible animals and provides the best evidence in favor of an island location. FMD has no health implications for humans, but it can have significant economic consequences, as recent outbreaks in the United Kingdom have demonstrated. The economic effects of an FMD outbreak in the United States, however, would depend on the characteristics of the outbreak and how producers, consumers, and the government responded to it. Although estimates vary, experts agree that the economic consequences of an FMD outbreak on the U.S. mainland could be significant, especially for red meat producers, whose animals would be at risk for disease, depending on how and where such an outbreak occurred.
The need to transform the military services has been widely recognized in a number of DOD policy papers, reports, and strategy documents. The national security strategy, the national military strategy, the Secretary of Defense's guidance to the services, the 1997 Quadrennial Defense Review, and the Chairman of the Joint Chiefs of Staff's Joint Vision statements (2010 and 2020) all cite the need to transform U.S. armed forces to maintain military dominance in the new security environment. Over the last several years, the Navy has undergone some reorganization, shifted its science and technology funding, and undertaken a wide range of experiments and innovation activities. A key organization for carrying out the Navy's transformation has been the Navy Warfare Development Command, which was established in June 1998 to develop new operational and warfighting concepts, to plan and coordinate experiments based on new concepts, and to develop doctrine. The Command has been preparing a capstone concept based on network centric warfare that is to serve as a guide for future naval operations. The Command has also planned and coordinated a series of major experiments involving the fleets to evaluate many of the concepts and technologies associated with network centric warfare. Before it established the Command, the Navy did not have an organization dedicated to operational experimentation. The Command's fiscal year 2000 and 2001 budgets are about $45.3 million and $44 million, respectively. In fiscal year 2002, the Command's budget is projected to decline to about $41.7 million. Almost half of each annual budget is allocated to experimentation-related activities. Two other organizations important to transformation are the Naval War College and the Chief of Naval Operations' Strategic Studies Group.
The college conducts war games that test concepts and potential technologies. Its close working relationship with the Navy Warfare Development Command provides an avenue for new concepts to be further evaluated and integrated into experimentation efforts. The Strategic Studies Group, composed of a small group of senior Navy, Marine Corps, and Coast Guard officers, generates and analyzes innovative and revolutionary naval warfighting concepts and reports directly to the Chief of Naval Operations. Recent studies have centered on attacking land targets from the sea, future surface ship deployments, new crewing concepts, and multitiered sensor grids. In 1999 the Navy reorganized its science and technology resources into 12 future naval capabilities to focus more sharply on the capabilities needed over the next 10-15 years. Senior Navy and Marine Corps officials lead integrated product teams that prioritize individual efforts in the capability areas. The Navy's science and technology budget has remained relatively static over the last decade and has decreased as a percentage of its total budget. The Navy currently allocates about 35 percent of its science and technology budget to support its future naval capabilities. The Navy plans further refinements to its science and technology structure, including the possibility of adding or subtracting individual future naval capabilities. Appendix I provides further information on the future naval capabilities. Since March 1997, the Navy has also conducted nine fleet battle experiments. The experiments are assessed to determine which new operational concepts, tactics, and technologies prove workable and what follow-on experimentation to pursue. The Navy Warfare Development Command is also coordinating with other military organizations to jointly lease one or more dual-hulled high-speed ships for a broad range of experiments.
For 18 months starting in September 2001, the Navy will conduct a series of experiments to explore potential uses for such vessels, including amphibious lift, armament configuration, and helicopter operations. Appendix II provides examples of issues explored in the fleet battle experiments. Finally, the Navy conducts a wide range of innovation activities. For example, the Third Fleet has set aside a portion of its command ship, the U.S.S. Coronado, to test innovations related to command, control, communications, computers, and intelligence concepts. Appendix III provides some examples of these innovation activities. The Navy is conducting a variety of transformation activities: it is experimenting with new technologies, it has made some organizational changes, it has introduced the new network centric warfare concept, and it is pursuing a wide range of innovations. However, the Navy has not developed an overarching, long-term strategy that integrates these activities or that clearly defines transformation goals, organizational roles and responsibilities, timetables for implementation, resources to be used to achieve its transformation goals, and ways to measure progress toward those goals. In other words, the Navy does not have a strategic plan and roadmap for its transformation that shows where it wants to go; how it proposes to get there; and how transformation will be managed, funded, implemented, or monitored. The lack of a plan and roadmap has contributed to confusion within the Navy and DOD about what constitutes the Navy's transformation. The adoption of an evolutionary approach to transformation has so far not led the Navy toward careful and full consideration of all the strategic, budgetary, and operational elements of transformation. 
Additionally, the Navy's progress has been adversely affected by insufficient support for new organizations responsible for leading transformation efforts, limited conduct of long-term experiments, and a variety of Navy-wide innovation activities that are not well coordinated and tracked. There is no clear consensus on the precise definition, scope, or direction of Navy transformation. In discussions with Navy and DOD officials and outside defense experts, we found there was some confusion about what constitutes transformation and about the role of the network centric warfare concept, which is the centerpiece of the Navy's transformation efforts. The Navy has not developed a plan that clearly identifies what transformation is and what its goals or components are. For example, although network centric warfare is clearly a fundamental concept for the Navy's future operations, the Navy still has not made it clear how the concept fits in with its many ongoing transformation activities or with its overall transformation efforts, what effects the concept will have on the types and composition of forces, or how the concept's many components will be integrated with each other or with those of the other services. The Navy plans to soon publish a capstone concept document for its future force. The concept document is expected to apply the tenets of network centric operations to the Navy's vision statements and identify some of the capabilities required to implement these tenets. Navy Warfare Development Command officials believe the concept document is critical to the success of the Navy's transformation, and they expect the concept document to be approved by the Chief of Naval Operations in the near future. Good management practices and the advice of defense experts both inside and outside the Navy suggest that a clear strategy is central to the success of transformation efforts. 
DOD and Navy officials and outside defense experts identified a number of benefits that can be obtained from strategic planning. Navy officials at headquarters and several commands stated that establishing an agreed-upon definition of transformation would be vital for explaining what constitutes transformation. Most Navy officials we spoke with believe that a strategic plan and roadmap would bring greater coherence to the Navy's transformation efforts. A strategic plan and roadmap would also provide the Congress with a means to evaluate and make optimal decisions on the Navy's transformation. The need for a strategic plan when attempting major organizational and operational changes, such as those the Navy is undertaking, has also been long recognized in the private sector as a best business practice. We discussed the need for a strategic plan and roadmap with a wide range of DOD and Navy officials and with outside defense experts, many of whom have been directly involved in advising DOD on military transformation. These individuals agreed that such a plan should clearly articulate the Navy's transformation goals and objectives, priorities, specific responsibilities, and linkages with other organizations, as well as the scope of activities and the resources necessary to carry them out. These management tools should also identify the challenges and obstacles that need to be addressed and should include understandable, simple, and reasonable metrics to provide ways to gauge progress, provide information to decisionmakers, and make necessary changes. Some Navy officials expressed caution that such a plan should not dictate a particular force structure but rather provide the elements of the process to guide the transformation efforts. Appendix IV provides additional information on the key factors for successful transformation planning and management. 
The same officials and experts said that further complicating Navy transformation planning efforts is the absence of clearly articulated transformation guidance from the Secretary of Defense and the Chairman of the Joint Chiefs of Staff to the military services. The Secretary and the Chairman have provided only broad guidance on the direction and progress of military transformation and on the types of future capabilities required for transforming the military. The responsibility for clearly identifying priorities and developing an implementation plan for their transformations has been left to the individual services. However, it is widely recognized that the success of future joint operations requires careful joint planning and integration. Various organizations, including the Defense Science Board, have cited the need for the Secretary of Defense to provide clear guidance on transformation. In 1999, the Board called for an explicit strategy, or a master plan; a roadmap; and outcome-related metrics to assess progress. In its annual performance plan, issued pursuant to the Government Performance and Results Act of 1993, DOD identified the transformation of U.S. forces among its performance goals. The act requires federal agencies to clearly define their missions, set goals, link activities and resources to goals, prepare annual performance plans, measure performance, and report on accomplishments. However, we recently reported that two of the transformation's three underlying metrics--procurement spending and defense technology objectives--do not provide a direct link toward reaching that performance goal. Without such metrics, DOD cannot adequately assess its progress toward transforming its forces for the 21st century. The Navy would be expected to provide input to such a DOD effort and should therefore have its own clearly articulated transformation plan.
The Navy has adopted what it calls an evolutionary approach to transformation, meaning that its effort is more about incremental changes in its force posture than in its force structure. The Navy believes that this is an appropriate path to follow since it already is an expeditionary, self- sustaining, and mobile force with worldwide reach. What it needs to do, the Navy asserts, is to improve its expeditionary capabilities by focusing less on the types of ships in its force structure and more on linking them together through data networks--hence the network centric warfare concept. This evolutionary approach, however, has so far not led the Navy toward careful and full consideration of all the strategic, budgetary, and operational elements of transformation. Through its approach, the Navy has also allowed almost a decade to pass with slow progress in a number of key transformation areas. Without the benefit of an overarching strategic plan and roadmap, the Navy has not taken the steps necessary to explore the possibilities of long-term changes to its force structure and operations to adequately address near- and long-term security requirements within existing and projected fiscal parameters. There are at least three reasons why the Navy may need to adopt a more far-reaching and considered approach to its transformation: (1) it may not be able to recapitalize its existing forces at current shipbuilding rates, which might necessitate more fundamental changes in force structure and operations than it currently plans; (2) new operational concepts and technologies needed to operate in littoral areas may be coming into the force too slowly, given the increased importance of littoral operations recognized by the Navy; and (3) there are substantial technological challenges presented by network centric warfare that could take a long time and considerable effort to overcome. 
DOD, in its comments on a draft of this report, stated that the evolutionary approach followed by the Navy for transformation was prudent and allowed the Navy to continuously improve its combat capabilities. It also stated that Navy transformation efforts, such as the Navy's fleet battle experiment program, have not excluded consideration of innovative force structures. DOD attributed the majority of actual and perceived transformation shortfalls to the lack of an overarching strategic plan and roadmap rather than to the approach followed for transformation by the Navy. The Navy has not been building enough ships to maintain the roughly 300-ship force mandated by the 1997 Quadrennial Defense Review. The high costs of supporting the current force, the time needed to acquire new ships, and the prospect of a continued mismatch between fiscal resources and force structure requirements increase the urgency of planning for and carrying out transformation. Although we did not make an independent assessment of the funds needed to maintain a force of 300 ships and its associated inventory of aircraft and supporting infrastructure, the Congressional Budget Office has estimated that the Navy would require roughly $17 billion more each year for fiscal years 2001 through 2005 than it is currently expected to receive to sustain this force level. If current construction rates and funding levels remain the same, the Navy's force could decrease to approximately 260 ships or lower after 2020. Navy officials believe they face even bigger challenges. As part of DOD's July 2000 report on naval vessel force structure requirements, the Navy reported that its force needed to increase to about 360 ships over the next 15 to 20 years to better meet its total operational requirements and the national military strategy.
The recent establishment of an Office of the Deputy Chief of Naval Operations for Warfare Requirements and Programs may help focus the Navy's attention on analyzing the potential for changes that might be needed to address fiscal concerns as well as current and future force structure requirements. In addition, the President of the Naval War College was recently chosen by the Chief of Naval Operations to lead a task force to analyze the force structure implications of operating the Navy on approximately the same budget level it now has. A senior Navy headquarters official agreed that the shortfall in funding and the mismatch between requirements and resources are major drivers for transformation. But the official also acknowledged that the Navy's evolutionary approach to transformation might not address its fiscal problems. The Navy has been slow in acquiring many of the capabilities that it needs to successfully conduct littoral operations. We recently reported on the Navy's limited countermine, antisubmarine, and ship self-defense capabilities and the lack of credible surface fire support capabilities. Although the Navy has had acquisition programs under way to improve its capabilities in each of these areas for many years, we found progress has been slow. We also found that unless current efforts can be accelerated or alternatives developed, it will be another 10 to 20 years before the naval services have the capabilities they say they need to successfully execute littoral warfare operations against a competent enemy. Our ongoing reviews of Navy chemical and biological defense capabilities have found shortcomings in equipment and training for shipboard personnel and naval personnel ashore in high-threat areas. Such deficiencies could also seriously affect the Navy's ability to operate successfully in littoral areas. The Navy faces significant challenges in developing the network centric warfare capability.
Navy officials told us that they have only just begun to define and implement the concept and that making it operational involves significant challenges. Officials in the Navy's operating forces expressed a lack of a clear understanding about what network centric warfare is and how it is expected to change operations and forces. Some elements such as the Cooperative Engagement Capability have recently deployed, while others are in the early stages of research and development and are years away from practical use. Most will rely on interoperability (compatibility with equipment used by the Navy and the other services) for their ultimate success. Yet the Navy does not have an implementation plan to integrate all the different elements. Several Navy and joint officials have indicated that some components require much more comprehensive planning and an integrated roadmap for their development. Others said that the Navy and the other services were not doing enough to ensure interoperability. The Navy has carried out several organizational changes aimed at moving transformation forward. But as with all of its other transformation activities, these changes have not been carried out within the context of an overarching strategy that clearly and authoritatively identifies roles and responsibilities of different bodies and stakeholders. Thus, even though the Navy Warfare Development Command was established primarily to direct the Navy's transformation efforts, the Command has had difficulty building relationships with other Navy organizations and has not yet achieved the priority for resources needed to make it an effective focal point for transformation. Several important activities are underway at the Command. For example, it is pursuing a comprehensive review and reorganization of the Navy's doctrine structure, and it is coordinating all major Navy fleet battle experiments as well as the Navy's participation in joint experiments. 
Its work on the capstone concept document based on network centric warfare--the centerpiece of the Navy's transformation activities--is nearing completion. It has also established a constructive working arrangement with the Naval War College and the Strategic Studies Group. The Command has had less success establishing itself as the Navy's focal point for transformation and has sometimes faced resistance at the fleets and at Navy headquarters while trying to carry out its responsibilities. Atlantic and Pacific Fleet officials said that while they appreciate the intent of the Command's work, fleet personnel sometimes see the Command's experiments as disruptions to their everyday operations and do not fully understand how the experiments can benefit them. They explained that the fleets are focused more on immediate issues affecting operations and are therefore less receptive to activities that might be aimed at the Navy's longer term interests. A number of senior Navy officials said that the Command has had difficulty promoting its concepts to the fleets because some fear that new concepts could threaten support and funding for existing programs. Part of the difficulty of building relationships with other Navy organizations is that the Command is just 3 years old, and its mission is not well known throughout the Navy. During our fleet visits, we found that with the exception of fleet battle experiments, the Command's overall role, responsibilities, and relationships were not fully understood. Several senior Navy officials noted that the Command has not been afforded a high priority for staffing. For example, only 46 of its 60 authorized positions for military personnel were filled as of June 2001. 
The Command's detachments at the Atlantic and Pacific Fleets have several important responsibilities, including providing support for experimentation, innovation activities, and concept and doctrine development and acting as the liaison between joint and fleet organizations and the Command. However, they have only a skeletal number of authorized staff to carry out these responsibilities, and even these positions have not always been fully staffed. An official of the Command's Pacific Fleet detachment said that lack of personnel prevents the detachment's staff from attending key meetings and making visits to Navy organizations throughout the region. Officials at the Command's Atlantic Fleet detachment described similar limits on their involvement with organizations in that area. Additionally, the Command has been unable to assign a permanent representative to the U.S. Joint Forces Command to represent the Navy on joint experimentation issues. The Command has also had some difficulties with funding needed to support its activities. An official in the Command's Pacific Fleet detachment told us the detachment has had to rely on other Navy organizations, such as the Third Fleet, to provide funds for basic support such as office space, telephones, heating, and lighting. Plans for prototyping of ships and other weapon systems will require additional funds over the Command's current funding. Navy Warfare Development Command officials expressed concern that about 75 percent of the Command's research and development budget for fiscal year 2002 will be spent to support its portion of one single experiment--the U.S. Joint Forces Command's Millennium Challenge. To cover its other experimentation requirements, it will need to obtain additional funds from the Navy and other organizations with which the Command cooperates on experimentation projects. Recent organizational changes at Navy headquarters should help overcome some of these difficulties.
The establishment of the Office of the Deputy Chief of Naval Operations for Warfare Requirements and Programs provides a clearer link between headquarters and organizations vital to transformation. This link may help increase the visibility of the Navy Warfare Development Command's efforts and could afford more support for promising new ideas that may not otherwise be embraced by other Navy organizations. The Warfare Requirements and Programs Office was created to separate requirements and resource allocation functions that had previously been handled by a single office. The office's responsibility for balancing warfighting requirements with available resources could also provide a better means for the Navy to assess its resource priorities and make the necessary budget trade-offs between current and future needs. The Navy is also considering establishing "mission capability packages." Rather than focusing on individual platforms (ships, submarines, or aircraft), the packages would examine requirements in terms of all the capabilities needed to perform a specific mission. Officials at Navy headquarters and the Navy Warfare Development Command said these packages could help the Navy focus more on the capabilities it needs to clarify funding priorities. Officials at Navy headquarters and the Navy Warfare Development Command have told us that since the reorganization, the Command has begun to obtain greater acceptance from other Navy organizations, and its ties with headquarters have improved. The Navy is also considering changing the Command's link to the fleet to provide the Command with more visibility and influence. One possibility under consideration is to place the Command under the Commander in Chief of the Atlantic Fleet. While this could increase the Command's visibility and influence with the fleet, some Navy officials said it could also have the consequence of focusing their efforts on more near-term fleet issues over longer term transformation. 
While the Navy has actively conducted experimentation over the last 4 years, it has focused its experiments on near- and mid-term operational and force issues and much less on long-term issues. In spite of the importance of experimentation for transformation, the Navy has not developed a comprehensive strategy that places long-term goals and resources for experiments within the context of its overall transformation objectives and priorities. Experimentation allows the Navy to explore new operational concepts, identify alternative force mixes and capabilities, validate promising technologies and system concepts, and serve as an overall mechanism for transformation. Most importantly, it helps to shape and challenge ideas and thinking about the future. Despite the Navy's increased experimentation effort since 1997, Navy officials at headquarters, fleet, and other organizations believe the Navy needs to expand its experimentation activities to explore major long-term operational and force concepts to provide better information on future requirements and capabilities. A wide range of Navy officials and defense experts stated that the Navy needs to explore new ship design concepts--possibly revolutionary ones--and employ prototypes to experiment with them. Such experimentation is necessary for the Navy to analyze potential force structure and operating options in the face of likely budgets and the opportunities offered by emerging technologies. An example of this type of effort is the Navy's current plan to begin at-sea experimentation with a high-speed ship concept. Resource priorities also affect the Navy's ability to experiment and address long-term issues. The Navy has stated that operating a smaller force in a period of increased overseas operations has limited the number of ships it can assign to experimentation. It has worked around this limitation by conducting its experimentation, such as fleet battle experiments, as part of its major fleet exercises. 
Another resource issue is the limited staff available to support the Navy experimentation program. Since 1997, the Navy has conducted fleet battle experiments at the rate of two each year. This pace drew heavily on the staff and resources of the Navy Warfare Development Command and the fleets and, the Navy believed, did not allow sufficient time to plan and prepare for experiments beforehand and assess the results afterward. In 2001, it changed the schedule to approximately one experiment each year. We learned that many of the Navy's innovation activities are not well coordinated or tracked among different organizations. The Navy has been undertaking a wide range of innovation activities. Some of these activities are directed at specific problems, while others have a broader servicewide focus. Some are aimed at best business practice innovations; others are operational in nature. These activities contribute to the incremental, evolutionary approach the Navy has adopted for transformation, and if sufficiently orchestrated and sustained, they can lead to substantial change. Many Navy officials throughout the organizations we visited believed that the Navy needs to improve the servicewide coordination and tracking of innovation activities. An official at the Pacific Fleet headquarters stated that the Pacific Fleet has attempted to identify and track these innovation activities, both within the Fleet and in other parts of the Navy. However, the official said that it was not possible to determine the extent to which all activities were captured because of the large number of and differences in activities. Several Navy officials from various fleet and headquarters organizations stated that a central Navy clearinghouse for maintaining and disseminating information about ongoing and past activities would benefit, promote, and accelerate other innovation efforts. 
Various Navy officials suggested that the Navy Warfare Development Command would be an appropriate organization to manage and maintain this information. The Navy Warfare Development Command has proposed an effort to provide greater servicewide coordination of innovation and transformation-related activities. According to the proposal, the Navy would develop web-based tools to further enhance coordination efforts. It would also focus on coordinating innovation efforts with the other services and the U.S. Joint Forces Command. However, no decision has yet been reached by the Navy's leadership on who will lead the coordination effort. The complexities and uncertainties that underlie the Navy's transformation require that clear direction and guidance be given to all levels of the organization on what transformation is and how it will be carried out. While the Navy has initiated a number of activities to transform its forces, it has not articulated and promulgated a well-defined transformation program. Current activities have not been conducted within the context of an overall strategic plan and roadmap to provide the direction, goals, priorities, scope, options, and resource requirements necessary to achieve a successful transformation. The importance of such planning to effective and efficient management of federal programs is recognized under the Government Performance and Results Act of 1993. Implementing the Navy's transformation will be complicated and will require careful consideration of near-term needs, as well as fundamental changes in the force structure, concepts, and organizations required to meet future security challenges within likely budgets. Actions need to be planned and orchestrated as part of a broader, well-developed strategy designed to achieve long-term objectives and not simply to satisfy immediate requirements. 
Development of a long-term strategic plan and roadmap would help to maintain the delicate balance between current and future requirements as the Navy transforms. It would also provide the necessary guidance to better focus and direct the Navy's transformation activities and tools to guide and oversee progress toward achievement of goals and objectives. Such a plan, for example, could also address the coordination and monitoring of innovation activities and delineate the authority of the Navy Warfare Development Command in carrying out its mission. Without such a plan, it can be difficult for senior leaders, the Congress, and others to provide the necessary support and make optimal decisions on priorities and the effective use of resources to successfully transform Navy forces. Although the Navy has stated that its transformation efforts are focused on force posture and not necessarily force structure, there is a clear and persistent need for the Navy to explore potential fundamental changes in its force structure and operational concepts that would permit it to carry out its requirements within certain fiscal parameters. The time required to design and build ships further compels urgent action by the Navy. Without an experimentation effort that includes evaluating long-term issues such as new ship designs and operational concepts, the Navy will be less able to make the difficult but important decisions that will be needed regarding the size, shape, and composition of its future fleet. The wide range of innovation activities being conducted throughout the Navy contributes to the Navy's overall transformation efforts. But the lack of adequate Navy-wide coordination and tracking limits the potential benefits these activities could have for all organizations. 
The creation of a Navy-wide clearinghouse would provide a central repository for all organizations--in the Navy and elsewhere in the Department of Defense-- to exchange information and lessons learned on innovation activities. To more clearly determine the Navy's direction and promote better understanding of actions taken to transform its forces for the 21st century, we recommend that the Secretary of Defense direct the Secretary of the Navy to develop a long-term strategic plan and roadmap that clearly articulates priorities, objectives, and milestones; identifies the scope, resource requirements, and responsibilities; and defines the metrics for assessing progress in achieving successful transformation. We also recommend that the Secretary of Defense direct the Secretary of the Navy to (1) adjust the Navy's experimentation program to provide greater exploration of long-term force structure and operational issues and (2) create a clearinghouse for Navy-wide innovation activities to improve coordination and monitoring of such activities. We received written comments from the Department of Defense on a draft of this report, which are included in their entirety as appendix V. The Department agreed with our recommendations but did not elaborate on how it would address them. DOD generally believed that our findings accurately reflect the Navy's transformation process, the current status, and the increased efforts in the Navy toward transformation. DOD agreed with our overall conclusion that the Navy needs to develop a strategic plan and roadmap to manage and execute its transformation efforts. In its comments, DOD stated that the Navy is implementing near-, mid-, and far-term steps to achieve a transformation goal of assured access, which was identified by the Navy's 1999 Maritime Concept as a key operational challenge. 
We agree that these steps are an element in the development of a comprehensive long-term strategic plan and roadmap that we recommend for Navy transformation. However, such a plan and roadmap must also articulate the priorities, objectives, and milestones; identify the scope, resource requirements, and responsibilities; and define the metrics for assessing progress. By including these additional elements, the plan and roadmap would provide the clear direction, focus, and integration necessary for the Navy to carry out a successful transformation. To develop criteria for assessing the Navy's management of its transformation, we identified several key factors important to success in military transformation (see app. IV). We identified these factors from our review of a wide range of DOD and Navy publications and statements, open literature, academic research on the subject of military innovation and transformation, and case studies of past transformation efforts. To assess the reasonableness and completeness of these factors, we discussed them with Navy and DOD officials and outside defense experts from various research and academic organizations. We also used the principles laid out in the Government Performance and Results Act of 1993 as additional benchmarks for our assessment. To determine the Navy's transformation-related activities and develop our observations of the key management issues affecting progress, we obtained information, documents, and perspectives from officials at all levels of the Navy, including Navy headquarters, the Navy Warfare Development Command, the Naval War College, the Atlantic and Pacific Fleets, and the Offices of the Secretary of Defense and the Chairman of the Joint Chiefs of Staff. We discussed Navy transformation with the former Secretary of the Navy (1998-2001) and with several senior Navy leaders who have responsibility for various aspects of the Navy's transformation. 
We also obtained perspectives from several defense experts and academicians who have followed military and Navy transformation. Appendix VI lists the principal organizations and offices where we performed work. We reviewed an extensive array of policy, planning, and guidance documents; intelligence documents; posture statements and speeches; congressional hearings and testimonies; open literature; and studies and assessments. We also made extensive use of information available on public and DOD Internet web sites. To develop a better understanding of the Navy's transformation and the actions it has taken to carry out the transformation, we obtained information on various areas related to concept development, experimentation, innovation, research and development, and other transformation activities. We reviewed the concept of network centric warfare with Navy officials at several organizations and offices responsible for developing and implementing the concept. To ascertain the Navy's experimentation and innovation efforts, we discussed the plans, content, and results with officials at the Navy Warfare Development Command, Atlantic and Pacific Fleets, and research and development organizations. To obtain information on the Navy's participation in joint experimentation efforts, particularly Millennium Challenge 2002, we met with officials at the U.S. Joint Forces Command and the Joint Staff's Joint Vision and Transformation Division. To be cognizant of the security environment in which the Navy is likely to operate its forces through 2020, we obtained an intelligence briefing from the Defense Intelligence Agency. To attain information on the Navy's investment in research and development to support transformation, we met with officials at the Office of Naval Research, the Space and Naval Warfare Systems Command, and the Defense Advanced Research Projects Agency. 
Although our review did not include Marine Corps transformation activities, we did meet with a senior Marine Corps official responsible for the service's transformation to discuss coordination and joint transformation-related efforts between the two services. We did not include the Navy's management of service Defense Reform Initiatives in our scope. Our review was conducted from August 2000 through May 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees, the Secretary of Defense, the Secretary of the Navy, the Chairman of the Joint Chiefs of Staff, and the Chief of Naval Operations. We will also make copies available to others upon request. Please contact me at (202) 512-3958 if you or your staff have any questions concerning this report. Major contributors to this report were Marvin E. Casterline, Mark J. Wielgoszynski, Joseph W. Kirschbaum, and Stefano Petrucci. To more sharply focus on the capabilities the Navy will need in the next 10 to 15 years, in 1999 the Navy reorganized its science and technology resources into 12 future naval capabilities. The objective is to focus on capabilities and not platforms. The future naval capabilities are managed by integrated product teams, which include senior Navy and Marine Corps military and civilian officials. These teams focus on the overall capability by prioritizing the individual efforts and supporting technology areas. Table 1 lists the 12 future naval capabilities and provides examples of individual technology efforts for each capability. Since March 1997 the Navy has conducted nine fleet battle experiments. Each of these experiments has focused on some of the Navy's core missions, such as land attack, or those it expects to conduct in the future. 
These experiments have also enabled the Navy to assess how new technologies and approaches could enhance fleet capabilities and operations with joint and allied forces. The experiments rotate among the Navy's fleets and are scheduled to coincide with a major fleet exercise. Roughly $5 million is dedicated to each fleet battle experiment. This amount does not include the operation and maintenance funds expended by a fleet during the actual experiment. Upon completion, each experiment is assessed to determine which concepts proved workable and what follow-on experimentation should be pursued. Table 2 provides some examples of issues addressed in the fleet battle experiments. A wide range of innovations and transformation-related activities are being conducted at the fleet level and in many other Navy organizations. For example, the Second Fleet has been evaluating the concepts, technologies, and procedures for network centric antisubmarine warfare. This concept employs collaborative tools to link ships and aircraft to greatly increase the effectiveness of antisubmarine forces. It assists the Navy in implementing its plan to distribute antisubmarine warfare capability throughout its forces rather than in only a few dedicated platforms. Table 3 provides examples of Navy innovation activities. A number of factors are important for the Navy or any military organization to successfully transform its forces and operations. On their own or in combination, these eight factors are useful in establishing effective planning mechanisms for managing transformation efforts. We identified these factors from our review of a wide range of Department of Defense (DOD) and Navy publications and statements, open literature, academic research on the subject of military innovation and transformation, and case studies of past transformation efforts. 
To assess the reasonableness and completeness of these factors, we discussed them with Navy and DOD officials and outside defense experts from various research and academic organizations. A clear and authoritative statement of vision, rationale, and direction of transformation efforts is necessary. The precise shape and structure of the future Navy is difficult to determine. But the direction of development for required capabilities can be outlined to the extent that lines of effort can be delineated, priorities established, and responsibilities for executing them assigned. The Navy's leadership must ensure that such policies are communicated throughout its organization. This factor involves the details of transformation and how an organization should carry them out. This entails a delineation of organizational elements responsible for converting concepts and ideas into practical operational and force structure changes. It is important that personnel and funds are dedicated to innovation and transformation-related efforts. These efforts include experimentation, prototype development, and acquisition. For example, the period of the 1920s and 1930s was one of fiscal constraint for the Navy. But it devoted considerable resources to the development of aircraft carriers and naval aviation, which later contributed to the Navy's success during the Second World War. Clear and adaptable measures of effectiveness are required for experiments to determine the value of innovations and for procedural matters to determine the progress of transformation. Innovation and transformation must include changes in how the Navy operates at all levels. There must be feedback among innovators, operators, experimenters, doctrine writers, and the training and education establishment. Many defense experts have recognized this linkage as one of the most important elements of military transformation. This is the "culture" aspect of transformation. 
Leaders from all levels of the organization should provide tangible commitment to Navy transformation and to those who make contributions to that end. Innovators must be given incentives to innovate, allowed to take reasonable risks in areas such as experiments, and given the authority to conduct energetic analyses to address the Navy's future warfare challenges. The active support of the Congress is vital to effecting transformation in the Navy. In some cases, this may be resource-oriented. In others, such support would involve congressional oversight, as it has in the past, and provide incentives and direction when and where appropriate. For example, during the development of naval aviation, the Congress mandated that those officers seeking to command the new aircraft carriers had to be flight qualified. This mandate stimulated the career path. To better ensure an effective transformation, the Navy needs to coordinate its plans and efforts with the Congress as well as the other services and joint organizations. Individual Navy efforts must be interoperable with the other services in order for future joint operations to be viable. This is applicable to the specifications of individual capabilities, such as communication equipment, as well as to the broader issue of developing integrated operational level capabilities and concepts.
With the end of the Cold War, national security strategies changed to meet new global challenges. The Navy developed a new strategic direction in the early 1990s, shifting its primary focus from open ocean "blue water" operations to littoral, or shallow water, operations closer to shore. GAO found that although the Navy has recently placed more emphasis on transformation, it does not have a well-defined and overarching strategy for transformation. It has not clearly identified the scope and direction of its transformation; the overall goals, objectives, and milestones; or the specific strategies and resources to be used in achieving these goals. It also has not clearly identified organizational roles and responsibilities, priorities, resources, or ways to measure progress. Without a well-defined strategic plan to guide the Navy's efforts, senior leaders and Congress will not have the tools they need to ensure that the transformation is successful.
The new IRS Commissioner and IRS management have expressed a commitment to ensure that taxpayers are treated properly. Even so, problems with current management information systems make it impossible to determine the extent to which allegations of taxpayer abuse and other taxpayer complaints have been reported, or the extent to which actions have been taken to address the complaints and prevent recurrence of systemic problems. That is because, as we reported to you in 1996, information systems currently maintained by IRS, Treasury OIG, and the Department of Justice do not capture the necessary management information. These systems were designed as case tracking and resource management systems intended to serve the management information needs of particular functions, such as IRS Inspection's Internal Security Division. None of these systems include specific data elements for "taxpayer abuse"; instead, they contain data elements that encompass broad categories of misconduct, taxpayer problems, and legal and administrative actions. Information contained in these systems relating to allegations and investigations of taxpayer abuse and other taxpayer complaints is not easily distinguishable from information on allegations and investigations that do not involve taxpayers. Consequently, as currently designed, the information systems cannot be used individually or collectively to account for IRS' handling of instances of alleged taxpayer abuse. Officials of several organizations indicated to us that eight information systems might include information related to taxpayer abuse allegations--five maintained by IRS, one by Treasury OIG, and two by Justice. (See attachment for a description of these systems.) These officials also said the systems could not be used to identify such instances without a review of specific case files. 
From our review of data from these systems for our 1996 report, we concluded that none of them, either individually or collectively, have common or comparable data elements that can be used to identify the number or outcomes of taxpayer abuse allegations or related investigations and actions. Rather, each system was developed to provide information for a particular organizational function, usually for case tracking, inventory, or other managerial purposes relative to the mission of that particular function. While each system has data elements that could reflect how some taxpayers have been treated, the data elements vary and in certain cases may relate to the same allegation and same IRS employee. Without common or comparable data elements and unique allegation and employee identifiers, these systems do not collect information in a consistent manner that could be used to accurately account for all allegations of taxpayer abuse. As we also reported in our 1996 report, IRS has not historically had a definition of taxpayer abuse. In response to the report, IRS adopted a definition for taxpayer complaints that included the following elements: (1) allegations of IRS employees' violating laws, regulations, or the IRS Code of Conduct; (2) overzealous, overly aggressive, or otherwise improper behavior of IRS employees in discharging their official duties; and (3) breakdowns in IRS systems or processes that frustrate taxpayers' ability to resolve issues through normal channels. Also in response to the report, IRS established a Customer Feedback System in October 1997, which IRS managers are to use to report allegations of improper employee behavior toward taxpayers. IRS used this system to support its first required annual reporting to Congress on taxpayers' complaints through December 31, 1997. IRS officials acknowledged, however, that there were changes needed to ensure the accuracy and consistency of the reported data. 
The 1988 amendments to the Inspectors General Act, which created the Treasury OIG, did not consolidate IRS Inspection into the Treasury OIG, but authorized the Treasury OIG to perform oversight of IRS Inspection and conduct audits and investigations of the IRS as appropriate. The act also provided the Treasury OIG with access to taxpayer data under the provisions of Section 6103 of the Internal Revenue Code as needed to conduct its work, with some recording and reporting requirements for such access. Currently, Treasury OIG is responsible for investigating allegations of misconduct, waste, fraud, and abuse involving senior IRS officials, GS-15s and above, as well as IRS Inspection employees. Treasury OIG also has oversight responsibility for the overall operations of IRS Inspection. Since November 1994, Treasury OIG has had increased flexibility for referring allegations involving GS-15s to IRS for investigation or administrative action. The need to make more referrals of GS-15 level cases was due to resource constraints and an increased emphasis by Treasury OIG on investigations involving criminal misconduct and procurement fraud across all Treasury bureaus. In fiscal year 1996, Treasury OIG conducted 43 investigations--14 percent of the 306 allegations it received--many of which implicated senior IRS officials. Treasury OIG officials said that these investigations rarely involved allegations of taxpayer abuse because senior IRS officials and IRS Inspection employees usually do not interact directly with taxpayers. The IRS Chief Inspector, who reports directly to the IRS Commissioner, is responsible for conducting IRS investigations and internal audits done by IRS Inspection, as well as for coordinating IRS Inspection activities with Treasury OIG. IRS Inspection is to work closely with Treasury OIG in planning and performing its duties. 
IRS Inspection is also to provide information on its activities and results, as well as constraints or limitations placed on its activities, to Treasury OIG for incorporation into Treasury OIG's Semiannual Report to Congress. Disputes that the IRS Chief Inspector may have with the IRS Commissioner are to be resolved through Treasury OIG and the Secretary of the Treasury, to whom the Treasury OIG reports. In September 1992, Treasury OIG issued Treasury Directive 40-01, which summarizes the authority vested in Treasury OIG and the reporting responsibilities of various Treasury bureaus. Treasury law enforcement bureaus, including IRS, are to (1) provide a monthly report to Treasury OIG concerning significant internal investigative and audit activities; (2) notify Treasury OIG immediately upon receiving allegations involving senior IRS officials, internal affairs employees, or IRS Inspection employees; and (3) submit written responses to Treasury OIG detailing actions taken or planned in response to Treasury OIG investigative reports and Treasury OIG referrals for agency management action. Under procedures established in a Memorandum of Understanding between Treasury OIG and IRS Commissioner in November 1994, the requirement for immediate referrals to Treasury OIG of all misconduct allegations covered in the Directive was reiterated and supplemented. Treasury OIG has the discretion to refer any allegation to IRS for appropriate action, that is, either investigation by IRS Inspection or administrative action by IRS management. If IRS officials believe that an allegation referred by Treasury OIG warrants Treasury OIG attention, they may refer the case back to Treasury OIG, requesting that Treasury OIG conduct an investigation. 
During our review for the 1996 report, Treasury OIG officials advised us that under the original 1992 Directive, they generally handled most allegations implicating Senior Executive Service (SES) and IRS Inspection employees, while reserving the right of first refusal on GS-15 employees. Under the procedures adopted in 1994, which were driven in part by resource constraints and Treasury OIG's need to do more criminal misconduct and procurement fraud investigations across all Treasury bureaus, Treasury OIG officials stated they have generally referred allegations involving GS-15s and below to IRS for investigation or management action. The same is true for allegations against any employees, including those in the SES, involving administrative matters and allegations dealing primarily with disputes of tax law interpretation. Of the 306 allegations Treasury OIG received in fiscal year 1996, it referred 214 to IRS--either for investigation or administrative action; investigated 43; and closed 9 others for various administrative reasons. Treasury OIG officials stated that, based on their investigative experience, most allegations of wrongdoing by IRS staff that involve taxpayers do not involve senior-level IRS officials or IRS Inspection employees. Rather, these allegations typically involve IRS Examination and Collection employees who most often interact directly with taxpayers. 
Treasury OIG officials are to assess the adequacy of IRS' actions in response to Treasury OIG investigations and referrals as follows: (1) IRS is required to make written responses on actions taken within 90 days and 120 days, respectively, on Treasury OIG investigative reports of completed investigations and Treasury OIG referrals for investigations or management action; (2) Treasury OIG investigators are to assess the adequacy of IRS' responses before closing the Treasury OIG case; and (3) Treasury OIG's Office of Oversight is to assess the overall effectiveness of IRS Inspection capabilities and systems through periodic operational reviews. In addition to assessing IRS' responses to Treasury OIG investigations and referrals, each quarter, the Treasury Inspector General, Deputy Inspector General, and Assistant Inspector General for Investigations are to brief the IRS Commissioner, IRS Deputy Commissioner, and Chief Inspector on the status of allegations involving senior IRS officials, including those being investigated by Treasury OIG and those awaiting IRS action. The status of referrals is also to be included in discussions during these quarterly Inspector General briefings with the IRS Commissioner. Since 1996, there has been some indication of problems between the two offices. Specifically, in its most recent Semiannual Report to Congress, Treasury OIG concluded, after reviewing IRS' compliance with Treasury Directive 40-01, that "both IRS and Treasury OIG need to make improvements, particularly in the area of timely, prompt referrals." It is not clear what steps Treasury OIG officials plan to take to resolve the problems. At the Committee's September 1997 IRS oversight hearings, some IRS employees raised concerns about the effectiveness of IRS Inspection and its independence from undue pressures and influence from IRS management. 
Since that time, debate has continued on the issue of where IRS Inspection would be optimally placed organizationally to provide assurance that taxpayers are treated properly. This is not a new issue. During the debate preceding the passage of the 1988 amendments to the Inspectors General Act that established the Treasury OIG and left IRS Inspection intact, as well as on several other occasions since, concerns have been raised about the desirability of having a separate IRS Inspection Service. Historically, we have supported a strong statutory Treasury OIG, believing that such an office could provide independent oversight of the Department, including IRS. That is, reviews of IRS addressed to the Secretary of the Treasury, rather than the IRS Commissioner, should improve executive branch oversight of tax administration in general and provide greater assurance that taxpayers are treated properly, fairly, and courteously. We have also noted that under the statute, Treasury OIG is authorized to enhance the protection of taxpayer rights by conducting periodic independent reviews of IRS dealings with taxpayers and IRS procedures affecting taxpayers. We have also recognized that, to meet his managerial responsibilities, the IRS Commissioner needs an internal capability to review the effectiveness of IRS programs. IRS Inspection has provided Commissioners with investigative and audit capabilities to evaluate IRS programs since 1952. IRS Inspection currently has roughly 1,200 authorized staff in its budget who are split about equally between its two divisions, Internal Security and Internal Audit. The Treasury OIG, on the other hand, has fewer than 300 authorized staff to provide oversight of IRS Inspection activities as well as to carry out similar investigations and audits for Treasury and its 10 other very diverse bureaus. 
IRS officials have been concerned that if IRS Inspection is transferred to the Treasury OIG, the transferred resources will be used to investigate or audit other Treasury bureaus to the detriment of critical IRS oversight. The Inspectors General Act provides guidance on the authorities, qualifications, safeguards, resources, and reporting requirements needed to ensure independent investigative and audit capabilities. No matter where IRS Inspection is placed organizationally, certain mechanisms need to be in place to ensure that it is held accountable and can achieve its mission without undue pressures or influence. For example, a key component of accountability and protection against undue pressures or influence is the reporting of investigative and audit activities and findings both to those responsible for agency management and to those responsible for oversight. Another IRS organization responsible for protecting the rights of taxpayers is the Taxpayer Advocate. The position was originally codified in the Taxpayer Bill of Rights 1 as the Taxpayer Ombudsman, although IRS has had the underlying Problem Resolution Program (PRP) in place since 1979. In the Taxpayer Bill of Rights 2, the Taxpayer Advocate and the Office of the Taxpayer Advocate replaced the Taxpayer Ombudsman position and the headquarters PRP staff. The authorities and responsibilities of this new office were expanded, for example, to address taxpayer cases involving IRS enforcement actions and refunds. The most significant change may have been to emphasize that the Advocate and those assigned to the Advocate's Office are expected to view issues from the taxpayers' perspective and find ways to alleviate individual taxpayer concerns as well as systemic problems. The Advocate reported that it resolved 237,103 cases in fiscal year 1997.
Its reported activities included establishing cases to resolve taxpayer concerns, providing relief to taxpayers with hardships, resolving cases in a proper and timely manner, and analyzing and addressing factors contributing to systemic problems. The report also discussed activities and initiatives and proposed solutions for systemic problems. Even with the enhanced legislative authorities and numerous activities and initiatives, questions about the effectiveness of the Taxpayer Advocate persist. The questions relate to the Advocate's (1) organizational independence within IRS; (2) resource commitments to achieve its mission; and (3) ability to identify and correct systemic problems adversely affecting taxpayers. We have recently initiated a study of the Advocate's Office to address these questions about the Advocate's effectiveness. The first question centers on the Advocate's organizational placement at headquarters and field offices. The Taxpayer Advocate reports to the IRS Commissioner. Taxpayer Advocates in the field report to the IRS Regional Commissioner, District Director, or Service Center Director in their particular geographic area. Thus, these field advocate officials report to the IRS executives who are responsible for the operations that may have frustrated taxpayers and created the Advocate's caseloads. The second question involves the manner in which the Advocate's Office is staffed and funded. For fiscal year 1998, the Advocate's Office was authorized 442 positions to handle problem resolution duties. These authorized Advocate Office staff must rely on assistance from more than 1,000 other field employees, on a full-time or part-time basis, to carry out these duties. These 1,000 employees are funded by their functional offices, such as Collection or Customer Service. While working PRP cases, these employees receive program direction and guidance from the Advocate's Office.
They are administratively responsible to their Regional Commissioners, District Directors, or Service Center Directors--again, the same managers responsible for the operations that may have frustrated taxpayers. The third question was debated during oversight hearings last year regarding the Advocate's ability to identify and correct IRS systems or processes that have frustrated taxpayers. The question historically has been the amount of attention afforded the analysis of problem resolution cases to identify systemic issues in light of the Advocate's workload and available staff. The more recent question, however, has been the ability of the Advocate's Office to bring about needed administrative or legislative changes to address systemic problems, and whether its caseload demands detract from its ability to focus on its overall mission. Our recently initiated study is designed to provide such an assessment of the Advocate's effectiveness. Two of the IRS systems--Inspection's Internal Security Management Information System (ISMIS) and Human Resources' Automated Labor and Employee Relations Tracking System (ALERTS)--are designed to capture information on cases involving employee misconduct, which may also involve taxpayer abuse. ISMIS is designed to determine the status and outcome of Internal Security investigations of alleged employee misconduct; ALERTS is designed to track disciplinary actions taken against employees. While ISMIS and ALERTS both track aspects of alleged employee misconduct, these systems do not share common data elements or otherwise capture information in a consistent manner. IRS also has three systems that include information on concerns raised by taxpayers. These systems include two maintained by the Office of Legislative Affairs--the Congressional Correspondence Tracking System and the IRS Commissioner's Mail Tracking System--as well as the Taxpayer Advocate's system known as the Problem Resolution Office Management Information System (PROMIS).
The two Legislative Affairs systems are designed to track taxpayer inquiries, including those made through congressional offices, to ensure that responses are provided by appropriate IRS officials. PROMIS is to track similar inquiries to ensure that taxpayers' problems are resolved and to determine whether the problems are recurring in nature. Treasury OIG has an information system known as the Treasury OIG Office of Investigations Management Information System. It is designed to track the status and outcomes of Treasury OIG investigations as well as the status and outcomes of actions taken by IRS in response to Treasury OIG investigations and referrals. Justice has two information systems that include data that may be related to taxpayer abuse allegations and investigations. The Executive Office for the U.S. Attorneys maintains a Centralized Caseload System that is designed to consolidate the status and results of civil and criminal prosecutions conducted by U.S. Attorneys throughout the country. Cases involving criminal misconduct by IRS employees are to be referred to and may be prosecuted by the U.S. Attorney in the particular jurisdiction in which the alleged misconduct occurred. The Tax Division of Justice also maintains a Case Management System that is designed for case tracking, time reporting, and statistical analysis of litigation cases the Division conducts. Lawsuits against either IRS or IRS employees are litigated by the Tax Division, with representation provided to IRS employees if the Tax Division determines that the actions taken by the employees were within the scope of employment. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. 
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Orders in person:
Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by fax at (202) 512-6061, or by TDD at (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO discussed the: (1) adequacy of the Internal Revenue Service's (IRS) controls over the treatment of taxpayers; (2) responsibilities of the Offices of the Chief Inspector (IRS Inspection) and the Department of the Treasury Office of the Inspector General (OIG) in investigating allegations of taxpayer abuse and employee misconduct; (3) organizational placement of IRS Inspection; and (4) role of the Taxpayer Advocate in handling taxpayer complaints. GAO noted that: (1) in spite of IRS management's heightened awareness of the importance of treating taxpayers properly, GAO remains unable to reach a conclusion as to the adequacy of IRS' controls to ensure fair treatment; (2) this is because IRS and other federal information systems that collect information related to taxpayer cases do not capture the necessary management information to identify instances of abuse that have been reported and actions taken to address them and to prevent recurrence of those problems; (3) Treasury OIG and IRS Inspection have separate and shared responsibilities for investigating allegations of employee misconduct and taxpayer abuse; (4) IRS Inspection has primary responsibility for investigating and auditing IRS employees, programs, and internal controls; (5) Treasury OIG is responsible for the oversight of IRS Inspection investigations and audits and may perform selective investigations and audits at IRS; (6) the two offices share some responsibilities as reflected in a 1994 IRS Commissioner-Treasury OIG Memorandum of Understanding; (7) in the Committee's September 1997 hearings, questions were raised about the independence of IRS Inspection; (8) subsequently, suggestions have been made to remove IRS Inspection from IRS and place it in Treasury OIG; (9) regardless of where IRS Inspection is placed organizationally, within IRS or Treasury OIG, mechanisms need to be in place to ensure its accountability and its ability to focus on its mission independent from undue pressures or influences; 
(10) the Inspectors General Act as amended in 1988, provides guidance on the authorities, qualifications, safeguards, resources, and reporting requirements needed to ensure independent investigation and audit capabilities; (11) in 1979, the Taxpayer Ombudsman was established administratively within IRS to advocate for taxpayers and assume authority for IRS' Problem Resolution Program; (12) in 1988, this position was codified in the Taxpayer Bill of Rights 1; (13) in 1996, the Taxpayer Bill of Rights 2 replaced the Ombudsman with the Taxpayer Advocate and expanded the responsibilities of the new Office of the Taxpayer Advocate; (14) the Advocate was charged under the legislation with helping taxpayers resolve their problems with the IRS and with identifying and resolving systemic problems; and (15) it is now nearly 20 years after the creation of the first executive-level position in IRS to advocate for taxpayers, and questions about the effectiveness of the advocacy continue to be asked.
Under A-76, commercial activities may be converted to or from contractor performance either by direct conversion or by cost comparison. Under direct conversion, specific conditions allow commercial activities to be moved from government or contract performance without a cost comparison study (for example, for activities involving 10 or fewer civilians). Generally, however, commercial functions are to be converted to or from contract performance by cost comparison, whereby the estimated cost of government performance of a commercial activity is compared to the cost of contractor performance in accordance with the principles and procedures set forth in Circular A-76 and the revised supplemental handbook. As part of this process, the government identifies the work to be performed (described in the performance work statement), prepares an in-house cost estimate based on its most efficient organization, and compares it with the winning offer from the private sector. According to A-76 guidance, an activity currently performed in-house is converted to performance by the private sector if the private sector offer is either 10 percent lower than the direct personnel costs of the in-house cost estimate or is $10 million less (over the performance period) than the in-house cost estimate. OMB established this minimum cost differential to ensure that the government would not convert performance for marginal savings. The handbook also provides an administrative appeals process. An eligible appellant must submit an appeal to the agency in writing within 20 days of the date that all supporting documentation is made publicly available. Appeals are supposed to be adjudicated within 30 days after they are received. Private sector offerors who believe that the agency has not complied with applicable procedures have additional avenues of appeal. They may file a bid protest with the General Accounting Office or file an action in a court of competent jurisdiction. 
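The minimum cost differential described above amounts to a simple decision rule. The following Python sketch illustrates the either/or test as stated in the text; the function name and dollar figures are illustrative only, not part of the circular.

```python
# Illustrative sketch of the A-76 minimum cost differential described above.
# All names and example figures are hypothetical.

def convert_to_contractor(in_house_total: float,
                          in_house_personnel: float,
                          private_offer: float) -> bool:
    """Return True if the private offer clears the minimum differential.

    Per the rule as described: the offer must be either (a) lower than the
    in-house estimate by at least 10 percent of the estimate's direct
    personnel costs, or (b) at least $10 million lower than the in-house
    estimate over the performance period.
    """
    TEN_MILLION = 10_000_000
    savings = in_house_total - private_offer
    return savings >= 0.10 * in_house_personnel or savings >= TEN_MILLION

# Example: a $50M in-house estimate with $30M in direct personnel costs.
# A $46M offer saves $4M, which exceeds 10% of $30M ($3M), so it converts;
# a $48M offer saves only $2M and does not.
print(convert_to_contractor(50e6, 30e6, 46e6))  # True
print(convert_to_contractor(50e6, 30e6, 48e6))  # False
```

The rule's purpose, as the text notes, is to prevent conversions that would yield only marginal savings once transition costs are considered.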
Circular A-76 requires agencies to maintain annual inventories of commercial activities performed in-house. A similar requirement was included in the 1998 FAIR Act, which directs agencies to develop annual inventories of their positions that are not inherently governmental. The fiscal year 2000 inventory identified approximately 850,000 full-time equivalent commercial-type positions, of which approximately 450,000 were in DOD. OMB has not yet released DOD's inventory for 2001. DOD has been the leader among federal agencies in recent years in its use of OMB Circular A-76, with very limited use occurring in other agencies. However, in 2001, OMB signaled its intention to direct greater use of the circular on a government-wide basis. In a March 9, 2001, memorandum to the heads and acting heads of departments and agencies, the OMB Deputy Director directed agencies to take action in fiscal year 2002 to directly convert or complete public-private competitions of not less than 5 percent of the full-time equivalent positions listed in their FAIR Act inventories. Subsequent guidance expanded the requirement by 10 percent in 2003, with the ultimate goal of competing at least 50 percent. In 1999, DOD began to augment its A-76 program with what it terms strategic sourcing. Strategic sourcing may encompass consolidation, restructuring, or reengineering activities; privatization; joint ventures with the private sector; or the termination of obsolete services. Strategic sourcing can involve functions or activities regardless of whether they are considered inherently governmental, military essential, or commercial. I should add that these actions are recognized in the introduction to the A-76 handbook as being part of a larger body of options, in addition to A-76, that agencies must consider as they contemplate reinventing government operations. 
Strategic sourcing initially does not involve A-76 competitions between the public and the private sector, and the Office of the Secretary of Defense and service officials have stressed that strategic sourcing may provide smarter decisions because it determines whether an activity should be performed before deciding who should perform it. However, these officials also emphasized that strategic sourcing is not intended to take the place of A-76 studies and that positions examined under the broader umbrella of strategic sourcing may be subsequently considered for study under A-76. After several years of limited use of Circular A-76, the deputy secretary of defense gave renewed emphasis to the A-76 program in August 1995 when he directed the services to make outsourcing of support activities a priority in an effort to reduce operating costs and free up funds to meet other priority needs. The effort was subsequently incorporated as a major initiative under the then secretary's Defense Reform Initiative, and the program became known as competitive sourcing--in recognition of the fact that either the public or the private sector could win competitions. A-76 goals for the number of positions to be studied have changed over time, and out-year study targets are fewer than in previous years. However, future study targets could be impacted by the current administration's emphasis on reliance on the private sector for commercial activities. The number of positions planned for study and the timeframes for accomplishing those studies have changed over time in response to difficulties in identifying activities to be studied. In 1997, DOD's plans called for about 171,000 positions to be studied by the end of fiscal year 2003. In February 1999, we reported that DOD had increased this number to 229,000 but had reduced the number of positions to be studied in the initial years of the program. 
In August 2000, DOD decreased the number of positions to be studied under A-76 to about 203,000, added about 42,000 Navy positions for consideration under strategic sourcing, and extended the program to fiscal year 2005. Last year we noted that DOD had reduced the planned number to study to approximately 160,000 positions under an expanded time frame extending from 1997 to 2007. It also planned to study about 120,000 positions under strategic sourcing during that timeframe. More recently, DOD officials told us that the A-76 study goal for fiscal years 1997-2007 is now approximately 183,000 positions--135,000 between fiscal years 1997-2001, and 48,000 between fiscal years 2002-2007. It projects that it will study approximately 144,000 positions under strategic sourcing. To what extent the A-76 study goals are likely to change in the future could be a function of changes in inventories of commercial activities and continuing management emphasis on competitive sourcing. Although DOD's fiscal year 2001 inventory of commercial activities has not been publicly released, we have noted some reductions between previous inventories as the department has gained experience in completing them. In reporting on our analysis of DOD's initial FAIR Act inventory, we cited the need for more consistency in identifying commercial activities. We found that the military services and defense agencies did not always consistently categorize similar activities. We have not had an opportunity to analyze more recent inventories to determine to what extent improved guidance may have helped to increase consistency in categorizing activities. At the same time, we also previously reported that a number of factors could reduce the number of additional functions studied under A-76. 
For example, we noted that factors such as geographic dispersion of positions and the inability to separate commercial activities from inherently governmental activities could limit the number of inventory positions studied. Likewise, the inventory already makes provision for reducing the number of positions eligible for competition such as where performance by federal employees was needed because of national security or operational risk concerns. On the other hand, The President's Management Agenda, Fiscal Year 2002, notes "Agencies are developing specific performance plans to meet the 2002 goal of completing public-private or direct conversion competition on not less than five percent of the full-time equivalent employees listed on the FAIR Act inventories. The performance target will increase by 10 percent in 2003." Additionally, DOD's Quadrennial Defense Review Report, September 30, 2001, states that the department should "Focus DOD 'owned' resources on excellence in those areas that contribute directly to warfighting. Only those functions that must be performed by DOD should be kept by DOD. Any function that can be provided by the private sector is not a core government function. Traditionally, 'core' has been very loosely and imprecisely defined and too often used as a way of protecting existing arrangements." We have not assessed to what extent efforts in this area are likely to strengthen emphasis on A-76. As we tracked DOD's progress in implementing its A-76 program since the mid-to late-1990s, we identified a number of challenges and concerns that have surrounded the program--issues that other agencies may encounter as they seek to respond to the administration's emphasis on competitive sourcing. 
They include (1) the time required to complete the studies, (2) cost and resources to conduct and implement the studies, (3) selecting and grouping positions to compete, and (4) developing and maintaining reliable estimates of projected savings expected from the competitions. These need not be reasons to avoid A-76 studies but are factors that need to be taken into consideration in planning for the studies. Individual A-76 studies in DOD have taken longer than initially projected. In launching its A-76 program, some DOD components made overly optimistic assumptions about the amount of time needed to complete the competitions. For example, the Army initially projected that it would take 13 to 21 months to complete studies, depending on their size. The Navy initially projected completing its studies in 12 months. The numbers were subsequently adjusted upward, and the most recent available data indicate that the studies take on average about 22 months for single-function and 31 months for multifunction studies. Agencies need to keep these timeframes in mind when projecting resources required to support the studies and timeframes for when savings are expected to be realized--and may need to revisit these projections as they gain experience under the program. Once DOD components found that the studies were taking longer than initially projected, they realized that a greater investment of resources would be needed than originally planned to conduct the studies. For example, the 2001 president's budget showed a wide range of projected study costs, from about $1,300 per position studied in the Army to about $3,700 in the Navy. Yet, various officials expressed concern that these figures underestimated the costs of performing the studies. While the costs they cited varied, some ranged up to several thousand dollars per position. One factor raising costs was the extent to which the services used contractors to facilitate completion of the studies. 
Given differences in experience levels between DOD and other agencies in conducting A-76 studies, these other agencies may need to devote greater resources to training or otherwise obtaining outside assistance in completing their studies. In addition to study costs, significant costs can be incurred in implementing the results of the competitions. Transition costs include the separation costs for civilian employees who lose their jobs as a result of competitions won by the private sector or when in-house organizations require a smaller civilian workforce. Such separation costs include the costs of voluntary early retirement, voluntary separation incentives, and involuntary separations through reduction-in-force procedures. Initially, we found that DOD budget documents had not fully accounted for such costs in estimating savings that were likely to result from their A-76 studies. More recently, we found that the Department had improved its inclusion of study and transition costs in its budget documents. Selecting and grouping functions and positions to compete can be difficult. Because most services faced growing difficulties in or resistance to finding enough study candidates to meet their A-76 study goals, the goals and time frames for completing studies changed over time; and DOD ultimately approved strategic sourcing as a way to complement its A-76 program and help achieve its savings goals. Guidelines implementing the FAIR Act permit agencies to exclude certain commercial activities from being deemed eligible for competition such as patient care in government hospitals. Additionally, as experienced by DOD, factors such as geographic dispersion of positions and the inability to separate commercial activities from inherently governmental activities could limit the number of inventory positions studied. It becomes important to consider such factors in determining what portions of the FAIR inventories are expected to be subject to competition. 
Considerable questions have been raised concerning to what extent DOD has realized savings from its A-76 studies. In part, these concerns were exacerbated by the lack of a reliable system for capturing initial net savings estimates and updating them as needed and by other difficulties associated with the lack of precision often associated with savings estimates. Our work has shown that while significant savings are being achieved by DOD's A-76 program, it has been difficult to determine precisely the magnitude of those savings. Savings may be limited in the short-term because up-front investment costs associated with conducting and implementing the studies must be absorbed before long-term savings begin to accrue. Several of our reports in recent years have highlighted these issues. For example, we reported in March 2001 that A-76 competitions had reduced estimated costs of Defense activities primarily by reducing the number of positions needed to perform those activities under study. This is true regardless of whether the government's in-house organization or the private sector wins the competition. Both government and private sector officials with experience in such studies have stated that, in order to be successful in an A-76 competition, they must seek to reduce the number of positions required to perform the function being studied. Related actions may include restructuring and reclassifying positions and using multiskill and multirole employees to complete required tasks. In December 2000, we reported on DOD's savings estimates from a number of completed A-76 studies. We noted that DOD had reported cost reductions of about 39 percent, yielding an estimated $290 million savings in fiscal year 1999. 
We also agreed that individual A-76 studies were producing savings but stressed difficulty in quantifying the savings precisely for a number of reasons. First, because of an initial lack of DOD guidance on calculating costs, baseline costs were sometimes calculated on the basis of average salaries and authorized personnel levels rather than on actual numbers. Second, DOD's savings estimates did not take into consideration the costs of conducting the studies and implementing the results, which must be offset before net savings begin to accrue. Third, there were significant limitations in the database DOD used to calculate savings. Finally, savings become more difficult to assess over time as workload requirements or missions change, affecting program costs and the baseline from which savings were initially calculated. Our August 2000 report assessed the extent to which there were cost savings from nine A-76 studies conducted by DOD activities. The data showed that DOD realized savings from seven of the cases, but overall less than Defense components had initially projected. Each of the cases presented unique circumstances that limited our ability to precisely calculate savings. Some suggested lower savings; others suggested higher savings than initially identified. In two cases, DOD components had included cost reductions unrelated to the A-76 studies as part of their projected savings. Additionally, baseline cost estimates used to project savings were usually calculated using an average cost of salary and benefits for the number of authorized positions, rather than the actual costs of the positions. The latter calculation would have been more precise. In four of the nine cases, actual personnel levels were less than authorized. While most baseline cost estimates were based largely on personnel costs, up to 15 percent of the costs associated with the government's most efficient organizations' plans or the contractors' offers were not personnel costs.
Because these types of costs were not included in the baseline, a comparison of the baseline with the government's most efficient organization or contractor costs may have resulted in understating cost savings. On the other hand, savings estimates did not reflect study and implementation costs, which reduced savings in the short term. DOD has revised its information systems to better track the estimated and actual costs of activities studied but not to revise previous savings estimates. DOD is also emphasizing the development of standardized baseline cost data to determine initial savings estimates. In practice, however, many of the cost elements that are used in A-76 studies will continue to be estimated because DOD lacks a cost accounting system to measure actual costs. Further, reported savings from A-76 studies will continue to have some element of uncertainty and imprecision and will be difficult to track in the out-years because workload requirements and missions change, affecting program costs and the baseline from which savings are calculated. Although comprising a relatively small portion of the government's overall service contracting activity, competitive sourcing under Circular A-76 has been the subject of much controversy because of concerns about the process raised both by the public and private sectors. Federal managers and others have been concerned about organizational turbulence that typically follows the announcement of A-76 studies. Government workers have been concerned about the impact of competition on their jobs, their opportunity for input into the competitive process, and the lack of parity with industry offerors to protest A-76 decisions. Industry representatives have complained about the fairness of the process and the lack of a "level playing field" between the government and the private sector in accounting for costs. 
Concerns also have been registered about the adequacy of oversight of the competition winners' subsequent performance, whether won by the public or private sector. Amid these concerns over the A-76 process, the Congress enacted section 832 of the National Defense Authorization Act for Fiscal Year 2001. The legislation required the Comptroller General to convene a panel of experts to study the policies and procedures governing the transfer of commercial activities for the federal government from government to contractor personnel. The panel, which Comptroller General David M. Walker chairs, includes senior officials from DOD, OMB, the Office of Personnel Management, private industry, federal labor organizations, and academia. The Commercial Activities Panel, as it is known, is required to report its findings and recommendations to the Congress by May 1, 2002. The panel had its first meeting on May 8, 2001, at which time it adopted a mission statement calling for improving the current framework and processes so that they reflect a balance among taxpayer interests, government needs, employee rights, and contractor concerns. Subsequently, the panel held three public hearings. At the first hearing on June 11, in Washington, D.C., over 40 individuals representing a wide spectrum of perspectives presented their views. The panel subsequently held two additional hearings, on August 8 in Indianapolis, Indiana, and on August 15 in San Antonio, Texas. The hearing in San Antonio specifically addressed OMB Circular A-76, focusing on what works and what does not in the use of that process. The hearing in Indianapolis explored various alternatives to the use of A-76 in making sourcing decisions at the federal, state, and local levels.
Since completion of the field hearings, the panel members have met in executive session several times, augmented between meetings by work of staff to help them (1) gather background information on sourcing trends and challenges, (2) identify sourcing principles and criteria, (3) consider A-76 and other sourcing processes to assess what's working and what's not, and (4) assess alternatives to the current sourcing processes. Panel deliberations continue with the goal of meeting the May 1 date for a report to the Congress. This concludes my statement. I would be pleased to answer any questions you or other members of the committee may have at this time.

Contacts and Acknowledgment

For further contacts regarding this statement, please contact Barry W. Holman at (202) 512-8412 or Marilyn Wasleski at (202) 512-8436. Other individuals making key contributions to this statement include Debra McKinney, Donald Bumgardner, Jane Hunt, Nancy Lively, Stephanie May, and Judith Williams.

DOD Competitive Sourcing: A-76 Program Has Been Augmented by Broader Reinvention Options. GAO-01-907T. Washington, D.C.: June 28, 2001.
DOD Competitive Sourcing: Effects of A-76 Studies on Federal Employees' Employment, Pay, and Benefits Vary. GAO-01-388. Washington, D.C.: March 16, 2001.
DOD Competitive Sourcing: Results of A-76 Studies Over the Past 5 Years. GAO-01-20. Washington, D.C.: December 7, 2000.
DOD Competitive Sourcing: More Consistency Needed in Identifying Commercial Activities. GAO/NSIAD-00-198. Washington, D.C.: August 11, 2000.
DOD Competitive Sourcing: Savings Are Occurring, but Actions Are Needed to Improve Accuracy of Savings Estimates. GAO/NSIAD-00-107. Washington, D.C.: August 8, 2000.
DOD Competitive Sourcing: Some Progress, but Continuing Challenges Remain in Meeting Program Goals. GAO/NSIAD-00-106. Washington, D.C.: August 8, 2000.
Competitive Contracting: The Understandability of FAIR Act Inventories Was Limited. GAO/GGD-00-68. Washington, D.C.: April 14, 2000.
DOD Competitive Sourcing: Potential Impact on Emergency Response Operations at Chemical Storage Facilities Is Minimal. GAO/NSIAD-00-88. Washington, D.C.: March 28, 2000.
DOD Competitive Sourcing: Plan Needed to Mitigate Risks in Army Logistics Modernization Program. GAO/NSIAD-00-19. Washington, D.C.: October 4, 1999.
DOD Competitive Sourcing: Air Force Reserve Command A-76 Competitions. GAO/NSIAD-99-235R. Washington, D.C.: September 13, 1999.
DOD Competitive Sourcing: Lessons Learned System Could Enhance A-76 Study Process. GAO/NSIAD-99-152. Washington, D.C.: July 21, 1999.
Defense Reform Initiative: Organization, Status, and Challenges. GAO/NSIAD-99-87. Washington, D.C.: April 21, 1999.
Quadrennial Defense Review: Status of Efforts to Implement Personnel Reductions in the Army Materiel Command. GAO/NSIAD-99-123. Washington, D.C.: March 31, 1999.
Defense Reform Initiative: Progress, Opportunities, and Challenges. GAO/T-NSIAD-99-95. Washington, D.C.: March 2, 1999.
Force Structure: A-76 Not Applicable to Air Force 38th Engineering Installation Wing Plan. GAO/NSIAD-99-73. Washington, D.C.: February 26, 1999.
Future Years Defense Program: How Savings From Reform Initiatives Affect DOD's 1999-2003 Program. GAO/NSIAD-99-66. Washington, D.C.: February 25, 1999.
DOD Competitive Sourcing: Results of Recent Competitions. GAO/NSIAD-99-44. Washington, D.C.: February 23, 1999.
DOD Competitive Sourcing: Questions About Goals, Pace, and Risks of Key Reform Initiative. GAO/NSIAD-99-46. Washington, D.C.: February 22, 1999.
OMB Circular A-76: Oversight and Implementation Issues. GAO/T-GGD-98-146. Washington, D.C.: June 4, 1998.
Quadrennial Defense Review: Some Personnel Cuts and Associated Savings May Not Be Achieved. GAO/NSIAD-98-100. Washington, D.C.: April 30, 1998.
Competitive Contracting: Information Related to the Redrafts of the Freedom From Government Competition Act. GAO/GGD/NSIAD-98-167R. Washington, D.C.: April 27, 1998.
Defense Outsourcing: Impact on Navy Sea-Shore Rotations. GAO/NSIAD-98-107. Washington, D.C.: April 21, 1998.
Defense Infrastructure: Challenges Facing DOD in Implementing Defense Reform Initiatives. GAO/T-NSIAD-98-115. Washington, D.C.: March 18, 1998.
Defense Management: Challenges Facing DOD in Implementing Defense Reform Initiatives. GAO/T-NSIAD/AIMD-98-122. Washington, D.C.: March 13, 1998.
The Department of Defense (DOD) has been at the forefront of federal agencies in using the OMB Circular A-76 process. In 1995, DOD made it a priority to reduce operating costs and free funds for other needs. DOD has also augmented the A-76 program with what it terms strategic sourcing--a broader array of reinvention and reengineering options that may not necessarily involve A-76 competitions. The number of positions--at one point 229,000--that DOD planned to study and the time frames for the studies have varied. Current plans are to study about 183,000 positions between fiscal years 1997 and 2007. Changes in the inventory of commercial activities and the current administration's sourcing initiatives could change the number of positions studied in the future. However, GAO has not evaluated the extent to which these changes might occur. DOD's A-76 program has faced several challenges that may provide valuable lessons for other federal agencies: (1) studies took longer than initially projected, (2) costs and resources required for the studies were underestimated, (3) selecting and grouping functions to compete can be difficult, and (4) determining and maintaining reliable estimates of savings proved difficult. The Commercial Activities Panel has been studying, and has held public hearings on, the policies and procedures, including the A-76 process, that govern the transfer of commercial activities from government personnel to contractors. The panel, composed of federal and private sector experts, is required to report its findings and recommendations to Congress by May 2002.
The Institute of Medicine, chartered by the National Academy of Sciences, has defined practice guidelines as systematically developed statements that assist practitioners in making decisions about appropriate health care for specific clinical conditions. For example, guidelines are available on such topics as the length of hospital stay for maternity care, the need for back surgery, and the management of pediatric asthma. Guidelines are intended to help physicians and others by crystallizing the research in medical literature, evaluating the evidence, applying the collective judgment of experts, and making the information available in a usable form. They are more often written as acceptable therapy options than as standardized practices that dictate specific treatments. Unlike standards of care, which have few accepted variations in appropriateness, most guidelines are expected to allow some variation because definitive scientific evidence does not always link specific practices to improved outcomes. Where there is a lack of scientific evidence, some organizations make recommendations that reflect expert opinion, while others recommend tests or procedures only when convincing scientific evidence of benefit exists. Many public and private organizations have been developing guidelines for decades. About 75 organizations have developed over 2,000 guidelines to date. The federal government supports the development of clinical practice guidelines through AHCPR, the National Institutes of Health (NIH), the Centers for Disease Control and Prevention, and the U.S. Preventive Services Task Force (USPSTF). Private guideline efforts have been undertaken by physician organizations, such as the American Medical Association; medical specialty societies, such as the American College of Cardiology; private research organizations, such as RAND Corporation; and private associations, such as the American Heart Association.
Guidelines are also developed commercially by private companies, such as Milliman and Robertson and Value Health Sciences, which market them to health care organizations. Given the multiplicity of sources for guideline development, it is not uncommon for more than one guideline to exist for the same medical condition or for recommendations to vary. For example, at least four organizations have issued a guideline on prostate cancer screening. In addition, guidelines tend to reflect the specialty orientation of the guideline developers. In the case of the prostate screening guideline, for example, the American Urological Association, the American College of Radiology, and the American Cancer Society recommend using a prostate-specific antigen test for all eligible patients aged 50 and older, whereas the USPSTF recommends against the routine use of this test. Recent national surveys indicate that a majority of managed care plans have adopted guidelines and made them available to providers. For example, a 1994 survey sponsored by the Physician Payment Review Commission found that 63 percent of managed care plans reported using formal written practice guidelines. The results also showed that the use of guidelines was least common among less structured managed care plans because of their more limited ability to influence physicians' practice. Specifically, 76 percent of the responding health maintenance organizations reported using practice guidelines, compared with 28 percent of preferred provider organizations. Health plans we reviewed had three strong motives for adopting guidelines: pressure to moderate expenditures, to show a high performance level across key quality indicators when compared with other plans, and to comply with accreditation and regulatory requirements. 
These plans view practice guidelines as tools to achieve these ends by promoting greater uniformity within their own physician networks and by helping physicians increase their efficiency, improve clinical decision-making, and eliminate inappropriate procedures. In selecting aspects of physician practices that could be improved through the use of guidelines, most plans we spoke with identified those services or conditions that are high cost, high medical liability risk, and high incidence for their patient population. They reviewed the provision of such services as hospital inpatient, pharmacy, and ambulatory care--as well as variations in utilization across physicians--to identify such conditions. For example, one plan identified pediatric asthma as a condition for guideline adoption because it is among the most frequent causes of hospital admission and repeat emergency department visits. Human immunodeficiency virus (HIV) infection and high cholesterol are also among the plan's top 10 topics for guideline selection. Several plans we contacted reported cost savings from implementing guidelines that specify the appropriate use of expensive services. In one case, a plan adopted a guideline for treating stroke patients that recommended physical therapy early in the patient's hospital stay. This practice resulted in shortened stays as well as improved outcomes. Another plan adopted a guideline on non-insulin-dependent diabetes to help physicians identify when to provide intensive management rather than routine care to patients with this low-cost condition that can lead to high-cost complications. Another plan used a low back pain guideline that generated savings from the selective use of high-cost diagnostic imaging services. Plans have also reported cost savings from implementing guidelines that reduce the incidence of acute conditions and the need for more expensive care. 
One managed care chain we contacted increased the percentage of Medicare enrollees receiving flu shots from 27 to 55 percent in 1 year. The chain reported a reduction of about 30 percent in hospital admissions for pneumonia, savings of about $700,000, and fewer lives lost. Practice guidelines were also heavily used by plans that were being evaluated by employers buying health care for their workforce. Standardized measures for assessing health plan performance are set forth in the Health Plan Employer Data and Information Set (HEDIS), which many employers and other payers view as a report card. Purchasers can use HEDIS to compare plans across several preventive services measures, including childhood immunizations, cholesterol screening, breast cancer screening, cervical cancer screening, prenatal care in the first trimester, diabetic retinal examination, and ambulatory follow-up after hospitalization for depression. Of the 19 plans we contacted, 14 collected performance data using HEDIS measures. The adoption of practice guidelines may help plans improve their performance on HEDIS measures. For example, through the use of pediatric and adult preventive care guidelines, one plan claimed that it raised to 95 percent the proportion of its physicians meeting appropriate childhood immunization schedules and to 75 percent the proportion meeting mammography screening goals. The plan also reported reducing the percentage of breast cancers identified at advanced stages from 30 to 10 percent. In addition, plans' adoption of guidelines is encouraged indirectly through health plan accrediting organizations. Although plans are generally not required to be accredited, many seek a review to satisfy purchasers' demands and enhance their marketability. The National Committee on Quality Assurance's (NCQA) accreditation standards require that plans have guidelines for the use of preventive health services.
The Joint Commission on Accreditation of Healthcare Organizations also has standards that encourage the use of practice guidelines, but not specific guidelines. States are also influencing plans' guideline use. For individuals covered under workers' compensation, for example, Florida specifies guidance on the use of diagnostic imaging to treat low back pain. As states increasingly require plans to meet certain treatment standards, plans are likely to adopt guidelines that will help them comply with these requirements. Few of the plans we visited had the resources to devote to developing an original guideline, since such an effort can be time consuming and expensive. They preferred instead to customize guidelines that had already been published to ensure local physician involvement and acceptance of the guidelines and to accommodate their individual plan objectives. In general, health plans customized guidelines by modifying their scope or recommendations or emphasizing one of several therapy options presented. Because adapted guidelines differ from original guidelines to varying degrees, some experts in the guideline development community caution that certain modifications, when made to accommodate local self-interests at the expense of patients, may compromise the integrity of the guideline. Some of the plans we visited also expressed a need for more medical technology assessments and outcomes data; however, they lack the resources to assume these activities. They suggested that the federal government enhance its role in these areas. Among the most important reasons for not adopting published guidelines strictly as written is the need for local physician involvement and acceptance. Plan managers we interviewed noted that published guidelines usually lack the input of their local physician community. 
They recognized that some plan physicians are reluctant to put aside their own practice patterns in favor of those recommended by outside sources, particularly when guidelines are based more on expert opinion than on conclusive scientific evidence. Physicians have confidence in guidelines that they or their peers take part in developing or that are developed by their professional organization. Therefore, guidelines adopted by a consensus of local physicians are more likely to be accepted. In one plan manager's view, without the physicians' participation in approving the final product, physicians would not be likely to follow the guideline. In citing the need for physician acceptance of guidelines, one plan manager put it this way: "The practice of medicine is parochial." Similarly, one large plan's medical policy specialist told us that published guidelines need to be modified because they are often not consistent with local standards of care--that they are not "in synch" with how plan physicians are practicing. This position was corroborated by the American Medical Association's Director of Practice Parameters, who said "a guideline can be developed at the national level, but it has to be localized . . . . [I]t comes down to local areas developing the recommendations that suit them." Plans selected practice guidelines from a variety of sources, including federal agencies and medical specialty societies, such as the American College of Physicians. Among the health plans we contacted, few had documentation on the methods they used to adapt guidelines. However, some described their approach as typically including some combination of physician consensus and a review of outcomes of clinical studies. When there was controversy or lack of strong clinical evidence, plans reported making greater use of local physician opinion and often performed independent literature reviews to provide additional information.
This was particularly likely with a guideline on a rapidly changing treatment method, such as treatment for heart attacks, since clinical developments may overtake the publication of existing guidelines. Plans have a number of other reasons for customizing clinical practice guidelines. These include cost considerations, resource constraints, the demographic characteristics of the enrolled population, simplicity of guideline presentation, and the need to update information contained in published guidelines. Plans we visited noted that clinical practice guidelines often fail to provide needed information on what is cost-effective care. In its 1992 report, the Institute of Medicine recommended that a clinical practice guideline include information on both the health and cost implications of alternative treatment strategies. However, many guidelines produced by federal and private entities do not routinely include cost-effectiveness analysis in the recommendation-making process, often because the information needed to conduct cost analysis is not available. Plans we visited often consider the costs of alternative treatments in deciding how to implement a guideline. In some instances, a guideline may allow choices among equally effective therapeutic options. This was the case with AHCPR's guideline on the treatment of depression in primary care settings, which stated: "No one antidepressant medication is clearly more effective than another. No single medication results in remission for all patients." Instead, the guideline listed several types of drugs that were considered equivalent in clinical effectiveness. In implementing this guideline, one plan we contacted chose the least expensive class of drugs from AHCPR's recommended list as its first-line treatment. The plan also noted that the selected drugs were older and their side effects were better known to its physicians.
Some plans we visited also noted that guidelines may not recommend the most cost-effective health care. For example, some plans adapted a published guideline on total hip replacement that recommended that patients be admitted to the hospital the night before their surgery. The plans changed the recommendation so that patients were admitted the morning of their surgery, even though most of these patients were elderly and lived far from the hospital. One guideline expert argued that this was done to lower the cost of care with little regard for the inconvenience to or impact on the patient. Local customizing is also influenced by the amount and type of health care resources available to the plan. For example, the USPSTF's colorectal cancer screening guideline recommends a periodic sigmoidoscopy or an annual fecal occult blood test or both. Plans with a sufficient number of physicians who are trained to perform sigmoidoscopies are more likely to choose the recommendation of screening with periodic sigmoidoscopy and may also perform the fecal occult blood test. However, those without enough trained physicians may decide to select only the fecal occult blood test. Some plans noted that guidelines may need to be tailored to allow for population differences in each locality. They cited research showing that differences in patients' health need to be taken into account since socioeconomically different populations may have different incidence and prevalence rates of disease. In particular, the research showed that Native American women required more frequent mammography screening due to their above-average incidence of breast cancer. Plans may also decide to recommend a wider application of diabetes screening services when their members are identified as having higher risk factors. The USPSTF guideline on diabetes states that there is insufficient evidence that routine screening is necessary.
However, members of certain ethnic groups (Hispanics, African-Americans, Native Americans) are among those likely to benefit from screening tests. Therefore, plans may need to adapt guidelines to serve the needs of their more vulnerable populations. Plans also cited the need to customize to make the information in a guideline available in a more usable form. Guideline documents vary in length, from a three-page brochure to a two-volume manual. Some guidelines consist largely of decision-tree charts, called clinical algorithms, while others are predominantly text, providing a synthesis of scientific evidence, expert consensus, and references to specific research studies. Sometimes published guidelines are broad in scope and cover not only a full range of medical practices--including diagnosis, treatment, and follow-up care--but also the guideline development methodology and areas for future research. The comprehensiveness of such guidelines, designed to reach the broadest audience of practitioners as well as clinical researchers, may require a book-length presentation. Therefore, plans typically adapted such guidelines to focus on a narrower set of clinical needs, such as the pharmacological management of patients with heart failure. Several plans pointed to AHCPR's 327-page guideline on primary care physicians' treatment of depression as being too long and complicated for busy clinicians. One plan reduced it to 44 pages, another to 20 pages, and a third to 4 pages. (AHCPR has issued a shorter quick-reference version of this guideline, as it does with all its guidelines.) Format may also be an issue with practice guidelines developed by health plans. A prominent expert on guideline development noted that a mathematically based cholesterol screening guideline could not be implemented because the plan's primary care physicians did not have time to follow the complicated guideline model. Sometimes the information in existing guidelines is not current. 
Medical information and technology, such as pharmacological management of a condition, is continually evolving. Yet, published guidelines may not be reviewed and revised on a timely basis. For example, NIH guidelines, called consensus statements, are not reviewed for at least 5 years after issuance. In fact, only about half of the plans we contacted reviewed and updated their guidelines annually. However, one plan published guidelines with an expiration date, forcing the plan to review the guidelines at least once annually. The extent of modifications that resulted from plans' customizing published guidelines varied from minimal to substantial. Sometimes the differences between the local and published guidelines were cosmetic. For example, some individual medical groups prepared shortened versions of regionally developed guidelines on plastic cards for quick physician referral. They also removed the original source's name and applied their logo to the documents to further enhance physicians' sense of ownership. Other modifications were more than superficial. One plan customized AHCPR's HIV guideline by adding drug treatments that were not covered in the original guideline, specifying when primary care physicians should refer patients to a specialist, and providing information on state reporting requirements. Finally, some changes could be considered substantial. For example, one plan we contacted relaxed the recent chicken pox vaccination guideline from the American Academy of Pediatrics. The Academy recommended that chicken pox vaccinations be given to all healthy children. The plan adapted the guideline by recommending that its physicians discuss the extent of immunity that the vaccine could confer and let parents decide whether they want the vaccine given to their children. 
The plan maintained that, because the immunity offered by the vaccine might not last a lifetime, it could result in more adult cases of chicken pox, an outcome that could result in serious harm or death. The plan held that it is better for children to contract chicken pox to ensure lifetime immunity than to get the vaccine. An Academy spokesperson commented that no significant loss of immunity has been demonstrated in healthy children who were vaccinated. At another plan, we found that a customized guideline recommended treatments specifically not endorsed by AHCPR. In its low back pain guideline, the plan recommended that physicians perform an invasive treatment to control pain and an invasive test to diagnose the extent of disc damage. However, AHCPR's guideline stated that the benefits of this treatment and test were unclear and not worth the potential risk of infection to patients. A plan representative told us that their guideline was adapted to address the concerns of the plan's orthopedists, who felt that the invasive treatment and test should have been included in the original guideline.

". . . to the extent that local adaptation, broadly defined, moves in the direction of excluding certain types of practitioners . . . or of weakening a guideline document fundamentally by allowing for the provision of marginally beneficial services in situations in which guidelines would probably say 'this is inappropriate for this class of people'--then you have what looks to me like a self-serving change."

". . . guidelines that recommend the best care practices to optimize outcomes for patients may not necessarily be cost-effective or easy for MCOs to implement. MCOs, with a commitment to the bottom line, may make modifications to guidelines to achieve their best interests and not those of patients."

Most plan managers we contacted applaud the various guidelines published by public and private entities.
The availability of such guidelines makes plans' guideline development efforts easier and less costly. Plans consider published guidelines to be useful summaries of the literature and science, written for a diverse audience. However, given the multiplicity of guideline sources, many plan managers told us they would prefer to see some federal agencies assume an alternative role in the guideline movement. Plans noted that having many federal and private-sector guidelines on the same topic is an inefficient use of limited resources. Furthermore, some of these guideline recommendations conflict, creating confusion for plan managers and practitioners. Plan managers also told us that their needs for medical technology assessments and outcomes data remain unmet. Some plan officials suggested that some federal agencies would provide a more useful service to managed care plans by not continuing to produce guidelines. Instead, they should publish and update summaries and evaluations of evidence on medical conditions and services so that plans could use this information to develop and update their own guideline recommendations. Other plans proposed that the federal government increase funding to develop useful practice guideline tools, such as methods to incorporate cost assessments and patient preferences into practice guidelines. Furthermore, several plans asserted that federal guideline funds should be used for outcomes research and technology assessment from which plans could develop their own guidelines. One plan manager said, "This is an area that health plans do not have the resources or expertise to adequately address." Managed care plans' growing interest in practice guidelines is driven by their need to control medical costs, ensure consistency of medical care, and demonstrate improved levels of performance. By using practice guidelines, plans are making a conscious decision about the care they intend to provide, reflecting the trade-off between costs and benefits. 
When published guidelines differ from a plan's clinical and financial objectives, they are typically customized with the active participation of the network physicians. Since published guidelines can be inconsistent, outdated, or too complex, local adaptation may be useful. Yet some changes may compromise the quality of patient care. Moreover, local adaptation may undermine the goal of clinical practice guidelines, which is to make medical care more reliant on evidence-based recommended practices and less a function of where a patient receives care. Comments on a draft of this report were obtained from the American Association of Health Plans, AHCPR, and two experts on guideline development and use. The American Association of Health Plans generally agreed with the draft, but suggested language changes where the report addressed the goal of reducing cost. They stated that practice guidelines are intended primarily to improve the quality and outcomes of care and secondarily to contain costs. We agree that plans use guidelines for quality improvement as well as cost management. AHCPR noted that managed care plans' views on the federal role of guideline activities were similar to the agency's views and its plans for the future. The agency also provided technical comments, and we have incorporated its suggested changes and those of the expert reviewers as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to interested parties and make copies available to others on request. Please call me at (202) 512-7119 if you or your staff have any questions. Other major contributors include Rosamond Katz, Donna Bulvin, Mary Ann Curran, Hannah Fein, and Jenny Grover.

[Appendix table: locations, HMO model types, and enrollment (as of 1995) of the managed care plans contacted]
Pursuant to a congressional request, GAO reviewed how managed health plans make use of existing clinical practice guidelines. GAO found that: (1) clinical practice guidelines promote greater uniformity within physician networks, encourage improved efficiency and clinical decision-making, and eliminate unnecessary care; (2) several health plans have adopted clinical practice guidelines to control costs, improve performance on standardized measures, receive accreditation, and comply with regulatory requirements; (3) due to time and fiscal constraints, many health plans customize published clinical guidelines rather than generate original guidelines; (4) physicians are more likely to use a clinical practice guideline if it is developed by local health providers; (5) managed health plans customize existing clinical practice guidelines to suit alternative treatments, available resources, population needs, and format and currency concerns; (6) while health plans modify existing clinical practice guidelines to varying degrees, extensive changes could jeopardize the guidelines' effectiveness; and (7) some health plans would prefer that the federal government publish and update evidence on medical conditions and services, develop useful practice guideline tools, and perform outcomes research and medical technology assessments that would help them to develop, modify, and update their guidelines.
In 1996, SAMHSA issued a regulation implementing the Synar amendment. The regulation requires all 50 states, the District of Columbia, and eight insular areas to (1) have in effect and enforce laws that prohibit the sale and distribution of tobacco products to people under 18 years of age, (2) conduct annual random, unannounced inspections, using a valid probability sample of outlets that are accessible to youth, of all tobacco outlets within the state to estimate the percentage of retailers who do not comply with the laws, and (3) report the retailer violation rates to the Secretary of HHS in their annual SAPT block grant applications. SAMHSA requires that each state reduce its retailer violation rate to 20 percent or less by fiscal year 2003. SAMHSA and each state negotiated interim annual target rates that states are required to meet to indicate their progress toward accomplishing the 20 percent goal. Beginning in fiscal year 1997 for most states and in subsequent years for all states, the Secretary can withhold 40 percent of a state's Substance Abuse Prevention and Treatment (SAPT) block grant award if it does not comply with the rate reduction requirements. State fiscal year 2000 SAPT block grant awards ranged from about $2.5 million to $223 million. Also in 1996, SAMHSA provided guidance to states on implementing Synar requirements. SAMHSA issued sample design and inspection guidanceto help states comply with the Synar requirement for conducting random, unannounced inspections of tobacco outlets to estimate the statewide violation rate. The guidance consists primarily of recommended strategies to give states flexibility in selecting a sample design and inspection protocol tailored to their particular circumstances, including state and local laws. 
For example, SAMHSA's inspection protocol guidance suggests that states recruit minors to attempt to purchase tobacco products when conducting inspections but gives states some flexibility regarding the ages of the minors that are used. SAMHSA's guidance requires states to develop and implement a consistent sample design from year to year and a standardized inspection procedure for all inspections so that measurements of violation rates over time are comparable across jurisdictions within a state. SAMHSA's guidance includes a Synar requirement that the states enforce their laws in a manner that can reasonably be expected to reduce the extent to which tobacco products are available to minors. The guidance suggests that states use a variety of activities in their enforcement strategy, such as merchant education, media and community involvement, and penalties. The enforcement activities could be conducted by different agencies, such as those responsible for substance abuse prevention and treatment programs, law enforcement, and state health departments. SAMHSA reviews state-reported information to determine whether states have complied with requirements for enforcing state laws and conducting random unannounced inspections of retail tobacco outlets. In addition to requiring states to provide evidence of their enforcement activities, SAMHSA requires states to provide their sampling methodology, inspection protocol, and tobacco outlet inspection results in their annual SAPT block grant applications. 
In its review, SAMHSA and its contractor determine whether (1) the sample size is adequate to estimate the statewide violation rate and all tobacco outlets (including over-the-counter and vending machines) in the state have a known probability of being selected for inspection; (2) the state assessed the accuracy of lists used to identify the universe of tobacco outlets from which its sample is drawn; (3) the sample design and inspection protocols are consistently implemented each year within the state; and (4) the statewide violation rate is correctly calculated, meets the negotiated annual target, and shows progress toward the 20-percent goal. When data provided in the application are not sufficient to determine state compliance, SAMHSA requests additional information from the state before a final decision on state compliance is made. SAMHSA collects the state-reported data from the SAPT block grant applications and, in 1996, began storing it in an automated database. These data are used to monitor states' compliance with Synar requirements, compare state progress from year to year, and produce an annual report to the Secretary of HHS and the Congress on Synar implementation. SAMHSA also uses the data to help finalize the states' annual retailer violation rates, which are released to the public. For fiscal years 1997 through 1999, the states' reported violation rates showed an overall increase in retailer compliance with state laws prohibiting the sale of tobacco products to minors. The median retailer violation rate declined from 40 percent in 1997 to 24.2 percent in 1999. Violation rates ranged from 7.2 percent in Florida to 72.7 percent in Louisiana for 1997 and from 4.1 percent in Maine to 46.8 percent in the District of Columbia for 1999. SAMHSA has cited 10 states over the 3-year period for being out of compliance with Synar requirements because they did not reach their violation-rate target.
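The rate calculation and target check described above can be sketched in a few lines. This is a hypothetical illustration, not SAMHSA's actual methodology; the sample figures and the 30-percent target are invented for the example.

```python
# Hypothetical sketch of estimating a statewide retailer violation rate
# from a random sample of inspections; figures and target are invented.
def violation_rate(inspections):
    """Percent of valid inspections that resulted in an illegal sale.
    Each entry is True (sold), False (refused), or None (outcome unknown);
    unknown outcomes are excluded from the computation."""
    valid = [sold for sold in inspections if sold is not None]
    if not valid:
        raise ValueError("no valid inspections in sample")
    return 100.0 * sum(valid) / len(valid)

# 6 illegal sales out of 25 valid inspections; 2 outcomes unknown.
outcomes = [True] * 6 + [False] * 19 + [None] * 2
rate = violation_rate(outcomes)
print(f"estimated violation rate: {rate:.1f}%")             # 24.0%
print("meets target" if rate <= 30.0 else "misses target")  # illustrative 30% target
```

In practice the estimate is only as good as the sampling frame and the inspection records behind it, which is the thrust of the findings that follow.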
The Secretary of HHS, however, has not reduced any state's SAPT block grant for noncompliance with Synar. In fiscal years 1997 and 1998, states that failed to comply with Synar requirements were not assessed a penalty because they successfully argued that there were extraordinary circumstances that hindered their inspection efforts. The states that were faced with a potential penalty by the Secretary of HHS for failing to reach their fiscal year 1999 target rates chose to commit additional funds to ensure compliance with the following year's violation-rate target. State Synar implementation practices and SAMHSA oversight adversely affect the quality and comparability of state-reported retailer violation rates. Although SAMHSA approved states' sample designs, inspection protocols, and inspection results, the quality of the estimated statewide violation rates reported for fiscal years 1998 and 1999 is undermined because of several factors: First, some states used inaccurate and incomplete lists from which to select samples of tobacco outlets to inspect. Second, most states used minors younger than 16 to inspect tobacco outlets, and SAMHSA instructed the states to tell minors not to carry identification on inspections. Both of these protocols tend to lower the violation rate. Third, SAMHSA approved some states' violation rates even though they included invalid inspections. Fourth, SAMHSA relied on states to validate violation rates without ensuring that the accuracy of the supporting data was verified, even though a potential reduction in a state's block grant award for not complying with Synar could be an incentive to report artificially low rates. These data quality factors, coupled with the lack of standardization in the protocols states use when inspecting outlets, limit the comparability of retailer violation rates across states.
According to SAMHSA officials, some states used inaccurate and incomplete lists to select random statistical samples of tobacco outlets to inspect, which could have affected the validity of the samples and compromised violation rates reported for fiscal years 1998 and 1999. Most states used a list-based sampling methodology in their sample design, as SAMHSA recommends. When states use list-based sampling to select a sample of tobacco outlets for inspection, SAMHSA requires that they report evidence that they have verified the accuracy and completeness of lists for both over-the-counter and vending machine outlets. However, we found that for fiscal year 1998, 40 states reported to SAMHSA that they did not know the accuracy of the lists they were using. States can use different lists to develop their population of tobacco outlets, but the accuracy and completeness of these lists vary. For example, states can use lists of state-licensed tobacco outlets, but these lists are not always updated by the responsible state agencies. Also, national and state commercial listings can be used, but they often contain many establishments that do not sell tobacco products or may identify the owners of the business but not necessarily each retail outlet. In some rural areas and Midwestern states, developing a complete list of outlets can be difficult because tobacco products are sometimes sold from individuals' homes or other places that are not known to be tobacco outlets. Comments made by several state officials indicate a need by some states for more technical assistance from SAMHSA in addressing state-specific issues--particularly sample design--that affect their compliance with Synar. 
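List-based sampling of this kind can be sketched briefly. The example below is a simplified, hypothetical illustration of a simple random sample drawn without replacement from a verified statewide outlet list, so every outlet has the same known selection probability; real Synar sample designs may stratify or weight, and none of the names here are SAMHSA's.

```python
import random

# Hypothetical verified statewide list of 500 tobacco outlets.
outlets = [f"outlet-{n:03d}" for n in range(1, 501)]

rng = random.Random(2001)          # fixed seed so the draw is reproducible
sample = rng.sample(outlets, 50)   # simple random sample, no replacement

# Under simple random sampling, every listed outlet has the same known
# selection probability: sample size divided by population size.
prob = len(sample) / len(outlets)
print(f"selection probability per outlet: {prob:.2f}")  # 0.10
```

The sketch also makes the report's point concrete: if the list omits outlets (home-based sellers, untracked vending machines), those outlets have a selection probability of zero and the sample is no longer a valid probability sample of the true universe.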
Accurately identifying the population of vending machine outlets accessible to youth in a state is also important, according to SAMHSA's fiscal year 1997 report of Synar implementation and other documents, because vending machines have been a major source that children use to obtain tobacco products. In our review of the state data that SAMHSA provided from SAPT block grant applications for fiscal year 1999, we found that of the 37 states reporting that they inspected vending machine outlets, 11 did not report the population of vending machines accessible to youth in their states as SAMHSA requires. (See app. I.) Further, our review of a few block grant applications showed that states reported that they inspected vending machine outlets when they found them during random inspections of over-the-counter outlets. Some states have had difficulty developing accurate and complete lists of vending machine outlets, in particular, because many of the machines are privately owned and their portability makes them difficult to track. Officials we interviewed told us that over the years there has been a significant decline in vending machine tobacco outlets accessible to minors. However, an NGA representative said that vending machines are and will continue to be a source of tobacco products for minors in some states. The results of a 1999 national survey of middle school and high school students' access to cigarettes show that vending machines continue to be a source of tobacco products for youth, particularly middle school students. For example, when students were asked where, during the past 30 days, they bought their last pack of cigarettes, 2.7 percent of the high school students reported that their purchase was from vending machines. However, 12.9 percent of middle school students reported their last pack of cigarettes was purchased from vending machines. SAMHSA officials told us that states need to be more aggressive in identifying tobacco outlets.
An NGA study of best practices in implementing and enforcing Synar requirements notes that programs that require tobacco retailers to be licensed provide an effective source of information for identifying the outlets. Not all states, however, require tobacco outlets to be licensed. SAMHSA officials said that they believe tobacco licensure programs that require the identification of every tobacco outlet and regular license renewals afford states the best opportunity to develop accurate and complete statewide lists of over-the-counter and vending machine tobacco outlets. However, in comments on a draft of this report, HHS stated that SAMHSA does not have the authority to license tobacco retailers or require states to enact legislation mandating tobacco retailer licensing or registration. The quality of states' violation rates can be particularly affected by the age of the minors used to inspect the tobacco outlets. Research shows that minors who are younger than 16 years of age are much less successful at purchasing tobacco products than older youths. Research also shows, and SAMHSA officials told us, that a small difference in the age of minors can make a significant difference in a state's violation rate because the younger the minor inspectors appear, the less likely store clerks are to sell them tobacco. As a result, using minors younger than 16 could bias the outcome of state inspections by lowering the violation rate. Even though SAMHSA officials are aware of the research results, they allow states to include minors younger than 16 in their inspection protocols. SAMHSA's inspection protocol guidance recommends that states use 15- and 16-year-olds as inspectors because minors younger than 15 are likely to look very young, and their appearance could discourage some retailers from selling them tobacco products. Nearly all states report using youth from two age cohorts as inspectors: 14- and 15-year-olds and 16- and 17-year-olds.
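The potential size of this bias is easy to demonstrate with a weighted average. The sketch below is illustrative only: the cohort purchase rates (16 percent for 14- and 15-year-olds, 51 percent for 16- and 17-year-olds) are assumptions loosely based on rates states have reported, and the cohort shares are invented.

```python
# Blended statewide rate as a weighted average of cohort purchase rates.
# Both the rates and the inspector-age shares are illustrative.
def blended_rate(shares, rates):
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(shares[cohort] * rates[cohort] for cohort in rates)

rates = {"14-15": 0.16, "16-17": 0.51}

young_heavy = blended_rate({"14-15": 0.80, "16-17": 0.20}, rates)
older_heavy = blended_rate({"14-15": 0.20, "16-17": 0.80}, rates)
print(f"80% young inspectors: {young_heavy:.0%}")  # 23%
print(f"80% older inspectors: {older_heavy:.0%}")  # 44%
```

The same underlying retailer behavior yields a much lower measured rate when younger inspectors dominate the sample, which is why the age mix matters for comparability.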
For fiscal year 1999, 43 states reported using 14- and 15-year-olds as inspectors, and 16 of these states used them in more than 50 percent of their inspections. (See app. II.) Five of the 16 states (Georgia, New Hampshire, North Carolina, Tennessee, and Texas) reported the highest percentages of inspections that were conducted by 14- and 15-year-olds--73 percent to 94 percent. (See fig. 1.) Four of the 5 states also reported that a large proportion of their fiscal year 1998 inspections were conducted by 14- and 15-year-olds. Tennessee and Texas officials told us they did not purposely try to recruit large numbers of 14- and 15-year-olds. They said that they selected those minors that were willing to participate in the inspections. Inspection data supporting the violation rates for North Carolina and Tennessee show that inspections conducted by 14- and 15-year-olds resulted in lower purchase rates than inspections by 16- and 17-year-olds. For example, Tennessee reported that 14- and 15-year-old inspectors were able to purchase tobacco 16 percent of the time, whereas the 16- and 17-year-olds had a 51-percent purchase rate. New York state officials' analysis of their state inspection results for fiscal year 2000 showed that 14- and 15-year-olds were able to purchase tobacco 8 percent of the time, whereas the 16- and 17-year-olds had a 21-percent purchase rate. At the time of our review, SAMHSA officials told us that they had not thoroughly examined states' use of 14- and 15-year-old inspectors and the potential impact on retailer violation rates, but they acknowledged that this is something that will require a more comprehensive evaluation. Another age-related inspection protocol procedure that can affect retailer violation rates is whether minor inspectors are told to carry valid identification on inspections and required to show it when asked. The research on this issue is mixed.
Some research suggests that when minors are asked to show identification, retailers are less likely to sell them tobacco products. Other research suggests, and some state officials told us, that the likelihood of an illegal sale is greater if minors show identification when asked than if identification is not shown. As a result, having and showing identification when asked could potentially result in an illegal tobacco sale and a higher retailer violation rate. About half of the illegal sales in one state's inspections occurred after the minor showed proof of age. Research suggests that some clerks may sell minors tobacco products because they have difficulty quickly determining an individual's age from a date of birth on his or her identification. According to HHS, because of safety concerns, SAMHSA recommends that minors not carry identification but answer truthfully about their age if asked by a store clerk. Research also suggests that the sex of the minor inspector can bias the inspection result. For example, when controlling for the effects of both age and sex of the inspector, one researcher found that girls were able to purchase at a 39-percent rate, compared to boys, who had a 28-percent purchase rate. Unlike previous studies, this research controlled for both factors simultaneously. SAMHSA approved four states' retailer violation rates for fiscal years 1998 and 1999 that were inaccurately calculated because they included inspections in which the ages of minor inspectors and the inspection results were not known. SAMHSA requires states to report the ages of minor inspectors in part to confirm that the ages of the inspectors are within an acceptable range. When the ages of minors used in state inspections are unknown, SAMHSA officials told us that they consider the inspections invalid, and the inspection results should be excluded from the violation rate computation.
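The effect of the exclusion rule on a computed rate can be shown with invented numbers (these are not any state's actual figures). The sketch assumes the invalid inspections had effectively been counted as non-violations, so dropping them shrinks the denominator and raises the rate.

```python
# Illustrative recalculation: excluding invalid inspections that had been
# counted as non-violations raises the estimated violation rate.
def recalculated_rate(total, violations, invalid):
    """Rate (percent) after excluding `invalid` inspections, assuming none
    of the excluded inspections was a confirmed violation."""
    valid = total - invalid
    if valid <= 0:
        raise ValueError("no valid inspections remain")
    return 100.0 * violations / valid

reported = 100.0 * 30 / 200                 # 15.0% with invalid inspections kept
corrected = recalculated_rate(200, 30, 40)  # 18.75% after excluding 40 of them
print(f"reported {reported:.2f}%, corrected {corrected:.2f}%")
```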
However, we found that SAMHSA approved and published violation rates reported by Florida, Kansas, Louisiana, and Minnesota that included inspection results in which the ages of the minor inspectors were unknown. Moreover, three of these states' violation rates included some inspections where neither the age of the minors nor the outcomes of the inspections were known. Had the invalid inspections been excluded, the violation rates for Florida, Louisiana, and Minnesota would have been higher. (See table 1.) However, none of the four states would have missed its target based on the recalculated rate. SAMHSA officials said that there were reasons for accepting the states' violation rates. For example, they said that they did not exclude Kansas' invalid inspections because the state provided the outcomes of the inspections. Even though Florida's retailer violation rate was based entirely on inspections in which the ages of the inspectors and the outcomes by age were unknown, SAMHSA accepted the rate because of the large number of inspections the state conducted and its low reported violation rate. SAMHSA did not ensure that the accuracy of the data that states used to support their fiscal year 1998 and 1999 estimates of retailer violation rates was verified. SAMHSA reviewed the information states reported in their SAPT block grant applications. However, SAMHSA relied on the states to assess the quality of the data they used to develop their rates, even though the potential 40-percent reduction in a state's block grant for not meeting annual violation rate goals could provide an incentive for some states to report artificially low violation rates. To improve their oversight, during the time of our review, SAMHSA officials completed pilot testing of their state data review protocol and began visiting states to evaluate their systems of data collection and documentation for Synar implementation.
The draft review protocol SAMHSA officials said they were using includes questions about the states' sampling and inspection procedures and practices that could help in making an assessment of the quality of the data states used to develop violation rates. SAMHSA officials said that because of resource constraints, they plan to conduct these reviews approximately once every 3 to 4 years for each state. Differences in how states implement their inspection protocols, along with data quality weaknesses, limit the comparability of retailer violation rates across states. SAMHSA does not require all states to use the same set of protocols when conducting inspections of tobacco outlets. Although SAMHSA provides inspection guidelines, each state is allowed the flexibility to develop inspection protocols in keeping with its own circumstances, including restrictions in state law. Given this flexibility, there is inconsistent implementation of inspection protocols across states, which makes comparisons of retailer violation rates difficult. States' use of different ages and sexes of minor inspectors and different criteria in determining what type of tobacco sale is a violation punishable under state law can limit comparisons of violation rates across states. For example, the age of minor inspectors is an issue in comparisons because some states use higher proportions of younger inspectors than other states and younger minors tend to have lower purchase rates than older minors. Also, the states' use of minor boys and girls as inspectors in different proportions can limit comparisons of violation rates because females tend to have higher tobacco purchase rates than males. Another inspection procedure that can limit the comparability of violation rates between states is whether the state uses the "consummated" or the "unconsummated" buy protocol.
In a consummated buy, the minor inspector completes the purchase and takes possession of the tobacco product, whereas in an unconsummated buy the minor inspector attempts or asks to purchase the tobacco product and the clerk accepts payment, but the inspector leaves without taking the product. Some states use the unconsummated-buy protocol to protect minor inspectors, who cannot legally purchase tobacco products. For Synar inspections, if a sale is made, it is considered a successful attempt, or a violation, regardless of which protocol is used. However, according to SAMHSA and other officials we interviewed, choice of the buy protocol can affect a state's violation rate. When the unconsummated-buy protocol is used, there could be a question of whether a violation of state law actually occurred if the minor did not take possession of the tobacco product. Some merchants are challenging in court the penalties states assess under state law for violations based on unconsummated buys. If these challenges are upheld or not resolved in those states, merchants may continue to sell tobacco products to minors because they would not expect a penalty for their actions and the states' retailer violation rates could be adversely affected. This inconsistent application of the consummated- and unconsummated-buy protocols by states and the potential effect on retailer violation rates could limit comparison of rates across states. SAMHSA's fiscal year 1999 data show that 39 states used the consummated-buy protocol and 12 states used the unconsummated-buy protocol when inspecting tobacco outlets. (See app. I.) Comparing retailer violation rates across states could be useful in determining national progress toward the goal of reducing minors' access to tobacco products and in identifying best practices used by states that seem to be making better progress than others. 
Because of the lack of uniform inspection protocols across states, however, SAMHSA officials and others do not suggest making such comparisons. A little more than half the states reported in their fiscal year 1999 block grant applications that violators of youth tobacco access laws were penalized as part of the state's enforcement strategy. All states have laws that allow the use of penalties, but not all states reported that penalties were assessed, according to SAMHSA data. The states reported using a variety of enforcement actions, such as warnings, fines, and suspensions of retailers' licenses. SAMHSA officials said that in their review of state-reported information for Synar compliance, they look for evidence of active enforcement, such as the assessment of penalties, and make inquiries to state officials when the evidence is not apparent. However, SAMHSA officials also said that ensuring state enforcement of youth tobacco access laws has not been their primary focus because they were relying on FDA's enforcement activities, which included assessing monetary civil penalties against retailers. The officials said that because of the discontinuation of FDA's program, they need to examine states' evidence of active enforcement more closely to ensure that states are enforcing their youth tobacco access laws. Research shows that enforcement strategies that include the assessment of penalties are successful at reducing minors' access to tobacco products. In our review of SAMHSA's summary data for fiscal year 1999, we found that 28 states reported specific evidence of having imposed penalties for violations of state youth tobacco access laws. (See app. I.) These penalties included fines against retailers and sales clerks and the suspension or revocation of retailers' licenses. Seven states reported that they took other law enforcement actions against violators, such as issuing warning letters or citations.
All states have laws that allow the assessment of penalties, but, for fiscal year 1999, not all states reported using penalties as part of their enforcement strategies. Although states have the flexibility to determine which enforcement strategies are appropriate for compliance with Synar, SAMHSA maintains that state laws are more successful in changing retailer behavior regarding selling tobacco to minors when penalties are used, and SAMHSA encourages states to use them. Florida is an example of a state that has adopted a statewide enforcement strategy that penalizes violators of its youth tobacco access laws. In its fiscal year 1998 application, Florida reported that 3 percent of the merchants who were found out of compliance with the state's law had their licenses revoked or suspended and 93 percent were assessed fines ranging from $250 to $1,000. SAMHSA officials said they look for evidence of active enforcement, such as the assessment of penalties, in state-reported information on Synar compliance and in some cases ask the state for an explanation when the evidence is not apparent. SAMHSA officials also said, however, that prior to the discontinuance of the FDA tobacco control program in March 2000, they relied on FDA to ensure enforcement of requirements to reduce youth access to tobacco products. As a regulatory agency, FDA took an approach different from that taken by SAMHSA in prohibiting the sale of tobacco products to minors. FDA's discontinued tobacco control program focused on enforcement and required that penalties be assessed against repeat violators of FDA's regulation. FDA contracted with states to conduct inspections of tobacco outlets. FDA's contract stipulated that each state conduct at least 375 unannounced monthly compliance inspections of merchants that sold tobacco products over-the-counter, and states were instructed to re-inspect violators. FDA's goal was to have compliance checks performed throughout the entire state.
If an inspection resulted in a violation, the state was expected to re-inspect the establishment within 90 days and continue inspections until compliance was achieved. For the first violation, the retailer would receive a warning letter. For subsequent offenses, civil monetary penalties were to be assessed ranging from $250 for a second offense to $10,000 for a fifth offense. At the time the program was discontinued, FDA had imposed a maximum penalty of $1,500 and collected an estimated total of $1 million. Although states were allowed to use FDA contract funds for enforcement, SAMHSA officials said that states are permitted to use SAPT block grant funds for enforcement activities only if a citation is issued for a violation at the time of the inspection. States are permitted to use SAPT block grant funds to develop sample designs and conduct inspections of tobacco outlets. SAMHSA officials told us that states would need federal funds to support broader enforcement activities now that FDA's program has been discontinued. Although NGA recognizes the importance of funding enforcement, an NGA representative told us that the association is not currently advocating additional federal funding for state enforcement activities. In commenting on this report, HHS noted that state funds and tobacco settlement funds are other possible sources of funding for enforcement activities. Officials for SAMHSA, FDA, and a state we consulted told us that they believe that without FDA's enforcement of its regulation against the sale of tobacco products to minors, some tobacco retailers will become more lax and sales to minors will increase. FDA officials also said they do not believe tobacco retailers will change their behavior without knowing that violations will result in penalties. 
SAMHSA officials said that they have not focused as much on state enforcement actions under Synar implementation because of their reliance on FDA to enforce its tobacco control regulation, which included penalties against retailers. They said that because FDA's program was discontinued in March 2000, they see the need to ensure that states show evidence of active enforcement of their laws. Research suggests that enforcement strategies that incorporate inspections of all retailers followed by penalties and re-inspections are successful in reducing the availability of tobacco to minors. The components of an effective enforcement strategy include an enforceable law with penalties sufficiently severe to deter potential violators, according to the research. NGA concluded from its interviews with representatives of state agencies on best practices in enforcing Synar that the single most effective factor in reducing tobacco access to minors is the establishment of a statewide inspection and enforcement program that holds merchants and clerks accountable for their actions. Some state officials told us they believe that aggressive penalties assessed against the retailer can be very effective in changing merchant behavior. New York, for example, plans to begin confiscating merchants' lottery licenses for failure to comply with laws prohibiting the sale of tobacco products to minors. The goal of the Synar amendment is to help reduce the sale of tobacco products to minors through state laws that make it illegal for retailers to sell them tobacco products. States are responsible for enacting and enforcing laws that restrict youth access to tobacco products and for reporting the progress in retailer compliance with Synar requirements. However, state implementation of Synar and SAMHSA's oversight raise concern about the quality of state estimates of the percentage of retailers that sell tobacco products to minors. 
These concerns center on the use of inaccurate lists of retail outlets from which to draw a sample to inspect; the use of inspection protocols among the states that could bias retailer violation rates and limit their comparability, such as the age of minor inspectors; the acceptance of violation rates that contain invalid inspection results; and the reliance on states to validate their inspection results without ensuring that the supporting data are verified. SAMHSA recently began visiting states to check their inspection practices, but more could be done to improve the quality of the inspection results and enhance the usefulness of retailer violation rates in evaluating national progress toward reducing minors' access to tobacco products. The states have flexibility in developing strategies to help enforce their youth tobacco access laws. According to researchers and state and SAMHSA officials, assessing penalties for selling tobacco to minors, as done under FDA's program, can be an effective enforcement tool for reducing minors' access. For fiscal year 1999, a little more than half the states reported evidence of using penalties to help enforce their laws. In its oversight of state enforcement activities, SAMHSA has decided to more closely examine states' use of different enforcement strategies, including the assessment of penalties as sanctions against violators of youth tobacco access laws. 
To help ensure the quality of states' estimates of tobacco retailer violation rates under the Synar amendment and to make the rates more comparable across states, we recommend that the Secretary of HHS direct the Administrator of SAMHSA to help states improve the validity of their samples by working more closely with them in developing ways to increase the accuracy and completeness of the lists of tobacco outlets from which they draw random samples for inspections; revise the inspection protocol guidance to better reflect research results, particularly regarding the ages of minor inspectors, and work with states to develop a more standardized inspection protocol consistent with state law and more uniform implementation across states; and ensure that all states' retailer violation rates exclude invalid inspections, particularly those in which the ages of minors and outcomes of inspections are unknown. We obtained comments on a draft of this report from HHS. (See app. III for agency comments.) In general, HHS agreed with our findings and recommendations and found our report to be useful guidance for future changes in Synar implementation. HHS disagreed with our recommendation that SAMHSA require more standardization in inspection protocol development consistent with state laws and more uniform implementation across states. HHS stated that this action would accomplish very little in the way of meaningful comparisons of violation rates across states without federal legislation requiring states to modify their practices and could possibly lead to changes in state laws pertaining to inspection protocols. We believe, however, that federal legislation may not be necessary. Consistencies already exist in inspection protocols among many of the states, such as in the ages of minors used to conduct inspections.
Identifying other key inspection protocols that states may be able to adopt, such as whether minor inspectors should carry identification, would provide a core group of protocols that could enhance comparisons of retailer violation rates across states. In light of HHS' comment, however, we revised our recommendation to have the Secretary of HHS direct SAMHSA to collaborate with states in developing more standardization in protocols and uniform implementation across states. HHS officials also provided comments intended to increase the report's accuracy. Where appropriate, we have incorporated HHS' suggested changes and technical comments in this report. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to others who are interested and make copies available to others who request them. If you or your staff have any questions about this report, please contact me at (202) 512-7119 or James O. McClyde at (202) 512-7152. Darryl W. Joyce, Paul T. Wagner, Jr., and Arthur J. Kendall made key contributions to this report. [Appendix table listing, by state, the types of law enforcement action taken (warnings, fines, summonses, citations, license suspensions, and defendants charged under misdemeanor statutes); the state-by-state layout of the table was not recoverable.] Table notes: In some states, state laws or regulations either banned tobacco vending machines or restricted youth access; according to SAMHSA officials, states that have laws that restrict tobacco vending machines are not required to inspect them. For some states, the specific law enforcement action taken was not reported.
Every day, about 3,000 young people become regular smokers. It is estimated that one-third of them will die from smoking-related diseases. If children and adolescents can be prevented from using tobacco products, they are likely to remain tobacco-free for the rest of their lives. In 1992, Congress enacted legislation, known as the Synar amendment, to reduce the sale and distribution of tobacco products to individuals under the age of 18. States are required to enforce laws that prohibit tobacco sales to minors, conduct random inspections of tobacco retail or distribution outlets to estimate the level of compliance with Synar requirements, and report the results of these efforts to the Department of Health and Human Services (HHS). The Synar amendment and regulation are the only federal requirements that seek to prohibit the sale and distribution of tobacco products to minors. GAO found that weaknesses in the states' implementation of Synar and in HHS oversight may be adversely affecting the quality and comparability of state-reported estimates of the percentage of retailers that violate laws prohibiting tobacco sales to minors. First, some states used inaccurate and incomplete lists of over-the-counter and vending machine tobacco outlets from which to select samples for inspection, which affects the estimated statewide violation rate. Second, states allowed the use of minors younger than 16 as inspectors, even though research suggests that using such minors can artificially lower violation rates. Third, HHS approved a few states' reported violation rates even though the rates included inspection results that were invalid because the ages of the inspectors and the outcomes of the inspections were unknown.
Fourth, HHS relied on states to validate their own inspection results with limited verification of the accuracy of state data even though the potential reduction in a state's block grant award for not meeting annual violation-rate goals could be an incentive for states to report artificially low rates. A little more than half the states reported for fiscal year 1999 that they used fines and suspension or revocation of retailers' licenses to penalize violators of youth tobacco access laws as part of their enforcement strategy. States also reported issuing warning letters and citations. HHS requires states to report evidence of actions taken to enforce state laws but does not require the use of penalties as an enforcement tool. Research shows that penalties reduce minors' access to tobacco products.
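The sampling logic behind the Synar violation-rate estimates (drawing a random sample from a list of tobacco outlets, inspecting the sampled outlets, and projecting a statewide rate) can be sketched as follows. This is an illustrative sketch only; the outlet list, sample size, and simulated inspection outcomes are all hypothetical, and the function name is our own:

```python
import math
import random

def estimate_violation_rate(outlet_list, sample_size, seed=0):
    """Draw a simple random sample of outlets and project a statewide
    violation rate with a 95% margin of error. Inspection outcomes are
    simulated here; in practice each sampled outlet would be visited
    by a minor inspector under the state's protocol."""
    random.seed(seed)
    sample = random.sample(outlet_list, sample_size)
    # Simulated outcome: True means the outlet sold to a minor.
    violations = sum(1 for outlet in sample if outlet["sold_to_minor"])
    p = violations / sample_size
    # Normal-approximation 95% margin of error for a proportion.
    moe = 1.96 * math.sqrt(p * (1 - p) / sample_size)
    return p, moe

# Hypothetical frame: 2,000 outlets, 20% of which would sell to a minor.
outlets = [{"id": i, "sold_to_minor": i % 5 == 0} for i in range(2000)]
rate, moe = estimate_violation_rate(outlets, sample_size=400)
print(f"Estimated violation rate: {rate:.1%} +/- {moe:.1%}")
```

The margin of error shrinks as the sample grows, but no sample size can compensate for an inaccurate or incomplete outlet list: if outlets are missing from the frame, the sample is drawn from the wrong population and the estimate is biased regardless of how many inspections are conducted.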
The mission of the Customs Service is to ensure that all goods and persons entering and exiting the United States do so in compliance with all U.S. laws and regulations. It does this by (1) enforcing the laws governing the flow of goods and persons across the borders of the United States and (2) assessing and collecting duties, taxes, and fees on imported merchandise. During fiscal year 1997, Customs collected $22.1 billion in revenue at more than 300 ports of entry, and it processed nearly 450 million passengers who entered the United States during the year. To accomplish its mission, Customs is organized into six business areas--trade compliance, outbound, passenger, finance, human resources, and investigations. Each business area is described below. The trade compliance business area includes enforcement of laws and regulations associated with the importation of goods into the United States. To enforce compliance with the trade laws and regulations, Customs (1) works with the trade community to promote understanding of applicable laws and regulations, (2) selectively examines cargo to ensure that only eligible goods enter the country, (3) reviews documentation associated with cargo entries to ensure that it is properly valued and classified, (4) collects billions of dollars annually in duties, taxes, and fees associated with imported cargo, (5) assesses fines and penalties for noncompliance with trade laws and regulation, and (6) manages the collection of these moneys to ensure that all trade-related debts due to Customs are paid and properly accounted for. The outbound business area includes Customs operations related to the enforcement of laws and regulations associated with the movement of merchandise and conveyances from the United States. To enforce compliance with these laws and regulations, Customs (1) selectively inspects cargo at U.S. 
ports to guard against the exportation of illegal goods, such as protected technologies, stolen vehicles, and illegal currency, (2) collects, disseminates, and uses intelligence to identify high-risk cargo and passengers, (3) seizes and accounts for illegal cargo, (4) assesses and collects fines and penalties associated with the exportation of illegal cargo, and (5) physically examines baggage and cargo at airport facilities for explosive and nuclear materials. In addition, the outbound business area includes collecting and disseminating trade data within the federal government. Accurate trade data are crucial to establishing accurate trade statistics on which to base trade policy decisions and negotiate trade agreements with other countries. By the year 2000, Customs estimates that exports will be valued at $1.2 trillion, as compared to $696 billion in 1994. The passenger business area includes processing all passengers and crew of arriving and departing (1) air and sea conveyances and (2) noncommercial land vehicles and pedestrians. In fiscal year 1997, Customs processed nearly 450 million travelers and, by the year 2000, expects almost 500 million passengers to arrive in the United States annually. Many of Customs' passenger activities focus on illegal immigration and drug smuggling and are coordinated with other federal agencies, such as the Immigration and Naturalization Service and the Department of Agriculture's Animal and Plant Health Inspection Service. Activities include targeting high-risk passengers, which requires timely and accurate information, and physically inspecting selected passengers, baggage, and vehicles to determine compliance with laws and regulations. The finance business area includes asset and revenue management activities. Asset management consists of activities to formulate Customs' budget; properly allocate and distribute funds; and acquire, manage, and account for personnel, goods, and services.
Revenue management encompasses all Customs activities to identify and establish amounts owed Customs, collect these amounts, and accurately report the status of revenue from all sources. Sources of revenue include duties, fees, taxes, other user fees, and forfeited currency and property. The revenue management activities interrelate closely with the revenue collection activities in the trade compliance, outbound, and passenger business areas. The human resources business area is responsible for filling positions, providing employee benefits and services, training employees, facilitating workforce effectiveness, and processing personnel actions for Customs' 18,000 employees and managers. The investigations business area includes activities to detect and eliminate narcotics and money laundering operations. Customs works with other agencies and foreign governments to reduce drug-related activity by interdicting (seizing and destroying) narcotics, investigating organizations involved in drug smuggling, and deterring smuggling efforts through various other methods. Customs also develops and provides information to the trade and carrier communities to assist them in their efforts to prevent smuggling organizations from using cargo containers and commercial conveyances to introduce narcotics into the United States. To carry out its responsibilities, Customs relies on information systems and processes to assist its staff in (1) documenting, inspecting, and accounting for the movement and disposition of imported goods and (2) collecting and accounting for the related revenues. Customs' Office of Information and Technology (OIT) fiscal year 1998 budget is about $147 million for information management and technology activities. Customs expects its reliance on information systems to increase as a result of its burgeoning workload. 
For 1995 through 2001, Customs estimates that the annual volume of import trade between the United States and other countries will increase from $761 billion to $1.1 trillion. This will result in Customs processing an estimated increase of 7.5 million commercial entries--from 13.1 million to 20.6 million annually--during the same period. Recent trade agreements, such as the North American Free Trade Agreement (NAFTA), have also increased the number and complexity of trade provisions that Customs must enforce. Customs recognizes that its ability to process the growing volume of imports while improving compliance with trade laws depends heavily on successfully modernizing its trade compliance process and its supporting automated systems. To speed the processing of imports and improve compliance with trade laws, the Congress enacted legislation that eliminated certain legislatively mandated paper requirements and required Customs to establish the National Customs Automation Program (NCAP). The legislation also specified certain functions that NCAP must provide, including giving members of the trade community the capability to electronically file import entries at remote locations and enabling Customs to electronically process "drawback" claims. In response to the legislation, Customs began in 1994 to reorganize the agency, streamline operations, and modernize the information systems that support operations. As computer-based systems have become larger and more complex over the last decade, the importance of and reliance on information systems architectures have grown steadily.
These comprehensive "construction plans" systematically detail the full breadth and depth of an organization's mission-based "modus operandi" in (1) logical terms, such as defining business functions and providing high-level descriptions of information systems and their interrelationships, and (2) technical terms, such as specifying hardware, software, data, communications, security, and performance characteristics. Without an architecture to guide and constrain a modernization program, there is no systematic way to preclude either inconsistent system design and development decisions or the resulting suboptimal performance and added cost associated with incompatible systems. The Congress and the Office of Management and Budget (OMB) have recognized the importance of agency information systems architectures. The 1996 Clinger-Cohen Act, for example, requires Chief Information Officers (CIO) to develop, maintain, and facilitate integrated system architectures. In addition, OMB has issued guidance that, among other things, requires agencies' information systems investments to be consistent with federal, agency, and bureau architectures. OMB has also issued guidance on the development and implementation of agency information technology architectures. Treasury has also issued to its bureaus, including Customs, guidance on developing an information systems architecture. This guidance, known as the Treasury Information Systems Architecture Framework (TISAF), is also included in OMB's guidance. According to Treasury, TISAF is intended to help reduce the cost, complexity, and risk associated with information technology development and operations. In July 1997, Treasury issued additional guidance to complement TISAF. This guidance, which was finalized in September 1997, provides "how to" processes for developing an information systems architecture in accordance with TISAF.
Customs has several efforts underway to develop and acquire new information systems and evolve (i.e., maintain) existing ones to support its six business areas. Customs' fiscal year 1998 budget for information management and technology activities is about $147 million. Customs' major information technology effort is its Automated Commercial Environment (ACE) system. In 1994, Customs began to develop ACE to replace its existing automated import system, the Automated Commercial System. ACE is intended to provide an integrated, automated information system for collecting, disseminating, and analyzing import-related data and ensuring the proper collection and allocation of revenues, totaling about $19 billion annually. According to Customs, ACE is planned to automate critical functions that the Congress specified when it established NCAP. Customs reported that it spent $47.8 million on ACE as of the end of fiscal year 1997. In November 1997, Customs estimated it would cost $1.05 billion to develop, operate, and maintain ACE over the 15 years from fiscal years 1994 through 2008. Customs plans to deploy ACE to all 342 ports that handle commercial cargo imports. Customs plans to develop and deploy ACE in multiple phases. According to Customs, the first phase, known as NCAP, is to be an ACE prototype. Customs currently plans to deploy NCAP in four releases. The first is scheduled to be deployed for field evaluation at three locations beginning in May 1998, and the fourth is scheduled for October 1999. Customs, however, has not adhered to previous NCAP deployment schedules. Specifically, implementation of the NCAP prototype slipped from January 1997 to August 1997 and then again to a series of four releases beginning in October 1997, with the fourth release starting in June 1998. Customs also has several other efforts underway to modify or enhance existing information systems that support its six business areas. 
For example, in fiscal year 1998, Customs plans to spend about $3.7 million to enhance its Automated Export System (AES), which supports the outbound business area and is designed to improve Customs' collection and reporting of export statistics and to enforce export regulations. In addition, Customs plans to spend another $4.6 million to modify its administrative systems supporting its finance and human resource business areas. Examples of other systems that Customs plans to modify or enhance are the Automated Commercial System, the Treasury Enforcement and Communication System, and the Seized Asset and Case Tracking System. In May 1996, we reported that Customs was not prepared to select an architecture and develop ACE because it was not effectively applying critical management practices that help organizations mitigate the risks associated with modernizing automated systems and better position themselves for success. Specifically, Customs (1) lacked clear accountability for ensuring successful implementation of NCAP requirements, (2) selected an information systems architecture for ACE and other systems without first analyzing its business requirements, (3) lacked policies and procedures to manage ACE and other systems as investments, and (4) did not ensure that systems under development adhered to Customs' own system development policies. As a result of our recommendations, Customs took the following actions:

- Assigned day-to-day responsibility for implementing NCAP to the Assistant Commissioner, Office of Information and Technology.
- Initiated an effort, with contractor assistance, to develop an enterprise information systems architecture.
- Designated an information technology investment review board (IRB) and hired a contractor to develop investment management policies and procedures. The contractor completed its work in mid-1997, and the agency is in the process of implementing and institutionalizing these information technology investment management processes and procedures.
- Revised its Systems Development Life Cycle (SDLC), conducted ACE cost-benefit analyses, instituted SDLC compliance reviews, and prepared a variety of ACE-related project plans. Customs also developed processes to ensure that SDLC compliance is an ongoing activity.

In May 1997, we reported that significant weaknesses continue to be identified during audits of Customs' financial statements that hinder Customs' ability to provide reasonable assurance that sensitive data maintained in automated systems, such as critical information used to monitor Customs' law enforcement operations, are adequately protected from unauthorized access and modification. Since then, Treasury's Inspector General has reported that Customs' computer systems continue to be vulnerable to unauthorized access. Specifically, the Inspector General reported that security weaknesses could allow for unauthorized modification and deletion of application and systems software and data in Customs computer systems that support trade, financial management, and law enforcement activities. Treasury and Customs officials recognize that Customs' systems architecture is not complete and plan to complete it. For five of its six business areas (outbound, passenger, finance, human resources, and investigations), Customs' architecture does not (1) describe all the agency's business functions, (2) outline the information needed to perform the functions, or (3) completely identify the users and locations of the functions. Further, while the architecture and related documentation describe business functions and users and locations for one business area (trade compliance), they do not identify the information needs and flows for all the functions.
Nonetheless, Customs has defined many characteristics of its information systems' hardware, software, communications, data management, and security components. Because these characteristics are not based on a complete understanding of its enterprisewide functional and information needs, Customs does not have adequate assurance that its information systems will optimally support its ability to (1) fully collect and accurately account for billions of dollars in annual federal revenue and (2) allow for the expeditious movement of legal goods and passengers across our nation's borders while preventing and detecting the movement of illegal goods and passengers. Reflecting the general consensus in the industry that large, complex systems development and acquisition efforts should be guided by explicit architectures, we issued a report in 1992 defining a comprehensive framework for designing and developing systems architectures. This framework divides systems architectures into a logical component and a technical component. The logical component ensures that the systems meet the business needs of the organization. It provides a high-level description of the organization's mission and target concept of operations; the business functions being performed and the relationships among functions; the information needed to perform the functions; the users and locations of the functions and information; and the information systems needed to support the agency's business needs. An essential element of the logical architecture is the definition of the component interdependencies (e.g., information flows and interfaces). The technical component ensures that systems are interoperable, function together efficiently, and are cost-effective over their life cycles (including maintenance costs). 
The technical component details specific information technology and communications standards and approaches that will be used to build systems, including those that address critical hardware, software, communications, data management, security, and performance characteristics. TISAF, Treasury's departmentwide architecture framework, is generally consistent with our framework. According to TISAF, a complete architecture has the following four components, each representing a different perspective or view of the agency:

- Functional: a representation of what the organization does (i.e., its mission and business processes) and how the organization can use information systems to support its business operations.
- Work: a description of where and by whom information systems are to be used throughout the agency.
- Information: a description of what information is needed to support business operations.
- Infrastructure: a description of the hardware and "services" (e.g., software and telecommunications) needed to implement information systems across the agency.

TISAF's functional, work, and information components together form the logical view of the architecture, while its infrastructure represents the technical view of the architecture. To develop and evolve systems that effectively support business functions, a top-down process must be followed. The logical architecture (e.g., business functions and information flows) is defined first and then used to specify supporting systems (e.g., interfaces, standards, and protocols). Treasury endorses this top-down approach. Treasury officials responsible for developing and implementing TISAF stated that development of the architecture begins with defining and describing the agency's major business functions.
Once this is accomplished, the agency can identify the relationships among the functions, the information needed to perform the functions, the users and locations of the functions, and the existing and needed applications and related information technology required to execute and support the business functions. According to Treasury guidance, the architecture's infrastructure component (i.e., its systems specifications and standards) should be derived from the other three components. In addition, the guidance states that each element of the architecture must be integrated and traceable, and the relationships between them must be explicit. Customs does not have a complete systems architecture to effectively and efficiently guide and constrain the millions of dollars it invests each year in developing, acquiring, and maintaining the information systems that support its six business areas. In summary, for five of Customs' six business areas (outbound, passenger, finance, human resources, and investigations), the architecture neither defines all critical business functions nor identifies all information needs (including information security) and information flows within and among the business areas. For the sixth business area (trade compliance), Customs has defined all the business functions and users and work locations and some, but not all, of the information and data needs and flows. With respect to the business functions, Customs' architecture provides descriptions of only 29 of 79 collective functions in its six business areas. The architecture does not describe the other 50 functions in sufficient detail to understand what they are, how they relate, who will perform them, where they will be performed, what information they will produce or consume, and how the information should be handled (i.e., captured, stored, processed, managed, distributed, and protected). Table 1 summarizes by business area the number of functions defined in the architecture. 
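The top-down derivation that Treasury's guidance describes, in which infrastructure choices must trace back to defined business functions with known users and information needs, can be illustrated with a minimal sketch. The class names, field names, and example functions below are our own invention, not Treasury's or Customs':

```python
from dataclasses import dataclass, field

@dataclass
class FunctionalView:      # what the organization does
    business_functions: list = field(default_factory=list)

@dataclass
class WorkView:            # where and by whom systems are used
    users_and_locations: dict = field(default_factory=dict)

@dataclass
class InformationView:     # what information the functions need
    information_needs: dict = field(default_factory=dict)

@dataclass
class InfrastructureView:  # technical standards, derived last
    standards: list = field(default_factory=list)

def derive_infrastructure(func, work, info):
    """Top-down rule: a technical standard is justified only by a
    function that is defined in all three logical views."""
    traceable = [f for f in func.business_functions
                 if f in info.information_needs and f in work.users_and_locations]
    return InfrastructureView(standards=[f"standard-for-{f}" for f in traceable])

func = FunctionalView(["examine cargo", "seize and process cargo"])
work = WorkView({"examine cargo": "port inspectors"})
info = InformationView({"examine cargo": ["cargo manifest", "entry summary"]})
infra = derive_infrastructure(func, work, info)
print(infra.standards)  # only "examine cargo" is fully defined in all views
```

In this sketch "seize and process cargo" yields no infrastructure standard because its users and information needs are undefined, which mirrors the report's point: technical products specified without the logical views behind them may be unnecessary or insufficient.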
Examples of undefined functions in the outbound, passenger, investigations, and human resources business areas are as follows:

- Outbound: The architecture names "examine cargo" and "seize and process cargo" as 2 of the 13 functions in this business area. However, the architecture does not describe how to examine cargo, what cargo to examine, when to examine cargo, what information/data is needed to examine cargo, how the results of the cargo examination are used and by whom, or how cargo examination data should be protected. Similarly, the architecture does not describe when cargo will be seized and by whom, what criteria are used to seize cargo, how cargo will be seized and accounted for, or what information is required to account for the seized cargo (e.g., date of seizure, company name, and commodity).
- Passenger: The architecture names "identify compliance target" and "process non-compliant passengers/conveyances" as 2 of the 13 functions in this business area. However, the architecture does not describe how targets are identified, who identifies targets, how target information is disseminated, what information is collected to determine compliance, or how target information needs to be protected. Likewise, the architecture does not define what constitutes a compliant passenger/conveyance, how passengers are processed and by whom, or where passengers/conveyances are processed.
- Investigations: The architecture names "perform interdiction" as 1 of the 10 functions in this business area. However, the architecture does not describe how an interdiction is conducted, who conducts interdictions, what criteria are used to identify potential passengers or cargo to interdict, what happens to the seized persons or cargo, or how interdiction information needs to be protected.
- Human Resources: The architecture names "manage internal service programs" as 1 of the 22 functions in this business area. However, the architecture does not describe what services are provided and by whom, who is eligible to receive the services, or where the potential recipients are located.

Within the trade compliance business area, even though Customs' architecture does not define 10 of 15 trade compliance functions, Customs has described these 10 business functions, the relationships among them, and the work to be performed within each function (including who will perform the work and where it will be performed) in documents other than the architecture. Further, Customs has specified the data needed to support some, but not all, of the trade compliance functions. For example, Customs identified key information sources (such as cargo manifests and summary declarations) associated with NCAP, the ACE prototype that covers a subset of trade compliance activities, and specific data elements associated with each information source. Customs, however, has not defined the information/data needs, including security, and information/data flows among its six business areas. With respect to information security in particular, Customs' architecture does not (1) specify functional requirements for enterprisewide security, (2) include a security concept of operations that describes how Customs will operate (e.g., what controls will be used) to satisfy these requirements, or (3) include a security subarchitecture that specifies how these controls will be implemented, certified, and accredited and how the controls' operational effectiveness will be validated. Given that computer security continues to be a long-standing problem at Customs, this issue is particularly troubling.
In our audits of Customs' fiscal year 1992 and 1993 principal financial statements, we stated that Customs' controls to prevent and detect unauthorized access and intentional or inadvertent unauthorized modifications to critical and sensitive data and computer programs were ineffective, thereby jeopardizing the security and reliability of the operations central to Customs' mission. While Customs has since taken meaningful steps toward correcting these access problems, they still remain. According to the Treasury Inspector General's report on Customs' fiscal years 1997 and 1996 financial statements, computer security weaknesses continue to exist that could allow for unauthorized modification and deletion of application and systems software and data in Customs' systems supporting the trade, financial management, and law enforcement activities. Until Customs addresses these weaknesses, it will not know the full extent of inter- and intra-business area functional and informational needs and dependencies and thus cannot develop, acquire, and maintain supporting information systems that optimally support the agency's operations and activities. Moreover, until these interdependencies among and within business areas have been fully analyzed and defined and an approach for securing the associated information has been established, the opportunities for incompatibilities and duplications among systems and the information they process and share increase, as do the opportunities for unauthorized access and modification of data. Such opportunities jeopardize, in turn, the completeness, consistency, and integrity of the data Customs uses and publishes. 
Given the importance of reliable data to Customs' (1) billion dollar revenue collection mission, (2) trade statistics used in developing trade policy and negotiating trade agreements, and (3) efforts to prevent and detect the illegal movement of goods and services across our nation's borders, such risks must be effectively addressed through an enterprise systems architecture. With respect to the infrastructure or technical component of Customs' architecture, Customs has specified much of the information that Treasury guidance states should be included in this component (e.g., standards for system and application software, communication interfaces, and hardware). However, as noted previously, this component is not based on a complete analysis of Customs' functional and information needs. For example, the architecture does not address information security requirements, yet its infrastructure specifies network encryption and remote access server products. Because it specified these products without knowing the business needs they support, Customs does not have adequate assurance that these products are needed or that they satisfy its true business needs, minimally or optimally. That is, the list of products cited may be either unnecessary or insufficient to support its real business needs. Experience has shown that attempting to define and build major systems without first completing a systems architecture unnecessarily increases the cost and complexity of these systems. For example, we reported that FAA's lack of a complete architecture resulted in incompatibilities among its air traffic control systems that (1) required higher-than-need-be system development, integration, and maintenance costs and (2) reduced overall system performance.
Without having architecturally defined requirements and standards governing information and data structures and communications, FAA was forced to spend over $38 million to acquire a system dedicated to overcoming incompatibilities between systems. According to a Customs contractor, Customs is also experiencing such inefficiencies and unnecessary costs because it lacks an architecture. Specifically, this contractor reported that in the absence of an enterprise infrastructure, Customs' departments have developed and implemented incompatible systems, which has increased modernization risks and implementation costs. Customs awarded a contract in January 1997 to develop, among other things, a "technology architecture." However, Customs did not properly define the scope of this architecture, limiting it to deliverables associated with the infrastructure component without first completing the other components. Customs officials stated that they contracted for the infrastructure without first completing the higher levels of the architecture because they considered the infrastructure component to be the most important and urgently needed part of the architecture. This "bottom up" approach is fundamentally inconsistent with government and industry architectural frameworks and guidance, including Treasury's, and has historically resulted in systems that do not effectively support business operations and that waste time and money. For example, after the Internal Revenue Service (IRS) spent over $3 billion attempting to modernize its tax systems without a defined logical architecture, it could not demonstrate benefits commensurate with costs and was forced to significantly restructure the effort. Unless it completes its architecture before attempting to develop operational systems like ACE, Customs runs the risk of repeating failures like those that IRS experienced.
Customs' CIO officials have since acknowledged the need for a complete systems architecture and its value in information technology investment management. Accordingly, Customs is developing a statement of work for a TISAF-compliant architecture. With the help of a contractor, Customs plans to use whatever data each business area may have already developed relative to functional, work, and information needs as a starting point in completing an enterprise architecture. More specifically, by October 1998, Customs plans to identify the functional, work, and information components for each of the six business areas and identify the relationships and interdependencies across the business areas. Customs also plans to reevaluate its enterprise infrastructure. If an architecture is to be implemented effectively, institutional processes must be established to (1) require system compliance with the architecture, (2) assess and enforce such compliance, and (3) waive this requirement only on the basis of careful, thorough, and documented analysis showing that such deviation is warranted. According to Customs officials, architectural compliance will be assessed and enforced as Customs implements its recently defined investment management process. Under this process, Customs' investment review board (IRB) uses four criteria in scoring competing investment options and allocating funding among them. The four criteria are risk (e.g., technical, schedule, and cost); strategic alignment (e.g., cross-functional benefits, linkage to Customs' business plan, and compliance with legislative mandates); mission effectiveness (e.g., contributions to service delivery); and cost/benefit ratio (e.g., tangible and intangible benefits, and costs). Customs is in the process of implementing its investment management process for the fiscal year 1999 budget cycle. 
According to Customs' investment management process, investment compliance with the architecture is considered, but not required, under the technical risk criterion. As a result, the process does not preclude funding projects that do not comply with the enterprise architecture and does not require that deviations from the architecture be rigorously justified. According to Customs officials, while architectural compliance is not an explicit criterion in the process, it will be considered and documented as part of the IRB funding decisions. Without an effective, well-defined process for enforcing the architecture, Customs runs the risk that unjustified deviations from the architecture will occur, resulting in systems that do not meet business needs, are incompatible, perform poorly, and cost more to develop, integrate, and maintain than they should. For example, we reported that FAA's lack of an enforced systems architecture for its air traffic control operations resulted in the use of expensive interfaces to translate different data communication protocols, thus complicating and slowing communications, and the proliferation of multiple application programming languages, which increased software maintenance costs and precluded sharing software components among systems. Customs' incomplete enterprise information systems architecture and limitations in its plans for enforcing compliance with an architecture once one is completed impair the agency's ability to effectively and efficiently develop or acquire operational systems, such as ACE, and to maintain existing systems. 
Until Customs (1) performs the thorough analysis and careful decision-making associated with developing all architectural components for interdependent business areas and (2) ensures that these results are rigorously enforced for its information system development, acquisition, and maintenance efforts, it runs the risk of wasting scarce time and money building and maintaining systems that do not effectively and efficiently support its business operations. To ensure that the Customs Service develops and effectively enforces a complete enterprise information systems architecture, we recommend that the Commissioner of Customs direct the Customs CIO, in consultation with the Treasury CIO, to follow through on plans to complete the enterprise information systems architecture. At a minimum, the architecture should (1) describe Customs' target business operations, (2) fully define Customs' interrelated business functions to support these target operations, (3) clearly describe information needs (including security) and flows among these functions, (4) identify the systems that will provide these functions and support these information needs and flows, and (5) use this information to specify the technical standards and related characteristics that these systems should possess to ensure that they interoperate, function together efficiently, and are cost-effective to maintain. We also recommend that the Commissioner direct the Deputy Commissioner, as Chairman of the IRB, to establish compliance with the architecture as an explicit requirement of Customs' investment management process except in cases where careful, thorough, and documented analysis supports a waiver to this requirement. 
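The recommended enforcement rule, compliance as an explicit requirement except where a documented analysis supports a waiver, can be sketched as a simple gate. All field and project names here are illustrative, not Customs' actual data model.

```python
# Sketch of the recommended architecture-compliance gate: a project is
# fundable only if it complies with the enterprise architecture or carries
# a waiver supported by documented analysis. Names are hypothetical.

def passes_architecture_gate(project):
    """Return True if the project may proceed through investment review."""
    if project.get("architecture_compliant"):
        return True
    waiver = project.get("waiver")
    # A waiver counts only when backed by careful, documented analysis.
    return bool(waiver and waiver.get("documented_analysis"))

# Illustrative cases: compliant, noncompliant, and noncompliant-but-waived.
compliant = {"name": "ACE release", "architecture_compliant": True}
noncompliant = {"name": "legacy interface", "architecture_compliant": False}
waived = {"name": "field pilot", "architecture_compliant": False,
          "waiver": {"documented_analysis": "cost/benefit study on file"}}
```

The point of the gate is that a noncompliant project without a documented waiver cannot be funded at all, rather than merely scoring lower on a risk criterion.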
In commenting on a draft of this report, Customs agreed with our conclusions and recommendations and stated that it will (1) develop an enterprise systems architecture in accordance with TISAF and in close cooperation with Treasury during fiscal year 1998 and (2) strengthen enforcement of the architecture by being explicit that projects must comply with the architecture and requiring exceptions to be well justified. Additionally, Customs committed to not making major system investments prior to developing a TISAF-compliant architecture. Customs raised several additional matters related to systems architecture, none of which affect our conclusions and recommendations and thus are not discussed here. Customs' comments and our responses are reprinted in appendix II. We are sending copies of this report to the Ranking Minority Members of the Subcommittee on Treasury and General Government, Senate Committee on Appropriations, and Subcommittee on Treasury, Postal Service, and General Government, House Committee on Appropriations. We are also sending copies to the Secretary of the Treasury, the Commissioner of Customs, and the Director of the Office of Management and Budget. Copies will also be made available to others upon request. If you have any questions about this letter, please contact me at (202) 512-6240 or by e-mail at [email protected]. Major contributors to this report are listed in appendix III. To accomplish the first objective, we reviewed published architectural guidance, including the Treasury Information Systems Architecture Framework (TISAF), to identify key requirements. We also interviewed officials from Treasury's Office of the Deputy Assistant Secretary for Information Systems and Chief Information Officer (the organization responsible for developing, implementing, and maintaining TISAF) to seek clarification and explanation of TISAF requirements. 
Further, we asked Customs to give us its enterprise information systems architecture and a mapping of all architectural documents to TISAF's four architectural components--functional, work, information, and infrastructure. In response, Customs provided the documents listed in table I.1. Customs subsequently provided two additional architecture documents that it did not map to any TISAF component. The two additional documents were the ACE Technical Architecture and the Enterprise IT Architecture Strategy-Executive Overview. We then analyzed the architecture documents Customs provided to identify any variances with the TISAF requirements for each architectural component. We also interviewed Customs and supporting contractor officials to (1) seek clarification and explanation of the content of the architecture documents, (2) identify instances where the architectural documents did not satisfy TISAF requirements, and (3) solicit from Customs any additional evidence related to meeting TISAF requirements. To address the second objective, we reviewed Customs' policies and procedures governing information technology investment management to determine architecture enforcement processes and interviewed Customs officials to determine organizational roles and responsibilities related to architecture development and enforcement. We also discussed with Customs officials any plans for changing the agency's processes and organizational responsibilities for developing and enforcing the architecture. The following are GAO's comments on the U.S. Customs Service's letter dated March 31, 1998. 1. Our report neither states nor implies that Customs is unable to ensure the proper collection and allocation of revenues totaling about $19 billion annually. Rather, the report states that one of ACE's key functions is to ensure the proper collection and allocation of revenues totaling about $19 billion annually. 2. 
Customs states that it began developing its enterprise systems architecture prior to Treasury's publication of TISAF and is working with Treasury to develop a TISAF-compliant architecture. While these statements are true, they do not address our point that Customs' architecture is insufficiently complete to be useful in guiding and constraining major systems investments. In order to optimize systems investments, the architecture must specify the six elements cited in our report. Furthermore, each element of the architecture must be built upon the preceding ones. Customs' architecture does not include these elements for all business areas and, as we point out in our report, the systems and standards selected were not based on a complete analysis of Customs' functional and information needs. We do not agree with Customs' statement that an architecture is never completed. An architecture must be complete (i.e., include the six elements described in our report) to be useful in building or buying systems. This does not mean that a completed architecture cannot be modified to reflect changes in organizational missions and business functions or advancements in information technology products. This process of thoughtful and disciplined change--maintenance--is performed routinely on all information system components (e.g., architectures, documentation, software, and hardware). 3. While we agree that architectural models used in industry and government vary, all models consistently require the top-down, structured approach described in our report. Customs has not followed this approach and, therefore, does not have adequate assurance that its infrastructure (i.e., technical architecture) will meet its business requirements. Customs states that it has been cautioned against defining an architecture in too much detail lest the business process changes before system development can proceed, but it does not clearly define what it means by too much detail. 
For five of its six business areas, Customs' architecture neither defines all critical business functions nor identifies all information needs and flows within and among those areas. As a result, rather than being overly detailed, it lacks the basic, required elements. 4. While the Treasury Inspector General (IG) gave Customs an unqualified opinion on its fiscal year 1997 financial statements, the IG also reported that Customs lacks adequate assurance that all revenue due is collected and compliance with other trade laws is achieved. Despite the progress that has been made, this lack of assurance has been a persistent issue since our audit of Customs' financial statements for fiscal year 1992. 5. Customs states that we have inaccurately characterized the completeness of its architecture for the finance business area because certain finance business functions have been defined in various other analyses, reports, and strategies. This assertion reflects a misunderstanding of the purpose and value of a systems architecture. Our report concludes that Customs' architecture for its finance business area (as well as all but one other business area) is substantially incomplete because it does not (1) describe all the agency's business functions, (2) outline the information needed to perform the functions, or (3) completely identify the users and locations of the functions. Even if other documents contain fragments of the missing information for one business area, which we did not attempt to verify, this does not mitigate the need for a single, comprehensive, maintainable, and enforceable statement of architectural requirements and standards. Major contributors to this report: Rona Stillman, Chief Scientist for Computers and Telecommunications; Linda Koontz, Associate Director; Randolph Hite, Senior Assistant Director; Deborah A. Davis, Assistant Director; Madhav Panwar, Senior Technical Advisor; Mark Bird, Assistant Director.
Pursuant to a congressional request, GAO reviewed the Customs Service's enterprise information systems architecture, focusing on determining whether: (1) the architecture is complete; and (2) Customs has processes and procedures to enforce compliance with the architecture. GAO noted that: (1) Customs does not yet have a complete enterprise information systems architecture to guide and constrain the millions of dollars it spends annually to develop and acquire new information systems and evolve existing ones; (2) for five of its six business areas Custom's architecture does not: (a) describe all the agency's business functions; (b) define the information needed to perform the functions; and (c) completely identify the users and locations of the functions; (3) while the architecture and related documentation describe business functions, and users and work locations for the sixth business area, they do not identify all the information needs and flows for all the trade functions; (4) also, Customs has named certain technical standards, products, and services that it will use in building systems to support all its business areas; (5) however, Customs has not chosen these based on a complete description of its business needs; (6) the limitations in Customs' architecture are rooted in its decision to focus on defining the technical characteristics of its systems environment; (7) Customs' view does not include the logical characteristics of its enterprise system environment, which would enable it to define and implement systems that optimally support the agency's mission needs; (8) Customs plans to develop the architecture in accordance with Department of the Treasury architectural guidance; (9) specifically, Customs plans to define its functional, information, and work needs and their interrelationships across its six business areas and, in light of these needs and interrelationships, reevaluate the technical characteristics it has selected for its systems environment; 
(10) until Customs defines the logical characteristics of its business environment and uses them to establish technical standards and approaches, it does not have adequate assurance that the systems it plans to build and operationally deploy will effectively support the agency's business needs; (11) Customs also has not developed and implemented effective procedures to enforce its architecture once it is completed; (12) Customs officials stated that a newly established investment management process will be used to enforce architectural compliance; (13) this process, however, does not require that system investments be architecturally compliant or that architectural deviations be justified and documented; and (14) as a result, Customs risks incurring the same problems as other federal agencies that have not effectively defined and enforced an architecture.
SBA was established by the Small Business Act of 1953 to fulfill the role of several agencies that previously assisted small businesses affected by the Great Depression and, later, by wartime competition. SBA's stated purpose is to promote small business development and entrepreneurship through business financing, government contracting, and technical assistance programs. In addition, SBA serves as a small business advocate, working with other federal agencies to, among other things, reduce regulatory burdens on small businesses. SBA also provides low-interest, long-term loans to individuals and businesses to assist them with disaster recovery through its Disaster Loan Program--the only form of SBA assistance not limited to small businesses. Homeowners, renters, businesses of all sizes, and nonprofit organizations can apply for physical disaster loans for permanent rebuilding and replacement of uninsured or underinsured disaster-damaged property. Small businesses can also apply for economic injury disaster loans to obtain working capital funds until normal operations resume after a disaster declaration. SBA's Disaster Loan Program differs from the Federal Emergency Management Agency's (FEMA) Individuals and Households Program (IHP). For example, a key element of SBA's Disaster Loan Program is that the disaster victim must have repayment ability before a loan can be approved whereas FEMA makes grants under the IHP that do not have to be repaid. Further, FEMA grants are generally for minimal repairs and, unlike SBA disaster loans, are not designed to help restore the home to its predisaster condition. In January 2005, SBA began using DCMS to process all new disaster loan applications. SBA intended for DCMS to help it move toward a paperless processing environment by automating many of the functions staff members had performed manually under its previous system. 
These functions include both obtaining referral data from FEMA and credit bureau reports, as well as completing and submitting loss verification reports from remote locations. Our July 2006 report identified several significant limitations in DCMS's capacity and other system and procurement deficiencies that likely contributed to the challenges that SBA faced in providing timely assistance to Gulf Coast hurricane victims as follows: First, due to limited capacity, the number of SBA staff who could access DCMS at any one time to process disaster loans was restricted. Without access to DCMS, the ability of SBA staff to process disaster loan applications in an expeditious manner was diminished. Second, SBA experienced instability with DCMS during the initial months following Hurricane Katrina, as users encountered multiple outages and slow response times in completing loan processing tasks. According to SBA officials, the longest period of time DCMS was unavailable to users due to an unscheduled outage was 1 business day. These unscheduled outages and other system-related issues slowed productivity and affected SBA's ability to provide timely disaster assistance. Third, ineffective technical support and contractor oversight contributed to the DCMS instability that SBA staff initially encountered in using the system. Specifically, a DCMS contractor did not monitor the system as required or notify the agency of incidents that could increase system instability. Further, the contractor delivered computer hardware for DCMS to SBA that did not meet contract specifications. In the report released in February, we identified other logistical challenges that SBA experienced in providing disaster assistance to Gulf Coast hurricane victims. For example, SBA moved urgently to hire more than 2,000 mostly temporary employees at its Ft. 
Worth, Texas disaster loan processing center through newspaper and other advertisements (the facility increased from about 325 staff in August 2005 to 2,500 in January 2006). SBA officials said that ensuring the appropriate training and supervision of this large influx of inexperienced staff proved very difficult. Prior to Hurricane Katrina, SBA had not maintained the status of its disaster reserve corps, which was a group of potential voluntary employees trained in the agency's disaster programs. According to SBA, the reserve corps, which had been instrumental in allowing the agency to provide timely disaster assistance to victims of the September 11, 2001 terrorist attacks, shrank from about 600 in 2001 to less than 100 in August 2005. Moreover, SBA faced challenges in obtaining suitable office space to house its expanded workforce. For example, SBA's facility in Ft. Worth only had the capacity to house about 500 staff whereas the agency hired more than 2,000 mostly temporary staff to process disaster loan applications. While SBA was able to identify another facility in Ft. Worth to house the remaining staff, it had not been configured to serve as a loan processing center. SBA had to upgrade the facility to meet its requirements. Fortunately, in 2005, SBA was also able to quickly reestablish a loan processing facility in Sacramento, California, that had been previously slated for closure under an agency reorganization plan. The facility in Sacramento was available because its lease had not yet expired, and its staff was responsible for processing a significant number of Gulf Coast hurricane related disaster loan applications. As a result of these and other challenges, SBA developed a large backlog of applications during the initial months following Hurricane Katrina. This backlog peaked at more than 204,000 applications 4 months after Hurricane Katrina. 
By late May 2006, SBA took about 74 days on average to process disaster loan applications, compared with the agency's goal of within 21 days. As we stated in our July 2006 report, the sheer volume of disaster loan applications that SBA received was clearly a major factor contributing to the agency's challenges in providing timely assistance to Gulf Coast hurricane victims. As of late May 2006, SBA had issued 2.1 million loan applications to hurricane victims, four times the number issued to victims of the 1994 Northridge, California, earthquake, the previous single largest disaster that the agency had faced. Within 3 months of Hurricane Katrina making landfall, SBA had received 280,000 disaster loan applications, or about 30,000 more than the agency received over a period of about 1 year after the Northridge earthquake. However, our two reports on SBA's response to the Gulf Coast hurricanes also found that the absence of a comprehensive and sophisticated planning process contributed to the challenges that the agency faced. For example, in designing DCMS, SBA used the volume of applications received during the Northridge earthquake and other historical data as the basis for planning the maximum number of concurrent agency users that the system could accommodate. SBA did not consider the likelihood of more severe disaster scenarios and, in contrast to insurance companies and some government agencies, did not use the information available from catastrophe models or disaster simulations to enhance its planning process. Since the number of disaster loan applications associated with the Gulf Coast hurricanes greatly exceeded that of the Northridge earthquake, DCMS's user capacity was not sufficient to process the surge in disaster loan applications in a timely manner. Additionally, SBA did not adequately monitor the performance of a DCMS contractor or stress test the system prior to its implementation.
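The scale of the mismatch between arrivals and capacity can be illustrated with a back-of-envelope calculation using the figures cited in this statement: roughly 280,000 applications received within 3 months. Treating "3 months" as about 90 days is an approximation introduced here for illustration, not a figure from SBA.

```python
# Back-of-envelope capacity check using figures cited in the testimony.
# The 90-day approximation of "3 months" is an assumption for illustration.

applications = 280_000          # received within ~3 months of Katrina
received_over_days = 90         # assumed ~3 months

# Average arrival rate of applications per day during the surge.
arrival_rate = applications / received_over_days

# To keep average turnaround near any fixed goal (e.g., 21 days), sustained
# processing throughput must at least match the arrival rate; otherwise a
# backlog grows without bound, as SBA's 204,000-application peak showed.
required_daily_throughput = round(arrival_rate)
```

Under these assumptions, sustaining service would have required processing on the order of 3,100 applications per day, which helps explain why a system sized on Northridge-era volumes fell behind.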
In particular, SBA did not verify that the contractor provided the agency with the correct computer hardware specified in its contract. SBA also did not completely stress test DCMS prior to implementation to ensure that the system could operate effectively at maximum capacity. If SBA had verified the equipment as required or conducted complete stress testing of DCMS prior to implementation, its capacity to process Gulf Coast related disaster loan applications may have been enhanced. In the report we issued in February, we found that SBA did not engage in comprehensive disaster planning for other logistical areas--such as workforce or space acquisition planning--prior to the Gulf Coast hurricanes at either the headquarters or field office levels. For example, SBA had not taken steps to help ensure the availability of additional trained and experienced staff such as (1) cross-training agency staff not normally involved in disaster assistance to provide backup support or (2) maintaining the status of the disaster reserve corps as I previously discussed. In addition, SBA had not thoroughly planned for the office space requirements that would be necessary in a disaster the size of the Gulf Coast hurricanes. While SBA had developed some estimates of staffing and other logistical requirements, it largely relied on the expertise of agency staff and previous disaster experiences--none of which reached the magnitude of the Gulf Coast hurricanes--and, as was the case with DCMS planning, did not leverage other planning resources, including information available from disaster simulations or catastrophe models. In our July 2006 report, we recommended that SBA take several steps to enhance DCMS, such as reassessing the system's capacity in light of the Gulf Coast hurricane experience and reviewing information from disaster simulations and catastrophe models. We also recommended that SBA strengthen its DCMS contractor oversight and further stress test the system. 
SBA agreed with these recommendations. I note that SBA has completed an effort to expand DCMS's capacity. SBA officials said that DCMS can now support a minimum of 8,000 concurrent agency users as compared with only 1,500 concurrent users for the Gulf Coast hurricanes. Additionally, SBA has awarded a new contract for the project management and information technology support for DCMS. The contractor is responsible for a variety of DCMS tasks on SBA's behalf including technical support, software changes and hardware upgrades, and supporting all information technology operations associated with the system. In the report released in February, we identified other measures that SBA had planned or implemented to better prepare for and respond to future disasters. These steps include appointing a single individual to coordinate the agency's disaster preparedness planning and coordination efforts, enhancing systems to forecast the resource requirements to respond to disasters of varying scenarios, redesigning the process for reviewing applications and disbursing loan proceeds, and enhancing its long-term capacity to acquire adequate facilities in an emergency. Additionally, SBA had planned or initiated steps to help ensure the availability of additional trained and experienced staff in the event of a future disaster. According to SBA officials, these steps include cross-training staff not normally involved in disaster assistance to provide back up support, reaching agreements with private lenders to help process a surge in disaster loan applications, and reestablishing the Disaster Active Reserve Corps, which had reached about 630 individuals as of June 2007. While SBA has taken a variety of steps to enhance its capacity to respond to disasters, I note that these efforts are ongoing and continued commitment and actions by agency managers are necessary. In June 2007, SBA released a plan for responding to disasters. 
While we have not evaluated the process SBA followed in developing its plan, according to the SBA plan, the agency is incorporating catastrophe models into its disaster planning processes as we recommended in both reports. For example, the plan states that SBA is using FEMA's catastrophe model, which is referred to as HAZUS, in its disaster planning activities. Further, based on information provided by SBA, the agency is also exploring the use of models developed by private companies to assist in its disaster planning efforts. These efforts to incorporate catastrophe models into the disaster planning process appear to be at an early stage. SBA's plan also anticipates further steps to ensure an adequate workforce is available to respond to a disaster, including training and using 400 non-disaster program office staff to assist in responding to the 2007 hurricane season and beyond. According to SBA officials, about 200 of these staff members will be trained in reviewing loan applications and providing customer service by the end of this month and the remainder will be trained by this fall. We encourage SBA to actively pursue initiatives that may further enhance its capacity to better respond to future disasters, and we will monitor SBA's efforts to implement our recommendations. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions at this time. For further information on this testimony, please contact William B. Shear at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Affairs and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony included Wesley Phillips, Assistant Director; Triana Bash; Alison Gerry; Marshall Hamlett; Barbara S. Oliver; and Cheri Truett. This is a work of the U.S. government and is not subject to copyright protection in the United States.
It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Small Business Administration (SBA) helps individuals and businesses recover from disasters such as hurricanes through its Disaster Loan Program. SBA faced an unprecedented demand for disaster loan assistance following the 2005 Gulf Coast hurricanes (Katrina, Rita, and Wilma), which resulted in extensive property damage and loss of life. In the aftermath of these disasters, concerns were expressed regarding the timeliness of SBA's disaster assistance. GAO initiated work and completed two reports under the Comptroller General's authority to conduct evaluations and determine how well SBA provided victims of the Gulf Coast hurricanes with timely assistance. This testimony, which is based on these two reports, discusses (1) challenges SBA experienced in providing victims of the Gulf Coast hurricanes with timely assistance, (2) factors that contributed to these challenges, and (3) steps SBA has taken since the Gulf Coast hurricanes to enhance its disaster preparedness. GAO visited the Gulf Coast region, reviewed SBA planning documents, and interviewed SBA officials. GAO identified several significant system and logistical challenges that SBA experienced in responding to the Gulf Coast hurricanes that undermined the agency's ability to provide timely disaster assistance to victims. For example, the limited capacity of SBA's automated loan processing system--the Disaster Credit Management System (DCMS)--restricted the number of staff who could access the system at any one time to process disaster loan applications. In addition, SBA staff who could access DCMS initially encountered multiple system outages and slow response times in completing loan processing tasks. SBA also faced challenges training and supervising the thousands of mostly temporary employees the agency hired to process loan applications and obtaining suitable office space for its expanded workforce. 
As of late May 2006, SBA processed disaster loan applications, on average, in about 74 days compared with its goal of within 21 days. While the large volume of disaster loan applications that SBA received clearly affected its capacity to provide timely disaster assistance to Gulf Coast hurricane victims, GAO's two reports found that the absence of a comprehensive and sophisticated planning process beforehand likely limited the efficiency of the agency's initial response. For example, in designing the capacity of DCMS, SBA primarily relied on historical data such as the number of loan applications that the agency received after the 1994 Northridge, California, earthquake--the most severe disaster that the agency had previously encountered. SBA did not consider disaster scenarios that were more severe or use the information available from disaster simulations (developed by federal agencies) or catastrophe models (used by insurance companies to estimate disaster losses). SBA also did not adequately monitor the performance of a DCMS contractor or completely stress test the system prior to its implementation. Moreover, SBA did not engage in comprehensive disaster planning prior to the Gulf Coast hurricanes for other logistical areas, such as workforce planning or space acquisition, at either the headquarters or field office levels. While SBA has taken steps to enhance its capacity to respond to potential disasters, the process is ongoing and continued commitment and actions by agency managers are necessary. As of July 2006, SBA officials said that the agency had completed an expansion of DCMS's user capacity to support a minimum of 8,000 concurrent users as compared with 1,500 concurrent users supported for the Gulf Coast hurricanes. Further, in June 2007, SBA released a disaster plan. 
While GAO has not evaluated the process SBA followed in developing its plan, consistent with recommendations in GAO reports, the plan states that SBA is incorporating catastrophe models into its planning process, an effort which appears to be at an early stage. GAO encourages SBA to actively pursue the use of catastrophe models and other initiatives that may further enhance its capacity to better respond to future disasters.
The federal government sets broad federal requirements for Medicaid-- such as requiring that state Medicaid programs cover certain populations and benefits--and matches state Medicaid expenditures with federal funds for most services. States administer their respective Medicaid programs on a day-to-day basis, and have the flexibility to, among other things, establish provider payment rates and cover many types of optional benefits and populations. Section 1115 demonstrations provide a further way for states to innovate in ways that fall outside of many of Medicaid's otherwise applicable requirements, and to receive federal matching Medicaid funds for costs that would not otherwise be matchable. For example, states may use these demonstrations to test new approaches for delivering care to generate savings or efficiencies or improve quality and access. Such changes have included expanding benefits to cover populations that would not otherwise be eligible for Medicaid, altering the state's Medicaid benefit package, or financing payment pools, for example, for state-operated health programs or supplemental provider payments. Demonstrations are typically approved for an initial 5-year period that can be renewed for future demonstration periods. Some states have operated some or all of their Medicaid programs for decades under section 1115 demonstrations. Each demonstration is governed by STCs, which reflect the agreement between CMS and the state. The STCs include any provisions governing spending under the demonstration. For example, STCs indicate for what populations and services funds can be spent. In states receiving approval to implement payment pools for state health programs and supplemental provider payments, the STCs could include parameters for payments under those pools. For example, they may require that payment pools be capped at certain levels. 
The STCs may also include criteria for providers to receive payments and protocols that states must have to ensure the appropriateness of the payments and allow CMS to review those payments. The STCs also include the limits on the amount of federal funds that can be spent on the demonstration--referred to as spending limits--and indicate how spending limits will be enforced. Finally, the STCs include the reporting requirements the state must meet. Reporting requirements--as contained in the STCs--may include regular telephone calls between the state and CMS, regular performance reports, and quarterly expenditure reports. The STCs outline what the state should include in each of these reports, which can vary by demonstration. CMS policy requires that section 1115 demonstrations be budget neutral to the federal government--that is, the federal government should spend no more under a state's demonstration than it would have spent without the demonstration. Once approved, each demonstration operates under a negotiated budget neutrality agreement, documented in its STCs, that places a limit on federal Medicaid spending over the life of the demonstration. This limit is referred to as the spending limit. If a state exceeds the demonstration spending limit at the end of the demonstration period, it must return the excess federal funds. Spending limits can be a per person limit that sets a dollar limit for each Medicaid enrollee included in the demonstration in each month, a set dollar amount for the entire demonstration period regardless of the level of enrollment, or a combination of both. Spending limits are calculated by establishing a spending base and applying a rate of growth over the period of the demonstration. 
The spending base generally reflects a recent year of state expenditures for populations included in the demonstration, and the growth rate to be applied is generally based on the lower of a state-specific historical growth rate or a federal nationwide estimate. Different data elements may be required by CMS to assess a state's compliance with the spending limit. For example, for a per person spending limit, which is generally a defined dollar limit per enrollee per month, CMS needs both expenditure and enrollment data to assess compliance with the spending limit. CMS is responsible for monitoring compliance with the STCs during the demonstration, including compliance with requirements around how Medicaid funds can be spent and spending limits. Monitoring efforts may include reviewing performance reports and quarterly financial reporting required under the STCs and discussing questions and concerns with the state. When a state seeks a renewal of a demonstration, that request offers CMS an opportunity to negotiate revisions to the STCs with the state, which could include changes to spending limits and reporting requirements. (See fig. 1.) States are required to report Medicaid expenditures, including expenditures under demonstrations, to CMS at the end of each quarter. CMS reviews these expenditures on a quarterly basis for reasonableness. If, during the expenditure review, CMS is uncertain as to whether a particular state expenditure is allowable, then CMS may withhold payment pending further review (referred to as a deferral). With regard to reporting on expenditures under demonstrations, the STCs dictate the level of detail that the state is required to include in the quarterly expenditure reporting. For example, they might require the state to report expenditures by population and by payment pool approved under the demonstration. 
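The arithmetic described above--a spending base compounded by a growth rate, and a per-member-per-month (PMPM) comparison that requires both expenditure and enrollment data--can be sketched as follows. All dollar figures, rates, and member-month counts here are hypothetical illustrations, not drawn from any actual demonstration; actual limits are negotiated between CMS and each state in the STCs.

```python
# Illustrative sketch of a per person spending limit. All figures are
# hypothetical; actual methods are negotiated per demonstration.

def per_person_limit(base_pmpm, growth_rate, years):
    """Project a per-member-per-month (PMPM) dollar limit for each
    demonstration year by compounding a growth rate on a base-year amount."""
    return [round(base_pmpm * (1 + growth_rate) ** y, 2)
            for y in range(1, years + 1)]

def compliant(expenditures, member_months, pmpm_limit):
    """The compliance check needs BOTH expenditure and enrollment data:
    actual PMPM spending must not exceed the negotiated PMPM limit."""
    actual_pmpm = expenditures / member_months
    return actual_pmpm <= pmpm_limit

# Hypothetical demonstration: $500 PMPM base, 4.5% growth, 5-year term.
limits = per_person_limit(500, 0.045, 5)
print(limits)  # year-by-year PMPM limits

# Year 1: $6.2 million spent across 12,000 member months (~$516.67 PMPM).
print(compliant(6_200_000, 12_000, limits[0]))
```

The second function also illustrates the report's point about missing member-month data: without the enrollment denominator, a per person limit cannot be checked at all.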
Federal spending under section 1115 Medicaid demonstrations increased significantly from fiscal year 2005 through fiscal year 2015, rising from $29 billion in 2005 to over $100 billion in 2015. Federal spending on demonstrations also increased as a share of total federal Medicaid spending during the same period, rising from 14 percent of all federal Medicaid spending in fiscal year 2005 to 33 percent in fiscal year 2015. (See fig. 2.) Several factors likely contributed to these trends. First, the number of states with demonstrations increased during this period, with 31 states reporting demonstration expenditures in fiscal year 2005 and 40 reporting such expenditures in fiscal year 2015. Second, some states expanded their demonstrations over this period, with demonstration spending in 24 states representing a greater proportion of total Medicaid spending in fiscal year 2015 than in fiscal year 2005. For example, CMS officials told us that, during this period, some states shifted expenditures for managed care and home and community based services from other Medicaid authorities to section 1115 demonstrations. In addition, during 2010 through 2015, a number of states expanded coverage through demonstrations to low-income adults, which, as CMS officials told us, likely contributed to the increase in demonstration spending. Demonstration spending as a proportion of total federal Medicaid spending varied across states and represented most--75 percent or more--of Medicaid spending in 10 of the 40 states that reported expenditures in fiscal year 2015. (See fig. 3.) Further, in 5 of these 10 states, demonstration spending represented more than 90 percent of the state's total federal Medicaid spending. In contrast, in fiscal year 2005, spending under demonstrations did not exceed 75 percent of total Medicaid spending in any state. 
In that year, demonstration spending represented between 25 percent and 75 percent of total Medicaid spending in 10 states and less than 25 percent in 21 states. (See app. I.) The extent to which demonstration spending changed over time varied across states, as illustrated by the most recent 5 years of spending data in our selected states. In two of our four selected states--California and Indiana--spending under demonstrations increased between fiscal years 2011 and 2015, consistent with the national trend. (See table 1.) California's demonstration spending increased the most significantly--more than tripling--during this time frame, during which the state expanded its demonstration to, among other things, provide coverage to low-income adults. Indiana reported a 22 percent increase in demonstration spending between fiscal years 2011 and 2015. In contrast, Tennessee reported a 3 percent decrease during that period. With regard to the change in the proportion of total Medicaid spending that demonstrations represented, the proportion did not change between 2011 and 2015 for Indiana and Tennessee and doubled in California, from a quarter of its total Medicaid expenditures in 2011 to half of its total Medicaid expenditures in 2015. We could not assess the change in spending for the fourth state--New York--because the state's expenditure reporting for fiscal year 2015 was incomplete.

We found that CMS took a number of steps to monitor demonstration spending in our selected states. For example, CMS held calls with states and performed various steps to assess the appropriateness of expenditures.

Held monitoring calls. CMS and state officials told us that they held monitoring telephone calls to discuss any significant current or expected developments in the demonstrations. CMS officials confirmed they may use the calls to obtain information to supplement their review of states' performance and expenditure reports.
For example, CMS officials said they used the calls with Tennessee to raise questions about the content of the state's submitted quarterly reports. In addition, California officials told us that CMS used these calls to get updates and supporting documentation on state programs. Checked for the appropriateness of expenditures. In reviewing the quarterly expenditure reports, CMS officials told us that they assessed the appropriateness of expenditures. For example, the agency checked that the amounts claimed complied with federal requirements for matching funds. As a result of these checks, CMS issued several deferrals to withhold payment of federal funds to California until the state could account for expenditures claimed. Officials also told us that as part of the checks, they assessed the appropriateness of pool payments, such as those for supplemental payments to providers, where relevant. Assessing the appropriateness of pool payments involves ensuring that pool payments align with the approved purposes of the pool and that the payments were made to approved providers. For example, CMS officials told us that for one of the pools in the New York demonstration, agency staff checked whether the payments made were to eligible providers, the requirements of which were described in the STCs. As a result of this review, CMS deferred providing over $38 million in federal funds to New York for payments made to providers under the pool in the quarter ending March 31, 2016, until the state could provide documentation that the providers were eligible to receive payment. CMS officials also told us that they checked to ensure that the state was not receiving funds from other federal funding sources that are intended to serve the same purposes as funds in their payment pools (i.e., duplicating federal funds), and that the state's share of funding for the pools is from permissible sources, such as the state's general revenue. 
According to CMS officials, as a result of the agency's checks of spending for New York's demonstration, CMS identified $172 million in federal funds that were inappropriately used to finance the state share of demonstration costs. CMS recovered these funds in fiscal year 2015. However, we also found inconsistencies in CMS's monitoring process that potentially limited the effectiveness of the agency's monitoring efforts in the selected states. The inconsistencies included the following: Reporting requirements were sometimes insufficient to provide information needed to assess compliance with spending limits. CMS did not consistently require states to report the elements needed for the agency review staff to compare actual demonstration spending to the spending limit. For example, although CMS needs states to report the number of enrollees per month--referred to as member months--to assess compliance with per person spending limits, the agency only required such reporting for two of the four selected states' demonstrations. CMS acknowledged that having member month data is important to assess spending limit compliance. For example, CMS did not require California to report enrolled member months for its demonstration from 2010 to 2015, but the agency amended the STCs to include this requirement when the state's demonstration was renewed beginning in 2016. Including this requirement will prevent CMS from having to use alternative means to gain necessary information for this compliance assessment. For example, CMS officials said that they have used monitoring calls to obtain the missing enrollment information from the state. Enforcement of expenditure reporting requirements was inconsistent. We found that the selected states did not report demonstration expenditures in all of the categories specified under their demonstration STCs. For example, California's expenditure reporting did not align with the STC reporting requirements for 2010 through 2015. 
California officials told us this was largely because CMS had not enforced the reporting requirements prior to 2015. Therefore, based on our review, even if CMS had attempted to assess California's compliance with the spending limit, it could not have done so using the data included in the state's expenditure reports. Monitoring compliance with spending limits was inconsistent. CMS did not consistently assess compliance with the spending limit in all our selected states. CMS officials told us that they assessed compliance with the spending limits on a quarterly basis for the demonstrations in Tennessee and Indiana. However, the agency did not regularly assess compliance for the California and New York demonstrations--which represented tens of billions of dollars in federal spending annually--due to limitations in the state-reported expenditure data. CMS officials told us that they did not assess California's compliance with the spending limit because the expenditure data submitted by the state was not accurate. Furthermore, the agency's focus was on resolving a number of broader financial compliance issues in the state (see sidebar), the resolution of which, according to officials, was necessary before the agency could assess compliance with the spending limit. With regard to New York, CMS had not assessed compliance with the spending limit since 2011, because the state's reporting of expenditures has been significantly incomplete since then. According to CMS officials, significant staff transitions disrupted New York's ability to report expenditures to CMS as required. The state delayed reporting expenditures, and it did not report them in the categories specified in the STCs. Although CMS did not assess compliance with the spending limit for either of these two states, officials told us that they were not concerned that California or New York exceeded their spending limits because the limits in those states have historically been higher than actual spending.
These inconsistencies may have resulted, in part, from CMS's lack of written, standard operating procedures for monitoring spending under demonstrations. For example, CMS does not have internal guidance on the elements that must be included in reporting requirements for states. In addition, regarding the state performance reports, CMS does not have a review protocol or a requirement that staff check that reports contain the elements required by the STCs, for example, enrollment data needed to assess a state's compliance with the spending limit. CMS has written materials to train staff on how spending limits are set and how demonstration spending is monitored. However, these materials are limited to high-level descriptions of the monitoring roles and do not contain specific procedures for staff to use in monitoring. Regarding the review of quarterly expenditure reports, CMS has guidance for agency staff who review them, but the guidance lacks detailed direction on what checks of demonstration expenditure data should occur. CMS also lacks standard procedures for documenting its monitoring efforts. For example, the agency has no written requirements for its staff to document that required performance reports have been submitted by the states. Furthermore, the agency does not require its staff to document the content of monitoring calls, including any concerns and potential resolutions discussed. In addition, CMS does not require its staff to systematically document checks performed for state compliance with demonstration spending limits or the appropriateness of pool payments. According to CMS officials, while there are not written requirements to do so, there is an expectation that staff maintain documentation of their monitoring efforts. However, officials also told us that any documentation of checks that a demonstration complied with its spending limits is likely included in the personal notes of individual CMS staff. 
As such, they may not necessarily be accessible to all staff who have oversight responsibility for the demonstration. One instance in which we observed CMS documenting its monitoring efforts was when checks for the appropriateness of expenditures resulted in deferrals of federal funds, which were documented in letters to the states. CMS officials told us that they are in the early stages of developing standard operating procedures and a management information system to better standardize the monitoring of demonstrations: Standard operating procedures. CMS officials told us that they are developing protocols for monitoring state demonstration programs and state compliance with demonstration spending limits. Officials told us that the protocols would outline staff roles and responsibilities. Officials also told us that they are working on standardizing the format and content of required state performance reports, which could help ensure that CMS is receiving the information needed to monitor spending under the demonstration. As of December 2016, CMS officials expected that the first phase of standard procedures, which will focus on assessments of compliance with the spending limit, will be developed and documented in the next year. They explained that developing the procedures is an iterative process and that it could take the agency 2 years to completely develop and document its plans. Management information system. CMS officials also told us that they are in the initial phases of building a management information system to facilitate and document demonstration oversight. The first part of the system, which was in use as of September 2016, allowed CMS to centralize the collection of state demonstration performance reports. In future phases of system development, officials told us that the system will include alerts for missing reports or incomplete reviews and prompts for CMS's staff to document completion of monitoring checks.
CMS also plans for the system to include a database of demonstration STCs that CMS staff can search, which could help to ensure that STCs consistently include necessary reporting requirements. It is too early to determine how well CMS's planned standard operating procedures and management information system will address the inconsistencies in its demonstration monitoring process. CMS officials did not have any written documentation regarding the agency's plans as of December 2016. As such, it was unclear, for example, whether the procedures and new system would include mechanisms to ensure that STCs consistently require states to report the information needed for CMS to assess compliance with the spending limits. In addition, it was unclear if the procedures or new system would ensure that agency staff regularly check that expenditure reporting complies with reporting requirements. CMS officials said they intend for the procedures and new system to include mechanisms to ensure consistency in those areas. Federal internal control standards require that federal agencies design control activities to achieve objectives and respond to risks, and that agencies implement control activities, including documenting the responsibilities for these activities through policies and procedures. Without standard procedures for monitoring demonstration spending and documenting those efforts, CMS faces the risk of continued inconsistencies in monitoring and the risk that it may not identify cases where states may be inappropriately using federal funds or exceeding spending limits. CMS's policy for applying demonstration spending limits has allowed our selected states to accrue unused spending authority under the demonstration spending limit (referred to in this report as unspent funds) and use it to expand demonstrations to include new costs. 
According to CMS officials, under long-standing policy, if a state spends less than the spending limit, the agency allows the state to accrue the difference between actual expenditures and the spending limit and carry forward the unspent funds into future demonstration periods. CMS allowed our selected states to use unspent funds to expand the demonstration by, for example, financing care for additional eligibility groups or additional supplemental payments. For example, according to CMS officials, Indiana accrued $600 million in unspent state and federal funds during the first demonstration period and was using a portion of that--approximately $2 million a year--to finance care for a small group of beneficiaries with end-stage renal disease in a subsequent period of the demonstration. CMS allowed New York to use $8 billion in accrued unspent federal funds from previous demonstration periods to expand its demonstration by including a new supplemental payment pool for incentive payments to Medicaid providers, costs that would not have been eligible for federal matching funds outside of the demonstration. If a state were to exceed its spending limit in a demonstration period, the agency allows it to draw upon unspent funds from previous demonstration periods to cover demonstration expenses, which, according to CMS officials, is consistent with the budget neutrality policy, under which spending limits are enforced over the life of the demonstration including any extensions beyond the initial 5-year term. The flexibility afforded to states in their accrual and use of unspent funds may explain, in part, why CMS has infrequently found that states exceed spending limits. Agency officials told us the agency has only withheld federal funds once as a result of a state exceeding its spending limit. Specifically, in 2007, CMS found that Wisconsin exceeded its demonstration spending limit and required the state to return $10.2 million to the federal government.
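Because spending limits are enforced over the life of the demonstration rather than period by period, a state that overspends in one period can remain within its lifetime limit by drawing on unspent funds carried forward from earlier periods. A minimal sketch of that accounting, using entirely hypothetical period limits and spending amounts (in millions of dollars):

```python
# Illustrative accounting of budget neutrality enforced over the life of a
# demonstration. Period limits and actual spending are hypothetical
# ($ millions); a positive result means the state is within its lifetime limit.

def cumulative_headroom(periods):
    """Each period accrues (limit - actual); a period that overspends draws
    down unspent funds carried forward from earlier periods."""
    carried = 0
    for limit, actual in periods:
        carried += limit - actual
    return carried

# Three demonstration periods: under, under, then over the period limit.
periods = [(1_000, 900), (1_100, 950), (1_200, 1_250)]
print(cumulative_headroom(periods))  # 200: overspending covered by carryover
```

The third period exceeds its own limit by $50 million, but the $250 million accrued earlier absorbs it, which mirrors why CMS has so rarely found states exceeding their limits.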
Among our selected states, we found that states could accrue significant amounts of unspent funds. For example, CMS officials estimated that New York and California accrued billions of dollars in unspent funds. Based on our analysis, we found that Tennessee accrued approximately $11.6 billion in unspent funds over 3 years. (See fig. 4.) According to CMS officials, growth in health care costs has proven lower than the agency and states assumed when setting the spending limits, resulting in spending that consistently falls below spending limits across demonstrations. In past work, we found that HHS had approved spending limits that were higher than the budget neutrality policy suggested. Among other concerns, we reported that HHS allows methods for establishing the spending limit that may be inappropriate--including application of inappropriately high growth rates--and may result in excessively high spending limits. For example, we found four demonstrations where the spending limits were a total of $32 billion higher than they should have been for the demonstration periods, typically 5 years. In May 2016, CMS communicated to states that the budget neutrality policy had been revised to, among other things, restrict the accrual of unspent funds to better control demonstration costs. Specifically, for demonstrations renewed starting in January 2016, CMS restricts the amount of unspent funds states can accrue over time in two ways. First, when states apply to renew their demonstrations, they can only carry over unspent funds from the past 5 years of the demonstration. Second, for demonstrations renewed through 2021, CMS limits the amount of unspent funds states can accrue each year in the renewal period. Specifically, after a state's initial 5-year demonstration period, the amount of expected unspent funds that a state can accrue is reduced by 10 percent per year, until states can only accrue 25 percent of expected unspent funds under the spending limit.
For example, a state renewing its demonstration after completing its first 5-year demonstration period would be able to keep 90 percent of the unspent funds it would accrue under the spending limit in the sixth year of the demonstration--which is the first year of the renewal period--80 percent in the seventh year, and so on. States that had renewed previously would experience further restrictions until the 13th year of the demonstration, at which point a state would be limited to accruing 25 percent a year. For demonstration renewals starting in 2021, states will still be limited to carrying over 5 years of unspent funds, but the percentage restrictions will be replaced with different requirements that could lower spending limits. Specifically, CMS will require states to submit new cost estimates using recent cost data (i.e., to rebase their cost projections). Those new cost projections, subject to adjustment, would become the basis of spending limits for the renewal period. To the extent that using more recent cost data results in spending limits that more closely reflect actual costs-- which have proven lower than assumed by states and CMS when setting the spending limit--this requirement may lower spending limits and accordingly may reduce the unspent funds that states accrue under those limits. As of mid-December 2016, CMS was in the early stages of implementing the restrictions, having approved six demonstration renewals under the revised policy--those for Arizona, California, Massachusetts, New York, Tennessee, and Vermont. The updated STCs for the new demonstration periods in each state limited the states' access to unspent funds to the last 5 years and reduced the amount of unspent funds the states can accrue over the next 5 years. For example, under the revised policy, California's expected accrued unspent funds over its next demonstration period will be reduced by approximately $15 billion. (See fig. 5.) 
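The year-by-year reduction schedule described above can be expressed as a simple function. The percentages come from the report; the function itself is our illustrative reading of the policy, not an official CMS formula:

```python
# Sketch of the accrual restriction for demonstrations renewed through 2021:
# full accrual during the initial 5-year term, then a reduction of 10
# percentage points per demonstration year down to a 25 percent floor.

def accrual_fraction(demo_year):
    """Fraction of expected unspent funds a state may accrue in a given
    demonstration year (year 1 = first year of the initial term)."""
    if demo_year <= 5:
        return 1.00
    # Year 6 -> 90%, year 7 -> 80%, ..., floored at 25% from year 13 on.
    return max(0.25, round(1.00 - 0.10 * (demo_year - 5), 2))

for year in (6, 7, 12, 13):
    print(year, accrual_fraction(year))
# 6 -> 0.9, 7 -> 0.8, 12 -> 0.3, 13 -> 0.25
```

A state renewing after its initial term keeps 90 percent of expected unspent funds in year six and 80 percent in year seven, matching the example in the text, and the uncapped 10-percent-per-year decline would otherwise reach 20 percent in year 13, which is why the 25 percent floor binds there.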
The effectiveness of the revised policy in controlling costs will depend, in part, on whether CMS consistently implements the revisions. We found two weaknesses that could lead to inconsistent application. Lack of formal guidance. CMS released a slide presentation on the revised policy during a teleconference with all states but has not issued formal guidance. CMS made the slides available on the agency's website, but they were not included in the database of guidance--typically letters--for state Medicaid directors. CMS officials told us that there was no plan to issue additional guidance to states. Although the slides detail how unspent funds will be reduced, without formal guidance, it is unclear whether CMS will consistently apply these new requirements during demonstration renewals. Inconsistent tracking of unspent funds. We found that CMS was not consistently tracking unspent funds under the spending limits in our selected states, which makes it difficult for CMS to ensure the unspent funds are reduced by the amount specified under the new policy. For example, New York had not provided the financial reporting CMS needed to calculate the state's actual costs for the different eligibility groups covered by the demonstration or its accrued unspent funds, even though there were specific spending limits for these different groups. As a result, CMS could not track unspent funds in the state. CMS officials told us the agency required New York to produce that information as part of the application for renewing the state's demonstration. Similarly, the agency did not have actual costs for California's demonstration, given California's lack of reporting as specified under the STCs, and required the state to provide that information under its renewed demonstration. CMS officials told us the standard operating procedures, as noted above, that the agency is developing for monitoring demonstrations will reflect the revisions to CMS's budget neutrality policy. 
It is too soon to determine if these procedures will ensure consistent tracking of unspent funds because, as we noted earlier, there was no documentation of the agency's plans for these procedures as of December 2016. Federal internal control standards require that federal agencies should design control activities to achieve objectives. Control activities like formal guidance and standard procedures that clarify the application of agency policies help ensure that those policies--such as the revised budget neutrality policy--are consistently carried out in achieving cost control objectives. Without addressing potential weaknesses, including the lack of formal guidance and the lack of consistent tracking of unspent funds across all demonstrations, CMS may not be able to effectively implement the policy and achieve its related cost-control objectives. Medicaid section 1115 demonstrations are an important tool for states to test new approaches to delivering care that, among other things, may be more cost effective. However, the growing federal expenditures for demonstrations--now at over $100 billion a year--for costs that, in some cases, would not otherwise be eligible for Medicaid funding make monitoring of those dollars critical. While our work found that CMS was monitoring demonstration spending in our selected states, the agency's process also raised concerns. CMS's lack of standard procedures for its monitoring process has contributed to insufficient reporting requirements for states and inconsistent enforcement of those requirements. Insufficient reporting can create a barrier to monitoring efforts, including assessing compliance with spending limits. Inconsistent enforcement might allow compliance issues to go undetected for extended periods of time, which, as demonstrated by the issues in California and New York, can take years to resolve. A key principle for demonstrations has long been the policy that they must be budget neutral to the federal government.
Whether demonstrations adhere to that principle depends on both how CMS approves spending limits and how it applies them during the demonstration. We have raised concerns in the past about demonstration approvals, including that in some cases spending limits for demonstrations were set too high. Our current work found that, as a result of high spending limits, states are accruing significant amounts of unspent funds under the spending limits and using those funds to finance expansions of their demonstrations. CMS's move under the revised budget neutrality policy to begin restricting the amount of unspent funds that states can accrue is a positive step toward the agency's goal of better controlling demonstration costs. However, states may continue to accrue significant amounts of unspent funds. Without standard procedures for tracking these funds, CMS will not be able to effectively enforce the limits on those funds. Further, without formal guidance on the revised policy, it is unclear whether CMS will consistently apply the policy. To improve consistency in CMS oversight of federal spending under section 1115 demonstrations, we recommend that the Secretary of Health and Human Services require the Administrator of CMS to take the following two actions:

1. Develop and document standard operating procedures for monitoring spending under demonstrations that

a. require setting reporting requirements for states that provide the data elements CMS needs to assess compliance with demonstration spending limits;

b. require consistent enforcement of states' compliance with financial reporting requirements; and

c. require consistent tracking of the amount of unspent funds under demonstration spending limits.

2. Issue formal guidance on the revised budget neutrality policy, including information on how the policy will be applied.

We provided a draft of this report to HHS for review and comment.
HHS concurred with our first recommendation that the agency should develop and document standard operating procedures for monitoring demonstration spending. In its response to this recommendation, HHS added that the department is developing infrastructure and procedures to better support demonstration monitoring. HHS did not explicitly agree or disagree with the second recommendation that the agency should issue formal guidance on the revised budget neutrality policy and how it will be applied. In its response to this recommendation, HHS noted that the new policy is being incorporated into new budget neutrality workbook templates and monitoring procedures, which will be used by the states and reviewers. The agency stated that it will determine if additional guidance is needed as implementation continues. Given the importance of this policy in controlling demonstration costs, we believe that developing formal guidance is necessary to ensure consistent application. HHS also provided technical comments, which we incorporated as appropriate. HHS's comments are reprinted in appendix II. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services, appropriate congressional committees, and other interested parties. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. 
[Table omitted: total Medicaid expenditures and expenditures for demonstrations, in millions.] In addition to the contact named above, Susan Barnidge (Assistant Director), Jasleen Modi (Analyst-in-Charge), Shamonda Braithwaite, Elizabeth Miller, and Giao N. Nguyen made key contributions to this report. Also contributing were Giselle Hicks, Laurie Pachter, and Emily Wilson.
As of November 2016, 37 states had demonstrations under section 1115 of the Social Security Act, under which the Secretary of HHS may allow costs that Medicaid would not otherwise cover for state projects that are likely to promote Medicaid objectives. By policy, demonstrations must be budget neutral; that is, the federal government should spend no more for a state's Medicaid program than it would have spent without the demonstration. CMS is responsible for monitoring spending and assessing compliance with demonstration terms and conditions for how funds can be spent and applying spending limits to maintain budget neutrality. GAO was asked to examine federal spending for demonstrations and CMS's oversight of spending. This report examines (1) federal spending over time, (2) CMS's monitoring process, and (3) CMS's application of spending limits. GAO reviewed federal expenditure data for fiscal years 2005-2015, relevant documentation for 4 states, selected based on variation among their demonstrations, and federal internal control standards, and also interviewed CMS and state Medicaid officials. Over the last decade, federal spending under Medicaid section 1115 demonstrations, which allow states flexibility to test new approaches for delivering Medicaid services, has increased significantly. The Centers for Medicare & Medicaid Services (CMS), within the Department of Health and Human Services (HHS), took a number of steps to monitor demonstration spending in GAO's 4 selected states. However, GAO also found inconsistencies in CMS's monitoring process. For example, CMS did not consistently require selected states to report the information needed to assess compliance with demonstration spending limits. The inconsistencies may have resulted from a lack of written standard procedures. CMS officials told GAO that CMS was developing procedures to better standardize monitoring, but did not have detailed plans for doing so. 
Thus, it is too soon to determine whether these efforts will address the inconsistencies GAO found. Federal standards require that federal agencies design control activities to achieve objectives. Without standard, documented procedures, CMS may not identify cases where states are inappropriately using federal funds or exceeding spending limits. In applying demonstration spending limits, CMS allowed states to accrue unspent funds (more specifically, unused spending authority) when state spending is below the limit and use them to finance expansions of the original demonstration. For example, CMS allowed New York to use $8 billion in unspent federal funds to expand its demonstration to include an incentive payment pool for Medicaid providers. In May 2016, CMS released a slide presentation outlining new restrictions on the accrual of unspent funds. Per federal standards, formal guidance helps ensure that policies are consistently carried out. However, CMS has not issued formal guidance on the policy and does not consistently track unspent funds under the spending limit, raising questions as to whether the revised policy will be effective in better controlling costs. GAO recommends that CMS (1) develop and document standard operating procedures that set sufficient reporting requirements and require consistent monitoring and (2) issue formal guidance on its revised policy for restricting accrual of unspent funds. HHS agreed with GAO's first recommendation and neither agreed nor disagreed with GAO's second recommendation.
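The budget neutrality mechanics described above--states accrue unspent funds (unused spending authority) when actual spending falls below the approved limit, and may use them to finance demonstration expansions--can be sketched with a simple calculation. The figures and the accrual cap below are purely illustrative assumptions, not CMS data or the actual terms of the revised policy:

```python
# Hypothetical sketch of budget neutrality accounting under a section 1115
# demonstration: unspent funds accrue when actual demonstration spending is
# below the approved spending limit. All figures are illustrative.

def unspent_funds(spending_limits, actual_spending):
    """Cumulative unspent funds (unused spending authority) across
    demonstration years."""
    return sum(limit - actual
               for limit, actual in zip(spending_limits, actual_spending))

# Illustrative 5-year demonstration (dollars in billions).
limits = [10.0, 10.5, 11.0, 11.6, 12.2]   # approved spending limit per year
actuals = [8.0, 8.4, 9.0, 9.5, 10.0]      # actual demonstration spending

accrued = unspent_funds(limits, actuals)
print(f"Accrued unspent funds: ${accrued:.1f} billion")

# A revised policy restricting accrual could be modeled as a cap on the
# funds available to finance an expansion (cap value is hypothetical).
cap = 5.0
available_for_expansion = min(accrued, cap)
print(f"Available under cap: ${available_for_expansion:.1f} billion")
```

The point of the sketch is that a high spending limit mechanically generates unspent funds each year, so a limit set well above likely spending lets a state bank substantial spending authority over a multiyear demonstration.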
VBA is in the process of modernizing many of its older, inefficient systems and has reportedly spent an estimated $294 million on these activities between October 1, 1986 and February 29, 1996. The modernization program can have a major impact on the efficiency and accuracy with which over $20 billion in benefits and other services is paid to our nation's veterans and their dependents. However, in the last 6 years some aspects of VBA's service to the veterans have not improved. For example, in the past 6 years, VBA's reported processing time for an original compensation claim rose from 151 days in fiscal year 1990 to 212 days in fiscal year 1994. In March 1996 the average time was 156 days. Software development is a critical component of this major modernization initiative. VBA, with the assistance of contractors, will be developing software for the Veterans Services Network (VETSNET) initiative, a replacement for the existing Benefit Delivery Network. For efforts like VETSNET to succeed, it is crucial that VBA have in place a disciplined set of software development processes to produce high quality software within budget and on schedule. VBA relies upon its own staff and contractors to develop and maintain software that is crucial to its overall operations. In fiscal year 1995, VBA had 314 full-time equivalents, with payroll expenses of $20.8 million, devoted to developing and maintaining software throughout the organization. It also spent $17.7 million in contract services in these areas. To evaluate VA's software development capability, version 2.0 of the Software Engineering Institute's (SEI) software capability evaluation (SCE) method was used by an SEI-trained team of GAO specialists. The SCE is a method for evaluating agencies' and contractors' software development processes against SEI's five-level software Capability Maturity Model (CMM), as shown in table 1. 
These levels and the key process areas (KPAs) described within each level define an organization's ability to develop software, and can be used to improve its software development processes. The findings generated from an SCE identify (1) process strengths that mitigate risks, (2) process weaknesses that increase risks, and (3) improvement activities that indicate potential mitigation of risks. We requested that VA identify for our evaluation those projects using the best software development processes implemented within VBA and AAC. VBA and AAC identified the following sites and projects:
--Compensation & Pension/Financial Management System
--Claims Processing System
We evaluated the software development processes used on these projects, focusing on KPAs necessary to achieve a repeatable capability. Organizations that have a repeatable software development process have been able to significantly improve their productivity and return on investment. In contrast, organizations that have not developed the process discipline necessary to better manage and control their projects at the repeatable level incur greater risk of schedule delay, cost overruns, and poor quality software. These organizations rely solely upon the variable capabilities of individuals, rather than on institutionalized processes considered basic to software development. According to SEI, processes for a repeatable capability (i.e., CMM level 2) are considered the most basic in establishing discipline and control in software development and are crucial steps for any project to mitigate risks associated with cost, schedule, and quality. As shown in table 2, these processes include (1) requirements management, (2) software project planning, (3) software project tracking and oversight, (4) software subcontract management, (5) software quality assurance, and (6) software configuration management.
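The relationship between KPA satisfaction and a repeatable (level 2) rating described above can be sketched as a simple checklist. This is a simplification of the SEI evaluation method, and the sample findings are illustrative, not the evaluation results themselves:

```python
# Simplified sketch of a CMM level-2 readiness check: an organization is
# "repeatable" only if every level-2 key process area (KPA) is satisfied.
# The KPA list follows the six areas named in the report; the sample
# findings below are illustrative.

LEVEL_2_KPAS = [
    "requirements management",
    "software project planning",
    "software project tracking and oversight",
    "software subcontract management",
    "software quality assurance",
    "software configuration management",
]

def is_repeatable(findings):
    """findings maps each KPA name to True (satisfied) or False.
    A missing KPA counts as unsatisfied."""
    return all(findings.get(kpa, False) for kpa in LEVEL_2_KPAS)

# Illustrative findings resembling the report's conclusion that no
# level-2 KPA was satisfied at VBA or AAC.
sample_findings = {kpa: False for kpa in LEVEL_2_KPAS}
print(is_repeatable(sample_findings))
```

The all-or-nothing structure mirrors why satisfying some activities within a KPA (as VBA and AAC did) is not enough: a single unsatisfied KPA keeps an organization below the repeatable level.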
We conducted our review between August 1995 and February 1996, in accordance with generally accepted government auditing standards. Highlights of our evaluation of VBA's software practices using the SEI criteria outlined in appendix II follow. Requirements Management - The purpose of requirements management is to establish a common understanding between the customer and the software project of the customer's requirements that will be addressed by the software project. The first goal within this KPA states that, "system requirements allocated to software are controlled to establish a baseline for software engineering and management use." VBA does not manage and control system requirements as required by this goal. Moreover, members of software-related groups are not trained in requirements management activities. Also, changes made to software plans, work products, and activities resulting from changes to the software requirements are not assessed for risk. Software Project Planning - The purpose of software project planning is to establish reasonable plans for performing the software engineering and for managing the software project. VBA projects do not have software development plans, estimates for software project costs are not derived using conventional industry methods and tools, and VBA is unable to show the derivation of the estimates for the size (or changes to the size) of the software work products. Also, individuals involved in the software project planning are not trained in estimating and planning procedures applicable to their area of responsibility. Software Project Tracking and Oversight - The purpose of software project tracking and oversight is to provide adequate visibility into actual progress so that management can take effective actions when the software project's performance deviates significantly from software plans. 
VBA does track software project schedules against major milestones; however, as mentioned previously, these schedules and milestones are not derived using conventional industry methods nor is there a comprehensive software plan against which to track activities. Moreover, the size of software work products (or the size of changes to software work products) are not tracked, and the software risks associated with cost, resource, schedule, and technical aspects of the project are not tracked. Software Subcontract Management - The purpose of software subcontract management is to select qualified software subcontractors and manage them effectively. VBA does not have a written organizational policy that describes the process for managing software contracts. Additionally, the software work to be contracted is neither defined nor planned according to a documented procedure. Finally, software managers and other individuals who are involved in developing, negotiating, and managing a software contract are not trained to perform these activities. Software Quality Assurance - The purpose of software quality assurance is to provide management with appropriate visibility into the process being used by the software project and of the products being built. VBA has a software quality and control (SQ&C) group that has a reporting channel to senior management, independent of the project managers. The SQ&C group also performs testing of the software code. However, the SQ&C group does not participate in other software quality assurance (SQA) functions, such as the preparation, review, and audit of projects' software development plans, standards, procedures, and other work products. Also, projects do not have SQA plans. Software Configuration Management - The purpose of software configuration management is to establish and maintain the integrity of products of the software project throughout the project's software life cycle. 
VBA has provided formal training to its staff in defining software processes. However, VBA cannot effectively control the integrity of its software work products because it has no software configuration control board, it does not identify software work products to be placed under configuration management, and it has no configuration management library system to serve as a repository for software work products. VBA has begun improvement activities in this area by (1) establishing a software configuration management group and (2) drafting a software configuration management procedure. Following a presentation of GAO's SCE results to the Chief Information Officer of VBA, the Director of VBA's Office of Information Systems forwarded a letter to GAO citing a number of initiatives that are currently underway to address some of the stated deficiencies. Initiatives cited by the VBA include: development and distribution of interim configuration management procedures; identification of a library structure to hold all of the work products from the development process; and initiation of several meetings with SEI to discuss the Software CMM. Similar to VBA, we compared the CMM criteria in appendix II to the software development practices at AAC. Summary results of this evaluation follow. Requirements Management - AAC does not create or control a requirements baseline for software engineering. Also, AAC does not manage or control requirements. AAC does have a process for negotiating periodic contractual arrangements with customers, but this process does not include baselining and controlling software requirements. Software Project Planning - Although AAC documents its schedule estimates for software development projects, there is (1) no defined methodology in use for estimating software costs, size, or schedule, (2) no derivation of estimates for the size (or changes to the size) of software products, and (3) no derivation of the estimates for software project costs. 
Similarly, AAC uses a project planning tool called "MultiTrak". However, projects do not have software development plans. Software Project Tracking and Oversight - AAC performs schedule tracking at major milestones. However, the goals for this KPA call for (1) the tracking of actual results and performances against software plans, (2) the management of corrective actions when deviations from the software plan occur, and (3) the affected parties to mutually agree to changes in commitments. AAC does not conform to these goals. For example, AAC does not track (1) the software risks associated with cost, resource, schedule, and technical aspects of the project and (2) the size of software work products (or size of changes to software work products). Software Subcontract Management - Although the goals for this KPA emphasize the selection of qualified software subcontractors and managing them effectively, AAC does not (1) have a documented procedure that explains how the work to be contracted should be defined and planned and (2) ensure that software managers and other individuals who are involved in establishing a software contract are trained to perform this activity. Software Quality Assurance - The goals within this KPA emphasize (1) the verification of the adherence of software products and activities to applicable standards, procedures, and requirements and (2) the reporting of noncompliance issues that cannot be resolved within the project to senior management. AAC has an automated data processing system integrity guideline and a systems integration service (SIS) group that has a reporting channel to senior management and is independent of the project managers. However, projects do not have SQA plans; the SIS group does not participate in certain SQA functions, such as the preparation, review, and audit of projects' software development plans, standards, and procedures; and members of the SIS group are not trained to perform their SQA activities. 
Software Configuration Management - AAC performs software (i.e., code only) change control using a tool called "ENDEVOR," and its employees are trained in the use of this tool. However, the scope of the goals within this KPA cover all products in the entire software life cycle and not just the software code. AAC has not identified software work products (with the exception of software code) that need to be placed under configuration management, established a configuration management library system that can be used as a repository for software work products, or established a software configuration control board. Unless both VBA and AAC initiate improvement activities within the various KPAs and accelerate those already underway, they are unlikely to produce and maintain high-quality software on time and within budget. Because VBA and AAC do not satisfy any of the KPAs required for a level 2 (i.e., repeatable) capability, there is no assurance that (1) investments made in new software development will achieve their operational improvement objectives or (2) software will be delivered consistent with cost and schedule estimates. To better position VBA and AAC to develop and maintain their software successfully and to protect their software investments, we recommend that the Secretary of Veterans Affairs take the following actions: Delay any major investment in software development beyond that which is needed to sustain critical day-to-day operations until the repeatable level of process maturity is attained. Obtain expert advice to assist VBA and AAC in improving their ability to develop high-quality software, consistent with criteria promulgated by SEI. Develop an action plan, within 6 months from the date of this letter, that describes a strategy to reach the repeatable level of process maturity. Implement the action plan expeditiously. 
Ensure that any future contracts for software development require that the contractor have a software development capability of at least CMM level 2. VBA's comments responded to its SCE results, and VA's comments responded to the SCE results for AAC. In commenting on a draft of this report, the Veterans Benefits Administration (VBA) agreed with four of our recommendations and disagreed with one recommendation. VBA stated that while it agreed that a repeatable (i.e., level 2) level of process maturity is a goal that must be attained, it disagreed that "...all software development beyond that which is day-to-day critical must be curtailed..." VBA further stated that the payment system replacement projects, the migration of legacy systems, and other activities to address the change of century must continue. While we agree that the software conversion or development activities required to address issues such as the change of century or changes to legislation must continue, we would characterize these as sustaining critical day-to-day operations. However, major system development initiatives in support of major projects such as the system modernization effort, which involves several system replacement projects and the migration of legacy systems, and VETSNET, which includes several payment system replacement projects, should be reassessed for risk of potential schedule slippage, cost overrun, and shortfall in anticipated system functions and features. Shortcomings such as these are more likely from organizations with a software development maturity rating below level 2 (i.e., the repeatable level). Therefore, to minimize software development risks, we continue to believe that VBA should delay any major investment in software development unless it is required to sustain day-to-day operations, until a maturity rating of level 2 is reached.
Regarding the remaining four recommendations, we are pleased to see that VBA is already initiating positive actions, including acquiring the assistance of the Software Engineering Institute. VA stated that we did not demonstrate a willingness or flexibility in relating AAC documentation products, activities, and terms to the SEI terms. We reviewed all documentation provided to us by VA including the documents listed in their comments on our draft report. As called for by the SCE methodology, we carefully compared all this documentation to the SEI CMM criteria. As stated throughout our report, we found some strengths but in many cases, VA's documentation was not commensurate with that called for by the SCE methodology. Our comments on the specific key process areas follow. The VA comments stated that the OFM/IRM Business Agreement, dated September 1994, contains guidelines which mandate the management of software requirements. However, in our review of the documentation listed under requirements management (Enclosure 1: Documents Addressing Key Process Area), we found no evidence that these documents addressed any of the goals of this KPA. For example, (1) the allocated requirements are neither managed, controlled, nor baselined, and (2) no software development plans were developed based on the allocated requirements. VA feels that the AAC Business Agreement and the negotiated quarterly contract satisfies this KPA; however, we found that AAC does not perform a majority of the activities required to meet the goals within this KPA. For example, AAC was not able to submit evidence for estimating software size and cost, nor did AAC demonstrate any methodology used for estimating schedules. VA stated that project size and risk remain consistent throughout the development/implementation cycle. However, AAC did not provide our SCE team with any evidence validating this assertion and, as discussed on page 8, AAC does not track this information. 
VA claims that specific written policies and procedures are followed when managing software contracts; however, AAC staff interviewed were unable to provide us with any specific policies or procedures used for software contracting. The AAC staff acknowledged that they do not track (1) software contractor performance at the coding level (i.e., track functionality only) or (2) contractor produced software documentation. Regarding training for software contract management, VA stated that its COTRs receive training in procurement, project management, and evaluating contractor performance. However, there is no indication that these courses are specific to software contracting. In addition, other individuals involved in establishing the software contract for the projects reviewed had not received contract management training related to software. VA states that its ADP System Integrity Guide, dated September 1994, contains detailed procedures directing the SIS group in specific SQA functions. Although this is a good first step, the AAC is still deficient because it does not have project-specific software quality assurance plans that are implemented for individual projects, as required by this KPA within the CMM. Furthermore, we were not provided with any evidence showing that the ADP System Integrity Guide has been officially issued or whether its use will be mandatory or discretionary. The VA comments do not present any additional evidence that would help to satisfy the criteria for this KPA. Specifically, communication between the SIS, AAC staff, and customer does not substitute for the rigor and discipline of a software configuration control board, which VA acknowledged they do not have. Furthermore, the placement of software code under configuration management is not sufficient to satisfy this KPA because other software work products--such as system design specifications, database specifications, and computer program specifications--are also required.
Finally, although the AAC does maintain a library of those software work products that it does produce, the products are not maintained under a formal software configuration management discipline, which would include version control and rigorous requirements traceability. We are sending copies of this report to the Chairmen and Ranking Minority Members of the House and Senate Committees on Veterans Affairs and the House and Senate Committees on Appropriations; the Secretary of Veterans Affairs; and the Director, Office of Management and Budget. Copies will also be made available to other interested parties upon request. This work was performed under the direction of William S. Franklin, Director, Information Systems Methodology and Support, who can be reached at (202) 512-6234. Other major contributors are listed in appendix IV. The following is GAO's comment on the Department of Veterans Affairs' May 24, 1996, letter. 1. This issue is not addressed in our report.
Requirements Management - To establish a common understanding between the customer and the software project of the customer's requirements that will be addressed by the software project.
Goal 1: System requirements allocated to software are controlled to establish a baseline for software engineering and management use.
Goal 2: Software plans, products, and activities are kept consistent with the system requirements allocated to software.
Software Project Planning - To establish reasonable plans for performing the software engineering and for managing the software project.
Goal 1: Software estimates are documented for use in planning and tracking the software project.
Goal 2: Software project activities and commitments are planned and documented.
Goal 3: Affected groups and individuals agree to their commitments related to the software project.
Software Project Tracking and Oversight - To provide adequate visibility into actual progress so that management can take effective actions when the software project's performance deviates significantly from software plans.
Goal 1: Actual results and performances are tracked against the software plans.
Goal 2: Corrective actions are taken and managed to closure when actual results and performance deviate significantly from the software plans.
Goal 3: Changes to software commitments are agreed to by the affected groups and individuals.
Software Subcontract Management - To select qualified software subcontractors and manage them effectively.
Goal 1: The organization selects qualified software subcontractors.
Goal 2: The organization and the software subcontractor agree to their commitments to each other.
Goal 3: The organization and the software subcontractor maintain ongoing communications.
Goal 4: The organization tracks the software subcontractors' actual results and performance against its commitments.
Software Quality Assurance - To provide management with appropriate visibility into the process being used by the software project and of the products being built.
Goal 1: Software quality assurance activities are planned.
Goal 2: Adherence of software products and activities to the applicable standards, procedures, and requirements is verified objectively.
Goal 3: Affected groups and individuals are informed of software quality assurance activities and results.
Goal 4: Noncompliance issues that cannot be resolved within the software project are addressed by senior management.
Software Configuration Management - To establish and maintain the integrity of products of the software project throughout the project's software life cycle.
Goal 1: Software configuration management activities are planned.
Goal 2: Selected software work products are identified, controlled, and available.
Goal 3: Changes to identified software work products are controlled.
Goal 4: Affected groups and individuals are informed of the status and content of software baselines.
David Chao, SCE Team Leader; Gary R. Austin, SCE Team Member; K. Alan Merrill, SCE Team Member; Madhav S. Panwar, SCE Team Member; Keith A. Rhodes, SCE Team Member; Paul Silverman, SCE Team Member.
The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015; or Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000, by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed software development processes and practices at the Department of Veterans Affairs' Veterans Benefits Administration (VBA) and Austin Automation Center (AAC). GAO found that: (1) neither VBA nor AAC satisfy any of the criteria for a repeatable software development capability; (2) VBA and AAC do not adequately define systems requirements, train personnel, plan software development projects, estimate costs or schedules, track software project schedules or changes, manage software subcontractors, or maintain quality assurance and software configuration procedures; (3) VBA initiatives to improve its software development processes include developing and distributing interim configuration management procedures, identifying a library structure for all work products, and meeting with the Software Engineering Institute (SEI) to discuss software development; (4) VBA and AAC cannot reliably develop and maintain high-quality software on any major project within existing cost and schedule constraints; and (5) VBA and AAC can use their strengths in software quality assurance and their improvement activities in software configuration management as a foundation for improving their software development processes.
DOD is not receiving expected returns on its large investment in weapon systems. The total acquisition cost of DOD's 2007 portfolio of major programs under development or in production has grown by nearly $300 billion over initial estimates. While DOD is committing substantially more investment dollars to develop and procure new weapon systems, our analysis shows that the 2007 portfolio is experiencing greater cost growth and schedule delays than the fiscal years 2000 and 2005 portfolios (see table 1). Total acquisition costs for programs in DOD's fiscal year 2007 portfolio have increased 26 percent from first estimates--compared to a 6-percent increase for programs in its fiscal year 2000 portfolio. Total RDT&E costs for programs in 2007 have increased by 40 percent from first estimates, compared to 27 percent for programs in 2000. The story is no better when expressed in unit costs. Schedule delays also continue to impact programs. On average, the current portfolio of programs has experienced a 21-month delay in delivering initial operational capability to the warfighter, and 14 percent are more than 4 years late. Continued cost growth results in less funding being available for other DOD priorities and programs, while continued failure to deliver weapon systems on time delays providing critical capabilities to the warfighter. Put simply, cost growth reduces DOD's buying power. As program costs increase, DOD must request more funding to cover the overruns, make trade-offs with existing programs, delay the start of new programs, or take funds from other accounts. Delays in providing capabilities to the warfighter result in the need to operate costly legacy systems longer than expected, find alternatives to fill capability gaps, or go without the capability. The warfighter's urgent need for the new weapon system is often cited when the case is first made for developing and producing the system. 
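The cost-growth metric cited above (percent increase of a current estimate over the first estimate) is straightforward to compute. A minimal sketch, using round illustrative numbers rather than the actual portfolio figures:

```python
# Sketch of the cost-growth metric used in the portfolio comparison:
# percent change of a current estimate from the first (baseline) estimate.
# The dollar figures here are illustrative, not DOD's actual estimates.

def percent_growth(first_estimate, current_estimate):
    """Percent growth of current_estimate relative to first_estimate."""
    return 100.0 * (current_estimate - first_estimate) / first_estimate

# Example: a portfolio first estimated at $1,000 billion and now estimated
# at $1,260 billion shows 26 percent growth, matching the kind of figure
# reported for the fiscal year 2007 portfolio.
print(f"{percent_growth(1000.0, 1260.0):.0f}%")
```

The same calculation applies to the RDT&E and unit-cost growth figures; only the baseline and current estimates change.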
However, DOD has already missed fielding dates for many programs and many others are behind schedule. Over the past several years our work has highlighted a number of underlying systemic causes for cost growth and schedule delays both at the strategic and at the program level. At the strategic level, DOD's processes for identifying warfighter needs, allocating resources, and developing and procuring weapon systems--which together define DOD's overall weapon system investment strategy--are fragmented and broken. At the program level, the military services propose and DOD approves programs without adequate knowledge about requirements and the resources needed to successfully execute the program within cost, schedule, and performance targets. DOD largely continues to define warfighting needs and make investment decisions on a service-by-service basis, and assess these requirements and their funding implications under separate decision-making processes. While DOD's requirements process provides a framework for reviewing and validating needs, it does not adequately prioritize those needs and is not agile enough to meet changing warfighter demands. Ultimately, the process produces more demand for new programs than available resources can support. This imbalance promotes an unhealthy competition for funds that encourages programs to pursue overly ambitious capabilities, develop unrealistically low cost estimates and optimistic schedules, and suppress bad news. Similarly, DOD's funding process does not produce an accurate picture of the department's future resource needs for individual programs--in large part because it allows programs to go forward with unreliable cost estimates and lengthy development cycles--not a sound basis for allocating resources and ensuring program stability. Invariably, DOD and the Congress end up continually shifting funds to and from programs--undermining well-performing programs to pay for poorly performing ones. 
At the program level, the key cause of poor outcomes is the consistent lack of disciplined analysis that would provide an understanding of what it would take to field a weapon system before system development. Our body of work in best practices has found that an executable business case is one that provides demonstrated evidence that (1) the identified needs are real and necessary and that they can best be met with the chosen concept and (2) the chosen concept can be developed and produced within existing resources--including technologies, funding, time, and management capacity. Although DOD has taken steps to revise its acquisition policies and guidance to reflect the benefits of a knowledge-based approach, we have found no evidence of widespread adoption of such an approach in the department. Our most recent assessment of major weapon systems found that the vast majority of programs began development with unexecutable business cases, and did not attain, or plan to achieve, adequate levels of knowledge before reaching design review and production start--the two key junctures in the process following development start (see figure 2). Knowledge gaps are largely the result of a lack of disciplined systems engineering analysis prior to beginning system development. Systems engineering translates customer needs into specific product requirements for which requisite technological, software, engineering, and production capabilities can be identified through requirements analysis, design, and testing. Early systems engineering provides knowledge that enables a developer to identify and resolve gaps before product development begins. Because the government often does not perform the proper up-front analysis to determine whether its needs can be met, significant contract cost increases can occur as the scope of the requirements change or become better understood by the government and contractor. 
Not only does DOD not typically conduct disciplined systems engineering prior to beginning system development, it has allowed new requirements to be added well into the acquisition cycle. The acquisition environment encourages launching ambitious product developments that embody more technical unknowns and less knowledge about the performance and production risks they entail. A new weapon system is not likely to be approved unless it promises the best capability and appears affordable within forecasted available funding levels. We have recently reported on the negative impact that poor systems engineering practices have had on several programs such as the Global Hawk Unmanned Aircraft System, F-22A, Expeditionary Fighting Vehicle, Joint Air-to-Surface Standoff Missile and others. With high levels of uncertainty about technologies, design, and requirements, program cost estimates and related funding needs are often understated, effectively setting programs up for failure. We recently compared the service and independent cost estimates for 20 major weapon system programs and found that the independent estimate was higher in nearly every case, but the difference between the estimates was typically not significant. We also found that both estimates were too low in most cases, and the knowledge needed to develop realistic cost estimates was often lacking. For example, program Cost Analysis Requirements Description documents--used to build the program cost estimate--are not typically based on demonstrated knowledge and therefore provide a shaky foundation for estimating costs. Cost estimates have proven to be off by billions of dollars in some of the programs we reviewed. For example, the initial Cost Analysis Improvement Group estimate for the Expeditionary Fighting Vehicle program was about $1.4 billion compared to a service estimate of about $1.1 billion, but development costs for the system are now expected to be close to $3.6 billion. 
Estimates this far off the mark do not provide the necessary foundation for sufficient funding commitments and realistic long-term planning. When DOD consistently allows unsound, unexecutable programs to pass through the requirements, funding, and acquisition processes, accountability suffers. Program managers cannot be held accountable when the programs they are handed already have a low probability of success. In addition, they are not empowered to make go or no-go decisions, have little control over funding, cannot veto new requirements, and have little authority over staffing. At the same time, program managers frequently change during a program's development. Limiting the length of development cycles would make it easier to more accurately estimate costs, predict future funding needs, effectively allocate resources, and hold decision makers accountable. We have consistently emphasized the need for DOD's weapon programs to establish shorter development cycles. DOD's conventional acquisition process often requires as many as 10 or 15 years to get from program start to production. Such lengthy cycle times promote program instability--especially when considering DOD's tendency to change requirements and funding as well as leadership. Constraining cycle times to 5 or 6 years would force programs to conduct more detailed systems engineering analyses and lend itself to fully funding programs to completion, thereby increasing the likelihood that their requirements can be met within established time frames and available resources. An assessment of DOD's acquisition system commissioned by the Deputy Secretary of Defense in 2006 similarly found that programs should be time-constrained to reduce pressure on investment accounts and increase funding stability for all programs. Our work shows that acquisition problems will likely persist until DOD provides a better foundation for buying the right things, the right way. 
This involves (1) maintaining the right mix of programs to invest in by making better decisions as to which programs should be pursued given existing and expected funding and, more importantly, deciding which programs should not be pursued; (2) ensuring that programs that are started can be executed by matching requirements with resources and locking in those requirements; and (3) making it clear that programs will then be executed based on knowledge and holding program managers responsible for that execution. We have made similar recommendations in past GAO reports, but DOD has disagreed with some and not fully implemented others. These changes will not be easy to make. They will require DOD to reexamine not only its acquisition process, but its requirement setting and funding processes as well. They will also require DOD to change how it views program success, and what is necessary to achieve success. This includes changing the environment and incentives that lead DOD and the military services to overpromise on capability and underestimate costs in order to sell new programs and capture the funding needed to start and sustain them. Finally, none of this will be achieved without a true partnership among the department, the military services, the Congress, and the defense industry. All of us must embrace the idea of change and work diligently to implement it. The first, and most important, step toward improving acquisition outcomes is implementing a new DOD-wide investment strategy for weapon systems. We have reported that DOD should develop an overarching strategy and decision-making processes that prioritize programs based on a balanced match between customer needs and available department resources--that is, the dollars, technologies, time, and people needed to achieve these capabilities. 
We also recommended that capabilities not designated as a priority should be set out separately as desirable but not funded unless resources were both available and sustainable. This means that the decision makers responsible for weapon system requirements, funding, and acquisition execution must establish an investment strategy in concert. DOD's Under Secretary of Defense for Acquisition, Technology and Logistics--DOD's corporate leader for acquisition--should develop this strategy in concert with other senior leaders, for example, combatant commanders who would provide input on user needs; DOD's comptroller and science and technology leaders, who would provide input on available resources; and acquisition executives from the military services, who could propose solutions. Finally, once priority decisions are made, Congress will need to enforce discipline through its legislative and oversight mechanisms. Once DOD has prioritized capabilities, it should work vigorously to make sure each new program can be executed before the acquisition begins. More specifically, this means assuring requirements for specific weapon systems are clearly defined and achievable given available resources and that all alternatives have been considered. System requirements should be agreed to by service acquisition executives as well as combatant commanders. Once programs begin, requirements should not change without assessing their potential disruption to the program and assuring that they can be accommodated within time and funding constraints. In addition, DOD should prove that technologies can work as intended before including them in acquisition programs. More ambitious technology development efforts should be assigned to the science and technology community until they are ready to be added to future generations of the product. DOD should also require the use of independent cost estimates as a basis for budgeting funds. 
Our work over the past 10 years has consistently shown when these basic steps are taken, programs are better positioned to be executed within cost and schedule. To keep programs executable, DOD should demand that all milestone decisions be based on quantifiable data and demonstrated knowledge. These data should cover critical program facets such as cost, schedule, technology readiness, design readiness, production readiness, and relationships with suppliers. Development should not be allowed to proceed until certain knowledge thresholds are met--for example, a high percentage of engineering drawings completed at critical design review. DOD's current policies encourage these sorts of metrics to be used as a basis for decision making, but they do not demand it. DOD should also place boundaries on the time allowed for system development. To further ensure that programs can be executed, DOD should pursue an evolutionary path toward meeting user needs rather than attempting to satisfy all needs in a single step. This approach has been consistently used by successful commercial companies we have visited over the past decade because it provides program managers with more achievable requirements, which, in turn, facilitate shorter cycle times. With shorter cycle times, the companies we have studied have also been able to assure that program managers and senior leaders stay with programs throughout the duration of a program. DOD has policies that encourage evolutionary development, but programs often favor pursuing more revolutionary, exotic solutions that will attract funds and support. The department and, more importantly, the military services, tend to view success as capturing the funding needed to start and sustain a development program. In order to do this, they must overpromise capability and underestimate cost. In order for DOD to move forward, this view of success must change. 
World-class commercial firms identify success as developing products within cost estimates and delivering them on time in order to survive in the marketplace. This forces incremental, knowledge-based product development programs that improve capability as new technologies are matured. To strengthen accountability, DOD must also clearly delineate responsibilities among those who have a role in deciding what to buy as well as those who have a role in executing, revising, and terminating programs. Within this context, rewards and incentives must be altered so that success can be viewed as delivering needed capability at the right price and the right time, rather than attracting and retaining support for numerous new and ongoing programs. To enable accountability to be exercised at the program level once a program begins, DOD will need to (1) match program manager tenure with development or the delivery of a product; (2) tailor career paths and performance management systems to incentivize longer tenures; (3) strengthen training and career paths as needed to ensure program managers have the right qualifications to manage the programs they are assigned to; (4) empower program managers to execute their programs, including an examination of whether and how much additional authority can be provided over funding, staffing, and approving requirements proposed after the start of a program; and (5) develop and provide automated tools to enhance management and oversight as well as to reduce the time required to prepare status information. DOD also should hold contractors accountable for results. As we have recommended, this means structuring contracts so that incentives actually motivate contractors to achieve desired acquisition outcomes and withholding fees when those goals are not met. 
Recognizing the need for more discipline and accountability in the acquisition process, Congress recently enacted legislation that, if followed, could result in a better chance to spend resources wisely. Likewise, DOD has recently begun to develop several initiatives, based in part on congressional direction and GAO recommendations that, if implemented properly, could also provide a foundation for establishing a well balanced investment strategy and sound, knowledge-based business cases for individual acquisition programs. Congress has enacted legislation that requires DOD to take certain actions which, if followed, could instill more discipline into the front-end of the acquisition process when key knowledge is gained and ultimately improve acquisition outcomes. For example, legislation enacted in 2006 and 2008 requires decision-makers to certify that specific levels of knowledge have been demonstrated at key decision points early in the acquisition process before programs can receive milestone approval for the technology development phase or the system development phase respectively. The 2006 legislation also requires programs to track unit cost growth against their original baseline estimates--and not only their most recent estimates--and requires an additional assessment of the program if certain cost growth thresholds are reached. Other key legislation requires DOD to report on the department's strategies for balancing the allocation of funds and other resources among major defense acquisition programs, and to identify strategies for enhancing the role of program managers in carrying out acquisition programs. DOD has also initiated actions aimed at improving investment decisions and weapon system acquisition outcomes, based in part on congressional direction and GAO recommendations. 
Each of the initiatives is designed to enable more informed decisions by key department leaders well ahead of a program's start, decisions that provide a closer match between each program's requirements and the department's resources. For example: DOD is experimenting with a new concept decision review, different acquisition approaches according to expected fielding times, and panels to review weapon system configuration changes that could adversely affect program cost and schedule. DOD is also testing portfolio management approaches in selected capability areas to facilitate more strategic choices about how to allocate resources across programs and also testing the use of capital budgeting as a potential means to stabilize program funding. In September 2007, the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics issued a policy memorandum to ensure weapons acquisition programs were able to demonstrate key knowledge elements that could inform future development and budget decisions. This policy directed pending and future programs to include acquisition strategies and funding that provide for contractors to develop technically mature prototypes prior to initiating system development, with the hope of reducing technical risk, validating designs and cost estimates, evaluating manufacturing processes, and refining requirements. DOD also plans to implement new practices that reflect past GAO recommendations intended to provide program managers more incentives, support, and stability. The department acknowledges that any actions taken to improve accountability must be based on a foundation whereby program managers can launch and manage programs toward greater performance, rather than focusing on maintaining support and funding for individual programs. DOD acquisition leaders have told us that any improvements to program managers' performance hinge on the success of these departmental initiatives. 
In addition, DOD has taken actions to strengthen the link between award and incentive fees and desired program outcomes, which has the potential to increase the accountability of DOD programs for fees paid and of contractors for results achieved. If adopted and implemented properly, these actions could provide a foundation for establishing sound, knowledge-based business cases for individual acquisition programs, and the means for executing those programs within established cost, schedule, and performance goals. DOD understands what it needs to do at the strategic and at the program level to improve acquisition outcomes. The strategic vision of the current Under Secretary of Defense for Acquisition, Technology and Logistics acknowledges the need to create a high-performing, boundary-less organization--one that seeks out new ideas and new ways of doing business and is prepared to question requirements and traditional processes. Past efforts have had similar goals, yet we continue to find all too often that DOD's investment decisions are too service- and program-centric and that the military services overpromise capabilities and underestimate costs to capture the funding needed to start and sustain development programs. This acquisition environment has been characterized in many different ways. For example, some have described it as a "conspiracy of hope," in which industry is encouraged to propose unrealistic cost estimates, optimistic performance, and understated technical risks during the proposal process and DOD is encouraged to accept these proposals as the foundation for new programs. However it is characterized, it is clear that DOD's implied definition of success is to attract funds for new programs and to keep funds for ongoing programs, no matter what the impact. DOD and the military services cannot continue to view success through this prism. 
More legislation can be enacted and policies can be written, but until DOD begins making better choices that reflect joint capability needs and matches requirements with resources, the acquisition environment will continue to produce poor outcomes. It should not be necessary to take extraordinary steps to ensure needed capabilities are delivered to the warfighter on time and within costs. Executable programs should be the natural outgrowth of a disciplined, knowledge-based process. While DOD's current policy supports a knowledge-based, evolutionary approach to acquiring new weapons, in practice decisions made on individual programs often sacrifice knowledge and realism in favor of revolutionary solutions. Meaningful and lasting reform will not be achieved until DOD changes the acquisition environment and the incentives that drive the behavior of DOD decision-makers, the military services, program managers, and the defense industry. Finally, no real reform can be achieved without a true partnership among all these players and the Congress. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. For further questions about this statement, please contact Michael J. Sullivan at (202) 512-4841. Individuals making key contributions to this statement include Ron Schwenn, Assistant Director; Kenneth E. Patton, and Alyssa B. Weir. Defense Acquisitions: A Knowledge-Based Funding Approach Could Improve Major Weapon System Program Outcomes. GAO-08-619. Washington, D.C.: July 2, 2008. Defense Acquisitions: Better Weapon Program Outcomes Require Discipline, Accountability, and Fundamental Changes in the Acquisition Environment. GAO-08-782T. Washington, D.C.: June 3, 2008. Defense Acquisitions: Results of Annual Assessment of DOD Weapon Programs. GAO-08-674T. Washington, D.C.: April 29, 2008. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-08-467SP. Washington, D.C.: March 31, 2008. 
Best Practices: Increased Focus on Requirements and Oversight Needed to Improve DOD's Acquisition Environment and Weapon System Quality. GAO-08-294. Washington, D.C.: February 1, 2008. Cost Assessment Guide: Best Practices for Estimating and Managing Program Costs. GAO-07-1134SP. Washington, D.C.: July 2007. Defense Acquisitions: Assessments of Selected Weapon Programs. GAO-07-406SP. Washington, D.C.: March 30, 2007. Best Practices: An Integrated Portfolio Management Approach to Weapon System Investments Could Improve DOD's Acquisition Outcomes. GAO-07-388. Washington, D.C.: March 30, 2007. Best Practices: Better Support of Weapon System Program Managers Needed to Improve Outcomes. GAO-06-110. Washington, D.C.: November 1, 2005. Defense Acquisitions: Major Weapon Systems Continue to Experience Cost and Schedule Problems under DOD's Revised Policy. GAO-06-368. Washington, D.C.: April 13, 2006. Defense Acquisitions: Stronger Management Practices Are Needed to Improve DOD's Software-Intensive Weapon Acquisitions. GAO-04-393. Washington, D.C.: March 1, 2004. Best Practices: Setting Requirements Differently Could Reduce Weapon Systems' Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003. Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002. Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002. Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001. Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes. GAO/NSIAD-00-199. Washington, D.C.: July 31, 2000. Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000. 
Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999. Defense Acquisitions: Best Commercial Practices Can Improve Program Outcomes. GAO/T-NSIAD-99-116. Washington, D.C.: March 17, 1999. Defense Acquisitions: Improved Program Outcomes Are Possible. GAO/T-NSIAD-98-123. Washington, D.C.: March 17, 1998. Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD's Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998. Best Practices: Commercial Quality Assurance Practices Offer Improvements for DOD. GAO/NSIAD-96-162. Washington, D.C.: August 26, 1996. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since 1990, GAO has designated the Department of Defense's (DOD) management of major weapon system acquisitions a high-risk area. DOD has taken some action to improve acquisition outcomes, but its weapon programs continue to take longer, cost more, and deliver fewer capabilities than originally planned. These persistent problems--coupled with current operational demands--have impelled DOD to work outside of its traditional acquisition process to acquire equipment that meets urgent warfighter needs. Poor outcomes in DOD's weapon system programs reverberate across the entire federal government. Over the next 5 years, DOD expects to invest more than $357 billion on the development and procurement of major defense acquisition programs. Every dollar wasted on acquiring weapon systems is less money available for other priorities. This testimony describes DOD's current weapon system investment portfolio, the problems that contribute to cost and schedule increases, potential solutions based on past GAO recommendations, and recent legislative initiatives and DOD actions aimed at improving outcomes. It also provides some observations about what is needed for DOD to achieve lasting reform. The testimony is drawn from GAO's body of work on DOD's acquisition, requirements, and funding processes, as well as its most recent annual assessment of selected DOD weapon programs. DOD is not receiving expected returns on its large investment in weapon systems. Since fiscal year 2000, DOD significantly increased the number of major defense acquisition programs and its overall investment in them. During this same time period, the performance of the DOD portfolio has gotten worse. The total acquisition cost of DOD's 2007 portfolio of major programs under development or in production has grown by nearly $300 billion over initial estimates. 
Current programs are also experiencing, on average, a 21-month delay in delivering initial capabilities to the warfighter--often forcing DOD to spend additional funds on maintaining legacy systems. Systemic problems both at the strategic and at the program level underlie cost growth and schedule delays. At the strategic level, DOD's processes for identifying warfighter needs, allocating resources, and developing and procuring weapon systems--which together define DOD's overall weapon system investment strategy--are fragmented and broken. At the program level, weapon system programs are initiated without sufficient knowledge about system requirements, technology, and design maturity. Lacking such knowledge, managers rely on assumptions that are consistently too optimistic, exposing programs to significant and unnecessary risks and ultimately cost growth and schedule delays. Our work shows that acquisition problems will likely persist until DOD provides a better foundation for buying the right things, the right way. This involves making tough decisions as to which programs should be pursued, and more importantly, not pursued; making sure programs can be executed; locking in requirements before programs are ever started; and making it clear who is responsible for what and holding people accountable when responsibilities are not fulfilled. Recent congressionally mandated changes to the DOD acquisition system, as well as initiatives being pursued by the department, include positive steps that, if implemented properly, could provide a foundation for establishing a well balanced investment strategy, sound business cases for major weapon system acquisition programs, and a better chance to spend resources wisely. At the same time, DOD must begin making better choices that reflect joint capability needs and match requirements with resources. 
DOD investment decisions cannot continue to be dictated by the military services who propose programs that overpromise capabilities and underestimate costs to capture the funding needed to start and sustain development programs. To better ensure warfighter capabilities are delivered when needed and as promised, incentives must encourage a disciplined, knowledge-based approach, and a true partnership with shared goals must be developed among the department, the military services, the Congress, and the defense industry.
This background section discusses (1) NNSA's methods of accounting for and tracking costs, (2) the legislative requirement for NNSA to develop a plan to improve and integrate financial management, and (3) leading practices for strategic planning. NNSA contractors use different methods to account for costs, according to NNSA officials. In general, federal Cost Accounting Standards govern how federal contractors, including NNSA's, account for costs. Federal Cost Accounting Standards provide direction for the consistent and equitable distribution of contractors' costs to help federal agencies more accurately determine the actual costs of their contracts, projects, and programs. In particular, these standards establish requirements for the measurement, assignment, and allocation of costs to government contracts and provide criteria for the classification and allocation of indirect costs. To allocate costs to programs, contractors are to classify costs as either direct or indirect. Direct costs are assigned to the benefitting program or programs. Indirect costs--those costs that cannot be assigned to a particular program, such as costs for administration and site support--are to be accumulated, or grouped, into indirect cost pools. The contractor is to estimate the amount of indirect costs (accumulated into indirect cost pools) that will need to be distributed to each program and adjust the costs to actual costs by the end of the fiscal year. The contractor then is to distribute these costs based on a rate in accordance with the cost allocation model. The final program cost is the sum of the total direct costs plus the indirect costs distributed to the program. In implementing this allocation process, federal Cost Accounting Standards provide contractors with flexibility regarding the extent to which they identify incurred costs directly with a specific program and how they collect similar costs into indirect cost pools and allocate them among programs. 
Therefore, similar costs may be allocated differently because contractors' cost allocation models differ. Specifically, cost models may differ in how they (1) classify costs as either direct or indirect, (2) accumulate these costs into indirect cost pools, and (3) distribute indirect costs to benefitting programs. Examples follow: Classification. Contractors may differ in how they classify costs as direct or indirect. For example, electricity and other utility costs are usually classified as indirect because they are not associated with a single program; however, electricity costs could be charged directly if, for example, a contractor installs a meter to track the electricity consumption in a building used solely by one program. Accumulation. Contractors may differ in how they accumulate indirect costs into indirect cost pools. The number and type of cost pools used to accumulate indirect costs may vary. Distribution. Management and Operating (M&O) contractors may differ in how they distribute indirect costs accumulated into indirect cost pools to programs. Because similar indirect costs can be allocated differently by different contractors and contractors may change the way they allocate indirect costs over time, it is difficult to compare contractor costs among sites. NNSA contractors also use different methods to track costs, according to NNSA officials. Specifically, NNSA contractors use different work breakdown structures (WBS) for tracking costs. A WBS is a method of deconstructing a program's end product into successive levels of detail, with smaller specific elements, until the work is subdivided to a level suitable for management control. Within WBSs, cost elements capture discrete costs of a particular activity of work, such as labor, material, and fringe benefits. The use of different methods to track costs makes it difficult for NNSA and others to understand or compare costs for comparable activities across programs, contractors, and sites. 
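The WBS concept described above can be sketched as a simple tree: work is subdivided into successive levels, cost elements (labor, material, fringe benefits) are captured at the lowest level, and totals roll up to each parent. The structure and figures below are hypothetical, intended only to show why two contractors with differently shaped trees produce cost data that cannot be compared element for element.

```python
# Illustrative sketch (hypothetical structure and figures): a work breakdown
# structure as a nested tree, with cost elements captured at the leaves and
# rolled up through successive levels to a total program cost.

wbs = {
    "1 Program": {
        "1.1 Design": {"labor": 120_000, "material": 30_000},
        "1.2 Production": {
            "1.2.1 Fabrication": {"labor": 200_000, "material": 80_000},
            "1.2.2 Assembly": {"labor": 90_000, "fringe": 15_000},
        },
    }
}

def roll_up(node):
    """Sum cost elements at this node and all levels beneath it."""
    total = 0
    for value in node.values():
        total += roll_up(value) if isinstance(value, dict) else value
    return total

print(roll_up(wbs))  # total program cost across all WBS levels: 535,000
```

A second contractor might subdivide the same work into different elements at different levels; both trees roll up to a defensible total, but the intermediate nodes no longer line up, which is the comparability problem the passage describes.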
For example, in 2011 we concluded that the cost savings that NNSA anticipated from the consolidation of the M&O contracts for two of its production sites were uncertain, in part, because historic cost data were not readily available for NNSA to use in its cost analysis. More specifically, we found that a key step in NNSA's process for estimating savings--developing a comparative baseline of historical site costs--is a difficult and inexact process because DOE and NNSA contractors use different methods for tracking costs, and DOE's cost data are of limited use in comparing sites. To obtain more consistent information on costs, some program offices have developed customized contractor cost reporting requirements and designed various systems to collect the cost information needed to manage their programs. For example, NNSA's Office of Defense Programs began developing a data system in 2007--the Enterprise Portfolio Analysis Tool (EPAT)--to provide a consistent framework for managing the planning, programming, budgeting, and evaluation processes within Defense Programs. EPAT has evolved to incorporate a common WBS to allow managers to compare the budget estimates for analogous activities across the nuclear security enterprise regardless of which contractor or program is conducting them. However, NNSA officials told us that neither EPAT nor other customized, program-specific cost collection systems satisfy the section 3128 requirements for establishing an NNSA-wide approach to collecting cost information. According to NNSA officials, EPAT is not suitable because, among other reasons, it is not designed to reconcile with the DOE's official accounting system. Section 3128 of the National Defense Authorization Act for Fiscal Year 2014 requires NNSA to develop a plan for improving and integrating financial management of the nuclear security enterprise. 
The Joint Explanatory Statement accompanying the act states that NNSA is to develop a plan for a common cost structure for activities at different sites with the purpose of comparing how efficiently different sites within the NNSA complex are carrying out similar activities. According to the act, matters to be included in the plan are: (1) an assessment of the feasibility of the plan, (2) the estimated costs of carrying out the plan, (3) an assessment of the expected results of the plan, and (4) a timeline for implementation of the plan. In April 2014, to address the requirements of section 3128, NNSA formed a Lean Six Sigma team of 20 federal and contractor staff. In December 2014, the team produced a report that summarized the results of the team's effort and included a number of recommendations to NNSA. According to the report, the team's work also addressed separate but related requirements contained in a different section of the National Defense Authorization Act for Fiscal Year 2014. Specifically, section 3112 requires the NNSA Administrator to establish a Director for Cost Estimation and Program Evaluation to serve as the principal advisor for cost estimation and program evaluation activities, including development of a cost data collection and reporting system for designated NNSA programs and projects. Therefore, according to the December 2014 report, the team focused on both the requirements of section 3128 and the development of a cost data collection and reporting system required by section 3112. We have previously reported that, in developing new initiatives, agencies can benefit from following leading practices for strategic planning. Congress enacted the GPRA Modernization Act of 2010 (GPRAMA) to improve the efficiency and accountability of federal programs and, among other things, to update the requirement that federal agencies develop long-term strategic plans that include agencywide goals and strategies for achieving those goals. 
The Office of Management and Budget (OMB) has provided guidance in Circular A-11 to agencies on how to prepare these plans in accordance with GPRAMA requirements. We have reported in the past that, taken together, the strategic planning elements established under the Government Performance and Results Act of 1993 (GPRA), as updated by GPRAMA, and associated OMB guidance, along with practices we have identified, provide a framework of leading practices that can be used for strategic planning at lower levels within federal agencies, such as planning for individual divisions, programs, or initiatives. In February 2016, more than 13 months after the statutory reporting deadline, NNSA produced a plan with the stated purpose of integrating and improving the financial management of the nuclear security enterprise. NNSA's plan includes the four elements required under section 3128--a feasibility assessment, estimated costs, expected results, and an implementation timeline--but contains few details related to each of these elements. Feasibility assessment. NNSA's plan includes a section entitled feasibility, which lists concerns regarding the feasibility of implementing the plan. The concerns listed are (1) the availability of resources, (2) the identification and implementation of an information technology solution, (3) the alignment of contractor systems and cost models with a new standardized reporting framework, and (4) that the use of the enterprise-wide approach may come at the expense of specific ad hoc reporting requests. NNSA's feasibility assessment does not provide any specific information regarding these concerns. In addition, it does not provide information on potential costs or benefits, which will be needed to determine if the planned investment of time and other resources will yield the desired results. Estimated cost. The plan includes information on the estimated cost of its implementation plan. 
It states that total federal and contractor implementation costs are estimated to be between $10 million and $70 million, with the largest variable in the estimate being the cost of the information technology system requirements. NNSA's cost estimate, however, provides no details regarding how the estimate was developed, beyond stating that it is based on professional judgment and input from NNSA's contractors. Instead, the plan states that NNSA will provide a more precise estimate as the agency determines total staffing and information requirements. Expected results. The plan does not explicitly include a discussion of the expected results. However, the plan concludes that the collection of standard performance and cost data will improve both program and financial management through improved cost analysis, cost estimating, and program evaluation. The language in the conclusion, however, provides no details regarding the ways in which cost analysis, cost estimating, and program evaluation will be improved. Implementation timeline. The plan states that contractors will begin reporting detailed cost data into a common NNSA system in fiscal year 2019 and includes an implementation timeline for meeting this goal (see fig. 1). Elements of the timeline include developing and implementing an enterprise-wide financial management policy, exploring the feasibility of a common WBS for NNSA, and standardizing direct and indirect cost elements. While NNSA's plan says its timeline is "notional," the plan provides a specific time frame of 3 to 5 years during which the core elements are expected to be completed. However, the plan does not identify which elements are considered the "core elements" or explain the reasoning behind the implementation time frame of 3 to 5 years. As of December 2016, NNSA has fully implemented one of the elements included in the timeline by creating and filling the position of Program Director of Financial Integration. 
Other elements that were scheduled under NNSA's implementation timeline to begin in fiscal year 2015 and early fiscal year 2016 were not started according to the timeline but are now underway. For example, the timeline indicates that NNSA will begin developing and implementing an enterprise-wide financial management policy during the second half of fiscal year 2015, but this effort was not initiated until October 2016. The elements listed in NNSA's timeline correspond with the recommendations included in the December 2014 internal NNSA report. The report recommended that NNSA: establish a clear and consistent program management policy addressing common program management data reporting requirements for all work performed with NNSA funding; establish a standard WBS for all work performed within the nuclear security enterprise; establish a clear and consistent policy and methodology for identifying base capabilities and programs of record that is systematically applied; report financial data by standardized labor categories, labor hours, functional elements, and cost elements; enhance or develop an agency data warehouse and analytical tools; establish a knowledge management function; and appoint an "Executive Champion" to implement the recommendations and plan. However, NNSA's plan does not include additional details regarding each of the elements listed in its timeline or contain many of the details included in its internal agency report. Instead of using the information and recommendations from the December 2014 report as a basis for developing an actionable implementation plan, NNSA summarized portions of the report and issued the summary document as its official plan. Moreover, differences between the internal report and the published plan are not discussed or explained--potentially creating ambiguity as to NNSA's planned approach. For example, the internal agency report recommends that NNSA establish a standard WBS for all work performed within the nuclear security enterprise. 
NNSA's plan, however, states that NNSA will explore the feasibility of a common WBS--which leaves open the option of not creating a common WBS. NNSA's plan does not fully incorporate leading practices, which limits its usefulness as a planning tool and limits the effectiveness of NNSA's effort to provide meaningful financial information to Congress and other stakeholders. Reliable financial information is important for making programmatic and budgetary decisions and providing appropriate oversight. To improve the consistency of this information, as discussed previously, Congress directed NNSA to develop a financial integration plan. In developing plans for implementing new initiatives, agencies--including NNSA--can benefit from following leading practices for strategic planning. These leading practices include (1) defining the mission and goals of a program or initiative, (2) defining strategies and identifying resources needed to achieve goals, (3) ensuring leadership involvement and accountability, and (4) involving stakeholders in planning. We highlight these four practices because NNSA's financial improvement and integration initiative is still being developed, and these practices are particularly relevant to the early stages of developing a strategic plan. NNSA's plan, however, does not fully incorporate any of these leading practices. Table 1 shows our assessment of the extent to which NNSA used these practices in developing its plan for improving and integrating its financial management. Mission and goals. NNSA's plan does not explicitly include a mission statement or strategic goals; as a result, it is difficult to understand fully what NNSA's plan is intended to do and how it will do it. More specifically, it is unclear if the sole purpose of the plan is to satisfy section 3128 requirements and the information needs of Congress or if it is also intended to satisfy the information needs of NNSA decision makers. 
For example, the plan's executive summary states that NNSA developed the plan to address specific requirements set forth in section 3128 of the National Defense Authorization Act for Fiscal Year 2014; however, information presented in the plan's conclusions suggests that the plan may also be intended to satisfy the information needs of NNSA decision makers. In addition, while the plan concludes that the collection of standard performance and cost data will improve both program and financial management through improved cost analysis, cost estimating, and program evaluation, it does not explicitly present this as a goal either in the conclusions or earlier in the plan. Strategies and resources needed to achieve goals. NNSA's plan does not include strategies to address management challenges or describe the specific resources needed to meet goals. We have previously reported that when developing a strategic plan, it is particularly important for agencies to define strategies that address management challenges that threaten their ability to meet long-term strategic goals and include a description of the resources, actions, time frames, roles, and responsibilities needed to meet established goals. NNSA's plan includes a list of the challenges NNSA will face during implementation of the plan--including challenges related to the availability of resources and identifying and implementing an information technology solution--and provides a "notional" implementation timeline with milestones for certain significant actions. However, beyond the high-level cost estimate provided, NNSA's plan does not include a description of the specific resources needed to meet specific elements of the plan or define strategies that address these management challenges. Leadership involvement and accountability. The CIOs for NNSA and DOE were not involved in developing the NNSA financial integration plan. 
An agency's senior leadership is key to ensuring that strategic planning becomes the basis for day-to-day operations. The NNSA CIO told us that he was aware of section 3128 but was not involved in developing the plan or determining how to identify a system to meet the section 3128 requirements. NNSA officials told us that they did not think it was necessary to get the NNSA or DOE CIOs involved with the team because the agency had yet to identify the requirements for a new information technology system. However, this assertion is inconsistent with information contained in NNSA's December 2014 Lean Six Sigma report, which states that a sub-team was formed to determine the data system requirements for collecting and reporting costs that would satisfy sections 3128 and 3112 of the National Defense Authorization Act for Fiscal Year 2014. Stakeholder involvement. Key stakeholders, such as program managers, were not involved in developing the mission, goals, or strategies associated with the plan. We have previously reported that it is important for agencies to involve stakeholders in developing their mission, goals, and strategies to help ensure that the highest priorities are targeted. However, NNSA did not involve key stakeholders in the development of its financial integration plan. The Lean Six Sigma team that NNSA formed to develop the plan was widely represented in terms of geographic location and included representatives from NNSA's budget, financial management, information technology, and cost-estimating communities, but key stakeholders, such as federal program managers, were not included in the effort. NNSA officials told us that the biggest challenge NNSA will face in implementing the plan will be overcoming cultural resistance to change and the parochial interests of different program offices--particularly for program offices that have developed their own independent technology solutions for collecting the cost data they need to manage. 
Yet, the involvement from NNSA program management offices was limited to budget and finance staff. According to NNSA officials, program managers were invited to participate in the Lean Six Sigma team but none volunteered. Moreover, the federal program manager from one of NNSA's largest programs--the B-61 Life Extension Program--told us that he was not involved in the team and was only vaguely aware of the section 3128 requirement for NNSA to develop a financial integration plan. Given that program managers are a primary user of managerial cost information, obtaining their perspectives and getting their buy-in is important. NNSA also did not solicit input from congressional staff in the development of the plan. NNSA officials told us that the only time they met with congressional staff regarding Section 3128 was in May 2015 to brief staff on their progress, but this was after they were done studying the issue. Because NNSA's plan does not fully incorporate leading strategic planning practices, such as those included in table 1, it has not provided a useful road map for guiding NNSA's effort. According to the NNSA official who is responsible for overseeing the execution of the plan--the Director of Financial Integration--the plan NNSA submitted to Congress was not a comprehensive or actionable plan. In addition, other NNSA officials told us that the plan they submitted to Congress was never intended to provide a road map to guide their efforts. More specifically, they said that they disagree with the premise that the plan submitted to Congress should have been a detailed, operational plan with specific milestones and extensive information about costs, schedule, and risks. Instead, according to these NNSA officials, the purpose of the plan was to identify general principles and a strategic vision for achieving financial integration. 
The Director of Financial Integration, who accepted this position in January 2016 shortly before NNSA issued its plan, told us in July and in November 2016 that he was in the process of developing an actionable plan with specific goals, objectives, and milestones and acknowledged that key stakeholders, such as program managers, would need to be involved in the process. However, he did not tell us when the more detailed, actionable plan would be finalized and, on the basis of planning documents he provided us, it is unclear if the new plan will incorporate leading practices. Until an actionable plan is in place that incorporates leading strategic planning practices, NNSA cannot be assured that it has established a roadmap to effectively guide and assess the success of this initiative. Effective management and oversight of the contracts, projects, and programs that support NNSA's mission are dependent upon the availability of reliable enterprise-wide management information and, as required, NNSA has provided Congress with a plan for improving and integrating its financial management. Although NNSA's plan includes the elements required under section 3128, details are limited, and it appears that this plan will not provide the framework needed to guide NNSA's efforts and ensure that Congress and other stakeholders have accurate, reliable cost information that can be compared across programs, contractors, and sites. In particular, NNSA's plan has not provided an effective framework for guiding NNSA's effort because it does not incorporate leading planning practices including (1) defining the missions and goals of a program or initiative, (2) defining strategies and identifying resources needed to achieve goals, (3) ensuring leadership involvement, and (4) involving stakeholders in planning. In addition, differences between NNSA's internal report and the published plan are not discussed or explained--potentially creating ambiguity as to NNSA's planned approach. 
NNSA officials have acknowledged that the plan they submitted to Congress is not a comprehensive or actionable plan. The Director of Financial Integration has taken steps to develop an actionable plan, but it is unclear when the plan will be finalized or the extent to which it will incorporate leading practices. Until a plan that incorporates leading practices is in place, NNSA cannot be assured that its efforts will result in a cost collection model that satisfies the information needs of Congress or improves program and financial management through improved cost analysis, cost estimating, and program evaluation. Such information would better position NNSA to address longstanding contract and project management challenges. Without proper planning, NNSA could waste valuable resources, time, and effort in its financial management improvement and integration process. To help provide a roadmap to effectively guide NNSA's effort to integrate and improve its financial management, we recommend that the NNSA Administrator direct the Program Director of Financial Integration to develop a plan for producing cost information that fully incorporates leading practices. We provided a draft of this report to NNSA for its review and comment. NNSA provided written comments, which are reproduced in appendix I, and technical comments that were incorporated as appropriate. In its written comments, NNSA stated that it will update the plan and address the items GAO identified. Although NNSA has agreed to implement our recommendation, in its written comments, NNSA states that given the plan's "early level of maturity" GAO's evaluation of the plan against leading practices resulted in a somewhat misleading conclusion. We disagree. The purpose of a plan is to provide a roadmap to guide the agency's effort. Regardless of the plan's maturity, incorporating leading practices for strategic planning can improve its utility. 
We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of NNSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Diane LoFaro (Assistant Director), Mike LaForge (Assistant Director), Cheryl Harris; Charles Jones, and Mark Keenan made key contributions to this report.
Effective management and oversight of contracts, projects, and programs are dependent upon the availability of reliable enterprise-wide financial management information. Such information is also needed by Congress to carry out its oversight responsibilities and make budgetary decisions. However, meaningful cost analysis of NNSA programs, including comparisons across programs, contractors, and sites, is not possible because NNSA's contractors use different methods of accounting for and tracking costs. The National Defense Authorization Act for Fiscal Year 2014 required NNSA to develop and submit to Congress a plan to improve and integrate its financial management. An explanatory statement accompanying the act included a provision for GAO to review the adequacy of NNSA's plan. This report evaluates the extent to which NNSA's plan (1) addresses the objectives of the act and (2) follows leading practices for planning. GAO reviewed NNSA's plan and compared it with legislative requirements and leading practices for planning and interviewed NNSA officials. On February 5, 2016, more than 13 months after the statutory reporting deadline, the National Nuclear Security Administration (NNSA) submitted to Congress a plan for improving and integrating its financial management. The plan includes the four elements required by the National Defense Authorization Act for Fiscal Year 2014--a feasibility assessment, estimated costs, expected results, and an implementation timeline--but contains few details related to each of these elements. For example, NNSA's feasibility assessment includes a list of implementation concerns--including general concerns related to the availability of resources and to identifying and implementing an information technology solution--but does not provide any specific information regarding these concerns. 
In addition, NNSA's plan includes a cost estimate of between $10 million and $70 million but provides no details on how the estimate was developed beyond stating that it is based on professional judgment and input from NNSA's contractors. The plan also includes a "notional" implementation timeline that calls for the plan's core elements to be completed in 3 to 5 years but does not include details on which elements are considered core elements. NNSA's financial integration plan does not fully incorporate leading strategic planning practices, which limits its usefulness as a planning tool as well as the effectiveness of NNSA's effort to provide meaningful financial information to Congress and other stakeholders. As GAO has reported previously, in developing plans for implementing new initiatives, agencies can benefit from following leading practices for strategic planning. These leading practices include (1) defining the missions and goals of a program or initiative, (2) defining strategies and identifying resources needed to achieve goals, (3) ensuring leadership involvement, and (4) involving stakeholders in planning. However, NNSA's plan does not fully incorporate any of these leading practices. For example, beyond the high-level cost estimate provided, NNSA's plan does not include a description of the specific resources needed to meet specific elements of the plan or define strategies that address management challenges, including the implementation concerns identified in the plan's feasibility assessment. In addition, NNSA did not involve key stakeholders in developing its plan. Because NNSA's plan does not incorporate leading strategic planning practices, it has not provided a useful road map for guiding NNSA's effort. NNSA officials told GAO that the plan they submitted to Congress was never intended to provide a road map to guide their efforts. 
Instead, they said the purpose of the plan was to identify general principles and a strategic vision for achieving financial integration. The NNSA official responsible for overseeing the plan's execution told GAO that he has begun to develop a more comprehensive and actionable plan to guide NNSA's effort. However, it is unclear when the new plan will be finalized or the extent to which it will incorporate leading practices. Until a plan is in place that incorporates leading strategic planning practices, NNSA cannot be assured that its efforts will result in a cost collection tool that produces reliable enterprise-wide information that satisfies the needs of Congress and program managers. Such information would better position NNSA to address long-standing contract and project management challenges. Without proper planning, NNSA could waste valuable resources, time, and effort on its financial integration effort. To provide a road map to guide NNSA's financial management improvement effort, GAO recommends that the NNSA Administrator direct the Program Director of Financial Integration to develop a plan for producing cost information that fully incorporates leading planning practices. NNSA agreed to update its plan and address the items GAO identified.
In 2000, significantly fewer managers at CMS--then known as the Health Care Financing Administration--reported using performance information for various management decisions, as compared to their counterparts in the rest of government. Between our 2000 and 2007 surveys, however, CMS showed one of the largest average increases in the percentage of managers who reported using performance information for certain decisions. This increase placed CMS in about the middle of our agency rankings, which were based on an index of 2007 survey results designed to reflect the extent to which managers at each agency reported using performance information. Our analysis of CMS survey results, management interviews, and agency policies, performance reports, and other relevant documents indicated that the adoption of key management practices contributed to this improvement. Our 2007 survey results showed that significantly more CMS managers agreed that their leadership is committed to achieving results than they did in 2000 (see fig. 2). Nearly all of the CMS officials we interviewed credited the commitment of one or more agency leaders--such as the CMS Administrator or the Chief Operating Officer--for their increased use of performance information to achieve results. One way in which leaders can demonstrate their commitment is through frequent communication of established goals and progress made toward those goals. As an example, in an effort to reduce the incidence of pressure ulcers among nursing home residents, a Region IV manager described to us how regional leadership began to routinely share performance information about the pressure-ulcer problem with the many stakeholders involved with patient care, including hospital and nursing-home personnel, patient advocates, emergency medical technicians, and others. 
CMS contracts with states to assess the quality of care provided by Medicare and Medicaid-participating facilities, such as nursing homes, and is therefore several steps removed from the delivery of health-care services to patients and the resulting health outcomes. According to CMS Region IV managers we interviewed, this indirect influence had been considered a limiting factor in CMS' ability to affect outcomes among nursing-home patients. However, these same managers said that leadership commitment to getting stakeholders to the table and sharing performance information with them were critical factors in bringing about a reduction in the incidence of pressure ulcers. In that region, between fiscal years 2006 and 2008, this improvement translated into nearly 2,500 fewer long-stay nursing-home residents with pressure ulcers. Our survey results also indicated that between 2000 and 2007, a significantly greater percentage of CMS managers reported that they were held accountable for program results (see fig. 3). In 2006, as part of a change throughout HHS, the agency adopted a new performance-management system that links organizational and program goals with individual accountability for program results. Top CMS headquarters officials said that the new system had made individual accountability for program results more explicit. They described how agency goals and objectives were embedded in the Administrator's performance agreement and cascaded down through the management hierarchy, so that each level of management understood their accountability for achieving the broad department and agency-level goals. To illustrate, broad goals for preventive healthcare cascade from HHS through a CMS director responsible for increasing early detection of breast cancer among Medicare beneficiaries, to a CMS Health Insurance Specialist responsible for communications to raise awareness of the importance of mammograms and other preventive measures. 
Our survey results show that between 2000 and 2007, there was a significant decline in the percentage of CMS managers who reported that difficulty developing meaningful measures was a hindrance to using performance information (see fig. 4). According to CMS officials, to ensure that performance information was useful to managers, they limited the number of measures for GPRA reporting purposes to the 31 that represented the agency's priorities. These officials noted that it would be unmanageable to measure and report on every aspect of their programs and processes. They ultimately settled on a set of performance goals that helped managers and staff identify performance gaps and opportunities to improve performance to close the gaps. Our survey results and interviews with several CMS officials indicate that the agency also took steps to develop their staff's capacity to use performance information, such as investing in improved data systems and offering increased training opportunities on a range of topics related to performance planning and management. Between 2000 and 2007, there was a significant increase on all six survey questions related to managers' access to training over the past three years on the use of performance information for various activities (see fig. 5). According to one official we spoke with, increasing her staff's skills in conducting analyses of performance information and presenting findings was a gradual process that required training, coaching, and guidance. Just as the adoption of key management practices can facilitate greater use of information and a greater focus on results, the absence of these practices can hinder widespread use. Fewer managers at FEMA and Interior reported making extensive use of performance information for decision making compared to managers at other agencies. 
Survey results, interviews with senior-level officials and regional and program managers, and a review of policies and other documents related to performance planning and management at both agencies showed that inconsistent use of these practices contributed to this condition. Our 2007 survey results indicated that, compared to the rest of government, a smaller percentage of FEMA managers agreed their top leadership demonstrated a strong commitment to using performance information to guide decision making (see fig. 6). Our interviews with officials at FEMA were consistent with these survey results, indicating that management commitment was demonstrated inconsistently across the program directorates and regions we reviewed. Leaders and managers we spoke to throughout the management hierarchy were clearly committed to carrying out FEMA's mission. The level of commitment to using performance information for decision making, however, appeared to vary among those we interviewed. For example, in the Disaster Assistance Directorate, one headquarters official told us that he does not need performance targets to help him determine whether his directorate is accomplishing its mission, relying instead on verbal communications with the leadership and with FEMA's regions, joint field offices, and members of Congress to identify issues to be addressed and areas that are running well. Another headquarters official within the Disaster Assistance Directorate's Public Assistance program said he does not receive formal performance reports from regional program managers, nor are any performance reports required of him by his supervisors; rather, he said that he spoke to the regions on an ad hoc basis as performance problems arose. These officials expressed reluctance toward holding their staff accountable for meeting performance goals due to external factors, such as the unpredictability of disasters beyond their control. 
Further, they expressed uncertainty as to how they could use performance information in the face of uncontrollable external factors. As noted below, however, other managers in FEMA have found ways to take unpredictable occurrences into account as they monitor their progress in achieving performance goals. FEMA faces other hurdles, including the lack of a performance-management system requiring managers to align agency goals with individual performance objectives, which makes it challenging for managers to hold individuals accountable for achieving results. The agency also lacks adequate information systems for ensuring that performance information can be easily collected, communicated, and analyzed. For example, in order to gather performance information across directorates, one official reported that it was necessary to write programs to generate specific reports for each of the systems and then manually integrate the information, making it difficult to produce repeatable and verifiable reports. Further, according to several officials we interviewed, there was a limited number of staff with the analytic skills necessary to work with performance metrics. As with FEMA, at Interior we observed that leaders and managers at all levels conveyed a strong commitment to accomplishing the agency's mission. Interior's survey results were similar to FEMA's results on items related to managers' perceptions of their leadership's commitment to using performance information. Interior's 2007 results were also lower than those in the rest of government (see fig. 7). According to officials we interviewed, leaders at Interior and NPS did not effectively communicate to their staff how, if at all, they used performance information to identify performance gaps and develop strategies to better achieve results. 
Several NPS managers referred to the performance reporting process as "feeding the beast," because they receive little or no communication from either Interior or NPS headquarters in response to the information they are required to report, leading them to assume that no one with authority reviews or acts on this information. Furthermore, some bureau-level managers at NPS and Reclamation said the performance measures they are required to report on were not always useful for their decision making, either because there were too many or because they were not credible. We have previously reported that to be useful and meaningful to managers and staff across an agency, performance measures should be limited at each organizational level to the vital few that provide critical insight into the agency's core mission and operations. However, in the seven years since the inception of the former administration's Program Assessment Rating Tool (PART) initiative, Interior has expanded its performance reporting to include 440 PART program measures, in addition to the approximately 200 strategic performance measures used to track progress against its strategic and annual plans, as required by GPRA. A senior headquarters official at Interior said that the number of measures makes it difficult for senior leaders and managers to focus on priorities and easily identify performance gaps among the different program areas. At NPS alone, managers were required to report on 122 performance measures related to GPRA and PART. Managers at both NPS and Reclamation also described performance information that lacked credibility because the measures either did not accurately define comparable elements or did not take into account different standards across bureaus or units. 
For example, several NPS managers noted that one of the measures on which they report, "percent of historic structures in good condition," does not differentiate between a large, culturally significant structure such as the Washington Monument and a smaller, less significant structure such as a group of headstones. Consequently, a manager could achieve a higher percentage by concentrating on improving the conditions of numerous less significant properties. Poorly integrated performance and management information systems further hindered NPS and Reclamation managers' efforts to use performance information to inform their decision making. For example, according to some Reclamation managers we interviewed, there is no one centralized database to which a Reclamation executive can go to find out how the bureau is doing on all of Reclamation's required performance goals. The lack of linkage among the different Reclamation systems required managers to enter the same data multiple times, which some managers said is a burden. Despite the challenges facing FEMA and Interior, we also observed various initiatives and program areas within the agencies where leaders were committed to increasing the use of performance information; and were demonstrating that commitment by communicating the importance of using data to identify and solve problems, involving their managers in efforts to develop useful measures, and connecting individual performance with organizational results. Within FEMA, Mitigation Directorate officials we interviewed reported that they had begun to use performance information to plan for and respond to factors outside of their control, a change that they attributed in large part to the former Mitigation Administrator's commitment to performance and accountability. 
For example, storms and other natural events can disrupt the Mitigation Directorate's production work related to floodplain map modernization, which is a key step in ensuring that flood-prone communities have the most reliable and current flood data available. To plan for possible disruptions, Mitigation Directorate officials said they reviewed performance information on progress toward map modernization goals on a monthly basis with their external stakeholders--including state and local governments and insurance companies--and with FEMA's regional management, which sent a clear signal that Mitigation's leadership was paying attention to outcomes. According to these officials, this review helped them to determine in advance if they were at risk of missing performance targets and to identify corrective actions or contingency plans in order to get back on track toward achieving their goals. Moreover, they said, they were able to meet or exceed their performance target of 93 percent of communities adopting new floodplain maps, in part, as a result of their frequent communication and review of performance information. Mitigation Directorate officials said that developing measures and holding staff and contractors accountable for their performance was not an easy transformation. They said that one key to this culture change was for the leadership to strike an appropriate balance between holding managers accountable for agency goals and building trust among managers and staff that performance information would be used as an improvement tool, rather than as a punitive mechanism. Finally, Mitigation Directorate officials said that managers and staff became more supportive of their leadership's efforts to use performance information in their decision making once they began to see that measuring performance could help them to improve results. 
At Interior and NPS, officials were aware that managers continue to struggle with the high volume of performance information they are required to collect, and have initiated various strategies designed to improve the usefulness of performance information without adding to the existing data-collection and reporting process. For example, NPS' Core Operations Analysis is a park-level funding and staffing planning process, recently adopted by several regions, that is intended to improve the efficiency of park operations and ensure that a park's resource-allocation decisions are linked to its core mission goals. Regional-level managers who engaged in the Core Operations Analysis said it was useful in establishing goals based on the park's priorities, monitoring progress toward achieving those goals, and holding park superintendents accountable for meeting established goals. Our report contains recommendations to the Secretary of the Department of Homeland Security (DHS) for FEMA and the Secretary of the Interior, designed to build upon the positive practices we identified within these agencies. We recommended that FEMA augment its analytic capacity to collect and analyze performance information and strengthen linkages among agency, program, and individual performance. We also recommended that Interior, NPS, and Reclamation review the usefulness of their performance measures in conjunction with OMB and refine or discontinue performance measures that are not useful for decision making. Finally, to FEMA, Interior, and NPS, we made recommendations intended to improve the visibility of agency leadership's commitment to using performance information in decision making. Both DHS and Interior generally agreed with these recommendations. As we have noted in the past, the President and Congress both have unique and critical roles to play in demonstrating their commitment to improving federal agency performance results. 
Both OMB and Congress can send strong messages to agencies that results matter by articulating expectations for individual agency performance and following up to ensure that performance goals are achieved. At the same time, they also need to address performance problems in the areas of government that require the concerted efforts of multiple agencies and programs. Increasingly, many of the outcomes we look for--such as prevention of terrorist attacks, reduction in incidence of infectious diseases, or improved response to natural disasters--go beyond the scope of any single agency. In these cases, agencies must work closely together to achieve desired results. The President can send a signal to federal managers that using performance information is critical for achieving results and maximizing the return on federal funds invested by selecting and focusing his attention on achieving certain critical goals, such as creating or retaining jobs through investments under the American Recovery and Reinvestment Act of 2009. As a first step, OMB has begun to issue guidance to agencies on identifying a limited number of high-priority performance goals, with the explicit message that performance planning is a key element of the President's agenda to build a high-performing government. With this recent guidance, OMB has also put agencies on notice that the executive-branch leadership is paying attention to their performance, by establishing regular reviews of the progress agencies are making to improve results in these high-priority areas. As the primary focal point for overall management in the federal government, OMB can support agency efforts to use performance information by encouraging agencies to invest in training, identifying and disseminating leading practices among agency managers, and assisting agencies in adopting these practices where appropriate. 
As we previously reported, our survey results showed a positive relationship between managers who reported receiving training and development on setting program performance goals and those who report using performance information when setting or revising performance goals. However, as we testified in July 2008, while our survey found a significant increase in training since 1997, only about half of our survey respondents in 2007 reported receiving any training that would assist in analyzing and making use of performance information. We previously recommended that OMB ensure that agencies are making adequate investments in training on performance planning and measurement, with a particular emphasis on how to use performance information to improve program performance. Although the agency has not yet implemented this recommendation, an official who oversees OMB's management initiatives said that OMB has recently launched a collaborative Wiki page for federal agencies. According to this official, the Wiki is intended to provide an on-line forum for federal managers to share lessons learned and leading practices for using performance information to drive decision making. In addition to providing support to help improve agency-level performance, OMB is uniquely positioned to facilitate collaborative, governmentwide performance toward crosscutting goals. As noted above, there are numerous performance challenges, ranging from combating terrorism to preventing the spread of infectious diseases, which transcend organization lines and require the concerted efforts of multiple agencies and programs. We have previously reported that GPRA could provide OMB, agencies, and Congress with a structured framework for addressing crosscutting program efforts. OMB, for example, could use the provision of GPRA that calls for OMB to develop an annual governmentwide performance plan to integrate expected agency-level performance. 
Such a plan could help the executive branch and Congress address critical federal performance and management issues such as conflicting agency missions, jurisdiction issues, and incompatible procedures, data, and processes. As we pointed out in our July 2008 testimony, this provision has not been implemented fully. In addition to the annual performance plan, a governmentwide strategic plan could identify long-term goals and strategies to address issues that cut across federal agencies. To that end, we have also recommended that Congress consider amending GPRA to require the President to develop a governmentwide strategic plan. Such a plan--supported by a set of key national outcome-based indicators of where the nation stands on a range of economic, environmental, safety and security, social, and cultural issues--could offer a cohesive perspective on the long-term goals of the federal government and provide a much-needed basis for fully integrating, rather than merely coordinating, a wide array of federal activities. By routinely incorporating agency performance issues into its deliberations and oversight, Congress can send an unmistakable message to agencies that they are expected to manage for results. As we have noted in our earlier work, however, Congress needs to be actively involved in early conversations about what to measure and how to present this information. We previously reported that the PART process used by the prior administration did not systematically incorporate a congressional perspective and promote a dialogue between Congress and the President. As a result, most congressional committee staff we spoke to did not use the PART results to inform their deliberations. 
Although the Obama Administration intends to adopt a new performance improvement and analysis framework, any new framework should include a mechanism to consult with members of Congress and their staffs about what they consider to be the most important performance issues and program areas warranting review. Engaging Congress early in the process could help target performance improvement efforts toward those areas most likely to be on the agenda of Congress, thereby increasing the likelihood that they will use performance information in their oversight and deliberations. Additionally, as we noted in our July 2008 testimony, Congress could consider whether a more structured oversight mechanism would be helpful in bringing about a more coordinated congressional perspective on governmentwide performance issues. Just as the executive branch needs to better address programs and challenges that span multiple departments and agencies, Congress might find it useful to develop structures and processes that provide a coordinated approach to overseeing agencies where jurisdiction crosses congressional committees. We have previously suggested that one possible approach could involve developing a congressional performance resolution identifying the key oversight and performance goals that Congress wishes to set for its own committees and for the government as a whole. Such a resolution could be developed by modifying the annual congressional budget resolution, which is already organized by budget function. This may involve collecting the input of authorizing and appropriations committees on priority performance issues for programs under their jurisdiction and working with crosscutting committees such as the Senate Committee on Homeland Security and Governmental Affairs, the House Committee on Oversight and Government Reform, and the House Committee on Rules. 
In conclusion, while federal agencies have become better positioned to manage for results, there is still much to be done to shift the focus of federal managers from merely measuring agency performance to actively managing performance to improve results. Our work indicates that widespread adoption of the key management practices we have identified is a critical first step. At the same time, the President and Congress each have unique and critical roles to play in building a high-performing, results-oriented, and collaborative culture across the government. Beyond this, the creation of a long-term governmentwide strategic plan, informed by a set of key national indicators, and an annual governmentwide performance plan could provide important tools for integrating efforts across agencies to achieve results on the challenging issues that increasingly face our nation in the 21st century. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions you or other members of the subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-6543 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals who made key contributions to this testimony were Elizabeth Curda (Assistant Director), Jessica Nierenberg, Laura Miller Craig, Kate Hudson Walker, Karin Fangman, Melanie Papasian, A.J. Stephens, and William Trancucci. 
National Aeronautics and Space Administration
Department of Housing and Urban Development
Department of the Treasury (excluding Internal Revenue Service)
Centers for Medicare & Medicaid Services
United States Agency for International Development
Department of Agriculture (excluding Forest Service)
Department of Homeland Security (excluding Federal Emergency Management Agency)
Department of Transportation (excluding Federal Aviation Administration)
Department of Health and Human Services (excluding Centers for Medicare & Medicaid Services)
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since 1997, periodic GAO surveys indicate that overall, federal managers have more performance information available, but have not made greater use of this information for decision making. To understand the barriers and opportunities for more widespread use, GAO was asked to (1) examine key management practices in an agency in which managers' reported use of performance information has improved; (2) look at agencies with relatively low use of performance information and the factors that contribute to this condition; and (3) review the role the President and Congress can play in promoting a results-oriented and collaborative culture in the federal government. This testimony is primarily based on GAO's report, Results-Oriented Management: Strengthening Key Practices at FEMA and Interior Could Promote Greater Use of Performance Information, which is being released today. In this report, GAO made recommendations to the Departments of Homeland Security (DHS) and the Interior for improvements to key management practices to promote greater use of performance information at FEMA, the National Park Service, Bureau of Reclamation, as well as at Interior. Both DHS and Interior generally agreed with these recommendations. The testimony also draws from GAO's extensive prior work on the use of performance information and results-oriented management. GAO's prior work identified key management practices that can promote the use of performance information for decision making to improve results, including: demonstrating leadership commitment; aligning agency, program, and individual performance goals; improving the usefulness of performance information; building analytic capacity; and communicating performance information frequently and effectively. The experience of the Centers for Medicare & Medicaid Services (CMS) illustrates how strengthening these practices can help an agency increase its use of performance information. 
According to GAO's most recent 2007 survey of federal managers, the percentage of CMS managers reporting use of performance information for various management decisions increased by nearly 21 percentage points since 2000--one of the largest improvements among the agencies surveyed. CMS officials attributed this positive change to a number of the key practices, such as the agency's leaders communicating their commitment to using performance information to drive decision making. Conversely, the experiences of the Department of the Interior (Interior) and the Federal Emergency Management Agency (FEMA) within the Department of Homeland Security indicated that the absence of such commitment can discourage managers and their staff from using performance information. According to GAO's 2007 survey, Interior and FEMA ranked 27 and 28, respectively, out of 29 agencies in their reported use of performance information for various management functions. Based on further survey data analysis, reviews of planning, policy, and performance documents, and management interviews, GAO found that inconsistent application of key practices at FEMA and Interior--such as routine communication of how performance information influences decision making--contributed to their relatively low survey scores. While both FEMA and Interior have taken some promising steps to make their performance information both useful and used, these initiatives have thus far been limited. The President and Congress also have unique and critical roles to play by driving improved federal agency performance. By focusing attention on certain high-level goals and tracking agency performance, the President and the Office of Management and Budget (OMB) can send a message that using performance information is critical for achieving results and maximizing the return on federal funds invested. 
Through its oversight, Congress can also signal to agencies that results matter by articulating performance expectations for areas of concern and following up to ensure that performance goals are achieved. The President and Congress can also play a role in improving government performance in areas that require the concerted efforts of multiple agencies and programs to address, such as preparing for and responding to a pandemic influenza. A governmentwide strategic plan could support collaborative efforts by identifying long-term goals and the strategies needed to address crosscutting issues.
DOE's LGP was originally designed to address a fundamental impediment for investors and lenders that stems from the risks of innovative and advanced energy projects, including technology risk--the risk that the new technology will not perform as expected--and execution risk--the risk that the borrower or project will not perform as expected. Companies can face obstacles in securing enough affordable financing from lenders to survive the gap between developing innovative technologies and commercializing them. Because the risks that lenders must assume to support new technologies can put private financing out of reach, companies may not be able to commercialize innovative technologies without the federal government's financial support. The LGP was established in Title XVII of the Energy Policy Act of 2005 to encourage early commercial use of new or significantly improved technologies in energy projects. The act--specifically section 1703-- originally authorized DOE to guarantee loans for energy projects that (1) use new or significantly improved technologies as compared with commercial technologies already in service in the United States and (2) avoid, reduce, or sequester emissions of air pollutants or man-made greenhouse gases. In February 2009, Congress expanded the scope of the LGP in the American Recovery and Reinvestment Act (Recovery Act) by adding section 1705 to the Energy Policy Act, which extended the program and provided funding to include projects that use commercial energy technology that employs renewable energy systems, electric power transmission systems, or leading-edge biofuels that meet certain criteria. As of March 2014, DOE had made 31 loan guarantees for approximately $15.7 billion under section 1705, which expired on September 30, 2011, and 2 loan guarantees for approximately $6.2 billion under section 1703. 
These guarantees have been for biomass, geothermal, nuclear, solar, and wind generation; energy storage; solar manufacturing; and electricity transmission projects (see app. III). Two borrowers withdrew in 2012 before starting to draw funds from their loans. Additionally, in September and October 2013, DOE deobligated 2 loan guarantees because they did not seem likely to meet the loan conditions required to begin drawing on their loans. Three other loan guarantee borrowers have defaulted and filed for bankruptcy--one borrower and its loan guarantee have been restructured, and the guarantee remains active; the other two borrowers are in liquidation proceedings. In addition, DOE has conditional commitments, issued in 2010, for approximately $3.8 billion in section 1703 loan guarantees for two nuclear projects. In December 2013, DOE announced a new solicitation for applications for up to $8 billion in loan guarantees for advanced fossil energy projects. The ATVM loan program was established in December 2007 by the Energy Independence and Security Act (EISA), and the fiscal year 2009 Continuing Resolution appropriated funding for the program. DOE's five loans for $8.4 billion under this program went to both established automakers and start-up manufacturers. These loans are for the manufacture of fuel-saving enhancements of conventional vehicle technology, plug-in hybrids, and all-electric vehicles. In May 2013, one borrower paid back its loan. Two ATVM borrowers have defaulted on their loans. In 2013, DOE sold the defaulted loan notes in auction proceedings. In 2010, DOE consolidated the previously separate LGP and ATVM programs under the Loan Programs Office. Monitoring for both LGP and ATVM is conducted out of one division: the Portfolio Management Division, with support from several other divisions throughout the Loan Programs Office. DOE has not fully developed or consistently adhered to loan monitoring policies for its loan programs. 
In particular, DOE has established policies for most loan monitoring activities, but policies for some of these activities remain incomplete or outdated. Further, in some cases we examined, DOE generally adhered to its loan monitoring policies but, in others, DOE adhered to those policies inconsistently or not at all because the Loan Programs Office was still developing its staffing, management and reporting software, and policies. DOE has established policies for most loan monitoring activities, but policies for some of these activities remain incomplete or outdated. More specifically, DOE has established policies for loan monitoring activities including disbursing funds, monitoring and reporting on credit risk, and managing troubled loans. (For more details about DOE loan monitoring policies and activities, see app. II.) However, loan monitoring policies for evaluating and mitigating program-wide risk remain incomplete or outdated, and several dates DOE set for completing or updating these policies passed during the course of our work. Evaluating and mitigating program-wide risk is generally the responsibility of the Risk Management Division within DOE's Loan Programs Office. This division was established in February 2012 and has been operating since its inception under incomplete or outdated policies. For example, the policies do not address how the new structure of the Risk Management Division fits into existing policies, thus not providing clear guidance on the organizational roles of the division. DOE officials told us that policy revisions were delayed in part because the Loan Programs Office did not have a Director of Risk Management until November 2012 and that a planned revision was put on hold to await the arrival of a new Executive Director in May 2013. Additionally, the Risk Management Division had not staffed 11 of its 16 planned positions until late 2013, when it staffed 6 of 11 vacancies. 
As highlighted by an independent White House review of DOE's loan programs, as well as our discussions with private lenders, a risk management division is essential for mitigating risk. Similarly, Office of Management and Budget (OMB) guidance specifies that credit programs should have robust management and oversight frameworks for monitoring the programs' progress toward achieving policy goals within acceptable risk thresholds and taking action where appropriate to increase efficiency and effectiveness. It is difficult to determine whether DOE is adequately managing risk if the policies against which to compare its actions are outdated or incomplete. Also, without fully staffing key monitoring positions, the Risk Management Division is limited in its ability to revise and complete policies, as well as to perform its other monitoring responsibilities. In some cases we examined, DOE generally adhered to its loan monitoring policies but, in other cases, DOE adhered to its monitoring policies inconsistently or not at all because DOE was still developing the Loan Programs Office's organizational structure, including staffing, management and reporting software, and procedures for implementing its policies. As a consequence, DOE was making loans and disbursing funds from 2009 through 2013 without a fully developed loan monitoring function. DOE generally adhered to its monitoring policies for activities such as disbursing funds and reviewing borrower requests for changes to loan agreement provisions. For example, for the 10 loans in our sample, we found that, in disbursing funds, DOE generally documented its analysis of the financial health of the project and recorded supervisory approvals, as required in its policy. Similarly, we found that in nearly all of the 30 requests for amendments and waivers to loan agreements for the 10 loans in our sample, DOE officials properly recorded their review of the requested changes.
In some other cases, DOE inconsistently adhered to its monitoring policies. For example, in regard to monitoring and reporting on credit risk, DOE was inconsistent in its preparation of credit reports, which, according to DOE's policy manuals, provide early warning signs of potential credit problems and can guide project and loan restructuring efforts should the need arise. In total, DOE was missing 24 of 88 periodic credit reports due through May 2013 across the 10 sample loans. Twenty of the missing reports were not completed because DOE did not begin producing periodic credit reports until August 2011. DOE officials told us that such reports were not produced before then because the Portfolio Management Division had not filled the staff positions needed to produce these reports and its management and reporting software was under development. As a result, DOE disbursed more than $4.7 billion for the 10 loans in our sample before it began producing periodic credit reports as required in its policy manuals. According to DOE officials, although DOE was not producing credit reports during this time, DOE staff were taking other measures to monitor the loans, such as keeping in regular contact with the borrowers. In addition, after DOE began producing credit reports, DOE officials inconsistently recorded credit risk ratings on multiple credit reports. For example, of the 64 reports we reviewed as part of our sample, 11 had one or more credit risk rating fields left blank, and other credit rating fields contained errors. According to DOE officials, the reasons for the blank and incorrect fields included human error and a design error in its management and reporting software. Further, there was a wide range between when credit reports were completed and when they were reviewed; more specifically, the time it took to review reports completed on a quarterly basis ranged from as little as a week to over 3 months.
DOE officials told us that the reporting and review period inconsistencies were a result of inadequate staffing and of incomplete implementing procedures that did not provide clear guidance on reporting dates. Also, some reports were submitted and approved outside of DOE's management and reporting software, whose system design was still being worked out and for which training was still being provided. DOE's policy manuals specify that one purpose of these credit reports is to serve as an information source for inquiries by government oversight authorities seeking to understand the loans' structures and decisions. Incomplete or inconsistent credit reporting can make it difficult for these authorities to understand and assess the status of the loans and determine if corrective actions are needed. According to DOE officials, as of February 2014, its staffing levels and its management and reporting software were sufficient to support full and timely credit reporting. In June 2013, after we found inconsistencies in DOE's credit reports, DOE established a draft implementing procedure to guide the development of future credit reports, clarifying reporting dates and preparation periods for new and existing staff. Furthermore, DOE officials stated in January 2014 that the department has taken steps to address human error in the credit risk rating fields by requiring that the fields representing previous credit ratings be populated automatically. DOE officials also stated that they are addressing the system design issue in the next generation of its management and reporting software, planned for release in late fall 2014. In another example, DOE inconsistently adhered to policies for managing troubled loans. DOE's policy manuals require that DOE prepare and approve plans for handling troubled loans to borrowers who are in danger of defaulting on their loan repayments.
Once it becomes clear that a loan is in danger of default, DOE policy calls for the preparation, approval, and implementation of a workout action plan, which identifies potential problems and lays out decisive remedial actions to help minimize potential losses. However, for two troubled loans in our sample, DOE officials told us they had not prepared a formal workout action plan in a single document but instead had specified problems and remedial actions in many documents over a period of time. For example, in one case in 2011, in which DOE officials had been aware for at least 10 months that the borrower would likely default on its payments, DOE provided us with about 20 such documents, including analyses of collateral, draft memoranda, and slideshow presentations, which showed that DOE had taken or considered some of the options described by its policy. However, these documents did not conform to DOE's policy for these plans, particularly its policy that DOE prepare a workout plan document and seek formal approval from its management. DOE officials told us that "operational matters had evolved beyond the steps outlined in their policy manuals." DOE officials noted that they were revising the manuals to better comport with best practices in the finance industry and that DOE has been operating under draft implementing procedures since June 2012. These officials also noted that DOE's 2009 and 2011 policy manuals were inadequate and were completed without the benefit of experts in the field of workout plans because of limited staffing in the Portfolio Management Division at the time. Officials noted that managers with such expertise are now on staff in the division, but the branch within the division that is tasked with managing troubled loans, including the development and implementation of workout action plans, had not staffed four of five positions as of February 2014.
DOE officials told us that, given the availability of third-party financial advisors and the limited number of assets that fall within that category, they may not need to fill all of the positions. However, inconsistent adherence to policies and incomplete staffing limit DOE's assurance that it has been effectively managing troubled loans during a period when there have been five defaults and bankruptcies among DOE loan program borrowers or that it can effectively manage such loans in the future. Further, DOE did not adhere to some existing policies for evaluating and mitigating portfolio-wide risk, in particular policies for evaluating the effectiveness of its loan monitoring. DOE's 2011 policy manual states that certain functions are critical to management's ability to assess the adequacy and quality of the agency's monitoring. The manual further states that failure to maintain these functions is an unsound practice that could expose DOE to loss or criticism. These functions--which are referred to as credit review, compliance, and reporting functions--include internal assessment of documentation, portfolio-wide reporting on risks, and evaluation of the effectiveness of DOE's loan monitoring. The Loan Programs Office's Portfolio Management Division has conducted some internal assessments of the quality of DOE documentation and, in May 2013, started some portfolio-wide reporting on the overall risk posed by DOE's loan obligations. However, DOE officials told us that the division has not evaluated the effectiveness of the agency's loan monitoring efforts or produced the required reports. DOE officials told us that these responsibilities have been transferred to the Risk Management Division, which, as noted earlier, was operating under incomplete or outdated policies and had staff vacancies.
Without conducting these evaluations, DOE management cannot assess the adequacy of its monitoring efforts and thus be reasonably assured that it is effectively managing risks associated with its loan programs. DOE's loan programs began making loans and guarantees in 2009, and by March 2014 DOE had made or guaranteed over $30 billion in loans that required monitoring. In its policy manuals, DOE recognizes the importance of monitoring loans and guarantees to proactively manage their risks and protect the financial interests of the federal government and the taxpayer. OMB guidance specifies that credit programs should have robust management and oversight frameworks. However, DOE has been monitoring its loans since 2009 without the benefit of a fully developed organizational structure because staffing, management and reporting software, and monitoring policies and procedures are still works in progress. The absence of a fully developed organizational structure has resulted in inconsistent adherence to policies during a period of significant program events, including loan disbursements, borrower bankruptcies, and loan repayments involving billions of dollars. Because DOE inconsistently adhered to the policies it had in place, DOE's assurance that it was completing activities critical to monitoring the loans has been limited. DOE has made progress since 2011 in developing its monitoring functions, but it has repeatedly missed internal deadlines for completing its loan monitoring policies and procedures. In the meantime, DOE has recently announced a new solicitation for up to $8 billion in loan guarantees for advanced fossil energy projects and issued two new loan guarantees for nuclear generation, adding $6.2 billion in loans to be overseen. In addition to a fully developed loan monitoring organization, evaluating the effectiveness of ongoing monitoring efforts is important to ensuring that risks are being adequately managed in DOE's loan programs.
However, since the first loans were made, DOE has not conducted evaluations of its loan monitoring by performing the credit review, compliance, and reporting functions outlined in its 2011 policy manual. Such evaluations might have identified and addressed the inconsistent adherence to its policies that we identified. As DOE's manual states, a failure to maintain a reliable and effective evaluation function is unsound and could expose DOE to loss or criticism. Given the high profile and large sums of money involved in DOE's loan programs--more than $30 billion in loans and guarantees already made and approximately $45 billion in remaining loan and loan guarantee authority--this exposure is significant. To provide greater assurance that DOE is effectively monitoring its loans, we recommend that the Secretary of Energy direct the Executive Director of the Loan Programs Office to take the following four actions: fully develop the office's organizational structure by (1) staffing key monitoring positions, (2) updating management and reporting software, and (3) completing policies and procedures for loan monitoring and risk management; and (4) evaluate the effectiveness of DOE's monitoring by performing the credit review, compliance, and reporting functions outlined in the 2011 policy manual for DOE's loan programs. We provided a draft of this report to DOE for review and comment. In its written comments, DOE generally agreed with our recommendations. DOE also said it disagreed with some statements in the draft report. It was difficult, however, for us to determine which statements DOE disagreed with because the comments focused on highlighting DOE's monitoring efforts in four areas rather than specifying areas of disagreement. DOE's written comments and our detailed responses can be found in appendix V of this report. DOE also provided technical comments, which we incorporated as appropriate. DOE noted several actions it is undertaking in response to our recommendations.
Regarding its organizational structure, DOE stated it would continue to recruit and hire qualified managers and staff for its Portfolio Management and Risk Management Divisions; implement a second generation of its software by the end of the first quarter of 2015, as well as a new information and reporting system by the end of the third quarter of 2014; and continue to prepare and issue portfolio monitoring and risk management procedures and guidelines. However, DOE did not provide information on any plans for updating and completing its overall policy manual for the programs. We believe this action is needed because it would provide guidance on the organizational roles of the new Risk Management Division and address inconsistencies we found between the current manual and current DOE practices, such as those for troubled loans. Regarding evaluation of the effectiveness of DOE's monitoring, DOE described several efforts for reviewing and monitoring the Loan Programs Office's portfolio. However, DOE did not indicate that it plans to prepare the required reports to evaluate the effectiveness of its loan monitoring. We are sending copies of this report to the Secretary of Energy, the appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. This appendix details the methods we used to examine the Department of Energy's (DOE) Loan Programs Office. 
The 2007 Revised Continuing Appropriations Resolution mandates that GAO review DOE's execution of the Loan Guarantee Program (LGP) and report its findings to the House and Senate Committees on Appropriations. Because DOE is administering the LGP and Advanced Technology Vehicles Manufacturing (ATVM) loan program through one Loan Programs Office, we included both programs in this review. For this report, we assessed the extent to which DOE has developed and adhered to loan monitoring policies for its loan programs. Among the policies we reviewed was DOE's manual for the LGP: U.S. Department of Energy Loan Programs Office, Credit Policies and Procedures, Title XVII of the Energy Policy Act of 2005 (Washington, D.C.: Oct. 6, 2011). As of January 2014, DOE was drafting a new policy manual to reflect current practices and supersede the separate manuals for both programs. Moreover, we conducted semistructured interviews with DOE staff to ensure that our understanding of these policies and procedures was complete and accurate. To assess the extent to which DOE adhered to its monitoring policies, we acquired and analyzed documentation from a nonprobability sample of 10 of the 36 loans and loan guarantees that had been made by March 2013 and therefore required monitoring. The use of a nonprobability sample means that we are unable to generalize our findings to the loans and loan guarantees not in our sample, but we are able to make observations about DOE's monitoring activities for the diverse set of 10 loans and guarantees. The loans and loan guarantees were chosen to cover projects involving a range of technologies, construction statuses, credit watch list statuses, loan or guarantee amounts, dates of loan finalization, and amounts disbursed. We examined relevant project files, including disbursement records, plans for troubled loans, and credit reports. We requested all disbursement records and plans for managing troubled loans for the 10 sample loans.
DOE began producing credit reports in August 2011, so we examined all credit reports for the 10 sample loans produced between August 2011 and May 2013, when we completed our data collection. We compared these files with selected DOE policies to determine where the guidance was followed and where it was not, as well as the level of consistency in monitoring across projects. We did not review all DOE policies but rather only those most directly associated with the 10 activities identified in our summary. In some cases, our review of documentation was limited by the fact that DOE's detailed procedures remained under development. In addition, to provide context, we compared DOE's monitoring policies with those of private lenders. We conducted semistructured interviews with eight experts (four private lenders, three academic experts, and one industry expert) about private lender monitoring policies and compared the information they provided with DOE's policies and the 10 activities we identified. We selected a nonprobability sample of four private lenders financing projects similar to those in the LGP and ATVM loan program, using additional criteria such as the value and number of loans issued. The use of a nonprobability sample in this case means that we are unable to generalize the information they provided to the private lenders not in our sample, but we are able to make observations about how DOE's policies compare with those lenders' descriptions of their own and industry-wide practices. Our primary source in identifying lenders was a search of the lenders most active in financing large innovative energy and advanced vehicle projects in the Bloomberg New Energy Finance database. We discussed general policies and practices with these private lenders because they were unable to share specific written policies and procedures.
We identified academic experts through a literature search of relevant academic articles on project finance loan monitoring practices and the financing of innovative energy technologies, and we then contacted the most frequently cited academics. In addition, we reviewed a study conducted by KPMG on behalf of the Export-Import Bank of the United States, which compiled leading monitoring practices from institutions across a number of sectors, including 10 private lenders, as well as several export credit and government agencies. We then interviewed the study's author. We conducted this performance audit from March 2013 to April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In order for us to compare the Department of Energy's (DOE) monitoring efforts to those of private lenders, we summarized and categorized DOE's monitoring into 10 activities based on DOE policy and discussions with DOE officials (see table 1). Taken together, the 10 activities cover the full duration of the loan, from the time loans have been made, through disbursement of funds, construction of the project, and the operation of the project, until final repayment, which can be up to 3 decades in the future. Some of the monitoring activities we identified are to be done for every loan. For example, prior to disbursing loan payments, DOE policies require that staff perform several steps to review the financial health of the project and the borrower, such as checking borrower documentation, performing technical reviews of the health of the project, and obtaining supervisory approval prior to processing payment. 
During construction, DOE policy directs loan monitoring and technical staff to review project financial and technical documents to ensure that the project is progressing toward its construction goals. DOE policy for another activity we identified directs DOE staff to monitor and report on the financial health of the project and prepare reports about the project's financial information, among other things. Other monitoring activities are applied only if the internal or external risk to the financed project increases. Two of these 10 activities address this possibility: (1) assessing potential actions for loans with increasing risks and (2) managing troubled loans. As part of managing troubled loans, DOE officials are to determine whether it is preferable to make changes in a troubled loan's structure, such as restructuring the project in a way that might reduce risk, or to seek an outside entity to purchase the loan and take on the risk, among other options. To develop this summary of DOE's monitoring activities, we examined high-level DOE policy guidance--specifically DOE's October 2011 policy manual for its Loan Guarantee Program, its 2009 policy manual for its Advanced Technology Vehicles Manufacturing (ATVM) loan program, DOE's approved implementing procedure documents, and DOE loan monitoring documentation. In addition, we reviewed a draft policy manual intended to unify guidance for both loan programs (more detailed information about how we summarized these activities is available in app. I). Table 1 summarizes the 10 monitoring activities we identified that are described by DOE Loan Programs Office policies and procedures. 
The 10 loans and loan guarantees in our sample were made to the following borrowers: Arizona Solar One, LLC (aka Abengoa Solar, Inc.; Solana); Genesis Solar, LLC; Granite Reliable Power, LLC; Great Basin Transmission South, LLC (aka SWIP/On Line); High Plains Ranch II, LLC (aka SunPower Corp CA Valley Solar Ranch); Mojave Solar LLC (aka Abengoa Solar Mojave); NGP Blue Mountain I, LLC; OFC 2, LLC (aka Ormat); Stephentown Regulation Services, LLC (aka Beacon Power Corporation); and Tonopah Solar Energy (aka Solar Reserve, LLC). These projects involved technologies including solar and geothermal generation. Two of the sample loans defaulted and were sold at auction (one borrower subsequently went bankrupt; the other loan was subsequently restructured by the purchaser). In order to provide context and better understand the Department of Energy's (DOE) loan monitoring, we compared DOE policies with those of private lenders that finance large energy projects. We conducted semistructured interviews with eight experts about private lenders' monitoring policies and compared the results with DOE's policies and the 10 activities we identified. For more information on our methodology, see appendix I. For the activities in which DOE has established policies, those policies generally align with those of private lenders. More specifically, our discussions with experts and our review of DOE's policies indicate that DOE's general monitoring activities, frequency of monitoring, actions taken when risk appears to be increasing, and organizational structures were all generally similar to those of private lenders. For example, both DOE policies and private lenders described various monitoring activities to oversee borrowers, including periodic reviews of borrower information, independent engineering reviews of projects' progress and expenditures, site visits to projects to monitor construction, and the tracking of borrowers' compliance with loan agreements. In one instance, DOE's monitoring policies appear more rigorous than many private lenders'.
Specifically, both DOE's and private lenders' policies call for independent engineers for technical expertise and oversight, but DOE also has engineers on staff who oversee the independent engineers and advise DOE's loan portfolio managers. While both DOE and private lenders were similar in having separate risk management functions, we could not directly compare DOE's reporting relationship for its Risk Management Division with that of the private sector because of the incomplete policies and evolving nature of the Risk Management Division, as well as differences between a private lender and a government agency--such as the structure of overall management and the greater diversity of missions assigned to DOE than to a bank. The following are GAO's comments on the letter from the Department of Energy dated April 18, 2014. 1. DOE states that its disbursement monitoring is fully developed and that DOE has not made disbursements without confirming that all required conditions have been satisfied and obtaining all necessary internal approvals. We found that DOE generally adhered to its policies related to disbursements, documenting analyses, and recording approvals, as we report on page 10. However, as we point out in the report, other aspects of DOE's loan monitoring were still under development while DOE was disbursing funds. 2. DOE states that its 2011 policy manual requires the agency to prepare loan review reports for each project, at least annually, and that the agency has prepared quarterly, semiannual, and annual reports for all projects since the 2011 manual was issued. The manual in effect prior to 2011 also required periodic credit reports, so we looked for all reports prepared for the 10 loans in our sample. As we report, DOE did not start preparing any periodic credit reports until 2011, and 24 of 88 required periodic credit reports were missing for the 10 loans. Of these missing reports, 4 were due after the 2011 policy manual was issued. 
DOE also states that credit reports are not the only tool in determining borrower health and capacity to repay the DOE-supported loan. We agree that credit reports are not the only tool, but DOE's policy manual indicates their importance. Specifically, it states that reports will enhance the credit monitoring process by providing (1) an early warning signal of potential credit issues, (2) a basis for potential loan restructuring, (3) a basis for reassessing credit risk, and (4) an information source for inquiries. 3. DOE states that it prepared comprehensive analyses and presentations that served as workout plans for the two loans mentioned by GAO and that consolidation of these analyses and presentations into a single document would not have changed the overall effectiveness of the workout plans. We did not evaluate whether this approach was more or less effective than the approach stated in DOE's policies, which call for a formal workout action plan that must be approved by management. As we note in our report, DOE did not follow its policies in that area. We also note DOE's statements that it is revising the policy manual to better comport with best practices in this field and that it has been operating under draft implementing procedures since June 2012. The mismatch between DOE's written policies and its actual practices highlights the importance of our recommendation that DOE complete its policies and procedures. 4. DOE describes several actions it has taken and has under way for reviewing and monitoring the Loan Programs Office's portfolio. We note in the report that DOE has conducted some internal assessments and begun portfolio-wide reporting on risks but that DOE was not adhering to its policies for evaluating the effectiveness of its loan monitoring.
Specifically, DOE's 2011 policy manual requires "a formal Credit Review and Compliance report" to be issued quarterly, but DOE officials told us that none had been produced, again underscoring the importance of our recommendation in this area. In addition to the individual named above, Karla Springer, Assistant Director; Marcia Carlsen; Lee Carroll; Cindy Gilbert; Ryan Gottschall; Armetha Liles; Eric Miller; Cynthia Norris; Madhav Panwar; Lindsay Read; Barbara Timmerman; Jarrod West; and Steve Westley made key contributions to this report. Federal Support for Renewable and Advanced Energy Technologies. GAO-13-514T. Washington, D.C.: April 16, 2013. Department of Energy: Status of Loan Programs. GAO-13-331R. Washington, D.C.: March 15, 2013. DOE Loan Guarantees: Further Actions Are Needed to Improve Tracking and Review of Applications. GAO-12-157. Washington, D.C.: March 12, 2012. Department of Energy: Advanced Technology Vehicle Loan Program Implementation Is Under Way, but Enhanced Technical Oversight and Performance Measures Are Needed. GAO-11-145. Washington, D.C.: February 28, 2011. Department of Energy: Further Actions Are Needed to Improve DOE's Ability to Evaluate and Implement the Loan Guarantee Program. GAO-10-627. Washington, D.C.: July 12, 2010. Department of Energy: New Loan Guarantee Program Should Complete Activities Necessary for Effective and Accountable Program Management. GAO-08-750. Washington, D.C.: July 7, 2008. Department of Energy: Observations on Actions to Implement the New Loan Guarantee Program for Innovative Technologies. GAO-07-798T. Washington, D.C.: April 24, 2007. The Department of Energy: Key Steps Needed to Help Ensure the Success of the New Loan Guarantee Program for Innovative Technologies by Better Managing Its Financial Risk. GAO-07-339R. Washington, D.C.: February 28, 2007.
DOE's Loan Programs Office administers the Loan Guarantee Program (LGP) for certain renewable or innovative energy projects and the Advanced Technology Vehicles Manufacturing (ATVM) loan program for projects to produce more fuel-efficient vehicles and components. As of March 2014, the programs had made more than $30 billion in loans and guarantees: approximately $21.9 billion for 33 loan guarantees under the LGP and $8.4 billion for 5 loans under the ATVM loan program. Both programs can expose the government and taxpayers to substantial financial risks should borrowers default. GAO assessed the extent to which DOE developed and adhered to loan monitoring policies for its loan programs from 2009 through 2013. GAO analyzed relevant regulations and guidance; prior audits; DOE policies; and DOE data, documents, and monitoring reports for a nonprobability sample of 10 loans and guarantees. Findings from the sample are not generalizable, but the sample covered a range of technologies and loan statuses. GAO also interviewed DOE officials. The Department of Energy (DOE) has not fully developed or consistently adhered to loan monitoring policies for its loan programs. In particular, DOE has established policies for most loan monitoring activities, but policies for evaluating and mitigating program-wide risk remain incomplete or outdated. These activities are generally the responsibility of the Risk Management Division in DOE's Loan Programs Office. This division, established in February 2012, has been operating since its inception under incomplete or outdated policies. DOE has missed several internal deadlines for updating its loan monitoring policies. DOE officials told GAO that updated policies were delayed in part because the Loan Programs Office did not have a Director of Risk Management until November 2012. Additionally, the Risk Management Division had not staffed 11 of its 16 planned positions until late 2013, when it staffed 6 of the 11 vacancies.
Under federal guidance, credit programs should have robust management and oversight frameworks for monitoring the programs' progress toward achieving policy goals within acceptable risk thresholds, and taking action where appropriate to increase efficiency and effectiveness. It is difficult to determine whether DOE is adequately managing risk if policies are outdated or incomplete and key monitoring positions are not fully staffed. In some cases GAO examined, DOE generally adhered to the loan monitoring policies that it had in place. For example, DOE generally adhered to its policies for authorizing disbursement of funds to borrowers. But in other cases, DOE adhered to the policies inconsistently or not at all because the Loan Programs Office had staff vacancies and was still developing management and reporting software and procedures for implementing policies. For example:
DOE inconsistently adhered to its policies for monitoring and reporting on credit risk, particularly for preparing credit reports--periodic reviews of project progress and factors that may affect the borrower's ability to meet the terms of the loan. DOE did not prepare dozens of credit reports, mostly in 2011, because, according to officials, it had not filled positions or fully developed the software needed for producing these reports.
DOE inconsistently adhered to its policies for managing troubled loans, which require it to prepare and approve plans for handling loans to borrowers in danger of defaulting on their loan repayments. For two troubled loans, officials said DOE did not prepare a formal plan, as called for in its policy, in part because implementing procedures were incomplete.
DOE did not adhere to its policy requiring it to evaluate the effectiveness of its loan monitoring because of continuing staff vacancies.
Without conducting these evaluations, DOE management cannot assess the adequacy of its monitoring efforts and thus be reasonably assured that it is effectively managing risks associated with its loan programs. As a result, DOE was making loans and disbursing funds from 2009 through 2013 without a fully developed loan monitoring function. During this time, inconsistent adherence to policies limited assurance that DOE was completing activities important to monitoring the loans and protecting the government's interest. GAO recommends that DOE (1) staff key positions, (2) update management and reporting software, (3) complete policies for loan monitoring, and (4) evaluate the effectiveness of its loan monitoring. DOE generally agreed with the recommendations.
The Congress has long recognized the need for the President to have flexibility in the foreign policy area. This is reflected in sections 506 and 552 of the Foreign Assistance Act of 1961, as amended. In addition, the Congress has occasionally authorized the President to initiate drawdowns for specific purposes in foreign operations appropriations acts. Section 506(a)(1) of the Foreign Assistance Act authorizes the President to "drawdown" defense articles, services, and military education and training from DOD and the military services' inventories and provide such articles and services to foreign countries or international organizations. Before exercising this authority, the President must report to the Congress that an unforeseen emergency exists requiring immediate military assistance that cannot be met under any other law. Section 506(a)(2) of the Foreign Assistance Act authorizes the President to drawdown articles and services from the inventory and resources of any U.S. government agency and provide them to foreign countries or international organizations in a number of nonemergency situations. As above, before exercising this authority, the President must first report to the Congress that any such drawdown is in the national interests of the United States. This special authority is broad in scope, allowing the President to use drawdowns to assist with counternarcotics efforts, provide international disaster assistance and migration and refugee assistance, aid prisoner-of-war and missing-in-action efforts in Southeast Asia, supplement peacekeeping missions, and support mid- to long-term national interests in nonemergency situations. Section 552 of the Foreign Assistance Act authorizes the President to provide assistance for peacekeeping operations and other programs carried out in furtherance of U.S. national security interests. 
Specifically, section 552(c)(2) authorizes the President to direct the drawdown of commodities and services from the inventory and resources from any U.S. agency if the President determines that an unforeseen emergency requires the immediate provision of such assistance. At the discretion of the President, drawdown proposals are typically developed in an interagency process that generally includes DOD, the National Security Council, and State but may include other executive branch agencies. Based on the estimated price and availability of the defense articles and services, the agencies agree on the parameters of the drawdown and State prepares a justification package, including the presidential determination for the President's signature. Once the presidential determination is approved, the Defense Security Cooperation Agency (DSCA), a component of DOD, executes the drawdown by working with the military services to determine what specific defense articles and services will be provided and who will provide them. DSCA is also charged with tracking and reporting on the drawdown status. A drawdown is typically completed when the emergency or foreign policy goal has been met or the dollar value of the authority has been reached. The excess defense articles program, which authorizes the President to transfer defense articles excess to DOD's needs to eligible foreign countries or international organizations, is sometimes used in conjunction with drawdowns. Defense articles, including excess defense articles, that are transferred under presidential determinations authorizing drawdowns must be fully operational on delivery. The drawdown authority may be used, if necessary, to refurbish defense articles to operational status. In the 27 years from 1963 through 1989, the President approved 20 determinations authorizing drawdowns valued at a total of about $1 billion. 
In the 13 years since 1989, the President approved 70 determinations authorizing drawdowns valued at about $2.3 billion (see app. I). Of the 90 total drawdowns, 58 totaling about $2.1 billion were authorized under section 506 of the Foreign Assistance Act; 15 additional drawdowns valued at about $141.7 million were authorized under section 552. As shown in figure 1, drawdown authorizations as a percentage of total military assistance provided by the United States have varied considerably over the years (see also app. II). However, the increased use of drawdowns in the 1990s represents a larger percentage of total annual military assistance than in any other period except during the Vietnam War. The Foreign Assistance Act of 1961, as amended, also requires that the President report to the Congress on military assistance, including drawdowns, provided to foreign recipients. Specifically, section 506(b)(2) requires the President to keep the Congress fully and currently informed of all military assistance provided under section 506. This includes detailing all military assistance to a foreign country or international organization upon delivery of any article or upon completion of any service or education and training. Section 655 requires the President to submit an annual report to the Congress on the aggregate value and quantity of defense articles and services and military education and training activities both authorized and actually provided by the United States to each foreign recipient. The Director of DSCA is primarily responsible for preparing these reports, as delegated by the President through the Secretary of Defense. Overall, DSCA's reports to the Congress on the status of drawdowns are inaccurate and incomplete. Its information system for tracking the status of drawdowns is outmoded, and the military services do not regularly provide DSCA updated information on the transfers they are implementing.
As a result, the Congress and the executive branch do not have accurate and up-to-date information readily available to oversee and manage the assistance provided through drawdowns. DSCA uses its "1000 System" as a central repository for drawdown data. The 1000 System was designed in the late 1960s to track defense articles and services granted under the Military Assistance Program, which was discontinued in 1982. Although the Army, Air Force, and Navy compile data on the cost, type, quantity, and delivery status of defense articles and services supplied as drawdowns, each service uses a different automated system--any updates submitted to DSCA have to be converted to the 1000 System, and any coding or conversion errors have to be manually corrected. In addition, the services do not regularly report this information to DSCA. DSCA officials stated that it might take a few months to several years for the military services to report drawdown data. A March 2002 Navy memo regarding DSCA's request for an update stated that the 1000 System was an impediment to drawdown processing. A DSCA official told us that the Navy had not provided updated information for several years. Further, although officials at the Army Security Assistance Command said that the Army was sending updates of drawdown data to DSCA on a monthly basis, agency officials told us that they were not aware of the updates. In response to specific inquiries, DSCA usually relies on its country desk officers to work with the military services to determine the defense articles and services provided and the associated costs to DOD and the services. Nevertheless, we found that this information, as well as other information that the DSCA desk officers maintain, is often not entered into the 1000 System. Our analysis of updates provided by the services and of more detailed information from our four case studies revealed numerous inaccuracies in the 1000 System and DSCA's reports to the Congress.
Four presidential determinations authorizing drawdowns totaling $17 million were not on DSCA's list, and three presidential determinations were incorrectly identified in the 1000 System. For a 1993 drawdown to Israel, DSCA's 1000 System reports that nothing has been delivered. In information provided to us, the Army reported that Apache and Blackhawk helicopters and services worth $272 million were provided to Israel, but indicated that its records are not clear whether the helicopters were provided as part of the 1993 drawdown. However, an Army security assistance officer in Israel during 1993 told us that the helicopter deliveries were part of the 1993 drawdown. DSCA was required to report every 60 days on the delivery and disposition of defense articles and services to Bosnia. In June 2001, in its last 60-day report to the Congress, DSCA reported that $98.3 million in defense articles and services had been provided to Bosnia. Records provided to us by the military services indicate that DSCA did not use actual costs in these reports. For the 1996 drawdown to Jordan, the President authorized the transfer of 88 M60 tanks. DSCA stated in its 1996 annual report to the Congress that 50 tanks were authorized, but it did not report whether these tanks were delivered or at what cost. In subsequent annual reports to the Congress, DSCA provided no further updates on the Jordan drawdown. According to U.S. embassy officials and the DSCA Jordan desk officer, 50 tanks were delivered in December 1996, and the remaining 38 tanks were delivered in December 1998. As recently as July 2002, the 1000 System indicated that only 5 tanks had been delivered to Jordan at a cost of $10.6 million. The Army reported that $15.5 million was the value of all 88 tanks, but this figure did not include costs for refurbishment, spare parts, and transportation. Under a 1997 drawdown to Mexico, the President authorized the transfer of 53 UH-1H helicopters, which was reported to the Congress.
As with Jordan, in subsequent annual reports to the Congress, DSCA provided no further updates on the Mexico drawdown. In February 2001, DSCA closed the drawdown, with concurrence from the services involved, 3 years after the drawdown was completed and nearly 18 months after the helicopters had been returned to the United States. DSCA reported the total costs as $16.1 million, including $8 million for the 53 helicopters. However, as of July 2002, the 1000 System had not recorded the transfer, much less noted the return of the helicopters. Appendix III presents the dollar value of deliveries reported in DSCA's 1000 System compared with the dollar value shown in the military services' reports for the 51 drawdowns authorized during fiscal years 1993-2001. Overall, the 1000 System reported the delivery of about $300 million in defense articles and services, while the military services reported $724.2 million. DSCA and the military services' data agreed for 16 drawdowns--reporting no deliveries for 12--and differed by less than $1 million for 12 others. Of the 23 drawdowns with differences greater than $1 million, the military services generally reported significantly higher amounts. Drawdowns are an additional tool for the President to address U.S. foreign policy and national security objectives. They allow the President to provide military assistance to foreign recipients quickly because the defense articles and services are not provided through regular acquisition channels. Drawdowns also allow the United States to provide additional or improved military capability to foreign recipients. Officials from both the U.S. and recipient governments stated that the transfer of defense articles and services through drawdowns helps promote military-to-military relations. Also, DOD and State officials told us that the transfer of defense articles under drawdowns can help expand markets for U.S. defense firms.
According to State officials, drawdowns allow the United States to provide assistance to foreign recipients in an emergency using DOD resources. In particular, drawdown authority has been useful in providing humanitarian assistance in the wake of natural disasters. For example, in response to a 1998 hurricane that struck Central America, the President determined that a strong U.S. response to save lives and assist in reestablishing basic infrastructure was needed. The drawdown authority allowed DOD to use existing inventory and resources for its relief efforts. The importance of the President's ability to supply defense articles or services quickly to address a regional crisis was evidenced by a 1996 drawdown to Bosnia. The United States provided defense articles and services to the Bosnian Federation within 6 months of a July 1996 presidential determination. According to DOD and State officials, the drawdown allowed assistance to be provided more quickly and at less cost than other security assistance programs would have. The United States provided 116 fully operational 155mm howitzers as excess defense articles to help ensure the Bosnian Federation Army's capacity to return indirect fire if attacked, a capacity it lacked during the conflict with the Bosnian Serbs. The United States also provided 45 M60 tanks, 80 armored personnel carriers, 15 UH-1H helicopters, and light arms including 46,100 M16 rifles. These articles and related services met the force requirements for military stabilization that were approved in the Dayton Peace Agreement and enumerated in the Organization for Security and Cooperation in Europe Agreement on Sub-Regional Arms Control. According to DOD and State officials, the defense articles and services provided under the drawdown helped promote the peace and military stability of Bosnia. The drawdown authority is also useful for providing logistical assistance to regional operations, as illustrated in the following examples.
In a 1999 drawdown to Kosovo, the United States supplied airlift and related services for the United Nations High Commissioner for Refugees. In a 1999 drawdown to East Timor, the United States provided transportation for peacekeepers as part of a regional multilateral operation headed by Australia. Similarly, in a 2000 drawdown for disaster assistance in southern Africa, the United States provided the logistical support for a South African-led regional multilateral disaster response force. Drawdowns are also used to support international counternarcotics operations. During fiscal years 1996-99, the United States provided defense articles and services through drawdowns to the Colombian and Mexican military and national police to increase their ability to interdict the flow of illicit narcotics to the United States. The United States provided the Colombian Army and National Police with fully operational defense articles including 7 C-26 aircraft, 12 UH-1H helicopters, and 9 patrol boats. Similarly, the United States provided Mexico with 53 UH-1H helicopters and 4 C-26 aircraft. According to State officials, although Colombia and Mexico experienced difficulty in using these articles (Mexico eventually returned the helicopters to the United States), the drawdown helped improve their capability to conduct counternarcotics operations. In the case of Colombia, the drawdown, which was implemented by State, was a way to provide arms, ammunition, and other lethal assistance to the Colombian National Police. In 1996, 1998, and 1999, three separate drawdowns were intended to help Jordan promote regional security of the Middle East. The drawdowns were initiated after Jordan signed a peace treaty with Israel in 1995 and as a result of Jordan's subsequent role in the Wye River Peace Conference. 
The United States provided Jordan with 88 M60 tanks, 18 UH-1H helicopters, 38 antitank armored personnel carriers, a C-130 aircraft, a rescue boat and 2 personnel boats, 18 8-inch howitzers, and 302 air-to-air missiles. According to DOD and State officials, the defense articles that were transferred helped Jordan secure its borders. Drawdowns can help foster better military-to-military relations between the United States and foreign recipients. According to DOD and State officials, the current U.S. military-to-military relationship with Jordan is excellent, in part because of the transfer of articles and services through drawdowns. U.S. officials cited as evidence Jordan's participation in peacekeeping operations in East Timor, Haiti, and Sierra Leone. More recently in Afghanistan, the Jordanian Armed Forces participated in demining operations and set up a field hospital that has treated over 30,000 patients, including U.S. soldiers. DOD officials also noted that U.S.-Jordanian training exercises resulted in the U.S. Marine Corps being better prepared to operate in Afghanistan. According to State officials, the transfer of defense articles under drawdowns and excess defense articles helps to expand markets for U.S. defense firms. For example, the Jordanian Army signed a $38 million contract with a U.S. defense firm to refit Jordan's M60 tanks, including the 88 tanks transferred under a 1996 drawdown, with a new 120mm gun. Jordan plans to develop its defense industrial base around this capability and make this service available to other countries in the Middle East. We found two major concerns in the current use of drawdowns that may limit the benefits of the program. The U.S. military services are not being reimbursed for the costs associated with a drawdown, and the countries that receive defense articles through drawdowns often do not have the resources to maintain and operate them.
According to DOD and military service officials, the services are not reimbursed for the defense articles provided or the associated costs of drawdowns, and the articles are usually not replaced. Section 506(d) of the Foreign Assistance Act authorizes the appropriation of funds to the President to reimburse the services for the costs associated with executing drawdowns. However, since 1979, the President has not requested such reimbursements. The military services can incur six types of costs when executing a drawdown--(1) the value of the defense articles provided including aircraft, vehicles, weapons and ammunition, or other major end items; (2) the repair or refurbishment of these items; (3) spare parts and tools; (4) training; (5) packing, crating, handling, and transportation; and (6) administrative costs. The cost of defense articles charged against a drawdown is a depreciated value and not necessarily the replacement cost. The other costs of a drawdown are typically paid out of a service's operations and maintenance account and are not budgeted or planned for in advance. In effect, this means that the services have less operations and maintenance funding for other items in their inventories. Information provided by the services shows that unreimbursed costs associated with drawdowns have totaled about $724.2 million since 1993. The Army reported about $557 million in unreimbursed costs, and the Air Force and Navy reported $69.4 million and $97.8 million, respectively. Case by case, unreimbursed costs ranged from less than $100 to approximately $87.2 million. A large proportion of these costs were for refurbishing the defense articles, providing spare parts and support equipment, and transporting the articles. For example, the Army reported that it spent approximately $31.4 million from its operations and maintenance account to refurbish and deliver $55.8 million worth of articles for the 1996 drawdown to Bosnia. 
Similarly, the Army spent $23.8 million for spare parts and transportation from its operations and maintenance account on $51.5 million worth of articles for the 1996 drawdown to Jordan. However, this figure did not include refurbishment. Numerous DOD and service officials stated that the unreimbursed costs associated with a drawdown negatively affect the readiness of the U.S. military services. However, these officials could not provide any examples of programs forgone or specific deficiencies in unit readiness. In 1996, we reported that Army operations and maintenance costs exceeded funding for contingency operations as a result, in part, of Army expenditures on the 1996 drawdown to Bosnia. In addition, a July 1996 memorandum from the Chief of Staff of the Army to the Chairman of the Joint Chiefs of Staff stated that drawdowns affect the Army's ability to respond to contingencies. It also stated that defense articles for future drawdowns would have to be taken from war reserve stocks or from reserve components. In other documents since 1996, the Army characterized the unbudgeted expenditures from operations and maintenance accounts in support of drawdowns as a drain on its readiness, training, transformation activities, and quality-of-life funds and as a long-term risk to the stability of Army investments. Furthermore, in 2000, the military services reported to DSCA on the effect on readiness of drawdowns for counternarcotics efforts. Generally, the services characterized the effect as dollars spent on unplanned contingencies and, therefore, not available to support other requirements. In their responses to DSCA, the Army stated that it expected readiness to be adversely affected by the diversion of $8 million worth of Blackhawk helicopter spare parts for Colombia, but it did not say whether any specific helicopter unit would be affected. Subsequently, the Joint Staff directed the Army to provide the parts to Colombia under a 1999 drawdown.
The Air Force noted that it would need to replace several utility vehicles transferred under drawdown authority, but it did not specify when or at what cost these vehicles would be replaced or the effect on readiness of no longer having the vehicles. In 1985, we reported that even if DOD and the military services were reimbursed for the costs associated with drawdowns, full replacement was unlikely, if not impossible. This is because, among other reasons, the replacement cost of an article may have increased more than the depreciated value charged against the drawdown, or the article may have been replaced by a newer (and more expensive) item. According to DOD officials, drawdowns are successful over the long term only if the foreign recipient has the ability to support the defense articles or if the United States provides additional funding for maintenance. Drawdowns typically provide for 1 or 2 years of essential spare parts for aircraft, vehicles, and weapons, but many recipients do not have the resources to support the defense articles after that. In addition, because defense articles delivered under drawdowns are often older articles, the spare parts and tools needed to maintain them may not be readily available. Consequently, the recipients' ability to conduct military or police missions in support of U.S. foreign policy diminishes as vehicles and weapons break down and as parts for these older defense articles become more difficult to obtain. Each of our case studies provided examples of problems with the long-term sustainability of the defense articles provided through drawdowns. Bosnia. According to officials from the Bosnian Federation Ministry of Defense and DOD, the Bosnian Federation Army does not have enough of its own funds, and does not receive enough assistance from the United States, to maintain the vehicles and weapons it received in the 1996 drawdown. Bosnia has received less than $6 million per year in financing since 1996 to support the defense articles.
However, Bosnian Federation Ministry of Defense officials stated that they need approximately $10 million per year just for spare parts and fuel. These officials noted that, as of May 2002, the readiness of the Federation units had significantly deteriorated and that the operational rates were below 35 percent for the helicopters and below 60 percent for the tanks. Colombia. In 1998, we reported that a 1996 counternarcotics drawdown to Colombia was hastily developed and did not consider sufficient information on specific Colombian requirements--including Colombia's ability to operate and maintain the articles. For example, 2 months after Colombia received 12 UH-1H helicopters, the Colombian National Police reported that only 2 were operational. The U.S. embassy estimated the cost of the repairs at about $1.2 million. As part of the same drawdown, the United States transferred 5 C-26 aircraft to conduct counternarcotics surveillance missions. According to U.S. embassy officials, the United States spent at least an additional $3 million to modify each aircraft to perform the surveillance missions, and it costs at least $1 million annually to operate and maintain each aircraft. Mexico. In 1996 and 1997, the United States provided the Mexican military with 73 UH-1H helicopters--20 from a 1996 excess defense articles transfer and 53 from a 1997 drawdown--and 2 years of spare parts to assist Mexico in its counternarcotics efforts. As we reported in 1998, the usefulness of the U.S.-provided helicopters was limited because the helicopters were inappropriate for some counternarcotics missions and lacked adequate logistical support. At the time, U.S. embassy officials were concerned that once the U.S.-provided support had been used, the Mexican military would not provide the additional support--estimated at $25 million per year for the UH-1H fleet--because of budgetary constraints. 
In March 1999, 72 UH-1H helicopters (one crashed) were grounded because of overuse and airworthiness concerns. Shortly thereafter, Mexico transferred the 72 helicopters back to the United States for repair and ended its involvement in the helicopter program. Jordan. Although Jordan has allocated $16 million of U.S. aid per year for sustainment and modernization since 2000, it cannot fully use all of the defense articles it has received through drawdowns. For example, the Jordanian Air Force cannot get all the necessary spare parts from DOD's logistics system for its UH-1H helicopters; as of May 2002, only 20 of 36 helicopters were operational. In addition, Jordan does not have funds to purchase additional munitions for some of the weapons it received from the drawdowns. As a result, the Jordanian Army and Air Force have never test-fired the air-to-air missiles or the antitank missiles they received. Furthermore, according to U.S. military officials in Jordan, the shelf life of some of the other munitions and light weapons ammunition used for training purposes may be expiring, and Jordan does not have the funds to replace them. Drawdowns give the President the ability to provide defense articles, training, and services to foreign countries and international organizations without first seeking specific appropriations from the Congress. In making this accommodation, the Congress has required that the President regularly report on the use of these special authorities. However, DSCA's system for collecting information on the status of drawdowns is outmoded and does not readily permit DSCA to meet the reporting requirements to the Congress. While DSCA can respond to ad hoc inquiries about specific drawdowns, a way to systematically track and accurately report on the status of drawdowns does not currently exist.
As a result, neither the Congress nor the executive branch has complete and accurate information about the status of defense articles and services provided to foreign recipients through drawdowns. In light of the increased use of drawdowns since 1990, the need for such information has increased accordingly. To help ensure that the Congress has accurate and complete information on the use of drawdowns, we recommend that the Secretary of Defense, in consultation with the Director of DSCA and the Secretaries of the military services, develop a system that will enable DSCA to report to the Congress on the cost, type, quantity, and delivery status of defense articles and services transferred to foreign recipients through drawdowns, as required. DOD provided written comments on a draft of this report (see app. IV). The Department of State had no comments. DOD concurred with our recommendation, but stated that DSCA is dependent on the military services for specific drawdown cost and delivery information and is not funded to support this administrative reporting requirement. We note that the Secretary of Defense has the authority to require regular and timely reporting by the services and believe that DOD should provide DSCA the necessary resources to fully implement our recommendation. DSCA also provided certain technical clarifications that we have incorporated as appropriate. Overall, to examine the use of drawdown authorities, we focused on the special authorities granting the President the ability to provide military assistance in emergency situations and in the U.S. national interests for the purposes of international counternarcotics control. We selected four countries--Bosnia-Herzegovina, Colombia, Jordan, and Mexico--as case studies to analyze specific costs, benefits, and problems associated with the drawdowns. 
Bosnia and Jordan represent examples of the use of drawdowns in an emergency situation to help stabilize their respective regions, and Colombia and Mexico represent examples of U.S. assistance in the national interest for counternarcotics efforts. To determine whether the costs to DOD and the status of drawdowns are reported to the Congress, as required, we analyzed relevant DSCA and military services' reports and documentation and addressed this issue with cognizant DSCA, military services, and State officials. Specifically, we compared DSCA's list of presidential determinations authorizing drawdowns to presidential determinations published in the Federal Register and drawdown reports from the military services; analyzed DSCA's cost and delivery data for the drawdowns from fiscal years 1993-2001 by comparing it with data collected from the military services; and compared information that we obtained from the DSCA country desk officers with information from U.S. embassy officials in the case study countries to determine the status of specific drawdowns, including deliveries and costs. We also reviewed the Foreign Assistance Act of 1961, as amended, to determine the relevant reporting requirements. To determine how the drawdowns benefit the United States and foreign recipients and what concerns, if any, are associated with the programs, we focused primarily on the four case study countries. We analyzed relevant DSCA, military services, and State documentation. We visited Bosnia and Jordan and met with U.S. embassy and host country officials, including officials in the host country ministries of defense and military services, and reviewed relevant documentation. We met with the cognizant officials of the unified military commands for Bosnia, Colombia, and Jordan. In Washington, D.C., we met with DSCA country desk officers and officials from DSCA's Comptroller's Office and General Counsel's Office; the U.S. 
military services' respective security assistance offices; and the Office of the Joint Chiefs of Staff, Directorate for Strategic Plans and Policy. We also met with cognizant officials in the Department of State's Bureau for Political and Military Affairs and the Bureau for International Narcotics and Law Enforcement Affairs. We conducted our work between November 2001 and August 2002 in accordance with generally accepted government auditing standards. We will send copies of this report to the interested congressional committees and the Secretaries of Defense and State. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please call me at (202) 512-4268 or contact me at [email protected]. An additional GAO contact and staff acknowledgments are listed in appendix V.

Table 1 lists the 90 presidential determinations that have authorized $3.3 billion in drawdowns since fiscal year 1963. The first drawdown authorized military assistance for India; the most recent authorized counterterrorism assistance for the Philippines in June 2002. Over the years, more than 55 countries and other organizations, such as the United Nations, have been authorized U.S. military assistance through drawdowns. Israel was authorized to receive the most military assistance, with nine drawdowns totaling approximately $923 million during the early and mid-1990s. South Vietnam was second with drawdown authority totaling $375 million under two presidential determinations in 1965 and 1966. Cambodia was third with drawdown authority totaling $325 million under presidential determinations in 1974 and 1975. The frequency of presidential determinations has increased since 1990. During fiscal years 1961-89, 20 presidential determinations authorized a total of about $1 billion in drawdowns.
Since 1990, 70 presidential determinations authorized $2.3 billion in drawdowns. As shown in table 2, 58 drawdowns totaling approximately $2.1 billion were authorized under section 506 of the Foreign Assistance Act, which allows the President to authorize assistance for unforeseen military emergencies, counternarcotics, counterterrorism, and disaster relief. Of the remaining 32 drawdowns, 16 drawdowns totaling approximately $1.1 billion were authorized under various foreign operations acts to support activities in the national interest, including efforts to locate servicemen listed as prisoners of war and missing in action in Southeast Asia; 15 drawdowns totaling $141.7 million were authorized specifically for peacekeeping-related operations (section 552 of the Foreign Assistance Act); and 1 drawdown totaling $5 million was authorized under the Iraq Liberation Act of 1998 for training Iraqi opposition organizations. Table 3 illustrates that drawdown authorizations have been used more frequently in the 1990s, as shown in appendix I. It also shows that the military assistance authorized by presidential determinations has more than tripled as a percentage of overall U.S. military assistance, averaging over 4.6 percent a year during fiscal years 1990-2001 compared with 1.3 percent for the previous 29 years (fiscal years 1961-89). At least one drawdown has been authorized every year since fiscal year 1986, with 10 each in fiscal years 1996 and 1999. In 2002, five drawdowns had been authorized through June--primarily for counterterrorism purposes. We analyzed cost and delivery data from DSCA's 1000 System and compared it with similar information provided by the services for the 51 drawdowns authorized during fiscal years 1993-2001. Table 4 illustrates the differences in the reported value of defense articles and services delivered. Overall, the 1000 System reported about $300 million in drawdown transfers while the military services reported $724.2 million. 
Of the 51 drawdowns, DSCA and the military services' data agreed for 16, including 12 with no reported deliveries, and differed by less than $1 million for 12 others. Of the 23 drawdowns with differences greater than $1 million, the military services generally reported significantly higher amounts. We did not attempt to determine the reasons for the differences in reporting. For example, DSCA reported no costs for a drawdown to Israel (93-17) while the Army reported $272 million. However, Army officials noted that they were not certain whether the transfers the Army reported were specifically for the drawdown. DSCA reported costs of $5.8 million for a drawdown to Mexico (97-09) while the services reported $19.5 million. DSCA reported costs of $16.5 million for a drawdown to Jordan (98-19) while the services reported $33 million.

In addition to the above named individual, Allen Fleener, Ronald Hughes, James Strus, and Jason Venner made key contributions to this report. Lynn Cothern, Ernie Jackson, and Reid Lowe provided technical assistance.
Since 1961, the President has had special statutory authority to order the "drawdown" of defense articles--such as aircraft, vehicles, various weapons, and spare parts--and services or military education and training from Department of Defense (DOD) and military service inventories and transfer them to foreign countries or international organizations. Drawdowns give the President the ability to respond to U.S. foreign policy and national security objectives, such as counternarcotics efforts, peacekeeping needs, and unforeseen military and nonmilitary emergencies, by providing military assistance without first seeking additional legislative authority or appropriations from Congress. The Defense Security Cooperation Agency's reports to Congress on the costs and delivery status of drawdowns are inaccurate and incomplete. Two principal problems contribute to the agency's inability to meet the reporting requirements. First, its information system for recording drawdown data is outmoded and difficult to use--service drawdown reports are in different formats, and any conversion errors have to be manually corrected. Second, the services do not regularly provide updates to the agency on drawdown costs and deliveries, and available information sometimes does not get into the system. Drawdowns benefit the United States and foreign recipients primarily by providing the President the flexibility to address foreign policy and national security objectives quickly. Drawdowns also allow the President to provide defense articles and services to improve foreign recipients' capability to conduct military and police missions in support of U.S. foreign policy. Other benefits cited include improved military-to-military relations between the U.S. military services and the foreign recipients and expanded markets for U.S. defense firms. According to U.S. and foreign military officials, the use of drawdowns presents some concerns. Because drawdowns are used to quickly address U.S. 
national interests and emergencies, the costs associated with a drawdown, such as refurbishment and transportation, are not budgeted for by the services and are not reimbursed.
Transportation programs, like other federal programs, need to be viewed in the context of the nation's fiscal position. Long-term fiscal simulations by GAO, the Congressional Budget Office, and others all show that despite a 3-year decline in the federal government's unified budget deficit, we still face large and growing structural deficits driven by rising health care costs and demographic trends. As the baby boom generation retires, entitlement programs will grow and require increasing shares of federal spending. Absent significant changes to tax and spending programs and policies, we face a future of unsustainable deficits and debt that threaten to cripple our economy and quality of life.

This looming fiscal crisis requires a fundamental reexamination of all government programs and commitments. Although the long-term outlook is driven by rising health care costs, all areas of government should be reexamined. This involves reviewing government programs and commitments and testing their continued relevance and relative priority for the 21st century. Such a reexamination offers an opportunity to address emerging needs by eliminating outdated or ineffective programs, more sharply defining the federal role in relation to state and local roles, and modernizing those programs and policies that remain relevant. We are currently working with Congress to develop a variety of tools to help carry out a reexamination of federal programs.

The nation's surface transportation programs are particularly ripe for reexamination. This would include asking whether existing program constructs and financing mechanisms are relevant to the challenges of the 21st century, and making tough choices in setting priorities and linking resources to results. We have previously reported on the following factors that highlight the need for transformation of the nation's transportation policy.

Future demand for transportation will strain the network. Projected population growth, technological changes, and increased globalization are expected to increase the strain on the nation's transportation system. Congestion across modes is significant and projected to worsen.

National transportation goals and priorities are difficult to discern. Federal transportation statutes and regulations establish multiple, and sometimes conflicting, goals and outcomes for federal programs. In addition, federal transportation funding is generally not linked to system performance or to the accomplishment of goals or outcomes. Furthermore, the transportation program, like many other federal programs, is subject to congressional directives, which could impede the selection of merit-based projects.

The federal government's role is often indirect. The Department of Transportation (DOT) implements national transportation policy and administers most federal transportation programs. While DOT carries out some activities directly, it does not have control over the vast majority of the activities it funds. Additionally, DOT's framework of separate modal administrations makes it difficult for intermodal projects to be integrated into the transportation network.

Future transportation funding is uncertain. Revenues to support the Highway Trust Fund--the major source of federal highway and transit funding--are eroding. Receipts for the Highway Trust Fund, which are derived from motor fuel and truck-related taxes (e.g., truck sales), are continuing to grow. However, the federal motor fuel tax of 18.4 cents per gallon has not been increased since 1993, and thus the purchasing power of fuel tax revenues has eroded with inflation. Furthermore, that erosion will continue with the introduction of more fuel-efficient vehicles and alternative-fueled vehicles in the coming years, raising the question of whether fuel taxes are a sustainable source of financing transportation.
In addition, funding authorized in the recently enacted highway and transit program legislation is expected to outstrip the growth in trust fund receipts. Finally, the nation's long-term fiscal challenges constrain decision makers' ability to use other revenue sources for transportation needs. Recognizing many of these challenges and the importance of the transportation system to the nation, Congress established The National Surface Transportation Policy and Revenue Study Commission (Commission) in the Safe, Accountable, Flexible, Efficient Transportation Equity Act--A Legacy for Users (SAFETEA-LU). The mission of the Commission was, among other things, to examine the condition and future needs of the nation's surface transportation system and short- and long-term alternatives to replace or supplement the fuel tax as the principal revenue source to support the Highway Trust Fund. In January 2008, the Commission released a report with numerous recommendations to place the trust fund on a sustainable path and to reform the current structure of the nation's surface transportation programs. Congress also created the National Surface Transportation Infrastructure Financing Commission in SAFETEA-LU and charged it with analyzing future highway and transit needs and the finances of the Highway Trust Fund and recommending alternative approaches to financing transportation infrastructure. This Commission issued its interim report this past week, and its final report is expected by spring of 2009. In addition, various transportation industry associations and research groups have issued, or plan to issue in the coming months, proposals for restructuring and financing the surface transportation program. Through our prior analyses of existing programs, we identified a number of principles that could help drive an assessment of proposals for restructuring the federal surface transportation programs.
These principles include (1) defining the federal role based on identified areas of national interest, (2) incorporating performance and accountability for results into funding decisions, and (3) ensuring fiscal sustainability and employing the best tools and approaches to improve results and return on investment. Our previous work has shown that identifying areas of national interest is an important first step in any proposal to restructure the surface transportation program. In identifying areas of national interest, proposals should consider existing 21st century challenges and how future trends could have an impact on emerging areas of national importance--as well as how the national interest and federal role may vary by area. For example, experts have suggested that federal transportation policy should recognize emerging national and global imperatives, such as reducing the nation's dependence on foreign fuel sources and minimizing the impact of the transportation system on global climate change. Once the various national interests in surface transportation have been identified, proposals should also clarify specific goals for federal involvement in the surface transportation program as well as define the federal role in working toward each goal. Goals should be specific and outcome-based to ensure that resources are targeted to projects that further the national interest. The federal role should be defined in relation to the roles of state and local governments, regional entities, and the private sector. Where the national interest is greatest, the federal government may play a more direct role in setting priorities and allocating resources as well as fund a higher share of program costs. Conversely, where the national interest is less evident, state and local governments, and others could assume more responsibility. 
For example, efforts to reduce transportation's impact on greenhouse gas emissions may warrant a greater federal role than other initiatives, such as reducing urban congestion, since the impacts of greenhouse gas emissions are widely dispersed, whereas the impacts of urban congestion may be more localized. The following illustrative questions can be used to determine the extent to which proposals to restructure the surface transportation program define the federal role in relation to identified areas of national interest and goals.

To what extent are areas of national interest clearly defined?
To what extent are areas of national interest reflective of future trends?
To what extent are goals defined in relation to identified areas of national interest?
To what extent is the federal role directly linked to defined areas of national interest and goals?
To what extent is the federal role defined in relation to the roles of state and local governments, regional entities, and the private sector?
To what extent does the proposal consider how the transportation system is linked to other sectors and national policies, such as environmental, security, and energy policies?

Our previous work has shown that an increased focus on performance and accountability for results could help the federal government target resources to programs that best achieve intended outcomes and national transportation priorities. Tracking specific outcomes that are clearly linked to program goals could provide a strong foundation for holding grant recipients responsible for achieving federal objectives and measuring overall program performance. In particular, substituting specific performance measures for the current federal procedural requirements could help make the program more outcome-oriented.
For example, if reducing congestion were an established federal goal, outcome measures for congestion, such as reduced travel time, could be incorporated into the programs to hold state and local governments responsible for meeting specific performance targets. Furthermore, directly linking the allocation of resources to the program outcomes would increase the focus on performance and accountability for results. Incorporating incentives or penalty provisions into grants can further hold grantees and recipients accountable for achieving results. The following illustrative questions can be used to determine the extent to which proposals to restructure the surface transportation program incorporate performance and accountability mechanisms.

Are national performance goals identified and discussed in relation to state, regional, and local performance goals?
To what extent are performance measures outcome-based?
To what extent is funding linked to performance?
To what extent does the proposal include provisions for holding stakeholders accountable for achieving results?
To what extent does the proposal create data collection streams and other tools as well as a capacity for monitoring and evaluating performance?

We have previously reported that the effectiveness of any overall federal program design can be increased by incorporating strategies to ensure fiscal sustainability as well as by promoting and facilitating the use of the best tools and approaches to improve results and return on investment. Importantly, given the projected growth in federal deficits, constrained state and local budgets, and looming Social Security and Medicare spending commitments, the resources available for discretionary programs will be more limited--making it imperative to maximize the national public benefits of any federal investment through a rigorous examination of the use of such funds.
The federal role in transportation funding must be reexamined to ensure that it is sustainable in this new fiscal reality. A sustainable surface transportation program will require targeted investment, with adequate return on investment, from not only the federal government, but also state and local governments, and the private sector. The user-pay concept--that is, users paying directly for the infrastructure they use--is a long-standing aspect of transportation policy and should, to the extent feasible and appropriate, remain an essential tenet as the nation moves toward the development of a fiscally sustainable transportation program. For example, a panel of experts recently convened by GAO agreed that regardless of funding mechanisms pursued, investments need to seek to align fees and taxes with use and benefits. A number of specific tools and approaches can be used to improve results and return on investment, including using economic analysis, such as benefit-cost analysis, in project selection; requiring grantees to conduct post-project evaluations; creating incentives to better utilize existing infrastructure; providing states and localities greater flexibility to use certain tools, such as tolling and congestion pricing; and requiring maintenance of effort provisions in grants. The suitability of the tool and approach used varies depending on the level of federal involvement or control that policymakers desire for a given area of policy. Using these tools and approaches could help surface transportation programs more directly address national transportation priorities and become more fiscally sustainable. The following illustrative questions can be used to determine the extent to which proposals to restructure the surface transportation program ensure fiscal sustainability and employ the best tools and approaches to improve results and return on investment.
To what extent do the proposals reexamine current and future spending on surface transportation programs?
Are the recommendations affordable and financially stable over the long term?
To what extent are the recommendations placed in the context of federal deficits, constrained budgets, and other spending commitments, and to what extent do they meet a rigorous examination of the use of federal funds?
To what extent do the proposals discuss how costs and revenues will be shared among federal, state, local, and private stakeholders?
To what extent are recommendations considered in the context of trends that could affect the transportation system in the future, such as population growth, increased fuel efficiency, and increased freight traffic?
To what extent do the proposals build in capacity to address changing national interests?
To what extent do the proposals address the need to better align fees and taxes with use and benefits?
To what extent are efficiency and equity tradeoffs considered?
To what extent do the proposals provide flexibility and incentives for states and local governments to choose the most appropriate tool in the toolbox?

The Commission makes a number of recommendations designed to restructure the federal surface transportation program so that it meets the needs of the nation in the 21st century. The recommendations include significantly increasing the level of investment by all levels of government in surface transportation, consolidating and reorganizing the current programs, speeding project delivery, and making the current programs more performance- and outcome-based and mode-neutral, among other things. We are currently analyzing the Commission's recommendations using the principles that we have developed for evaluating proposals to restructure the surface transportation program. Although our analysis is not complete, our preliminary results indicate that some of the Commission's recommendations address issues included in the principles.
For example, to make the surface transportation program more performance-based, the Commission recommends the development of outcome-based performance standards for various programs. Other recommendations, however, appear to be aligned less clearly with the principles. In its report, the Commission identifies eight areas of national interest and recommends organizational restructuring of DOT to eliminate modal stovepipes. In particular, the report notes that the national interest in transportation is best served when (1) facilities are well maintained, (2) mobility within and between metropolitan areas is reliable, (3) transportation systems are appropriately priced, (4) modes are rebalanced and travel options are plentiful, (5) freight movement is explicitly valued, (6) safety is assured, (7) transportation decisions and resource impacts are integrated, and (8) rational regulatory policy prevails. We and others have also identified some of these and other issues as possible areas of national interest for the surface transportation program. For example, at a recent forum on transportation policy convened by the Comptroller General, experts identified enhancing the mobility of people and goods, maintaining global competitiveness, improving transportation safety, minimizing adverse environmental impacts of the transportation system, and facilitating transportation security as the most important transportation policy goals. The Commission report also recommends restructuring DOT to consolidate the current programs and to eliminate modal stovepipes. We have also identified the importance of breaking down modal stovepipes. Specifically, we have reported that the modal structure of DOT and state and local transportation agencies can inhibit the consideration of a range of transportation options and impede coordination among the modes. 
Furthermore, in the forum on transportation policy, experts told us that the current federal structure, with its modal administrations and stovepiped programs and funding, frequently inhibits consideration of a range of transportation options at both the regional and national levels. Some of the Commission's recommendations related to the national interest and the federal role also raise questions for consideration. Although consolidating and reorganizing the existing surface transportation programs, as the Commission recommends, could help eliminate modal stovepipes, it is not clear to what extent eliminating any of the existing programs was considered. Given the federal government's fiscal outlook, we have reported that we cannot accept all of the federal government's existing programs, policies, and activities as "givens." Rather, we have stated that we need to rethink existing programs, policies, and activities by reviewing their results relative to the national interests and by testing their continued relevance and relative priority. It is not clear from the Commission's report that such a "zero-based" review of the current and proposed surface transportation programs took place. The Commission also recommends an 80/20 cost sharing arrangement for transportation projects under most programs--that is, the federal government would fund 80 percent of the project costs and the grantee (e.g., state government) would fund 20 percent. In addition, the Commission recommends that the federal government should pay 40 percent of national infrastructure capital costs. These proposed cost share arrangements suggest that the recommended level and share of federal funding reflects the benefits the nation receives from investment in the project--that is, the national interest. However, the report offers no evidence that this is the case. 
Rather, the proposed cost share arrangements appear to reflect the historical funding levels of many surface transportation programs without considering whether this level of funding reflects the national interest or should vary by program or project. For example, the Commission recommends that the federal government pay for 80 percent of the proposed intercity passenger rail system. However, we have found that the nation's intercity passenger rail system appears to provide limited public benefits for the level of federal expenditures required to operate it, raising questions as to whether an 80 percent federal share is justified. The Commission proposes to make the surface transportation program performance- and outcome-based, and its recommendations include several performance and accountability mechanisms. In particular, the Commission recommends the development of national outcome-based performance standards for the different federal programs. The Commission recommends that states and major metropolitan areas also be required to include performance measures in their own transportation plans, along with time frames for meeting national performance standards. To receive federal funding, projects must be listed in state and local plans, be shown to be cost-beneficial, and be linked to specific performance targets. In addition, the Commission recognizes the importance of data in measuring the effectiveness of transportation programs and overall project performance and recommends that an important goal of the proposed research, development, and technology program be to improve the nation's ability to measure project performance data. Although the Commission emphasizes the need for a performance- and outcome-based program, it is unclear to what extent some of the Commission's recommendations are aligned with such principles. 
For example, the Commission recommends that overall federal funding be apportioned to states based on state and local transportation plans, rather than directly linking the distribution of funds to state and local governments' performance in meeting identified national transportation goals. In addition, although the Commission recognizes the importance of data in evaluating the effectiveness of projects, the Commission does not recommend the use of post-project, or outcome, evaluations. Our previous work has shown that post-project evaluations provide an opportunity to learn from the successes and shortcomings of past projects to better inform future planning and decision making and increase accountability for results. The Commission recommends a range of financing mechanisms and tools as necessary components of a fiscally sustainable transportation program. These mechanisms include an increase in the federal fuel tax, investment tax credits, and the introduction of new fees, such as a new fee on freight and a new transit ticket tax. Experts at our forum on transportation policy also advocated the use of various financing mechanisms, including many of the mechanisms recommended by the Commission, arguing that there is no "silver bullet" for the current and future funding crisis facing the nation's transportation system. The Commission also recognizes that states will need to use other tools to generate revenues for their share of the recommended increase in investment and to manage congestion. Therefore, the Commission supports fewer federal restrictions on tolling and congestion pricing on the interstate highway system and recommends that Congress encourage the use of public-private partnerships where appropriate. In addition, the Commission recognizes the growing consensus that, with more fuel-efficient and alternative-fuel vehicles, an alternative to the fuel tax will be required in the next 15 to 20 years.
To facilitate a transition to new revenue sources, the Commission recommends that Congress require a study of specific mechanisms, such as mileage-based user fees. It is unclear, however, whether some of the Commission's recommendations are fiscally sustainable--both over the short and the long term--and encourage the use of the best tools and approaches. For example, the Commission recommends a substantial investment--specifically, $225 billion per year--in the surface transportation program by all stakeholders. However, the level of investment called for by the Commission reflects the most expensive "needs" scenario examined by the Commission, raising questions about whether this level of investment is warranted and whether federal, state, and local governments can generate their share of the investment in light of competing priorities and fiscal constraints. In addition, while much of the increased investment in the surface transportation program would come from increased fuel taxes and other user fees, some funding would come from general revenues. Such recommendations need to be considered in the context of the overall fiscal condition of the federal government. Finally, while the Commission recommends enhanced opportunities for states to implement alternative tools such as tolling, congestion pricing, and public-private partnerships, it also recommends that Congress place a number of restrictions on the use of these mechanisms, such as requirements that states cap toll rates (at the level of the CPI minus a productivity adjustment), prohibit the use of revenues for non-transportation purposes, avoid toll rates that discriminate against certain users, and fully consider the effect tolling might have on diverting traffic to other facilities. These potential federal restrictions must be carefully crafted to avoid undermining the potential benefits.
In conclusion, the magnitude of the nation's transportation challenges calls for an urgent response, including a plan for the future. The Commission's report offers one way forward. Over the coming months, other options to restructure and finance the surface transportation program will likely be put forward by a range of transportation stakeholders. Ultimately, Congress and other federal policymakers will have to determine which option--or which combination of options--best meets the needs of the nation. There is no silver bullet solution to the nation's transportation challenges and many of the options, such as reorganizing a large federal agency or allowing greater private sector investment in the nation's infrastructure, could be politically difficult to implement both nationally and locally. The principles that we identified provide a framework for evaluation. Although the principles do not prescribe a specific approach to restructuring, they do provide key attributes that will help ensure that a restructured surface transportation program addresses current challenges. We will continue to assist the Congress as it works to evaluate the various options and develop a national transportation policy for the 21st century that will improve the design of transportation programs, the delivery of services, and accountability for results. Madam Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee might have. For further information on this statement, please contact JayEtta Z. Hecker at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony were Elizabeth Argeris, Nikki Clowers, Barbara Lancaster, Matthew LaTour, Nancy Lueke, and Katherine Siggerud.

Long-Term Fiscal Outlook: Action Is Needed to Avoid the Possibility of a Serious Economic Disruption in the Future. GAO-08-411T. Washington, D.C.: January 29, 2008.
Freight Transportation: National Policy and Strategies Can Help Improve Freight Mobility. GAO-08-287. Washington, D.C.: January 7, 2008.
A Call For Stewardship: Enhancing the Federal Government's Ability to Address Key Fiscal and Other 21st Century Challenges. GAO-08-93SP. Washington, D.C.: December 2007.
Highlights of a Forum: Transforming Transportation Policy for the 21st Century. GAO-07-1210SP. Washington, D.C.: September 2007.
Public Transportation: Future Demand Is Likely for New Starts and Small Starts Programs, but Improvements Needed to the Small Starts Application Process. GAO-07-917. Washington, D.C.: July 27, 2007.
Surface Transportation: Strategies Are Available for Making Existing Road Infrastructure Perform Better. GAO-07-920. Washington, D.C.: July 26, 2007.
Highway and Transit Investments: Flexible Funding Supports State and Local Transportation Priorities and Multimodal Planning. GAO-07-772. Washington, D.C.: July 26, 2007.
Railroad Bridges and Tunnels: Federal Role in Providing Safety Oversight and Freight Infrastructure Investment Could Be Better Targeted. GAO-07-770. Washington, D.C.: August 6, 2007.
Intermodal Transportation: DOT Could Take Further Actions to Address Intermodal Barriers. GAO-07-718. Washington, D.C.: June 20, 2007.
Performance and Accountability: Transportation Challenges Facing Congress and the Department of Transportation. GAO-07-545T. Washington, D.C.: March 6, 2007.
High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007.
Fiscal Stewardship: A Critical Challenge Facing Our Nation. GAO-07-362SP. Washington, D.C.: January 2007.
Intercity Passenger Rail: National Policy and Strategies Needed to Maximize Public Benefits from Federal Expenditures. GAO-07-15. Washington, D.C.: November 13, 2006.
Freight Railroads: Industry Health Has Improved, but Concerns about Competition and Capacity Should be Addressed. GAO-07-94. Washington, D.C.: October 6, 2006.
Highway Finance: States' Expanding Use of Tolling Illustrates Diverse Challenges and Strategies. GAO-06-554. Washington, D.C.: June 28, 2006.
Highway Trust Fund: Overview of Highway Trust Fund Estimates. GAO-06-572T. Washington, D.C.: April 4, 2006.
Highway Congestion: Intelligent Transportation Systems' Promise for Managing Congestion Falls Short, and DOT Could Better Facilitate Their Strategic Use. GAO-05-943. Washington, D.C.: September 14, 2005.
Freight Transportation: Short Sea Shipping Option Shows Importance of Systematic Approach to Public Investment Decisions. GAO-05-768. Washington, D.C.: July 29, 2005.
Highlights of an Expert Panel: The Benefits and Costs of Highway and Transit Investments. GAO-05-423SP. Washington, D.C.: May 6, 2005.
21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 2005.
Highway and Transit Investments: Options for Improving Information on Projects' Benefits and Costs and Increasing Accountability for Results. GAO-05-172. Washington, D.C.: January 24, 2005.
Federal-Aid Highways: Trends, Effect on State Spending, and Options for Future Program Design. GAO-04-802. Washington, D.C.: August 31, 2004.
Surface Transportation: Many Factors Affect Investment Decisions. GAO-04-744. Washington, D.C.: June 30, 2004.
Highways and Transit: Private Sector Sponsorship of and Investment in Major Projects Has Been Limited. GAO-04-419. Washington, D.C.: March 25, 2004.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The nation has reached a critical juncture with its current surface transportation policies and programs. Demand has outpaced the capacity of the system, resulting in increased congestion. In addition, without significant changes in funding mechanisms, revenue sources, or planned spending, the Highway Trust Fund--the major source of federal highway and transit funding--is projected to incur significant deficits in the years ahead. Furthermore, the nation is on a fiscally unsustainable path. Recognizing many of these challenges and the importance of the transportation system to the nation, Congress established the National Surface Transportation Policy and Revenue Study Commission (Commission) to examine current and future needs of the system and recommend needed changes to the surface transportation program, among other things. The Commission issued its report in January 2008. This testimony discusses (1) principles to assess proposals for restructuring the surface transportation program and (2) GAO's preliminary observations on the Commission's recommendations. This statement is based on GAO's ongoing work for the Ranking Member of this Committee, the Chairman of the House Transportation and Infrastructure Committee, and Senator DeMint, as well as a body of work GAO has completed over the past several years for Congress. GAO has called for a fundamental reexamination of the nation's surface transportation program because, among other things, the current goals are unclear, the funding outlook for the program is uncertain, and the efficiency of the system is declining. A sound basis for reexamination can productively begin with identification of and debate on underlying principles. Through prior analyses of existing programs, GAO identified a number of principles that could help drive an assessment of proposals for restructuring the federal surface transportation program.
These principles include (1) defining the federal role based on identified areas of national interest, (2) incorporating performance and accountability for results into funding decisions, and (3) ensuring fiscal sustainability and employing the best tools and approaches to improve results and return on investment. GAO developed these principles based on prior analyses of existing surface transportation programs as well as a body of work that GAO developed for Congress, including its High-Risk, Performance and Accountability, and 21st Century Challenges reports. The principles do not prescribe a specific approach to restructuring, but they do highlight key attributes that will help ensure that a restructured surface transportation program addresses current challenges. In its report, the Commission makes a number of recommendations for restructuring the federal surface transportation program. The recommendations include significantly increasing the level of investment by all levels of government in surface transportation, consolidating and reorganizing the current programs, speeding project delivery, and making the current program more performance- and outcome-based and mode-neutral, among other things. GAO is currently analyzing the Commission's recommendations using the principles that GAO developed for evaluating proposals for restructuring the surface transportation program. Although this analysis is not complete, GAO's preliminary results indicate that some of the Commission's recommendations appear to be aligned with the principles, while others may not be aligned. For example, although the Commission identifies areas of national interest and recommends reorganizing the individual surface transportation programs around these areas, it generally recommends that the federal government pay for 80 percent of project costs without considering whether this level of funding reflects the national interest or should vary by program or project.
Before presenting additional preliminary results, I would like to provide some information on our scope and methodology. Specifically, we are interviewing key OWCP and Postal Service officials in Washington, D.C., to discuss and collect pertinent information regarding the employees' claims for WCP eligibility and for compensation for lost wages and schedule awards. Additionally, we collected and reviewed a total of 483 Postal Service employee WCP case files located at the 12 OWCP district offices throughout the country. For the 12-month period beginning July 1, 1997, we randomly selected the claims and obtained case file records for injuries that occurred or were recognized as job-related during this period on the basis of the type of injury involved: traumatic or occupational; and on the basis of their approval or nonapproval for WCP benefits and compensation or schedule award payments. We chose this period of time because we believed it was current enough to reflect ongoing operations, yet historical enough for most, if not all, of the claims to have been decided upon. Also, in discussing the preliminary results, we generally present our analyses of claim processing times in terms of the "median" time to process cases covered by our review. This means that 50 percent of the cases were processed in the median time or less, and 50 percent of the cases were processed in more time than the median. We did our work from January to May 2002 in accordance with generally accepted government auditing standards. We have not had enough time to fully analyze all of the data we collected, including analyzing the total percentage of claims processed within specified processing standards, or to fully discuss the data with Postal Service or OWCP officials. Accordingly, we are limiting our discussion to median time intervals between the major steps in the WCP claims process up until the time of the decision on the claim and initial compensation payment. 
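The "median" statistic defined above can be sketched with a few lines of code. This is purely illustrative, assuming hypothetical processing times; it is not GAO's tooling, and the sample durations are invented, not drawn from the case files reviewed.

```python
# Illustrative sketch: the "median" as used in this statement -- half the
# cases were processed in the median time or less, half in more time.
# The function and the sample durations below are hypothetical.

def median(values):
    """Return the median of a list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]  # odd count: middle value
    return (ordered[mid - 1] + ordered[mid]) / 2  # even count: mean of middle two

# Hypothetical claim processing times, in days.
processing_days = [12, 45, 59, 84, 130, 210, 307]

print(median(processing_days))  # 84: half the claims took 84 days or less
```

With seven hypothetical claims, the middle (fourth) value is the median; 50 percent of the claims fall at or below it.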
Among other things, prior to this hearing, we did not have the time to (1) pinpoint and evaluate specific problems that may have affected the time to process the cases we reviewed, (2) address issues OWCP raised on how the claims processing times might be affected by "administrative closures" or schedule awards, or (3) evaluate numerous other factors that may have affected overall claims processing. Our work has not included an analysis of any time involved in the appeal process of any claim we reviewed, nor did we evaluate the appropriateness of OWCP's decisions on approving or denying the claims. More detail about our sampling plan is presented in appendix I. Although OWCP is charged with implementing the WCP, there is a federal partnership between OWCP and the employing federal agencies for administering the WCP. In this partnership, federal agencies, including the Postal Service, provide the avenue through which injured federal employees prepare and submit their notice of injury forms and claims for WCP benefits and services to OWCP. Additionally, employing agencies are responsible for paying normal salary and benefits to those employees who miss work for up to 45 calendar days, during a 1-year period, due to a work-related traumatic injury for which they have applied for WCP benefits. After receiving the claim forms from the employing agencies, OWCP district office claims examiners review the forms and supporting evidence to decide on the claimant's entitlement to WCP benefits or the need for additional information or evidence, determine the benefits and services to be awarded, approve or disapprove payment of benefits and services, and manage and maintain WCP employee case file records. If additional information or other evidence is needed before entitlement to WCP benefits can be determined, OWCP generally corresponds directly with the claimant or the WCP contact at the applicable Postal Service locations. 
OWCP regulations require that evidence needed to determine a claimant's entitlement to WCP benefits meet five requirements. These requirements are as follows:

1. The claim was filed within the time limits specified by law.
2. The injured or deceased person was, at the time of injury or death, an employee of the United States.
3. The injury, disease, or death did, in fact, occur.
4. The injury, disease, or death occurred while the employee was in the performance of duty.
5. The medical condition for which compensation or medical benefits is claimed is causally related to the claimed job-related injury, disease, or death.

Such evidence, among other things, must be reliable and substantial as determined by OWCP claims examiners. If the claimant submits factual evidence, medical evidence, or both, but OWCP determines the evidence is not sufficient to meet the five requirements, OWCP is required to inform the claimant of the additional evidence needed. The claimant then has at least 30 days to submit the evidence requested. Additionally, if the employer--in this case, the Postal Service--has reason to disagree with any aspect of the claimant's report, it can submit a statement to OWCP that specifically describes the factual allegation or argument with which it disagrees and provide evidence or arguments to support its position. According to the files we reviewed, about 99 percent of the Postal Service employees' traumatic injury claims contained evidence related to the five requirements set by OWCP regulations. About 1 percent of the traumatic injury claims were not approved, according to the case files we reviewed, because evidence was not provided for one or more of the requirements. About 97 percent of the claims filed by Postal Service employees for occupational disease claims contained evidence related to the five requirements. The remaining claims, or about 3 percent, did not include all of the required evidence.
Generally, the evidence not provided for both types of claims pertained to either (1) the employee's status as a Postal Service employee or (2) whether the claim was filed within the time limits specified by law. We did not evaluate OWCP's decisions regarding the sufficiency of the information provided. During the period covered by our review, OWCP regulations required an employee who sustained a work-related traumatic injury to give notice of the injury in writing to OWCP using Form CA-1, "Federal Employee's Notice of Traumatic Injury and Claim for Continuation of Pay/Compensation," in order to claim WCP benefits. To claim benefits for a disease or illness that the employee believed to be work-related, he or she was required to give notice of the condition in writing to OWCP using Form CA-2, "Notice of Occupational Disease and Claim for Compensation." Both notices, according to OWCP regulations, should be filed with the Postal Service supervisor within 30 days of the injury or the date the employee realized the disease was job-related. Upon receipt, Postal Service officials were supposed to complete the agency portion of the form and submit it to OWCP within 10 working days if the injury or disease was likely to result in (1) a medical charge against OWCP, (2) disability for work beyond the day or shift of injury, (3) the need for more than two appointments for medical examination or treatment on separate days leading to time lost from work, (4) future disability, (5) permanent impairment, or (6) continuation of pay (COP). OWCP regulations, during the period covered by our review, did not provide time frames for OWCP claims examiners to process these claims. Instead, OWCP's operational plan for this period specified performance standards for processing certain types of WCP cases within certain time frames.
Specifically, the performance standard for processing traumatic injuries specified that a decision should be made within 45 days of its receipt in all but the most complex cases. The performance standards for decisions on occupational disease claims specified that decisions should be made within 6 to 12 months, depending on the complexity of the case. The case files we reviewed indicated that the length of time taken to process a claim--from the date of traumatic injury or the date an occupational disease was recognized as job-related to the date the claimant's entitlement to benefits was determined--varied widely. For example, we estimate that 25 percent of the claims were processed in up to 48 days for traumatic injury and in up to 78 days for occupational disease. We estimate that 90 percent of the claims were processed in up to 307 days for traumatic injury and in up to 579 days for occupational disease. Finally, we estimate that 50 percent of the claims were processed in up to 84 days for traumatic injuries and in up to 136 days for occupational disease. Specifically, Postal Service employee claims for injuries or diseases covered by our review took the median times shown in table 1 to complete. The median elapsed time taken by Postal Service employees and Postal Service supervisors met the applicable time frames set forth in OWCP regulations. As shown in table 1, the median time taken by Postal Service employees to prepare and submit the claim forms needed to make a determination on their entitlement to WCP benefits for traumatic injuries to the Postal Service supervisor was 2 days from the date of the injury, well within the 30-day time frame set by OWCP regulations. For occupational disease, Postal Service employees signed and submitted the notice of disease form to the Postal Service supervisor in a median time of 26 days from the date the disease was recognized as job-related, or 4 days less than the 30-day time frame set by OWCP regulations.
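Estimates of the form "25 percent of the claims were processed in up to X days" are percentile thresholds of the observed durations. The sketch below illustrates the idea with a simple nearest-rank percentile; the function and the sample durations are hypothetical and are not GAO's actual method or data.

```python
# Illustrative sketch (hypothetical data): percentile thresholds like
# "25 percent of claims were processed in up to X days".

def percentile(values, pct):
    """Smallest value v such that at least pct percent of the
    observations are <= v (nearest-rank method)."""
    ordered = sorted(values)
    # ceil(pct/100 * n) as an integer rank, clamped to at least 1
    rank = max(1, -(-pct * len(ordered) // 100))
    return ordered[rank - 1]

# Hypothetical claim durations in days (not actual case-file data).
durations = [20, 35, 48, 60, 84, 110, 150, 220, 307, 400]

print(percentile(durations, 25))  # days within which 25% of claims were decided
print(percentile(durations, 50))  # the median
print(percentile(durations, 90))  # days within which 90% of claims were decided
```

With ten hypothetical observations, the 25th, 50th, and 90th percentiles are simply the 3rd, 5th, and 9th values in sorted order.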
Upon receipt, the Postal Service supervisor then took a median time of 11 calendar days--also within the time limit of 10 working days set forth in the regulations--to complete the form and transmit it to OWCP. Also as shown in table 1, once OWCP received the form from the Postal Service, our preliminary analysis showed that OWCP claims examiners processed these notice of injury forms for traumatic injuries in a median time of 59 days to determine a claimant's entitlement to WCP benefits. As mentioned earlier, the performance standard for these types of cases was 45 days, or 14 days less than the median time taken. According to OWCP officials, the 59-day median processing time inappropriately included the time during which certain types of claims were "administratively closed," then reopened later when a claim for compensation was received. We plan to determine the extent to which these types of claims may have affected the processing times as we complete our review. For occupational disease claims, the data showed that OWCP processed these forms in a median time of 63 days, which was within the 6- to 12-month time frame for simple to complex occupational disease cases specified by OWCP's performance standards. During the period covered by our review, OWCP regulations stated that when an employee was disabled by a work-related injury and lost pay for more than 3 calendar days, or had a permanent impairment, the employer was supposed to furnish the employee with Form CA-7, "Claim for Compensation Due to Traumatic Injury or Occupational Disease." This form was used to claim compensation for periods of disability not covered by COP as well as for schedule awards. The employee was supposed to complete the form upon termination of wage loss if the period of wage loss was less than 10 days, or at the expiration of 10 days from the date pay stopped if the period of wage loss was 10 days or more, and submit it to the employing agency.
Upon receipt of the compensation claim form from the employee, the employer was required to complete the agency portion of the form and as soon as possible, but not more than 5 working days, transmit the form and any accompanying medical reports to OWCP. For the period covered by our review, OWCP regulations did not provide time limits for OWCP claims examiners to process these claims. Instead, OWCP's annual operational plan for the period of our review specified a performance standard for processing wage loss claims. Specifically, the performance standard stated that all payable claims for traumatic injuries--excluding schedule awards--should be processed within 14 days. This time frame was to be measured from the date OWCP received the claim form from the employing agency to the date the payment was entered into the automated compensation payment system. No performance standard was specified for occupational disease compensation claims. The case file data showed that the processing time--from the date the claim for compensation was prepared to the date the first payment was made--varied widely. For example, we estimate that to process 25 percent of the claims, it took up to 28 days for traumatic injuries and up to 32 days for occupational diseases. To process 90 percent of the claims, it took up to 323 days for traumatic injuries and up to 356 days for occupational diseases. To process 50 percent of the claims, it took up to 49 days for the traumatic injuries and up to 56 days for the occupational diseases. Specifically, the median times to process the claims for compensation for the traumatic injury and occupational disease claims covered by our review are shown in table 2. The case files we reviewed did not contain the information that would have enabled us to determine whether the claims for compensation were prepared and filed by the employees within the time frame set forth by OWCP regulations.
However, as shown in table 2, once a claim was prepared, we found that after receipt of a claim for compensation for a traumatic injury, the Postal Service supervisor completed the agency portion of the form and transmitted it to OWCP in a median time of 4 calendar days, which was within the 5 working days required by OWCP regulations. For occupational disease compensation claims, we found that upon receipt of the claim form from the employee, the Postal Service supervisor took a median time of 7 calendar days, which was also within the 5-working-day requirement imposed by OWCP regulations, to transmit the claims to OWCP. Also, as shown in table 2, once OWCP received a traumatic injury compensation claim form, the median time for OWCP claims examiners to process the claim was 23 days, which was longer than the 14 days specified by OWCP's performance standard--excluding schedule awards. However, our data included claims for schedule awards. As mentioned earlier, prior to this hearing we did not have time to evaluate the effect that schedule awards might have had on the median processing time. We plan to do so in our analysis for the final report. For occupational disease claims, our analysis showed that upon receipt, OWCP claims examiners took a median time of 22 days to make the initial payment for the approved claims. OWCP did not specify a performance standard for occupational disease claims. Finally, our preliminary analysis of case file data showed that during the time between the date of injury or recognition of a disease as job-related, injured employees often (1) continued working in a light-duty capacity, (2) received COP while absent from work, or (3) went on paid annual or sick leave until the time they actually missed work and their pay stopped.
In fact, the data showed that the median elapsed time from the date the injury occurred or the disease was recognized as job-related to the beginning date of the compensation period was 98 days for traumatic injuries and 243 days for occupational disease claims. Mr. Chairman, this concludes my prepared statement. I will be pleased to answer any questions you or other Members of the Subcommittee may have. For further information regarding this testimony, please contact Bernard Ungar, Director, or Sherrill Johnson, Assistant Director, Physical Infrastructure Issues, at (202) 512-4232 and (214) 777-5699, respectively. In addition to those named above, Michael Rives, Frederick Lyles, Melvin Horne, John Vocino, Scott Zuchorsky, Maria Edelstein, Lisa Wright-Solomon, Brandon Haller, Jerome Sandau, Jill Sayre, Sidney Schwartz, and Donna Leiss made key contributions to this statement.
In fiscal year 2002, U.S. Postal Service employees accounted for one-third of both the federal civilian workforce and the $2.1 billion in overall costs for the Federal Workers' Compensation Program (WCP). Postal workers submitted half of the claims for new work-related injuries that year. Postal Service employees with job-related traumatic injuries or occupational diseases almost always provided the evidence required to make a determination on their entitlement. In two percent of the cases, the Office of Workers' Compensation Program (OWCP) found that evidence was missing for one or more of the required elements. However, the length of time taken to process claims varied widely even though all were subject to the same OWCP processing standards. OWCP claims examiners took 59 days to process traumatic injury claims after receiving the notice of injury claim forms from the Postal Service--a process that should take 45 days for all but the most complex cases, according to OWCP performance standards. The case files lacked the information necessary to determine whether the claims for compensation were prepared and filed by the employees within the time frame set by OWCP regulations. OWCP claims examiners took 23 days to process traumatic injury compensation claims for wage loss and schedule awards. OWCP's performance standard states that all payable claims should be processed within 14 days from the date of receipt.
ESRD occurs when an individual's kidneys have regressed to less than 10 percent of normal baseline function. Without functioning kidneys, excess wastes and fluids in the body rise to dangerous levels, and certain hormones are no longer produced. Individuals with ESRD must undergo either regular dialysis treatments or receive kidney transplants to survive. As of the end of 2004, of the approximately 480,000 adults with ESRD (those at least 18 years old), just over one-fourth (about 130,000) had functioning kidney transplants and two-thirds (about 330,000) were receiving dialysis treatments. In addition, of the almost 5,700 pediatric individuals with ESRD (those younger than 18 years old), approximately two-thirds (about 3,800) had functioning transplants and less than one-third (about 1,700) were receiving dialysis treatments. A kidney transplant is the preferred method of treatment for individuals with ESRD because it increases an individual's quality of life and decreases long-term mortality rates compared with lifetime dialysis treatments. Studies have reported that pediatric ESRD patients tend to perform better developmentally with transplants than on dialysis. For example, one study reported improvement in neurological development in infants aged 6-11 months following transplantation. Another study showed that transplantation increased the rate at which pediatric ESRD patients improved on measures of intelligence and mathematical skills. Medicare covers over 80 percent of all individuals with ESRD. For these individuals, Medicare covers the cost of lifetime dialysis treatments, or for individuals who receive kidney transplants, the cost of the transplants and 3 years of follow-up care--including immunosuppressive medications needed to sustain the transplants.
Medicare also covers hospital inpatient services and outpatient services, such as physician visits and laboratory tests, as well as medical evaluations provided to living donors and recipients in anticipation of transplants. In addition to Medicare, individuals with ESRD may be covered by other public or private health insurance, such as Medicaid or an employer-sponsored health plan. For individuals who are eligible for Medicare on the basis of ESRD, Medicare is the secondary payer if the individuals have employer-sponsored group health insurance coverage during the first 30 months of Medicare coverage. After the first 30 months, Medicare becomes the primary payer for these beneficiaries until they are no longer entitled to Medicare. For an individual who is eligible for Medicare solely because of ESRD and who has a kidney transplant, Medicare coverage ends on the last day of the 36th month after the individual receives the transplant unless the individual is entitled to Medicare other than because of ESRD. However, after 36 months, a transplant recipient can become eligible for Medicare again after a transplant failure and subsequently receive a retransplant or dialysis. Following termination of Medicare coverage, individuals who are unable to pay for immunosuppressive medications and other transplant-related follow-up care must rely on other public or private health insurance or charity care. Pediatric recipients have several potential sources of coverage when their Medicare coverage ends: private health insurance--generally, a parent's employer-sponsored coverage; Medicaid; the State Children's Health Insurance Program (SCHIP); and charity care. However, once individuals turn 19, they may lose access to their parents' private insurance coverage as well as coverage under SCHIP and Medicaid.
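The 36-month coverage rule described above (coverage ends on the last day of the 36th month after the transplant) can be sketched as a simple date computation. The helper below is hypothetical and purely illustrative of the rule as stated; it is not an official eligibility determination tool.

```python
# Illustrative sketch (hypothetical helper): for an individual eligible for
# Medicare solely because of ESRD, coverage ends on the last day of the
# 36th month after the transplant, per the rule described in the text.
from datetime import date
import calendar

def esrd_coverage_end(transplant_date):
    """Return the last day of the 36th month after the transplant month."""
    # Count months from year 0, then move 36 months forward.
    months = transplant_date.year * 12 + (transplant_date.month - 1) + 36
    year, month_index = divmod(months, 12)
    month = month_index + 1
    last_day = calendar.monthrange(year, month)[1]  # days in that month
    return date(year, month, last_day)

print(esrd_coverage_end(date(2004, 5, 17)))  # 2007-05-31
```

A transplant in May 2004, for example, would carry coverage through May 31, 2007 under this reading of the rule.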
Individuals who receive kidney transplants require immunosuppressive therapy--usually a combination of at least two different immunosuppressive medications--as well as regular laboratory tests to monitor and maintain their transplants. Although the frequency of laboratory tests decreases over time, the need for immunosuppressive medications continues for the life of the transplant. Recipients who do not take their immunosuppressive medications according to the prescribed regimens are more likely to have their transplanted kidneys fail. Studies have shown that medication noncompliance causes 13 to 35 percent of transplants to fail, and one of the studies indicated that it also causes noncompliant recipients to die at rates fourfold greater than compliant recipients. One recent study showed that about 23 percent of recipients with failed transplants who returned to dialysis died within 2 years. Several studies have reported that there are a number of reasons why some transplant recipients do not comply with their medication regimens. More specifically, one study reported that adverse side effects of the medications, difficulty following complex treatment regimens that involve several drugs and varying schedules of dosing, and an inability to pay for medications due to a lack of health insurance coverage, among other reasons, can contribute to medication noncompliance. Other studies have reported that medication noncompliance can be unpredictable, often without an identifiable reason. Studies have also shown that adolescent recipients are especially prone to medication noncompliance or partial compliance. For example, one study showed that for individuals aged 12 to 19 years, dissatisfaction with body image and the physical side effects of medications have been linked to poor compliance with prescribed transplant medication regimens.
Another study found that 57 percent of participating recipients under 20 years old were not compliant with their medication regimens, compared with only 15 percent of participants over 40 years old. Pediatric, transitional, and adult kidney transplant recipients were similar with respect to sex, race, and income level. As of December 31, 2004, all three age groups were predominantly male and white and lived in counties with a median annual household income of $25,000 to less than $50,000. However, the three groups differed in terms of their types of health insurance coverage, with a smaller percentage of pediatric and transitional recipients covered by Medicare compared to their adult counterparts. Based on our analyses of USRDS and ARF data, we found that pediatric, transitional, and adult recipients were similar with respect to sex, race, and income level, as of December 31, 2004. All three age groups were predominantly male, and the proportion of males in each age group was higher than that found in the general U.S. population--49 percent (see table 1). Approximately 59 percent of individuals with ESRD are male. All three age groups were also predominantly white, and the percentage distribution of other races among the three groups was similar (see table 2). Although a higher percentage of transitional recipients were white and a lower percentage were black compared with pediatric and adult recipients, the differences were not substantial. In addition, the distribution of racial groups among pediatric, transitional, and adult transplant recipients was similar to that found in the general U.S. population. Pediatric, transitional, and adult transplant recipients were similar in terms of their household income level (see table 3). Seventy-five percent of recipients in each age group resided in counties with a median annual household income of $25,000 to less than $50,000, which is almost three times the percentage for the general U.S. population (27 percent). 
When compared to the general U.S. population, a very small percentage of recipients in each of the three age groups resided in counties with the lowest and highest median annual household incomes--less than $25,000 or $75,000 or more, respectively. About 27 percent of the U.S. population resided in counties with a median annual household income of less than $25,000 and about 28 percent resided in counties with a median annual household income of $75,000 or more. While pediatric, transitional, and adult transplant recipients were similar in terms of sex, race, and income, they were less similar in terms of their health insurance coverage. As of December 31, 2004, while more than two-thirds of adult recipients had coverage under Medicare, just over one-third of pediatric recipients and slightly less than half of transitional recipients were covered under Medicare (see table 4). Although each group had about the same percentage of recipients with both Medicare and Medicaid coverage, almost three times as many adult recipients had Medicare but not Medicaid coverage compared with pediatric recipients, and almost twice as many adult recipients had Medicare but not Medicaid coverage compared with transitional recipients. Based on our analysis of USRDS data, a larger percentage of pediatric and transitional recipients had Medicare coverage at the time of their transplants--67 percent and 81 percent, respectively--although these percentages were still smaller than the 87 percent of adult recipients with such coverage. It is not known why these differences in Medicare coverage existed, given that most individuals who have ESRD are eligible for Medicare coverage. Our analysis of data from the USRDS shows that after the first year posttransplant, a higher percentage of transitional recipients experienced a transplant failure compared with their pediatric and adult counterparts. 
In addition, the largest increase in transplant failure among the three age groups occurred in the first 3 years posttransplant--before termination of Medicare coverage--and the increase was substantially higher for transitional recipients than for pediatric and adult recipients. After experiencing a transplant failure, a higher percentage of transitional recipients received dialysis, a higher percentage of pediatric recipients received retransplants after the first year posttransplant, and a higher percentage of adult recipients died. Based on our analysis of USRDS data, we found that after the first year posttransplant, a higher percentage of transitional recipients experienced a transplant failure when compared with their pediatric and adult counterparts (see fig. 1). For example, we found that by 5 years posttransplant, the percentage of transitional recipients who experienced a transplant failure (33 percent) was about twice as high as the percentage of pediatric recipients (16 percent) and somewhat higher than adult recipients (28 percent). According to several representatives of pediatric kidney transplant centers that we interviewed, adolescent kidney transplant recipients--who generally populate our transitional age group--are less likely than other age groups to comply with their medication regimens, which, among other things, can lead to transplant failure. The largest increase in the percentage of transitional recipients who experienced a transplant failure occurred in the first 3 years posttransplant, and this increase was substantially higher than the increase for pediatric and adult recipients. Specifically, the percentage of failures for transitional recipients increased by 133 percent between 1 and 3 years posttransplant, while the percentage increases for pediatric and adult recipients were 83 and 100 percent, respectively. 
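The percentage-increase figures cited here follow the standard formula for relative change between two cumulative failure rates. A minimal sketch, using hypothetical cumulative failure percentages chosen only so the result reproduces the 133 percent increase reported for transitional recipients:

```python
# Minimal sketch of the percentage-increase arithmetic used in this discussion.
# The two cumulative failure percentages below are hypothetical illustrative
# values, chosen only so the result matches the reported 133 percent increase
# for transitional recipients between 1 and 3 years posttransplant.

def pct_increase(earlier: float, later: float) -> float:
    """Relative increase, in percent, from an earlier rate to a later one."""
    return (later - earlier) / earlier * 100

failure_at_1yr = 9.0   # hypothetical cumulative failure %, 1 year posttransplant
failure_at_3yr = 21.0  # hypothetical cumulative failure %, 3 years posttransplant

print(f"{pct_increase(failure_at_1yr, failure_at_3yr):.0f}%")  # 133%
```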
After 3 years posttransplant, all three age groups showed a smaller increase in transplant failures when compared with the period between 1 and 3 years posttransplant. Between 3 and 5 years posttransplant, the percentage increase in transplant failures was 45 percent for pediatric, 57 percent for transitional, and 56 percent for adult recipients. During the 5 to 7 years posttransplant period, the percentage increases--63 percent, 33 percent, and 43 percent for pediatric, transitional, and adult recipients, respectively--also remained below those of the first 3 years. The absence of a large percentage increase in transplant failures among pediatric and transitional recipients beyond 3 years posttransplant, when Medicare coverage terminates for many recipients, may be explained by the practices of transplant centers. Representatives from pediatric kidney transplant centers with whom we spoke stated that once Medicare coverage ends, they either help recipients to acquire other health insurance coverage or provide them with free or reduced-cost immunosuppressive medications if they lack health insurance coverage or otherwise cannot afford the medications. They also stated that the percentage of recipients who experience transplant failures because of an inability to pay for their medications after Medicare coverage ends (3 years posttransplant) is low. Based on our analysis of USRDS data, we found that after experiencing transplant failures, a higher percentage of transitional recipients received dialysis, a higher percentage of pediatric recipients received retransplants after the first year posttransplant, and a higher percentage of adult recipients died (see figs. 2, 3, and 4). By 7 years posttransplant, the percentage of transitional recipients who received dialysis after experiencing a transplant failure was nearly 30 percent higher than that of pediatric recipients and nearly 60 percent higher than that of adult recipients. 
In addition, at 7 years posttransplant, the percentage of pediatric recipients who received retransplants after experiencing a transplant failure was over 25 percent higher than that of transitional recipients and more than twice the percentage of adults who received retransplants. The percentage of adults who died following a transplant failure was about twice as high as the percentage of pediatric recipients and about three times as high as the percentage of transitional recipients. Based on our analysis of USRDS data, we found that recipients who had both Medicare and Medicaid coverage experienced a higher percentage of transplant failures compared with those who had Medicare but not Medicaid coverage or were in the Other category (see fig. 5). By 7 years posttransplant, the percentage of recipients covered by both Medicare and Medicaid who experienced a transplant failure was slightly higher (24 percent) than that of recipients covered by Medicare but not Medicaid and was more than three times as high as the percentage of recipients in the Other category. After experiencing a transplant failure, a higher percentage of recipients who had both Medicare and Medicaid coverage received dialysis when compared with recipients who had Medicare but not Medicaid coverage or were in the Other category (see fig. 6). For example, by 7 years posttransplant, the percentage of recipients covered by both Medicare and Medicaid who received dialysis after experiencing a transplant failure was about 70 percent higher than that of recipients in the Other category. After the first year posttransplant, the percentage of recipients covered by both Medicare and Medicaid who received dialysis after a transplant failure was substantially higher than the percentage for recipients in the Other category. Based on our analysis of USRDS data, we found that Medicare beneficiaries with functioning transplants cost substantially less per year to treat than those beneficiaries who experienced transplant failures. 
Specifically, we found that overall, the median annual Medicare cost for a beneficiary with a functioning transplant was $8,550, compared with a median annual Medicare cost of $50,938 for a beneficiary after a transplant failure--a difference of 500 percent. For pediatric beneficiaries, the percentage difference was even higher--the median annual Medicare cost after a transplant failure was 750 percent higher than for a functioning transplant (see table 5). The differences for transitional and adult beneficiaries were 550 percent and 500 percent, respectively. The substantial cost of treating transplant recipients who experience transplant failures underscores the importance of maintaining functioning kidney transplants. While there are many reasons that could account for transplant failures during the first 3 years posttransplant--including medication noncompliance--the large percentage increase in transplant failures from 1 year to 3 years posttransplant for transitional recipients cannot be attributed to an inability to access immunosuppressive medications due to a lack of Medicare coverage. In commenting on a draft of this report, CMS stated that it appreciated our interest in kidney transplant patients and in the cost of care provided to those receiving transplants or dialysis. CMS stated that it was concerned about the quality of care and the outcomes experienced by Medicare beneficiaries, including the higher rate of transplant failure among transitional patients. CMS also stated that educating beneficiaries with kidney failure is critical to improving beneficiaries' ability to actively participate in and make informed decisions about their care. As a result, the agency engages in numerous educational and outreach efforts targeted to beneficiaries, providers, and national organizations that represent renal patients. CMS's comments are reprinted in appendix I. 
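The 500 percent figure cited above is a relative difference measured against the functioning-transplant cost. A back-of-the-envelope check, using the two median figures from the text (the helper function itself is illustrative):

```python
# Back-of-the-envelope check of the cost comparison above. The two median
# annual Medicare cost figures come from the text; the helper is illustrative.

def pct_higher(failed_cost: float, functioning_cost: float) -> float:
    """How much higher failed_cost is than functioning_cost, in percent."""
    return (failed_cost - functioning_cost) / functioning_cost * 100

functioning = 8_550  # median annual cost, functioning transplant
failed = 50_938      # median annual cost, after transplant failure

print(f"{pct_higher(failed, functioning):.0f}%")  # 496%, reported as 500 percent
```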
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to the Secretary of HHS and to other interested parties. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. We will also make copies available to others upon request. If you or your staff have any questions about this report, please call me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Nancy Edwards, Assistant Director; Kelly DeMots; Krister Friday; Joanna Hiatt; Xiaoyi Huang; Martha Kelly; and Ba Lin made key contributions to this report.
For individuals with end-stage renal disease (ESRD), the permanent loss of kidney function, Medicare covers kidney transplants and 36 months of follow-up care. Kidney transplant recipients must take costly medications to avoid transplant failure. Unless a transplant recipient is eligible for Medicare other than on the basis of ESRD, Medicare coverage, including that for medications, ends 36 months posttransplant. Pediatric transplant recipients, including those who were under 18 when transplanted but are now adults (transitional recipients), may be more likely than their adult counterparts to lose access to medications once Medicare coverage ends because they may lack access to other health insurance coverage. GAO was asked to examine (1) the percentage of transplant failures and subsequent outcomes--retransplant, dialysis, or death--among pediatric, transitional, and adult kidney transplant recipients and (2) how the cost to Medicare for a beneficiary with a functioning transplant compares with the cost for a beneficiary with a transplant failure. To do this, GAO analyzed 1997 through 2004 data from the United States Renal Data System (USRDS) and interviewed officials from pediatric transplant centers. The Centers for Medicare & Medicaid Services--the agency that administers Medicare--commented that it is concerned about beneficiary outcomes and has an education program to help them. The percentage of kidney transplant recipients who experience a transplant failure varies by age group as do the percentages who experience dialysis, retransplant, or death. After the first year posttransplant, a higher percentage of transitional recipients (those younger than 18 at the time of their transplants and at least 18 as of December 31, 2004) experienced a transplant failure and subsequently received dialysis compared with their pediatric (those younger than 18 as of December 31, 2004) and adult (those at least 18 at the time of their transplants) counterparts. 
By 5 years posttransplant, the percentage of transitional recipients who experienced a transplant failure (33 percent) was about twice as high as pediatric recipients (16 percent) and somewhat higher than adult recipients (28 percent). The largest increase in transplant failures for each age group occurred in the first 3 years posttransplant--before the termination of Medicare coverage on the basis of ESRD--and the increase was substantially higher for transitional recipients (133 percent) than for pediatric (83 percent) and adult (100 percent) recipients. Medicare beneficiaries with functioning transplants cost substantially less per year to treat than those who experienced a transplant failure. GAO found that the median annual Medicare cost for a beneficiary whose transplant failed ($50,938) was 500 percent more than the median annual Medicare cost for a beneficiary with a functioning transplant ($8,550). This percentage difference was consistent across transplant recipient age groups. The substantial cost of treating transplant recipients who experience a transplant failure underscores the importance of maintaining functioning kidney transplants. While there are many reasons that could account for transplant failures, the large percentage increase in transplant failures from 1 year to 3 years posttransplant for transitional recipients cannot be attributed to an inability to access medications due to a lack of Medicare coverage.
See GAO-09-399. TSA is responsible for securing the nation's civil aviation system, which includes the screening of passengers and property transported by commercial passenger aircraft. At the 463 TSA-regulated airports in the United States, prior to boarding an aircraft, all passengers, their accessible property, and their checked baggage are screened pursuant to TSA-established procedures, which include passengers passing through security checkpoints where they and their identification documents are checked by transportation security officers (TSO) and other TSA employees or by private sector screeners under TSA's Screening Partnership Program. Airport operators, however, are directly responsible for implementing TSA security requirements, such as those relating to perimeter security and access controls, in accordance with their approved security programs and other TSA direction. TSA relies upon multiple layers of security to deter, detect, and disrupt persons posing a potential risk to aviation security. These layers include behavior detection officers (BDO), who examine passenger behaviors and appearances to identify passengers who might pose a potential security risk at TSA-regulated airports. TSA has selectively deployed about 3,000 BDOs to 161 of 463 TSA-regulated airports in the United States, including Boston-Logan airport, where the program was initially deployed in 2003. Other security layers include travel document checkers, who examine tickets, passports, and other forms of identification; TSOs responsible for screening passengers and their carry-on baggage at passenger checkpoints, using x-ray equipment, magnetometers, Advanced Imaging Technology, and other devices; random employee screening; and checked baggage screening systems. Additional layers cited by TSA include, among others, intelligence gathering and analysis; passenger prescreening against terrorist watchlists; random canine team searches at airports; federal air marshals, who provide federal law enforcement presence on selected flights operated by U.S. 
air carriers; Visible Intermodal Protection Response (VIPR) teams; reinforced cockpit doors; the passengers themselves; as well as other measures both visible and invisible to the public. Figure 1 shows TSA's layers of aviation security. TSA has also implemented a variety of programs and protective actions to strengthen airport perimeters and access to sensitive areas of the airport, including conducting additional employee background checks and assessing different biometric-identification technologies. Airport perimeter and access control security is intended to prevent unauthorized access into secure areas of an airport--either from outside or within the airport complex. According to TSA, each one of these layers alone is capable of stopping a terrorist attack. TSA states that the security layers in combination multiply their value, creating a much stronger system, and that a terrorist who has to overcome multiple security layers to carry out an attack is more likely to be pre-empted, deterred, or to fail during the attempt. We reported in May 2010 that TSA deployed SPOT nationwide before first determining whether there was a scientifically valid basis for using behavior and appearance indicators as a means for reliably identifying passengers who may pose a risk to the U.S. aviation system. DHS's Science and Technology Directorate completed a validation study in April 2011 to determine the extent to which SPOT was more effective than random screening at identifying security threats and how the program's behaviors correlate to identifying high-risk travelers. However, as noted in the study, the assessment was an initial validation step, but was not designed to fully validate whether behavior detection can be used to reliably identify individuals in an airport environment who pose a security risk. According to DHS, additional work will be needed to comprehensively validate the program. 
According to TSA, SPOT was deployed before a scientific validation of the program was completed to help address potential threats to the aviation system, such as those posed by suicide bombers. TSA also stated that the program was based upon scientific research available at the time regarding human behaviors. We reported in May 2010 that approximately 14,000 passengers were referred to law enforcement officers under SPOT from May 2004 through August 2008. Of these passengers, 1,083 were arrested for various reasons, including being illegal aliens (39 percent), having outstanding warrants (19 percent), and possessing fraudulent documents (15 percent). The remaining 27 percent were arrested for other reasons. As noted in our May 2010 report, SPOT officials told us that it is not known if the SPOT program has resulted in the arrest of anyone who is a terrorist, or who was planning to engage in terrorist-related activity. According to TSA, in fiscal year 2010, SPOT referred about 50,000 passengers for additional screening and resulted in about 3,600 referrals to law enforcement officers. The referrals to law enforcement officers yielded approximately 300 arrests. Of these 300 arrests, TSA stated that 27 percent were illegal aliens, 17 percent were drug-related, 14 percent were related to fraudulent documents, 12 percent were related to outstanding warrants, and 30 percent were related to other offenses. DHS has requested about $254 million for fiscal year 2012 for the SPOT program, which would support an additional 350 (or 175 full-time equivalent) BDOs. If TSA receives its requested appropriation, it will have invested about $1 billion in the SPOT program since fiscal year 2007. According to TSA, as of August 2011, TSA is pilot testing revised procedures for BDOs at Boston-Logan airport to engage passengers entering screening in casual conversation to help determine suspicious behaviors. 
According to TSA, after a passenger's travel documents are verified, a BDO will briefly engage each passenger in conversation. If more information is needed to help determine suspicious behaviors, the officer will refer the passenger to a second BDO for a more thorough conversation to determine if additional screening is needed. TSA noted that these BDOs have received additional training in interviewing methods. TSA plans to expand this pilot program to additional airports in the fall of 2011. A 2008 report issued by the National Research Council of the National Academy of Sciences stated that the scientific evidence for behavioral monitoring is preliminary in nature. The report also noted that an information-based program, such as a behavior detection program, should first determine if a scientific foundation exists and use scientifically valid criteria to evaluate its effectiveness before deployment. The report added that such programs should have a sound experimental basis and that the documentation on the program's effectiveness should be reviewed by an independent entity capable of evaluating the supporting scientific evidence. According to the report, a terrorist's desire to avoid detection makes information-gathering techniques, such as asking what a person has done, is doing, or plans to do, highly unreliable. Using these techniques to elicit information could also have definite privacy implications. These findings, in particular, may be important as TSA moves forward with its pilot program to expand BDOs' use of conversation and interviews with all passengers entering screening. As we reported in May 2010, an independent panel of experts could help DHS develop a comprehensive methodology to determine if the SPOT program is based on valid scientific principles that can be effectively applied in an airport environment for counterterrorism purposes. 
Thus, we recommended that the Secretary of Homeland Security convene an independent panel of experts to review the methodology of the validation study on the SPOT program being conducted to determine whether the study's methodology was sufficiently comprehensive to validate the SPOT program. We also recommended that this assessment include appropriate input from other federal agencies with expertise in behavior detection and relevant subject matter experts. DHS concurred and stated that its validation study, completed in April 2011, included an independent review of the study with input from a broad range of federal agencies and relevant experts, including those from academia. DHS's validation study found that SPOT was more effective than random screening to varying degrees. For example, the study found that SPOT was more effective than random screening at identifying individuals who possessed fraudulent documents and identifying individuals who law enforcement officers ultimately arrested. However, DHS noted that the identification of such high-risk passengers was rare in both the SPOT and random tests. In addition, DHS determined that the base rate, or frequency, of SPOT behavioral indicators observed by TSA to detect suspicious passengers was very low and that these observed indicators were highly varied across the traveling public. Although details about DHS's findings related to these indicators are sensitive security information, the low base rate and high variability of traveler behaviors highlights the challenge that TSA faces in effectively implementing a standardized list of SPOT behavioral indicators. In addition, DHS outlined several limitations to the study. For example, the study noted that BDOs were aware of whether individuals they were screening were referred to them as the result of identified SPOT indicators or random selection. DHS stated that this had the potential to introduce bias into the assessment. 
DHS also noted that SPOT data from January 2006 through October 2010 were used in its analysis of behavioral indicators even though questions about the reliability of the data exist. In May 2010, we reported weaknesses in TSA's process for maintaining operational data from the SPOT program database. Specifically, the SPOT database did not have computerized edit checks built into the system to review the format, existence, and reasonableness of data. In another example, BDOs could not input all behaviors observed in the SPOT database because the database limited entry to eight behaviors, six signs of deception, and four types of prohibited items per passenger referred for additional screening. Because of these data-related issues, we reported that meaningful analyses could not be conducted at that time to determine if there is an association between certain behaviors and the likelihood that a person displaying certain behaviors would be referred to a law enforcement officer or whether any behavior or combination of behaviors could be used to distinguish deceptive from nondeceptive individuals. In our May 2010 report, we recommended that TSA establish controls for this SPOT data. DHS agreed and TSA has established additional data controls as part of its database upgrade. However, some of DHS's analysis for this study used SPOT data recorded prior to these additional controls being implemented. The study also noted that it was not designed to comprehensively validate whether SPOT can be used to reliably identify individuals in an airport environment who pose a security risk. The DHS study made recommendations related to strengthening the program and conducting a more comprehensive validation of whether the science can be used for counterterrorism purposes in the aviation environment. Some of these recommendations, such as the need for a comprehensive program evaluation including a cost-benefit analysis, reiterate recommendations made in our May 2010 report. 
TSA is currently reviewing the study's findings and assessing the steps needed to address DHS's recommendations but does not have time frames for completing this work. If TSA decides to implement the recommendations in the April 2011 DHS validation study, DHS may be years away from knowing whether there is a scientifically valid basis for using behavior detection techniques to help secure the aviation system against terrorist threats given the broad scope of the additional work and related resources identified by DHS for addressing the recommendations. Thus, as we reported in March 2011, Congress may wish to consider the study's results in making future funding decisions regarding the program. We reported in September 2009 that TSA has implemented a variety of programs and actions since 2004 to improve and strengthen airport perimeter and access controls security, including strengthening worker screening and improving access control technology. For example, to better address the risks posed by airport workers, in 2007 TSA implemented a random worker screening program that was used to enforce access procedures, such as ensuring workers display appropriate credentials and do not possess unauthorized items when entering secure areas. According to TSA officials, this program was developed to help counteract the potential vulnerability of airports to an insider attack--an attack from an airport worker with authorized access to secure areas. TSA has also expanded its requirements for conducting worker background checks and the population of individuals who are subject to these checks. For example, in 2007 TSA expanded requirements for name-based checks to all individuals seeking or holding airport-issued identification badges and in 2009 began requiring airports to renew all airport-identification media every 2 years. 
TSA also reported taking actions to identify and assess technologies to strengthen airport perimeter and access controls security, such as assisting the aviation industry and a federal aviation advisory committee in developing security standards for biometric access controls. However, we reported in September 2009 that while TSA has taken actions to assess risk with respect to airport perimeter and access controls security, it had not conducted a comprehensive risk assessment based on assessments of threats, vulnerabilities, and consequences, as required by DHS's National Infrastructure Protection Plan (NIPP). We further reported that without a full depiction of threats, vulnerabilities, and consequences, an organization's ability to establish priorities and make cost-effective security decisions is limited. We recommended that TSA develop a comprehensive risk assessment, along with milestones for completing the assessment. DHS concurred with our recommendation and said it would include an assessment of airport perimeter and access control security risks as part of a comprehensive assessment for the transportation sector--the Transportation Sector Security Risk Assessment (TSSRA). The TSSRA, published in July 2010, included an assessment of various risk-based scenarios related to airport perimeter security but did not consider the potential vulnerabilities of airports to an insider attack--the insider threat--which it recognized as a significant issue. In July 2011, TSA officials told us that the agency is developing a framework for insider risk that is to be included in the next iteration of the assessment, which TSA expected to be released at the end of calendar year 2011. Such action, if taken, would meet the intent of our recommendation. We also recommended that, as part of a comprehensive risk assessment of airport perimeter and access controls security, TSA evaluate the need to conduct an assessment of security vulnerabilities at airports nationwide. 
At the time of our review, TSA told us its primary measures for assessing the vulnerability of airports to attack were professional judgment and the collective results of joint vulnerability assessments (JVA) it conducts with the Federal Bureau of Investigation (FBI) for select--usually high-risk--airports. Our analysis of TSA data showed that from fiscal years 2004 through 2008, TSA conducted JVAs at about 13 percent of the approximately 450 TSA-regulated airports that existed at that time, thus leaving about 87 percent of airports unassessed. TSA has characterized U.S. airports as an interdependent system in which the security of all is affected or disrupted by the security of the weakest link. However, we reported that TSA officials could not explain to what extent the collective JVAs of specific airports constituted a reasonable systems-based assessment of vulnerability across airports nationwide. Moreover, TSA officials said that they did not know to what extent the 87 percent of commercial airports that had not received a JVA as of September 2009--most of which were smaller airports--were vulnerable to an intentional security breach. DHS concurred with our 2009 report recommendation to assess the need for a vulnerability assessment of airports nationwide, and TSA officials stated that based on our review they intended to increase the number of JVAs conducted at Category II, III, and IV airports and use the resulting data to assist in prioritizing the allocation of limited resources. Our analysis of TSA data showed that from fiscal year 2004 through July 1, 2011, TSA conducted JVAs at about 17 percent of the TSA-regulated airports that existed at that time, thus leaving about 83 percent of airports unassessed. Since we issued our report in September 2009, TSA had not conducted JVAs at Category III and IV airports. 
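The coverage arithmetic behind these figures is straightforward. A minimal sketch, where the assessed-airport count is an assumed illustrative value chosen to be consistent with the reported "about 13 percent of approximately 450 airports":

```python
# Back-of-the-envelope sketch of the JVA coverage arithmetic in the text.
# The assessed-airport count below is an assumed illustrative value, chosen to
# be consistent with the reported "about 13 percent of the approximately 450
# TSA-regulated airports"; it is not a figure from the report.

def coverage(assessed: int, total: int) -> tuple:
    """Return (percent assessed, percent unassessed) as whole percents."""
    pct = round(assessed / total * 100)
    return pct, 100 - pct

assessed_fy2004_2008 = 59  # assumption: roughly 13% of ~450 airports
print(coverage(assessed_fy2004_2008, 450))  # (13, 87)
```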
TSA stated that the TSSRA is to provide a comprehensive risk assessment of airport security, but could not tell us to what extent it has studied the need to conduct JVAs of security vulnerabilities at airports nationwide. Additionally, in August 2011 TSA reported that its national inspection program requires that transportation security inspectors conduct vulnerability assessments at all commercial airports, which are based on the joint vulnerability assessment model. According to TSA, every commercial airport in the United States receives a security assessment each year, including an evaluation of perimeter security and access controls. We have not yet assessed the extent to which transportation security inspectors consistently conduct vulnerability assessments based on the joint vulnerability model. Providing additional information on how and to what extent such security assessments have been performed would more fully address our recommendation. We also reported in September 2009 that TSA's efforts to enhance the security of the nation's airports have not been guided by a national strategy that identifies key elements, such as goals, priorities, performance measures, and required resources. To better ensure that airport stakeholders take a unified approach to airport security, we recommended that TSA develop a national strategy for airport security that incorporates key characteristics of effective security strategies, such as measurable goals and priorities. DHS concurred with this recommendation and stated that TSA would implement it by updating the Transportation Systems-Sector Specific Plan (TS-SSP), to be released in the summer of 2010. TSA provided a copy of the updated plan to congressional committees in June 2011 and to us in August 2011. 
We reviewed this plan and its accompanying aviation model annex and found that while the plan provided a high-level summary of program activities for addressing airport security such as the screening of workers, the extent to which these efforts would be guided by measurable goals and priorities, among other things, was not clear. Providing such additional information would better address the intent of our recommendation. Chairman McCaul, Ranking Member Keating, and Members of the Subcommittee, this concludes my statement. I look forward to answering any questions that you may have at this time. For questions about this statement, please contact Stephen M. Lord at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony are David M. Bruno and Steve Morris, Assistant Directors; Ryan Consaul; Barbara Guffy; Tracey King; Tom Lombardi; and Lara Miklozek. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The attempted bombing of Northwest flight 253 in December 2009 underscores the need for effective aviation security programs. Aviation security remains a daunting challenge with hundreds of airports and thousands of flights daily carrying millions of passengers and pieces of checked baggage. The Department of Homeland Security's (DHS) Transportation Security Administration (TSA) has spent billions of dollars and implemented a wide range of aviation security initiatives. Two key layers of aviation security are (1) TSA's Screening of Passengers by Observation Techniques (SPOT) program designed to identify persons who may pose a security risk; and (2) airport perimeter and access controls security. This testimony provides information on the extent to which TSA has taken actions to validate the scientific basis of SPOT and strengthen airport perimeter security. This statement is based on prior products GAO issued from September 2009 through September 2011 and selected updates in August and September 2011. To conduct the updates, GAO analyzed documents on TSA's progress in strengthening aviation security, among other things. DHS completed an initial study in April 2011 to validate the scientific basis of the SPOT program; however, additional work remains to fully validate the program. In May 2010, GAO reported that TSA deployed this program, which uses behavior observation and analysis techniques to identify potentially high-risk passengers, before determining whether there was a scientifically valid basis for using behavior and appearance indicators as a means for reliably identifying passengers who may pose a risk to the U.S. aviation system. TSA officials said that SPOT was deployed in response to potential threats, such as suicide bombers, and was based on scientific research available at the time. 
TSA is pilot testing revised program procedures at Boston-Logan airport in which behavior detection officers will engage passengers entering screening in casual conversation to help determine suspicious behaviors. TSA plans to expand this pilot program in the fall of 2011. GAO recommended in May 2010 that DHS, as part of its validation study, assess the methodology to help ensure the validity of the SPOT program. DHS concurred and stated that the study included an independent review with a broad range of agencies and experts. The study found that SPOT was more effective than random screening to varying degrees. However, DHS's study was not designed to fully validate whether behavior detection can be used to reliably identify individuals in an airport environment who pose a security risk. The study also noted that additional work was needed to comprehensively validate the program. TSA officials are assessing the actions needed to address the study's recommendations but do not have time frames for completing this work. In September 2009 GAO reported that since 2004 TSA has taken actions to strengthen airport perimeter and access controls security by, among other things, deploying a random worker screening program; however, TSA had not conducted a comprehensive risk assessment or developed a national strategy. Specifically, TSA had not conducted vulnerability assessments for 87 percent of the approximately 450 U.S. airports regulated for security by TSA in 2009. GAO recommended that TSA develop (1) a comprehensive risk assessment and evaluate the need to conduct airport vulnerability assessments nationwide and (2) a national strategy to guide efforts to strengthen airport security. DHS concurred and TSA stated that the Transportation Sector Security Risk Assessment, issued in July 2010, was to provide a comprehensive risk assessment of airport security. 
However, this assessment did not consider the potential vulnerabilities of airports to an insider attack--an attack from an airport worker with authorized access to secure areas. In August 2011, TSA reported that transportation security inspectors conduct vulnerability assessments annually at all commercial airports, including an evaluation of perimeter security. GAO has not yet assessed the extent to which inspectors consistently conduct vulnerability assessments. TSA also updated the Transportation Systems-Sector Specific Plan, which summarizes airport security program activities. However, the extent to which these activities were guided by measurable goals and priorities, among other things, was not clear. Providing such additional information would better address GAO's recommendation. GAO has made recommendations in prior work to strengthen TSA's SPOT program and airport perimeter and access control security efforts. DHS and TSA generally concurred with the recommendations and have actions under way to address them.
The Results Act is the centerpiece of a statutory framework provided by recent legislation to bring needed improvements to federal agencies' management activities. (Other parts of the framework include the 1990 Chief Financial Officers Act, the 1995 Paperwork Reduction Act, and the 1996 Clinger-Cohen Act.) Under the Results Act, every major federal agency must now ask itself some basic questions: What is our mission? What are our goals and how will we achieve them? How can we measure our performance? How will we use that information to make improvements? The act forces federal agencies to shift their focus away from such traditional concerns as staffing and activity levels and toward the results of those activities. VBA's annual performance plan is included in VBA's business plan, which is also included in VA's fiscal year 1999 budget submission. In previous testimony before this Subcommittee, we noted that VBA's planning process has been evolving. VBA first developed a strategic plan in December 1994, which covered fiscal years 1996 through 2001. The plan laid out VBA's mission, strategic vision, and goals. For example, the vocational rehabilitation and counseling (VR) goal was to enable veterans with service-connected disabilities to become employable and to obtain and maintain suitable employment. In addition, a program goal was to treat beneficiaries in a courteous, responsive, and timely manner. However, as VA's Inspector General noted, VBA's plan did not include specific program objectives and performance measures that could be used to measure VBA's progress in achieving its goals. In fiscal year 1995, VBA established a new Results Act strategic planning process that included business process reengineering (BPR). VBA began developing five "business-line" plans that corresponded with its major program areas: compensation and pension, educational assistance, loan guaranty, vocational rehabilitation and counseling, and insurance.
Each business-line plan supplemented the overall VBA strategic plan--which VBA refers to as its business plan--by specifying program goals that are tied to VBA's overall goals. Also, each business-line plan identified performance measures that VBA intended to use to track its progress in meeting each plan's goals. In VBA's fiscal year 1998 budget submission, VBA set forth its business goals and measures, most of which were focused on the process of providing benefits and services, such as timeliness and accuracy in processing benefit claims. As with last year's business plan, VBA's fiscal year 1999 business plan continues to focus primarily on process-oriented goals and performance measures. VBA is, however, developing more results-oriented goals and measures for its five benefit programs. VBA officials consider this initial effort, which it hopes to complete by this summer, to be an interim step; final results-oriented goals and measures will be developed following program evaluations and other analyses, which VBA plans to conduct over the next 3 to 5 years. To help achieve its program goals, VBA has efforts under way to coordinate with other agencies that support veterans' benefit programs; these efforts will need to be sustained to ensure quality service to veterans. VBA also faces significant challenges in setting clear strategies for achieving the goals it has established and in measuring program performance. For example, VBA considers its BPR efforts to be essential to the success of key performance goals, such as reducing the number of days it takes VBA to process a veteran's disability compensation claim. VBA is, however, in the process of reexamining BPR implementation; at this point, it is unclear exactly how VBA expects reengineered processes to improve claims processing timeliness. VBA is also in the process of identifying and developing key data it needs to measure its progress in achieving specific goals. 
At the same time, VBA recognizes, and is working to correct, data accuracy and reliability problems with its existing management reporting systems. In its fiscal year 1999 business plan, VBA has realigned its goals and measures to better link with VA's departmentwide strategic and performance plans. In keeping with the overall structure of VA's strategic and performance plans, each business-line plan has been organized into two sections. The first section--entitled "Honor, Care, and Compensate Veterans in Recognition of Their Sacrifices for America"--is intended to incorporate VBA's results-oriented goals in support of VA's efforts to do just that. The second section, entitled "Management Strategies," incorporates goals related to customer satisfaction, timeliness, accuracy, costs, and employee development and satisfaction. This structure more clearly highlights the need to focus on program results as well as on process-oriented goals. VBA has also made some progress in developing results-oriented goals and measures for two of its five programs--VR and housing. In our assessments of VA's strategic planning efforts, we determined that perhaps the most significant challenge for VA is to develop results-oriented goals for its major programs, particularly for benefit programs. As VBA notes in its business plan, the objective of the VR program is to increase the number of disabled veterans who acquire and maintain suitable employment and are considered to be rehabilitated. To measure the effectiveness of vocational rehabilitation program efforts to help veterans find and maintain suitable jobs, VBA has developed an "outcome success rate," which it defines as the percentage of veterans who have terminated their program and who have met accepted criteria for program success. One major goal of VBA's loan guaranty--or housing--program is to improve the abilities of veterans to obtain financing for purchasing a home.
The outcome measure VBA established for this goal is the percentage of veterans who say they would not have been able to purchase any home, or would have had to purchase a less expensive home, without a VA-guaranteed loan. While the results-oriented goals and measures VBA has developed to date are a positive first step, they do not allow VBA to fully assess these programs' results. The VR outcome success rate, for example, focuses only on those veterans who have left the program, rather than on all applicants who are eligible for program services. This success rate also does not consider how long it takes program participants to complete the program. In addition, by relying on self-reported data from beneficiaries, the housing outcome measure does not provide objective, verifiable information on the extent to which veterans are able to obtain housing as a result of VBA's housing program. Similarly, VBA's measures for its education program focus on the extent to which veterans are using their earned education benefit, rather than on program results. One of the purposes of this program is to extend the benefits of a higher education to qualifying men and women who might not otherwise be able to afford such an education. A results-oriented goal would focus on issues such as whether the program indeed provided the education that the veteran could not otherwise have obtained. One measure VBA could use to assess its progress in achieving this goal would be the extent to which veterans have obtained a college degree or otherwise completed their education. In the past, VA has cited the lack of formal program evaluations as a reason for not providing results-oriented goals for many of its programs. Evaluations can be an important source of information for helping the Congress and others ensure that agency goals are valid and reasonable, providing baselines for agencies to use in developing performance goals and measures, and identifying factors likely to affect agency performance.
VBA officials told us they now plan to develop results-oriented goals and measures for its three other programs--disability compensation and pensions, education benefits, and insurance coverage--by this summer. They consider these goals and measures--as well as those already developed for the VR and housing programs--to be interim, with final goals and measures to be developed following the completion of evaluations and analyses, which they plan to conduct over the next 3 to 5 years. In focusing on program results, VBA will need to tackle difficult questions in consultation with the Congress. For example, the purpose of the disability compensation program is to compensate veterans for the average loss in earning capacity in civilian occupations that results from injuries or conditions incurred or aggravated during military service. Given this program purpose, results-oriented goals would focus on issues such as whether disabled veterans are indeed being compensated for average loss in earning capacity and whether VBA is providing compensation to all those who should be compensated. However, we have reported that the disability rating schedule, which has served as a basis for distributing compensation among disabled veterans since 1945, does not reflect the many changes that medical and socioeconomic conditions may have had on veterans' earning capacity over the last 53 years. Thus, the ratings may not accurately reflect the levels of economic loss that veterans currently experience as a result of their disabilities. Issues such as whether veterans are being compensated to an extent commensurate with their economic losses are particularly sensitive, according to VBA officials, and for that reason, they plan to consult with key stakeholders--including the Congress and veterans' service organizations--over the next few months about the interim goals and measures VBA is developing. 
This will continue the consultative process, which VA officials, including those from VBA, began last year as part of VA's efforts to develop a departmentwide strategic plan. As VBA develops more results-oriented goals and measures, it also needs to ensure that it is coordinating efforts with other parts of VA as well as federal and state agencies that support veterans' benefits programs. For example, our work has shown that state vocational rehabilitation agencies, the Department of Labor, and private employment agencies also help veterans find employment once they have acquired all of the skills to become employable; VA has contracted for quality reviews of higher education and training institutions that have already been reviewed by the Department of Education; VBA relies on the Department of Defense for information about veterans' military service, including their medical conditions, to help determine eligibility for disability compensation, vocational rehabilitation, and educational assistance programs; and in determining the eligibility of a veteran for disability compensation, VBA usually requires the veteran to undergo a medical examination, which is generally performed by a VHA physician. As part of these coordination efforts, VBA provides veterans a letter outlining their benefits and the requirements for maintaining their eligibility. VBA also is working with VHA to improve the quality of the disability exams VHA physicians conduct; the lack of adequate exams has been the primary reason why appealed disability decisions are remanded to VBA. VBA will need to continue to coordinate with the organizations that are critical to veterans' benefits programs to ensure overall high-quality service to veterans. In addition to requiring an agency to identify performance goals and measures, the Results Act also requires that an agency highlight in its annual performance plan the strategies needed to achieve its performance goals.
Without a clear description of the strategies an agency plans to use, it will be difficult to assess the likelihood of the agency's success in achieving its intended results. A clear strategy would identify specific actions, including implementation schedules, that the agency was taking or planned to take and how these actions would achieve intended results. VBA is in the early stages of developing clear and specific strategies. While it has identified numerous functions and activities as its strategies, VBA has not clearly demonstrated how these efforts will lead to intended results. For example, in its current business plan, VBA consistently refers to BPR as the key to achieving its performance goals. VBA states that with the implementation of BPR, it will reduce the time it takes to complete an original claim for compensation to an average of 53 days from the current estimate of 106 days. However, VBA does not describe the specific actions needed, set a timetable for implementing needed changes, or show a clear link between BPR initiatives and reduced processing times. According to VBA officials, efforts to implement BPR are still under way and are now being reassessed. A major challenge VBA faces in developing clear and specific strategies for achieving performance goals will be effectively using BPR to identify what actions are needed to achieve performance goals and explain how these actions will lead to the intended results. Under the Results Act, agencies are expected to use the performance and cost data they collect to continuously improve their operations, identify gaps between their performance and their performance goals, and develop plans for closing performance gaps. However, in developing its performance measures, VBA has identified numerous data gaps and problems that, if not addressed, will hinder VBA and others' ability to assess VBA's performance and determine the extent to which it is achieving its stated goals.
For example, one goal is to ensure that VBA is providing the best value for the taxpayers' dollar; however, VBA currently is unable to calculate the full cost of providing benefits and services to veterans. VBA's ability to develop complete cost information for its program activities hinges on the successful implementation of its new cost accounting system, Activity Based Costing, currently under development. In addition, VBA plans to measure and assess veterans' satisfaction with the programs and services VBA provides. The data VBA needs to make this assessment, however, will not be available until VBA implements planned customer satisfaction surveys for two of its five programs--VR and educational assistance. In addition, VBA's recently appointed Under Secretary for Benefits has raised concerns about the accuracy of data contained in VBA's existing management reporting systems. Moreover, completed and ongoing IG audits have identified data system internal control weaknesses and data integrity problems, which if not corrected will undermine VBA's ability to reliably measure its performance. In its fiscal year 1996 audit of VA's financial statements, for example, the Inspector General reported that the accounting system supporting the housing program does not efficiently and reliably accumulate financial information. The Inspector General believes the system's deficiencies have the potential to adversely affect VBA's ability to accurately and completely produce reliable financial information and to effectively audit system data. Also, an ongoing IG audit appears to have identified data integrity problems with certain performance data, according to VBA officials. Specifically, in assessing whether key claims processing timeliness data are valid, reliable, and accurate, IG auditors found instances where VBA regional office staff were manipulating data to make their performance appear better than it in fact was. 
VBA officials told us they are in the process of assessing the data system's vulnerabilities so they can take steps to correct the problems identified. Mr. Chairman, this completes my testimony this morning. I would be pleased to respond to any questions you or Members of the Subcommittee may have.

Agencies' Annual Performance Plans Under the Results Act: An Assessment Guide to Facilitate Congressional Decisionmaking (GAO/GGD/AIMD-10.1.18, Feb. 1998).
Vocational Rehabilitation: Opportunities to Improve Program Effectiveness (GAO/T-HEHS-98-87, Feb. 4, 1998).
Managing for Results: Agencies' Annual Performance Plans Can Help Address Strategic Planning Challenges (GAO/GGD-98-44, Jan. 30, 1998).
The Results Act: Observations on VA's August 1997 Draft Strategic Plan (GAO/T-HEHS-97-215, Sept. 18, 1997).
The Results Act: Observations on VA's June 1997 Draft Strategic Plan (GAO/HEHS-97-174R, July 11, 1997).
Veterans Benefits Administration: Focusing on Results in Vocational Rehabilitation and Education Programs (GAO/T-HEHS-97-148, June 5, 1997).
The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997).
Veterans' Affairs: Veterans Benefits Administration's Progress and Challenges in Implementing GPRA (GAO/T-HEHS-97-131, May 14, 1997).
Veterans' Employment and Training Service: Focusing on Program Results to Improve Agency Performance (GAO/T-HEHS-97-129, May 7, 1997).
Agencies' Strategic Plans Under GPRA: Key Questions to Facilitate Congressional Review (GAO/GGD-10.1.16, ver. 1, May 1997).
Managing for Results: Using GPRA to Assist Congressional and Executive Branch Decisionmaking (GAO/T-GGD-97-43, Feb. 12, 1997).
VA Disability Compensation: Disability Ratings May Not Reflect Veterans' Economic Losses (GAO/HEHS-97-9, Jan. 7, 1997).

The first copy of each GAO report and testimony is free. Additional copies are $2 each.
Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 37050 Washington, DC 20013 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO discussed the Veterans Benefits Administration's (VBA) implementation of the Government Performance and Results Act of 1993. GAO noted that: (1) VBA continues to make progress in setting goals and measuring its programs' performance but faces significant challenges in its efforts to successfully implement the Results Act; (2) VBA has efforts under way to address these challenges, which if continued will help ensure success; (3) for example, VBA is in the process of developing results-oriented goals and measures for each of its programs in response to concerns that GAO and others have raised; (4) developing more results-oriented goals and measures will require VBA to address difficult and sensitive questions regarding specific benefit programs, such as whether disabled veterans are being compensated appropriately under the existing disability program structure; (5) to address these questions, VBA is continuing its consultations with Congress, begun last year in conjunction with the Department of Veterans Affairs (VA) strategic planning efforts; (6) VBA also has efforts under way to coordinate with agencies that support veterans' benefits programs, such as the Department of Defense, in achieving specific goals; (7) to successfully implement the Results Act, VBA must also develop effective strategies for achieving its performance goals and ensure that it has accurate, reliable data to measure its progress in achieving these goals; (8) VBA is in the early stages of developing clear and specific strategies but has not yet clearly demonstrated how these strategies will help it achieve the intended results; (9) moreover, VBA does not yet have the data needed to effectively measure its performance in several key areas; (10) for example, one goal is to ensure that VBA is providing the best value for the taxpayer dollar; however, VBA currently is unable to calculate the full cost of providing benefits and services to veterans; (11) in addition, VBA officials and VA's
Inspector General (IG) have raised concerns about the accuracy of data VBA is currently collecting; (12) for example, completed and ongoing IG audits have identified data integrity problems with VBA's claims processing timeliness data; and (13) VBA is currently determining how best to address these concerns.
DOD instruction 1330.04 outlines the following roles and responsibilities regarding the Armed Forces Sports Program:

Principal Deputy Under Secretary of Defense for Personnel and Readiness: Provides guidance and oversight concerning the participation of servicemembers in Armed Forces, national, and international amateur sports competitions.
Senior Military Sports Advisor: Serves as the Service Personnel Chief who is responsible for the management and operation of the program and reports to the Principal Deputy Under Secretary of Defense for Personnel and Readiness.
Armed Forces Sports Council: Serves as the governing body of the program, and is composed of the Morale, Welfare, and Recreation representatives from each service or their designated representatives.
Armed Forces Sports Council Secretariat: Serves as the executive office for the council and serves as the U.S. liaison to the International Military Sports Council.
Armed Forces Sports Council Working Group: Serves as the staffing body of the Armed Forces Sports Council, which is composed of Morale, Welfare, and Recreation representatives from each service.
Secretaries of the Military Departments: Develop sports programs based on specific needs and mission requirements that provide the opportunity for servicemembers to prepare for and compete in national and international amateur sports competitions on a voluntary basis.

According to Sports Council Secretariat officials and the policies for managing servicemembers' participation in national and international amateur sports competitions, the Sports Council Secretariat and the service sports offices each have responsibilities for managing the Armed Forces Sports Program. Table 1 further describes the responsibilities of the Armed Forces Sports Council Secretariat and the service sports offices for the Armed Forces Sports Program.
The number of staff members working in the Armed Forces Sports Council Secretariat and the service sports offices and the percentage of time staff members spend working for the Armed Forces Sports Program varies. For example, the Navy Sports Office has two staff members who work on the program nearly full time, while the Army Sports Office has four staff members who work on the program part time. In addition, the staff members working for the Armed Forces Sports Program include both civilians and active-duty servicemembers. Table 2 provides further details on the number of staff members and the estimated percentage of time they spend working for the Armed Forces Sports Program. DOD has data on participation in and costs of the Armed Forces Sports Program, but has not taken steps, including developing performance measures and clarifying roles and responsibilities, that are needed to help ensure that the program is implemented effectively. Sports Council Secretariat officials provided us with data for fiscal years 2012-2016 on servicemember participation in the program, including on the number of days servicemembers are away from their unit participating in the program and on civilians supporting the program, and data for fiscal years 2014-2016 on program costs. In analyzing the number of servicemembers participating in the program, we found that servicemember participation declined from 968 servicemembers in fiscal year 2012 to 848 servicemembers in fiscal year 2016. Table 3 provides further details about the number of servicemembers who participated in or supported the Armed Forces Sports Program in fiscal years 2012-2016. We also found that servicemember participation ranged from an average of 6.8 days per event in fiscal year 2013 to 13.2 days per event in fiscal year 2016.
Sports Council Secretariat and service officials stated that the servicemembers who participate in the program are in peak physical shape and that they were unaware of any additional recovery time that a participant has needed after competing. Table 4 breaks out these data for each year from fiscal years 2012 through 2016. According to officials, DOD civilians provide various types of support to the Armed Forces Sports Program and may include employees who work for the program on a full- or part-time basis, as well as those who serve in a volunteer capacity. Civilians who support the program as volunteers may serve in a variety of roles, for example as coaches or as staff, including athletic trainers, service representatives, or medical staff. Table 5 provides further details on the number of civilians who supported the Armed Forces Sports Program in fiscal years 2012 through 2016. Sports Council Secretariat officials stated that the program covers the costs of servicemembers participating and that units do not have to provide any funding. Program costs ranged from about $2.1 million to about $2.8 million from fiscal years 2014 through 2016. Table 6 provides additional details about these costs. Armed Forces Sports Championships are hosted by one of the services and must include at least three of the services in competition for all team sports and most individual sports. Higher level competitions are attended by the most competent athletes from the Armed Forces Sports Championships or by athletes selected based on other qualifying events or criteria, and may include U.S. national, International Military Sports Council, or other international events. In table 7 we break out the costs from table 6 associated with Armed Forces Sports Championships and higher level competitions for fiscal years 2014 through 2016.
While DOD has data on program participation and cost, these data are outputs and not outcomes and therefore do not exhibit important attributes of successful performance measures that are necessary to demonstrate that the Armed Forces Sports Program is being implemented effectively. Federal internal control standards state, among other things, that managers should establish activities to monitor performance measures. Furthermore, our prior work on performance measurement identified ten key attributes of performance measures, such as clarity, objectivity, having a measurable target, and having baseline and trend data in order to identify, monitor, and report changes in performance and to help ensure that performance is viewed in context. Table 8 identifies each attribute of effective performance measures along with its definition. Sports Council Secretariat officials stated that they use data on the number of servicemembers and services annually participating in each sport and competition to measure the performance and effectiveness of the Armed Forces Sports Program. While these data provide important context about the program's size and reach, they are outputs and do not constitute performance measures because they do not exhibit several of the key attributes previously discussed. First, we found that the Sports Council Secretariat's use of participation data does not exhibit the attribute of linkage in that there is not clear alignment between the number of participants and how it affects the program's ability to achieve its goals and mission. 
For example, while DOD Instruction 1330.04 does not specify goals or a mission, the Armed Forces Sports Council's standard operating procedures identify that the five objectives of the program are to: (1) promote goodwill among the Armed Services through sports, (2) promote a positive image of the Armed Forces through sports, (3) provide the incentive and encourage physical fitness by promoting a highly competitive sports program, (4) provide a venue for military athletes to participate in national and international competitions, and (5) engage in valuable military-to-military opportunities with International Military Sports Council member nations through sports. However, Sports Council Secretariat officials have not established a link between the participant data that they stated are used to measure program performance and the achievement of these objectives. Further, our prior work has shown that linkages between goals and measures are most effective when they are clearly communicated and create a "line of sight" so that everyone understands what an organization is trying to achieve and the goals it seeks to reach. During meetings with the Sports Council Secretariat, officials stated that they use data, such as servicemember participation in the Armed Forces Sports Championships, International Military Sports Council Championships, U.S. Nationals, and the Olympic and Paralympic Games to measure the performance and effectiveness of the Armed Forces Sports Program, and that they have created performance measures on an as-needed basis when it has been necessary to prioritize the allocation of funds for individual sports. 
However, none of the documents we were provided on the program identify participation or any other data as a performance measure, and these efforts do not exhibit a deliberate, four-stage performance measurement process that involves (1) identifying goals, (2) developing performance measures, (3) collecting data, and (4) analyzing data and reporting results. Further, servicemember participation in the Olympic and Paralympic Games is not a valid performance measure because, according to officials from the Office of the Secretary of Defense, the Sports Council Secretariat, and the services, the Armed Forces Sports Program does not have responsibility for these games. Second, participation data do not exhibit the measurable target attribute because they represent a summary of the program's activity and are not associated with numerical goals, which are needed to gauge program progress and results. Our prior work has shown that numerical targets or other measurable values facilitate future assessments of whether overall goals and objectives are achieved because comparisons can be easily made between projected performance and actual results. While the Sports Council Secretariat's data included the "actual" number of program participants, they did not identify projected performance targets that would enable program officials to determine how far they have progressed toward a desired outcome or end state. In response to our analysis, Sports Council Secretariat officials stated that they consider the list of 24 sports and total number of competitions that servicemembers may participate in to be the target--the attainment of which is based on variables such as available funding and the extent to which each service agrees to provide teams to participate in the competitions. 
However, this is not a valid demonstration of this attribute because neither the target in this sense nor the variables affecting participation (e.g., funding and service branch involvement) demonstrate how well the Armed Forces Sports Program performs or carries out its mission. In addition, officials from the Sports Council Secretariat and the services stated that the program directly benefits the services' readiness, recruitment, and retention efforts. Specifically, officials cited the program's emphasis on a higher level of physical fitness than is otherwise required by the services as contributing to individual servicemember readiness, and involvement in national and international sports championships as aiding recruiting efforts because it showcases some of the unique opportunities open to those in the services. Further, officials stated that the opportunity to participate in higher level competitions through the program helps retention because it provides an incentive for some servicemembers to stay in the services. However, outside of participation and cost data and some anecdotal examples, officials did not have specific measures for or data on the Armed Forces Sports Program's contribution to the services' readiness, recruiting, and retention efforts. Third, while DOD has program participation data, it does not track baseline and trend data in order to assess the program's performance and progress over time. Our prior work has demonstrated that by tracking and developing a performance baseline for all measures--including those that demonstrate the effectiveness of a program--agencies can better evaluate progress made and whether or not goals are being achieved. Further, identifying and reporting deviations from the baseline as a program proceeds provides valuable information for oversight by identifying areas of program risk and their causes for decision makers. 
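The target-and-baseline comparison that these attributes call for can be illustrated with a minimal sketch; the measure name and all numeric values below are hypothetical illustrations, not program data:

```python
# Minimal sketch of an outcome-oriented performance measure that has a
# baseline and a numerical target, per the key attributes discussed above.
# All names and values are hypothetical, not Armed Forces Sports Program data.

from dataclasses import dataclass

@dataclass
class PerformanceMeasure:
    name: str        # clarity: what is being measured
    baseline: float  # baseline value against which progress is judged
    target: float    # measurable target (projected performance)

    def assess(self, actual: float) -> str:
        """Compare an actual result against the target and baseline."""
        status = "target met" if actual >= self.target else "target not met"
        change = actual - self.baseline
        return f"{self.name}: {status} (change from baseline: {change:+.1f})"

# Hypothetical measure linking program participation to a retention objective
measure = PerformanceMeasure(
    name="Re-enlistment rate of program participants (%)",
    baseline=80.0,
    target=85.0,
)
print(measure.assess(82.5))
# -> Re-enlistment rate of program participants (%): target not met (change from baseline: +2.5)
```

The point of the sketch is the structure, not the numbers: without a stated target and baseline, a raw participant count supports neither the "target met/not met" judgment nor the "change from baseline" comparison.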
According to Sports Council Secretariat officials, many of the program's benefits--such as helping with readiness, recruitment, and retention--are not measured, and commanding officers are responsible for determining and managing the program's effect on the readiness of their units. Thus, given the relatively small number of program participants and participation being contingent on obtaining commanding officer approval, Sports Council Secretariat officials stated that they do not believe that the services' readiness is negatively affected by servicemembers participating in the Armed Forces Sports Program. We acknowledge that the measurement of the program's performance may be difficult, but DOD's participation data do not include targets allowing program performance to be measured and do not assess the intended benefits of the program. Without effective performance measures that demonstrate linkage with the program's goals or mission, have measurable targets, and an established baseline of data, DOD will be unable to effectively demonstrate the benefits of the program and will not have the information needed to ensure that the department is allocating resources to its highest priority efforts. The roles and responsibilities that are currently being implemented for the Armed Forces Sports Program differ from the program's roles and responsibilities specified in DOD policy. DOD Instruction 1330.04 and the Armed Forces Sports Council's standard operating procedures specify that the Armed Forces Sports Program includes training or national qualifying events in preparation for participation in International Military Sports Council events, the Pan American Games, the Olympic Games, the Paralympic Games, and other international competitions. 
While this is how the program is defined in key program documents, the Office of the Secretary of Defense, Sports Council Secretariat, and service officials stated that all responsibilities, including costs, associated with servicemember participation in the Pan American, Olympic, and Paralympic Games are, in practice, handled by the services. According to these officials, the program's primary objective when it was established was to support the Olympic movement by providing servicemembers the opportunity to compete in the 1948 London Olympic Games. Further, DOD Instruction 1330.04 specifies that the Armed Forces Sports Program includes, among other things, training or national qualifying events in preparation for participation in the Pan American Games, the Olympic Games, and the Paralympic Games. However, officials stated that over time, the services assumed responsibility for their respective servicemembers' participation in the Pan American, Olympic, and Paralympic Games. The Office of the Secretary of Defense and Sports Council Secretariat officials stated they plan to review DOD Instruction 1330.04 and make necessary updates but did not indicate what specific changes would be made to clarify the program's roles and responsibilities. Further, these officials stated that they were not sure whether they would remove the Pan American, Olympic, and Paralympic Games from the Armed Forces Sports Council's standard operating procedures because of the potential for responsibilities to shift again in the future. The Armed Forces Sports Program provides a means by which servicemember athletes can participate in national and international competitions while representing the Armed Forces. However, the program currently does not have performance measures with linkage, measurable targets, or a baseline. 
Without measures that address the desired outcomes and include these attributes, it will be difficult for DOD and Congress to determine whether the program is meeting the desired goals or benefiting readiness, recruitment, and retention. To improve the management of the Armed Forces Sports Program and better determine whether the program is achieving its desired results, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to develop and implement performance measures for the Armed Forces Sports Program that measure the desired outcomes for the program and, at a minimum, demonstrate linkage to the program's goals or mission, have a measurable target, and include a baseline that can be used to demonstrate program performance. We provided a draft of this report to DOD and the Department of Homeland Security (DHS) for review and comment. In its comments on a draft of this report, DOD concurred with our recommendation; DOD's comments are reprinted in their entirety in appendix II. DOD and DHS also provided technical comments, which we incorporated into the report as appropriate. DOD concurred with our recommendation to develop and implement performance measures for the Armed Forces Sports Program that measure the desired outcomes for the program and, at a minimum, demonstrate linkage to the program's goals or mission, have a measurable target, and include a baseline that can be used to demonstrate program performance, but also noted potential limitations on establishing measures. Specifically, DOD said that it will explore the development and implementation of performance outcome measures for the Armed Forces Sports Program and that it will review Department of Defense Instruction 1330.04 for potential opportunities to incorporate appropriate guidance regarding performance measures for the Armed Forces Sports Program.
However, DOD stated that there are limitations on establishing metrics for several of the program's objectives, such as goodwill and positive image, which are challenging to measure. Further, DOD said that quantifying outcomes for some objectives, such as the "spirit" of the program, also will be challenging, but that the lack of a performance measurement does not negate the importance of pursuing objectives that contribute to demonstrating the program's overall effectiveness. In our report, we acknowledged that measurement of the program's performance may be difficult but also necessary to produce the evidence-based support that is needed to objectively demonstrate how the specific activities that comprise a program are contributing to its effectiveness. Exploring the development and implementation of performance measures and reviewing DOD guidance regarding performance measures are positive steps, but we continue to believe that DOD needs to develop and implement performance measures in order to demonstrate if the Armed Forces Sports Program is being implemented effectively. While it may be challenging to develop performance measures, our prior work has demonstrated that even for highly complex areas such as DOD's reform of its medical health system and prevention of sexual assault, developing and implementing performance measures can be done, and if implemented correctly, can enhance decision-making. Until DOD does develop and implement performance measures, it will be unable to effectively demonstrate the benefits of the program and will not have the data needed to monitor the program, make decisions about program management and ensure that the department is allocating resources to its highest priority efforts. 
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of Homeland Security; the Secretaries of the Army, the Navy, and the Air Force; the Commandants of the Marine Corps and the Coast Guard; and the Under Secretary of Defense for Personnel and Readiness. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III. To assess the effectiveness of the Department of Defense's (DOD) implementation of the Armed Forces Sports Program, we reviewed DOD and service (including the Coast Guard) policies and procedures related to the administration of and participation in the program. We interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness, the Armed Forces Sports Council Secretariat ("Sports Council Secretariat"), and each service about these policies and procedures. We also discussed the extent to which any performance measures had been established to assess the program's effectiveness, including any effects of program participation on the services' readiness. We obtained and analyzed data from DOD on the number of active-duty servicemembers, by service, who had participated in the Armed Forces Sports Program in fiscal years 2012 through 2016 as well as on the number of days servicemembers had spent away from their respective units participating in the program during the same time frame. We also obtained and analyzed data from DOD on the number of DOD and Coast Guard civilians who had supported the Armed Forces Sports Program in fiscal years 2012 through 2016. 
Further, we obtained and analyzed data from DOD on program costs for fiscal years 2014 through 2016, including the administrative, travel, and salary costs incurred by the Armed Forces Sports Council Secretariat, program-related travel and salary costs for each service, and participation costs of travel participants, which according to program officials include transportation and lodging costs. The time frame of the participant and cost data that we obtained differs because DOD officials stated that fiscal year 2014 was the most recent year that cost data were available from all the services. Based on responses from the Armed Forces Sports Program office to data reliability questionnaires, we determined that the data we obtained were sufficiently reliable for the purposes of this review. We compared DOD's policy for the program against the federal standards for internal control that state, among other things, that managers should establish activities to monitor performance measures. Additionally, we compared DOD's participant data--the department's measure for demonstrating the effectiveness of the Armed Forces Sports Program--with our prior work on performance measurement to determine the extent to which these data exhibit the ten key attributes of successful performance measures. To obtain servicemembers' perspectives on the Armed Forces Sports Program and its effect on individual readiness, we interviewed 13 randomly selected servicemembers who had participated in the program in calendar year 2015 since, at that time, this was the most recent year for which the program had a complete set of participant data. To understand any effect that a servicemember's participation may have had on unit readiness, we also interviewed ten commanding officers who had approved one of the randomly selected servicemembers' requests to participate in the Armed Forces Sports Program.
While the information that we obtained was nongeneralizable, it provided perspectives from individuals with first-hand experience with the Armed Forces Sports Program. We also reviewed DOD and service policies and procedures to identify roles and responsibilities associated with implementing the Armed Forces Sports Program. Further, we interviewed officials within each organization to discuss how designated roles and responsibilities were being implemented. We conducted this performance audit from August 2016 to June 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Kimberly A. Mayo, Assistant Director; Christopher H. Conrad; Mae Frances Jones; Stephanie Moriarty; Shahrzad Nikoo; Shane T. Spencer; Andrew Stavisky; and John W. Van Schaik made key contributions to this report.
For nearly a century, the U.S. Armed Forces (i.e., the Army, the Navy, the Marine Corps, the Air Force, and the Coast Guard) have organized and participated in international and national sporting competitions in part because of the intended benefits for servicemember morale and the unique opportunity that participation provides to foster diplomatic relations. House Report 114-537 accompanying a bill for the National Defense Authorization Act for Fiscal Year 2017 included a provision for GAO to review the Armed Forces Sports Program and its impact on the military services' readiness. This report assesses the effectiveness of DOD's implementation of the Armed Forces Sports Program. GAO analyzed participation data for fiscal years 2012 through 2016 and cost data for fiscal years 2014 through 2016, compared DOD data with attributes of successful performance measures, compared roles and responsibilities specified in policy with those being implemented, and interviewed DOD officials. The Department of Defense (DOD) has data on participation in and costs of the Armed Forces Sports Program, but has not taken steps, including developing performance measures and clarifying roles and responsibilities that are needed to help ensure the program is implemented effectively. DOD officials stated that they use sport and competition participation data to measure the performance and effectiveness of the program. According to these data, servicemember participation changed from 968 servicemembers in fiscal year 2012 to 848 servicemembers in fiscal year 2016, and program costs ranged from about $2.1 million to about $2.8 million in fiscal years 2014 through 2016. While these data provide important context about the program's size and reach, they do not exhibit several key attributes, such as linkage, a measurable target, and baseline and trend data that GAO has found are key to successfully measuring a program's performance. 
First, these data do not exhibit linkage because no relationship has been established to show how the number of servicemember participants contributes to achievement of the program's objectives, such as promoting goodwill among and a positive image of the U.S. Armed Forces through sports. Second, these data were not associated with a measurable target that would enable program officials to determine how far the program has progressed toward a desired outcome or end state. Third, DOD does not track baseline and trend data for measures that are able to assess the program's performance and progress over time. Without performance measures that demonstrate these attributes, DOD will be unable to effectively demonstrate that it is achieving the intended benefits of the program, such as improving readiness, recruitment, and retention as well as promoting the goodwill of the U.S. Armed Forces. Officials cited the program as aiding recruiting because it showcased unique opportunities open to those in the U.S. Armed Forces. However, outside of participation and cost data and some anecdotal examples, officials did not have specific measures for or data on the Armed Forces Sports Program's contribution to the services' readiness, recruiting, and retention efforts. The roles and responsibilities that are currently being implemented for the program differ from the program's roles and responsibilities specified in DOD policy. DOD Instruction 1330.04 specifies that the program includes training or national qualifying events in preparation for participation in International Military Sports Council events, the Pan American Games, the Olympic Games, the Paralympic Games, and other international competitions. While this is how the program is defined in key program documents, DOD officials stated that all responsibilities, including costs, associated with servicemember participation in the Pan American, Olympic, and Paralympic Games are handled by the services.
DOD officials stated that they plan to review DOD Instruction 1330.04 and make necessary updates, but have not yet determined what specific changes would be made to clarify the program's roles and responsibilities. GAO recommends that DOD develop and implement performance measures for the Armed Forces Sports Program that, at a minimum, demonstrate linkage to the program's goals or mission, have a measurable target, and include a baseline that can be used to demonstrate program performance. DOD concurred with the recommendation, noting potential limitations on establishing measures. GAO acknowledges these limitations, but continues to believe that measures are important to evaluating the program's effectiveness.
SBA depends on its IT environment to support the management of its programs. This environment includes 42 mission-critical systems running on legacy mainframes and minicomputers. Ten of these systems support administrative activities; the remaining 32 support loan activities, including loan accounting and collection, loan origination and disbursement, and loan servicing and debt collection. According to SBA's self-assessment of its IT environment, the legacy systems are not effectively integrated and thus provide limited information sharing. The assessment also showed that SBA cannot depend on the systems to provide consistent information. Because of these problems, it has embarked on an agencywide systems modernization initiative to replace its outmoded legacy systems. Our May report presented the results of our evaluation of SBA's management of IT in the areas of investment management, architecture, software development and acquisition, information security, and human capital. These five areas encompass major IT functions and are widely recognized as having substantial influence over the effectiveness of operations.

In our assessment figures, the symbols are defined as follows:

A blank circle indicates that policies and procedures do not exist or are substantially obsolete or incomplete, and that practices for planning, monitoring, and evaluation are predominantly ad hoc or not performed.

A half circle indicates that policies and procedures are predominantly current and facilitate key functions, and that selected key practices for planning, monitoring, and evaluation have been implemented.

A solid circle indicates that policies and procedures are current and comprehensive for key functions, and that practices for planning, monitoring, and evaluation adhere to policies, procedures, and generally accepted standards.

Properly implemented, IT investment management is an integrated approach that provides for the life-cycle management of IT projects. This investment process requires three essential phases: selection, control, and evaluation.
In the selection phase, the organization determines priorities and makes decisions about which projects will be funded based on their technical soundness, contribution to mission needs, performance improvement priorities, and overall IT funding levels. In the control phase, all projects are consistently controlled and managed. The evaluation phase compares actual performance against estimates to identify and assess areas in which future decision-making can be improved. Our assessments of SBA's investment management processes disclosed that policies and procedures were substantially incomplete, and practices were predominantly ad hoc or not performed for most of the critical activities, as shown in figure 1. SBA had made progress in establishing an investment review board and was beginning to define an investment selection process. However, it had not yet established IT investment management policies and procedures to help identify and select projects that will provide mission-focused benefits and maximum risk-adjusted returns. Likewise, SBA had not yet defined processes for investment control and evaluation to ensure that selected IT projects will be developed on time, within budget, and according to requirements, and that these projects will generate expected benefits. The agency had performed only limited reviews of major IT investments, and these reviews were ad hoc, since little data had been captured for analyzing benefits and returns on investment. Without established policies and defined processes for IT investment, SBA cannot ensure that consistent selection criteria are used to compare costs and benefits across proposals, that projects are monitored and provided with adequate management oversight, or that completed projects are evaluated to determine overall organizational performance improvement.
In addition, the agency lacks assurance that the collective results of post- implementation reviews across completed projects will be used to modify and improve investment management based on lessons learned. To address IT investment management weaknesses, SBA planned to develop and implement an investment selection process that includes screening, scoring, and ranking proposals. It also planned to use its target architecture to guide IT investments. In addition, SBA planned to develop and implement an investment control process to oversee and control projects on a quarterly basis. As part of investment control, SBA intended to collect additional data from all investment projects and compare actual data with estimates in order to assess project performance. SBA's plans indicate a strong commitment to making improvements in this area; however, to establish robust IT investment management processes, additional actions are needed. Accordingly, we recommended that the SBA Administrator direct the chief information officer to establish policies and procedures and define and implement processes to ensure that (1) IT projects are selected that result in mission-focused benefits, maximizing risk-adjusted return-on-investment; (2) projects are controlled to determine if they are being developed on time, within budget, and according to requirements; and (3) projects are evaluated to ascertain whether completed projects are generating expected benefits. An IT architecture is a blueprint--consisting of logical and technical components--to guide the development and evolution of a collection of related systems. At the logical level, the architecture provides a high-level description of an organization's mission, the business functions being performed and the relationships among the functions, the information needed to perform the functions, and the flow of information among functions. 
At the technical level, it provides the rules and standards needed to ensure that interrelated systems are built to be interoperable and maintainable. Our assessments of SBA's information architecture disclosed that SBA had drafted policies and procedures for key activity areas except for change management, and had drafted architecture components except for change management, as reflected in figure 2. SBA had made progress with its target IT architecture by describing its core business processes, analyzing information used in its business processes, describing data maintenance and data usage, identifying standards that support information transfer and processing, and establishing guidelines for migrating current applications to the planned environment. However, procedures did not exist for change management to ensure that new systems installations and software changes would be compatible with other systems and SBA's planned operating environment. Without established policies and systematic processes for IT architecture activities, SBA cannot ensure that it will develop and maintain an information architecture that will effectively guide efforts to migrate systems and make them interoperable to meet current and future information processing needs. To address IT architecture weaknesses, SBA planned to establish a change management process for architecture maintenance, to ensure that new systems installations and software changes will be compatible with other systems and with SBA's planned operating environment. In addition, it planned to incorporate in the target architecture specific security standards for hardware, software, and communications. 
To ensure that these planned improvements are completed and sound practices institutionalized, we recommended that the SBA Administrator direct the chief information officer to establish policies and procedures and define and implement processes to ensure that (1) the architecture is developed using a systematic process so that it meets the agency's current and future needs and (2) the architecture is maintained so that new systems and software changes are compatible with other systems and SBA's planned operating environment. To provide the software needed to support mission operations, an organization can develop software using its staff or acquire software products and services through contractors. Key processes for software development include requirements management, project planning, project tracking and oversight, quality assurance, and configuration management. Additional key processes needed for software acquisition include acquisition planning, solicitation, contract tracking and oversight, product evaluation, and transition to support. Our assessment of SBA's software development and acquisition processes disclosed that SBA had not established policies, its procedures were obsolete, and its practices were predominantly ad hoc for one or more critical activities, as shown in figure 3. SBA lacked policies for software development and acquisition to help produce information systems within the cost, budget, and schedule goals set during the investment management process that at the same time comply with the guidance and standards of its IT architecture. SBA's IT guidance and procedures were obsolete and thus rarely used for acquisition planning, solicitation, contract tracking and oversight, product evaluation, and transition to support. An existing systems development methodology was being adopted, however, to replace outdated guidelines that lacked key processes for software development. 
Our review of the selected software projects indicated that SBA's practices were typically ad hoc for project planning, project tracking and oversight, quality assurance, and configuration management. Without established policies and defined processes for software development and acquisition, practices will likely remain ad hoc and not adhere to generally accepted standards. Key activities--such as requirements management, planning, configuration management, and quality assurance--will be inconsistently performed or not performed at all when project managers are faced with time constraints or limited funding. These weaknesses can delay delivery of software products and services and lead to cost overruns. To address software development and acquisition weaknesses, SBA planned to implement formal practices, such as software requirements management and configuration management, on a project basis before establishing them agencywide. Specifically, SBA had selected the Loan Monitoring System (LMS) project as a starting point for identifying, developing, and implementing a new systems development methodology and associated policies, procedures, and practices. LMS therefore will serve as a model for future systems development projects. While SBA's plan is a good first step, additional measures need to be taken to ensure agencywide improvements. To establish sound IT software development and acquisition processes, we recommended that the SBA Administrator direct the chief information officer to complete the systems development methodology and develop a plan to institutionalize and enforce its use; and develop a mechanism to enforce the use of newly established policies in areas including but not limited to requirements management, project planning/tracking/oversight, quality assurance, configuration management, solicitation, contract oversight, and product evaluation. 
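The configuration management discipline discussed above--verifying that new installations and software changes remain compatible with other systems and the planned operating environment--can be illustrated with a minimal sketch. The component names, version numbers, and change-request format below are hypothetical, not SBA's actual baseline.

```python
# Minimal configuration management sketch: check that a proposed software
# change depends only on versions consistent with the planned operating
# environment. All names and versions are invented for illustration.

planned_environment = {"os": "2.1", "database": "8.0", "middleware": "5.2"}

def compatible(change):
    """A change is compatible if every dependency matches the planned baseline."""
    return all(planned_environment.get(component) == version
               for component, version in change["depends_on"].items())

change_request = {
    "id": "CR-101",
    "depends_on": {"database": "8.0", "middleware": "5.2"},
}

print(change_request["id"], "compatible:", compatible(change_request))
```

A request depending on an off-baseline version (say, database 9.0) would fail the same check, which is the kind of systematic gate a change management procedure would enforce before installation.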
Information security policies address the need to protect an organization's computer-supported resources and assets. Such protection ensures the integrity, appropriate confidentiality, and availability of an organization's data and systems. Key information security activities include risk assessment, awareness, controls, evaluation, and central management. Risk assessments consist of identifying threats and vulnerabilities to information assets and operational capabilities, ranking risk exposures, and identifying cost-effective controls. Awareness involves promoting knowledge of security risks and educating users about security policies, procedures, and responsibilities. Evaluation addresses monitoring the effectiveness of controls and awareness activities through periodic evaluations. Central management involves coordinating security activities through a centralized group. Our assessments of information security at SBA disclosed that policies and procedures did not exist for risk assessments and were in draft form for other key activities; and that practices were not performed for one critical activity, as shown in figure 4. SBA had not conducted periodic risk assessments for its mission-critical systems; the agency had only recently conducted a security workload assessment and a risk assessment for one system. Training and education had not been provided to promote security awareness and responsibilities of employees and contract staff. Further, security management responsibilities were fragmented among all of SBA's field and program offices. SBA's computer security procedures for systems certification and accreditation were in draft form. Without security policies, SBA faces increased risk that critical information and assets may not be protected from inappropriate use, alteration, or disclosure. 
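The risk assessment activity defined above--identify threats and vulnerabilities, rank risk exposures, and identify cost-effective controls--can be sketched in a few lines. Every threat name and dollar figure below is invented for illustration; real assessments would draw these from the agency's own systems and threat data.

```python
# Hypothetical sketch of risk assessment: rank exposures and flag
# cost-effective controls. All threats and figures are invented.

threats = [
    # (threat, annual likelihood, impact in $, annual cost of control in $)
    ("Unauthorized data disclosure", 0.10, 900_000, 50_000),
    ("Server outage",                0.40, 200_000, 120_000),
    ("Insider data alteration",      0.05, 400_000, 60_000),
]

# Exposure = likelihood x impact (an annualized loss expectancy).
assessed = [(name, likelihood * impact, control_cost)
            for name, likelihood, impact, control_cost in threats]

# Rank risk exposures, highest first.
assessed.sort(key=lambda entry: entry[1], reverse=True)

for name, exposure, control_cost in assessed:
    # In this simple model a control is cost-effective if it costs
    # less than the exposure it mitigates.
    verdict = "cost-effective" if control_cost < exposure else "not cost-effective"
    print(f"{name}: exposure ${exposure:,.0f}, control ${control_cost:,.0f} ({verdict})")
```

Ranking exposures this way is what lets an agency direct limited security resources to the controls whose expected loss reduction exceeds their cost.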
Without defined procedures, practices are likely to be inconsistent for such activities as periodic risk assessments, awareness training, implementation and effectiveness of controls, and evaluation of policy compliance. To address information security weaknesses, SBA has hired additional staff to develop procedures to implement computer security policies and to manage computer accounts and user passwords. These staff are also responsible for performing systems security certification reviews of new and existing IT systems. In addition, SBA planned to finish development and testing of a comprehensive disaster recovery and business continuity plan. To build on the actions taken and planned by SBA and ensure that a comprehensive, effective security program is established, we recommended that the SBA Administrator direct the chief information officer to establish policies and procedures and define and implement processes to ensure that periodic risk assessments are conducted to determine and rank risk exposures; an effective security awareness program is implemented; policies and procedures are updated, with new controls implemented to address newly discovered threats; the development and testing of SBA's comprehensive disaster recovery and business continuity plan is completed, then periodically tested and updated; security evaluations are conducted to ascertain whether protocols in place are sufficient to guard against identified vulnerabilities, and if not, remedial action is taken as needed; and a centralized mechanism is developed to monitor and enforce compliance by employees, contract personnel, and program offices. The concept of human capital centers on viewing people as assets whose value to an organization can be enhanced through investment. 
To maintain and enhance the capabilities of IT staff, an agency should conduct four basic activities: (1) assess the knowledge and skills needed to effectively perform IT operations to support the agency's mission and goals; (2) inventory the knowledge and skills of current IT staff to identify gaps in needed capabilities; (3) develop strategies and implementation plans for hiring, training, and professional development to fill the gap between requirements and current staffing; and (4) evaluate progress made in improving IT human capital capability, using the results of these evaluations to continuously improve the organization's human capital strategies. Our assessments of SBA's human capital processes disclosed that policies and procedures did not exist and that SBA was not performing critical activities, as shown in figure 5. SBA had not established policies and procedures to identify and address its short- and long-term requirements for IT knowledge and skills. Similarly, it had not conducted an agencywide assessment to determine gaps in IT knowledge and skills in order to develop workforce strategies and implementation plans. Further, SBA had not evaluated its progress in improving IT human capital capabilities or used data to continuously improve human capital strategies. Without established policies and procedures for human capital management, SBA lacks assurance that it is adequately identifying the IT knowledge and skills it needs to support its mission, is developing appropriate workforce strategies, or is effectively planning to hire and train staff to efficiently perform IT operations. To address IT human capital management weaknesses, SBA planned to conduct a comprehensive assessment of training needs with a special emphasis on the needs of its IT staff. The survey is scheduled for fiscal year 2001 and will be conducted at both headquarters and SBA field offices. 
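The gap analysis in activities (1) through (3) above--assess needed knowledge and skills, inventory current staff, and target hiring and training at the shortfall--can be illustrated with a minimal sketch. The skill categories and headcounts are hypothetical, not SBA's actual workforce data.

```python
# Minimal IT skills gap analysis sketch (activities 1-3 above).
# Skill names and staffing counts are hypothetical.

# Activity 1: knowledge and skills needed to support the mission.
required = {"network security": 4, "database administration": 3, "web development": 2}

# Activity 2: inventory of current IT staff skills.
current = {"network security": 1, "database administration": 3, "mainframe operations": 2}

# Gap = required headcount minus current headcount, per skill.
gaps = {skill: need - current.get(skill, 0)
        for skill, need in required.items()
        if need > current.get(skill, 0)}

# Activity 3: a hiring/training strategy would target the largest gaps first.
for skill, shortfall in sorted(gaps.items(), key=lambda g: g[1], reverse=True):
    print(f"{skill}: short {shortfall} staff")
```

Activity (4) would close the loop: rerun the same comparison after each hiring and training cycle and use the shrinking (or persisting) gaps to adjust the workforce strategy.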
While SBA's planned assessment should be useful, a more comprehensive program is needed to ensure that it hires, develops, and retains the people it needs to effectively carry out IT activities. To improve IT human capital management practices, we recommended that the SBA Administrator direct the chief information officer to establish policies and procedures and define and implement processes to ensure that SBA's IT and knowledge skills requirements are identified; periodic IT staff assessments are performed to identify current knowledge levels; workforce strategies are developed and plans implemented to acquire and maintain the necessary IT skills to support the agency mission; and SBA's human capital capabilities are periodically evaluated and the results used to continually improve agency strategies. In summary, for SBA to enhance its ability to carry out its mission, it will require solid IT solutions to help it identify and address operational problems. However, many of SBA's policies and procedures for managing IT have either not been developed or were in draft form, and its practices generally did not adhere to defined processes. While the agency plans to improve its processes, additional actions are needed in each key IT process area to institutionalize agencywide industry standard and best practices for planning, monitoring, and evaluation of IT activities. SBA has agreed with all of our recommendations and has stated that efforts are underway to address them. SBA has also emphasized that it is committed to improving IT management practices. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other members of the Committee may have at this time. For information about this testimony, please contact Joel C. Willemssen at (202) 512-6253 or by e-mail at [email protected]. Individuals making key contributions to this testimony included William G. Barrick, Michael P. Fruitman, James R. Hamilton, and Anh Q. Le. 
Pursuant to a congressional request, GAO discussed the Small Business Administration's (SBA) management of information technology (IT), focusing on five key areas: (1) investment management; (2) architecture; (3) software development and acquisition; (4) information security; and (5) human capital management. GAO noted that: (1) SBA had made progress in establishing an investment review board and is beginning to define an investment selection process; (2) however, it had not yet established IT investment management policies and procedures to help identify and select projects that will provide mission-focused benefits and maximum risk-adjusted returns; (3) likewise, SBA had not yet defined processes for investment control and evaluation to ensure that selected IT projects will be developed on time, within budget, and according to requirements, and that these projects will generate expected benefits; (4) the agency had performed only limited reviews of major IT investments, and these reviews were ad-hoc since little data had been captured for analyzing benefits and returns on investment; (5) SBA had made progress with its target IT architecture by describing its core business processes, analyzing information used in its business processes, describing data maintenance and data usage, identifying standards that support information transfer and processing, and establishing guidelines for migrating current applications to the planned environment; (6) however, procedures did not exist for change management to ensure that new systems installations and software changes would be compatible with other systems and SBA's planned operating environment; (7) SBA lacked policies for software development and acquisition to help produce information systems within the cost, budget, and schedule goals set during the investment management process that at the same time comply with the guidance and standards of its IT architecture; (8) an existing systems development methodology was being 
adopted to replace outdated guidelines that lacked key processes for software development; (9) GAO's review of the selected software projects indicated that SBA's practices were typically ad-hoc for project planning, project tracking and oversight, quality assurance, and configuration management; (10) SBA had not conducted periodic risk assessments for its mission-critical systems; (11) the agency had only recently conducted a security workload assessment and a risk assessment for one system; (12) training and education had not been provided to promote security awareness and responsibilities of employees and contract staff; (13) SBA had not established policies and procedures to identify and address its short- and long-term requirements for IT knowledge and skills; and (14) further, SBA had not evaluated its progress in improving IT human capital capabilities or used data to continuously improve human capital strategies.
Treasury is the primary federal agency responsible for the economic and financial prosperity and security of the United States, and as such is responsible for a wide range of activities, including advising the President on economic and financial issues, promoting the President's growth agenda, and enhancing corporate governance in financial institutions. To accomplish its mission, Treasury is organized into departmental offices and operating bureaus. The departmental offices are primarily responsible for the formulation of policy and management of the department as a whole, while the nine operating bureaus--including the Internal Revenue Service and the Bureau of Public Debt--carry out specific functions assigned to Treasury. Figure 1 shows the organizational structure of the department. Information technology plays a critical role in helping Treasury meet its mission. For example, the Internal Revenue Service relies on a number of information systems to process tax returns, account for tax revenues collected, send bills for taxes owed, issue refunds, assist in the selection of tax returns for audit, and provide telecommunications services for business activities, including the public's toll-free access to tax information. To assist with delinquent debt collections, Treasury is engaged in the development of the FedDebt system. In fiscal year 2008, Treasury plans to spend approximately $3 billion for 234 IT investments-- including about $2 billion (about 71 percent) for 60 major investments. In 2004, we identified weaknesses in Treasury's IT investment management processes. For example, Treasury did not describe or document work and decision-making processes for agencywide board(s). Additionally, it did not use the IT asset inventory as part of managerial decision making. As a result of these and the other identified weaknesses, we made recommendations to the Secretary of the Treasury to improve the department's IT investment management processes. 
In 2007, we reported that Treasury had made progress in establishing many of the practices needed to build an investment foundation and manage its products as a portfolio. However, we identified additional investment management weaknesses. Specifically, the department lacked an executive investment review board that was actively engaged in the investment management process. As a result of these weaknesses, we made recommendations to Treasury for strengthening its investment management capability. In response, Treasury stated that it would take steps to strengthen its investment board operations and oversight of IT resources and programs. For example, the department recently established an executive-level investment review board. In July 2008, we reported that Treasury's rebaselining policy fully addressed one of five practices leading organizations include in their policies and partially addressed the remaining practices. Since the time of our review, Treasury has improved its rebaselining policies and procedures to be more consistent with those of leading organizations. Several of Treasury's projects have been deemed to be poorly planned and managed by OMB and have warranted inclusion on OMB's Management Watch and High Risk Lists. In recent testimony summarizing our analysis of projects on these lists, we reported that Treasury had 4 projects on the Management Watch List as of July 2008, including one on the list for the fourth consecutive year. We also reported that the department had 21 high-risk projects determined to be poorly performing, most of them because of cost and schedule variances exceeding 10 percent. Pulling together essential cost, schedule, and technical information in a meaningful, coherent fashion is a challenge for most programs. In addition to comparing budgeted to actual costs, EVM measures the value of work accomplished in a given period. 
This technique compares the earned value with the planned value of work scheduled and with the actual cost of work accomplished for that period. Differences in these values are measured in both cost and schedule variances. Cost variances compare the earned value of the completed work with the actual cost of the work performed. For example, if a contractor completed $5 million worth of work and the work actually cost $6.7 million, there would be a -$1.7 million cost variance. Schedule variances are also measured in dollars, but they compare the earned value of the work completed with the value of work that was expected to be completed. For example, if a contractor completed $5 million worth of work at the end of the month but was budgeted to complete $10 million worth of work, there would be a -$5 million schedule variance. Positive variances indicate that activities are costing less or are completed ahead of schedule, whereas negative variances indicate activities are costing more or are falling behind schedule. These cost and schedule variances can be used to estimate the cost and time needed to complete a program. Without knowing the planned cost of completed work (that is, the earned value), it is difficult to determine a program's true status. Earned value provides information necessary for understanding the health of a program; it provides an objective view of program status. As such, it can alert program managers to potential problems sooner than expenditures alone can, thereby reducing the chance and magnitude of cost overruns and schedule delays. Moreover, EVM directly supports the institutionalization of key processes for acquiring and developing systems and the ability to effectively manage investments--areas which are often found to be inadequate based on our assessments of major IT investments. 
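The variance calculations described above, using the same figures as the examples in the text, can be written out directly. The estimate-at-completion formula (budget divided by the cost performance index) is one common technique for projecting completion cost, and the $20 million total budget is a hypothetical figure added for illustration.

```python
# Earned value calculations using the figures from the examples above.
# The $20M budget and the EAC formula (budget / CPI) are illustrative.

def cost_variance(ev, ac):
    """CV = earned value - actual cost; negative means over cost."""
    return ev - ac

def schedule_variance(ev, pv):
    """SV = earned value - planned value; negative means behind schedule."""
    return ev - pv

ev, ac, pv = 5.0, 6.7, 10.0   # $ millions, as in the text's examples

cv = cost_variance(ev, ac)      # -1.7: work cost $1.7M more than it earned
sv = schedule_variance(ev, pv)  # -5.0: $5M of planned work not yet earned

# A performance index and a simple estimate at completion (EAC):
cpi = ev / ac                     # cost performance index; < 1 means over cost
budget_at_completion = 20.0       # hypothetical total program budget
eac = budget_at_completion / cpi  # projected total cost at current efficiency

print(f"CV = {cv:+.1f}M, SV = {sv:+.1f}M, CPI = {cpi:.2f}, EAC = {eac:.1f}M")
```

This is the sense in which earned value data alert managers earlier than expenditures alone: a CPI below 1.0 at the 25 percent point already projects the overrun at completion, rather than revealing it at the end.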
Because of the importance of ensuring quality earned value data, in May 1998 the American National Standards Institute (ANSI) and the Electronics Industries Alliance (EIA) jointly established a national standard for EVM systems. This standard delineates 32 guidelines on how to establish a sound EVM system, ensure that the data coming from the system are reliable, and use the earned value data to manage the program. See appendix III for details on the 32 guidelines. In June 2002, OMB's Circular A-11 included the requirement that agencies use a performance-based acquisition management system based on the May 1998 ANSI/EIA Standard to obtain timely information regarding the progress of capital investments. This requirement was restated in subsequent versions of the circular and, in August 2005, OMB issued a memorandum that outlined steps that agencies must take for all major and high-risk development projects to better ensure improved execution and performance and to promote more effective oversight through the implementation of EVM. Specifically, this guidance directs agencies to: 1. develop comprehensive policies to ensure that agencies are using EVM to plan and manage development activities for major IT investments, 2. include a provision and clause in major acquisition contracts or agency in-house project charters directing the use of an EVM system that is compliant with the ANSI standard, 3. provide documentation demonstrating that the EVM system complies with the ANSI standard, 4. conduct periodic surveillance reviews, and 5. conduct integrated baseline reviews on individual programs to finalize the cost, schedule, and performance goals. Building on OMB's guidance, in July 2007, we issued an exposure draft on best practices for estimating and managing program costs. This draft highlights policies and practices adopted by leading organizations to implement an effective EVM program. 
Specifically, the guidance identifies the need for organizational policies to require clear criteria for which programs are required to use EVM, compliance with the ANSI standard, a standard product-oriented structure for defining work products, integrated baseline reviews, specialized training, criteria and conditions for rebaselining programs, and an ongoing surveillance function. In addition, the guidance identifies key practices that individual programs can use to ensure that they establish a sound EVM system, that the earned value data are reliable, and that they are used to support decision making. OMB refers to this guide as a key reference manual for agencies in its 2006 Capital Programming Guide. Treasury's approach to EVM involves several entities, including the Office of the Chief Information Officer (OCIO) and the Office of the Procurement Executive--both of which are under the Assistant Secretary for Management and Chief Financial Officer--and Capital Planning and Investment Control (CPIC) desk officers. Responsibility for the administration and maintenance of Treasury's EVM policy lies with the OCIO. Specifically, the CPIC group within that office supports the department's investment management oversight process. CPIC desk officers are responsible for oversight of one or more bureaus and serve as the bureau CPIC coordinator's primary point of contact, responsible for scoring exhibit 300s and coordinating information sharing with the departmental budget office and other critical partners. Further, they develop bureau-level IT portfolio expertise and provide input and recommendations to the bureaus, Treasury's CIO, and Treasury's Investment Review Board. 
Working with the OCIO to identify acquisitions which require earned value management, the Office of the Procurement Executive is responsible for ensuring that the identified acquisitions throughout Treasury and its bureaus contain EVM requirements that are consistent with the Federal Acquisition Regulation. According to agency officials, 40 investments are currently using EVM. Project managers and contractors are required to gather the monthly costs and progress associated with each of their investments. The information gathered includes the planned value, actual costs, and earned value. This information is analyzed and used for corrective actions at the bureau level. Quarterly, the bureaus forward investment performance reports to the OCIO's CPIC office, which reviews them and forwards summaries to Treasury's Technical Investment Review Board. In January 2008, Treasury convened an EVM working group, which has representation from every bureau. According to the CPIC Director, the working group has several objectives including establishing (1) level of reporting for contractors and government employees based on thresholds; (2) rule sets, processes, and procedures for the development of work breakdown structures, integrated baselines, standard roll-up into milestones, and the use of EVM systems at the bureaus; (3) bureau monthly recordkeeping requirements; (4) standard procedures for quarterly uploading of data from the bureaus into Treasury's automated investment management tool; and (5) requirements for maintaining documentation to support the project manager validations and bureau CIO certifications of cost, schedule, and performance data for major and nonmajor investments. According to the CPIC Director, the working group is also working on revising the department's EVM policy. While Treasury has established policy to guide its implementation of EVM, key components of this policy are not fully consistent with best practices. 
Without a comprehensive policy, the department risks implementing policies inconsistently and using inaccurate cost and schedule performance data. We recently reported that leading organizations establish EVM policies that: establish clear criteria defining which programs are to use EVM; require programs to comply with a national ANSI standard; require programs to use a standard structure for defining work products; require programs to conduct detailed reviews of expected costs, schedules, and deliverables (called an integrated baseline review); require and enforce EVM training; define when programs may revise cost and schedule baselines (called rebaselining); and require system surveillance--routine validation checks to ensure that major acquisitions are complying with agency policies and standards. Table 1 further describes these seven key components of an effective EVM policy. In December 2005, Treasury developed the EVM Policy Guide, which provides an approach for implementing EVM requirements for the department's major investments. The policy currently in place fully addresses three of the seven components, partially addresses three, and does not address one (see table 2). Specifically, Treasury has policies and guidance that fully address criteria for implementing EVM on all major investments, for the conduct of integrated baseline reviews, and for rebaselining. Criteria for implementing EVM on all major investments: The department's policy requires all of its major development, modernization, and enhancement investments to use EVM. Investments in steady-state (i.e., those with no development, modernization, or enhancement milestones) and those ending prior to September 2007 were not required by the department to implement the EVM requirement. 
Integrated baseline review: In order to verify whether the performance measurement baseline is realistic and to ensure that the government and contractor mutually understand program scope, schedule, and risks, Treasury's policy calls for an integrated baseline review. According to the policy, this review should be completed as soon as possible but no later than 6 months after the contract is awarded. Furthermore, another review may be required following any significant contract modifications. Rebaselining criteria: Treasury developed a rebaselining policy which specifies that a valid reason for requesting a new baseline must be clearly understood and documented. The policy also specifies acceptable reasons for an investment team to request a rebaseline. Further, to submit a rebaseline request, investment teams are required to explain why the current plan is no longer feasible and develop realistic cost and schedule estimates for remaining work that has been validated and spread over time to the new plan. However, Treasury's policy and guidance do not fully address the best practices represented by the following three key components: addressing compliance with the ANSI standard, defining a meaningful structure for defining work products, and conducting system surveillance reviews. Training is not addressed by the Treasury policy. Compliance with the ANSI standard: Treasury policy states that major investments are to comply with ANSI standards. Further, it outlines processes and guidelines to assist its bureaus in achieving ANSI-compliant processes. However, the policy lacks sufficient detail for addressing some of the criteria defined in the standard, including the use of standard methods for EVM data collection across the department and cost performance reporting. For example, the policy does not discuss the use of templates or tools to help ensure that EVM data are collected consistently and reliably. 
Furthermore, the policy does not discuss what cost performance report formats are to be used. Until Treasury's policy includes a methodology that standardizes data collection and reporting, data integrity and reliability may be in jeopardy and management may not be able to make informed decisions regarding the investments and their next steps. Standard structure for defining the work products: Treasury's EVM policy calls for a product-oriented work breakdown structure that identifies and documents all activities associated with the investment. However, it does not require the use of common elements in its development. According to the CPIC Director, Treasury's EVM working group plans to establish rule sets, processes, and procedures for the development of work breakdown structures. Until Treasury's policy provides more guidance on the systematic development and documentation of work breakdown structures, including the incorporation of standardized common elements, it will be difficult to ensure that the entire effort is consistently included in the work structure and that investments will be planned and managed appropriately. System surveillance: According to Treasury's policy, the contractor's EVM system is to be validated using the industry surveillance approach identified by the National Defense Industrial Association's Surveillance Guide. Additionally, Treasury is to require clear evidence that the system continues to remain compliant or that the contractor has brought the system back into compliance. However, the policy lacks guidance on conducting surveillance reviews on the government's (i.e., the department's) EVM system. Until Treasury's policy specifies reviews of the government's systems, Treasury risks not being able to effectively manage cost, schedule, and technical performance of its major investments. Training requirements: Treasury's policy does not specify EVM training requirements for program management team members or senior executives. 
Furthermore, the policy does not require the agency to maintain training logs confirming that all relevant staff have been appropriately trained. Until the department establishes policy for EVM training requirements for relevant personnel, it cannot effectively ensure that its program staff have the appropriate skills to validate and interpret EVM data and that its executives fully understand the data they are given in order to ask the right questions and make informed decisions. According to the CPIC Director, Treasury's EVM working group, which was established in January 2008, is working on the development of a revised EVM policy, which, according to the Deputy Assistant Secretary for Information Systems and Chief Information Officer, is expected to be finalized by October 2008. Addressing these weaknesses could help Treasury optimize the effective use of EVM. While the six programs we reviewed were all using EVM, none had fully implemented any of the practices for establishing a comprehensive EVM system, ensuring that the data resulting from the system are reliable, or using earned value data for decision-making purposes. These weaknesses exist in part because, as previously noted, Treasury's policy does not fully address key elements and because the department does not have a mechanism to enforce its implementation. Until Treasury adequately implements EVM, it faces an increased risk that some programs will experience cost and schedule overruns or deliver less capability than planned. In our work on best practices, we identified three key management areas that leading organizations use to manage their acquisitions: establishing a comprehensive EVM system, ensuring reliable data, and using earned value data to manage the investment (see table 3). Table 4 provides a summary of how each investment is using EVM in the key practices areas and is followed by our analysis of these areas. 
The investments we reviewed are the Financial Management Service's FedDebt and Financial Information and Reporting Standardization (FIRST); the Departmental Offices' DC Pension System to Administer Retirement (STAR); the Bureau of Public Debt's Treasury Automated Auction Processing System (TAAPS); and the Internal Revenue Service's Integrated Financial System/Core Financial System (IFS) and Enterprise Data Access Strategy (EDAS). These investments were identified by the department as major investments, and all had milestones in development, modernization, or enhancement at the time of our review. Appendix II includes information regarding the selection of these investments, and appendix IV provides a description of each. Comprehensive EVM systems were not consistently established to manage the six investments. Although aspects of a comprehensive system were present, none of the investments fully met all the best practices comprising this management area. For example, of the six investments, only IFS and STAR adequately defined the scope of effort using a work breakdown structure. Three investments developed a work breakdown structure; however, the work packages could not be traced back to EVM project management documents, such as the project management baseline, the work breakdown structure, and the statement of work or project charter. For example, although EDAS had detailed work breakdown structures, correlation could not be established among the work breakdown structure elements, the contract deliverables, and the elements being reported in the contract performance reports. Officials for the remaining investment--TAAPS--stated that there was a documented work structure; however, they did not provide evidence of this. As another example, performance measurement baselines were developed for five of the six investments. However, the baselines had noted weaknesses.
Specifically, four investments--FIRST, IFS, STAR, and TAAPS--had a baseline, but some elements were not included, such as planned costs for STAR. Further, for TAAPS, independent validation of the investment's baseline was not conducted. FedDebt had a performance measurement baseline, which underwent integrated baseline validation in March 2006. However, the validation indicated that there was no time-phased planned value at the individual contract level, nor was there a roll-up at the program level. No explanation was provided of how the monthly performance data on individual FedDebt projects were rolled up to the investment level as required by OMB. Further, EDAS did not have a time-phased budget baseline or a performance measurement baseline. None of the six investments fully implemented the steps to ensure data reliability from their EVM systems. Five partially implemented the steps, and one investment--EDAS--did not meet any of the steps. When executing work plans and recording actual costs, two of the six investments incorporated government costs with contractor costs. For example, FedDebt included both government and contractor costs in its quarterly reporting. However, while IFS had a mechanism for recording monthly government costs, it did not have a method that combined both contractor and government costs for review on a monthly basis. Also, few if any checks are performed to measure the quality of EVM data and, according to agency officials, Treasury currently focuses more on reporting the data than on their reliability. In addition, five of the six investments did not adequately analyze performance data and record the variances from the baseline. The IFS investment included monthly reviews of performance reports that included cost and schedule variances. The remaining investments conducted analyses of performance data but did not all provide documentation to show cost and schedule updates and variances.
For example, according to officials, TAAPS' cost and schedule variances were calculated at the project and program levels, but evidence of this could not be provided. Further, as part of its performance reporting, STAR did not calculate the cost variance and incorrectly calculated the schedule variance. None of the six investments fully implemented the two practices needed to ensure the use of EVM data for decision-making purposes. Specifically, EDAS did not take management action to mitigate risks identified through its EVM performance data, or update the performance measurement baseline as changes occurred; IFS addressed one of these practices, and the remaining investments only partially addressed them. In order to support management action to mitigate risks identified through EVM performance data--variance analysis, corrective action planning, and reviewing estimates at completion--the IFS project manager was provided with a monthly performance report that indicated when cost and/or schedule variances exceeded acceptable tolerances. Further, investment-level status was provided to bureau-level and agency-level management to allow them to make capital planning and investment control decisions. However, the remaining five investments did not fully take action to mitigate risks for a variety of reasons. For example, for FedDebt, although some monthly EVM data were included in quarterly reports, no documentation was provided on how such data were being used to manage at the project or investment level. A similar situation exists for the TAAPS investment where, although agency officials stated that meetings were routinely held to discuss performance issues, no evidence was provided that a systematic method existed to use EVM metrics for decision-making purposes.
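The cost and schedule variances discussed above follow from the standard earned value formulas. The sketch below shows those calculations; the dollar figures are illustrative only and are not drawn from any Treasury investment.

```python
# Standard earned value calculations behind the variance analysis
# discussed above. All inputs are in the same currency units; the
# sample figures are hypothetical.

def evm_metrics(pv, ev, ac, bac):
    """pv: planned value, ev: earned value, ac: actual cost,
    bac: budget at completion."""
    cv = ev - ac       # cost variance (negative = over cost)
    sv = ev - pv       # schedule variance (negative = behind schedule)
    cpi = ev / ac      # cost performance index
    spi = ev / pv      # schedule performance index
    eac = bac / cpi    # a common estimate-at-completion formula
    return {"CV": cv, "SV": sv, "CPI": cpi, "SPI": spi, "EAC": eac}

m = evm_metrics(pv=500_000, ev=450_000, ac=600_000, bac=2_000_000)
print(m["CV"], m["SV"])    # -> -150000 -50000
print(round(m["EAC"]))     # -> 2666667
```

Note that a schedule variance computed this way requires a time-phased planned value, which is one reason the missing baselines noted above undermine the analysis.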
Regarding the update of performance measurement baselines as changes occur, one investment team stated that it did not have any baseline changes; however, documentation showed that the schedule for the investment had been changed three times. In addition, although IFS maintained a log for tracking changes, we could not determine that these changes had been incorporated into the baseline. Further, according to officials, EDAS had a scope change in fiscal year 2007; however, the investment team was not able to provide documentation reflecting the corresponding change in the performance measurement baseline. The inconsistent application of EVM across investments exists in part because the department does not have a policy that fully addresses key components including training and system surveillance and because the department is leaving the implementation of the policy largely up to bureaus. For example, project management staff had not consistently received training, an item which is not addressed in the policy, and surveillance reviews, which are partially addressed in the policy, had not been performed for any of the investments. Furthermore, the department does not have a process for ensuring effective EVM implementation. However, in comments on a draft of this report, the Deputy Assistant Secretary for Information Systems and Chief Information Officer stated that the department is working with the bureaus to establish mechanisms and tools to ensure full compliance with the provisions of the updated EVM policy, which is to be finalized by October 2008. These mechanisms and tools would help address the implementation gaps we have identified. Treasury has established a policy that addresses criteria for implementing EVM, integrated baseline reviews, and project rebaselining consistent with best practices. 
However, it does not fully address other elements, including compliance with the ANSI standard and system surveillance, which are necessary for effective implementation. With regard to implementation, the department is not fully addressing key practices needed to effectively manage its critical investments. Specifically, none of the six programs we reviewed were fully implementing any of the practices associated with establishing a comprehensive EVM system, ensuring the reliability of the data resulting from the system, or using earned value data to make decisions. The gaps in implementation are due in part to the weaknesses with the policy and to the low level of oversight provided by the department. Until the department defines a comprehensive policy and establishes a process for ensuring effective EVM implementation, it will be difficult for Treasury to optimize the effectiveness of EVM as a management tool and consistently implement the fundamental practices needed to effectively manage its critical programs. To improve Treasury's ability to effectively implement EVM on its IT acquisition programs, we recommend that the Secretary of the Treasury direct the Assistant Secretary for Management, in collaboration with the Chief Information Officer, to take the following nine actions: Define a comprehensive EVM policy that specifies a methodology that standardizes EVM data collection and reporting compliant with the ANSI standard; a systematic approach to the development and documentation of work breakdown structures, including the incorporation of standardized common elements; guidance on conducting surveillance reviews on the government's EVM systems; and EVM training requirements for relevant personnel.
Implement a process for ensuring effective implementation of EVM throughout the department by establishing a comprehensive EVM system by, among other things, defining the scope of effort using a work breakdown structure that allows for traceability across EVM project management documents; ensuring the development of validated performance measurement baselines that include planned costs and schedules; ensuring that the data resulting from the EVM system are reliable, including executing the work plan and recording both government and contractor costs; ensuring that the program management team is using earned value data for decision-making by systematically using EVM performance metrics in making the ongoing monthly decisions required to effectively manage the investment; and properly documenting updates to the performance measurement baseline as changes to the cost and schedule occur. In written comments on a draft of this report, the Department of the Treasury's Deputy Assistant Secretary for Information Systems and Chief Information Officer generally agreed with our findings and stated that the department will issue a revised version of the EVM policy that will address our nine recommendations by October 2008. He also noted that the department is working with the bureaus to establish mechanisms and tools, including processes for conducting system surveillance and monitoring of EVM data, to ensure compliance with the policy. Treasury also provided technical comments, which we have addressed as appropriate. Treasury's written comments are reprinted in appendix I. We will be sending copies of this report to interested congressional committees, the Secretary of the Treasury, and other interested parties. In addition, the report will be available at no charge on our Web site at http://www.gao.gov. If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-9286 or by e-mail at [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objectives were to determine whether the Department of the Treasury and its key component agencies (1) have the policies in place to effectively implement earned value management (EVM) and (2) are adequately using EVM techniques to manage critical system investments. To assess whether Treasury has policies in place to effectively implement EVM, we analyzed Treasury and its component bureaus' policies and guidance that support EVM implementation departmentwide as well as on capital planning and investment control guidance. Specifically, we compared these policies and guidance documents to both Office of Management and Budget requirements and key best practices recognized within the federal government and industry for the implementation of EVM. These best practices are contained in an exposure draft version of our cost guide. We also interviewed key agency officials, including the Director for Capital Planning and Investment Control, to obtain information on the agency's ongoing and future EVM plans. To determine whether Treasury is adequately using EVM techniques to manage critical system investments, we reviewed 6 of the 40 systems the department required to use EVM. Specifically, we selected investments from each of the four component agencies identified as having eligible investments. We selected one investment from the Bureau of Public Debt, another from Departmental Offices, and two from the Financial Management Service and the Internal Revenue Service since they had a greater percentage of investments using EVM. 
With the exception of the Bureau of Public Debt, which had only one major investment, we selected investments based on (1) size, (2) EVM history (i.e., use of EVM for a long enough period of time to have some history of EVM data), and (3) completion date (i.e., those that would not end during the course of our review). The 6 projects selected were FedDebt and Financial Information and Reporting Standardization from the Financial Management Service, DC Pension System to Administer Retirement (STAR) from the Departmental Offices, Treasury Automated Auction Processing System (TAAPS) from the Bureau of Public Debt, and Integrated Financial System/Core Financial System and Enterprise Data Access Strategy from the Internal Revenue Service. Our review was not intended to be generalizable, but instead to illustrate the status of a variety of programs. To determine the extent of each program's implementation of sound EVM, we compared program documentation to the 11 fundamental EVM practices implemented on acquisition programs of leading organizations, as identified in the Cost Assessment Guide. We determined whether the program fully implemented, partially implemented, or did not implement each of the practices. Finally, we interviewed program officials to obtain clarification on how EVM practices are implemented and how the data are validated and used for decision-making purposes. Regarding the reliability of cost data, we did not test the adequacy of agency or contractor cost-accounting systems. Our evaluation of these cost data was based on what we were told by the agency and the information they could provide. We conducted this performance audit from August 2007 to July 2008 in Washington, D.C., in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Organizations must be able to evaluate the quality of an EVM system in order to determine the extent to which the cost, schedule, and technical performance data can be relied on for program management purposes. In recognition of this, the American National Standards Institute (ANSI) and the Electronics Industries Alliance (EIA) jointly established a national standard for EVM systems--ANSI/EIA 748-B (commonly referred to as the ANSI standard). This standard consists of 32 guidelines addressing organizational structure; planning, scheduling, and budgeting; accounting considerations; analysis and management reports; and revisions and data maintenance. These guidelines comprise three fundamental management functions for effectively using EVM: establishing a sound earned value management system, ensuring that the EVM data are reliable, and using earned value data for decision-making purposes. Table 5 lists the management functions and the guidelines. Below is a description of the six investments we reviewed to assess whether the department is adequately using EVM techniques to manage critical system investments. FedDebt supports the federal government's delinquent debt collection programs, which were centralized in the Financial Management Service (FMS) pursuant to the Debt Collection Improvement Act of 1996. FedDebt also supports Treasury's strategic goal to manage the U.S. Government's finances effectively and the FMS strategic goal to maximize collection of government delinquent debt by providing efficient and effective centralized debt collection services. FedDebt plans to integrate the collection services that FMS provides to Federal Program Agencies through its other programs. FIRST is intended to automate the maintenance and distribution of the U.S. Standard General Ledger accounting rules and guidance.
It also plans to integrate the general ledger guidance with the collection of all accounting trial balance data, thus providing a standardized method of collecting, storing, reporting, and analyzing such data. Furthermore, the investment is expected to facilitate accounting validations of the agency trial balance data to provide better feedback to agencies concerning the accuracy and consistency of these data. STAR is to assist Treasury and the District of Columbia Government by automating the determination of eligibility, the calculation of pension benefits, and the delivery of payments, thereby allowing for (1) increased accuracy of pension benefit calculations and (2) improved customer service. Key functionality for this investment includes serving annuitants and survivors of the Judges Pension Plan; making benefit payments to 11,000 teachers, police, and firefighters who retired before July 1997, as well as their survivors; and automatically calculating the gross annuity and split benefit payment for teachers, police, and firefighters, to serve those annuitants who retired after June 1997. TAAPS is intended to ensure that all auction-related operations are carried out flawlessly and securely. Key among auction activities are the announcement of upcoming Treasury auctions; bid submission and processing; calculation of awards; publication of results; creation and dissemination of settlement wires; creation of accounting reports and reports needed for auction analysis; and the storage of all securities-, bidder-, and auction-related information. TAAPS is expected to make numerous intersystem interfaces and manual processes obsolete by consolidating auction processing requirements into one system and providing appropriate backup and disaster recovery systems and services.
IFS is intended to operate as the Internal Revenue Service's (IRS) new accounting system of record, replacing IRS's core financial systems, including expenditure controls, accounts payable, accounts receivable, general ledger, budget formulation, and purchasing controls. IRS intends to upgrade to software that provides federal accounting functionality. By migrating to federal accounting practices, IFS is to provide benefits such as eliminating current work-around processes, improving project management capability, and enhancing budget reports. EDAS is intended to consolidate data from multiple Business Systems Modernization applications and produce a consolidated data repository source to be used for issue detection and case selection. The goal is to develop integrated data solutions that allow IRS to retire duplicative and costly data extracts. The first major project is to develop an Integrated Production Model as a central repository for corporate data and make those data available to projects currently in development. Long-term benefits include the retirement of multiple systems and efficiency gains from improved processes. In addition to the contact named above, Sabine Paul, Assistant Director; Neil Doherty; Mary D. Fike; Nancy Glover; Sairah R. Ijaz; Rebecca LaPaze; and Paul B. Middleton made key contributions to this report.
In 2008, the Department of the Treasury (Treasury) plans to spend approximately $3 billion on information technology (IT) investments--the third largest planned IT expenditure among civilian agencies. To more effectively manage such investments, in 2005 the Office of Management and Budget required agencies to use earned value management (EVM). EVM is a project management approach that, if implemented appropriately, provides objective reports of project status, produces early warning signs of impending schedule delays and cost overruns, and provides unbiased estimates of a program's total costs. GAO was asked to assess whether the department and its key component agencies (1) have the policies in place to effectively implement EVM and (2) are adequately using EVM techniques to manage critical system investments. GAO compared agency policies to best practices identified in the Cost Assessment Guide and reviewed the implementation of key EVM practices for several investments. The Department of the Treasury's EVM policy is not fully consistent with best practices. Specifically, of seven best practices that leading organizations address in their policies, Treasury's policy fully addresses three, partially addresses three, and does not address the training component. According to the Director for Capital Planning and Investment Control, the department is currently working on revising its policy and, according to the Deputy Assistant Secretary for Information Systems and Chief Information Officer, expects to finalize it by October 2008. Until Treasury develops a comprehensive policy to guide its efforts, it will be difficult for the department to optimize the effectiveness of EVM as a management tool. The department and its bureaus are not fully implementing key EVM practices needed to effectively manage their critical system investments.
Specifically, the six programs at Treasury that GAO reviewed were not consistently implementing practices needed for establishing a comprehensive EVM system, ensuring that data from the system are reliable, and using the data to help manage the program. For example, when executing work plans and recording actual costs, a key practice for ensuring that the data resulting from the EVM system are reliable, only two of the six investments reviewed incorporated government costs with contractor costs. These weaknesses exist in part because Treasury's policy is not comprehensive and because the department does not have a process for ensuring effective EVM implementation. Unless the department consistently implements fundamental EVM practices, it may not be able to effectively manage its critical programs.
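A recurring weakness in the report above is work packages that cannot be traced back through the work breakdown structure. The minimal sketch below illustrates that kind of traceability check; the element names and dotted numbering scheme are hypothetical, not drawn from any Treasury investment or from Treasury's policy.

```python
# Minimal sketch of a product-oriented work breakdown structure (WBS)
# with a traceability check. Element names and numbering are illustrative.

def parent_of(element_id):
    """'1.2.3' -> '1.2'; top-level elements have no parent."""
    return element_id.rsplit(".", 1)[0] if "." in element_id else None

def untraceable(wbs):
    """Return elements whose parent is missing -- work that cannot be
    traced back through the structure to the total investment."""
    return [eid for eid in wbs
            if parent_of(eid) is not None and parent_of(eid) not in wbs]

wbs = {
    "1": "Total investment",
    "1.1": "System engineering",
    "1.2": "Software development",
    "1.2.1": "Case management module",
    "1.3.1": "Training materials",  # orphan: no '1.3' element defined
}
print(untraceable(wbs))  # -> ['1.3.1']
```

In practice the same idea extends to cross-checking WBS elements against contract deliverables and contract performance report line items, the correlation the report found missing for EDAS.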
NASA's Vision for Space Exploration calls for a return of humans to the Moon and eventual human spaceflight to Mars. In September 2005, NASA outlined an initial architecture for implementing the Vision in its Exploration Systems Architecture Study (ESAS). NASA is implementing this architecture under the Constellation program. Among the first major efforts of this program are the developments of new space flight systems--including the Ares I Crew Launch Vehicle and the Orion Crew Exploration Vehicle. Ares I and Orion are currently targeted for operation no later than 2015 (see fig. 1). As illustrated by figure 1 above, the Constellation program, including the Ares I and Orion projects, is approaching the end of the formulation phase of NASA's acquisition life cycle for spaceflight programs and projects. The purpose of the formulation phase is to establish a cost-effective program that is demonstrably capable of meeting the agency's objectives. The formulation phase concludes with the preliminary design review and a non-advocate review, which together mark the transition to the implementation phase. During the implementation phase, the program will execute plans developed during the formulation phase. Our work on best practices over the past decade has shown that success in large-scale development efforts like Constellation depends on establishing an executable business case before committing resources to a new product development effort. In its simplest form, a business case requires a balance between the concept selected to satisfy customer needs and the resources--technologies, design knowledge, funding, time, and management capacity--needed to transform the concept into a product. At the heart of a business case is a knowledge-based approach that requires that managers demonstrate high levels of knowledge as the program proceeds from technology development to system development and, finally, production.
Ideally, in such an approach, key technologies are demonstrated before development begins, the design is stabilized before prototypes are built or production begins, and testing is used to validate product maturity at each level. At each decision point, the balance among time, money, and capacity is confirmed. In essence, knowledge supplants risk over time. Having adequate knowledge about requirements and resources is particularly important for a program like Constellation because human spaceflight development projects are inherently complex, difficult, and costly. We have reported on several occasions that within NASA's acquisition framework, the preliminary design/non-advocate review--the hurdle marking transition from program formulation to program implementation--is the point at which development projects should have a sound business case in hand. NASA's Systems Engineering Policy states that the preliminary design review demonstrates that the preliminary design meets all system requirements with acceptable risk and within the cost and schedule constraints. NASA realized that the Orion project was not ready to complete the preliminary design review process as planned and delayed its initiation from summer 2008 to summer 2009. Furthermore, although NASA officially closed the Ares I preliminary design review process in September 2008, it deferred resolution of the thrust oscillation issue until the Constellation program preliminary design review in March 2010. The business case is the essential first step in any acquisition program that sets the stage for the remaining stages of a program, namely the business or contracting strategy and actual execution or performance. If the business case is not sound, execution may be subpar. This does not mean that all potential problems can be eliminated and perfection achieved, but rather that sound business cases can help produce better outcomes and better return on investment. 
If any one element of the business case is weak, problems are more likely in implementation. Thus far in the Constellation program, NASA's failure to establish a sound business case for both the Ares I and Orion projects early on is manifesting itself in schedule delays and cost increases. The Constellation program has not yet developed all of the elements of a sound business case needed to justify entry into implementation. Progress has been made; however, technical and design challenges are still significant, and until they are resolved NASA will not be able to reliably estimate the time and money needed to execute the program. In addition, cost issues and a poorly phased funding plan continue to hamper the program. Consequently, NASA is changing the acquisition strategy for the Orion project as the agency attempts to increase confidence in its ability to meet a March 2015 first crewed launch. However, technical design and other challenges facing the program are not likely to be overcome in time to meet the 2015 date, even with changes to scope and requirements. Technical and design challenges within the Constellation program are proving difficult, costly, and time intensive to resolve. The Constellation program tracks technical challenges in its Integrated Risk Management Application (IRMA). NASA procedures recommend that programs identify and track risks as part of continuous risk management. As of June 9, 2009, IRMA was tracking 464 risks for Ares I and Orion--207 high risks, 206 medium risks, and 51 low risks. We have reported on some of these areas of technical challenge in the past, including thrust oscillation, the thermal protection system, the common bulkhead, and the J-2X nozzle extension. In addition to these challenges, our recent work has highlighted other technical challenges, including Orion mass control, vibroacoustics, lift-off drift, the launch abort system, and meeting safety requirements.
While NASA has made progress in resolving each of these technical challenges, significant knowledge gaps remain in each of these areas. Descriptions of these technical challenges follow. Thrust oscillation, which causes shaking during launch and ascent, occurs in some form on every solid rocket engine. Last year, we reported that computer modeling indicated that there was a possibility that the thrust oscillation frequency and magnitude may be outside the limits of the Ares I design and could potentially cause excessive vibration in the Orion capsule. Agency officials stated that thrust oscillation is well understood and they are pursuing multiple solutions. These include incorporating a passive damping system inside the first stage solid rocket booster aft skirt that will act like a shock absorber during launch; adding a composite structure and springs between the first and second stages to isolate the upper stage and crew vehicle from the first stage; and possibly using the upper stage propellant fuel tanks to offset thrust oscillation in the first stage. Officials said that NASA will be unable to verify the success of these solutions until thrust oscillation occurs during an integrated flight. Officials noted that because thrust oscillation is not expected to occur in every flight, it is difficult to forecast when the solutions will be verified. The Orion vehicle requires a large-scale ablative heat shield, at the base of the spacecraft, to survive reentry from earth orbit. These heat shields burn up, or ablate, in a controlled fashion, transporting heat away from the crew module during its descent through the atmosphere. NASA is using an ablative material derived from the substance used in the Apollo program. After some difficulties, NASA was successful in recreating the material.
Because it uses a framework with many honeycomb-shaped cells, each of which must be individually filled without voids or imperfections, it may be difficult to repeatedly manufacture to consistent standards. According to program officials, during the Apollo program the cells were filled by hand. The contractor plans to automate the process for the Orion Thermal Protection System, but this capability is still being developed. The common bulkhead separates the hydrogen and oxygen fuel within the Ares I upper stage fuel tank. The initial Ares I design employed a simpler two-tank configuration with lower manufacturing costs but did not meet mass requirements. According to project officials, the common bulkhead represents the critical path in both the development and manufacturing of the upper stage. Lessons learned from the Apollo program indicate that common bulkheads are complex and difficult to manufacture and recommend against their use. According to NASA officials, the difficulty of designing and manufacturing common bulkheads stems from the sheer size of components and the tight tolerances to which they must be manufactured. To accelerate the manufacturing process NASA is exploring using an oven with a vacuum bag instead of an autoclave to bond and cure the metallic and composite materials used in the manufacture of the common bulkhead. If this process proves unsuccessful, the program may encounter schedule delays. We have reported in prior years that although the J-2X engine is based on the J-2 and J-2S engines used on the Saturn V and leverages knowledge from subsequent engine development efforts, the number of planned changes is such that, according to NASA review boards, the effort essentially represents a new engine development. A risk within this development is a requirement for a nozzle extension to meet performance requirements. NASA originally planned to pursue a composite nozzle. 
However, NASA eliminated the composite nozzle extension from the J-2X design because of cost and other considerations, and went with a unique aluminum alloy design, which, according to agency officials, should reduce costs but has the potential to decrease engine performance and increase mass. Analysis indicates that the alloy nozzle is more likely to be affected by heat than a composite nozzle. In essence, while the alloy nozzle should withstand the heat environment, the composite nozzle allowed for improved performance margins. According to officials, to mitigate the potential problem, NASA is using a proven aluminum alloy with a honeycomb design, similar structurally to the Space Shuttle external tank, which will reduce weight. Contractor officials stated that they will continue to modify the nozzle design as test results are received and analyzed.

Controlling for mass has led to significant design changes to the Orion vehicle. Our previous work has shown that controlling for mass is a key factor in the development of space systems. As the mass of a particular system increases, the power or thrust required to launch that system will also increase. This could result in the need to develop additional power or thrust capability to lift the system, leading to additional costs, or to stripping down the vehicle to accommodate current power or thrust capability. For example, NASA went through the process in 2007 to zero-base the design for the Orion to address mass concerns. In its efforts to reduce the mass of the Orion vehicle, NASA chose to move from a nominal land landing to a nominal water landing, which reduced mass by eliminating air bags and, according to officials, by reducing the number of parachutes. NASA also incorporated jettisonable, load-bearing fairings into the Orion's service module design that, according to officials, saved 1,000 pounds.
This change, however, increased development risk because the fairing design has no historical precedent and the fairing panels may not deploy properly and could recontact the Orion vehicle or the Ares I rocket after they are jettisoned.

Another issue related to vibration is vibroacoustics, the pressure of the acoustic waves produced by the firing of the Ares I first stage and by the rocket's acceleration through the atmosphere, which may cause unacceptable structural vibrations throughout Ares I and Orion. According to agency officials, NASA is still determining how these vibrations and acoustic environments may affect the vehicles. NASA is concerned that severe vibroacoustics could force NASA to qualify Ares I and Orion components to higher vibration tolerance thresholds than originally expected. For example, if current concerns are realized, key subsystems within the Upper Stage would be unable to meet requirements, would fail qualification testing, and would have to be redesigned.

Analysis of the Ares I flight path as it lifts off from the launch pad indicates the rocket may drift during launch and could possibly hit the launch tower or damage the launch facilities with the rocket plume. Factors contributing to lift-off drift include wind speed and direction, misalignment of the rocket's thrust, and duration of lift-off. NASA plans to establish a clear, safe, and predicted lift-off drift curve by steering the vehicle away from the launch tower and not launching when southerly winds exceed 15 to 20 knots.

NASA continues to address challenges in designing the launch abort system, which pulls the Orion capsule away from the Ares I launch vehicle in the case of a catastrophic problem during launch. The Orion contractor had trouble finding a subcontractor who could design and build a working attitude control motor that steers the system during an abort.
According to agency officials, previous attitude control motors have had 700 pounds of thrust, while the requirement for this attitude control motor is 7,000 pounds of thrust. Developing a steerable attitude control motor with high levels of thrust and long burn durations is proving to be a difficult technical challenge. A year after the initial contract was awarded, the first subcontractor did not have a viable design and had to be replaced. The current subcontractor, however, is making progress. For example, although the valves used by the complex steering system failed during high-thrust testing in April 2008, redesigned valves have subsequently passed two high-thrust tests.

Orion's safety requirements allow no more than one loss-of-crew event in 1,700 flights and one loss-of-mission event for every 250 flights for the ISS mission. According to Orion officials, these requirements are an order of magnitude higher than the Space Shuttle's safety requirements, were arbitrarily set by ESAS, and may be unattainable. According to the Constellation program manager, NASA has added robustness to current systems as well as redundant systems to increase safety margins. However, these added redundancies and system robustness have added mass to the system.

The technical challenges presented here do not capture all of the risks, technical or programmatic, that the Constellation program faces. As noted earlier, there are over 200 risks categorized as "high" for the Ares I/Orion programs, meaning that if not successfully mitigated, these risks (1) are either nearly certain, highly likely, or may occur, and (2) will have major effects on system cost, schedule, performance, or safety.
These risks range in nature from highly complex technical risks, such as those noted above, to straightforward programmatic risks related to areas such as transitioning support work from the Marshall Space Flight Center to Michoud Assembly Facility for long-term vehicle production, compressing the software development cycle for the Orion vehicle, and creating a test program for Orion's communication and tracking system.

The Constellation program's poorly phased funding plan has affected the program's ability to deal with technical challenges. In our October 2007 report, we noted that NASA initiated the Constellation program recognizing that the agency's total budget authority would be insufficient to fund all necessary activities in fiscal years 2009 and 2010. NASA's funding strategy relied on the accumulation of a large rolling budget reserve in fiscal years 2006 and 2007 to fund Constellation activities in fiscal years 2008, 2009, and 2010. Thereafter, NASA anticipated that the retirement of the space shuttle program in 2010 would free funding for the Constellation program. In our October 2007 report, we noted that NASA's approach to funding was risky and that the approved budget profile at that time was insufficient to meet Constellation's estimated needs. The Constellation program's integrated risk management system also identified this strategy as high risk and warned that funding shortfalls could occur in fiscal years 2009 through 2012, resulting in planned work not being completed to support schedules and milestones. According to project officials, these shortfalls limited NASA's ability to mitigate technical risks early in development and precluded the orderly ramp-up of workforce and developmental activities. According to the Constellation program manager, these funding shortfalls are reducing his flexibility to resolve technical challenges.
The Constellation program tracks unfunded risk mitigation--engineering work identified as potentially needed but not currently funded--as cost threats in IRMA. The Constellation IRMA system currently tracks 192 cost threats for the Ares I and Orion projects totaling about $2.4 billion through fiscal year 2015. Of this $2.4 billion, NASA classifies 35 threats valued at about $730 million as likely to be needed, 54 threats valued at about $670 million as may or may not be needed, and 103 threats valued at about $1 billion as not likely to be needed. Our analysis indicates these cost threats may be understated. For example, of the 157 threats classified as may or may not be needed or not likely to be needed, IRMA likelihood scores indicate that 69 cost threats worth about $789 million are either highly likely or nearly certain to occur. Some examples of cost threats include $4.7 million to develop and mature Orion's data network technology and $12.5 million for an Upper Stage and First Stage separation test.

The costs of the Constellation program's developmental contracts have increased as NASA added new effort to resolve technical and design challenges. Constellation program officials and contractor cost reports indicate that the new effort has increased the value of the Constellation program's developmental contracts from $7.2 billion in 2007 to $10.2 billion in June 2009. Some of these modifications remained undefinitized for extended periods as NASA worked through design issues and matured program requirements in response to technical challenges. Undefinitized contract actions authorize contractors to begin work before reaching a final agreement on contract terms. By allowing undefinitized contract actions to continue for extended periods, NASA loses its ability to monitor contractor performance because the cost reports are not useful for evaluating the contractor's performance or for projecting the remaining cost of the work under contract.
With a current, valid baseline, the reports would indicate when cost or schedule thresholds had been exceeded, and NASA could then require the contractor to explain the reasons for the variances and to identify and take appropriate corrective actions. Yet NASA allowed high-value modifications to the Constellation contracts to remain undefinitized for extended periods--in one instance, more than 13 months.

In August 2008, when faced with cost increases and funding shortfalls, the Constellation program responded by reducing program reserves and deferring development effort and test activities. These changes resulted in a minimized flight test program that was so success-oriented there was no room for test failures. During the course of our review, NASA test officials expressed multiple concerns about the test approach the program was then pursuing. NASA test officials also expressed concerns about the sufficiency of planned integrated system flight testing. NASA was planning only one integrated system flight test prior to the first crewed flight. Officials stated that while NASA would have been able to address each of the programs' specific test objectives during the planned flight tests, additional integrated system flight tests could have provided the agency increased confidence that the system performed as planned and allowed the agency the opportunity to design and implement solutions to performance problems without affecting the first crewed flight. According to agency officials, any problems encountered during integrated system flight testing could lead to significant delays in the first crewed flight. Test officials were also concerned that the highly concurrent test schedule had significant overlap between component qualification and fabrication of flight hardware. This concurrency could have resulted in schedule slips and increased costs if any component failed qualification tests.
Our past work indicates that it is unlikely that the program will complete its test program without encountering developmental problems or test failures. The discovery of problems in complex products is a normal part of any development process, and testing is perhaps the most effective tool for discovering such problems. According to the Constellation program manager, the test plan strategy for the Constellation program is currently evolving as the program reshapes its acquisition strategy to defer all work on lunar content beyond the March 2015 first crewed flight. The test strategy is likely to continue to evolve until the Constellation program's Systems Integration Plan is finalized when the project enters the implementation phase.

In response to technical challenges and cost and funding issues, NASA is changing the Orion project acquisition strategy. In December 2008, NASA determined that the current Constellation program was high risk and unachievable within the current budget and schedule. To increase its level of confidence in the Constellation program baseline, NASA delayed the first crewed flight from September 2014 to March 2015 and, according to officials, adopted a two-phased approach to developing the Orion vehicle. NASA's original strategy for the Orion project was to develop one vehicle capable of supporting both ISS and lunar missions. According to the Constellation program manager, the Constellation program is currently deferring work on Orion lunar content beyond 2015 to focus its efforts on developing a vehicle that can fly the ISS mission. This phased approach, however, could require two qualification programs for the Orion vehicle--one pre-2015 Orion qualification program for ISS mission requirements and a second post-2015 Orion qualification program for lunar mission requirements. According to the program manager, the knowledge gained from flying the initial Orion to the ISS will inform the design of the lunar vehicle.
The Constellation program manager also told us that NASA is unwilling to further trade schedule in order to reduce risk. He asserted that delaying the schedule is an inefficient means of mitigating risk because of the high costs of maintaining fixed assets and contractor staff. Though these changes to overarching requirements are likely to increase the confidence level associated with the March 2015 first crewed flight, they do not guarantee that the program will conduct a successful first crewed flight in March 2015. For example, in May 2009 the program announced its plan to reduce the number of crew for the ISS mission from six to four. According to project officials, NASA does not plan to finalize the preliminary design of the four-crew ISS configuration until after the Orion preliminary design review. Revising the ISS design for four crew and optimizing the area freed up by removing two crew for the ISS mission will entail additional effort on the part of the Orion design team. Furthermore, as noted above, both the Ares I and Orion projects continue to face technical and design challenges that will require significant time, money, and effort to resolve irrespective of the decision to defer lunar requirements. While deferring the lunar requirement is likely to relieve pressure on Orion's mass margins allowing increased flexibility to deal with some Orion-specific technical challenges, the lunar requirement has little bearing on many of the Ares I technical challenges discussed above. Furthermore, it is unclear how deferring the lunar requirement will affect the technical challenges faced in the development of the Orion launch abort system and in dealing with vibroacoustics. NASA's human spaceflight program is at a crossroads. 
Efforts to establish a sound business case for Constellation's Ares I and Orion projects are complicated by (1) an aggressive schedule, (2) significant technical and design challenges, (3) funding issues and cost increases, and (4) an evolving acquisition strategy that continues to change Orion project requirements. Human spaceflight development programs are complex and difficult by nature, and NASA's previous attempts to build new transportation systems have failed in part because they focused on advancing technologies and designs without the resources--primarily time and money--needed to adequately support those efforts. While the current program, Constellation, was originally structured to rely on heritage systems and thus avoid problems seen in previous programs, the failure to establish a sound business case has placed the program in a poor risk posture to proceed into implementation as planned in 2010. In the past, NASA has recognized these shortfalls and has delayed design reviews for both the Ares I and Orion vehicles in an effort to gain the knowledge needed for a sound business case. NASA's current approach, however, is based on changing requirements to increase confidence in meeting the schedule. Nevertheless, the need to establish a sound business case, wherein resources match requirements and a knowledge-based acquisition strategy drives development efforts, is paramount to any successful program outcome. Until the Constellation program has a sound business case in hand, it remains doubtful that NASA will be able to reliably estimate cost and schedule to complete the program. Meanwhile, the new Administration is conducting an independent review of NASA's human spaceflight activities, with the potential for recommendations of broad changes to the agency's approach toward future efforts.
While the fact that the review is taking place does not guarantee wholesale changes to the current approach, it does implicitly recognize the challenges facing the Constellation program. We believe this review is appropriate, as it presents an opportunity to reassess both requirements and resources for Constellation as well as alternative ways of meeting requirements. Regardless of NASA's final plans for moving forward, the agency faces daunting challenges developing human-rated spacecraft for use after the Space Shuttle is retired, and it is important that the agency lay out an acquisition strategy grounded in knowledge-based principles that is executable with acceptable levels of risk within the program's available budget. As NASA addresses the findings and recommendations of the Review of U.S. Human Space Flight Plans Committee, we recommend that the new NASA Administrator direct the Constellation program, or its successor, to develop a sound business case--supported by firm requirements, mature technologies, a preliminary design, a realistic cost estimate, and sufficient funding and time--before proceeding into implementation and, if necessary, to delay the preliminary design review until a sound business case demonstrating the program's readiness to move forward into implementation is in hand.

In written comments on a draft of this report (see app. II), NASA concurred with our recommendation. NASA acknowledged that, while substantial work has been completed, the Constellation program faces knowledge gaps concerning requirements, technologies, funding, schedule, and other resources. NASA stated that it is working to close these gaps before committing to significant, long-term investments in the Constellation program.
NASA stated that the Constellation program manager is required to demonstrate at the preliminary design review that the program and its projects meet all system requirements with acceptable risk and within cost and schedule constraints, and that the program has established a sound business case for proceeding into the implementation phase. At this point, the NASA Agency Program Management Council will review the Constellation program and determine the program's readiness to proceed into the implementation phase and begin detailed design. Separately, NASA provided technical comments, which have been addressed in the report, as appropriate. As agreed with your offices, unless you announce its contents earlier, we will not distribute this report further until 30 days from its date. At that time, we will send copies to NASA's Administrator and interested congressional committees. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made key contributions to this report are listed in appendix III. To assess NASA's progress toward establishing a sound business case for the Ares I and Orion projects and identify key technical challenges NASA faces in developing the Ares I Crew Launch and the Orion Crew Exploration Vehicles, we obtained and analyzed Constellation plans and schedules, risk mitigation information, and contract performance data relative to the standards in our knowledge-based acquisition practices including program and project plans, contracts, schedules, risk assessments, funding profile, budget documentation, earned value reports, and the results of NASA's assessments of the program. 
We interviewed and received briefings from officials associated with the Constellation program office, including Exploration Systems Mission Directorate officials at NASA headquarters in Washington, D.C.; Orion project and Constellation program officials at the Johnson Space Center in Houston, Texas; and Ares I and J-2X officials at the Marshall Space Flight Center in Huntsville, Alabama, regarding the program and projects' risk areas and test strategy, technical challenges, the status of requirements, acquisition strategy, and the status of awarded contracts. We also conducted interviews and received briefings from NASA contractors on heritage hardware and design changes, top risks, and testing strategy for the J-2X engine, Ares I First Stage, Ares I Upper Stage, Launch Abort System, and Orion vehicle. We analyzed risk documented through the Constellation program's Integrated Risk Management Application and followed up with project officials for clarification and updates to these risks. We also attended the Constellation Program's Quarterly Risk Review at the Johnson Space Center. In addition, we interviewed Constellation program officials from Johnson Space Center about program risks, requirements, and the impact of budget reductions. We also spoke with NASA headquarters officials from the Exploration Systems Mission Directorate's Resources Management Office in Washington, D.C., to gain insight into the program's top risks and the basis for fiscal year 2006 through fiscal year 2010 budget requests as well as the funding strategy employed by the Constellation program. Furthermore, we reviewed NASA's program and project management directives and systems engineering directives. Our review and analysis of these documents focused on requirements and goals set for spaceflight systems. We compared examples of the centers' implementation of the directives and specific criteria included in these directives with our best practices work on system acquisition.
We conducted this performance audit from December 2008 through August 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Jim Morrison, Assistant Director; Jessica M. Berkholtz; Greg Campbell; Jennifer K. Echard; Nathaniel J. Taylor; John S. Warren Jr.; and Alyssa Weir made key contributions to this report.

Related GAO Products

NASA: Assessments of Selected Large-Scale Projects. GAO-09-306SP. Washington, D.C.: March 2, 2009.

NASA: Agency Faces Challenges Defining Scope and Costs of Space Shuttle Transition and Retirement. GAO-08-1096. Washington, D.C.: September 30, 2008.

NASA: Ares I and Orion Project Risks and Key Indicators to Measure Progress. GAO-08-186T. Washington, D.C.: April 3, 2008.

NASA: Agency Has Taken Steps Toward Making Sound Investment Decisions for Ares I but Still Faces Challenging Knowledge Gaps. GAO-08-51. Washington, D.C.: October 31, 2007.

NASA: Issues Surrounding the Transition from the Space Shuttle to the Next Generation of Human Space Flight Systems. GAO-07-595T. Washington, D.C.: March 28, 2007.

NASA: Long-Term Commitment to and Investment in Space Exploration Program Requires More Knowledge. GAO-06-817R. Washington, D.C.: July 17, 2006.

NASA: Implementing a Knowledge-Based Acquisition Framework Could Lead to Better Investment Decisions and Project Outcomes. GAO-06-218. Washington, D.C.: December 21, 2005.

Defense Space Activities: Continuation of Evolved Expendable Launch Vehicle Program's Progress to Date Subject to Some Uncertainty. GAO-04-778R. Washington, D.C.: June 24, 2004.

Best Practices: Using a Knowledge-Based Approach to Improve Weapon Acquisition. GAO-04-386SP. Washington, D.C.: January 2004.
NASA's Constellation program is developing the Ares I Crew Launch Vehicle and the Orion Crew Exploration Vehicle as the agency's first major efforts in a plan to return to the moon and eventually send humans to Mars. GAO has issued a number of reports and testimonies on various aspects of this program and made several recommendations. GAO was asked to assess NASA's progress in implementing GAO's recommendations for the Ares I and Orion projects and to identify risks the program faces. GAO analyzed NASA plans and schedules, risk mitigation information, and contract performance data relative to knowledge-based acquisition practices identified in prior GAO reports, and interviewed government officials and contractors.

NASA is still struggling to develop a solid business case--including firm requirements, mature technologies, a knowledge-based acquisition strategy, a realistic cost estimate, and sufficient funding and time--needed to justify moving the Constellation program forward into the implementation phase. Gaps in the business case include (1) significant technical and design challenges for the Orion and Ares I vehicles, such as limiting vibration during launch, eliminating the risk of hitting the launch tower during lift-off, and reducing the mass of the Orion vehicle, which represent considerable hurdles that must be overcome in order to meet safety and performance requirements; and (2) a poorly phased funding plan that runs the risk of funding shortfalls in fiscal years 2009 through 2012, resulting in planned work not being completed to support schedules and milestones. This funding approach has limited NASA's ability to mitigate technical risks early in development and precludes the orderly ramp-up of workforce and developmental activities. In response to these gaps, NASA delayed the date of its first crewed flight and changed its acquisition strategy for the Orion project. NASA acknowledges that funding shortfalls reduce the agency's flexibility in resolving technical challenges.
The program's risk management system warned of planned work not being completed to support schedules and milestones. Consequently, NASA is now focused on providing the capability to service the International Space Station and has deferred the capabilities needed for flights to the moon. Though these changes to the overarching requirements are likely to increase the confidence level associated with a March 2015 first crewed flight, these actions do not guarantee that the program will successfully meet that deadline. Nevertheless, NASA estimates that Ares I and Orion represent up to $49 billion of the over $97 billion estimated to be spent on the Constellation program through 2020. While the agency has already obligated more than $10 billion in contracts, at this point NASA does not know how much Ares I and Orion will ultimately cost, and will not know until technical and design challenges have been addressed.
Fiscal year 2002 was a year of challenges, not just for GAO but also for the Congress and the nation. The nation's vulnerabilities were exposed in a series of events--America's vulnerability to sophisticated terrorist networks, bioterrorism waged through mechanisms as mundane as the daily mail, and corporate misconduct capable of wiping out jobs, pensions, and investments virtually overnight. As the Congress's priorities changed to meet these crises, GAO's challenge was to respond quickly and effectively to our congressional clients' changing needs. With work already underway across a spectrum of critical policy and performance issues, we had a head start toward meeting the Congress's needs in a year of unexpected and often tumultuous events. For example, in fiscal year 2002 GAO's work informed the debate over national preparedness strategy, helping the Congress determine how best to organize and manage major new departments, assess key vulnerabilities to homeland defense, and respond to the events of September 11 in areas such as terrorism insurance and airline security. GAO's input also was a major factor in shaping the Sarbanes-Oxley Act, which created the Public Company Accounting Oversight Board, as well as new rules to strengthen corporate governance and ensure auditor independence. Further, GAO's work helped the Congress develop and enact election reform legislation in the form of the Help America Vote Act of 2002 to help restore voter confidence.
In fiscal year 2002, GAO also served the Congress and the American people by helping to:

- Contribute to a national preparedness strategy at the federal, state, and local levels that will make Americans safer from terrorism
- Protect investors through better oversight of the securities industry and
- Ensure a safer national food supply
- Expose the inadequacy of nursing home care
- Make income tax collection fair, effective, and less painful to taxpayers
- Strengthen public schools' accountability for educating children
- Keep sensitive American technologies out of the wrong hands
- Protect American armed forces confronting chemical or biological weapons
- Identify the risks to employees in private pension programs
- Identify factors causing the shortage of children's vaccines
- Assist the postal system in addressing anthrax and various management challenges
- Identify security risks at ports, airports, and transit systems
- Save billions by bringing sound business practices to the Department of
- Foster human capital strategic management to create a capable, effective,
- Ensure that the armed forces are trained and equipped to meet the nation's defense commitments
- Enhance the safety of Americans and foreign nationals at U.S.
- Assess ways of improving border security through biometric technologies
- Reduce the international debt problems faced by poor countries
- Reform the way federal agencies manage their finances
- Protect government computer systems from security threats
- Enhance the transition of e-government--the new "electronic connection" between government and the public

During fiscal year 2002, GAO's analyses and recommendations contributed to a wide range of legislation considered by the Congress, as shown in the following table. By year's end, we had testified 216 times before the Congress, sometimes on as little as 24 hours' notice, on a range of issues. We had responded to hundreds of urgent requests for information.
We had developed 1,950 recommendations for improving the government's operations, including, for example, those we made to the Secretary of State calling for the development of a governmentwide plan to help other countries combat nuclear smuggling and those we made to the Chairman of the Federal Energy Regulatory Commission calling for his agency to develop an action plan for overseeing competitive energy markets. We also had continued to track the recommendations we had made in past years, checking to see that they had been implemented and, if not, whether we needed to do follow-up work on problem areas. We found, in fact, that 79 percent of the recommendations we had made in fiscal year 1998 had been implemented, a significant result, given that implemented recommendations are the point at which our work for the Congress becomes a catalyst for creating tangible benefits for the American people. Table 2 highlights, by GAO's three external strategic goals, examples of issues on which we testified before the Congress during fiscal year 2002.

Congress and the executive agencies took a wide range of actions in fiscal year 2002 to improve government operations, reduce costs, or better target budget authority based on GAO analyses and recommendations, as highlighted in the following sections. Federal action on GAO's findings or recommendations produced financial benefits for the American people: a total of $37.7 billion was achieved by making government services more efficient, improving the budgeting and spending of tax dollars, and strengthening the management of federal resources (see fig. 1). For example, increased funding for improved safeguards against fraud and abuse helped the Medicare program to better control improper payments of $8.1 billion over 2 years, and better policies and controls reduced losses from farm loan programs by about $4.8 billion across 5 years. In fiscal year 2002, we also recorded 906 instances in which our work led to improvements in government operations or programs (see fig. 2).
For example, by acting on GAO's findings or recommendations, the federal government has taken important steps toward enhancing aviation safety, improving pediatric drug labeling based on research, better targeting of funds to high-poverty school districts, greater accountability in the federal acquisition process, and more effective delivery of disaster recovery assistance to other nations, among other achievements. As shown in table 3, we met all of our annual performance targets except our timeliness target. While we provided 96 percent of our products to their congressional requesters by the date promised, we missed this measure's target of 98 percent on-time delivery. The year's turbulent events played a part in our missing the target, causing us to delay work in progress when higher-priority requests came in from the Congress. We know we will continue to face factors beyond our control as we strive to improve our performance in this area. We believe the agency protocols we are piloting will help clarify aspects of our interactions with the agencies we evaluate and audit and, thus, expedite our work in ways that could improve the timeliness of our final products. We also believe that our continuing investments in human capital and information technology will improve our timeliness while allowing us to maintain our high level of productivity and performance overall. The results of our work were possible, in part, because of changes we have made to maximize the value of GAO. We had already realigned GAO's structure and resources to better serve the Congress in its legislative, oversight, appropriations, and investigative roles. Over the past year, we cultivated and fostered congressional and agency relations, better refined our strategic and annual planning and reporting processes, and enhanced our information technology infrastructure. 
We also continued to provide priority attention to our management challenges of human capital, information security, and physical security. Changes we made in each of these areas helped enable us to operate in a constantly changing environment. Over the course of the year, we cultivated and fostered congressional and agency relations in several ways. On October 23, 2001, in response to the anthrax incident on Capitol Hill, we opened our doors to 435 members of the House of Representatives and their staffs. Later in the year, we continued with our traditional Hill outreach meetings and completed a 7-month pilot test of a system for obtaining clients' views on the quality of our testimonies and reports. We also developed agency protocols to provide clearly defined, consistently applied, well-documented, and transparent policies for conducting our work with federal agencies. We implemented our new reporting product line entitled Highlights--a one-page summary that provides the key findings and recommendations from a GAO engagement. We continued our policy of outreach to our congressional clients, the public, and the press to enhance the accessibility of GAO products. Our external web site now logs about 100,000 visitors each day, and more than 1 million GAO products are downloaded every month. In light of certain records access challenges during the past few years and with concerns about national and homeland security unusually high at home and abroad, it may become more difficult for us to obtain information from the Executive Branch and report on certain issues. If this were to occur, it would hamper our ability to complete congressional requests in a timely manner.
We are updating GAO's engagement acceptance policies and practices to address this issue and may recommend legislative changes that will help to ensure that we have reasonable and appropriate access to the information we need to conduct our work for the Congress and the country. GAO's strategic planning process serves as a model for the federal government. Our plan aligns GAO's resources to meet the needs of the Congress, address emerging challenges, and achieve positive results. Following the spirit of the Government Performance and Results Act, we established a process that provides for updates with each new Congress, ongoing analysis of emerging conditions and trends, extensive consultations with congressional clients and outside experts, and assessments of our internal capacities and needs. At the beginning of fiscal year 2002, we updated our strategic plan for serving the Congress based on substantial congressional input--extending the plan's perspective out to fiscal year 2007 and factoring in developments that had occurred since we first issued it in fiscal year 2000. The updated plan carries forward the four strategic goals we had already established as the organizing principles for a body of work that is as wide-ranging as the interests and concerns of the Congress itself. Using the plan as a blueprint, we lay out the areas in which we expect to conduct research, audits, analyses, and evaluations to meet our clients' needs, and we allocate the resources we receive from the Congress accordingly. Following is our strategic plan framework. Appendix I of this statement delineates in a bit more detail our strategic objectives and our qualitative performance goals for fiscal years 2002 and 2003. We issued our 2001 Performance and Accountability Report, which combines information on our past year's accomplishments and progress in meeting our strategic goals with our plans for achieving our fiscal year 2003 performance goals.
The report earned a Certificate of Excellence in Accountability Reporting from the Association of Government Accountants. We issued our fiscal year 2002 Performance and Accountability Report in January 2003. Our financial statements, which are integral to our performance and accountability, received an unqualified opinion for the sixteenth consecutive year. Furthermore, our external auditors did not identify any material control weaknesses or compliance issues relating to GAO's operations. During the past year, we acquired new hardware and software and developed user-friendly systems that enhanced our productivity and responsiveness to the Congress and helped meet our initial information technology goals. For example, we replaced aging desktop workstations with notebook computers that provide greater computing power, speed, and mobility. In addition, we upgraded key desktop applications, the Windows desktop operating system, and telecommunications systems to ensure that GAO staff have modern technology tools to assist them in carrying out their work. We also developed new, integrated, user-friendly Web-based systems that eliminate duplicate data entry while ensuring the reusability of existing data. As the Clinger-Cohen Act requires, GAO has an enterprise architecture program in place to guide its information technology planning and decision making. In designing and developing systems, as well as in acquiring technology tools and services, we have applied enterprise architecture principles and concepts to ensure sound information technology investments and the interoperability of systems. Given GAO's role as a key provider of information and analyses to the Congress, maintaining the right mix of technical knowledge and expertise as well as general analytical skills is vital to achieving our mission. 
We spend about 80 percent of our resources on our people, but without excellent human capital management, we could still run the risk of being unable to deliver what the Congress and the nation expect from us. At the beginning of my term in early fiscal year 1999, we completed a self-assessment that profiled our workforce and identified a number of serious challenges, including significant issues involving succession planning and imbalances in the structure, shape, and skills of our workforce. As presented below, through a number of strategically planned human capital initiatives over the past few years, we have made significant progress in addressing these issues. For example, as illustrated in figure 3, by the end of fiscal year 2002, we had seen almost a 60 percent increase in the percentage of staff at the entry level (Band I) as compared with fiscal year 1998. Also, the proportion of our workforce at the mid-level (Band II) decreased by about 8 percent. Our fiscal year 2002 human capital initiatives included the following:

- In fiscal year 2002, we hired nearly 430 permanent staff and 140 interns. We also developed and implemented a strategy to place more emphasis on diversity in campus recruiting.
- In fiscal years 2002 and 2003, to help meet our workforce planning objectives, we offered voluntary early retirement under authority established in our October 2000 human capital legislation. Early retirement was granted to 52 employees in fiscal year 2002 and 24 employees in fiscal year 2003.
- To retain staff with critical skills and staff with less than 3 years of GAO experience, we implemented legislation authorizing federal agencies to offer student loan repayments in exchange for certain federal service commitments.
- In fiscal year 2002, GAO implemented a new, modern, effective, and credible performance appraisal system for analysts and specialists, adapted the system for attorneys, and began modifying the system for administrative professional and support staff.
- We began developing a new core training curriculum for managers and staff to provide additional training on the key competencies required to perform GAO's work.
- We also took steps to achieve a fully democratically elected Employee Advisory Council to work with GAO's Executive Committee in addressing issues of mutual interest and concern.

The above represent just a few of many accomplishments in the human capital area. GAO is the clear leader in the federal government in designing and implementing 21st century human capital policies and practices. We also are taking steps to work with the Congress, the Office of Management and Budget, the Office of Personnel Management, and others to "help others help themselves" in the human capital area. Ensuring information systems security and disaster recovery systems that allow for continuity of operations is a critical requirement for GAO, particularly in light of the events of September 11 and the anthrax incidents. The risk is that our information could be compromised and that we would be unable to respond to the needs of the Congress in an emergency. In light of this risk and in keeping with our goal of being a model federal agency, we are implementing an information security program consistent with the requirements in the Government Information Security Reform provisions (commonly referred to as "GISRA") enacted in the Floyd D. Spence National Defense Authorization Act for fiscal year 2001. We have made progress through our efforts to, among other things, implement a risk-based, agencywide security program; provide security training and awareness; and develop and implement an enterprise disaster recovery solution.
In the aftermath of the September 11 terrorist attacks and subsequent anthrax incidents, our ability to provide a safe and secure workplace emerged as a challenge for our agency. Protecting our people and our assets is critical to our ability to meet our mission. We devoted additional resources to this area and implemented measures such as reinforcing vehicle and pedestrian entry points, installing an additional x-ray machine, adding more security guards, and reinforcing windows. GAO is requesting budget authority of $473 million for fiscal year 2004 to maintain current operations for serving the Congress as outlined in our strategic plan and to continue initiatives to enhance our human capital, support business processes, and ensure the safety and security of GAO staff, facilities, and information systems. This funding level will allow us to fund up to 3,269 full-time equivalent personnel. Our request includes $466.6 million in direct appropriations and authority to use estimated revenues of $6 million from reimbursable audit work and rental income. Our requested increase of $18.4 million in direct appropriations represents a modest 4.1 percent increase, primarily for mandatory pay and uncontrollable costs. Our budget request also includes savings from nonrecurring fiscal year 2003 investments in fiscal year 2004 that we propose to use to fund further one-time investments in critical areas, such as security and human capital. We have submitted a request for $4.8 million in supplemental fiscal year 2003 funds to allow us to accelerate implementation of important security enhancements. Our fiscal year 2004 budget includes $4.8 million for safety and security needs that are also included in the supplemental. If the requested fiscal year 2003 supplemental funds are provided, our fiscal year 2004 budget could be reduced by $4.8 million. Table 4 presents our fiscal year 2003 and requested fiscal year 2004 resources by funding source. 
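The stated 4.1 percent increase in direct appropriations can be reproduced from the figures above. The short check below is illustrative only; the fiscal year 2003 base is derived from the two stated figures rather than quoted from the request.

```python
# Reproduce the stated 4.1 percent increase in direct appropriations
# (illustrative; the FY2003 base is inferred from the two stated figures).
fy2004_direct = 466.6  # requested direct appropriations, $ millions
increase = 18.4        # requested increase over FY2003, $ millions

fy2003_base = fy2004_direct - increase
pct = 100 * increase / fy2003_base
print(f"FY2003 base: ${fy2003_base:.1f} million; increase: {pct:.1f} percent")
```

The computed increase rounds to the 4.1 percent cited in the request.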
During fiscal year 2004, we plan to sustain our investments in maximizing the productivity of our workforce by continuing to address the key management challenges of human capital, information security, and physical security. We will continue to take steps to "lead by example" within the federal government in connection with these and other critical management areas. Over the next several years, we need to continue to address skill gaps, maximize staff productivity and effectiveness, and reengineer our human capital processes to make them more user-friendly. We plan to address skill gaps by further refining our recruitment and hiring strategies to target gaps identified through our workforce planning efforts, while taking into account the significant percentage of our workforce eligible for retirement. We will continue to take steps to reengineer our human capital systems and practices to increase their efficiency and to take full advantage of technology. We will also ensure that our staff have the needed skills and training to function in this reengineered environment. In addition, we are developing competency-based performance appraisal and broad-banding pay systems for our mission support employees. To ensure our ability to attract, retain, and reward high-quality staff, we plan to devote additional resources to our employee training and development program. We will target resources to continue initiatives to address skill gaps, maximize staff productivity, and increase staff effectiveness by updating our training curriculum to address organizational and technical needs and training new staff. Also, to enhance our recruitment and retention of staff, we will continue to offer the student loan repayment program and transit subsidy benefit established in fiscal year 2002. In addition, we will continue to focus our hiring efforts in fiscal year 2004 on recruiting talented entry-level staff.
To build on the human capital flexibilities provided by the Congress in 2000, we plan to recommend legislation that would, among other things, facilitate GAO's continuing efforts to recruit and retain top talent, develop a more performance-based compensation system, realign our workforce, and facilitate our succession planning and knowledge transfer efforts. In addition, to help attract new recruits, address certain "expectation gaps" within and outside of the government, and better describe the modern audit and evaluation entity GAO has become, we will work with the Congress to explore the possibility of changing the agency's name while retaining our well-known acronym and global brand name of "GAO." On the information security front, we need to complete certain key actions to be better able to detect intruders in our systems, identify our users, and recover in the event of a disaster. Among our current efforts and plans for these areas are completing the installation of software that helps us detect intruders on all our internal servers, completing the implementation of a secure user authentication process, and refining the disaster recovery plan we developed last year. We will need the Congress' help to address these remaining challenges. We also are continuing to make the investments necessary to enhance the safety and security of our people, facilities, and other assets for the mutual benefit of GAO and the Congress. With our fiscal year 2003 supplemental funding, if provided, or if not, with fiscal year 2004 funds, we plan to complete installation of our building access control and intrusion detection system and supporting infrastructure, and obtain an offsite facility for use by essential personnel in emergency situations. With the help of the Congress, we plan to implement these projects over the next several years.
As a result of the support and resources we have received from this Subcommittee and the Congress over the past several years, we have been able to make a difference in government, not only in terms of financial benefits and improvements in federal programs and operations that have resulted from our work, but also in strengthening and increasing the productivity of GAO, and making a real difference for our country and its citizens. Our budget request for fiscal year 2004 is modest, but necessary to sustain our current operations, continue key human capital and information technology initiatives, and ensure the safety and security of our most valuable asset--our people. We seek your continued support so that we will be able to effectively and efficiently conduct our work on behalf of the Congress and the American people. This appendix lists GAO's strategic goals and the strategic objectives for each goal. They are part of our updated draft strategic plan (for fiscal years 2002 through 2007). Organized below each strategic objective are its qualitative performance goals. The performance goals lay out the work we plan to do in fiscal years 2002 and 2003 to help achieve our strategic goals and objectives. We will evaluate our performance at the end of fiscal year 2003. 
Provide Timely, Quality Service to the Congress and the Federal Government to Address Current and Emerging Challenges to the Well-Being and Financial Security of the American People

To achieve this goal, we will provide information and recommendations on the following:

the Health Care Needs of an Aging and Diverse Population
- evaluate Medicare reform, financing, and operations;
- assess trends and issues in private health insurance coverage;
- assess actions and options for improving the Department of Veterans Affairs' and the Department of Defense's (DOD) health care services;
- evaluate the effectiveness of federal programs to promote and protect the public health;
- evaluate the effectiveness of federal programs to improve the nation's preparedness for the public health and medical consequences of bioterrorism;
- evaluate federal and state program strategies for financing and overseeing chronic and long-term health care; and
- assess states' experiences in providing health insurance coverage for low-income populations.

the Education and Protection of the Nation's Children
- analyze the effectiveness and efficiency of early childhood education and care programs in serving their target populations;
- assess options for federal programs to effectively address the educational and nutritional needs of elementary and secondary students and their schools;
- determine the effectiveness and efficiency of child support enforcement and child welfare programs in serving their target populations; and
- identify opportunities to better manage postsecondary, vocational, and adult education programs and deliver more effective services.
the Promotion of Work Opportunities and the Protection of Workers
- assess the effectiveness of federal efforts to help adults enter the workforce and to assist low-income workers;
- analyze the impact of programs designed to maintain a skilled workforce and ensure employers have the workers they need;
- assess the success of various enforcement strategies to protect workers while minimizing employers' burden in the changing environment of work; and
- identify ways to improve federal support for people with disabilities.

a Secure Retirement for Older Americans
- assess the implications of various Social Security reform proposals;
- identify opportunities to foster greater pension coverage, increase personal saving, and ensure adequate and secure retirement income; and
- identify opportunities to improve the ability of federal agencies to administer and protect workers' retirement benefits.

an Effective System of Justice
- identify ways to improve federal agencies' ability to prevent and respond to major crimes, including terrorism;
- assess the effectiveness of federal programs to control illegal drug use;
- identify ways to administer the nation's immigration laws to better secure the nation's borders and promote appropriate treatment of legal residents; and
- assess the administrative efficiency and effectiveness of the federal court and prison systems.

the Promotion of Viable Communities
- assess federal economic development assistance and its impact on communities;
- assess how the federal government can balance the promotion of home ownership with financial risk;
- assess the effectiveness of federal initiatives to assist small and minority-owned businesses;
- assess federal efforts to enhance national preparedness and capacity to respond to and recover from natural and man-made disasters; and
- assess how well federally supported housing programs meet their objectives and affect the well-being of recipient households and communities.
Responsible Stewardship of Natural Resources and the Environment
- assess the nation's ability to ensure reliable and environmentally sound energy for current and future generations;
- assess federal strategies for managing land and water resources in a sustainable fashion for multiple uses;
- assess federal programs' ability to ensure a plentiful and safe food supply, provide economic security for farmers, and minimize agricultural environmental damage;
- assess federal pollution prevention and control strategies; and
- assess efforts to reduce the threats posed by hazardous and nuclear wastes.

a Secure and Effective National Physical Infrastructure
- assess strategies for identifying, evaluating, prioritizing, financing, and implementing integrated solutions to the nation's infrastructure needs;
- assess the impact of transportation and telecommunications policies and practices on competition and consumers;
- assess efforts to improve safety and security in all transportation modes;
- assess the U.S. Postal Service's transformation efforts to ensure its viability and accomplish its mission; and
- assess federal efforts to plan for, acquire, manage, maintain, secure, and dispose of the government's real property assets.

Provide Timely, Quality Service to the Congress and the Federal Government to Respond to Changing Security Threats and the Challenges of Global Interdependence

To achieve this goal, we will provide information and recommendations on the following:

Respond to Diffuse Threats to National and Global Security
- analyze the effectiveness of the federal government's approach to providing for homeland security;
- assess U.S. efforts to protect computer and telecommunications systems supporting critical infrastructures in business and government; and
- assess the effectiveness of U.S. and international efforts to prevent the proliferation of nuclear, biological, chemical, and conventional weapons and sensitive technologies.
Ensure Military Capabilities and Readiness
- assess the ability of DOD to maintain adequate readiness levels while addressing the force structure changes needed in the 21st century;
- assess overall human capital management practices to ensure a high-quality total force;
- identify ways to improve the economy, efficiency, and effectiveness of DOD's support infrastructure and business systems and processes;
- assess the National Nuclear Security Administration's efforts to maintain a safe and reliable nuclear weapons stockpile;
- analyze and support DOD's efforts to improve budget analyses and performance management;
- assess whether DOD and the services have developed integrated procedures and systems to operate effectively together on the battlefield; and
- assess the ability of weapon system acquisition programs and processes to achieve desired outcomes.

Advance and Protect U.S. International Interests
- analyze the plans, strategies, costs, and results of the U.S. role in conflict interventions;
- analyze the effectiveness and management of foreign aid programs and the tools used to carry them out;
- analyze the costs and implications of changing U.S. strategic interests;
- evaluate the efficiency and accountability of multilateral organizations and the extent to which they are serving U.S. interests; and
- assess the strategies and management practices for U.S. foreign affairs functions and activities.

Respond to the Impact of Global Market Forces on U.S. Economic and Security Interests
- analyze how trade agreements and programs serve U.S. interests;
- improve understanding of the effects of defense industry globalization;
- assess how the United States can influence improvements in the world financial system;
- assess the ability of the financial services industry and its regulators to maintain a stable and efficient global financial system;
- evaluate how prepared financial regulators are to respond to change and innovation; and
- assess the effectiveness of regulatory programs and policies in ensuring access to financial services and deterring fraud and abuse in financial markets.

Help Transform the Government's Role and How It Does Business to Meet 21st Century Challenges

To achieve this goal, we will provide information and recommendations on the following:

Analyze the Implications of the Increased Role of Public and Private Parties in Achieving Federal Objectives
- analyze the modern service-delivery system environment and the complexity and interaction of service-delivery mechanisms;
- assess how involvement of state and local governments and nongovernmental organizations affects federal program implementation and achievement of national goals; and
- assess the effectiveness of regulatory administration and reforms in achieving government objectives.
Assess the Government's Human Capital and Other Capacity for Serving the Public
- identify and facilitate the implementation of human capital practices that will improve federal economy, efficiency, and effectiveness;
- identify ways to improve the financial management infrastructure capacity to provide useful information to manage for results and costs day to day;
- assess the government's capacity to manage information technology to improve performance;
- assess efforts to manage the collection, use, and dissemination of government information in an era of rapidly changing technology;
- assess the effectiveness of the Federal Statistical System in providing relevant, reliable, and timely information that meets federal program needs; and
- identify more businesslike approaches that can be used by federal agencies in acquiring goods and services.

Support Congressional Oversight of the Federal Government's Progress toward Being More Results-Oriented, Accountable, and Relevant to Society's Needs
- analyze and support efforts to instill results-oriented management across the government;
- highlight the federal programs and operations at highest risk and the major performance and management challenges confronting agencies;
- identify ways to strengthen accountability for the federal government's assets and operations;
- promote accountability in the federal acquisition process;
- assess the management and results of the federal investment in science and technology and the effectiveness of efforts to protect intellectual property; and
- identify ways to improve the quality of evaluative information.
- develop new resources and approaches that can be used in measuring performance and progress on the nation's 21st century challenges.

Analyze the Government's Fiscal Position and Approaches for Financing the Government
- analyze the long-term fiscal position of the federal government;
- analyze the structure and information for budgetary choices and explore alternatives for improvement;
- contribute to congressional deliberations on tax policy;
- support congressional oversight of the Internal Revenue Service's modernization and reform efforts; and
- assess the reliability of financial information on the government's fiscal position and financing sources.

Maximize the Value of GAO by Being a Model Federal Agency and a World-Class Professional Services Organization

To achieve this goal, we will do the following:

Sharpen GAO's Focus on Clients' and Customers' Requirements
- continuously update client requirements;
- develop and implement stakeholder protocols and refine client protocols; and
- identify and refine customer requirements and measures.
GAO is a key source of objective information and analyses and, as such, plays a crucial role in supporting congressional decision making and helping improve government for the benefit of the American people. This testimony focuses on (1) GAO's fiscal year 2002 performance and results, (2) our efforts to maximize our effectiveness, responsiveness, and value, and (3) our budget request for fiscal year 2004 to support the Congress and serve the American public. In fiscal year 2002, GAO's work informed the national debate on a broad spectrum of issues, including helping the Congress answer questions about the costs and program tradeoffs of the national preparedness strategy and providing perspectives on how best to organize and manage the new Transportation Security Administration and Department of Homeland Security. GAO's efforts helped the Congress and government leaders achieve $37.7 billion in financial benefits--an $88 return on every dollar invested in GAO. The return on the public's investment in GAO extends beyond dollar savings to improvements in how the government serves its citizens. This includes a range of accomplishments that serve to improve safety, enhance security, protect privacy, and increase the effectiveness of a range of federal programs and activities. The results of our work in fiscal year 2002 were possible, in part, because of changes we have made to transform GAO in order to meet our goal of being a model federal agency and a world-class professional services organization. We had already realigned GAO's structure and resources to better serve the Congress in its legislative, oversight, appropriations, and investigative roles. Over the past year, we cultivated and fostered congressional and agency relations, better refined our strategic and annual planning and reporting processes, and enhanced our information technology infrastructure.
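The "$88 return on every dollar" figure is simple arithmetic on the stated benefits and the cost of running the agency. The quick check below is illustrative only; the annual cost is backed out from the two stated numbers rather than quoted from the testimony.

```python
# Back out the annual cost implied by the stated figures
# (illustrative arithmetic; only the two inputs are taken from the testimony).
financial_benefits = 37.7e9  # fiscal year 2002 financial benefits, dollars
return_per_dollar = 88       # stated return on each dollar invested in GAO

implied_annual_cost = financial_benefits / return_per_dollar
print(f"Implied annual cost: ${implied_annual_cost / 1e6:.0f} million")
```

The implied cost, roughly $428 million, is consistent in magnitude with the appropriation levels discussed elsewhere in this testimony.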
We also continued to provide priority attention to our management challenges of human capital, information security, and physical security. We have made progress in addressing each of these challenges, but we still have work to do and plan to ask for legislation to help address some of these issues. GAO is requesting budget authority of $473 million for fiscal year 2004. Our request represents a modest 4.1 percent increase in direct appropriations, primarily for mandatory pay and uncontrollable costs. This budget will allow us to maintain current operations for serving the Congress as outlined in our strategic plan and continue initiatives to enhance our human capital, support business processes, and ensure the safety and security of GAO staff, facilities, and information systems. Approximately $4.8 million, or about 1 percent, of our request relates to several safety and security items that are included in our fiscal year 2003 supplemental request. If this supplemental request is granted, our fiscal year 2004 request could be reduced accordingly.
In our recent report, we summarize many of the agencies' financial management system implementation failures that have been previously reported by us and inspectors general (IG). Our work and that of the IGs over the years has shown that agencies have failed to employ accepted best practices in systems development and implementation (commonly referred to as disciplined processes) that can collectively reduce the risk associated with implementing financial management systems. In our report, we identified key causes of failures within several recurring themes, including disciplined processes and human capital management. DHS would be wise to study the lessons learned through other agencies' costly failures and consider building a strong foundation for successful financial management system implementation, as we will discuss later in our testimony. From our review of over 40 prior reports, we identified weaknesses in the following areas of disciplined processes. Requirements management. Ill-defined or incomplete requirements have been identified by many system developers and program managers as a root cause of system failure. It is critical that requirements--functions the system must be able to perform--be carefully defined and flow from the concept of operations (how the organization's day-to-day operations are or will be carried out to meet mission needs). In our previous work, we have found agencies with a lack of a concept of operations, vague and ambiguous requirements, and requirements that are not traceable or linked to business processes. Testing. Complete and thorough testing is essential to provide reasonable assurance that new or modified systems will provide the capabilities in the requirements. Testing is the process of executing a program with the intent of finding errors. Because requirements provide the foundation for system testing, they must be complete, clear, and well documented to design and implement an effective testing program. 
Absent this, an organization is taking a significant risk that substantial defects will not be detected until after the system is implemented. Industry best practices indicate that the sooner a defect is recognized and corrected, the cheaper it is to fix. In our work, we have found flawed test plans, inadequate timing of testing, and ineffective systems testing. Data conversion. In its white paper on financial system data conversion, the Joint Financial Management Improvement Program (JFMIP) identified data conversion as one of the critical tasks necessary to successfully implement a new financial system. JFMIP also noted that if data conversion is done right, the new system has a much greater opportunity for success. On the other hand, converting data incorrectly or entering unreliable data from a legacy system has lengthy and long-term repercussions. The adage "garbage in, garbage out" best describes the adverse impact. Examples of problems we have reported on include agencies that have not properly developed and implemented good data conversion plans, have planned the data conversion too late in the project, and have not reconciled account balances. Risk management. According to leading systems acquisition organizations, risk management is a process for identifying potential problems before they occur and adjusting the acquisition to decrease the chance of their occurrence. Risks should be identified as early as possible and a risk management process should be developed and put in place. Risks should be identified, analyzed, mitigated, and tracked to closure. Effectively managing risks is one way to minimize the chances of project cost, schedule, and performance problems from occurring. We have reported that agencies have not fully implemented effective risk management practices, including shortcomings in identifying and tracking risks. Project management. 
Effective project management is the process for planning and managing all project-related activities, such as defining how components are interrelated, defining tasks, estimating and obtaining resources, and scheduling activities. Project management allows the performance, cost, and schedule of the overall program to be continually measured, compared with planned objectives, and controlled. We have reported on a number of project management problems including inadequate project management structure, schedule-driven projects, and lack of performance metrics and oversight. Quality assurance. Quality assurance provides independent assessments of whether management process requirements are being followed and whether product standards and requirements are being satisfied. This process includes, among other things, the use of independent verification and validation (IV&V). We and others have reported on problems related to agencies' use of IV&V including specific functions not being performed by the IV&V, the IV&V contractor not being independent, and IV&V recommendations not being implemented. Inadequate implementation of disciplined processes can manifest itself in many ways when implementing a financial management system. While full deployment has been delayed at some agencies, specific functionality has been delayed or flawed at other agencies. The following examples illustrate some of the recurring problems related to the lack of disciplined processes in implementing financial management systems. In May 2004, we reported significant flaws in requirements management and testing that adversely affected the initial development and implementation of the Army's Logistics Modernization Program (LMP), in which the Army estimated that it would invest about $1 billion. These flaws also hampered efforts to correct the operational difficulties experienced at the Tobyhanna Army Depot. 
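One common way to continually measure a project's cost and schedule against planned objectives, as described above, is an earned-value calculation of the kind promoted by the Project Management Institute. The sketch below is illustrative only; the dollar figures are hypothetical and are not drawn from any program discussed in this testimony.

```python
# Sketch: earned-value style cost and schedule variances, a common way to
# compare a project against its planned objectives. Figures are illustrative.

def variances(planned_value, earned_value, actual_cost):
    """Return (cost_variance, schedule_variance).

    Positive values are favorable; negative values flag overruns or delays.
    """
    cost_variance = earned_value - actual_cost        # CV = EV - AC
    schedule_variance = earned_value - planned_value  # SV = EV - PV
    return cost_variance, schedule_variance

cv, sv = variances(planned_value=500_000, earned_value=450_000, actual_cost=520_000)
print(f"Cost variance: {cv}")       # negative: work cost more than budgeted
print(f"Schedule variance: {sv}")   # negative: less work completed than planned
```

Tracking these two numbers over time gives oversight bodies the performance metrics whose absence is cited above.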
In June 2005, we reported that the Army had not effectively addressed its requirements management and testing problems, and data conversion weaknesses had hampered the Army's ability to address the problems that needed to be corrected before the system could be fielded to other locations. The Army lacked reasonable assurance that (1) system problems experienced during the initial deployment and causing the delay of future deployments had been corrected and (2) LMP was capable of providing the promised system functionality. Subsequent deployments of the system have been delayed. We reported in February 2005 that our experience with major systems acquisitions, such as the Office of Personnel Management's (OPM) Retirement Systems Modernization (RSM) program, has shown that having sound disciplined processes in place increases the likelihood of the acquisitions meeting cost and schedule estimates as well as performance requirements. However, we found that many of the processes in these areas for RSM were not sufficiently developed, were still under development, or were planned for future development. For example, OPM lacked needed processes for developing and managing requirements, planning and managing project activities, managing risks, and providing sound information to investment decision makers. Without these processes in place, RSM was at increased risk of not being developed and delivered on time and within budget and falling short of promised capabilities. In August 2004, the Department of Veterans Affairs (VA) IG reported that the effect of transferring inaccurate data to VA's new core financial system at a pilot location interrupted patient care and medical center operations. This raised concerns that similar conversion problems would occur at other VA facilities if the conditions identified were not addressed and resolved nationwide prior to roll out. 
Some of the specific conditions the IG noted were that contracting and monitoring of the project were not adequate, and the deployment of the new system encountered multiple problems, including those related to software testing, data conversion and system interfaces, and project management. As a result of these problems, patient care was interrupted by supply outages and other problems. The inability to provide sterile equipment and needed supplies to the operating room resulted in the cancelation of 81 elective surgeries for a week in both November 2003 and February 2004. In addition, the operating room was forced to operate at two-thirds of its prior capacity. Because of the serious nature of the problems raised with the new system, VA management decided to focus on transitioning back to the previous financial management software at the pilot location and assembled a senior leadership team to examine the results of the pilot and make recommendations to the VA Secretary regarding the future of the system. We are concerned that federal agencies' human capital problems are eroding the ability of many agencies--and threatening the ability of others--to perform their IT missions economically, efficiently, and effectively. For example, we found that in the 1990s, the initial rounds of downsizing were set in motion without considering the longer-term effects on agencies' IT performance capacity. Additionally, a number of individual agencies drastically reduced or froze their hiring efforts for extended periods. Consequently, following a decade of downsizing and curtailed investments in human capital, federal agencies currently face skills, knowledge, and experience imbalances, especially in their IT workforces. Without corrective action, these imbalances will worsen, especially in light of the numbers of federal civilian workers becoming eligible to retire in the coming years. 
In this regard, we are emphasizing the need for additional focus on the following three key elements of human capital management. Strategic workforce planning. Having staff with the appropriate skills is key to achieving financial management improvements, and managing an organization's employees is essential to achieving results. It is important that agencies incorporate strategic workforce planning by (1) aligning an organization's human capital program with its current and emerging mission and programmatic goals and (2) developing long-term strategies for acquiring, developing, and retaining an organization's total workforce to meet the needs of the future. This incorporates a range of activities from identifying and defining roles and responsibilities, to identifying team members, to developing individual competencies that enhance performance. We have reported on agencies without a sufficient human capital strategy or plan, skills gap analysis, or training plans. Human resources. Having sufficient numbers of people on board with the right mix of knowledge and skills can make the difference between success and failure. This is especially true in the IT area, where widespread shortfalls in human capital have contributed to demonstrable shortfalls in agency and program performance. We have found agency projects with significant human resource challenges, including addressing personnel shortages, filling key positions, and developing and retaining staff with the required competencies. Change management. According to leading IT organizations, organizational change management is the process of preparing users for the business process changes that will accompany implementation of a new system. An effective organizational change management process includes project plans and training that prepare users for impacts the new system might have on their roles and responsibilities and a process to manage those changes. 
We have reported on various problems with agencies' change management, including transition plans not being developed, business processes not being reengineered, and customization not being limited. The following examples illustrate some of the recurring problems related to human capital management in implementing financial management systems. We first reported in February 2002 that the Internal Revenue Service (IRS) had not defined or implemented an IT human capital strategy for its Business Systems Modernization (BSM) program and recommended that IRS address this weakness. In June 2003, we reported that IRS had made important progress in addressing our recommendation, but had yet to develop a comprehensive multiyear workforce plan. IRS also had not hired, developed, or retained sufficient human capital resources with the required competencies, including technical skills, in specific mission areas. In September 2003, the Treasury Inspector General for Tax Administration reported that IRS's Modernization and IT Services organization had made significant progress in developing its human capital strategy but had not yet (1) identified and incorporated human capital asset demands for the modernized organization, (2) developed detailed hiring and retention plans, or (3) established a process for reviewing the human capital strategy development and monitoring its implementation. We most recently reported in July 2005 that IRS had taken some steps in the right direction. However, until IRS fully implements its strategy, it will not have all of the necessary IT knowledge and skills to effectively manage the BSM program or to operate modernized systems. Consequently, the risk of BSM program and project cost increases, schedule slippages, and performance problems is increased. 
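The skills gap analysis called for under strategic workforce planning can be sketched in a few lines. The competencies and staffing counts below are hypothetical, chosen only to show the mechanics of comparing required against on-board skills.

```python
# Sketch: a minimal skills gap analysis of the kind used in strategic
# workforce planning. Competency names and headcounts are hypothetical.

def skills_gap(required, on_board):
    """Compare required staffing per competency against staff on board.

    Returns {competency: shortfall} for competencies with too few staff.
    """
    return {skill: need - on_board.get(skill, 0)
            for skill, need in required.items()
            if on_board.get(skill, 0) < need}

required = {"requirements management": 6, "systems testing": 8, "data conversion": 4}
on_board = {"requirements management": 6, "systems testing": 3}

for skill, shortfall in skills_gap(required, on_board).items():
    print(f"{skill}: short {shortfall} staff")
```

A real analysis would also weigh proficiency levels and projected retirements, but even this simple comparison surfaces the imbalances discussed above.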
We reported, in September 2004, that staff shortages and limited strategic workforce planning resulted in the Department of Health and Human Services (HHS) not having the resources needed to effectively design and operate its new financial management system. HHS had taken the first steps in strategic workforce planning. For example, the Centers for Disease Control and Prevention (CDC), where the first deployment was scheduled, was the only operating division that had prepared a competency report, but a skills gap analysis and training plan for CDC had not been completed. In addition, many government and contractor positions on the implementation project were not filled as planned. While HHS and the systems integrator had taken measures to acquire additional human resources for the implementation of the new financial management system, we concluded that scarce resources could significantly jeopardize the project's success and lead to several key deliverables being significantly behind schedule. In September 2004, HHS decided to delay its first scheduled deployment at CDC by 6 months in order to address these and other issues. DHS faces unique challenges in attempting to develop integrated financial management systems across the breadth of such a large and diverse department. DHS was established by the Homeland Security Act of 2002, as the 15th Cabinet Executive Branch Department of the United States government. DHS inherited a myriad of redundant financial management systems from 22 diverse agencies along with 180,000 employees, about 100 resource management systems, and 30 reportable conditions identified in prior component financial audits. Of the 30 reportable conditions, 18 were so severe they were considered material weaknesses. 
Among these weaknesses were insufficient internal controls or processes to reliably report financial information such as revenue, accounts receivable, and accounts payable; significant system security deficiencies; financial systems that required extensive manual processes to prepare financial statements; and incomplete policies and procedures necessary to complete basic financial management activities. DHS received a disclaimer of opinion on its financial statements for fiscal year 2005, and the independent auditors also reported that DHS's financial management systems did not substantially comply with the requirements of FFMIA. The disclaimer was primarily due to financial reporting problems at five components. The five components include Immigration and Customs Enforcement (ICE), the United States Coast Guard (Coast Guard), State and Local Government Coordination and Preparedness (SLGCP), the Transportation Security Administration (TSA), and Emergency Preparedness and Response (EPR). Further, ICE is an accounting service provider for other DHS components, and it failed to adequately maintain both its own accounting records and those of other DHS components during fiscal year 2005. The auditors' fiscal year 2005 report discusses 10 material weaknesses, two other reportable conditions in internal control, and instances of noncompliance with seven laws and regulations. Among the 10 material weaknesses were inadequate financial management and oversight at DHS components, primarily ICE and Coast Guard; decentralized financial reporting at the component level; significant general IT and application control weaknesses over critical financial and operational data; and the lack of accurate and timely reconciliation of fund balance with treasury accounts. The results of the auditors' tests of fiscal year 2005 compliance with certain provisions of laws, regulations, contracts, and grant agreements disclosed instances of noncompliance. 
The DHS auditors reported instances of noncompliance with 31 U.S.C. § 3512(c), (d), commonly known as the Federal Managers' Financial Integrity Act of 1982 (FMFIA); the Federal Financial Management Improvement Act of 1996 (FFMIA), Pub. L. No. 104-208, div. A, § 101(f), title VIII, 110 Stat. 3009, 3009-389 (Sept. 30, 1996); the Federal Information Security Management Act of 2002 (FISMA), Pub. L. No. 107-347, title III, 116 Stat. 2899, 2946 (Dec. 17, 2002); the Single Audit Act, as amended (codified at 31 U.S.C. §§ 7501-7507), and other laws and regulations related to OMB Circular No. A-50, Audit Follow-up, as revised (Sept. 29, 1982); the Improper Payments Information Act of 2002, Pub. L. No. 107-300, 116 Stat. 2350 (Nov. 26, 2002); the Department of Homeland Security Financial Accountability Act of 2004, Pub. L. No. 108-330, 118 Stat. 1275 (Oct. 16, 2004); and the Government Performance and Results Act of 1993 (GPRA), Pub. L. No. 103-62, 107 Stat. 285 (Aug. 3, 1993). Although DHS inherited many of the reportable conditions and noncompliance issues discussed above, the department's top management, including the CFO, is ultimately responsible for ensuring that progress is made in the area of financial management. In August 2003, DHS began the "electronically Managing enterprise resources for government effectiveness and efficiency" (eMerge) program at an estimated cost of $229 million. We previously reported that the acquisition of eMerge was in its early stages and that continued focus and follow-through, among other things, would be necessary for it to be successful. According to DHS officials, because the project was not meeting its performance goals and timeline, DHS officials began considering whether to continue the project and in spring 2005 started looking at another strategy.
DHS officials told us they decided to change the strategy for eMerge; acquisition and development activities on eMerge had stopped, and the blanket purchase agreement with the systems integrator had expired. DHS officials added that the eMerge program would instead be carried out using a shared services approach, which allows its components to choose among three DHS providers of financial management services and the Department of the Treasury's Bureau of the Public Debt, which was identified by OMB as a governmentwide financial management center of excellence. DHS officials told us that although a departmentwide concept of operations and migration plan were still under development, they expected progress to be made in the next 5 years. As we will discuss later, a departmentwide concept of operations document would help DHS and others understand such items as how DHS will migrate the various entities to these shared service providers and how it will obtain the departmental information necessary to manage the agency from these disparate operations. DHS officials acknowledged that they needed to first address the material weaknesses at the proposed shared service providers before component agencies migrate to them. The key for federal agencies, including DHS, to avoid the long-standing problems that have plagued financial management system improvement efforts is to address the foremost causes of those problems and adopt solutions that reduce the risks associated with these efforts to acceptable levels. Although it appears that DHS will adopt a shared services approach to meet its needs for integrated financial management systems, implementing this approach will be complex and challenging, making the adoption of best practices even more important for this undertaking. Based on industry best practices, we identified four key concepts that will be critical to DHS's ability to successfully complete its planned migration to shared service providers.
Careful consideration of these four concepts, each one building upon the next, will be integral to the success of DHS's strategy. The four concepts are (1) developing a concept of operations, (2) defining standard business processes, (3) developing a migration strategy for DHS components, and (4) defining and effectively implementing disciplined processes necessary to properly manage the specific projects. We will now highlight the key issues to be considered for each of the four areas. As we discussed previously, a concept of operations defines how an organization's day-to-day operations are (or will be) carried out to meet mission needs. The concept of operations includes high-level descriptions of information systems, their interrelationships, and information flows. It also describes the operations that must be performed, who must perform them, and where and how the operations will be carried out. Further, it provides the foundation on which requirements definitions and the rest of the systems planning process are built. Normally, a concept of operations document is one of the first documents to be produced during a disciplined development effort and flows from both the vision statement and the enterprise architecture. According to Institute of Electrical and Electronics Engineers (IEEE) standards, a concept of operations is a user-oriented document that describes the characteristics of a proposed system from the users' viewpoint. The key elements that should be included in a concept of operations are major system components, interfaces to external systems, and performance characteristics such as speed and volume. Another key element of a concept of operations is a transition strategy that is useful for developing an understanding of how and when changes will occur. Not only is this needed from an investment management point of view, it is a key element in the human capital problems discussed previously that revolved around change management strategies.
Describing how to implement DHS's approach for using shared service providers for its financial management systems, as well as the process that will be used to deactivate legacy systems that will be replaced by or interfaced with a new financial management system, are key aspects that need to be addressed in a transition strategy.

Key Issues for DHS to Consider: What is considered a financial management system? Are all the components using a standard definition? Who will be responsible for developing a DHS-wide concept of operations, and what process will be used to ensure that the resulting document reflects the departmentwide solution rather than individual component agency stove-piped efforts? How will DHS's concept of operations be linked to its enterprise architecture? How can DHS obtain reliable information on the costs of its financial management systems investments?

Business process models provide a way of expressing the procedures, activities, and behaviors needed to accomplish an organization's mission and are helpful tools to document and understand complex systems. Business processes are the various steps that must be followed to perform a certain activity. For example, the procurement process would start when the agency defines its needs and issues a solicitation for goods or services, would continue through contract award and receipt of goods and services, and would end when the vendor properly receives payment. The identification of preferred business processes would be critical for standardization of applications and training and for portability of staff. To maximize the success of a new system acquisition, organizations need to consider the redesign of current business processes. As we noted in our Executive Guide: Creating Value Through World-class Financial Management, leading finance organizations have found that productivity gains typically result from more efficient processes, not from simply automating old processes.
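The procurement process described above can be expressed as a simple ordered process model, one illustration of how business process modeling makes implicit steps explicit. The step names follow the text; the enforcement logic is a hypothetical sketch, not an actual DHS or shared-service-provider design.

```python
# Sketch: the procurement business process from the text, modeled as an
# ordered sequence of steps with a guard against skipping any step.
# Step names follow the testimony; the enforcement logic is illustrative.

PROCUREMENT_STEPS = [
    "define needs",
    "issue solicitation",
    "award contract",
    "receive goods or services",
    "pay vendor",
]

def advance(current_step, next_step):
    """Allow only the immediately following step; raise if one is skipped."""
    position = PROCUREMENT_STEPS.index(current_step)
    if PROCUREMENT_STEPS.index(next_step) != position + 1:
        raise ValueError(f"cannot move from '{current_step}' to '{next_step}'")
    return next_step

step = "define needs"
step = advance(step, "issue solicitation")
print(step)
```

Even a model this small shows the value noted above: the explicit step sequence becomes a shared definition that system requirements, training, and controls can all be traced to.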
Moreover, the Clinger-Cohen Act of 1996 requires agencies to analyze their missions and, based on that analysis, revise mission-related and administrative processes, as appropriate, before making significant investments in IT used to support those missions. Another benefit of what is often called business process modeling is that it generates better system requirements, since the business process models drive the creation of information systems that fit in the organization and will be used by end users. Other benefits include providing a foundation for agency efforts to describe the business processes needed for unique missions, or developing subprocesses to support those at the departmentwide level.

Key Issues for DHS to Consider: Who will be responsible for developing DHS-wide standard business processes that meet the needs of its component agencies? How will the component agencies be encouraged to adopt new processes, rather than selecting other methods that result in simply automating old ways of doing business? How will the standard business processes be implemented by the shared service providers to provide consistency across DHS? What process will be used to determine and validate the processes needed for DHS agencies that have unique needs?

Although DHS has a goal of migrating agencies to a limited number of shared service providers, it has not yet articulated a clear and measurable strategy for achieving this goal.
In the context of migrating to shared service providers, critical activities include (1) developing specific criteria for requiring component agencies to migrate to one of the providers rather than attempting to develop and implement their own stove-piped business systems; (2) providing the necessary information for a component agency to make a selection of a shared service provider for financial management; (3) defining and instilling new values, norms, and behaviors within component agencies that support new ways of doing work and overcoming resistance to change; (4) building consensus among customers and stakeholders on specific changes designed to better meet their needs; and (5) planning, testing, and implementing all aspects of the transition from one organizational structure and business process to another. Finally, sustained leadership will be key to a successful strategy for moving DHS components towards consolidated financial management systems. In our Executive Guide: Creating Value Through World-class Financial Management, we found that leading organizations made financial management improvement an entitywide priority by, among other things, providing clear, strong executive leadership. We also reported that making financial management a priority throughout the federal government involves changing the organizational culture of federal agencies. Although the views about how an organization can change its culture can vary considerably, leadership (executive support) is often viewed as the most important factor in successfully making cultural changes. Top management must be totally committed in both words and actions to changing the culture, and this commitment must be sustained and demonstrated to staff. 
As pressure mounts to do more with less, increase accountability, and reduce fraud, waste, abuse, and mismanagement, and as efforts to reduce federal spending intensify, sustained and committed leadership will be a key factor in the successful implementation of DHS's financial management systems.

Key Issues for DHS to Consider: What guidance will be provided to assist DHS component agencies in adopting a change management strategy that reduces the risks of moving to a shared service provider? What processes will be put in place to ensure that individual component agency financial management system investment decisions focus on the benefits of standard processes and shared service providers? What process will be used to facilitate the decision-making process used by component agencies to select a provider? How will component agencies incorporate strategic workforce planning in the implementation of the shared service provider approach?

Once the concept of operations and standard business processes have been defined and a migration strategy is in place, the use of disciplined processes, as discussed previously, will be a critical factor in helping to ensure that the implementation is successful. The key to avoiding long-standing implementation problems is to provide specific guidance to component agencies for financial management system implementations, incorporating the best practices identified by the Software Engineering Institute, the IEEE, the Project Management Institute, and other experts that have been proven to reduce risk in implementing systems. Such guidance should include the various disciplined processes such as requirements management, testing, data conversion and system interfaces, risk and project management, and related activities, which have been problematic in the financial systems implementation projects we and others have reviewed.
Disciplined processes have been shown to reduce the risks associated with software development and acquisition efforts to acceptable levels and are fundamental to successful system implementations. The principles of disciplined IT systems development and acquisition apply to shared services implementations, such as that contemplated by DHS. A disciplined software implementation process can maximize the likelihood of achieving the intended results (performance) within established resources (costs) on schedule. For example, disciplined processes should be in place to address the areas of data conversion and interfaces, two of the many critical elements necessary to successfully implement a new system--the lack of which has contributed to the failure of previous agency efforts. Further details on disciplined processes can be found in appendix III of our recently issued report.

Key Issues for DHS to Consider: How can existing industry standards and best practices be incorporated into DHS-wide guidance related to financial management system implementation efforts, including migrating to shared service providers? What actions will be taken to reduce the risks and costs associated with data conversion and interface efforts? What oversight process will be used to ensure that modernization efforts effectively implement the prescribed policies and procedures?

In closing, the best practices we identified are interrelated and interdependent, collectively providing an agency with a better outcome for its system deployment--including cost savings, improved service and product quality, and ultimately, a better return on investment. The predictable result of DHS and other agencies not effectively addressing these best practices is projects that do not meet cost, schedule, and performance objectives. There will never be a 100 percent guarantee that a new system will be fully successful from the outset.
However, risk can be managed and reduced to acceptable levels through the use of disciplined processes, which in short represent best practices that have proven their value in the past. We view the application of disciplined processes to be essential for DHS's systems modernization efforts. Based on industry best practices, the following four concepts would help ensure a sound foundation for developing and implementing a DHS-wide solution for the complex financial management problems it currently faces: (1) developing a concept of operations that expresses DHS's view of financial management and how that vision will be realized, (2) defining standard business processes, (3) developing an implementation strategy, and (4) defining and effectively implementing applicable disciplined processes. If properly implemented, the best practices discussed here today and in our recently issued report will help reduce the risk associated with a project of this magnitude and importance to an acceptable level. With DHS at an important crossroads in the implementation of its financial management systems, these building blocks give the department a solid foundation on which to base its efforts and avoid the problems that have plagued so many other federal agencies faced with the same challenge. Mr. Chairmen, this concludes our prepared statement. We would be happy to respond to any questions you or other Members of the Subcommittees may have at this time. For information about this testimony, please contact McCoy Williams, Director, Financial Management and Assurance, at (202) 512-9095 or at [email protected], or Keith A. Rhodes, Chief Technologist, Applied Research and Methods, who may be reached at (202) 512-6412 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals who made key contributions to this testimony include Kay Daly, Assistant Director; Chris Martin, Senior-Level Technologist; Francine DelVecchio; Mike LaForge; and Chanetta Reed.
Numerous other individuals made contributions to the GAO reports cited in this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over the years, GAO has reported on various agencies' financial management system implementation failures. GAO's recent report (GAO-06-184) discusses some of the most significant problems previously identified with agencies' financial management system modernization efforts. For today's hearing, GAO was asked to provide its perspectives on the importance of the Department of Homeland Security (DHS) following best practices in developing and implementing its new financial management systems and avoiding the mistakes of the past. GAO's testimony (1) discusses the recurring problems identified in agencies' financial management systems development and implementation efforts, (2) points out key financial management system modernization challenges at DHS, and (3) highlights the building blocks that form the foundation for successful financial management system implementation efforts. GAO's work and that of agency inspectors general over the years has shown that agencies have failed to employ accepted best practices in systems development and implementation (commonly referred to as disciplined processes) that can collectively reduce the risk associated with implementing financial management systems. GAO's recent report identified key causes of failures within several recurring themes including (1) disciplined processes, such as requirements management, testing, and project management; and (2) human capital management, such as workforce planning, human resources, and change management. Prior reports have identified costly systems implementation failures attributable to problems in these areas at agencies across the federal government. DHS faces unique challenges in attempting to develop integrated financial management systems across the breadth of such a large and diverse department. DHS inherited a myriad of redundant financial management systems from 22 diverse agencies and about 100 resource management systems. 
Among the weaknesses identified in prior component financial audits were insufficient internal controls or processes to reliably report financial information such as revenue, accounts receivable, and accounts payable; significant system security deficiencies; financial systems that required extensive manual processes to prepare financial statements; and incomplete policies and procedures necessary for conducting basic financial management activities. In August 2003, DHS began a program to consolidate and integrate DHS financial accounting and reporting systems. DHS officials said they recently decided to develop a new strategy for the planned financial management systems integration program, referred to as eMerge2, because the prior strategy was not meeting its performance goals and timeline. DHS's revised strategy will allow DHS components to choose from an array of existing financial management shared service providers. Based on industry best practices, GAO identified four key concepts that will be critical to DHS's ability to successfully complete its planned migration to shared service providers. Careful consideration of these four concepts, each one building upon the next, will be integral to the success of DHS's strategy. The four concepts are developing a concept of operations, defining standard business processes, developing a strategy for implementing DHS's shared services approach across the department, and defining and effectively implementing disciplined processes necessary to properly manage the specific projects. With DHS at an important crossroads in implementing financial management systems, it has an excellent opportunity to use these building blocks to form a solid foundation on which to base its efforts and avoid the problems that have plagued so many other federal agencies.
Job Corps was established as a national employment and training program in 1964 to provide severely disadvantaged youth with a wide range of services, including basic/remedial education, vocational training, and social skills instruction, usually at residential facilities. It remains one of the few federally run programs, unlike many other employment training programs that are federally funded but are operated by state or local governments. Job Corps centers are operated by public or private organizations under contract with Labor. Recent legislative proposals to consolidate much of the nation's job training system into block grants to the states have produced debate on the relationship between Job Corps and the states, including whether responsibility for Job Corps should be delegated to the states. A 1995 Senate-passed bill retained Job Corps as a separate federally administered program; a 1995 House-passed bill was silent about the Job Corps' future as a separate entity. A conference committee is currently attempting to resolve the differences between the two bills. The Senate bill proposes several changes to better integrate Job Corps with state and local workforce development initiatives, including requiring center operators to submit operating plans to Labor, through their state governors; requiring center operators to give nearby communities advance notice of any center changes that could affect them; and permitting the governor to recommend individuals to serve on panels to select center operators. Labor officials stated that the program is already playing a proactive role in ensuring that the national Job Corps program works more closely with state and local employment, education, and training programs. According to Job Corps officials, the program has received funding to open nine additional centers--five in program year 1996 and four in program year 1997--all of which will be located in states with existing centers.
Job Corps' nine regional directors are responsible for the day-to-day administration of the program at the centers located within their geographic boundaries. Included among their responsibilities are the recruitment of youth for program participation and the assignment of enrollees to one of the program centers. Recruitment is typically carried out by private contractors, the centers, or state employment services under contract with the regional directors. The Job Corps legislation provides some broad guidance with respect to assigning enrollees to centers. It states that participants are to be assigned to the center closest to their residence, except for good cause. Exceptions can include avoiding undue delay in assigning participants to a center, meeting educational or training needs, or ensuring efficiency and economy in the operation of the program. The program currently enrolls participants aged 16 to 24 who are severely disadvantaged, in need of additional education or training, and living in a disruptive environment. Our June 1995 report contained an analysis of characteristics of those terminating from Job Corps in program year 1993 showing that over two-thirds of the program's participants faced multiple barriers to employment. Enrollments are voluntary, and training programs are open entry, open exit, and self-paced, allowing participants to enroll throughout the year and to progress at their own pace. On average, participants spend about 8 months in the program but can stay up to 2 years. In addition to basic education and vocational training courses, each of the centers provides participants with a range of services including counseling, health care (including dental), room and board, and recreational activities. Skills training is offered in a variety of vocational areas, including business occupations, automotive repair, construction trades, and health occupations. 
These courses are taught by center staff, private contractors, or instructors provided under contracts with national labor and business organizations. In addition, Job Corps offers, at a limited number of centers, advanced training in various occupations including food service, clerical, and construction trades. This training is designed to provide additional instruction to participants from centers across the nation who have demonstrated the ability to perform at a higher skill level. One feature that makes Job Corps different from other youth training programs is its residential component. About 90 percent of the participants enrolled each year live at the centers, allowing services to be provided 24 hours a day, 7 days a week. The premise for boarding participants is that most come from a disruptive environment and, therefore, can benefit from receiving education and training in a new setting where a variety of support services are available around the clock. Participation in Job Corps can lead to placement in a job or enrollment in further training or education. It can also lead to educational achievements such as earning a high school diploma and gaining reading or math skills. However, the primary outcome for Job Corps participants is employment; about 64 percent of those leaving the program get jobs. Job Corps program capacity differs widely among the states because the number of centers in each state differs, and the size of individual centers within the states varies substantially. Job Corps centers are located in 46 states and the District of Columbia and Puerto Rico (see fig. 1). Among states with centers, the number ranges from one center in each of 19 states; to six centers each in California, Kentucky, and Oregon; to seven in New York State. In-state capacity differs according to the number of centers in each state, the size of individual centers, and the average time participants spend in the program. 
For example, Kentucky's centers can serve 6,373 participants annually, nearly double the number that can be served by centers in either California (3,477) or New York (3,252); Idaho has only one center and a capacity of about 200. (See app. IV for a listing of the capacity within each state with a Job Corps center.) As shown in figure 2, Job Corps centers in 9 states had the capacity to serve over 2,000 Job Corps participants annually, whereas centers in 10 states could serve fewer than 500 participants annually. Nationwide, 41 percent of the approximately 64,000 program year 1994 Job Corps participants (about 44 percent in program year 1993) who lived in states with Job Corps centers were assigned to centers outside their home state. Openings at centers located in their states of residence were often filled by participants from other states. Those participants assigned out of state travel greater distances than those who are assigned to an in-state center. Yet, even when assigned out of state, participants tend to stay within the Labor region in which they reside. Regardless of where they are assigned, participants tend to be employed in their state of residence. Considerable variation existed among the states in the extent to which Job Corps participants were assigned to out-of-state centers (see fig. 3). In program year 1994, the majority of Job Corps participants from 15 states were assigned to centers outside their home state. For example, more than three-quarters of the Job Corps participants from Colorado, Illinois, South Carolina, and Wisconsin were assigned to centers in states other than the one in which they lived. On the other hand, less than a quarter of the youths in 16 states were assigned to out-of-state Job Corps centers. For example, less than 15 percent of the Job Corps participants from Minnesota, Nevada, New Jersey, and New York were assigned to centers outside their home state. (App. 
V lists the states included in each of the percentage groupings shown in fig. 3.) While substantial numbers of participants are assigned to out-of-state centers, the vast majority of all participants are assigned to centers within the Job Corps regions in which they reside. Nearly 95 percent of program year 1994 participants (92 percent in program year 1993) were assigned to a Job Corps center that was located in the same region as their residence. In 7 of Labor's 10 regions, over 90 percent of Job Corps program participants were residents of the regions in which they were assigned, and in the remaining 3 regions, over 80 percent were regional residents. A portion of the remaining 5 percent who were transferred outside their region were assigned under agreements between regional directors to send participants to centers in other regions. For example, the director in region II said that he has an agreement to send approximately 150 youths to region I and 250 youths to region IV. The director in region IX assigns 400 to 600 youths to the Clearfield, Utah, center in region VIII and another 200 youths to region X. Job Corps participants assigned to centers outside their state of residence were sent to centers that were, on average, over 4 times as distant as the in-state center closest to a participant's residence. For the approximately 26,000 youths leaving the program in program year 1994 who were assigned to out-of-state Job Corps centers, we compared the distances from their home to (1) the center to which they were assigned and (2) the in-state center nearest their residence. In 92 percent of the cases where participants were assigned out of state, there was an in-state Job Corps center closer to the participant's home. On average, participants assigned to out-of-state centers traveled about 390 miles, whereas the closest in-state center was about 90 miles from their residence.
For example, about 2,200 Florida residents were assigned to Job Corps centers in other states, traveling on average about 640 miles to attend those centers. In contrast, these participants would have traveled, on average, only about 70 miles had they been assigned to the nearest Florida center. We noted that while residents in many states were being assigned to out-of-state centers, a substantial number of nonresidents were being brought in and enrolled at in-state centers. For example, in program year 1994, of the approximately 1,000 Arkansas residents in Job Corps, about 600 (or 60 percent) were assigned to out-of-state centers. Yet, about 600 nonresidents were brought in to centers in Arkansas from other states. Similarly, in Georgia, 1,300 residents from that state were assigned to Job Corps centers located elsewhere, whereas about 1,900 individuals residing in other states were brought in to centers located in Georgia. Figure 4 shows states with large numbers (500 or more) of residents sent to out-of-state centers while large numbers of nonresidents were brought in-state. (App. VI provides, for each state, the number of nonresidents brought in from other states, as well as the number of residents sent to out-of-state centers, for program years 1994 and 1993.) Assigning participants to Job Corps centers outside their state of residence resulted in wide variations in the number of nonresidents at individual Job Corps centers nationwide. The majority of participants served at about one-third of the centers were out-of-state residents. Overall, we found that in 38 of the 113 Job Corps centers operating in program year 1994, 50 percent or more of the participants resided outside the state in which the center was located (see fig. 5). 
Fifteen centers had 75 percent or more nonresidents enrolled during program year 1994, and the 9 centers with the most nonresidents (85 percent or more) were located in Kentucky (6 centers), California (1), Utah (1), and West Virginia (1). Because program capacity in Kentucky, Utah, and West Virginia exceeded in-state demand, large numbers of nonresidents attended centers in these states. California, on the other hand, had insufficient capacity. Nonetheless, the number of nonresidents at the California center may have been high because it provided advanced training for participants who previously had completed some basic level of training at centers across the nation. Forty-seven centers had less than 25 percent nonresidents enrolled, including 30 centers with less than 10 percent of their program participants coming from out of state. Regardless of where Job Corps participants were assigned, those who found jobs usually did so in their home state. Of the approximately 42,000 Job Corps participants who obtained jobs after leaving the program in 1994, about 83 percent found jobs in their state of residence (85 percent in program year 1993). Even those participants who were assigned to Job Corps centers outside their state of residence generally returned to their home states for employment. Specifically, of the 18,200 participants obtaining jobs after being trained in centers outside their state of residence, about 13,700 (75 percent) obtained those jobs in their home state (see fig. 6). Regional officials stated that substantial numbers of participants were assigned to centers out of state due, in part, to Labor's desire to fully utilize centers. The other principal reason given was to satisfy participant preferences either to be assigned to a specific center or to be enrolled in a specific occupational training course. According to Labor officials, full utilization of Job Corps centers was one of the principal reasons for assigning participants out of state. 
The Job Corps program does not routinely collect the reasons for out-of-state assignments and, therefore, we were unable to document the specific factors behind these decisions. However, we contacted Labor officials, including each of its nine regional directors--who are ultimately responsible for center assignments--as well as contractors responsible for 15 outreach/screening contracts, to determine what factors contributed to out-of-state assignments. For the most part, these officials stated that one of the reasons for not assigning participants to the center closest to their residence and, instead, to out-of-state centers was to ensure that centers were fully utilized. For example, they pointed out that many residents from Florida were assigned to centers in Kentucky; otherwise, centers in Kentucky would remain underutilized. A similar situation was cited with respect to participants from California assigned to a center in Utah that would otherwise be underutilized. In addition, Labor officials noted that participants were assigned to out-of-state centers to fill openings that occurred throughout the year because participants continuously leave the program due to the program's open-entry, open-exit, self-paced format. Moreover, at any point, there may not be any state residents ready to enroll in the program. Maintaining full capacity in Job Corps centers is one measurement Labor uses in evaluating regional director performance; Labor data indicate that, except for a portion of program year 1994, the program has operated near full capacity during the previous 3 program years. Vacancies can frequently occur at Job Corps centers because of the uneven distribution of program capacity in relation to demand for services, the continuous turnover of participants at individual centers, and the irregular flow of participants into the program. Labor officials said that in program year 1994, Job Corps had an average occupancy rate of about 91 percent programwide. 
Average occupancy rates at the regional level, in program year 1994, ranged from about 83 percent to 97 percent. We found less evidence to support the other principal reason cited for assigning participants to distant centers--the need to satisfy participant preferences, either to attend a particular center or to receive training in a particular occupation. While the Job Corps data system does not provide information on the extent to which such preferences are considered when making assignments, we were able to gain some insight into the degree to which specific vocational offerings might explain out-of-state assignments. We analyzed the occupational training courses in which out-of-state participants were enrolled. We found that over two-thirds of these individuals were either enrolled in occupational courses commonly offered throughout the national network of Job Corps centers or were never enrolled in an occupational course at all. For example, about 13 percent of the participants sent to out-of-state centers were being trained in clerical positions (available at 91 centers), about 8 percent in food service (available at 94 centers), and 8 percent in health occupations (available at 72 centers). In addition, about 11 percent received no specific vocational offering after being assigned to an out-of-state center (see table 1). Thus, specialized training or uncommon occupational offerings do not appear to explain these out-of-state assignments. We were, however, unable to determine whether a training slot in the requested vocational area was available at the closest center when participants were assigned out of state. During our discussions with regional Job Corps officials, some said that they have recently begun to focus more on assigning participants to Job Corps centers that are located in the same state in which they reside. 
Region III officials incorporate in-state assignment goals into their outreach and screening contracts, and a March 1995 regional field instruction states that the region's center assignment plan "now places greater emphasis on the assignment of youth to centers within their own state, or to centers within a closer geographical area." Similarly, other regional officials told us that they are now placing greater emphasis on in-state assignment of youth because of increased congressional interest in having greater state involvement in the program. During program year 1994, the majority of states with Job Corps centers had sufficient capacity to handle virtually all the in-state demand (at least 90 percent of in-state participants) for Job Corps training, but this ability varied substantially among the states. We compared the demand for Job Corps services within each state with the total capacity of the centers located therein. We measured state demand in terms of the number of residents who participated in Job Corps, regardless of whether they attended a center within their state of residence or out of state. Nationwide, 52,000 of the 64,000 Job Corps participants--81 percent (86 percent in program year 1993)--either were or could have been trained in centers in their home state. As shown in figure 7, a total of 27 states had sufficient capacity in their Job Corps centers to accommodate virtually all the program participants from those states, and another 12 states could meet at least 70 percent of the demand. (App. VII lists the states in each of the percentage groupings shown in fig. 7.) We found substantial differences among states in the capacity of in-state centers to serve Job Corps participants from their state. For example, South Carolina had over 1,600 residents participating in Job Corps, but the centers in that state had the capacity to serve only about 440 participants. 
On the other hand, Kentucky had 485 residents in Job Corps, but had the capacity (6,373) to serve about 13 times that number of participants. Although 81 percent of Job Corps participants in program year 1994 either were or could have been served in their state of residence, the remaining 19 percent (over 11,000 youths) lived in states whose centers lacked the capacity to serve all state residents enrolled in Job Corps. For example, centers in California, Florida, Louisiana, and South Carolina each would have been unable to serve over 1,000 Job Corps participants in program year 1994 in their existing centers. Figure 8 shows (for those states where demand was higher than in-state capacity) the states with Job Corps centers that had a demand that exceeded capacity by 500 or more participants. In addition, five states (Connecticut, Delaware, New Hampshire, Rhode Island, and Wyoming) did not have a Job Corps center in program year 1994. These states accounted for about another 1,400 participants who could not be served in their home state. On the other hand, the capacity in eight states was more than double the number of youths from their states in Job Corps. For example, Utah's two centers could accommodate about 2,400 youths, but only about 700 state residents were in the program. Similarly, West Virginia's centers had a capacity for about 1,100 youths, yet only about 300 West Virginia youths enrolled in Job Corps (see fig. 9). The Job Corps program's plan to establish nine new centers over the next 2 years will provide some additional capacity that is needed in states with existing centers, but will increase capacity in three other states to about twice the in-state demand. In addition, a center opened in Connecticut (which had been without a Job Corps center) in May 1996 that will serve about 300 annually. Overall, this expansion will enable the program to serve an additional 4,000 youths in those states that had insufficient capacity. 
For example, planned centers in Alabama, California, Florida, Illinois, and Tennessee will help those states address the shortage of available training opportunities for in-state residents, reducing the shortfall in those states from about 4,700 to 700. However, Job Corps is also planning to add centers in Maine, Massachusetts, and Michigan, providing these states with the capacity to serve nearly twice the number of state residents participating in Job Corps. In commenting on a draft of this report, Labor expressed some concerns with our presentation of certain information that it believed needed greater emphasis and with what it believed were factors we should have considered in carrying out our analysis. For example, Labor said that our characterization of in-state demand was misleading. Furthermore, it said that we did not recognize the limited availability of advanced training and its impact when calculating distance for participants assigned out of state. We have clarified our definition of demand as used in this report and recalculated distance, excluding advanced training participants, which had no impact on our finding. Labor also pointed out recent changes in program emphasis and provided some technical clarification. Labor's comments, along with our responses, are printed in appendix IX. We are sending copies of this report to the Secretary of Labor; the Director, Office of Management and Budget; relevant congressional committees; and other interested parties. Copies will be made available to others on request. If you or your staff have any questions concerning this report, please call me at (202) 512-7014 or Sigurd Nilsen at (202) 512-7003. Major contributors to this report include Dianne Murphy Blank, Jeremiah Donoghue, Thomas Medvetz, Arthur Merriam, and Wayne Sylvia. We designed our study to gather information on how Job Corps is currently operating in terms of where participants are recruited, trained, and placed. 
To do so, we analyzed Labor's Job Corps participant data file and interviewed Job Corps officials and recruiting contractors. To analyze where Job Corps participants are recruited from, assigned for training, and placed in jobs, we used Labor's Student Pay, Allotment and Management Information System (SPAMIS). Among other things, the database contains information on the placement and screening contractor for each participant. We analyzed data on Job Corps participants who left the program during program year 1994 (July 1, 1994, through June 30, 1995), the most recent full year for which data were available. To help determine whether program year 1994 was a unique year with regard to participant assignment, we performed similar analyses on comparable data for program year 1993. Unless otherwise stated, however, all numbers cited in the report reflect program year 1994 data. Our basic population consisted of all participants who left the program during program year 1994 from 113 Job Corps centers. There were 66,022 participants included in this population. Two Job Corps centers have since closed, but participants from these centers were included in our analysis. This basic population was used for the analysis of capacity and average length of stay. We eliminated participant files with missing information or for participants who resided in Puerto Rico or outside the United States. We also eliminated from our analyses those participants from states without Job Corps centers. This brought our analytic population to 64,060. Certain analyses dealt with subpopulations of the basic population. For example, for the analysis of where participants obtained jobs, only those 41,975 cases where the file indicated a job placement were used. For program year 1993, the file indicated that 35,116 participants obtained jobs. 
To determine how far participants traveled when attending out-of-state centers, we calculated the straight-line distance from the participant's residence to the last assigned out-of-state center. The distance was calculated using the centroid--or center--for the zip code of the participants' residence at entry and for the Job Corps center attended. The 5-Digit Zip Code Inventory File--part of the Statistical Analysis System library--provided the centroid's latitude and longitude. These latitude and longitude measures became the basis for the distance computations. To determine whether an in-state center was closer, we calculated the straight-line distance from the participant's residence to the nearest Job Corps center located in the participant's state of residence. We then compared this distance with the distance to the Job Corps center of assignment. Our distance analysis was dependent upon having consistent address and zip code information for the participants' residences and Job Corps centers, and the related longitude and latitude for those zip codes. Longitude and latitude data for locations outside the 50 states were not available. Thus, 989 program year 1994 participants from Puerto Rico were not included in the analysis. Another 680 participants were excluded from the analysis because either their zip code was not consistent with the state of residence information or they were missing state or zip code information. Because our focus for this analysis was on participants who lived in a state with a Job Corps center, we also excluded 1,434 participants who came from states that did not have Job Corps centers; these participants had to be assigned to out-of-state centers. This brought the total of the population for this analysis to 62,391 in program year 1994. This includes all participants regardless of the type of training program in which they participated. 
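The straight-line distance computation described above can be sketched in Python. This is an illustration only: the report states that distances were computed between zip code centroid latitude/longitude points but does not name the exact formula, so a great-circle (haversine) distance is assumed here, and the coordinates in the example are approximate, hypothetical values rather than actual centroid data.

```python
import math

# Sketch of a straight-line (great-circle) distance between two zip code
# centroids, in statute miles. The haversine formula below is a common
# choice for this calculation; it is an assumption, not the report's code.
def centroid_distance_miles(lat1, lon1, lat2, lon2):
    earth_radius_miles = 3958.8  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * earth_radius_miles * math.asin(math.sqrt(a))

# Hypothetical, approximate coordinates (illustrative only): a Miami-area
# centroid to a Prestonsburg, Kentucky-area centroid.
print(round(centroid_distance_miles(25.77, -80.19, 37.67, -82.76)))
```

Distances computed this way are "as the crow flies," which is consistent with the report's caveat that the zip-centroid measure is a gross estimate and actual travel distances may vary.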
Table I.1 presents a summary of the subgroup sizes for analyses performed on program years 1994 and 1993 data:

Table I.1: Subgroup Sizes for Analyses of Program Year 1994 and 1993 Data

                                                            PY 1994    PY 1993
Excluded participant files for missing information             (61)      (337)
Excluded participants not residing in United States,
  District of Columbia, or Puerto Rico                        (467)      (444)
Total terminees in our population                            65,494         --
Total terminees in states without Job Corps centers         (1,434)    (1,670)
Total terminees in states with Job Corps centers             64,060         --
Excluded participant files with longitude and latitude
  data unavailable                                             (989)      (940)
Excluded participant files with inconsistent or missing
  zip code data                                                (680)      (422)

To calculate the program year 1994 capacity of each Job Corps center, we used Labor's listing of residential and nonresidential capacity at any one time (slots) for each Job Corps center and multiplied it by the average number of days in a year (365.25 days). We then divided that number by the average length of stay of program year 1994 terminees at that center. For example, the Carl D. Perkins Job Corps Center in Prestonsburg, Kentucky, had a stated capacity of 245 slots and a program year 1994 average length of stay of 236.56 days. We calculated the yearly capacity of the Perkins Center at 378 participants (245 times 365.25 divided by 236.56). On this basis, we performed center-by-center calculations and aggregated them to the state level to estimate a yearly capacity by state. To estimate in-state demand, we used all program participants from that state, regardless of where they were assigned, as a proxy measure. We recognize that this does not reflect total program demand, which would also include those who are eligible and interested in Job Corps but had not yet enrolled in the program. To obtain information on the process the Job Corps program uses to assign participants to centers, we interviewed Labor officials in the nine regional offices, as well as at headquarters.
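The capacity calculation described above can be sketched in Python. The function name and the truncation to a whole participant count are illustrative choices, not Labor's actual method beyond the stated formula.

```python
# Sketch of the report's annual-capacity estimate:
# yearly capacity = slots * average days per year / average length of stay.
def annual_capacity(slots, avg_stay_days, days_per_year=365.25):
    """Estimate the number of participants a center can serve per year."""
    return int(slots * days_per_year / avg_stay_days)

# Worked example from the report: the Carl D. Perkins Job Corps Center
# (245 slots, 236.56-day average stay) yields about 378 participants.
print(annual_capacity(245, 236.56))  # 378
```

Note that a shorter average length of stay raises a center's estimated yearly capacity, since each slot turns over more often; this is why center-by-center average stays matter when aggregating capacity to the state level.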
Using a semistructured interview protocol, we asked questions related to how participants are assigned to Job Corps centers, including the program's policies and procedures for participant assignments, the responsibilities and documentation requirements for each level of oversight, and the assignment patterns for participants within the regions. Additionally, we asked questions based on the analysis of program year 1993 assignment information (because program year 1994 data were not yet available at the time) that showed the extent to which participants were assigned out of state and out of region. Each official was also asked to comment on the current assignment patterns for participants within their regions. To obtain additional information on the Job Corps participant assignment process, we interviewed a sample of contractors responsible for 15 recruiting contracts. Using the program year 1993 assignment data contained in SPAMIS, we selected the top 16 large-scale recruiting contracts--defined as those that assigned over 300 participants to Job Corps centers--with the highest proportion of participants who were sent out of state. For contrast, we also chose three other recruiting contracts from the same locations that had relatively few out-of-state assignments. Each contractor was interviewed by telephone using a semistructured interview protocol that included questions relating to the Job Corps' participant assignment process. Specifically, we asked about the status of their recruiting contract(s) and their responsibilities and reporting requirements. We also asked the recruiting contractors to identify those factors that had the most impact on their decision on where to assign a participant. Some of the contractors were no longer under contract, and others could not be reached. 
As a result, we interviewed contractors responsible for 13 contracts that had a large proportion of participants recruited for out-of-state centers and 2 contracts that had relatively fewer participants going out of state. While our questions were based on the analysis of program year 1993 assignment information, we also asked each recruiting contractor to comment on his or her current student assignment patterns. We selected recruiting contractors to interview on the basis of their assignment of participants to centers outside participants' states of residence. This selection process was not random and, therefore, the results reported cannot be generalized to recruiting contractors overall. Our distance analysis was based upon zip code centroid and is intended to provide a gross measure of distance. Actual travel distances may vary. The average length of stay of participants at Job Corps centers can show some variation from year to year, as would the estimated center capacity when calculated from this number. To illustrate these variations, we have presented program year 1993 data alongside data for program year 1994 (see app. II). While we did not verify the accuracy of the SPAMIS data provided by Labor, we did check the consistency of participants' zip code and state of residence data and eliminated those files with inconsistent information. We also compared the results from our analyses of program year 1994 data with those from program year 1993 for consistency at the national, regional, and state levels. 
[Appendix table: data values not recoverable. The table reports, by state and by center: percentage of participants assigned to centers in their home state; percentage sent to centers in other states; number of states assigning 0-24, 25-49, 50-74, and 75 or more percent of state residents out of state; percentage of Job Corps participants assigned to centers in the same region as their residence; average distance traveled (in miles) by participants assigned to out-of-state centers; average distance (in miles) to the nearest in-state center for those participants; number of centers having 0-24, 25-49, 50-74, and 75 or more percent of participants from out of state; number and percentage of participants obtaining jobs in their home state; number of participants who were or could have been trained in state; and number of participants who were brought in from other states.]

The centers in Alaska and North Dakota (one in each state) were not fully operational in program year 1993.

The following are GAO's comments on the Department of Labor's letter dated June 3, 1996.

1. The legislative language relating to the assignment of enrollees to Job Corps centers is included in the Background section of the report.

2.
We have modified our report to note that the Job Corps regional operations are carried out under the direction of nine regional managers.

3. We agree that participants transferring into advanced training may be required to travel additional miles to attend this training. To respond to Labor's comments, we attempted to identify all the participants included in our analysis who transferred into advanced training courses. We were able to identify all participants who transferred from the original center to which they were assigned, regardless of the reason for transfer, but the information was not available to identify those specifically transferring to advanced training programs. Nonetheless, eliminating from our analysis the over 1,800 participants who transferred between centers did not change our findings. The average distance traveled by participants assigned to out-of-state centers was 375 miles, compared with about 390 miles when including the over 1,800; the distance to the nearest in-state center remained the same--93 miles. Thus, our finding--that participants assigned to centers outside their state of residence were sent to centers that were, on average, over 4 times as far as the closest in-state center--is unchanged.

4. We have modified our report, where appropriate, to indicate that our use of the term "demand" is limited to only those enrolling in Job Corps and that it does not include those who are eligible and interested in the program but have not yet enrolled.

5. Our report provides a separate section with a caption that highlights that program participants are employed in their state of residence.

6. We have clarified our report to recognize that the high number of nonresidents in the California center cited may have been due to the nature of the training offered, that is, the center provided advanced training to participants from across the nation.

7.
The reasons for assigning participants to out-of-state centers cited in our report are based on comments by those involved in deciding where enrollees are actually assigned--the nine regional directors and several outreach/screening contractors. The principal reasons cited were to fully use available space at the centers and to satisfy participants' preferences either to attend a specific center or to enroll in a specific occupational training course.

8. As suggested, we have included a statement in the Results in Brief section that recognizes our inability to determine whether specific vocational training slots were available at the closest center when participants were enrolled.

9. We have included a statement on page 4 of our report to recognize Job Corps' proactive role in ensuring that the program works more closely with state and local agencies.

Job Corps: Comparison of Federal Program With State Youth Training Initiatives (GAO/HEHS-96-92, Mar. 28, 1996).

Job Corps Program (GAO/HEHS-96-61R, Nov. 9, 1995).

Job Corps: High Costs and Mixed Results Raise Questions About Program's Effectiveness (GAO/HEHS-95-180, June 30, 1995).
Pursuant to a congressional request, GAO reviewed the: (1) locations of Job Corps centers and their capacity by state; (2) extent to which Job Corps participants are trained and placed in jobs in the state in which they reside; and (3) reasons why participants are sent to centers outside their state of residence. GAO found that: (1) Job Corps program capacity differs among states because the number of centers in each state differs and the size of individual centers within each state differs; (2) in 1994, 41 percent of the 64,000 participants who lived in states with Job Corps centers were assigned to centers outside their home state; (3) the extent of out-of-state assignments varied among states; (4) participants assigned to centers outside their home state were sent to centers that were, on average, over 4 times as distant as the closest in-state center; (5) in many states, Job Corps residents were sent to out-of-state centers, while nonresidents were enrolled at in-state centers; (6) the number of nonresidents varied among individual Job Corps centers during 1994; (7) regardless of where participants were assigned, those who found jobs usually did so in their home state; (8) participants were assigned to centers outside their home state to fully utilize centers or to satisfy particular vocational preferences; (9) the recent trend has been to assign program residents to in-state centers; (10) in 1994, most in-state Job Corps centers had sufficient capacity to accommodate almost all in-state Job Corps participants; and (11) the nine new centers will provide some needed additional capacity in some states and increase capacity in three states to about twice the in-state demand.
We have identified numerous challenges related to the government's management of its real property, including issues pertaining to using and disposing of underutilized and excess property, an overreliance on leasing, and having unreliable real property data to support decision making. The government has made progress reforming real property management after we designated it high risk in 2003. However, it has not yet fully addressed the underlying challenges that hamper reform, such as those related to environmental cleanup and historic preservation, a lack of accurate and useful data to support decision making, and competing stakeholder interests that make it difficult to dispose of real property. In the meantime, the federal government continues to retain more real property than it needs. To address the excess and underutilized property the government holds, previous and current administrations have implemented a number of cost savings initiatives associated with better managing real property. For example, in May 2011, the administration proposed legislation--the Civilian Property Realignment Act (CPRA)--which, among other things, would have established a legislative framework for consolidating and disposing of civilian real property as a means of generating savings to the federal government. Although CPRA and other real property reform legislation introduced in the previous session of Congress have not been enacted, according to the President's budget request for fiscal year 2014, the administration will continue to pursue enactment of CPRA. Most recently, OMB issued guidance for implementing the administration's Freeze the Footprint policy, which requires agencies to document their efforts to restrict any growth in the size of their domestic office and warehouse inventories.
The June 2010 presidential memorandum required federal agencies to achieve $3 billion in cost savings by the end of fiscal year 2012 from increased proceeds from the sale of assets; reduced operations, maintenance, and energy expenses from asset disposals; or other efforts to consolidate space or increase occupancy rates in existing facilities, such as ending leases or implementing telework arrangements. Agency actions taken under the memorandum were to align with and support previous administration initiatives to measure and reduce greenhouse gas emissions in federal facilities and consolidate data centers. The memorandum also required the Director of OMB, in consultation with the Administrator of GSA and the Federal Real Property Council (FRPC)--an interagency group responsible for coordinating real property management--to develop guidance for actions agencies should take to carry out the requirements of the memorandum. In July 2010, OMB issued guidance that identified specific steps agencies could take to meet the requirements of the June 2010 memorandum. For example, the guidance required agencies to develop a Real Property Cost Savings and Innovation Plan that was to identify the real property cost-savings initiatives under way and planned by the agency, the agency's proposed share of the $3-billion savings target, and actions to achieve the proposed target. The guidance specified that the $3 billion in real property cost savings by the end of fiscal year 2012 would be measured through 1) capturing eliminated operating costs; 2) increasing the income generated through disposals; and 3) better utilizing existing real property by undertaking space realignment efforts, including optimizing or consolidating existing space within owned buildings. The agency cost savings were to reflect net savings, factoring in the costs incurred by the agency to achieve the intended result. 
After agencies developed their initial cost savings plans, OMB established four cost-savings categories in 2011 that agencies were to use for reporting savings: disposal, space management, sustainability, and innovation (see table 1). OMB used the administration's Performance.gov website to track agencies' reported savings; the website also listed individual agencies' cost-savings targets as a share of the $3-billion cost-savings goal and the cost-savings measures agencies planned to implement to achieve their targets. As stated previously, we have identified problems with the estimates from selected agencies to meet their savings targets. Overall, agencies reported $3.8 billion in cost savings from fiscal year 2010 to fiscal year 2012 across the OMB categories of disposal, space management, sustainability, and innovation. The largest cost savings were from space management activities, which accounted for more than half of the total savings reported. Civilian agencies reported $3.1 billion in cost savings over the fiscal year 2010-to-2012 time period, and DOD accounted for the remainder of the savings reported. The six selected agencies we reviewed (GSA, DHS, DOE, DOJ, State, and USDA) accounted for $2.3 billion, or 74 percent, of the total savings reported by civilian agencies. Similar to the savings reported by all agencies, the six agencies we reviewed also reported the majority of savings from space management activities. Specifically, space management activities accounted for 70 percent of the savings reported by the six agencies, followed by disposal (17 percent), innovation (9 percent), and sustainability (4 percent). Table 2 summarizes the cost savings reported by category for all agencies and the six selected agencies. The overall savings reported by the agencies we reviewed ranged from $238 million reported by DHS to $580 million reported by DOE.
All six agencies reported savings from space management activities, five agencies reported disposal and sustainability savings, and two agencies reported innovation savings (see fig. 1). All of the agencies in our review determined their reported savings by identifying activities that were under way or planned at the time the June 2010 memorandum was issued. In particular, the requirements of the memorandum and subsequent guidance issued by OMB specified that agencies were to report savings from ongoing and planned activities. For example, the June 2010 memorandum specified that agency actions should align with and include activities undertaken in response to two previous initiatives meant to improve the performance of federal facilities. As such, USDA officials told us that they reported sustainability savings identified in the agency's Strategic Sustainability Performance Plan required by a previous executive order. State officials told us they reported savings from data center consolidations carried out under a previous presidential initiative. In addition, the subsequent guidance issued by OMB in July 2010 also stated that agencies were expected to focus on real property cost savings initiatives under way and planned in developing their Real Property Cost Savings and Innovations Plans. As a result, for example, DOE officials stated that they did not identify any new cost savings to meet their cost-savings target and DOJ officials told us they obtained information from their bureaus about projects already planned or ongoing to identify the "low-hanging fruit" for potential cost savings. Based on our discussions with agency officials in our review, we identified two additional factors that led agencies to report savings from ongoing and planned activities: the individual cost-savings targets established for each agency, and the timeframes set forth by the memorandum.
Cost savings targets: Individual cost savings targets played a role in how the agencies in our review determined their reported savings. Agency officials told us that they developed initial targets, as required by the July 2010 OMB guidance, by estimating the savings that could be derived from activities planned or underway at the time the memorandum was issued. However, according to agency officials, OMB subsequently increased the targets in 2011. OMB staff told us that the revised targets were meant to be realistic and also to encourage agencies to think beyond the traditional savings associated with real property. To assist agencies in identifying additional savings areas, OMB developed a best practices document that highlighted various types of savings that could be reported consistent with the requirements of the June 2010 memorandum. Most of the agency officials in our review told us that they did not have difficulty in meeting their revised targets after having discussions with OMB about the variety of savings that could be included. However, two agencies in our review, GSA and DHS, reported savings that fell short of their savings targets. According to GSA officials, the agency was conservative in reporting its overall savings achieved and only reported savings that could be supported by documentation that, according to GSA, were in the spirit of the memorandum. DHS officials told us that its revised savings target was not realistic in terms of the savings the agency could achieve in the 2-year timeframe established by the memorandum. Officials from USDA told us that once they had exceeded their cost savings target, they did not consider other areas for reporting potential savings that might have been achieved. Table 3 highlights the initial savings targets that the agencies proposed in their cost savings plans, compared to the savings targets that were established on Performance.gov and the savings that were ultimately reported. 
Time frames: Officials from some of the agencies in our review also told us that the time frames set forth by the memorandum drove them to report savings from activities that were already planned or under way. For example, DHS officials told us that the disposal savings they reported were from disposals that occurred during the 3-year time period specified in the memorandum, but were planned before the June 2010 memorandum was issued. DHS officials told us that, on average, disposals take 3 to 5 years to accomplish. Similarly, GSA officials told us that they use a tier system to evaluate the condition of their assets and place into the disposal category those assets that the agency plans to dispose of in the next 5 years. Thus, some of the disposal savings GSA reported were from assets it had already planned to dispose of at the time the June 2010 memorandum was issued and that were subsequently disposed of by the end of fiscal year 2012. In addition, given that it takes several years for savings from real property initiatives to be realized, agency officials told us that the timeframes established by the memorandum made it more likely for savings to be reported in certain categories over others. For example, GSA officials told us that when the memorandum was first issued there was an expectation that the largest cost savings would be reported from disposals, but this did not transpire in part because of the time it takes to dispose of properties. Agency officials also told us, and we have found in prior work, that the costs associated with disposals are often significant, making it difficult to realize disposal savings in a short time period. Similarly, we found that agencies did not report a large amount of innovation savings over the time period covered by the memorandum compared to other categories. 
Agency officials in our review told us that savings from innovation activities, particularly those resulting from telework initiatives, will increase in the future as telework is implemented more widely. For example, DHS officials told us that while they only reported $2 million in innovation savings stemming from their headquarters' flexible workspace initiative over the 2010-to-2012 time period, the agency expects to achieve greater departmentwide savings starting in 2013 as the initiative is more widely implemented. Finally, agency officials in our review told us that reporting savings from cost avoidance measures--those savings that resulted because a planned action did not take place--was necessary to meet their targets in the timeframe required by the memorandum. For example, agencies reported space management savings as a result of not pursuing an approved lease prospectus for additional space or from reduced budgets for planned real estate activities, in addition to savings that were the result of consolidating space or terminating leases. The following examples illustrate some of the largest cost avoidance and savings measures reported by our selected agencies: DOE reported $412 million in space management savings based on funds related to real property expenditures it would have requested in its fiscal year 2011 and 2012 budgets for the Yucca Mountain Nuclear Waste Repository project (Yucca Mountain). DOE had terminated its licensing efforts and shut down the project in 2010. In addition, DOE officials told us that after their initial savings target was increased, they included deferred maintenance eliminated by disposals in their reported cost savings. DHS reported $126 million in space management savings from not pursuing a lease prospectus for 1.8-million square feet in new building space to accommodate employees the agency anticipated hiring. 
State reported $80 million in innovation savings over the fiscal year 2010-to-2012 time period from property exchanges, in which the agency swaps one of its properties to acquire another property. State also reported $58.2 million in space management savings because the agency was appropriated less than what it requested in its 2010 and 2011 budgets for a particular account used for security, rehabilitation, and repairs at its facilities. State included savings from the property exchanges and funding received that was lower than its budget request after their initial savings target was increased. USDA reported $229 million in space management cost savings from funds that Congress rescinded from the agency's appropriations for 55 construction projects for Agricultural Research Service buildings and facilities. For example, $17 million in previously appropriated funds were rescinded for a research laboratory in Pullman, Washington, and about $16 million was rescinded for a national plant and genetics security center in Columbia, Missouri. USDA officials told us these project rescissions were included in the agency's reported savings after its initial savings target was increased by OMB. The guidance OMB provided to agencies for implementing the requirements of the June 2010 memorandum was unclear and lacked reporting standards. The unclear guidance led the agencies in our review to interpret the guidance differently and report savings inconsistently. Specifically, the guidance did not establish common ground rules, such as a clear definition of the term "cost savings," that according to our cost estimating and assessment guide, help ensure that data are consistently collected and reported. 
In particular, agency officials in our review told us that there was some uncertainty about the types of savings that could be reported, particularly whether cost avoidance measures could be reported, for example: GSA officials told us that the OMB guidance was not specific about whether cost avoidance measures could be included in the reported savings. These officials stated that this was a challenge in determining the cost savings that could be reported in response to the June 2010 memorandum. State officials also told us they were initially unsure whether they could report the cost avoidance associated with the previously mentioned reduction in their budget as savings, as well as savings from value-engineering improvements. DOJ officials told us there was uncertainty about whether or not cost avoidance savings could be included and whether to include only those savings that were actual budgetary savings, or if savings that were reprogrammed for other purposes could also be included. Although some agency officials in our review told us that the guidance was not clear on what could be considered a savings, all of the agencies in our review reported savings from cost avoidance measures, as previously discussed. In addition, the guidance and categories established by OMB on Performance.gov were broad. Agency officials in our review told us that they worked with OMB staff to understand the types of savings that could be reported under these categories. However, the categories lacked specific detail and standards for how the savings should be determined and reported to help ensure reliability. For example, for the disposal category, agencies were to report operations and maintenance costs avoided during the fiscal year 2010-to-2012 time period. However, it did not specify for how long agencies were supposed to capture these costs. 
As a result, the five agencies in our review that reported disposal savings made their own assumptions about the length of time in which to report savings from eliminated operations and maintenance costs. For disposals in the year 2010, for example, some agencies reported 1 year of operations and maintenance savings in the year in which the disposal occurred, whereas other agencies reported up to 3 years of operations and maintenance savings for disposals occurring in 2010 (see table 4). USDA officials told us that they believed it would not be fair to count more than 1 year of operations and maintenance savings for each of their disposal properties, whereas DOE officials told us that they reported up to 3 years of annualized operations and maintenance savings if a property was disposed of in fiscal year 2010 because, as discussed previously, OMB's overall guidance encouraged agencies to look for savings from fiscal year 2010 through 2012. Similarly, OMB guidance did not specify whether agencies could report cost savings from deferred maintenance. We found that two of the five agencies--DOE and GSA--reported the eliminated deferred maintenance or repair and alteration costs associated with their disposals while three agencies did not. We also found instances where agencies reported similar types of savings in different categories. For example, savings associated with eliminating leases were included in the space management category on Performance.gov and we found that State reported them as such; however, we found that DHS reported savings from eliminating leases as disposal savings. Similarly, GSA reported savings from property exchanges under space management, while State reported this type of savings under innovation. The OMB guidance did not specify how these types of savings were to be reported. Our guide for assessing the reliability of data identifies consistency as a key component of reliability. 
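The effect of the differing operations-and-maintenance assumptions shown in table 4 can be illustrated with a small sketch (hypothetical dollar figures; the one-year and up-to-three-year rules paraphrase the USDA and DOE interpretations described above):

```python
WINDOW = (2010, 2012)  # fiscal year window set by the June 2010 memorandum

def om_savings_one_year(annual_om_cost, disposal_fy, window=WINDOW):
    # USDA-style interpretation: count one year of eliminated operations and
    # maintenance costs, in the disposal year only.
    return annual_om_cost if window[0] <= disposal_fy <= window[1] else 0

def om_savings_through_window(annual_om_cost, disposal_fy, window=WINDOW):
    # DOE-style interpretation: count annualized operations and maintenance
    # savings for every remaining year of the window after disposal.
    if disposal_fy > window[1]:
        return 0
    return annual_om_cost * (window[1] - max(disposal_fy, window[0]) + 1)
```

Under these rules, the same fiscal year 2010 disposal with $100,000 in annual operations and maintenance costs would be reported as $100,000 in savings by one agency and $300,000 by another, which is the kind of inconsistency the unclear guidance allowed.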
In particular, consistency is important for ensuring that data are clear and well defined enough to yield similar results in similar analyses. However, as the previous examples illustrate, the lack of detailed standards and use of broad cost-savings categories led agencies in our review to interpret the guidance differently and report cost-savings information inconsistently. OMB staff told us that the cost-savings categories established on Performance.gov were intentionally broad to encourage innovation in the types of savings that could be achieved through better management of real property. However, the inconsistencies we identified make it difficult for the reported savings to have collective meaning that is reliable for decision-makers. In addition to interpreting the OMB guidance for implementing the June 2010 memorandum differently, we also found several instances in which agencies' reported savings did not meet the requirements of the memorandum and guidance. For example, OMB's guidance specifically stated that agencies should report the net savings, which factor in costs to achieve savings, in their overall savings total. Despite this, we found instances in which some agencies did not deduct costs in their reported savings, for example: State and DHS did not deduct costs associated with disposals in their reported savings. State officials told us that the costs associated with the approximately $114 million in disposal savings reported over the 2010-to-2012 time period were about $4 million. DHS officials told us that costs were not deducted in the demolition of DHS-owned assets. DHS reported $565,000 in annual operating-cost savings from 54 demolitions in fiscal year 2011 and almost $2 million in annual operating-cost and rent savings from 245 demolitions in fiscal year 2012. DHS officials did not know the costs associated with these demolitions.
DOE deducted the costs associated with some of its reported disposal savings, but did not do so if the disposals were carried out by its Office of Environmental Management. DOE officials told us that after discussions with OMB staff, they decided not to deduct the costs associated with disposals carried out by this program office because of its mission to deactivate and decommission contaminated facilities. DOE estimated in its initial cost savings plan that including these implementation costs would have resulted in a net loss of almost $900 million for the agency's disposals over the fiscal year 2010-to-2012 time period. DHS reported $2 million in innovation savings from reducing space due to a pilot flexible workspace initiative but, according to DHS officials, inadvertently did not deduct the one-time costs associated with reconfiguring the space in its overall reported savings. According to DHS officials, the one-time costs to reconfigure the space would have equaled 75 percent of the 1-year savings the agency has realized. We also found instances where agencies reported cost savings outside of the fiscal year 2010-to-2012 time period required by the June 2010 memorandum, for example: GSA reported $50 million in space management savings from purchasing a building in 2012 that the agency previously leased. GSA reported these savings based on the purchase price, which was $50 million less than the most recent appraisal of the building. Although GSA expects to realize savings over time from purchasing this building instead of leasing it, it is unclear that the difference between the purchase price and the appraised value would represent savings, if for example, no buyers are willing to pay the appraised value. 
Furthermore, while we have found that ownership is often more cost effective than leasing in the long term, GSA would have realized only a small fraction of the savings related to ownership that would have accrued during the timeframe established by the June 2010 memorandum. Similarly, GSA reported $10 million in space management savings from a property exchange with the City of San Antonio, in which the agency exchanged a courthouse and training facility for a parcel of land to construct a new courthouse. However, GSA did not obtain ownership of the city's site until 2013, and the city will not take ownership of the GSA property until the construction of the new courthouse is completed, a date that has yet to be determined. GSA officials told us that they reported these savings in response to the June 2010 memorandum because the agreement to enter into the exchange occurred in 2012. USDA reported rent savings from office closures, some of which did not occur until fiscal year 2013. According to USDA officials, some of the office closures that had been planned for fiscal year 2012 were delayed and did not occur until fiscal year 2013. These 21 office closures accounted for about $4 million of the savings reported by the agency. DOJ reported more than $2 million in savings from consolidating six community corrections offices and the National Institute of Corrections Academy, which, according to Bureau of Prisons officials, took place in 2007 and 2008. Officials from the Bureau of Prisons stated that the reported savings were based on the estimated rent or lease amounts the agency would have incurred in the time period covered by the memorandum through renewed agreements, had the consolidations not occurred. Finally, we found instances where some agencies in our review reported savings from non-real estate activities in their totals.
For example, DHS included $30,000 in reduced transit benefits in the $2 million in innovation savings it reported from increased telework due to its flexible workspace initiative, and GSA reported $11.6 million in sustainability savings from a reduction in its fiscal year 2012 budget for travel costs and building studies. GSA officials told us that building studies involve on-site inspections, and therefore require travel, and that GSA considers decreases in travel and building studies both economically and environmentally sustainable. However, it is unclear how these savings relate to reducing energy use for the agency's assets. OMB staff told us that the savings reported by the agencies should have been tied to real property, and that if an activity was the result of a real estate action, then the savings was justifiable. The guidance issued by OMB was specific to implementing the June 2010 memorandum for achieving $3 billion in real property cost savings, an initiative which was completed as of September 30, 2012. We also found that the documentation of agencies' reported savings was limited because OMB did not establish specific standards that required agencies to provide detailed information in support of their reported cost savings or identify how OMB planned to review the savings agencies reported. According to our cost-estimating and assessment guide, validating cost estimates, including savings estimates, is a best practice for ensuring that estimates are well-documented, comprehensive, accurate, and credible. However, OMB staff told us that they did not have the resources to review a detailed accounting of agencies' reported savings and, instead, required agencies to provide quarterly summaries highlighting the savings the agencies planned to report along with any success stories of unique savings examples. 
In addition, OMB staff told us they were in constant communication with the agencies about their reported savings, as well as with other OMB staff knowledgeable about the agencies' budgets and programs, to ensure that the reported savings met the requirements of the memorandum. In reviewing the information provided by the agencies, OMB staff told us they had identified instances where agencies reported savings that did not meet the requirements of the June 2010 memorandum--for example one agency reported savings that occurred outside the timeframes established in the memorandum-- and adjusted the agency's overall savings on Performance.gov accordingly. However, OMB staff also told us that it may be more efficient to obtain detailed documentation of an agency's reporting upfront, to limit the amount of follow-up required. In addition, OMB did not include detailed information about the types of savings agencies reported in response to the memorandum on Performance.gov. For example, Performance.gov summarizes the total cost savings reported by each agency in each of the cost savings categories, and includes general information about the types of activities agencies reported as savings, but does not include specific information about the types of savings that were included in the totals. As a result, the overall transparency of information on Performance.gov is limited for understanding the types of savings agencies reported across the categories. Our cost estimating and assessment guide has shown that a key factor for ensuring the reliability and transparency of cost estimates, including savings estimates, is that they include an appropriate level of detailed documentation, including source data, so that they can be recreated, updated, or understood. 
As part of this review, we obtained more detailed documentation supporting the agencies' reported savings, which allowed us to identify the issues illustrated in this report and understand the types of savings agencies reported to meet the requirements of the memorandum. Requiring more detailed documentation and establishing a more systematic process for reviewing and validating the reported cost savings could have allowed OMB to identify, in a timely manner, some of the reporting inconsistencies that resulted, while also ensuring that the savings met the requirements of the memorandum. Furthermore, including more detailed information on Performance.gov could have enhanced the transparency of actions agencies took to generate and report savings in response to the memorandum. The June 2010 memorandum had positive effects in the view of agency officials. For example, some agency officials told us that the memorandum allowed them to accelerate projects in their pipeline or gave them a stronger basis to encourage savings opportunities within their agencies. Agency officials also told us that the memorandum provided them with a better understanding of cost savings opportunities within their agencies, particularly those agencies in which real property-management decisions are decentralized, stating that it allowed them to have a more comprehensive view of opportunities to collectively improve the agency's real property footprint. Finally, some agency officials cited improved collaboration among agencies and with OMB on real property issues. OMB staff told us that the memorandum served as an important first step for informing future real property reform efforts and that this initiative laid the groundwork for thinking about real property more holistically as a management tool.
In particular, OMB staff said that the memorandum encouraged agencies to think more creatively about ways in which real property can be reformed to generate savings, not just from a budgeting perspective, but through more innovative uses of real estate. In discussions on our findings, OMB staff stated that while they initially did not want to be too prescriptive in how agencies responded to meeting the requirements of the memorandum, they recognized that more detailed guidance, as well as more refined cost-savings categories and metrics, are likely needed in future real property cost savings initiatives. Furthermore, OMB plans to use the lessons learned from this initiative to emphasize program outcomes and increase the transparency of future real property reform efforts. The current fiscal climate and emphasis on good management practices will continue to place pressure on federal agencies to find additional opportunities for cost savings. Better managing the government's real property footprint through disposing of excess property and managing existing assets more efficiently will play a role in efforts to realize such savings. Although the cost savings initiative established by the June 2010 memorandum is now complete, recent initiatives, like OMB's Freeze the Footprint policy and proposed CPRA legislation, place an emphasis on generating space and cost savings to the federal government. As agencies continue to identify ways to improve the management of their real property in response to these and other initiatives, such as increasing telework, it is critical to ensure that any savings reported as a result of such improvements are meaningful and transparent. However, as our review has demonstrated, clear and specific standards are needed to ensure that savings data are consistently reported and reviewed so that they are reliable and transparent enough to document performance and support decision making.
Without more detailed standards for identifying and reporting cost savings across federal agencies, decision-makers will be ill-equipped not only to assess the results of real property reform efforts in the federal government, but also to take actions that will maximize these savings in the future. To improve future real property cost-savings initiatives and promote reliability and transparency, we recommend that the Director of OMB, in collaboration with FRPC agencies, develop clear and specific standards for: identifying and reporting savings that help ensure common understanding of the various types of cost savings; consistently reporting across categories and agencies; and sufficiently documenting, validating, and reviewing results. We provided a draft of this report to OMB, DHS, DOE, DOJ, GSA, State, and USDA for review and comment. OMB generally agreed with our recommendation. Specifically, OMB stated that the June 2010 memorandum had positive effects on federal real-property management, and acknowledged that there are opportunities to improve future cost-savings efforts, as identified in our report. OMB stated that our recommendation was generally reasonable as it applies to prospective initiatives that directly address cost savings. DHS, DOE, and GSA provided technical comments that we incorporated as appropriate. DOJ, State, and USDA had no comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees; the Director of OMB; the Administrator of GSA; the Assistant Attorney General for Administration, Department of Justice; the Secretaries of Homeland Security, Energy, State, and Agriculture; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. Our review focused on the administration's June 2010 memorandum that directed federal agencies to achieve $3 billion in real property cost savings by the end of fiscal year 2012. Our objectives were to (1) describe the cost savings agencies reported in response to the June 2010 presidential memorandum and how those savings were identified by selected agencies and (2) determine the extent that selected agencies' reporting of savings was reliable and transparent and how, if at all, the reporting of real property cost savings could be improved. To address these objectives, we reviewed the June 2010 memorandum and subsequent guidance issued by the Office of Management and Budget (OMB) to understand the requirements of the memorandum, including the types of savings that could be reported and how those savings were to be reported. We also reviewed our prior work on excess and underutilized property to understand issues previously identified with agencies' reported cost savings. To describe the cost savings agencies reported in response to the June 2010 memorandum and how those savings were identified by selected agencies, we reviewed and analyzed information on the administration's Performance.gov website, including agencies' individual cost savings targets, the total amount of savings reported by the agencies at the end of fiscal year 2012, and the amount of savings reported across the four categories--disposals, space management, sustainability, and innovation--established by OMB. 
We also obtained and analyzed documentation on the cost savings reported by six civilian agencies: the General Services Administration and the Departments of Agriculture, Energy, Homeland Security, Justice, and State. In particular, we reviewed the agencies' Real Property Cost Savings and Innovation Plans developed in response to the June 2010 memorandum and documentation supporting the cost savings reported by each of the agencies. We also conducted in-depth interviews with officials from these agencies to understand the processes they used to identify and report cost savings over the 2010 to 2012 time period. We conducted interviews with OMB staff about the types of savings agencies reported and obtained documentation on the savings our selected agencies reported to OMB. We compared the information on Performance.gov to the documentation provided to us by each of the agencies and to the documentation that the agencies submitted to OMB to identify and reconcile any discrepancies, but did not systematically evaluate or verify the methods agencies reported undertaking to achieve savings, as that was outside the scope of our review. Based on our review of agency documents and interviews with officials, we determined that the data were reliable for the purpose of describing cost savings as reported by the six agencies. We selected the six agencies because they had the largest cost savings targets for civilian agencies, collectively accounting for about 75 percent of the $3 billion savings goal; reported a variety of cost savings measures to achieve their savings target; and had a range of property types in their real property portfolios. 
To determine the extent that selected agencies' reporting of savings was reliable and transparent and to identify how, if at all, reporting of real property cost savings could be improved, we reviewed the agencies' reported cost savings against key factors identified in our data-reliability and cost-estimating guidance. In particular, we analyzed the savings reported by the six agencies in our review to determine whether similar types of savings were consistently reported, met the requirements set forth by the memorandum, and were well-documented. For example, we analyzed the savings the selected agencies reported in each of the categories established on Performance.gov to determine whether the agencies consistently determined the amount of savings reported within each of the categories and whether the agencies reported similar types of savings in the same categories. We also analyzed the savings reported by the selected agencies to determine whether they occurred within the time frames required by the memorandum, included the costs to implement the savings measure, and were tied to a real estate action. Finally, we reviewed the documentation the selected agencies provided to OMB to determine whether the information was clear and detailed enough to support their reported savings and to understand how OMB reviewed the savings reported to ensure they were reliable and met the requirements of the memorandum. We conducted in-depth interviews with officials from our selected agencies as well as OMB staff to further understand how they determined the cost savings reported over the 2010 to 2012 time period, challenges to meeting the requirements of the memorandum, and how similar efforts could be improved in the future. We conducted this performance audit from December 2012 to October 2013 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. David J. Wise, (202) 512-2834 or [email protected]. In addition to the contact named above, David Sausville, Assistant Director; Russell Burnett; Kathleen Gilhooly; Nancy Lueke; Nitin Rao; Amy Rosewarne; and Jack Wang made key contributions to this report.
In June 2010, the President issued a memorandum directing federal agencies to achieve $3 billion in real property cost savings by the end of fiscal year 2012 through a number of methods, including disposal of excess property, energy efficiency improvements, and other space consolidation efforts. GAO was asked to review the cost savings agencies reported in response to the memorandum. This report (1) describes the cost savings agencies reported in response to the June 2010 presidential memorandum and how those savings were identified by selected agencies and (2) determines the extent that selected agencies' reporting of savings was reliable and transparent, and how, if at all, reporting of real property cost savings could be improved. GAO reviewed OMB guidance for implementing the memorandum, reviewed the cost savings agencies reported on the administration's Performance.gov website, and obtained documentation from and interviewed officials from six agencies and OMB staff about the agencies' reported cost savings. GAO selected the agencies based on their overall cost-savings targets and the types of savings measures implemented, among other things. Agencies reported real property cost savings of $3.8 billion in response to the June 2010 presidential memorandum from disposal, space management, sustainability, and innovation activities. Space management savings, defined by the Office of Management and Budget (OMB) as those savings resulting from, among other things, consolidations or the elimination of lease arrangements that were not cost effective, accounted for the largest portion of savings reported by all agencies, and for about 70 percent of the savings reported by the six agencies GAO reviewed--the General Services Administration (GSA) and the Departments of Agriculture (USDA), Energy (DOE), Homeland Security (DHS), Justice (DOJ), and State (State). 
The requirements of the memorandum, as well as agencies' individual savings targets and the time frame for reporting savings, led the selected agencies to primarily report savings from activities that were planned or under way at the time the memorandum was issued. GAO's review of the six selected agencies identified several problems that affect the reliability and transparency of the reporting of cost savings in response to the June 2010 memorandum. In particular, the memorandum and subsequent guidance issued by OMB were not clear on the types of savings that could be reported, particularly because the term "cost savings" was not clearly defined. For example, officials from several agencies GAO reviewed said the guidance was unclear about whether savings from cost avoidance measures could be reported. In addition, the agencies interpreted the guidance differently and, in some cases, did not follow the guidance, practices that led to inconsistent reporting, for example: Agencies made different assumptions in reporting disposal savings: Two agencies reported one year of avoided operations and maintenance savings for the year in which the disposal occurred, while three agencies reported up to 3 years of savings depending on when disposals occurred during the 3-year period. Some agencies did not deduct costs associated with their disposals: State and DHS did not deduct the costs associated with their reported disposal savings. DOE deducted costs for some of its reported disposal savings, but did not deduct costs for disposals carried out by its Office of Environmental Management. Some agencies reported savings outside the time frame of the memorandum: GSA reported savings from a property exchange, but did not obtain ownership of the site until 2013, after the memorandum's fiscal year 2012 deadline. USDA reported savings from office closures that occurred in fiscal year 2013.
Finally, OMB did not require agencies to provide detailed documentation of their reported savings or include specific information about agencies' reported savings on Performance.gov, limiting transparency. Agency officials stated that the memorandum broadened their understanding of real property cost-savings opportunities. However, establishing clearer standards for identifying and reporting savings would improve the reliability and transparency of the reporting of cost savings and help decision-makers better understand the potential savings of future initiatives to improve federal real-property management. GAO recommends that the Director of OMB establish clear and specific standards to help ensure reliability and transparency in the reporting of future real-property cost savings. OMB generally agreed with GAO's recommendation.
FCA is an independent federal regulatory agency responsible for supervising, regulating, and examining institutions operating under the Farm Credit Act of 1971, as amended. The act also authorizes FCA to assess the institutions it regulates to provide funds for its annual operating costs and to maintain a reserve amount for contingencies, as applicable. FCA regulations allow several methods for FCA to assess and apportion its administrative expenses among the various types of institutions it oversees. These institutions include primary market institutions (banks and associations) and related entities that collectively comprise the System, in addition to Farmer Mac (a secondary market entity). As of September 30, 2000, the System (excluding Farmer Mac) included 172 institutions holding assets of about $91 billion; Farmer Mac's assets were about $3 billion. The System is designed to provide a dependable and affordable source of credit and related services to the agriculture industry. FCA regulates and examines Farmer Mac, the secondary agricultural credit market entity, through the Office of Secondary Market Oversight (OSMO), which is an independent office with a staff of two within FCA. Figure 1 depicts the regulatory relationships among FCA, OSMO, the System, and Farmer Mac. Farmer Mac was created to provide a secondary market to improve the availability of agricultural and rural housing mortgage credit to lenders and borrowers. Both the System and Farmer Mac are government-sponsored enterprises (GSE). Although FCA does not receive any funds from the U.S. Treasury for its operating budget, its annual budget is subject to the annual congressional appropriations process, which limits the dollar amount that the agency can spend on administrative expenses. For 2000, that amount was $35.8 million. FCA raises operating funds from several sources, but most of these funds are from assessments on the institutions that it regulates.
Assessments accounted for about 94 percent (including 2 percent for Farmer Mac) of the funding for the FCA's 2000 operating budget, with the balance coming from reimbursable services, investment income, and miscellaneous income (see fig. 2). FCA officials define administrative expenses as generally comprising personnel compensation, official travel and transportation, relocation expenses, and other operating expenses necessary for the proper administration of the act. FCA also has reimbursable expenses, which include the expenses it incurs in providing services and products to another entity. The five other federal financial regulators discussed in this report have oversight responsibility for various types of institutions. Table 1 shows these regulators, along with the types of institutions that they regulate. For purposes of comparison, we group the regulators into two categories according to the types of market primarily or exclusively served by the institutions they regulate, primary and secondary market entities. Of the five regulators, four--FHFB, NCUA, OCC, and OTS--regulate primary market institutions. OFHEO regulates secondary market entities. FHFB regulates the 12 Federal Home Loan Banks (FHLBanks) that lend on a secured basis to their member retail financial institutions. Under certain approved programs and subject to regulatory requirements, the FHLBanks also are authorized to acquire mortgages from their members. By law, federal financial regulators are required to examine their regulated institutions on a periodic basis (e.g., annually). The primary purpose of these supervisory examinations is to assess the safety and soundness of the regulated institution's practices and operations. The examination process rates six critical areas of operations--capital adequacy (C), asset quality (A), management (M), earnings (E), liquidity (L), and sensitivity to market risk (S), or CAMELS. 
The rating system uses a 5-point scale (with 1 as the best rating and 5 as the worst rating) to determine the CAMELS rating that describes the financial and management condition of the institution. Examiners issue a rating for each CAMELS element and an overall composite rating. The results of an examination, among other things, determine the extent of ongoing supervisory oversight. To varying degrees, the regulators also have responsibility for ensuring their institutions' compliance with consumer protection laws. Moreover, two GSE regulators (FCA and FHFB) have responsibilities for ensuring compliance with their respective GSEs' statutory missions. Mission and safety and soundness oversight for Fannie Mae and Freddie Mac are divided. The Department of Housing and Urban Development has general regulatory authority over Fannie Mae and Freddie Mac to ensure compliance with their missions, while OFHEO has the authority for safety and soundness regulation. To meet the first objective, we examined agency budget reports and financial documents and interviewed FCA and Farmer Mac officials. We compared FCA's reported actual administrative expenses (total operating expenses less reimbursable costs) with congressionally imposed limits; reviewed relevant statutes, legislative history, FCA regulations, and FCA legal opinions; and developed a 5-year trend analysis. To address the second objective, we interviewed agency officials, reviewed relevant statutes and regulations, and analyzed data on operational funding obtained from FCA and the five other federal financial regulatory agencies. We selected these five agencies because they use funding mechanisms that are similar to FCA's to support their operating budgets. We did not independently verify the accuracy of the data that the regulators provided or review any agency's accounting records. We obtained comments on a draft of this report from FCA and the five other federal financial regulatory agencies. 
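As a minimal illustration of the CAMELS structure described above, an examination record might be represented as follows. The class and its validation are hypothetical (the report does not describe how regulators store ratings), and the composite is assigned by examiners rather than derived by any formula here.

```python
# Hypothetical record for a CAMELS examination: six component ratings plus
# an examiner-assigned composite, each on a 1 (best) to 5 (worst) scale.
from dataclasses import dataclass

COMPONENTS = ("capital", "assets", "management", "earnings", "liquidity", "sensitivity")

@dataclass(frozen=True)
class CamelsRating:
    capital: int      # C - capital adequacy
    assets: int       # A - asset quality
    management: int   # M - management
    earnings: int     # E - earnings
    liquidity: int    # L - liquidity
    sensitivity: int  # S - sensitivity to market risk
    composite: int    # overall rating issued by examiners

    def __post_init__(self):
        # Enforce the 5-point scale on every element and the composite.
        for name in COMPONENTS + ("composite",):
            value = getattr(self, name)
            if not 1 <= value <= 5:
                raise ValueError(f"{name} rating {value} outside 1-5 scale")

# A higher composite triggers greater ongoing supervisory oversight.
rating = CamelsRating(2, 1, 2, 2, 1, 2, composite=2)
print(rating.composite)  # 2
```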
FCA's comments are summarized at the end of this report. Except for OFHEO, all agencies provided technical comments, which we incorporated as appropriate. We conducted our work from January to July 2001 at FCA headquarters in McLean, VA, and at the headquarters of the other five regulators in Alexandria, VA, and Washington, D.C. We conducted our review in accordance with generally accepted government auditing standards. Over the last 5 years, FCA has reduced expenditures for administrative expenses, reflecting the agency's success in controlling operating costs. Staff reductions--due, in part, to consolidation within the System--have accounted for most of the decline in administrative expenditures. While actual administrative expenditure amounts have varied from year to year, FCA has continued to operate below congressionally approved spending levels. Significant dollar decreases in personnel costs were largely responsible for the drop in administrative spending, a 5.8 percent decline that contrasts with the 8.59 percent growth in overall federal government expenditures. Despite increases in purchases of other contractual services and equipment, administrative costs remained below the 1996 level throughout the second half of the 1990s and into 2000 (see table 2). The decline was not spread evenly over the 5-year period (see fig. 3). Most of the decline occurred in 1996-98, and administrative spending has increased each year since then. For 2001, administrative expenditures are expected to rise by $852,000, or 2.6 percent, over their 2000 level, primarily because of rising costs for personnel, travel, and transportation. Our analysis of FCA data shows that personnel costs accounted for over 80 percent of the FCA administrative expenses during the 5-year study period.
But these costs (staff salaries and benefits) also decreased the most in dollar and percentage terms during the period, falling by about $4.1 million (13 percent), and the share of personnel costs in administrative expenditures fell from 88.7 percent to 81.7 percent. Reductions in benefits were largely responsible for this decline; the amount spent on staff benefits dropped 36.3 percent, falling from $7.3 million in 1996 to $4.6 million in 2000. Decreases in the relocation allowances, severance pay, and buyouts necessitated by the consolidation of the System accounted for most of the decline. FCA officials told us that the number of employees fell almost 15 percent--from 331 in 1996 to 282 in 2000--in part, because of the industry consolidation. The number of institutions in the System dropped by 28 percent, declining from 239 in 1996 to 172 in 2000. For 2001, however, FCA projects personnel costs to increase by 5.3 percent to about $28.8 million. As a result, our analysis shows that these costs will continue to account for a substantial percentage of administrative costs. FCA officials attribute the increase to the rising cost of employee salaries and performance bonuses. Equipment purchases and other contractual services accounted for the largest increases in administrative expenditures in 1996 through 2000. Equipment purchases experienced the largest growth but fell behind contractual services in actual dollar increases. Equipment purchases rose about $1.1 million (from $395,000 in 1996 to $1.5 million in 2000), which was about a 268-percent increase over 1996. According to an FCA official, computer replacements and upgrades, which the agency undertakes every 3 years, accounted mostly for the increase. FCA officials expect equipment purchases to decline $202,000, or about 14 percent, in 2001. Other contractual services represented a growing percentage of FCA administrative costs, increasing from 2.8 percent in 1996 to 6.8 percent of the 2000 total. 
These expenses consisted mostly of consulting services for a new financial management system purchased from another government agency. They accounted for the largest dollar increase (about $1.3 million) and the second-largest percentage increase (about 130 percent) in administrative expenditures, climbing from $992,000 in 1996 to $2.3 million in 2000. For 2001, however, FCA expects this cost component to decline by $209,000, or 9.2 percent. Travel and transportation expenses declined (by about 10 percent) between 1996 and 2000. FCA officials told us the decrease was largely the result of a decline in the number of employee relocations. For 2001, FCA projects these costs to decrease by $231,000, or about 15 percent. All other expenses, a category that includes rent, communications, and utilities; printing and reproduction; supplies and materials; and insurance claims and indemnities, decreased by $79,000, or 8.3 percent, over the period, primarily because of decreases in supplies and materials. For 2001, FCA expects these costs to increase by 4.3 percent. Figure 4 shows FCA administrative expenses for 2000 by expense category. Each fiscal year, Congress sets a limit on the amount of money FCA can spend on administrative expenditures. However, Congress did not set a spending limit for 1996. For each year from 1997 to 2000, FCA was in compliance with its budget limits for administrative expenses (see table 3). FCA and the other federal financial regulators do not receive any federal money to fund their annual operating budgets, relying primarily on assessment revenue collected from the institutions they oversee. In general, the regulators assess institutions using either complex asset- based formulas or less complex formulas that are based on other factors, depending on the type of institution. The different funding methodologies are designed to ensure that each institution pays an equitable share of agency expenses. 
FCA uses two different methods of calculating assessments on the institutions it regulates--one for all primary market entities and the other for its secondary market entity, Farmer Mac. The methodology used for primary market entities, which is complex, is based on the institutions' asset holdings and economies of scale as well as on the supervisory rating each institution received during FCA's last periodic examination. The methodology used for Farmer Mac is less complex. FCA calculates the assessment on the basis of its own direct and indirect expenses, rather than on asset holdings. Direct expenses include the costs of examining and supervising Farmer Mac, while indirect expenses are the overhead costs "reasonably" related to FCA's services. In general, the other federal financial regulators that regulate institutions similar to FCA's use comparable methodologies to calculate assessments. The law requires that the assessments be apportioned "on a basis that is determined to be equitable by the Farm Credit Administration." FCA's current assessment regulations for banks, associations, and "designated other System entities" were developed in 1993 through the negotiated rulemaking process. Banks, associations, and the Farm Credit Leasing Services Corporation (Leasing Corporation) are assessed on the same basis (i.e., assets). According to an FCA official, the agency periodically reviews these rules but currently has no plans to modify them. FCA officials said that these rules are designed to equitably apportion the annual costs of supervising, examining, and regulating the institutions. For this reason, the methodology relies on asset "brackets" that are much like tax brackets and reflect economies of scale, since the costs of supervision rise as a regulated institution becomes larger; however, these costs do not increase as fast as asset growth. FCA "bills" the institutions annually, and the institutions pay their assessments on a quarterly basis. 
To calculate the assessments for banks, associations, and the Leasing Corporation, FCA first determines its annual operating budget, which could include a reserve for contingencies for the next fiscal year, then deducts the estimated assessments for Farmer Mac, other System entities, and any reimbursable expenses. What is left--the net operating budget--is the total amount that will be assessed. This amount is apportioned among the banks, associations, and the Leasing Corporation using a two-part formula. The net operating budget is divided into two components of 30 and 70 percent. (According to an FCA official, the 30/70 split was devised during the negotiated rulemaking process and represents the most equitable way to assess System institutions.) The first part of the assessment, covering 30 percent of the budget, is spread across institutions on the basis of each institution's share of System risk-adjusted assets. For example, an institution whose assets equal 1 percent of System assets will have its assessment equal to 1 percent of this 30 percent of the FCA budget. The second part of an institution's assessment is charged according to a schedule that imposes different assessment rates on assets over specified levels, with these marginal rates decreasing for higher levels of assets. For example, the assessment rate that an institution pays for its assets from over $100 million to $500 million is 60 percent of the assessment rate that it pays on its first $25 million in assets. Adding the 30-percent amount and the 70-percent amount together equals the general assessment amount. Table 4 shows the assessment rates for the eight asset "brackets." The assessment rate percentages are prescribed by FCA regulation. The general assessment may be subject to these adjustments: a minimum assessment fee, a supervisory surcharge, or both. 
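The two-part formula described above can be sketched as follows. This is a minimal illustration, not FCA's actual regulation: the 30/70 split and the "60 percent of the base rate for assets from over $100 million to $500 million" relationship come from the text, but the remaining bracket boundaries and rate multipliers are hypothetical stand-ins for the eight brackets in table 4, and the base rate is taken as a given input.

```python
# A minimal sketch of FCA's two-part general assessment formula as described
# above. The 30/70 budget split is from the text; the bracket boundaries and
# rate multipliers below are HYPOTHETICAL stand-ins for the eight brackets in
# table 4 (the text confirms only that assets from over $100 million to
# $500 million are assessed at 60 percent of the rate on the first $25 million).

ILLUSTRATIVE_BRACKETS = [
    # (bracket upper bound in dollars, rate as a multiple of the base rate)
    (25_000_000, 1.00),    # first $25 million at the full base rate
    (100_000_000, 0.80),   # assumed intermediate rate
    (500_000_000, 0.60),   # per the text: 60 percent of the base rate
    (float("inf"), 0.40),  # assumed top-bracket rate
]

def general_assessment(institution_assets, system_assets,
                       net_operating_budget, base_rate):
    """Return the sum of the 30-percent and 70-percent parts."""
    # Part 1: 30 percent of the net operating budget, apportioned by the
    # institution's share of System risk-adjusted assets.
    part1 = 0.30 * net_operating_budget * (institution_assets / system_assets)

    # Part 2: declining marginal rates applied to assets within each bracket.
    # (In practice FCA's regulations set the rates so that this part collects
    # 70 percent of the budget across all institutions.)
    part2, lower = 0.0, 0.0
    for upper, multiplier in ILLUSTRATIVE_BRACKETS:
        if institution_assets <= lower:
            break
        part2 += (min(institution_assets, upper) - lower) * base_rate * multiplier
        lower = upper
    return part1 + part2
```

For instance, an institution holding $150 million of a $15 billion System (a 1 percent share) would pay 1 percent of the 30-percent component plus bracketed marginal amounts on its first $25 million, the next $75 million, and the final $50 million.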
The minimum fee of $20,000 applies only to institutions whose assessments are calculated at less than $20,000; these assessments are scaled upward, and no further charges are assessed. For institutions with assessments of more than $20,000, FCA may add a supervisory surcharge that reflects the institution's financial and management conditions. The surcharge is based on the institution's last supervisory examination rating. These ratings range from a high of 1 to a low of 5; a rating of 3, 4, or 5 can result in a surcharge ranging from 20 to 40 percent of the general assessment amount. The top-rated institutions (those rated 1 or 2) pay nothing over the general assessment. The variables in the formula allow FCA some flexibility in adjusting assessments to reflect its oversight costs. The formula not only reflects economies of scale but, by linking assessments with the financial and managerial soundness of the institutions, also seeks to ensure that the institutions that cost the most to supervise are paying their share. This approach relieves other entities within the System of bearing the cost of this additional oversight. FCA may adjust its assessments to reflect changes in its actual annual expenses and, if applicable, give institutions a credit against their next assessment or require them to pay additional assessments. Any credits are prorated on the basis of assessments paid by an institution. These credit adjustments are usually done at the end of the fiscal year. As required by law, FCA assesses Farmer Mac separately and differently from its primary market institutions. The law specifies that FCA's assessment of Farmer Mac is intended to cover the costs of any regulatory activities and specifically notes a requirement to pay the cost of supervising and examining Farmer Mac. We could not identify any legislative history that addressed these provisions. 
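The minimum fee and supervisory surcharge adjustments can be sketched as follows. The $20,000 minimum and the 20 to 40 percent surcharge range come from the text; the exact surcharge assigned to each of ratings 3, 4, and 5 is an assumption made for illustration.

```python
# Sketch of the adjustments described above. The $20,000 minimum and the
# 20-40 percent surcharge range are from the text; the specific surcharge
# for each of ratings 3, 4, and 5 is an ASSUMED split for illustration.

ASSUMED_SURCHARGE = {1: 0.0, 2: 0.0, 3: 0.20, 4: 0.30, 5: 0.40}

def adjusted_assessment(general_amount, exam_rating):
    if general_amount < 20_000:
        # Scaled up to the minimum fee; no further charges are assessed.
        return 20_000.0
    # Top-rated institutions (rated 1 or 2) pay nothing over the general
    # assessment; a rating of 3, 4, or 5 can trigger a surcharge.
    return general_amount * (1.0 + ASSUMED_SURCHARGE[exam_rating])
```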
FCA officials told us that they believed the difference between the statutory provisions for assessing banks, associations, and the Leasing Corporation and Farmer Mac is due to the difference in their assets--that is, unlike those institutions, Farmer Mac does not make loans. FCA developed the current assessment methodology for Farmer Mac in 1993. Farmer Mac's assessment covers the estimated costs of regulation, supervision, and examination, but Farmer Mac is not assessed a charge for FCA's reserve. The assessment includes FCA's estimated direct expenses for these activities, plus an allocated amount for indirect or overhead expenses. In general, FCA uses the same estimated direct expenses and indirect expense calculations for Farmer Mac as for the "other System entities," such as the Federal Farm Credit Banks Funding Corporation (Funding Corporation). Estimated direct expenses take into account the costs incurred in the most recent examination of Farmer Mac and any expected changes in these costs for the next fiscal year. We asked FCA officials if and how the assessment formula they use for Farmer Mac enables them to compensate for risks in Farmer Mac's business activities. They explained that the amount assessed for direct expenses increases if additional examination time is needed. FCA officials also noted that, as their data show, direct costs can rise due to other factors. For example, from 1999 to 2001, FCA officials noted that they invested considerable resources in developing a risk-based capital rule for Farmer Mac. During this time, FCA incurred unique costs that increased Farmer Mac's assessment for those years. A proportional amount of FCA's indirect expenses--that is, those expenses that are not attributable to the performance of examinations--is allocated to Farmer Mac. This amount is calculated as a relationship between the budget for a certain FCA office and FCA's overall expense budget for the fiscal year covered by the assessment. 
(The proportion for 2000 was 28.9 percent.) Multiplying the percentage by the estimated direct expenses attributable to Farmer Mac equals the amount of indirect expenses. The addition of the estimated direct expenses and indirect expenses equals the estimated amount to be assessed Farmer Mac for the fiscal year. Indirect expenses would include, for example, the cost of providing personnel services and processing travel vouchers for OSMO. At the end of each fiscal year, FCA may adjust its assessment to reflect any changes in actual expenses. Other entities in the Farm Credit System, such as the Funding Corporation, are assessed separately using a methodology similar to the one used for Farmer Mac. The assets of this group of institutions differ from those of the previously discussed entities that FCA regulates. These institutions are assessed for the estimated direct expenses involved in examinations, a portion of indirect expenses, and any amount necessary to maintain a reserve. FCA estimates direct expenses for each entity on the basis of anticipated examination time and travel costs for the next fiscal year. Allocations for indirect expenses are calculated as a percentage of FCA's total budgeted direct expenses (excluding those for Farmer Mac) for the fiscal year of the assessment. As with its assessments of other entities in the System, FCA may adjust its assessments to reflect any changes in actual expenses at the end of the fiscal year. FCA and regulators of similar types of institutions use assessment formulas of varying complexity to assess the institutions they oversee. In general, they use relatively complex formulas for primary market institutions and less complex formulas for secondary market entities. FCA's method for assessing banks, associations, and the Leasing Corporation, which are all primary market institutions, is similar to most other federal financial regulators (NCUA, OCC, and OTS) that oversee primary market institutions. 
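The Farmer Mac calculation just described reduces to simple arithmetic: estimated direct expenses plus a proportional allocation of indirect expenses. In this sketch the 28.9 percent proportion is the figure reported for 2000, and the direct-expense amount is a hypothetical input.

```python
# Farmer Mac assessment as described above: estimated direct expenses plus an
# allocated share of indirect (overhead) expenses. The 28.9 percent default
# is the proportion reported for 2000; direct-expense inputs are hypothetical.

def farmer_mac_assessment(estimated_direct_expenses, indirect_proportion=0.289):
    indirect_expenses = indirect_proportion * estimated_direct_expenses
    return estimated_direct_expenses + indirect_expenses
```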
Most of the regulators use complex formulas that take into account a variety of factors, including the regulator's budget, the institution's asset size and examination rating, and economies of scale (see fig. 5). Like FCA's, these assessments generally include a fixed component that is based on an institution's asset holdings, plus a variable component derived by multiplying asset amounts in excess of certain thresholds by a series of declining marginal rates. The assessment amount may then be adjusted on the basis of various factors--for example, the institution's financial condition. Again like the FCA's methodology, these formulas attempt to allocate regulatory costs in a way that reflects the agency's actual cost of supervision. Institutions with a low examination rating pay an additional fee because they are likely to require more supervision than the top-rated institutions. NCUA and FHFB are the only regulators of primary market institutions that do not add a supervisory surcharge on the basis of an examination rating. However, NCUA does use a complex formula to determine an institution's assessment amount, whereas FHFB uses a less complex formula. FHFB calculates assessments for the 12 FHLBanks on the basis of each bank's total paid-in capital stock, relative to the total paid-in capital stock of all FHLBanks. FCA is the only primary market regulator that requires its institutions to pay a fixed minimum assessment amount (i.e., $20,000). Of the five other regulators we looked at, two--NCUA and OTS--reduce the assessments for qualifying small institutions. According to the report of the Assessment Regulations Negotiated Rulemaking Committee that developed the rule, the minimum assessment is required both to pay a share of FCA regulatory costs and as a necessary cost of doing business as a federally chartered System institution. 
The assessment methods of the two federal regulators that oversee secondary market entities are less complex than the methods applied to primary market institutions. For example, OFHEO's method of assessing Fannie Mae and Freddie Mac, which is prescribed by law, is based on the ratio of each entity's assets to their total combined assets. OFHEO does not regulate any other entities; thus, this simple formula readily meets the need to equitably apportion the agency's operating costs. FCA administrative expenditures were lower in 2000 compared with 1996, due in part to reductions in staff because of System consolidation. Although administrative expenses are projected to increase for 2001 because of rising personnel and travel costs, they are expected to remain within the congressional spending ceiling. FCA is unique among federal financial institution regulators because it regulates both primary and secondary market entities. The methods FCA uses to assess the institutions it oversees are analogous to those used by virtually all of the regulators of similar institutions and are based on the types of assets the entities hold. FCA's complex formula for assessing primary market institutions is comparable to the methods used by most regulators of other primary market institutions. These regulators oversee numerous entities of various sizes and complexities, and their complex assessment methods enable them to consider these attributes in assessing for the cost of examinations. The few secondary market entities, which include Farmer Mac, are all assessed using less complex methodologies. We received written comments on a draft of this report from the Chairman and Chief Executive Officer of FCA that are reprinted in appendix I. He agreed with the information presented in the draft report regarding FCA's administrative spending between 1996 and 2000. FCA also provided technical comments that we incorporated where appropriate. 
The other federal financial regulators, except for OFHEO, provided technical comments on a draft excerpt of this report that we shared with them. We incorporated their technical comments into this report where appropriate. We are sending copies of this report to the Chairman of the Senate Committee on Agriculture, Nutrition, and Forestry; the Chairmen and Ranking Minority Members of the Senate Committee on Banking, Housing and Urban Affairs, the House Committee on Financial Services, and the House Committee on Agriculture; and Michael M. Reyna, Chairman and Chief Executive Officer of the Farm Credit Administration. The report will be available on GAO's Internet home page at http://www.gao.gov. If you have any questions about this report, please contact me or M. Katie Harris at (202) 512-8678. Joe E. Hunter was a major contributor to this report.
The Farm Credit Administration (FCA) regulates the farm credit system. Administrative expenses, which accounted for about 97 percent of FCA's total operating expenses of $34.5 million in fiscal year 2000, are funded primarily by assessments on the institutions that make up the system, including the Federal Agricultural Mortgage Corporation (Farmer Mac). This report (1) analyzes trends in administrative expenses for fiscal years 1996 through 2000 and (2) compares ways that FCA and other federal financial regulators calculate the assessments they need to fund their operations. GAO found that although FCA's administrative expenditures varied each year between 1996 and 2000, they remained below 1996 levels and stayed within congressionally imposed annual spending limits for each year during 1997 through 2000. Between 1996 and 2000, the agency experienced a decline in administrative spending of around $2 million, or 5.8 percent. Personnel costs were the largest single expense, consistently accounting for more than 80 percent of administrative spending; thus, a 15 percent staff reduction also provided the greatest overall savings. Unlike many government agencies whose operations are funded by taxpayers' money, the federal financial regulators are self-funded agencies that rely primarily on assessments from the entities they regulate. In calculating these assessments, FCA and the other federal financial regulators use separate methodologies for primary and secondary market entities.
Challenges we identified with disaster resilience as long ago as 1980 have persisted and were reflected in our work on disaster mitigation in 2007, as well as recent studies such as a 2012 National Academies National Research Council (NRC) study on disaster resilience. We testified in January 1998 that, for a number of reasons, state and local governments may be reluctant to invest in resilience-building efforts. For example, leaders may be concerned that hazard mitigation activities will detract from economic development goals and may perceive that mitigation is costly and involves solutions that are overly technical and complex. In our work on hazard mitigation issued in August 2007, we found that these issues persisted. We reported that hazard mitigation goals and local economic interests often conflict, and the resulting tension can often have a profound effect on mitigation efforts. For example, we reported that community goals such as building housing and promoting economic development may be higher priorities than formulating mitigation regulations that may include restrictive development regulations and more stringent building codes. In particular, local government officials we contacted as part of that work commented that developers often want to increase growth in hazard-prone areas (e.g., along the coast or in floodplains) to support economic development. These areas are often desirable for residences and businesses, and such development increases local tax revenues but is generally in conflict with mitigation goals. In 2012, the NRC issued a report on disaster resilience, noting that understanding, managing, and reducing disaster risks provide a foundation for building resilience to disasters. 
Risk management--both personal and collective--is important in the resilience context because the perceptions of and choices about risk shape how individuals, groups, and public- and private-sector organizations behave, how they respond during and after a disaster event, and how they plan for future disasters. However, the National Academies report described a variety of challenges that affect risk management. As with our 1998 and 2007 work, one of the key challenges the NRC reported for state and local governments was reluctance to limit economic development with resilience measures. We testified in January 1998 that individuals may also lack incentives to take resilience-building measures. We noted that increased awareness of the hazards associated with living in a certain area or previous experience with disasters do not necessarily persuade individuals to take preventive measures against future disasters. Residents of hazard-prone areas tend to treat the possibility of a disaster's occurrence as sufficiently low to permit them to ignore the consequences. We have also reported that the availability of federal assistance may inhibit actions to mitigate disaster losses. As long ago as 1980, we reported that individuals may not act to protect themselves from the effects of severe weather if they believe the federal government will eventually help pay for their losses. The 1993 National Performance Review also found that the availability of post-disaster federal funds may reduce incentives for mitigation. Moreover, FEMA's 1993 review of the National Earthquake Hazards Reduction Program concluded that at the state level there is "the expectation that federal disaster assistance will address the problem after the event." 
Concerns about individuals' ability to appropriately evaluate risk and take action to protect themselves continued in our August 2007 work when we reported that individuals often have a misperception that natural hazard events will not occur in their community and are not interested in learning of the likelihood of an event occurring. Likewise, the 2012 NRC report on disaster resilience identified the key risk management challenge for homeowners and businesses in hazard-prone areas: they may be unaware of or may underestimate the hazards that they face. In January 1998, we described three sets of issues that complicate assessing the cost-effectiveness of actions to build resilience. At the same time, we testified that a lack of comprehensive, reliable data to make decisions about cost-benefit tradeoffs may also inhibit local governments from deciding to invest in hazard mitigation activities. First, we noted that by definition, natural hazard mitigation reduces the loss of life and property below the levels that could be expected without mitigation, but it is impossible to measure what loss would have been incurred without mitigation. Second, the dispersion of mitigation funds and responsibilities across various agencies makes it difficult to determine the collective benefit of federal efforts. Finally, we noted that federal savings depend on the frequency of future disasters and the extent to which the federal government will bear the resulting losses, which is unknown. Moreover, in 2007 we reported that limited public awareness may also be a result of the complexity of the information that is needed for individuals to understand their hazard risks. We concluded that for local decision makers to develop mitigation strategies for their communities they need appropriate and easily understandable information about the probability of natural hazards and that efforts to improve public awareness and education are long-term and require sustained effort. 
Similarly, in our February 2014 testimony on limiting fiscal exposure from and increasing resilience to climate change, we noted that local decision makers need expert assistance translating climate change information into something that is locally relevant. The 2012 NRC study identified understanding how to share scientific information with broad audiences as one of the key challenges for resilience researchers. The challenges we identified in prior work--competing priorities for state and local governments, imperfect individual risk decision making, and imprecise, incomplete, and complex information about both risk and benefits--are difficult issues that are likely to persist. These issues are longstanding and difficult policy issues. Indeed, the increasing number of federal disaster declarations and the growing role of the federal government in funding post-disaster relief and recovery efforts may serve to exacerbate some of the inherent challenges. We are encouraged that DHS finalized the National Mitigation Framework in 2013 to coordinate interagency and intergovernmental efforts and that the framework established a Mitigation Framework Leadership Group to coordinate mitigation efforts of relevant local, state, tribal, and federal organizations. The framework and the group create an avenue for interagency and intergovernmental leadership to pursue solutions to these difficult policy issues. As part of our ongoing work, we plan to evaluate the status of the Mitigation Framework Leadership Group and the actions taken to date to apply the National Mitigation Framework in the context of recovery from Hurricane Sandy. In ongoing work on federal resilience efforts in the aftermath of Hurricane Sandy, we identified three high-level actions that demonstrated an intensified federal focus on incorporating resilience-building into the recovery. In the wake of Hurricane Sandy, President Obama signed Executive Order 13632 on December 7, 2012. 
The Executive Order created the Hurricane Sandy Rebuilding Task Force, chaired by the HUD Secretary and consisting of more than 23 federal agencies and offices. Among other things, the executive order charged the task force to work with partners in the affected region to understand existing and future risks and vulnerabilities from extreme weather events; identify resources and authorities that strengthen community and regional resilience during recovery; and plan for the rebuilding of critical infrastructure in a manner that increases community and regional resilience. The order also charged the task force with helping to identify and remove obstacles to resilient rebuilding and promoting long-term sustainability of communities and ecosystems. In August 2013, the Sandy Rebuilding Task Force issued the Hurricane Sandy Rebuilding Strategy, which contained 69 recommendations to various federal agencies and their nonfederal partners aimed at improving recovery from both Hurricane Sandy and future disasters. Among these 69 recommendations are many that take into account the President's charge to facilitate planning and actions to build resilience in the Sandy-affected region. Introducing the strategy, the task force chair acknowledged how critical it was that efforts to rebuild for the future make communities more resilient to emerging challenges such as rising sea levels, extreme heat, and more frequent and intense storms. The task force report notes that many of the recommendations have been adopted and describes actions underway to implement them as part of the Hurricane Sandy recovery effort. Key examples of long-term resilient rebuilding initiatives to address future risks to extreme weather events include the Rebuild by Design effort and the New York Rising Community Reconstruction Program. 
In June 2013, HUD and its partners launched the Rebuild by Design competition to challenge communities to develop solutions to address structural and environmental vulnerabilities exposed by Hurricane Sandy. Of the 148 applicants, HUD selected 10 to move forward. The selected teams then worked with local stakeholders to tailor their projects to the communities and hosted over 50 community workshops to educate the communities on their proposals and the theme of resilience. On April 3, 2014, the final proposals were exhibited and evaluated by an expert jury. Winning design solutions may be awarded disaster recovery grants from HUD and other public and private partners. Some resilience aspects of the designs include elevating streets and adding breakwater systems. The New York Rising Community Reconstruction Program is another mitigation program that provides over $650 million for additional rebuilding and revitalization planning and implementation assistance to Sandy-affected communities. As of May 2014, six regions of New York composed of 102 localities and 50 New York Rising communities created plans that assessed storm damage and current risk, identified community needs and opportunities, and developed recovery and resilience strategies. Each locality is eligible for $3 million to $25 million from HUD and other public and private partners. According to the State of New York, as of May 2014, multiple projects had been awarded funding. As part of our ongoing work on resilience-building as part of the Hurricane Sandy recovery, we are identifying recommendations from the task force report that particularly support resilient rebuilding and assessing the actions taken to date to implement them. We plan to issue a report on these issues later this year. In January 2013, Congress passed and the President signed the Disaster Relief Appropriations Act, 2013 (Sandy Supplemental), which appropriated about $50 billion in funding to support recovery. 
The Sandy Supplemental appropriated funds--primarily for programs and activities associated with recovery from Hurricane Sandy--to nineteen federal agencies. Among the nineteen agencies, four--DHS, HUD, the Department of Transportation (DOT), and U.S. Army Corps of Engineers (USACE)--received amounts that represent over 92 percent of the total with appropriations ranging from $5 billion to $15 billion. These four agencies administer five programs that play a key role in helping to promote resilience-building as part of recovery: (1) FEMA's Hazard Mitigation Grant Program (HMGP), (2) FEMA's Public Assistance Program (PA), (3) HUD's Community Development Block Grant-Disaster Recovery (CDBG-DR) Program, (4) DOT's Federal Transit Administration (FTA) Public Transportation Emergency Relief Program, and (5) USACE's Flood Risk Management Program. See table 1 for a description of these programs and how they help to support resilience-building efforts. As part of our ongoing work we plan to focus on efforts within FEMA's HMGP and PA and HUD's CDBG-DR to facilitate and support community and regional resilience efforts as part of recovery from Hurricane Sandy. We are evaluating federal actions, gathering perspectives from key state officials, and studying at least one large-scale PA project that involves resilience-building activities. The Sandy Recovery Improvement Act of 2013 (SRIA) was enacted as part of the Sandy Supplemental. The law authorizes several significant changes to the way FEMA may deliver federal disaster assistance. FEMA is tracking its implementation of 17 provisions of the act, some of which are aimed at mitigating future damage. Specifically: Public Assistance Work Alternative Procedures. This section authorizes FEMA to implement alternative procedures for administration of the PA program with the aim of providing greater flexibility and less administrative burden by basing grants on fixed estimates. 
Among the provisions in this section of SRIA is one that would allow use of all or part of the excess grant funds awarded for the repair, restoration, and replacement of damaged facilities for cost effective activities that mitigate the risk of future damage, hardship, or suffering from a major disaster. Changes to HMGP. SRIA authorized three key changes to HMGP. First, it authorizes FEMA to expedite implementation of the program. FEMA has issued guidance for streamlining the program and is planning actions to continue to refine the changes and measure their effectiveness. Second, SRIA allows FEMA to provide up to 25 percent of the estimated costs for eligible hazard mitigation measures to a state or tribal grantee before eligible costs are incurred. As part of the revised, streamlined HMGP guidance, FEMA has informed states of this provision. Third, SRIA allows FEMA to waive notice and comment rulemaking procedures for HMGP Administration by States and authorizes FEMA to carry out the program as a pilot. FEMA is currently carrying out a pilot program and issued a notice in the Federal Register in March 2014 seeking comments from the public to help inform the development of this new method of program delivery. To develop the program, FEMA is exploring the extent to which its determinations regarding cost-effectiveness, technical feasibility and engineering, and final eligibility and funding can be made at the state level. National Strategy to Reduce Costs on Future Disasters. SRIA required FEMA to make recommendations for the development of a national strategy to reduce costs on future disasters. 
In September 2013 FEMA issued the required report, recommending that the following elements be considered in the development of a national strategy: 1) engage in a whole community dialogue and build upon public-private partnerships, 2) enhance data-driven decisions, 3) align incentives promoting disaster cost reduction and resilience, 4) enable resilient recovery, and 5) support disaster risk reduction nationally. As we have previously reported, most responsibility and authority for resilience activities rests largely outside the federal government; therefore, nonfederal incentives are also a critical piece of the overall strategy to reduce future losses. The federal government, by providing incentives through programs like the five discussed earlier in this statement, can help to promote and facilitate mitigation before and after disasters. However, ultimately, nonfederal entities inside and outside the government make the decisions that lead (or do not lead) to resilience activities. Several examples of mitigation efforts at the state and local levels help illustrate the variety of ways that incentives help drive communities to be more resilient--with a range of activities from shoring up building codes to facilitating buyouts of repetitive loss properties. As part of our ongoing work, we are reviewing studies about efforts to build resilience to extreme weather events and climate change. For the purposes of this statement, we selected illustrative examples from those studies to describe a range of nonfederal efforts to incentivize mitigation. The 2012 NRC report discussed earlier in this statement included several examples of earthquake mitigation efforts in California. In California, zones of potential landslide, liquefaction, or fault rupture hazard have been mapped by the California Geological Survey as "special study zones" according to provisions in the California Alquist- Priolo Earthquake Fault Zoning Act of 1972. 
If a property is in one of these special study zones, the buyers must sign a form indicating that they have been made aware of this potential hazard and recognize that additional inspections and work may be required if they choose to modify the property in the future. The U.S. Resiliency Council, a nonprofit organization based in California, is working on creating building "report cards" to provide technically defensible metrics to evaluate and communicate the resilience of individual buildings. The initial focus is on seismic risk, and officials plan to extend their efforts to creating metrics for resilience to catastrophic wind and flood risk. Transparency and required disclosure of these individual building resilience ratings can benefit building users, owners, and lenders by increasing the value of well-designed or properly retrofitted properties. The Property Transfer Tax Program in Berkeley, California, has provided funds for seismically retrofitting a number of properties in the city. In 1992, voters approved an additional 0.5 percent transfer tax on top of the existing 1 percent tax on all real estate transactions, with the tax paid equally by buyer and seller. This portion of the transfer tax is available for voluntary seismic upgrades to residential property. Residential property owners have up to 1 year to complete the seismic retrofit (or lose the funds). Since many homes sell for $750,000 to $1 million or more in Berkeley, this amounts to $3,750 to $5,000 in "free funds" and can cover homeowner upgrades such as brick chimney bracing or anchoring water heaters. This incentive program has an 80 to 90 percent participation rate. Along with other measures, this program has led to more than 60 percent of the residences in Berkeley becoming more resistant to earthquakes. Similarly, the Columbia Center for Climate Change Law of Columbia Law School issued a report in 2013 that included examples of flood mitigation efforts in North Dakota and Iowa.
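The Berkeley transfer-tax arithmetic can be made concrete with a minimal illustrative sketch. The 0.5 percent earmarked rate and the $750,000 to $1 million sale prices come from the program description above; the function name and structure are hypothetical, not part of the program's own materials:

```python
# Illustrative sketch of the Berkeley Property Transfer Tax arithmetic
# described above. Only the 0.5% rate and the sale-price range come
# from the text; everything else is a hypothetical illustration.
SEISMIC_TAX_RATE = 0.005  # the additional 0.5% transfer tax earmarked for retrofits

def retrofit_funds(sale_price):
    """Funds available for a voluntary seismic retrofit on one sale."""
    return sale_price * SEISMIC_TAX_RATE

for price in (750_000, 1_000_000):
    print(f"${price:,} sale -> ${retrofit_funds(price):,.0f} available for retrofit")
```

For a $750,000 sale this yields the $3,750 figure cited above, and a $1 million sale yields $5,000; splitting the tax equally between buyer and seller does not change the amount earmarked for the retrofit.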
In 1997, 83 percent of the homes in Grand Forks, North Dakota, were damaged when the Red River reached 54 feet and topped the city dikes. Using CDBG funding, the City of Grand Forks purchased 802 lots, moved salvageable homes, and destroyed the remainder to create a green space. The city also partnered with a private development company to finance the construction of 180 new homes in an underdeveloped area of Grand Forks to help relocate some of the people who had lost their homes in the flooding and subsequent buy-out program. In 1993, the Iowa River flooded and overtopped existing levees. The U.S. Army Corps of Engineers planned to rebuild and repair the levees--but a working group of state and federal agencies determined that the best solution would be to buy all the homes in the levee district so that it could be statutorily dissolved and the city would no longer have to support the infrastructure in the area. The buyout program developed a novel land-transfer system and engaged government agencies and nonprofit organizations to execute it. The nonprofit organization's role was instrumental because landowners were hesitant to sell their property to the government, but were comfortable selling it to the nonprofit. The nonprofit used a formula to set the land price, which contributed to the success of the buyout because purchasers did not have to negotiate prices with each individual landowner and it removed the incentive for landowners to hold out for a better price. Chairman Begich, Ranking Member Paul, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (404) 679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Christopher Keisling, Assistant Director; and Katherine Davis, Dorian Dunbar, Melissa Duong, Kathryn Godfrey, Tracey King, Amanda Miller, and Linda Miller made contributions to this testimony. In addition, Martha Chow, Steve Cohen, Stanley Czerwinski, Roshni Dave, Peter Del Toro, Chris Forys, Daniel Garcia-Diaz, Alfredo Gomez, Michael Hix, Karen Jarzynka-Hernandez, Jill Naamane, Brenda Rabinowitz, Joe Thompson, Lisa Van Arsdale, Pat Ward, David Wise, and Steve Westley also made contributions based on published and related work.

Extreme Weather Events: Limiting Federal Fiscal Exposure and Increasing the Nation's Resilience. GAO-14-364T. Washington, D.C.: February 12, 2014.

High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013.

Federal Disaster Assistance: Improved Criteria Needed to Assess a Jurisdiction's Capability to Respond and Recover on Its Own. GAO-12-838. Washington, D.C.: September 12, 2012.

Natural Hazard Mitigation: Various Mitigation Efforts Exist, but Federal Efforts Do Not Provide a Comprehensive Strategic Framework. GAO-07-403. Washington, D.C.: August 22, 2007.

High Risk Series: GAO's High-Risk Program. GAO-06-497T. Washington, D.C.: March 15, 2006.

Disaster Assistance: Information on the Cost-Effectiveness of Hazard Mitigation Projects. GAO/T-RCED-99-106. Washington, D.C.: March 4, 1999.

Disaster Assistance: Information on Federal Disaster Mitigation Efforts. GAO/T-RCED-98-67. Washington, D.C.: January 28, 1998.

Disaster Assistance: Information on Expenditures and Proposals to Improve Effectiveness and Reduce Future Costs. GAO/T-RCED-95-140. Washington, D.C.: March 16, 1995.

Federal Disaster Assistance: What Should the Policy Be? PAD-80-39. Washington, D.C.: June 16, 1980.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Multiple factors, including increased disaster declarations, climate change effects, and insufficient premiums under the National Flood Insurance Program, increase federal fiscal exposure to severe weather events. Managing fiscal exposure from climate change and the National Flood Insurance Program are both on GAO's High Risk list. GAO has previously reported that building resilience to protect against future damage is one strategy to help limit fiscal exposure. However, in prior reports GAO also identified multiple challenges to doing so. Responsibility for actions that enhance resilience rests largely outside the federal government, so nonfederal entities also play a key role. This testimony discusses (1) resilience-building challenges GAO has previously identified; (2) federal efforts to facilitate resilience-building as part of Hurricane Sandy recovery; and (3) examples of nonfederal efforts to incentivize resilience building. This testimony is based on previous GAO reports issued from 1998 through 2014 related to hazard mitigation, climate change, and flood insurance, as well as preliminary observations from GAO's ongoing work for this committee on federal resilience efforts related to the Sandy recovery. For the ongoing work, GAO reviewed documents such as the Hurricane Sandy Rebuilding Strategy and a 2012 National Academies study on building resilience. GAO also interviewed officials from FEMA and the Department of Housing and Urban Development (HUD). GAO has identified various challenges to resilience building--actions to help prepare and plan for, absorb, recover from, and more successfully adapt to adverse events, including those caused by extreme weather. These include challenges for communities in balancing hazard mitigation investments with economic development goals, challenges for individuals in understanding and acting to limit their personal risk, and broad challenges with the clarity of information to inform risk decision making.
GAO's work over more than 30 years demonstrates that these are longstanding policy issues, without easy solutions. The Department of Homeland Security's (DHS) May 2013 release of a National Mitigation Framework and establishment of a group to help coordinate interagency and intergovernmental mitigation efforts offer one avenue for leadership on these issues. In ongoing work on federal resilience efforts in the aftermath of Hurricane Sandy, GAO identified three high-level actions that demonstrated an intensified federal focus on incorporating resilience-building into the recovery. The President issued an executive order to coordinate the recovery effort and created a task force that issued 69 recommendations aimed at improving recovery from Sandy and future disasters--including recommendations designed to facilitate resilient rebuilding. Congress appropriated about $50 billion in supplemental funds for multiple recovery efforts, including at least five federal programs that help support resilience-building efforts. One of these, FEMA's Hazard Mitigation Grant Program (HMGP), is the only federal program designed specifically to promote mitigation against future losses in the wake of a disaster, while another, the Public Transportation Emergency Relief Program, made more than $4 billion available for transit resilience projects. The Sandy Recovery Improvement Act of 2013 provided additional responsibilities and authorities related to FEMA's mitigation and recovery efforts. In response, FEMA has undertaken efforts to make HMGP easier for states to use--for example, by streamlining application procedures. The act also provided additional authorities for FEMA to fund hazard mitigation with other disaster relief funds and required FEMA to provide Congress with recommendations for a national strategy on reducing the cost of future disasters, which FEMA finalized in September 2013.
For the purposes of this statement, GAO reviewed studies that discuss resilience building and climate change adaptation and identified examples of efforts at the state and local levels that illustrate a variety of nonfederal initiatives that may drive communities to build resilience. For example, a nonprofit group is creating report cards to assess the resilience of a building to earthquakes and plans to extend these efforts to wind and flood risk. In some localities, public-private partnerships have helped promote efforts to buy properties that were at risk of repetitive losses.
The Federal Payment Reauthorization Act of 1994 requires that the mayor of the District of Columbia submit to Congress a statement of measurable and objective performance goals for the significant activities of the District government (i.e., the performance accountability plan). After the end of each fiscal year, the District is to report on its performance (i.e., the performance accountability report). The District's performance report is to include a statement of the actual level of performance achieved compared to each of the goals stated in the performance accountability plan for the year, the title of the District of Columbia management employee most directly responsible for the achievement of each goal and the title of the employee's immediate supervisor or superior, and a statement of the status of any court orders applicable to the government of the District of Columbia during the year and the steps taken by the government to comply with such orders. The law also requires that GAO, in consultation with the director of the Office of Management and Budget, review and evaluate the District's performance accountability report and submit it to your committees not later than April 15. Our June 2001 report on the District's fiscal year 2000 performance accountability report included recommendations that the District (1) settle on a set of results-oriented goals that are more consistently reflected in its performance planning, reporting, and accountability efforts, (2) provide specific information in its performance reports for each goal that changed, including a description of how, when, and why the change occurred, and (3) adhere to the statutory requirement that all significant activities of the District government be addressed in subsequent performance accountability reports.
Our review had determined that the District's fiscal year 2000 report was of limited usefulness because the District had introduced new plans, goals, and measures throughout the year, the goals and measures were in a state of flux due to these changes, and its report did not cover significant activities, such as the District's public schools, an activity that accounted for more than 15 percent of the District's budget. In response, the District concurred with our findings and acknowledged that additional work was needed to make the District's performance management system serve the needs of its citizens and Congress. The comments stated that the District planned, for example, to consolidate its goals and expand the coverage of its fiscal year 2001 report to more fully comply with its mandated reporting requirements. We examined the progress the District has made in developing its performance accountability report and identified areas where improvements are needed. Specifically, the objectives of this report were to examine (1) the extent to which the District's performance accountability report includes its significant activities, (2) how well the District reports progress toward a consistent set of goals and explains any changes in the goals, (3) the extent to which the report adheres to the statutory requirements, and (4) areas for future improvement. To meet these objectives, we reviewed and analyzed the information presented in the District's fiscal year 2001 performance accountability report and interviewed key District officials. To examine the extent to which the District's performance accountability report included significant activities, we compared the information in the 2001 performance and accountability report with budget information on actual expenditures presented in the District's budget. 
To determine how well the District reported progress toward a consistent set of goals, we compared the report's goals with those contained in the District's fiscal year 2002 Proposed Budget and Financial Plan which served as the District's 2001 performance plan and then reviewed any changes. To determine the extent to which the report adhered to the statutory requirements, we analyzed the information contained in the District's report in conjunction with the requirements contained in the Federal Payment Reauthorization Act of 1994. We also reviewed the performance contracts for the District's cabinet-level officials. To identify areas for future improvement, we compared the fiscal year 2001 report with the District's fiscal year 2000 and 1999 performance accountability reports to identify baseline and trend information. We based our analysis on the information developed from work addressing our other objectives, recommendations from our June 8, 2001, report commenting on the District's fiscal year 2000 report, and our other recent work related to performance management issues. We conducted our work from December 2001 through April 2002 at the Office of the Mayor of the District of Columbia, Washington, D.C., in accordance with generally accepted government auditing standards. In accordance with requirements contained in P.L. 103-373, we consulted with a representative of the director of the Office of Management and Budget concerning our review. We did not verify the accuracy or reliability of the performance data included in the District's report, including information on the court orders in effect for fiscal year 2001. We provided a draft of this report to the mayor of the District of Columbia for review and comment. The deputy mayor/city administrator provided oral and written comments that are summarized at the end of this report, along with our response. The written comments are reprinted in their entirety in appendix III. 
The fiscal year 2001 performance accountability report includes most of the District's significant activities, providing performance information for 66 District agencies that represent 83 percent of the District's total expenditures of $5.9 billion during that year. The District included 26 additional agencies in this year's report, compared with 40 in its prior report for fiscal year 2000. Appendix I lists the 66 agencies included in the District's 2001 performance accountability report, along with the 2001 actual expenditures for each of these agencies. However, the absence of goals and measures related to educational activities remains the most significant gap. The District reports that it is continuing its efforts to include performance information on its significant activities in its performance accountability reports. For example, the 2001 performance accountability report notes that the District of Columbia Public Schools (DCPS) did not include performance goals or measures because they were in the early stages of a long-term strategic planning process initiated by the newly installed school board. DCPS accounted for about 14 percent of the District's fiscal year 2001 actual expenditures, and public charter schools, which also were not included, accounted for another 2 percent of the District's 2001 expenditures. The 2001 report states that in lieu of a formal performance accountability report for DCPS, the District included a copy of the Superintendent's testimony before the Subcommittee on the District of Columbia, Committee on Government Reform, U.S. House of Representatives. The District acknowledged that the inclusion of this information does not fully comply with the statutory requirement and set forth a plan to include DCPS performance goals and measures in the fiscal year 2003 proposed budget and financial plan that will serve as the basis for the DCPS performance accountability report for fiscal year 2002. 
The 2001 report lists another 10 agencies that were not included, primarily, according to the report, because they did not publish performance goals and measures in the fiscal year 2002 proposed budget. These 10 agencies accounted for about $330 million in fiscal year 2001 actual expenditures, or about 6 percent of the District's total fiscal year 2001 actual expenditures. These agencies included the Child and Family Services Agency, which was under receivership until June 15, 2001 (with fiscal year 2001 actual expenditures of $189 million) and public charter schools (with fiscal year 2001 expenditures of $137 million). Although it may not be appropriate to include agency performance information in some cases, the performance accountability report should provide a rationale for excluding them. For example, Advisory Neighborhood Commissions, according to the deputy mayor, have a wide range of agendas that cannot be captured in a single set of meaningful measures. Table 3 lists these 10 agencies and their fiscal year 2001 actual expenditures. In addition to these 10 agencies, the District also did not specifically include other areas constituting 11 percent of the District's fiscal year 2001 actual expenditures. In view of the District's interest in tying resources to results, the District could further improve its performance accountability reports by linking these budget activities as appropriate to the agencies that are responsible for these expenditures or provide a rationale for exclusion. For example, the Department of Employment Services administers the unemployment and disability funds (with fiscal year 2001 expenditures totaling about $32 million). Similarly, the Office of the Corporation Counsel administers the settlement and judgments fund, which was set up to settle claims and lawsuits and pay judgments in tort cases entered against the District (with fiscal year 2001 expenditures of about $26 million). 
Table 4 contains a list of these budget activities and fiscal year 2001 actual expenditures. The goals in the fiscal year 2001 performance accountability report were consistent with the goals in the District's 2001 performance plan. Using a consistent set of goals enhanced the understandability of the report by demonstrating how performance measured throughout the year contributed toward achieving the District's goals. The District also used clear criteria for rating performance on a five-point scale and reported that these ratings were included in the performance evaluations of cabinet agency directors who had performance contracts with the mayor. In addition, according to a District official, the District will be able to provide information on any future changes made to its performance goals through its new performance management database. The District has made substantial progress in improving its performance planning and reporting efforts by focusing on measuring progress toward achieving a consistent set of goals. In our June 2001 review of the District's 2000 performance accountability report, we had raised concerns that the District's performance management process was in flux, with goals changing continually throughout the year. Further, the District did not discuss the reasons for these changes. This year, the goals were consistent and the District provided some information about upcoming changes that could be anticipated in fiscal year 2002 goals. In addition, according to the 2001 report, the District has developed a performance measures database to allow it to document changes to individual goals and measures that are proposed in the agencies' fiscal year 2003 budget submissions. One of the District's enhancements to its 2001 performance accountability report was reporting on a five-point performance rating scale, as compared to the three-point performance rating scale it used in its fiscal year 2000 report. 
The five-point scale was designed to be consistent with the rating scale used in the District's Performance Management Program, under which management supervisory service, excepted service, and selected career service personnel develop individual performance plans against which they are evaluated at the end of the year. The five ratings are: (1) below expectations, (2) needs improvement, (3) meets expectations, (4) exceeds expectations, and (5) significantly exceeds expectations. According to the fiscal year 2001 performance accountability report, this scale was used to evaluate the performance of cabinet agency directors who held performance contracts with the mayor. It stated that 60 percent of each director's performance rating was based on the agency-specific goals included in the agency's performance accountability report, with the other 40 percent based on operational support requirements such as responsiveness to customers, risk management, and local business contracting. Our work has found that performance agreements can become an increasingly vital part of overall efforts to improve programmatic performance and better achieve results. We found that the use of results-oriented performance agreements strengthened alignment of results-oriented goals with daily operations, fostered collaboration across organizational boundaries, enhanced opportunities to discuss and routinely use performance information to make program improvements, provided a results-oriented basis for individual accountability, and maintained continuity of program goals during leadership transitions. The District's fiscal year 2001 performance accountability report reflected improvement in adhering to the statutory requirements in the Federal Payment Reauthorization Act. The District's 2001 report was timely and included information on the level of performance achieved for most goals listed.
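The 60/40 weighting of director ratings described above can be sketched as a simple weighted average. This is a hypothetical illustration only; the function name and the sample scores are ours, not the District's:

```python
# Hypothetical illustration of the stated weighting: 60 percent of a
# director's rating from agency-specific goals, 40 percent from
# operational support requirements. Scores are invented examples on the
# report's five-point scale (1 = below expectations,
# 5 = significantly exceeds expectations).
def director_rating(goals_score, operational_score):
    """Combine the two component scores per the 60/40 weights."""
    return 0.6 * goals_score + 0.4 * operational_score

# A director who exceeds expectations on agency goals (4) but only
# meets expectations on operational support (3):
rating = director_rating(4, 3)  # 0.6*4 + 0.4*3 = 3.6
```

Under this scheme, the agency-specific component dominates: a one-point change on the goals side moves the overall rating half again as much as the same change on the operational side.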
It included the title of the District management employee most directly responsible for the achievement of each goal and the title of that employee's immediate supervisor, as required by the statute. We also found that the names and titles on the performance contracts of the cabinet-level officials we reviewed matched the names in the performance report as the immediate supervisor for all of the goals. Although the report contains information on certain court orders, it could be improved by providing clearer and more complete information on the steps the District government has taken during the reporting year to comply with those orders and by including updated information on the court orders applicable to the District as required by the act. The District identified the level of performance achieved for most of the goals in its 2001 report. The report contains a total of 214 performance goals that are associated with the 66 agencies covered. Of these 214 performance goals, 201 goals (or 94 percent) include information on whether or not the goal was achieved, and only 13 did not include information on the level of performance. As shown in table 1, the 13 goals that did not include the level of performance were associated with eight agencies. For example, the District's State Education Office did not provide this information for four of its seven goals because the reports and information needed to achieve the goals had not been completed. Although the District's 2001 performance accountability report included some information on certain court orders imposed upon the District and the status of its compliance with those orders, the act calls for a statement of the status of any court orders applicable to the District of Columbia government during the year and the steps taken by the government to comply with such orders.
The 2001 report contains information on the same 12 court orders involving civil actions against the District reported on for fiscal years 1999 and 2000. Among these 12 orders are 2 orders that the fiscal year 2001 report lists as no longer in effect in 2001. One of these court orders involved a receivership that terminated in May 2000. The other involved a maximum-security facility that closed at the end of January 2001. The 2001 report does not disclose whether or not any new court orders were imposed on the District during fiscal year 2001. The summaries that the District provides on the status of these court orders could be more informative if they contained clearer and more complete information on the steps taken by the District government to comply with the court orders. For example, according to the District's 2001 report, the case Nikita Petties v. DC relates to DCPS transportation services to special education students and the timely payment of tuition and related services to schools and providers. The report's summary on the status of this case states: "The School system has resumed most of the transportation responsibilities previously performed by a private contractor. A transportation Administrator with broad powers had been appointed to coordinate compliance with Court orders. He has completed his appointment and this position has been abolished." This summary does not provide a clear picture of what steps the school system is taking to comply with the requirements resulting from this court order. The act, however, calls for the District to report on the steps taken by the government to comply with such orders. The District recognized in its 2001 performance and accountability report that its performance management system is a work-in-progress and stated that there are several fronts on which improvements can be made. 
In the spirit of building on the progress that the District has made in improving its performance accountability reports over the last 2 years, there are three key areas where we believe that improvements in future performance accountability reports are needed. First, the District needs to be more inclusive in reporting on court orders to more fully comply with the act's requirements. Second, as part of the District's emphasis on expanding its performance-based budgeting approach, the District needs to validate and verify the performance data it relies on to measure performance and assess progress, present this information in its performance accountability reports, and describe its strategies to address any known data limitations. Finally, the District needs to continue its efforts to include goals and measures for its major activities, and it should include related expenditure information to provide a more complete picture of the resources targeted toward achieving an agency's goals and therefore help to enhance transparency and accountability. Since this is the third year that the District has had to develop performance and accountability reports, the District has had sufficient time to determine how best to present information on the status of any court orders that are applicable to the District of Columbia during the fiscal year and the steps taken to comply with those orders. However, the District has continued to report on the same 12 court orders for fiscal years 1999, 2000, and 2001. By limiting its presentation to the same 12 court orders, the District's current report does not provide assurance that the information in its performance accountability report reflects court orders applicable during the fiscal year. 
Court orders have an important effect on the District's performance, as reflected by the chief financial officer's statement that the District's "unforeseen expenses are often driven by new legislative imperatives, court-ordered mandates, and suits and settlements." As another indication of their importance, 1 of the 11 general clauses in performance contracts with agency directors addresses the directors' responsiveness to court orders. To make future reports more useful, the District should include information on the status of court orders it has not previously reported on as well as those applicable during the fiscal year, including those that may have been vacated during the fiscal year and the steps taken to comply with them. The District should establish objective criteria for determining the types of court orders for which it will provide specific compliance information for future performance accountability reports, and it should consider ways to provide summary information related to any other court orders. In establishing objective criteria, the factors could include the cost, time, and magnitude of effort involved in complying with a court order. If the District government has not acted to comply with a court order it should include an explanation as to why no action was taken. The District's 2001 report contains a statement that "Following the publication of the FY 1999 Performance Accountability Report, GAO and the District's Office of Corporation Counsel agreed upon a list of 12 qualifying orders that should be included in the District's future Performance Accountability Reports." We did not intend to limit future reporting to only the 12 court orders first reported by the District for fiscal year 1999. We agreed on the list of 12 court orders because, at that time, the District had difficulty identifying all the court orders as required by statute. 
However, we believe that the District now has had time to develop criteria and a system for ensuring that updated and accurate information on the status of applicable court orders can be presented in its future performance accountability reports. Therefore, we are recommending that the mayor ensure that such steps are taken. The District has identified data collection standards as one of the areas it is working to improve. As with federal agencies, one of the biggest challenges the District faces is developing performance reports with reliable information to assess whether goals are being met or how performance can be improved. Data must be verified and validated to ensure the performance measures used are complete, accurate, consistent, and of sufficient quality to document performance and support decision making. Data verification and validation are key steps in assessing whether the measures are timely, reliable, and adequately represent actual performance. The District's performance and accountability reports should include information obtained from verification and validation efforts and should discuss strategies to address known data limitations. As reported in our June 2001 report on the District's fiscal year 2000 performance accountability report, the District had planned to issue performance review guidelines by the end of the summer of 2001. These guidelines were to be issued in response to an Inspector General's finding that the agencies did not maintain records and other supporting documentation for the accomplishments they reported regarding the fiscal year 2000 performance contracts. The District included information in its fiscal year 2003 budget instructions regarding performance measures emphasizing the importance of high quality data. Although not required for agencies' budget submissions, the guidance called for every agency to maintain, at a minimum, documentation on how it calculated each measure and the data source for each measure. 
In its 2001 performance accountability report, the District said it plans to address the development of data collection standards. The District plans to begin developing manuals to document how data for each performance measure is collected, how the measure is calculated, and who is responsible for collecting, analyzing, and reporting the data. A further step the District can consider is ensuring that these data are independently verified and validated. A District official acknowledged that validating and verifying performance information is something the District would deal with in the future. Credible performance information is essential for accurately assessing agencies' progress toward the achievement of their goals and pinpointing specific solutions to performance shortfalls. Agencies also need reliable information during their planning efforts to set realistic goals. Decision makers must have reliable and timely performance and financial information to ensure adequate accountability, manage for results, and make timely and well-informed judgments. Data limitations should also be documented and disclosed. Without reliable information on costs, for example, decision makers cannot effectively control and reduce costs, assess performance, and evaluate programs. Toward that end, the District must ensure that its new financial management system is effectively implemented to produce crucial financial information, such as the cost of services at the program level, on a timely and reliable basis. Although the District has made progress in presenting program performance goals and measures, the 2001 report did not contain goals and measures for all of its major activities, and it did not include information on other areas that accounted for 11 percent of its annual expenditures.
The District could enhance the transparency and accountability of its reports by continuing its efforts to ensure that agencies establish goals and measures that they will use to track performance during the year and by taking steps to ensure that agencies responsible for other budget activities (as shown in table 4) include these areas in their performance reports. The District did not include, for example, goals and measures for DCPS, although it did provide a copy of a testimony and stated that this was included, at least in part, to address concerns we had raised in our June 2001 report that the District's fiscal year 2000 performance accountability report did not cover DCPS. The District also did not include another 10 agencies in its 2001 performance accountability report and indicated that it is taking steps to include relevant goals and measures for some of these agencies in the next year's report. In addition to including goals and measures for the District's significant activities, the District should consider including related expenditure information to help ensure transparency and accountability. We found, for example, that the Department of Employment Services administers the unemployment and disability funds, but this expenditure information was not linked to the agency in the District's 2001 performance accountability report. By linking expenditures to the agencies that are responsible for them, the District can further improve its future performance accountability reports by providing a more complete picture of performance. The District, like several federal agencies, has found that it needed to change its performance goals--in some cases substantially--as it learned and gained experience during the early years of its performance measurement efforts. The District has continued to make progress in implementing a more results-oriented approach to management and accountability and issuing a timely and more complete performance accountability report.
As we have seen with federal agencies, cultural transformations do not come quickly or easily, and improvements in the District's performance management system are still underway. Despite the important progress that has been made, opportunities exist for the District to strengthen its efforts as it moves forward. In order to more fully comply with the Federal Payment Reauthorization Act of 1994, which requires the District to provide a statement of the status of any court orders applicable to the government of the District of Columbia during the year and the steps taken by the government to comply with such orders, the mayor should ensure that the District establish objective criteria to determine the types of court orders for which it will provide specific compliance information for future performance accountability reports. In establishing objective criteria, the factors could include the cost, time, and magnitude of effort involved in complying with these court orders. If the District government has not acted to comply with the court orders, it should include an explanation of why no action was taken. In addition, the District should provide summary information related to other applicable court orders in its performance accountability reports. The Mayor of the District of Columbia should also ensure that future performance accountability reports include information on the extent to which the District's performance measures and data have been verified and validated, discuss strategies to address known data limitations, and include goals and performance measures for the District's significant activities with links to related expenditure information to help ensure transparency and accountability. On April 2, 2002, we provided a draft of our report to the mayor of the District of Columbia for his review. In response to our request, the deputy mayor/city administrator met with us on April 4 to discuss the draft and provided us with written comments on April 8.
His written comments appear in appendix III. Overall, the deputy mayor stated that he agreed with the findings of the report and concurred with the report's recommendations. He stated that clear and meaningful performance reports are essential to communicate the extent to which the District has or has not met its goals and commitments to make those improvements. Further, he stated that the findings and recommendations in this report were consistent with the District government's intent of further improving its public reporting. The deputy mayor stated that the District would adopt our recommendation to develop objective criteria to determine the types of court orders for which it will provide specific compliance information for future performance accountability reports. Our recommendation also stated that the District should more fully comply with the statute by reporting information on the steps taken by the District government to comply with these orders. The deputy mayor said that they would provide such additional information although he stated that the statute does not specifically require that this information be provided. However, the Federal Payment Reauthorization Act of 1994 (P.L. 103-373) section 456(b)(C) requires that the District's performance accountability report contain "a statement of the status of any court orders applicable to the government of the District of Columbia during the year and the steps taken by the government to comply with such orders." We encourage the District government to comply with this requirement and concur with its comment that providing this information would make the report more informative and useful to Congress and the general public. The deputy mayor also concurred with our recommendation that the District's future performance reports include information on the extent to which its performance data have been validated and verified. 
The deputy mayor said that seven District agencies participating in the District's performance-based budgeting pilot would be developing data collection manuals this summer. We encourage the District to proceed with this effort as well as to develop and report on strategies for addressing limitations in its data collection efforts. We have suggested in prior reports that when federal agencies have low-quality or unavailable performance data, they should discuss how they plan to deal with such limitations in their performance plans and reports. Assessments of data quality do not lead to improved data for accountability and program management unless steps are taken to respond to the data limitations that are identified. In addition, alerting decision makers and stakeholders to significant data limitations allows them to judge the data's credibility for their intended use and to use the data in appropriate ways. Regarding the independent verification of performance data, the deputy mayor stated that the District's ability to secure independent verification of more than selected goals and measures is limited by the resources available to the District's Office of the Inspector General (OIG). He said that the OIG conducted spot-check audits of selected scorecard goals in the fiscal year 2000 performance accountability report and, although these limited audits allowed the District to determine the validity of only those particular measures, this effort provided valuable observations and suggestions on how District agencies could improve their data collection practices. He also said that his office has discussed initiating additional spot-check audits of selected goals and measures with the OIG during fiscal year 2002. We agree that such spot checks would be useful. The knowledge that the OIG will be spot-checking some performance data during each fiscal year provides a good incentive to develop and use accurate, high-quality data.
In our prior work, we have encouraged federal agencies to use a variety of strategies to verify and validate their performance information, depending upon the unique characteristics of their programs, stakeholder concerns, performance measures, and data resources. In addition to relying on inspector general assessments of data systems and performance measures, the District can use feedback from data users and external stakeholders to help ensure that measures are valid for their intended use. Other approaches can include taking steps to comply with quality standards established by professional organizations and/or using technical or peer review panels to ensure that performance data meet quality specifications. The District can also test the accuracy of its performance data by comparing it with other sources of similar data, such as data obtained from external studies, prior research, and program evaluations. The deputy mayor said that the District would be making efforts to include additional agencies and budget activities in future performance reports. We encourage the District to proceed with these efforts. Of the 10 agencies that were not included in the fiscal year 2001 performance report, the District has already included 3 agencies (the Office of Asian and Pacific Islander Affairs, the Child and Family Services Agency, and the Office of Veteran Affairs) in its fiscal year 2002 performance plan issued in March 2002. In addition, the deputy mayor stated that three additional agencies (the Office of the Secretary, the Housing Finance Agency, and the National Capital Revitalization Corporation) would be included in the District's consensus budget to be submitted to the Council of the District of Columbia in June 2002. 
With regard to the budget activities that were not included in the District's fiscal year 2001 performance report, the deputy mayor agreed that it would be appropriate to develop performance measures for six funds, such as settlements and judgments and administration of the disability compensation fund. The deputy mayor acknowledged that establishing performance measures for administering an additional six funds, such as the Public Benefit Corporation, would have been appropriate, but noted that these funds no longer exist. The deputy mayor said that the District of Columbia Retirement Board manages two funds that had relevant performance measures in the District's 2001 report. We noted, however, that these two retirement funds were not specifically identified in the 2001 performance accountability report. We are sending copies of this report to the Honorable Anthony A. Williams, Mayor of the District of Columbia. We will make copies available to others upon request. Key contributors to this report were Katherine Cunningham, Steven Lozano, Sylvia Shanks, and Susan Ragland. Please contact me or Ms. Ragland at (202) 512-6806 if you have any questions on the material in this report. The District's fiscal year 2001 performance accountability report included 66 agencies accounting for 83 percent of the District's operating budget for fiscal year 2001. Table 2 lists these agencies and their fiscal year 2001 actual expenditures. The District's fiscal year 2001 performance accountability report did not include 10 District agencies, primarily because they did not publish performance goals in the District's 2001 performance plan. Table 3 lists these agencies and their fiscal year 2001 actual expenditures. In addition to these 10 agencies, we identified several budget activities--accounting for 11 percent of the District's total fiscal year 2001 actual expenditures--that were not included in the fiscal year 2001 performance accountability report.
Table 4 lists these activities and related fiscal year 2001 actual expenditures.
This report examines the progress the District of Columbia has made with its fiscal year 2001 performance accountability report and highlights continuing challenges facing our nation's capital. The District must submit a performance accountability plan with goals for the coming fiscal year and, at the end of the fiscal year, a performance accountability report on the extent to which it achieved these goals. GAO found that the District's Performance Accountability Report for Fiscal Year 2001 provided a more complete picture of its performance and made progress in complying with statutory reporting requirements by using a consistent set of goals. This allowed the District to measure and report progress toward the goals in its 2001 performance plan. Specifically, it reported information on the level of performance achieved, listed the titles of managers and their supervisors responsible for each goal, and described the status of certain court orders. The District has made progress in its performance accountability reports over the last three years and has set a positive direction for enhancements in court order reporting, its fiscal year 2003 performance-based budgeting pilots, and performance goals and measures.
State's $162 million Biometric Visa Program is designed to work hand-in-hand with the DHS multibillion-dollar US-VISIT program. Both programs aim to improve U.S. border security by verifying the identity of persons entering the United States. Both programs rely on the DHS Automated Biometric Identification System, known as IDENT, which is a repository of fingerprints and digital photographs of persons who either have applied for U.S. visas since the inception of the program in September 2003, have entered the United States at one of 115 air or 14 sea ports of entry since January 2004, or are on a watch list--whether for previous immigration violations or as part of the FBI's database of terrorists and individuals with felony convictions. The process for determining who will be issued a visa consists of several steps. When a person applies for a visa at a U.S. consulate, a fingerprint scan is taken of the applicant's right and left index fingers. These prints are then transmitted from the overseas post through servers at State to DHS's IDENT system, which searches its records and sends a response back through State to the post. A "hit" response--meaning that a match to someone previously entered in the system was found--prevents the post's computer system from printing a visa for the applicant until the information is reviewed and cleared by a consular officer. According to State data, the entire process generally takes about 30 minutes. If the computer cannot determine whether two sets of prints match, IDENT refers the case to DHS fingerprint experts, who have up to 24 hours to return a response to State (see fig. 1). US-VISIT aims to enhance national security, facilitate legitimate trade and travel, contribute to the integrity of the U.S. immigration system, and adhere to U.S. 
privacy laws and policies by collecting, maintaining, and sharing information on certain foreign nationals who enter and exit the United States; identifying foreign nationals who (1) have overstayed or violated the terms of their visit; (2) can receive, extend, or adjust their immigration status; or (3) should be apprehended or detained by law enforcement officials; detecting fraudulent travel documents, verifying traveler identity, and determining traveler admissibility through the use of biometrics; and facilitating information sharing and coordination among appropriate agencies. The process by which a foreign national is screened for entry is as follows: When a foreign national arrives at a port of entry to the United States, a DHS inspector scans the machine-readable travel documents. Existing records on the foreign national, including biographic lookout hits, are returned. The computer presents available biographic information and a photograph and determines whether IDENT contains existing fingerprints for the foreign national. The inspector then scans the foreign national's fingerprints (left and right index fingers) and takes a photograph. This information is checked against stored fingerprints in IDENT. If no matching prints are in IDENT, the foreign national is enrolled in US-VISIT (i.e., biographic and biometric data are entered). If the foreign national's fingerprints are already in IDENT, the system performs a comparison of the fingerprint taken at the port of entry to the one on file to confirm that the person submitting the fingerprints is the person on file. If the system finds a mismatch of fingerprints or a watch list hit, the foreign national is held for further screening or processing. State's implementation of the technology aspects of the biometric visa program is currently on schedule to meet the October 26, 2004, deadline. 
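The entry-screening sequence described above can be sketched as a simple decision function. This is an illustrative model only, not DHS logic: the record structure, field names, and return values are hypothetical, and real IDENT matching uses biometric comparison algorithms rather than exact equality.

```python
# Illustrative sketch of the US-VISIT entry-screening flow described above.
# All data structures and names are hypothetical, not drawn from DHS systems.

def screen_entry(traveler, ident_db, watch_list):
    """Return a disposition for a traveler at a port of entry."""
    # Step 1: the inspector scans the machine-readable travel documents,
    # and any existing IDENT record for the traveler is returned.
    record = ident_db.get(traveler["document_id"])

    # Step 2: biographic lookout check against the watch list.
    watch_hit = traveler["name"] in watch_list

    # Step 3: the inspector scans the left and right index fingerprints.
    prints = traveler["fingerprints"]

    if record is None:
        # No prints on file: enroll the traveler in US-VISIT
        # (biographic and biometric data are entered).
        ident_db[traveler["document_id"]] = {
            "name": traveler["name"],
            "fingerprints": prints,
        }
        prints_match = True
    else:
        # Prints on file: confirm the person submitting the fingerprints
        # is the person on file (real systems use biometric matching,
        # not exact comparison).
        prints_match = record["fingerprints"] == prints

    # Step 4: a fingerprint mismatch or watch-list hit means the traveler
    # is held for further screening or processing.
    if not prints_match or watch_hit:
        return "refer_for_secondary_screening"
    return "admit"
```

In this sketch, a first-time traveler with no lookout hit is enrolled and admitted; a returning traveler whose prints do not match the enrolled record, or any traveler with a watch-list hit, is referred for secondary screening.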
According to State officials, a well-planned rollout of equipment and software and fewer technical problems than anticipated led to smooth implementation of the technological aspects of the program at the 201 posts that had the program operating as of September 1, 2004. But amid the fast pace of rolling out the program to meet the deadline, DHS and State have not provided comprehensive guidance for consular posts on how the information about visa applicants made available through the Biometric Visa Program should best be used to help adjudicate visas. Indeed, we found several significant differences in the implementation of the biometric program during our visits to San Salvador, El Salvador, and Santo Domingo, Dominican Republic. State acknowledged that posts may be implementing the program in various ways across the 207 consular posts that issue nonimmigrant visas. According to State officials, the implementation process for the biometric program led to far fewer technical problems than expected. Early on, State had a few difficulties in transmitting data between the posts and DHS's IDENT, primarily related to server and firewall (computer security) issues. According to State, most issues were resolved within a few days. In fact, 201 of the 207 nonimmigrant visa (NIV)-issuing posts had the software and hardware installed and were transmitting prints to IDENT for analysis as of September 1, 2004. State anticipates the completion of the installation by the October 2004 deadline. According to State's data, from February to August 2004 the total biometric visa process averaged about 30 minutes: an applicant's prints are sent from an overseas post to the State server and on to DHS for IDENT analysis, and the response is then returned through State's server to the post. IDENT response time could affect visa issuance times because a visa cannot be issued until the post has received and reviewed the IDENT response. 
Our observations at posts in San Salvador and Santo Domingo demonstrated the importance of the length of time required to receive an IDENT response. We observed that most interviews average only a few minutes, but the IDENT response currently takes about 30 minutes. Thus, if interviewing officers collect prints during the interview, the interview would be completed before the IDENT response would be available to consular officers. Since the visa cannot be issued until the IDENT information is considered by the consulate, potential delays in the IDENT response times could have a major effect on the visa issuance process and inconvenience visa applicants. State has encouraged consular officials to issue visas the day after interviews since part of the visa process now relies on another agency's system. This will require significant changes for posts such as Santo Domingo, which still issues same-day visas. State has focused on implementing the Biometric Visa Program by the mandated deadline; however, our report identifies certain lags in guidance on how the program should be implemented at consular posts. State and DHS have not yet provided to posts details of how all aspects of the program will be implemented, including who should scan fingerprints, where and who should review information about applicants returned from IDENT, and response times for the IDENT system. In addition, DHS and State have not provided comprehensive guidance for consular posts on how the information about visa applicants made available through the Biometric Visa Program should be used to help adjudicate visas. We believe that it is important for State and DHS to articulate how the program could best be implemented, providing a roadmap for posts to develop implementation plans that incorporate the guidance. We recognize, however, that workload, personnel, and facility resources vary considerably from post to post. 
As a result, each post may not be able to easily implement the Biometric Visa Program according to a precise set of guidelines. However, posts could develop procedures to implement the guidance, identify resource and facility constraints, and implement mitigating actions to address their own unique circumstances. Therefore, we have recommended that DHS and State provide comprehensive guidance to consular posts on how information about visa applicants that is now available from IDENT should be used to help adjudicate visas. In responding to our recommendation, DHS generally concurred and State acknowledged that there may be a lag in guidance. Our work at two posts shows that, because they lack specific guidance on the system's use, consular officers at these overseas posts are uncertain how they should implement the Biometric Visa Program and are currently using the returned IDENT responses in a variety of ways. For example, we found that, in cases in which the IDENT response information is available to the overseas post by the time of the visa applicant interview, some consular officers who conduct interviews review the information before the interview, some review it during the interview, and some rely instead on a designated officer or the line chief to review the information after the interview is completed and before affected visas are printed. We found several differences in the visa operations at two posts--San Salvador, El Salvador, and Santo Domingo, Dominican Republic--that handle a large volume of visa applications. For example, San Salvador, one of the first posts to begin implementing the program in September 2003, has a large new embassy complex that allowed the post great flexibility in implementing the collection of biometrics. Applicants are led through outdoor security screening before entering the interview waiting room. 
Once in the waiting room, they immediately proceed to a fingerprint scanning window where an American officer verifies their names and photographs and scans their fingerprints. By the time they arrive at their interview windows, the interviewing officer has usually received their IDENT responses. However, the post has designated one officer to review all of the IDENT responses, so some interviewing officers do not take the time to review IDENT information on those they interview even if the information is available at the time of the interview. Santo Domingo's consular section is hampered by significant facility constraints. The NIV applicant waiting area is very cramped and has been even more restricted over recent months due to construction efforts. Some of the NIV applicants are forced to share space in the immigrant visa waiting area. Santo Domingo has fewer interviewing windows than San Salvador and cannot easily spare one to designate for full-time fingerprint scanning due to high interview volume. Some interviewing officers scan applicants' fingerprints at the time of the interview, so the interview ends before the IDENT response has been returned from DHS. One consular officer is designated to review the IDENT responses for all of the applicants, and interviewing officers may not see IDENT information on the applicants they interview. In some cases, the designated officer determines if the applicant should receive a visa, and in others he brings the IDENT information back to the original interviewing officer for further review. Since September 11, 2001, we have issued reports recommending that State and DHS work together to improve several aspects of border security and the visa process, as described below. These reports show the importance of joint, coordinated actions by State and DHS to maximize program effectiveness. 
The US-VISIT program supports a multifaceted, critical mission: to help protect approximately 95,000 miles of shoreline and navigable waterways through inspections of foreign nationals at U.S. ports of entry. DHS has deployed an initial operating capability for entry at 115 airports and 14 seaports. It has also deployed an exit capability, as a pilot, at two airports and one seaport. DHS reports that since the program became operational, more than eight million foreign nationals have been processed by US-VISIT at ports of entry, resulting in hundreds being denied entry. Its scope is large and complex, connecting 16 existing information technology systems in a governmentwide process involving multiple departments and agencies. In addition to these and other challenges, the program's operational context, or homeland security enterprise architecture, is not yet adequately defined. DHS released an initial version of its enterprise architecture in September 2003; however, we found that this architecture was missing, either partially or completely, all the key elements expected in a well-defined architecture, such as descriptions of business processes, information flows among these processes, and security rules associated with these information flows. DHS could benefit from such key elements to help clarify and optimize the relationships between US-VISIT and other homeland security programs' operations, such as State's Biometric Visa Program, both in terms of processes and the underlying information technology infrastructure and applications. Although the biometrics program is administered by State, it falls under the overall visa policy area of the DHS Directorate of Border and Transportation Security and is part of our national homeland security mission. State officials indicated that they are waiting for DHS to further define US-VISIT, which would help guide State's actions on the Biometric Visa Program. 
Since September 11, 2001, our work has demonstrated the need for State and DHS to work together to better address potential vulnerabilities in the visa process. In June 2003, we identified systemic weaknesses in the visa revocation process, many of which were the result of a failure to share and fully utilize information. We reported that the visa revocation process was not used aggressively to share information among agencies on individuals with visas revoked on terrorism grounds. It also broke down when these individuals had already entered the United States prior to revocation. Immigration officials and the Federal Bureau of Investigation (FBI) were not then routinely taking actions to investigate, locate, or resolve the cases of individuals who remained in the United States after their visas were revoked. Therefore, we recommended that DHS, in conjunction with the Departments of State and Justice, develop specific policies and procedures to ensure that appropriate agencies are notified of revocations based on terrorism grounds and take proper actions. In July 2004, we followed up on our findings and recommendations regarding interagency coordination in the visa revocation process and found that State and DHS had taken some actions in the summer of 2003 to address these weaknesses. However, our review showed that some weaknesses remained. For instance, in some cases State took a week or longer to notify DHS that individuals with revoked visas might be in the country. Without these notifications, DHS may not know to investigate those individuals. 
Given outstanding legal and policy issues regarding the removal of individuals based solely on their visa revocation, we recommended that the Secretaries of Homeland Security and State jointly (1) develop a written governmentwide policy that clearly defines roles and responsibilities and sets performance standards and (2) address outstanding legal and policy issues in this area or provide Congress with specific actions it could take to resolve them. State agreed to work together with DHS to address these recommendations. In February 2004, we reported that the time it takes to adjudicate a visa for a science student or scholar depends largely on whether an applicant must undergo a security check known as Visas Mantis, which is designed to protect against sensitive technology transfers. Based on a random sample of Visas Mantis cases for science students and scholars, we found it took an average of 67 days for the interagency security check to be processed and for State to notify the post. We also found that the way in which Visas Mantis information was disseminated at headquarters made it difficult to resolve some cases expeditiously. Finally, consular staff at posts we visited stated that they lacked clear guidance on the Visas Mantis program. While State and FBI officials acknowledged there had been lengthy waits, they reported having measures under way to improve the process and to identify and resolve outstanding Visas Mantis cases. We recommended that the Secretary of State, in coordination with the Director of the FBI and the Secretary of Homeland Security, develop and implement a plan to improve the Visas Mantis process. We are currently reviewing the measures these agencies have taken since our February report to improve the Visas Mantis program and will report on our findings at the beginning of next year. 
Overall, we have reported on a number of areas in which joint, coordinated actions by DHS and State are needed to improve border security and visa processing. In commenting on our report on State's biometric program, both DHS and State have pledged their commitment to continued cooperation and joint actions. Indeed, these agencies are currently working together as part of the US-VISIT program. For example, State participates in two DHS-led groups designed to oversee and manage the US-VISIT program. First, State participates on the US-VISIT Federal Stakeholders Advisory Board, which provides guidance and direction to the US-VISIT program. State also participates as part of the US-VISIT Integrated Project Team, which meets weekly to discuss, among other things, operational issues concerning the deployment of US-VISIT. Mr. Chairman, overall, our work has demonstrated that coordinated, joint actions by State and DHS are critical for homeland and border security. State and DHS have worked together to roll out the biometric technology to consular posts worldwide on schedule. Moreover, their cooperation on US-VISIT will be critical to ensure that information is available to consulates to adjudicate visa applications and prevent persons from unlawfully entering the United States. However, they have not yet provided comprehensive guidance to the posts on how the program and biometric information should be used to adjudicate visas. We recognize that it may not be feasible for each post to implement biometric visas in the same way, given the variances among posts in workload, security concerns with the applicant pool, facilities, and personnel. However, guidance to posts on how best to implement the program, including best practices, would enable posts to develop operating procedures, identify resource needs, and implement mitigating actions to address the unique circumstances at each post. 
Therefore we have recommended that the Secretaries of Homeland Security and State develop and provide comprehensive guidance to consular posts on how best to implement the Biometric Visa Program. The guidance should address the planned uses for the information generated by the Biometric Visa Program at consular posts, including directions to consular officers on when and how information from the IDENT database on visa applicants should be considered. Further, we have recommended that the Secretary of State direct consular posts to develop an implementation plan based on this guidance. DHS generally concurred with our recommendations, stating that GAO's identification of areas where improvements are needed in the Biometric Visa Program will contribute to ongoing efforts to strengthen the visa process. State acknowledged that there may be a lag in guidance. Regarding US-VISIT, we made an earlier recommendation that the Secretary of Homeland Security clarify the operational context in which US-VISIT is to operate. DHS agreed with our recommendation and plans to issue the next version of its enterprise architecture in September of 2004. This is an essential component in establishing biometric policy and creating consistency between the DHS-run US-VISIT program and State's Biometric Visa Program. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions you or other members of the committee may have. For questions regarding this testimony, please call Jess Ford at (202) 512-4128. Other key contributors to this statement include John Brummet, Sharron Candon, Deborah Davis, Kathryn Hartsburg, David Hinchman, and David Noone. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since September 11, 2001, the U.S. government has made a concerted effort to strengthen border security by enhancing visa issuance policies and procedures, as well as expanding screening of the millions of foreign visitors who enter the United States annually. Consistent with the 9/11 Commission report that recommends a biometric entry-exit screening system for travelers, the Department of State's biometric program complements the Department of Homeland Security's (DHS) United States Visitor and Immigrant Status Indicator Technology (US-VISIT) program--a governmentwide program to better control and monitor the entry, visa status, and exit of visitors. GAO was asked to present the findings of its report on State's Biometric Visa Program, as well as discuss other aspects of visa processing and border security that require coordinated, joint actions by State and DHS. Our report issued today finds that State is implementing the Biometric Visa Program on schedule and will likely meet the October 26, 2004, deadline for issuing visas that include biometric indicators, as mandated by Congress. As of September 1, 2004, State had installed program hardware and software at 201 visa issuing posts overseas and plans to complete the installation at the remaining 6 posts by September 30. Technology installation has progressed smoothly; however, State and DHS have not provided comprehensive guidance to consular posts on when and how information from the DHS Automated Biometric Identification System (IDENT) on visa applicants should be considered by adjudicating consular officers. In the absence of such guidance, we found that these officers are unclear on how best to use the biometric program and IDENT information. Since September 11, State and DHS have made many improvements to visa issuance and border security policies. Nevertheless, in prior reports, we have found additional vulnerabilities that need to be addressed through joint, coordinated actions.
For example, DHS has not adequately defined the operational context for US-VISIT, which affects the biometric program. In addition, we identified systemic weaknesses in information sharing between State and DHS in the visa revocation process. Moreover, we found related weaknesses in an interagency security check process aimed at preventing the illegal transfer of sensitive technologies.
STEM fields include a wide range of disciplines and occupations, including agriculture, physics, psychology, medical technology, and automotive engineering. Many of these fields require completion of advanced courses in mathematics or science, subjects that are first introduced and developed at the kindergarten through 12th grade level. The federal government, universities and colleges, and other entities have taken steps to help improve achievement in these and other subjects through such actions as enforcement of the No Child Left Behind Act (NCLBA), which addresses both student and teacher performance at the elementary and secondary school levels, and implementation of programs to increase the numbers of women, minorities, and students with disadvantaged backgrounds in the STEM fields at postsecondary school levels and later in employment. The participation of domestic students in STEM fields--and in higher education more generally--is affected both by the economy and by demographic changes in the U.S. population. Enrollment in higher education has declined with upturns in the economy because of the increased opportunity costs of going to school when relatively high wages are available. The choice between academic programs is also affected by the wages expected to be earned after obtaining a degree. Demographic trends affect STEM fields because different races and ethnicities have had different enrollment rates, and their representation in the population is changing. In particular, STEM fields have had a relatively high proportion of white or Asian males, but the proportion of other minorities enrolled in the nation's public schools, particularly Hispanics, has almost doubled since 1972. Furthermore, as of 2002, American Indians, Asians, African-Americans, Hispanics, and Pacific Islanders constituted 29 percent of all college students. Students and employees from foreign countries have pursued STEM degrees and worked in STEM occupations in the United States as well.
To do so, these students and employees must obtain education or employment visas. Visas may not be issued to students for a number of reasons, including concerns that the visa applicant may engage in the illegal transfer of sensitive technology. Many foreign workers enter the United States annually through the H-1B visa program, which assists U.S. employers in temporarily filling specialty occupations. Employed workers may stay in the United States on an H-1B visa for up to 6 years, and the current cap on the number of H-1B visas that can be granted is 65,000. The law exempts certain workers from this cap, including those in specified positions or holding a master's degree or higher from a U.S. institution. The federal government also plays a role in helping coordinate federal science and technology initiatives. The National Science and Technology Council (NSTC) was established in 1993 and is the principal means for the Administration to coordinate science and technology policies. One objective of NSTC is to establish clear national goals for federal science and technology investments in areas ranging from information technologies and health research to improving transportation systems and strengthening fundamental research. From the 1994-1995 academic year to the 2003-2004 academic year, the number of graduates with STEM degrees increased, but the proportion of students obtaining degrees in STEM fields fell. Teacher quality, academic preparation, collegiate degree requirements, and the pay for employment in STEM fields were cited by university officials and Education as factors affecting the pursuit of degrees in these fields. The number of graduates with degrees in STEM fields increased from approximately 519,000 to approximately 578,000 from the 1994-1995 academic year to the 2003-2004 academic year. However, during this same period, the number of graduates with degrees in non-STEM fields increased from about 1.1 million to 1.5 million. 
Thus, the percentage of students with STEM degrees decreased from about 32 percent to about 27 percent of total graduates. The largest increases at the bachelor's and master's levels were in mathematics and the computer sciences, and the largest increase at the doctoral level was in psychology. However, the overall number of students earning degrees in engineering decreased in this period, and the number of students earning doctoral degrees in the physical sciences and bachelor's degrees in technology-related fields, as well as several other fields, also declined. Figure 1 shows the number of graduates for STEM and non-STEM fields in the 1994-1995 through 2003-2004 academic years. From the 1994-1995 academic year to the 2002-2003 academic year, the proportion of women earning degrees in STEM fields increased at the bachelor's, master's, and doctoral levels. Conversely, the total number of male graduates decreased, and the proportion of male graduates declined in the majority of STEM fields at all educational levels in this same period. However, men continued to constitute over 50 percent of the graduates in most STEM fields. The proportion of domestic minorities increased at the bachelor's level but did not change at the master's or doctoral level. In the 1994-1995 and 2002-2003 academic years, international students earned about one-third or more of the degrees at both the master's and doctoral levels in engineering, math and computer science, and the physical sciences. University officials told us and researchers reported that the quality of teachers in kindergarten through 12th grades and the levels of mathematics and science courses completed during high school affected students' success in and decisions about pursuing STEM fields.
University officials said that some teachers were unqualified and unable to impart the subject matter, causing students to lose interest in mathematics and science. In 2002, Education reported that, in the 1999-2000 school year, 45 percent of the high school students enrolled in biology/life science classes and approximately 30 percent of those enrolled in mathematics, English, and social science classes were instructed by teachers without a major, minor, or certification in these subjects--commonly referred to as "out-of-field" teachers. Also, states reported that the problem of underprepared teachers was worse on average in districts that serve large proportions of high-poverty children. In addition to teacher quality, students' high school preparation in mathematics and science was cited by university officials and researchers as a factor that influenced students' participation and success in the STEM fields. For example, university officials said that, because many students had not taken higher-level mathematics and science courses such as calculus and physics in high school, they were immediately behind other students. A study of several hundred college students who had left the STEM fields found that about 40 percent of those who left the science fields cited problems related to their high school science preparation. Several other factors were cited by university officials, students, and others as influencing decisions about participation in STEM fields. These factors included the relatively low pay in STEM occupations, additional tuition costs to obtain STEM degrees, and the availability of mentoring, especially for women and minorities, in the STEM fields. For example, officials from five universities told us that low pay in STEM occupations relative to other fields such as law and business dissuaded students from pursuing STEM degrees.
Also, in a study that solicited the views of college students who left the STEM fields as well as those who continued to pursue STEM degrees, researchers found that students experienced greater financial difficulties in obtaining their degrees because of the extra time needed to obtain degrees in certain STEM fields. University officials, students, and other organizations suggested a number of steps that could be taken to encourage more participation in the STEM fields. University officials and students suggested more outreach, especially to women and minorities from kindergarten through the 12th grade. One organization, Building Engineering and Science Talent (BEST), suggested that research universities increase their presence in pre-kindergarten through 12th grade mathematics and science education in order to strengthen domestic students' interests and abilities. In addition, the Council of Graduate Schools called for a renewed commitment to graduate education by the federal government through actions such as providing funds to support students trained at the doctoral level in the STEM fields and expanding participation in doctoral study in selected fields through graduate support awarded competitively to universities across the country. University officials suggested that the federal government could enhance its role in STEM education by providing more effective leadership through developing and implementing a national agenda for STEM education and increasing federal funding for academic research. Although the total number of STEM employees increased from 1994 to 2003, particularly in mathematics and computer science, there was no evidence that the number of employees in engineering and technology-related fields increased. University officials, researchers, and others cited the availability of mentors as having a large influence on the decision to enter STEM fields and noted that many students with STEM degrees find employment in non-STEM fields.
The number of foreign workers declined in STEM fields, in part because of declines in enrollment in U.S. programs resulting from difficulties with the U.S. visa system. Key factors affecting STEM employment decisions include the availability of mentors for women and minorities and opportunities abroad for foreign workers. From 1994 to 2003, employment in STEM fields increased from an estimated 7.2 million to an estimated 8.9 million--representing a 23 percent increase, as compared to a 17 percent increase in non-STEM fields. While the total number of STEM employees increased, this increase varied across STEM fields. Coinciding with the spread of the Internet and the personal computer, employment increased by an estimated 78 percent in the mathematics/computer sciences fields and by an estimated 20 percent in the sciences. There was no evidence that the number of employees in the engineering and technology-related fields increased. Further, a 2006 National Science Foundation report found that about two-thirds of employees with degrees in science or engineering were employed in fields somewhat or not at all related to their degree. Figure 2 shows the estimated number of employees in STEM fields. Women and minorities employed in STEM fields increased between 1994 and 2003, and the number of foreign workers declined. While the estimated number of women employees in STEM fields increased from about 2.7 million to about 3.5 million in this period, this did not result in a change in the proportion of women employees in the STEM fields relative to men. Specifically, women constituted an estimated 38 percent of the employees in STEM fields in 1994 and an estimated 39 percent in 2003, compared to 46 and 47 percent of the civilian labor force in 1994 and 2003, respectively.
The estimated number of minorities employed in the STEM fields as well as the proportion of total STEM employees they constituted increased, but African-American and Hispanic employees remained underrepresented relative to their percentages in the civilian labor force. For example, in 2003, Hispanic employees constituted an estimated 10 percent of STEM employees compared to about 13 percent of the civilian labor force. Foreign workers traditionally had filled hundreds of thousands of positions, many in STEM fields, through the H-1B visa program. In recent years, these numbers have declined in certain fields. For example, the number of approvals for systems analysis/programming positions decreased from about 163,000 in 2001 to about 56,000 in 2002. University officials and congressional commissions noted the important role that mentors play in encouraging employment in STEM fields and that this was particularly important for women and minorities. One professor said that mentors helped students by advising them on the best track to follow for obtaining their degrees and achieving professional goals. In September 2000, a congressional commission reported that women were adversely affected throughout the STEM education pipeline and career path by a lack of role models and mentors. University officials and education policy experts told us that competition from other countries in educational or work opportunities and the more strict U.S. visa process since September 11, 2001, affected international employee decisions about studying and working in the United States. For example, university officials told us that students from several countries, including China and India, were being recruited by universities and employers in both their own countries and other countries as well as the United States. They also said that students were influenced by the perceived unwelcoming attitude of Americans and the complex visa process.
GAO has reported on several aspects of the visa process and has made several recommendations for improving federal management of the process. In 2002, we cited the need for a clear policy on how to balance national security concerns with the desire to facilitate legitimate travel when issuing visas. In 2005, we reported a significant decline in certain visa processing times and in the number of cases pending more than 60 days, and we also reported that in some cases science students and scholars can obtain a visa within 24 hours. However, in 2006, we found that new policies and procedures implemented since the September 11 attacks to strengthen the security of the visa process, along with other factors, have resulted in applicants facing extensive wait times for visas at some consular posts. Officials from 13 federal civilian agencies reported spending about $2.8 billion in fiscal year 2004 for 207 education programs designed to support STEM fields, but they reported little about the effectiveness of these programs. Although evaluations had been done or were under way for about half of the programs, little is known about the extent to which most STEM programs are achieving their desired results. Furthermore, coordination among the federal STEM education programs has been limited. However, in 2003, the National Science and Technology Council formed a subcommittee to address STEM education and workforce policy issues across federal agencies, and Congress has introduced new STEM initiatives as well. Officials from 13 federal civilian agencies reported that approximately $2.8 billion was spent in fiscal year 2004 on 207 STEM education programs.
The funding levels for STEM education programs among the agencies ranged from about $998 million for the National Institutes of Health to about $4.7 million for the Department of Homeland Security, and the numbers of programs ranged from 51 to 1 per agency, with two agencies--NIH and the National Science Foundation--administering nearly half of the programs. Most STEM education programs were funded at $5 million or less, but 13 programs were funded at more than $50 million, and the funding reported for individual programs varied significantly. For example, one Department of Agriculture-sponsored scholarship program for U.S. citizens seeking bachelor's degrees at Hispanic-serving institutions was funded at $4,000, and one NIH grant program designed to develop and enhance research training opportunities was funded at about $547 million. Figure 3 shows the funding and number of STEM education programs by federal civilian agency. According to the agency responses to GAO's survey, most STEM education programs had multiple goals, and one goal was to attract students or graduates to pursue STEM degrees and occupations. Many STEM programs also were designed to provide student research opportunities, provide support to educational institutions, or improve teacher training. In order to achieve these goals, many of the programs were targeted at multiple groups and provided financial assistance to multiple beneficiaries. STEM education programs most frequently provided financial support for students or scholars, and several programs provided assistance for teacher and faculty development as well. U.S. citizenship or permanent residence was required for the majority of programs. Table 1 presents the most frequent program goals and types of assistance provided.
Agency officials reported that evaluations--which could play an important role in improving program operations and ensuring an efficient use of federal resources--had been completed or were under way for about half of the STEM education programs. However, evaluations had not been done for over 70 programs that were started before fiscal year 2002, including several that had been operating for over 15 years. For the more than 30 remaining programs that were initially funded in fiscal year 2002 or later, it may have been too soon to expect evaluations. Coordination of federal STEM education programs has been limited. In January 2003, the National Science and Technology Council's Committee on Science (COS) established a subcommittee on education and workforce development. According to its charter, the subcommittee is to address education and workforce policy issues and research and development efforts that focus on STEM education issues at all levels, as well as current and projected STEM workforce needs, trends, and issues. The subcommittee has working groups on (1) human capacity in STEM areas, (2) minority programs, (3) effective practices for assessing federal efforts, and (4) issues affecting graduate and postdoctoral researchers. NSTC reported that, as of June 2005, the subcommittee had a number of accomplishments and had other projects under way related to attracting students to STEM fields. For example, it had surveyed federal agency education programs designed to increase the participation of women and underrepresented minorities in STEM studies, and it had coordinated the Excellence in Science, Technology, Engineering, and Mathematics Education Week activities, which provide an opportunity for the nation's schools to focus on improving mathematics and science education.
In addition, the subcommittee is developing a Web site for federal educational resources in STEM fields and a set of principles that agencies could use in setting levels of support for graduate and postdoctoral fellowships and traineeships. In passing the Deficit Reduction Act of 2005, Congress created a new source of grant aid for students pursuing a major in the physical sciences, the life sciences, the computer sciences, mathematics, technology, engineering, or a foreign language considered critical to the national security of the United States. These National Science and Mathematics Access to Retain Talent Grants--or SMART Grants--provide up to $4,000 for each of 2 academic years for eligible students. Eligible students are those who are in their third or fourth academic year of a program of undergraduate education at a 4-year degree-granting institution, have maintained a cumulative grade point average of 3.0 or above, and meet the eligibility requirements of the federal government's need-based Pell Grant program. Education expects to provide $790 million in SMART Grants to over 500,000 students in academic year 2006-2007. Congress also established an Academic Competitiveness Council in passing the Deficit Reduction Act of 2005. The council is composed of officials from federal agencies with responsibilities for managing existing federal programs that promote mathematics and science and is chaired by the Secretary of Education. Among the statutory duties of the council are to (1) identify all federal programs with a mathematics and science focus, (2) identify the target populations being served by such programs, (3) determine the effectiveness of such programs, (4) identify areas of overlap or duplication in such programs, and (5) recommend ways to efficiently integrate and coordinate such programs. Congress also charged the council to provide it with a report of its findings and recommendations by early 2007. 
In an April 2006 hearing before the House Committee on Education and the Workforce, the Secretary of Education testified that she and President Bush convened the first meeting of the council on March 6, 2006. While the total numbers of STEM graduates have increased, some fields have experienced declines, especially at the master's and doctoral levels. Given the trends in the numbers and percentages of graduates with STEM degrees--particularly advanced degrees--and recent developments that have influenced international students' decisions about pursuing degrees in the United States, it is uncertain whether the number of STEM graduates will be sufficient to meet future academic and employment needs and help the country maintain its technological competitive advantage. Moreover, although international graduate applications increased in academic year 2005-2006 for the first time in 3 years, it is too early to tell if this marks the end of declines in international graduate student enrollment. In terms of employment, despite some gains, the percentage of women in the STEM workforce has not changed significantly, minority employees remain underrepresented relative to their employment in the civilian labor force, and many graduates with degrees in STEM fields are not employed in STEM occupations. Women now outnumber men in college enrollment, and minority students are enrolling in record high levels at the postsecondary level as well. To the extent that these populations have been historically underrepresented in STEM fields, they provide an as yet untapped source of STEM participation in the future. To help improve the trends in the numbers of graduates and employees in STEM fields, university officials and others made several suggestions, such as increasing the federal commitment to STEM education programs.
However, before expanding the number of federal programs, it is important to know the extent to which existing STEM education programs are appropriately targeted and making the best use of available federal resources--in other words, these programs must be evaluated--yet no comprehensive evaluation of federal programs currently exists. Furthermore, the recent initiatives to improve federal coordination, such as the Academic Competitiveness Council, serve as an initial step in reducing unnecessary overlap between programs, not an ending point. In an era of limited financial resources and growing federal deficits, information about the effectiveness of these programs can help guide policymakers and program managers in coordinating and improving existing programs as well as determining areas in which new programs are needed. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the Committee may have. For further contacts regarding this testimony, please call Cornelia M. Ashby at (202) 512-7215. Individuals making key contributions to this testimony include Jeff Appel (Assistant Director), Jeff Weinstein (Analyst-in-Charge), Carolyn Taylor, Tim Hall, Mark Ward, John Mingus, and Katharine Leavitt. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The United States is a world leader in scientific and technological innovation. To help maintain this advantage, the federal government has spent billions of dollars on education programs in the science, technology, engineering, and mathematics (STEM) fields for many years. However, concerns have been raised about the nation's ability to maintain its global technological competitive advantage in the future. This testimony is based on our October 2005 report and presents information on (1) trends in degree attainment in STEM- and non-STEM-related fields and factors that may influence these trends, (2) trends in the levels of employment in STEM- and non-STEM-related fields and factors that may influence these trends, and (3) federal education programs intended to support the study of and employment in STEM-related fields. For this report, we analyzed survey responses from 13 civilian federal departments and agencies; analyzed data from the Departments of Education and Labor; interviewed educators, federal agency officials, and representatives from education associations and organizations; and interviewed students. While postsecondary enrollment has increased over the past decade, the proportion of students obtaining degrees in STEM fields has fallen. In academic year 1994-1995, about 519,000 students (32 percent) obtained STEM degrees. About 578,000 students obtained STEM degrees in academic year 2003-2004, accounting for 27 percent of degrees awarded. Despite increases in enrollment and degree attainment by women and minorities at the graduate level, the number of graduate degrees conferred fell in several STEM-related fields from academic year 1994-1995 to academic year 2003-2004. College and university officials and students most often cited subpar teacher quality and poor high school preparation as factors that discouraged the pursuit of STEM degrees. Suggestions to encourage more enrollment in STEM fields include increased outreach and mentoring.
The past decade has seen an increase in STEM employees, particularly in mathematics and computer science. From 1994 to 2003, employment in STEM fields increased by an estimated 23 percent, compared to 17 percent in non-STEM fields. Mathematics and computer science showed the highest increase in STEM-related employment, and employment in science-related fields increased as well. However, in certain STEM fields, including engineering, the number of employees did not increase significantly. Further, while the estimated number of women, African-Americans, and Hispanic-Americans employed in STEM fields increased, women and minorities remained underrepresented relative to their numbers in the civilian labor force. The number of foreign workers employed in the United States has fluctuated, experiencing declines in 2002 and 2003. Key factors affecting STEM employment decisions include mentoring for women and minorities and opportunities abroad for foreign employees. Thirteen federal civilian agencies spent approximately $2.8 billion in fiscal year 2004 to fund over 200 programs designed to increase the numbers of students in STEM fields and employees in STEM occupations and to improve related educational programs. The funding reported for individual STEM education programs varied significantly, and programs most commonly provided financial support to students or infrastructure support to institutions. However, only half of these programs had been evaluated or had evaluations underway, and coordination among STEM education programs was limited. It is important to know the extent to which existing STEM education programs target the right people and the right areas and make the best use of available resources. 
Since our report was issued in October 2005, Congress, in addition to establishing new grants to encourage students from low-income families to enroll in STEM fields, established an Academic Competitiveness Council to identify, evaluate, coordinate, and improve federal STEM programs.
Medicare covers about 40 million elderly (over 65 years old) and disabled beneficiaries. Individuals who are eligible for Medicare automatically receive Hospital Insurance, known as part A, which helps pay for inpatient hospital, skilled nursing facility, hospice, and certain home health services. A beneficiary generally pays no premium for this coverage unless the beneficiary or spouse has worked fewer than 40 quarters in his or her lifetime, but the beneficiary is liable for required deductibles, coinsurance, and copayment amounts. Medicare-eligible beneficiaries may elect to purchase Supplementary Medical Insurance, known as part B, which helps pay for certain physician, outpatient hospital, laboratory, and other services. Beneficiaries must pay a premium for part B coverage, which was $58.70 per month in 2003. Beneficiaries are also responsible for part B deductibles, coinsurance, and copayments. Table 1 summarizes the benefits covered and cost-sharing requirements for Medicare part A and part B. Many low-income Medicare beneficiaries who cannot afford to pay Medicare's cost-sharing requirements receive assistance from Medicaid. For Medicare beneficiaries qualifying for full Medicaid benefits, state Medicaid programs pay for Medicare's part A (if applicable) and part B cost-sharing requirements up to the Medicaid payment rate as well as for services that are not generally covered by Medicare, such as prescription drugs. To qualify for full Medicaid benefits, beneficiaries must meet their state's eligibility criteria, which include income and asset requirements that vary by state. In most states, beneficiaries that qualify for Supplemental Security Income (SSI) automatically qualify for full Medicaid benefits. 
Other beneficiaries may qualify through one of several optional eligibility categories targeted to low-income beneficiaries, individuals with high medical costs, or those receiving care at home or in the community who otherwise would have been institutionalized. To assist low-income Medicare beneficiaries with their premium and cost-sharing obligations, Congress established several Medicare savings programs--the QMB, SLMB, QI, and QDWI programs. Under these programs, state Medicaid programs pay enrolled beneficiaries' Medicare premiums. As a result, for QMB, SLMB, and QI beneficiaries, Medicare part B premiums would not be deducted from their monthly SSA checks. The QMB program also pays Medicare deductibles and other cost-sharing requirements, thereby saving beneficiaries from having to make such payments. Beneficiaries eligible for Medicare savings programs can apply for and be determined to be eligible through their state Medicaid programs. Thirty-three states have agreements with SSA whereby SSA makes eligibility determinations for a state if beneficiaries are deemed eligible by SSA to receive SSI benefits. In the other 18 states, even individuals who are eligible to receive SSI benefits must file an application with the state or local Medicaid agency to be eligible. Beneficiaries qualifying for Medicare savings programs receive different levels of assistance depending on their income. See table 2 for eligibility criteria and benefits for each program. In 1998, Congress passed legislation specifically providing funding for SSA to evaluate ways to promote Medicare savings programs. In response, SSA conducted demonstration projects to explore the effects of using various approaches to increase participation in Medicare savings programs. In one of these demonstrations conducted in 1999 and 2000, SSA tested six models designed to increase awareness and reduce barriers to enrollment. 
The models were implemented at 20 sites in 10 states, as well as the entire state of Massachusetts. The models differed in the extent to which SSA was involved in outreach efforts beyond mailing the letters. For example, in the "application model," SSA staff screened beneficiaries if they appeared to be eligible, completed applications, collected supporting documents, and forwarded the completed application form and supporting evidence to the state Medicaid agency for an eligibility determination. In the "peer assistance model," Medicare beneficiaries contacted an AARP toll-free number and were screened for program eligibility by an AARP volunteer. Across all six models, SSA sent more than 700,000 letters informing low-income Medicare beneficiaries that they may be eligible for benefits under the Medicare savings programs. The enrollment rate for each model varied--ranging from an additional 7 enrollees per 1,000 letters to 26 enrollees per 1,000 letters--with the application model recording the highest enrollment rate and peer assistance recording the lowest. In 2000, Congress amended the Social Security Act, through BIPA, requiring the Commissioner of Social Security to notify eligible Medicare beneficiaries about assistance available from state Medicaid programs to help pay Medicare premiums and cost sharing. BIPA also required SSA to furnish each state Medicaid program with the names and addresses of individuals residing in the state that SSA determines may be eligible for the Medicare savings programs. SSA is required to update such information at least annually. In addition to SSA's outreach efforts, CMS and individual states have engaged in efforts to increase enrollment in Medicare savings programs. Since fiscal year 2002, CMS has included increasing awareness of the Medicare savings programs as one of its Government Performance and Results Act (GPRA) goals. 
Specifically, CMS's goal in fiscal year 2002 was to develop a baseline to measure awareness of Medicare savings programs and to set future targets for increasing awareness. CMS estimated that 11 percent of beneficiaries were aware of Medicare savings programs in 2002 and the goal was to increase this to 13 percent for fiscal year 2003. As part of its efforts to increase awareness, CMS has coordinated with states, SSA, and other organizations regarding various outreach efforts; provided information about Medicare savings programs in various CMS publications; and developed a variety of educational materials for targeted populations, including minorities. CMS efforts in increasing enrollment in earlier years included setting state-specific enrollment targets and measuring progress toward these enrollment targets; developing and disseminating training and outreach materials to the states, and sponsoring national and regional training workshops for a variety of stakeholders, including other federal and state agencies, health care providers, and community organizations; designing a model application for Medicare savings programs that states can consider adopting; and providing grant funding to state Medicaid agencies, state health insurance assistance programs, and national advocacy groups to test and promote innovative approaches to outreach. In 2001, CMS also contracted for a survey of states to identify activities undertaken to increase program enrollment and streamline administration of these programs. Some of the most common state efforts included allowing application by mail (49 states), eliminating in-person interviews (46 states), developing a shorter application form (43 states), and conducting outreach presentations at health fairs (34 states). 
Other state efforts identified by the survey included increasing awareness of the programs through outreach efforts such as direct mailings and other printed material, and public service announcements on radio, television, and in newspapers; providing training for employees and education for beneficiaries; developing partnerships with other entities, such as State Health Insurance Assistance programs and local agencies on aging, to enhance outreach efforts and promote issues and solutions involving the Medicare savings programs; eliminating potential barriers to enrollment such as streamlining the enrollment and renewal process and easing financial eligibility rules; supplementing program benefits with other benefits, such as prescription drug discount programs; and providing information targeting underserved populations, including minorities. In response to BIPA, SSA is conducting an annual outreach effort to help increase enrollment in Medicare savings programs. This outreach consists of a nationwide mailing campaign and data sharing with the states. SSA selected low-income Medicare beneficiaries to be sent an outreach letter if their incomes were below the income eligibility ceilings for the Medicare savings programs. From May through November 2002, SSA sent a total of 16.4 million outreach letters to persons potentially eligible for QMB, SLMB, and QI. Additionally, in late 2002, SSA sent about 53,000 letters to those potentially eligible for benefits under the QDWI program. Starting in 2003, SSA has targeted annual outreach letters to individuals newly eligible for Medicare as well as a subset of those who were sent outreach letters in 2002 but are still not enrolled. From June through October 2003, SSA sent outreach letters to 4.3 million of these beneficiaries. 
SSA intends to continue its outreach mailing annually to potentially eligible beneficiaries, including recipients who did not enroll after receiving earlier letters, as well as those whose income has declined, making them eligible for the program. In addition to sending outreach letters, in 2002 and 2003 SSA provided states with a data file that listed residents who were potentially eligible for benefits under the Medicare savings programs. SSA plans to continue sharing these data once a year with states. The data provided by SSA could be used by the states to coordinate their outreach with SSA's or supplement SSA's outreach efforts. For the 2002 mailing, SSA sent letters three times each week from May through November. Each time letters were mailed, SSA sent them to approximately 207,000 Medicare beneficiaries randomly selected from the 16.4 million beneficiaries who were identified as potentially eligible for QMB, SLMB, and QI. Letters were targeted to beneficiaries whose incomes from Social Security and certain other federal sources were less than 135 percent of the federal poverty level (FPL). Specifically, those selected to be sent the outreach letters were intended to meet the following three criteria: individuals and couples entitled to Medicare, or within 2 months of Medicare entitlement eligibility; individuals who were not currently receiving Medicare savings program benefits under a state Medicaid program or not already entitled to full Medicaid based on SSI participation; and individuals and couples whose combined Social Security income and Department of Veterans Affairs and federal civil service pensions fell below the program's income eligibility ceiling. The letters provided information in English or Spanish about the Medicare savings programs, including state-specific asset guidelines and a state contact number. (See app. II for a sample 2002 outreach letter.) 
At the end of November 2002, SSA sent a separate mailing to about 53,000 disabled working adults who were potentially eligible for benefits under the QDWI program. Medicare beneficiaries who had sources of income other than Social Security--such as income from employment and public and private pensions--and whose total incomes were therefore above the programs' eligibility thresholds were nonetheless selected to be sent the SSA outreach letter, because SSA's data systems do not collect information on these income sources. In addition, SSA's records do not contain information about beneficiaries' private assets, making it impossible for SSA to identify whether letter recipients had assets within their states' Medicare savings programs' eligibility limits--typically $4,000 for an individual and $6,000 for couples. In 2002, the Medicare Rights Center, a national health advocacy group for older adults and people with disabilities, sought a federal court order requiring SSA to resend 1.4 million letters to potentially eligible beneficiaries in Connecticut and New York to correct erroneous information on the asset limit for the QI program. The New York and Connecticut letters had incorrectly informed potential beneficiaries that only individuals with assets of less than $4,000 were eligible for the QI program, even though Connecticut and New York abolished the asset requirement for QI eligibility in 2001 and 2002, respectively. SSA agreed to resend the letters and the parties settled the case before trial. In addition to sending letters to potentially eligible low-income Medicare beneficiaries, in 2002 SSA provided all but six states with an electronic data file containing the names of all beneficiaries to whom it had sent letters in that state. The data file contained information that could assist states with outreach efforts, such as the name, address, Social Security number, date of birth, spouse's name, and the basis for Medicare entitlement of each letter recipient. 
SSA is required to provide updated data to the states each year. For the June through October 2003 mailing, SSA sent a second round of letters to about 4.3 million potentially eligible low-income Medicare beneficiaries nationwide whom its records indicated might have met the QMB, SLMB, and QI income eligibility criteria and were not currently enrolled in Medicare savings programs. This mailing included beneficiaries who were newly eligible since the 2002 mailing, current Medicare beneficiaries who newly met the income criteria, and about one-fifth of the beneficiaries notified in 2002 who still met the mailing criteria but were not enrolled in a Medicare savings program. At the time we conducted our work, enrollment data for beneficiaries who were sent the letter in 2003 were not available. In contrast to the 2002 letter that provided state-specific eligibility criteria and a state-specific telephone number, the 2003 letter did not contain customized state information, but provided more general national information. The letter suggested that beneficiaries who may be eligible check the government list in their local telephone books for their local Medicaid contact or call the general 1-800-Medicare number that refers callers to state help lines, such as state or local medical assistance offices, social services, or welfare offices. SSA gave several reasons for not including state-specific information in the 2003 letter. One official indicated that there was additional cost to SSA to develop state-specific letters and therefore the agency did not tailor the letters for each state. CMS officials reported that a few states did not want to provide state-level contact numbers because eligibility and other Medicare savings program administrative matters were actually conducted at the county levels. 
Furthermore, in some cases, the telephone numbers states initially provided were changed shortly before the 2002 mailings began, creating additional need for SSA to coordinate with states in finalizing the letters. However, some state officials we interviewed expressed concern about the lack of state-specific information for the 2003 mailing. Because most states had established mechanisms for responding to these inquiries for the larger 2002 mailing, they worried that omitting state-specific criteria and contact information could make the letter less effective, since it could be more difficult for beneficiaries to obtain direct assistance or applications for eligibility determinations. We estimate that SSA's mailing from May through November 2002 to 16.4 million potentially eligible beneficiaries contributed to more than 74,000 additional beneficiaries enrolling in Medicare savings programs. Further, in the year following SSA's mailing, nationwide enrollment in Medicare savings programs increased 2.4 to 2.9 percentage points over that in the 3 previous years. Certain demographic groups also had larger additional increases in enrollment following the 2002 SSA mailing. For example, beneficiaries less than 65 years old, persons with disabilities, racial and ethnic minorities, and residents in southern states experienced larger additional increases in enrollment. On the basis of our analysis of SSA's Master Beneficiary Record (MBR), we estimate that, of the 16.4 million SSA letter recipients in 2002, approximately 74,000 more beneficiaries (0.5 percent of letter recipients) enrolled in Medicare savings programs than would likely have enrolled without the mailing. 
To estimate this increased enrollment, we examined two cohorts of letter recipients--a cohort of 1.3 million beneficiaries who were sent the letters during the first six mailings in May 2002 and a baseline cohort of 1.3 million beneficiaries who were sent the letters during the last six mailings through November 2002. Because SSA sent the mailing to beneficiaries in a random order nationwide from May through November 2002, the only difference between the cohorts is the time at which the letters were sent to them. As a result, other factors that could influence enrollment patterns, such as demographic differences or other outreach efforts by CMS and the states, should affect the May and November cohorts similarly. We used the November 2002 cohort as a baseline to examine how the May 2002 cohort's enrollment in Medicare savings programs was affected following SSA's mailing. As shown in figure 1, by August 2002--3 months after the initial letters were sent in May 2002--the Medicare savings program enrollment for the May cohort began to increase faster than that of the November cohort, which was yet to have the SSA letter sent to them. While the cohorts were sent the SSA letters in May or November 2002, SSA officials reported that it typically takes about 3 months before enrollment is reported in the MBR. As of December 2002, more than 5,800 additional beneficiaries in the cohort of 1.3 million beneficiaries who were sent the letter in May had enrolled in Medicare savings programs compared with the November cohort, whose enrollment was not yet affected by the mailing. (See table 3.) This additional enrollment in the May cohort represents 0.5 percent of the letter recipients. Projecting the experience of the May cohort to the universe of the 16.4 million letter recipients results in an estimate of over 74,000 additional beneficiaries enrolling in Medicare savings programs as a result of the 2002 SSA mailing. 
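The projection arithmetic behind this estimate can be illustrated with a short calculation. The figures below are the rounded counts reported above; the report's exact, unrounded counts yield the slightly higher 74,000 estimate.

```python
# Projecting the May cohort's additional enrollment to all 2002 letter
# recipients, using the rounded counts reported in the text.

may_cohort_size = 1_300_000    # beneficiaries mailed in the first six May 2002 mailings
additional_enrollees = 5_800   # extra May-cohort enrollees vs. the November baseline cohort

# Additional enrollment rate among letter recipients (about 0.5 percent)
rate = additional_enrollees / may_cohort_size
print(f"additional enrollment rate: {rate:.2%}")

# Scale that rate up to the full universe of 16.4 million letter recipients;
# the rounded inputs give roughly 73,000, close to the report's 74,000 figure
# from the unrounded data.
total_recipients = 16_400_000
projected = rate * total_recipients
print(f"projected additional enrollees: {projected:,.0f}")
```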
Nationwide, CMS data showed that Medicare savings programs experienced an overall net increase in enrollment of 5.9 percent (341,069 individuals) from May 2002--the start of SSA's mailing--to May 2003. This 5.9 percent increase was nearly double the 3.0 to 3.5 percent increases in the 3 previous years before SSA's nationwide mailings. (See table 4.) These data suggest that SSA's mailing helped to increase enrollment at a greater annual rate than in earlier years. Across the United States, letter recipients residing in the southern states had a 0.6 percent additional increase in enrollment following SSA's mailing. This was more than residents in the Northeast, Midwest, and West, where the additional increase in enrollment was 0.4 percent. Thirty-five states had an additional increase in enrollment following the SSA mailing compared to the increase that would likely have occurred without the letter. Of the thirty-five states, the largest additional increase in enrollment following the SSA mailing occurred in Alabama (2.9 percent), followed by Delaware (2.0 percent) and Mississippi (1.3 percent). While data from 13 other states showed an increase in enrollment following the SSA mailing, these increases were not statistically significant. Another three states showed a decrease in enrollment following the SSA mailing, but these changes also were not statistically significant. Appendix III provides the additional percentage change in enrollment following the 2002 SSA mailing for each state. Certain demographic groups also had higher additional increases in enrollment rates than the additional increase among all letter recipients. In comparison to the 0.5 percent additional increase in enrollment among all letter recipients, beneficiaries less than 65 years old and beneficiaries of any age who qualified for Medicare as a result of a disability each had a 0.8 percent additional increase in enrollment following SSA's outreach. 
Also, minority beneficiaries--who, based on SSA's data categories, include blacks or individuals of African origin, Asians and Pacific Islanders, and North American Indians or Eskimos--had a 0.7 percent additional increase in enrollment. Appendix IV provides data for all demographic groups that we examined. The percentage of additional letter recipients newly enrolling in Medicare savings programs following SSA's mailings varied significantly among the six states we reviewed. Among these six states, enrollment increases ranged from 0.3 to 2.9 percent. Further, several states we reviewed reported that calls to their telephone hot lines and applications mailed or received increased sharply during the period of the SSA outreach. In addition, some states supplemented SSA efforts with outreach efforts of their own, while other states were aware of or assisted outreach efforts by private or community groups. Among the states we reviewed, SSA's outreach had varying effects on the percentage of letter recipients enrolling. Alabama, with 2.9 percent additional letter recipients enrolled compared to the percentage that likely would have enrolled without the SSA letter, had the largest additional increase in enrollment following the SSA mailing. This contrasts with the national average of 0.5 percent. For the states we reviewed, SSA's outreach had the least impact on Medicare savings program enrollment in California, Washington, and New York, with a 0.3 percent increase in additional enrollment. (See table 5.) The varying effects on enrollment by state can be attributed to several factors, including the share of eligible beneficiaries already enrolled in Medicare savings programs prior to the outreach, a state's ability to handle increased phone calls and applications, and a state's income and asset limits. For example, a smaller share of low-income elderly beneficiaries in Alabama was enrolled in QMB as of the year prior to the SSA mailing than the national average. 
Specifically, the number of QMB enrollees in Alabama in 2001 was about half the number of Alabama seniors reported by the Census Bureau to have incomes below the limit for the QMB program. In contrast, about three-quarters of the seniors nationwide who reported income below the QMB limit were enrolled. As a result, a larger number of letter recipients in Alabama may have been able to meet the QMB and other Medicare savings program eligibility criteria, whereas other states may have already enrolled a larger share of these beneficiaries. Further, each of the states we reviewed established or used an existing state-specific telephone number that was listed in the SSA letter to receive calls. After the SSA mailing started, however, California's phone number was discontinued and calls were redirected to CMS's nationwide 1-800-Medicare number. California's lower enrollment could also result from its eligibility requirements for SSI. For example, in a prior demonstration, SSA's mailing in 1999 and 2000 resulted in lower enrollment in California than in other demonstration sites, in part because the state offered a generous state supplement to SSI; as a result, fewer Californians were potentially eligible for the Medicare savings programs. In addition, other state differences, such as different state asset eligibility requirements and application requirements as well as state efforts to support the SSA outreach, may have contributed to different effects among states. States we reviewed often reported that calls to their hot lines and applications for Medicare savings programs increased significantly during the period of the 2002 SSA mailing. Four states provided data on the monthly trends in the number of calls either related to Medicare and Medicaid in general or the Medicare savings program specifically that showed increases concurrent with the 2002 SSA mailing. 
Three states were also able to provide data on changes in the number of applications sent to interested beneficiaries or received from beneficiaries. (See table 6.) While officials in several states indicated that not all of the increases noted could be attributed directly to the SSA mailing, the data provided by the states suggest that beneficiaries' interest in Medicare savings programs increased during the mailing period. For example, Alabama experienced a 19 percent increase in monthly calls to its state hot line related to any Medicare and Medicaid issue after the SSA mailings began; this was followed by a 25 percent decrease after the mailings ended. Alabama also experienced a 158 percent surge in applications received per month during the SSA mailing and then a decrease of 57 percent afterwards. State officials reported that Washington tracked calls and applications specific to the SSA mailing, and these data showed 85 percent decreases in both monthly call volume and applications mailed out to beneficiaries after the mailings ended; Washington also reported a 72 percent monthly decrease in applications received after the 2002 mailings ended. Concurrent with SSA's mailing, each of the states we reviewed reported that the state or other stakeholders conducted additional outreach. For example, the Louisiana Department of Health and Hospitals and the Pennsylvania Health Law Project, a coalition advocating for low-income individuals and the disabled, each received 3-year grants from the Robert Wood Johnson Foundation in 2002 to conduct outreach to low-income Medicare beneficiaries in these states. A state official also reported that in 2002 the New York Department of Health developed and distributed 100,000 copies of a brochure called "How To Protect Your Health and Money," which included information about the Medicare savings programs, and conducted a "Senior Day" at 16 sites in New York City and several other districts as well as presentations at local fairs. 
Other states reported coordinating with community or state organizations as well as private health plans participating in Medicare, such as health maintenance organizations participating in the Medicare + Choice program. Some private health plans conducted outreach to increase Medicare savings program enrollment since CMS pays these plans a higher rate for these enrollees. Several state officials also said that their states work with other groups, such as the local departments of aging or senior services and local businesses and community organizations, to assist with outreach efforts to potentially eligible beneficiaries. None of the states we reviewed reported having assessed the effectiveness of their outreach efforts. Of the six states we reviewed, only Louisiana and Pennsylvania officials reported that they used the data file listing names and addresses of potentially eligible beneficiaries provided by SSA in 2002 to assist with state outreach or enrollment efforts. For example, after receiving the SSA data file, seven parishes in Louisiana used it to obtain a list of potentially eligible beneficiaries and sent an application with a letter and return envelope to these beneficiaries. In 2003, about 20,450 applications were mailed to potential beneficiaries. Pennsylvania officials used the file to cross-check against the state's own data system to assess the number of applications authorized, rejected, or denied as a result of the SSA mailing. We provided a draft of this report to SSA, CMS, and state Medicaid agencies in Alabama, California, Louisiana, New York, Pennsylvania, and Washington. In written comments, SSA generally concurred with our findings and provided technical comments that we incorporated as appropriate. SSA also noted that improvements in state enrollment processes could further increase enrollment. SSA's comments are reprinted in appendix V. In a written response, CMS stated it did not have any specific comments on the report. 
However, CMS provided technical comments that we incorporated as appropriate. While we did not examine the effects of SSA's 2003 mailing, Louisiana Medicaid officials indicated that, in comparison to the 2002 SSA mailing, there was little increase in call volume following SSA's 2003 mailing, and that they believe that this was because a state-specific telephone number was not included in the 2003 outreach letter. New York Medicaid officials stated that they found an increase in Medicare savings program enrollment of over 6 percent from December 2002 to December 2003. However, in addition to being a different timeframe from what we examined, we do not believe that all of this increase can be attributed to the SSA mailing. Based on our analysis of SSA's MBR data, we report a 0.3 percent increase in enrollment in New York specifically attributable to the 2002 SSA outreach mailing. We found the net increase in enrollment from May 2002 to May 2003 (following SSA's 2002 mailing) to be 5.9 percent nationwide, similar to the net increase in enrollment that New York reported from December 2002 to December 2003. Louisiana and Pennsylvania Medicaid officials also provided technical comments that we incorporated as appropriate. Alabama, California, and Washington Medicaid officials reviewed the draft and stated that the report accurately reflected information relevant to their respective states. We are sending copies of this report to the Commissioner of SSA, the Administrator of CMS, and other interested parties. We will also provide copies to others on request. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. Please call me at (202) 512-7118 or John Dicken at (202) 512-7043 if you have any additional questions. N. Rotimi Adebonojo and Rashmi Agarwal were major contributors to this report. 
To determine what outreach the Social Security Administration (SSA) conducted in response to the statutory requirement, we obtained and reviewed copies of SSA documents, including sample 2002 and 2003 outreach letters and data on the number of letters sent to eligible Medicare beneficiaries in each state, as well as reports prepared by the Centers for Medicare & Medicaid Services (CMS) related to the Medicare savings program. In addition, we interviewed officials from the SSA and CMS. To determine how enrollment changed following SSA's outreach, we analyzed records from SSA's Master Beneficiary Record (MBR)--a database that contains the administrative records of Social Security beneficiaries, including payments for Medicare premiums--and CMS's national enrollment data for the Medicare savings programs. The MBR data contain demographic information as well as information on the monthly deductions made from beneficiaries' Social Security checks to cover Medicare part B premiums. We obtained MBR data on beneficiaries who were sent the outreach letters in the first six mailings in May and the last six mailings through November 2002, representing 2.6 million of the 16.4 million Social Security beneficiaries who were sent letters from SSA. To determine which letter recipients enrolled in the Medicare savings programs following SSA's 2002 mailing, we identified letter recipients who met the following criteria: those whose date of eligibility for Medicare savings programs began January 2002 or afterwards; those for whom a third-party payer, specifically a state, made payments on their behalf to cover Medicare part B premiums; and those who no longer had the premium deduction made from their Social Security checks to cover Medicare part B premiums at any point from June 2002 through December 2002. 
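The three screening criteria above amount to a record-level filter. A minimal sketch in Python follows; the field names and record layout are hypothetical stand-ins for the corresponding MBR data elements, not SSA's actual schema.

```python
from datetime import date

def enrolled_after_mailing(record: dict) -> bool:
    """Apply the three screening criteria to a hypothetical MBR-style record."""
    # Criterion 1: Medicare savings program eligibility began January 2002 or later
    eligible_recently = record["msp_eligibility_date"] >= date(2002, 1, 1)
    # Criterion 2: a third-party payer (a state) covers the part B premium
    state_paid_premium = record["part_b_premium_payer"] == "state"
    # Criterion 3: the part B premium deduction stopped at some point
    # from June (month 6) through December (month 12) 2002
    deduction_stopped = any(
        not record["premium_deducted"][month] for month in range(6, 13)
    )
    return eligible_recently and state_paid_premium and deduction_stopped

# Hypothetical example: eligibility began August 2002, the state pays the
# part B premium, and the deduction stops from August on.
sample = {
    "msp_eligibility_date": date(2002, 8, 1),
    "part_b_premium_payer": "state",
    "premium_deducted": {m: m < 8 for m in range(6, 13)},
}
print(enrolled_after_mailing(sample))
```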
In order to estimate the impact of the SSA outreach mailing on additional enrollment in Medicare savings programs, we analyzed monthly enrollment from June 2002 to December 2002 for two cohorts of letter recipients to identify letter recipients who enrolled in Medicare savings programs following the initiation of the SSA mailing in May 2002. Because the mailings were sent to beneficiaries in a random order, the only notable difference between the recipients in the two cohorts would be the timing of when the SSA letters were sent to them. SSA officials noted that it typically takes about 3 months until enrollment is reported on the MBR. Therefore, since the mailings began in May 2002, the first effects of the mailing would not have been apparent until after June 2002. We analyzed the MBR data provided by SSA to determine specifically what month and year a letter recipient enrolled in Medicare savings programs. Using the enrollment by the November cohort as a baseline because these individuals met the same selection criteria as those in the May cohort, we estimated the net effect of the SSA mailing by comparing the difference in cumulative monthly enrollment between the May and November cohorts in December 2002--this difference represented the additional enrollment we attributed to the SSA mailing. We made the comparison in December 2002 because after this date the enrollment of the baseline group began increasing at a rate faster than the May cohort, indicating that this was the point when the largest cumulative difference in enrollment between the two cohorts occurred before the effects of the mailing started becoming evident for the November cohort. Using the same methodology, we calculated the effect of the SSA outreach letter for certain demographic groups and for beneficiaries in each state. We also obtained and analyzed data contained in CMS's third party master file for the period May 1999 to May 2003 that tracks national Medicare savings programs enrollment. 
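The cohort comparison described above reduces to a difference in cumulative enrollment counts at the comparison month: the May (treated) cohort minus the November (baseline) cohort. A minimal sketch, using illustrative counts rather than the report's actual data:

```python
def additional_enrollment(may_cohort_cumulative, november_cohort_cumulative,
                          comparison_month="2002-12"):
    """Estimate enrollment attributable to the mailing as the gap in
    cumulative enrollment between the treated (May) cohort and the
    baseline (November) cohort at the comparison month."""
    return (may_cohort_cumulative[comparison_month]
            - november_cohort_cumulative[comparison_month])

# Illustrative monthly cumulative enrollment counts for each cohort.
may = {"2002-10": 900, "2002-11": 1200, "2002-12": 1500}
november = {"2002-10": 700, "2002-11": 800, "2002-12": 900}
extra = additional_enrollment(may, november)  # 600 additional enrollees
```

Because the cohorts were randomly assigned mailing dates, the baseline cohort's enrollment approximates what the treated cohort's enrollment would have been without the letter, so the gap at December 2002 is the mailing's estimated net effect.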
Using these data, we examined how national Medicare savings enrollment trends compared before and after the 2002 SSA mailing. To determine how additional enrollment in the programs changed in selected states following SSA's outreach and what outreach efforts these states undertook, we interviewed Medicaid officials in six states-- Alabama, California, Louisiana, New York, Pennsylvania, and Washington. We selected these states based on several factors, including states with different levels of change in overall Medicare savings programs enrollment from 2002 to 2003, geographic diversity, relatively large populations of Medicare savings programs enrollees, and availability of data on program enrollment. We also reviewed CMS's third party master file to identify how many beneficiaries in each state were enrolled in Medicare savings programs, and analyzed records from SSA's MBR to estimate the additional enrollment in each state following the SSA mailing. In addition, we obtained information from each state to the extent available on its involvement with the SSA mailing, the state's specific eligibility criteria for its Medicare savings program, outreach efforts conducted by the state to low-income Medicare beneficiaries, and state data on call and application volume before, during, and after the SSA outreach. We obtained information from SSA and CMS on their data reliability checks and any known limitations on the data they provided us. SSA and CMS perform quality controls, such as data system edits, on the MBR and the third party beneficiary master file, respectively. We concluded that their data were sufficiently reliable for our analysis. A few MBR variables have certain limitations. For example, some Medicare beneficiaries receive their Social Security payments electronically, and therefore may not keep the record of their mailing address current. 
For our analysis we used only the beneficiary's state of residence, which is less likely to change; SSA reported that, even if a beneficiary's address changes, the beneficiary often stays within the same state of residence. Finally, since it is optional for beneficiaries to identify their race, a number of Social Security recipients do not. However, sufficient numbers of individuals reported their race to allow us to analyze these data and also report missing or unknown values. SSA mailed 16.4 million letters in 2002 to potentially eligible Medicare beneficiaries notifying them about state Medicare savings programs. These letters were customized to include state-specific information, including a state contact number. These letters were sent in English or Spanish, depending on the beneficiary's preference. Figure 2 provides a sample of the outreach letter sent to a beneficiary in Texas between May and November 2002. Figure 3 shows enrollment by state of the estimated 74,000 additional beneficiaries who enrolled in Medicare savings programs following the 2002 SSA mailing. Because these estimates are based on two cohorts of about 1.3 million beneficiaries each that represent a sample of the entire population of 16.4 million beneficiaries, we calculated 95 percent confidence intervals to reflect the potential for statistical error in projecting these estimates from the sample cohorts to the entire population. The small sample size in states with smaller populations results in larger confidence intervals for the estimates for these states. The highest additional increase in enrollment was in Alabama, where an estimated 2.9 percent (with a 95 percent confidence interval of 2.6 percent to 3.3 percent) more of the beneficiaries who were sent the SSA letter enrolled than would have if the mailing had not occurred.
In three states (Montana, Utah, and Vermont) our analysis showed no additional or slightly negative enrollment following the SSA mailing, and because the confidence intervals for these and 13 other states overlap the numeric value zero, the data do not show a statistically significant change in additional enrollment in the Medicare savings programs following the 2002 SSA mailing for these states. The other 35 states showed a statistically significant increase in additional enrollment in the Medicare savings programs following the 2002 SSA mailing. On the basis of our analysis of SSA's MBR, we estimate that enrollment in Medicare savings programs was about 74,000 higher for Medicare beneficiaries following the 2002 SSA mailing than it would have been without the mailing. This represents about 0.5 percent of the 16.4 million letters sent nationwide. However, this additional enrollment following the SSA mailing varied among demographic groups. Figure 4 shows the additional enrollment in Medicare savings programs following the 2002 SSA mailing by geographic region and demographic groups, including racial categories, sex, disability status, and age categories. Because these estimates are based on two cohorts of about 1.3 million beneficiaries each that represent a sample of the entire population of 16.4 million beneficiaries, we calculated 95 percent confidence intervals to reflect the potential for statistical error in projecting these estimates from the sample cohorts to the entire population. Additional enrollment following the 2002 SSA mailing was statistically significantly higher among beneficiaries in southern states compared to other geographic regions, minorities compared to white beneficiaries, beneficiaries with disabilities compared to beneficiaries without disabilities, and beneficiaries who were younger than 65 years compared to those who were 65 years or older. 
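The 95 percent confidence intervals described above can be sketched with the standard normal approximation for a proportion. This is a simplified illustration; the report's exact variance method may differ, since its estimates also involve differencing two cohorts.

```python
import math

def proportion_ci(enrollees, letters_sent, z=1.96):
    """95 percent confidence interval for an enrollment proportion,
    using the normal approximation p +/- z * sqrt(p * (1 - p) / n)."""
    p = enrollees / letters_sent
    half_width = z * math.sqrt(p * (1 - p) / letters_sent)
    return p - half_width, p + half_width
```

For a hypothetical state where 29 of 1,000 sampled letter recipients enrolled, this yields an interval of roughly 1.9 to 3.9 percent; when such an interval overlaps zero, as in the 16 states noted above, the change is not statistically significant. Smaller samples widen the half-width, which is why states with smaller populations have larger intervals.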
Medicare and Medicaid: Implementing State Demonstrations for Dual Eligibles Has Proven Challenging. GAO/HEHS-00-94. Washington, D.C.: August 18, 2000. Low-Income Medicare Beneficiaries: Further Outreach and Administrative Simplification Could Increase Enrollment. GAO/HEHS-99-61. Washington, D.C.: April 9, 1999. Medicare and Medicaid: Meeting Needs of Dual Eligibles Raises Difficult Cost and Care Issues. GAO/T-HEHS-97-119. Washington, D.C.: April 29, 1997. Medicare and Medicaid: Many Eligible People Not Enrolled in Qualified Medicare Beneficiary Program. GAO/HEHS-94-52. Washington, D.C.: January 20, 1994.
To assist low-income beneficiaries with their share of premiums and other out-of-pocket costs associated with Medicare, Congress has created four Medicare savings programs. Historic low enrollment in these programs has been attributed to several factors, including lack of awareness about the programs, and cumbersome eligibility determination and enrollment processes through state Medicaid programs. Concerned about this low enrollment, Congress passed legislation as part of the Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (BIPA) requiring the Social Security Administration (SSA) to notify low-income Medicare beneficiaries of their potential eligibility for Medicare savings programs. The statute also required GAO to study the impact of SSA's outreach effort. GAO examined what outreach SSA undertook to increase enrollment, how enrollment changed following SSA's 2002 outreach, and how enrollment changed in selected states following SSA's outreach and what additional outreach efforts these states undertook. GAO reviewed information obtained from SSA and the Centers for Medicare & Medicaid Services (CMS), analyzed enrollment data provided by SSA and CMS, and interviewed officials in and obtained data from six selected states (Alabama, California, Louisiana, New York, Pennsylvania, and Washington). In response to a statutory requirement, SSA is carrying out an annual outreach effort to help increase enrollment in Medicare savings programs. This outreach effort consists of mailing letters to potentially eligible low-income beneficiaries nationwide as well as sharing data with states to assist with their supplemental outreach efforts. In 2002, SSA sent 16.4 million letters to low-income Medicare beneficiaries whose incomes from Social Security and certain other federal sources met the income eligibility criteria for Medicare savings programs.
The 2002 letters provided eligibility criteria for programs in the beneficiary's home state and urged beneficiaries interested in enrolling to call a state telephone number provided. In addition to sending these letters, SSA provided states with a data file containing information on the beneficiaries to whom it sent letters. In 2003, SSA sent another 4.3 million letters to potentially eligible beneficiaries, and indicated that it intends to repeat the outreach mailing annually to newly eligible beneficiaries and a portion of prior letter recipients. Following SSA's outreach efforts in 2002, GAO estimated that more than 74,000 additional eligible beneficiaries (0.5 percent of all 2002 letter recipients) enrolled in Medicare savings programs than would likely have enrolled without the letter. CMS enrollment data also showed that growth in Medicare savings programs enrollment for the year following SSA's mailing was nearly double that for each of the 3 prior years. Of the 74,000 additional enrollees, certain states and demographic groups had somewhat larger increases in enrollment than other groups. The highest additional enrollment increase was in Alabama, where 2.9 percent of letter recipients enrolled, followed by Delaware at 2.0 percent. Beneficiaries less than 65 years old, persons with disabilities, racial and ethnic minorities, and residents in southern states also had higher enrollment rates than other groups. The percentage of letter recipients newly enrolling in Medicare savings programs following SSA's 2002 mailing ranged from 0.3 to 2.9 percent among the six states GAO reviewed. The varying effects on enrollment by state could be attributable to several factors, including the share of eligible beneficiaries enrolled in Medicare savings programs prior to the outreach, each state's ability to handle increased call and application volume, and a state's income and asset limits.
Four states GAO reviewed reported increases in the numbers of calls received or applications mailed or received following the SSA mailing and then decreases after the mailing period ended. Each of the states GAO reviewed reported that the state or other stakeholders conducted additional outreach during SSA's 2002 outreach. SSA generally agreed with GAO's findings. CMS stated that it did not have specific comments on the report.
Compounded drugs may include sterile and nonsterile preparations, which, like all drug products, are made up of active and inactive ingredients. The active ingredient or ingredients in a compounded drug may be one or more FDA-approved products or may be bulk drug substances. Bulk drug substances--usually raw powders--are generally not approved by FDA for marketing in the United States. Examples of bulk drug substances that may be used to make compounded drugs include baclofen, a muscle relaxer, and gabapentin, an anticonvulsant, both of which may be compounded for use in topical pain medications. Active ingredients used to make a compounded drug--including bulk drug substances--are generally assigned national drug codes (NDC). FDA maintains a publicly available list of NDCs for FDA-approved products. NDCs for FDA-approved products and bulk drug substances are published in three national drug compendia, by First Databank, Medi-Span, and Truven Health Analytics. In addition, these compendia include drug pricing data by NDC, such as the average wholesale price (AWP) of FDA-approved products and bulk drug substances. A single FDA-approved product or bulk substance may be distributed by multiple manufacturers, in different forms or strengths, and by varying package sizes and, hence, may have multiple NDCs associated with it. The number of bulk drug substances that First Databank has added to its database--which First Databank tracks using NDCs--has increased significantly over the last 5 years, with the number of new NDCs added from 2009 through 2013 representing an increase of approximately 58 percent. (See fig. 1 for the number of NDCs for bulk drug substances that have been added to First Databank's database from 2009 through 2013.)
Under section 503A of the Federal Food, Drug, and Cosmetic Act (FDCA), a compounded drug is exempt from certain FDCA requirements, including new drug approval and certain labeling and current good manufacturing practice requirements, provided the compounded drug meets certain criteria. These criteria include that the drug is compounded by a pharmacist or physician based on a valid prescription for an identified individual patient or in limited quantities in anticipation of receiving a valid prescription based on historical prescribing patterns (known as anticipatory compounding). The Drug Quality and Security Act of 2013 amended certain FDCA provisions as they apply to the oversight of compounded drugs to clarify the applicability of section 503A nationwide and to create a category of outsourcing facilities involved in sterile drug compounding under section 503B. Outsourcing facilities that register with FDA and provide information to the agency about the products that are compounded at the facility can qualify for exemptions from the FDCA's new drug approval and certain labeling requirements. Outsourcing facilities, however, must comply with current good manufacturing practice requirements. In addition, the Drug Quality and Security Act requires FDA to develop lists of bulk drug substances that may be used for compounding and lists of drugs that present demonstrable difficulties to compound, among others. To develop these lists, FDA has issued requests for nominations of bulk drug substances that pharmacists and outsourcing facilities may use to make compounded drugs. According to FDA, inclusion of a bulk drug substance on an FDA list does not indicate that FDA has approved the drug; rather, inclusion on the list means that a pharmacist or outsourcing facility may qualify for exemptions from certain requirements of the FDCA if they compound using bulk drug substances included on the lists.
USP is a scientific nonprofit organization that sets standards for the identity, strength, quality, and purity of medicines, food ingredients, and dietary supplements. USP's current suite of General Chapters for compounding includes, among others, Chapter 797 Pharmaceutical Compounding--Sterile Preparations, which provides procedures and requirements for compounding sterile preparations; and Chapter 795 Pharmaceutical Compounding--Nonsterile Preparations, which provides guidance on applying good compounding practices in the preparation of nonsterile compounded formulations for dispensing or administration to humans or animals. Separately, version D.0 of the National Council for Prescription Drug Programs (NCPDP) standard governs pharmacy transactions; entities conducting these transactions were required to be fully compliant with version D.0 by January 1, 2012. Medicare Part A, Medicare's inpatient medical benefit, provides benefits for drugs administered in inpatient settings, such as hospitals. Medicare Part B, Medicare's outpatient medical benefit, provides limited benefits for drugs administered to patients in outpatient settings, such as physician offices. Medicare uses contractors to process and pay Part A and Part B claims. Medicare Part C--Medicare's managed care benefit, also known as Medicare Advantage--offers beneficiaries plans that provide inpatient and outpatient drug benefits (Part A and Part B, respectively) through a network of managed care organizations. In addition, some Medicare Advantage organizations offer plans with pharmacy benefits similar to those provided under Medicare Part D. Medicare Part D provides a voluntary pharmacy benefit for Medicare beneficiaries. Beneficiaries may choose Medicare Part D plans from among those offered by private Part D-only sponsors. Part D beneficiaries may obtain drugs through retail and mail-order pharmacies. States establish and administer their own Medicaid programs within broad federal guidelines.
Medicaid programs vary from state to state, but all state Medicaid programs provide inpatient and outpatient medical benefits, which include benefits for drugs administered in inpatient hospital and outpatient physician office settings. In addition, all state Medicaid programs provide a prescription drug benefit under which they pay pharmacies for drugs dispensed to Medicaid beneficiaries. States report these payments to CMS, which provides federal matching funds to states to cover a portion of these costs. States may provide Medicaid benefits using a fee-for-service or managed care delivery system. In a managed care delivery system, states typically contract with managed care organizations to provide some or all Medicaid covered services to beneficiaries. Private health plans in the commercial market provide medical benefits, which include benefits for drugs administered in inpatient hospital and outpatient physician office settings, and pharmacy benefits. Private health plans offered in the commercial market include individual and group market plans. Participants in the individual market purchase health insurance directly from an insurer, through a broker, or through a state health insurance exchange. Group market participants generally obtain health insurance through a group health plan, usually offered by an employer. These plans can include fee-for-service, preferred provider organization, and health maintenance organization options. Medicare, Medicaid, and private health insurer payment practices for compounded drugs dispensed in pharmacy settings allow for the payment of FDA-approved products but vary in whether they allow payment for bulk drug substances in these compounds.
As a result of version D.0 of NCPDP's standard for pharmacy transactions, officials from the states, insurers, and Part D-only sponsors we spoke with told us that claims for compounded drugs dispensed in pharmacy settings contain sufficient information to identify when a compounded drug is dispensed and the ingredients used to make the drug by NDC. Therefore, these public programs and private health insurers are able to use NDC information from national drug compendia to determine whether the ingredients in the compounded drug are FDA-approved products or bulk drug substances. Officials from CMS, the five state Medicaid programs, four of the five insurers, and the two Part D-only sponsors provided us with information on their payment practices for compounded drugs, including those made with bulk drug substances. Of the five insurers we spoke with, one insurer owns and operates its pharmacies. Officials from this insurer told us that the insurer purchases drugs and drug ingredients, including some bulk drug substances used to make compounded drugs; therefore, the insurer's payment practices in pharmacy settings differ from the other four insurers across the insurer's Medicare Part D, Medicaid, and private health plans. Under Medicare Part D, federal payments are not available for non-FDA-approved products--including bulk drug substances--and inactive ingredients used to make a compounded drug. However, insurers that offer Medicare Part D benefits and Part D-only sponsors may choose to pay for bulk substances but may not submit these payments as part of the Part D transaction data CMS uses to determine federal payments to Part D plans.
Officials from two insurers offering Medicare Advantage plans that include Part D drug benefits and one Part D-only sponsor we spoke with told us that they generally pay pharmacies for each ingredient in the compounded drug that is an FDA-approved product and is otherwise eligible for payment under Part D and thus do not pay for bulk drug substances. Officials from the remaining two insurers and one Part D-only sponsor we spoke with told us that they pay pharmacies for bulk drug substances but do not include these payments as part of the Part D transaction data they submit to CMS. However, in July 2014, the Part D-only sponsor that currently pays pharmacies for bulk drug substances announced its plans to discontinue payments for most of these substances by March 2015. This decision to cease paying for bulk drug substances was a result of the sponsor's internal analyses showing that pharmacies have been increasing their billed amounts for the ingredients not covered by Part D, including bulk drug substances, used to make compounded drugs; as a result, the sponsor's costs for these ingredients began to exceed its costs for the ingredients covered by Part D in compounded drug claims in early 2014. Under Medicaid, CMS provides federal matching dollars to states that opt to pay for compounded drugs under the prescription drug benefit, including those that contain bulk drug substances, and has issued a notice to the states informing them of this policy. Officials from four of the five state Medicaid programs and two insurers that offer Medicaid managed care plans we spoke with told us that they generally do not pay for bulk drug substances used to make compounded drugs under the prescription drug benefit. 
The fifth state Medicaid program and the remaining two insurers pay for only those bulk drug substances that are listed on their formulary. Officials from the fifth state Medicaid program told us that pharmacies may request to add a bulk drug substance to the state formulary, and the state will evaluate the request and the need to do so. However, these officials also told us that the state has received no such requests in at least the last 4 to 6 months. For private health plans offered in the commercial market, officials from three insurers we spoke with told us that they generally do not pay for bulk drug substances used to make compounded drugs and pay only for those ingredients in the compound that are FDA-approved products under their prescription drug benefit. Officials from one of these three insurers told us that the insurer requires prior authorization for all compounded drug prescription claims; officials from the other two insurers told us that they require beneficiaries to obtain prior authorization only for compounded drug claims over a certain dollar amount, regardless of whether the drug's ingredients are FDA-approved products or bulk drug substances. The fourth insurer pays for bulk drug substances as well as FDA-approved products, provided that the bulk drug substance is not listed as the primary ingredient on the claim. Once states, insurers, and sponsors determine which ingredients they will pay for in compounded drugs dispensed in pharmacy settings, they typically calculate the amount of the payment based on common drug pricing benchmarks. These pricing benchmarks apply to both FDA-approved products and bulk drug substances used to make compounded drugs. Officials from the states, insurers, and Part D-only sponsors we spoke with told us that they generally calculate payments to pharmacies based on a negotiated price for each ingredient, such as AWP, wholesale acquisition cost, or maximum allowable cost.
Some states, insurers, and one Part D-only sponsor calculate the price of each ingredient according to the pricing benchmarks and then pay pharmacies the lesser of the total calculated price for all included ingredients, the price submitted by the pharmacy, the usual and customary charge, or other payment calculations. Medicare, Medicaid, and private health insurers generally have similar payment practices for compounded drugs administered in outpatient settings, which are affected by the lack of specific billing codes for these drugs on claims. As a result, most of these public programs and private health insurers pay for compounded drugs, including both the FDA-approved products and bulk drug substances that comprise these drugs, because they may be unable to identify whether compounded drugs were administered and what individual ingredients were used to make the compounded drugs. For drugs administered in outpatient settings, public programs and private health insurers generally rely on specific codes for individual drugs in the Healthcare Common Procedure Coding System (HCPCS)--a standardized coding system used by public programs and private health insurers to help ensure medical claims are processed in a consistent manner--to indicate whether a beneficiary received a prescription drug, including a compounded drug, on an insurance claim. However, for the majority of compounded drugs administered in outpatient settings, no specific HCPCS codes exist; rather, providers typically bill for compounded drugs administered in outpatient settings using HCPCS codes for "not otherwise classified" drugs. Nonspecific HCPCS codes may also be used to bill for noncompounded drugs that lack specific HCPCS codes. Public programs and private health insurers may conduct further reviews of outpatient claims to determine whether the drug billed under a nonspecific HCPCS code is a compounded drug and to identify its ingredients in order to make payment decisions.
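The per-ingredient pricing and "lesser of" rule described at the start of this discussion can be sketched as follows. The prices are hypothetical; actual negotiated benchmarks (AWP, wholesale acquisition cost, maximum allowable cost) and payer-specific rules vary.

```python
def pharmacy_payment(ingredient_prices, pharmacy_submitted, usual_and_customary):
    """Pay the lesser of the benchmark-priced ingredient total, the price
    submitted by the pharmacy, or the usual and customary charge."""
    # Sum each covered ingredient at its negotiated benchmark price
    # (e.g., AWP or maximum allowable cost per NDC).
    benchmark_total = sum(ingredient_prices)
    return min(benchmark_total, pharmacy_submitted, usual_and_customary)
```

For a hypothetical three-ingredient compound priced at $10.00, $5.00, and $2.50 with a $20.00 submitted price and an $18.00 usual and customary charge, the benchmark total of $17.50 is the lowest of the three and becomes the payment.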
Given the difficulty in identifying these drugs on insurance claims, the insurers we spoke with generally do not have specific written policies regarding payment allowances or limitations for any FDA- or non-FDA-approved ingredients used to make compounded drugs administered in outpatient settings. In addition, while CMS has a national policy for payment of compounded drugs under Medicare Part B, the agency does not have any policies regarding federal Medicaid payments for compounded drugs administered in outpatient settings and likely provides some federal matching dollars to states to pay for compounded drugs, including those that contain bulk drug substances; states develop their own payment policies for these drugs. CMS, the five state Medicaid programs, and four of the five insurers provided us with information on whether they review outpatient claims, including requesting and reviewing additional documentation, with drugs billed under the nonspecific code. Of the five insurers we spoke with, officials from one insurer told us that because the insurer owns and operates its health care facilities and purchases drugs and drug ingredients--including some non-FDA-approved bulk drug substances used to make compounded drugs--the insurer is able to determine whether drugs administered to beneficiaries in outpatient settings are compounded drugs and what ingredients were used to make them. This insurer's payment practices for reimbursing its health care facilities differ from the other four insurers across the insurer's Medicare Advantage, Medicaid, and private health plans. Under Medicare Part B, CMS contractors manually review claims and any additional documentation, such as invoices for compounded drugs purchased by the provider. Most of the contractors do not require providers to submit NDCs for compounded drug ingredients to determine whether these ingredients are FDA-approved products or to obtain pricing information.
Officials from two of the insurers we spoke with that offer Medicare Advantage plans told us that they review all claims with compounded drugs billed under the nonspecific code and request additional information. Officials from one of these insurers told us that the insurer requires providers to submit NDCs for each ingredient to determine which ingredients are FDA-approved products and does not pay for bulk drug substances, unless the insurer determines that they are medically necessary. Officials from the other insurer told us that the insurer requires providers to submit supporting documentation, including invoices that list the name and amount of each ingredient in the compounded drug. A third insurer reviews claims and requests additional documentation only when the amount for a drug billed under the nonspecific HCPCS code on a claim exceeds a certain dollar amount but does not require NDCs to determine which ingredients are FDA-approved products. For these claims, the insurer uses NDCs primarily to calculate payments, likely for all ingredients in the compounded drug. The fourth insurer does not review claims with the nonspecific HCPCS code or collect additional information and pays for all ingredients in the compounded drug. Under Medicaid, officials from two state Medicaid programs told us that these states require providers to submit NDCs for each ingredient in compounded drugs billed under the nonspecific code and review the claims and the NDCs to determine medical necessity. However, neither state uses NDCs to determine which ingredients are FDA-approved products. Both states pay for compounded drugs, including those that contain bulk drug substances, if they determine the drugs are medically necessary. Officials from two other state Medicaid programs told us that the states require providers to submit NDCs for every HCPCS drug code and not just the nonspecific code, and providers may not submit more than one NDC with the nonspecific code.
For one of these states, officials told us that providers may not bill compounded drugs as single line items on claims; rather, providers must bill each ingredient with the nonspecific code and the ingredient's NDC. This state uses the NDCs to determine which ingredients are FDA-approved products and does not pay for bulk drug substances. Officials from the other state told us that the state assigns a short list of NDCs for FDA-approved products to the nonspecific HCPCS code and updates it annually when CMS updates the HCPCS code database. Officials told us that the state's claims processing system will automatically reject those claims with the nonspecific code that are accompanied by an NDC that is not on the state's list. Two insurers offering Medicaid managed care plans process claims for compounded drugs billed under nonspecific HCPCS codes in a similar manner as they do for compounded drugs billed in their Medicare Advantage plans. One insurer that collects NDCs for drugs billed under nonspecific HCPCS codes for claims exceeding a certain dollar amount in its Medicare Advantage plans does not do so in its Medicaid managed care plans; rather, in its Medicaid managed care plans, this insurer collects information from the provider, either on the claim or in additional information submitted by the provider, about why a compounded drug is being administered. For private health plans offered in the commercial market, the four insurers require information about compounded drugs administered in outpatient settings and review claims in a similar manner as they do for compounded drugs billed in either their Medicare Advantage or their Medicaid managed care plans. Medicare Part B, the states, and the insurers vary in how they calculate payments for compounded drugs billed under nonspecific HCPCS codes on outpatient claims depending upon whether these entities review these claims. 
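The automatic rejection described for the state that maintains an annual NDC list for the nonspecific HCPCS code amounts to a set-membership check in the claims processing system. A minimal sketch; the NDC values below are hypothetical placeholders, not real codes.

```python
# Hypothetical annual list of NDCs the state associates with the
# nonspecific HCPCS code; real NDC values and list contents differ.
STATE_NDC_LIST = {"00001-1111-01", "00002-2222-02"}

def adjudicate_nonspecific_claim(ndc, approved_ndcs=STATE_NDC_LIST):
    """Reject any claim billed under the nonspecific code whose NDC is
    not on the state's list; otherwise pass it on for payment review."""
    return "accept" if ndc in approved_ndcs else "reject"
```

In practice the state would refresh the list when CMS updates the HCPCS code database each year, so the membership check always runs against the current list.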
Medicare contractors calculate payments for compounded drugs based on the invoice price submitted by the provider, which may also include taxes and shipping fees. Officials from the state Medicaid program that requires NDCs for each ingredient and pays only for FDA-approved products told us that they calculate payment based on either the Medicare Part B rate or the pharmacy rate of reimbursement for each FDA-approved product. The state that allows for the use of the nonspecific HCPCS code with only certain NDCs calculates payment based on the wholesale acquisition cost. Four insurers that offer Medicare Advantage, Medicaid managed care, and private health plans calculate payments based on either (1) the provider-submitted price for the drug, which may include payment for non-FDA-approved bulk drug substances; (2) common drug pricing benchmarks, such as wholesale acquisition cost, for the NDC of each FDA-approved product; or (3) the state Medicaid program's fee schedule. These insurers' payment calculations may depend on whether the plan is public or private and whether the insurer reviews claims and additional information. The insurer that owns and operates its healthcare facilities pays the price set by the manufacturer for drugs and drug ingredients. In inpatient hospital settings, drugs, including compounded drugs, are generally not billed separately from the rest of the services the beneficiary received but are bundled together as part of the overall charge for the hospital stay or inpatient admission. Because these drugs are bundled, officials from CMS, all five states, and all but one of the insurers we spoke with told us that they cannot determine whether a beneficiary received a compounded drug.
Medicare Part A, Medicaid, and private health insurers generally pay a preset rate for the cost to deliver inpatient services, including any compounded drugs administered as part of the services; the use of a particular drug--including a compounded drug--would not generally change the inpatient payment rate for a given service. Medicare's Part B national payment policy for compounded drugs is unclear. The policy notes that federal law requires that drugs be reasonable and necessary in order to be covered under Medicare Part B and indicates the agency's view that, to be considered reasonable and necessary, FDA must have approved the drug for marketing. Accordingly, the policy instructs Medicare contractors and insurers that offer Medicare Advantage plans to deny payments for drugs that have not received final marketing approval by FDA. The policy also indicates that payment is available for compounded drugs; however, it does not stipulate whether payment is available for ingredients in compounded drugs that are FDA-approved products only or whether it is also available for those ingredients that are bulk drug substances that have not been approved by FDA. As noted above, most of the Part B contractors do not require providers to submit NDCs for compounded drug ingredients to determine whether these ingredients are FDA-approved products or bulk drug substances and, therefore, may be paying for ingredients that are not FDA-approved. Because Medicare Part B policy for compounded drugs is unclear, it is uncertain whether payment for such ingredients is consistent with that policy. In addition to having unclear policies, CMS does not know how much it has paid for compounded drugs under Part B, the number of compounded drug claims it paid, or whether compounded drugs paid for under Part B were made using bulk drug substances. Having access to such information may help ensure that payment for such drugs is consistent with CMS policy.
CMS lacks this information because the agency does not collect any information that the contractors responsible for processing Medicare Part B claims obtain during their review of claims with the nonspecific HCPCS code, including amounts paid to providers for compounded drugs based on the invoice price, which CMS officials attributed to limitations in claims processing systems. In April 2014, HHS OIG reported on payments for compounded drugs in Medicare Part B and found that neither CMS nor its contractors track compounded drug claims and confirmed what CMS officials told us about neither the agency nor the contractors being able to determine the total number of these claims, or CMS's payments, for compounded drugs. HHS OIG recommended that CMS establish a method specifically to identify compounded drugs on those Part B claims that contain the nonspecific HCPCS code in order to track compounded drug claims, as these claims undergo manual review because of the code and not because they are for compounded drugs. HHS OIG also found that, while most Medicare contractors require providers to list the individual ingredients that made up a compounded drug billed under the nonspecific HCPCS code in a text field on a claim, they do not require NDCs for these ingredients. NDCs could be used to (a) identify whether an ingredient is an FDA-approved product or a bulk drug substance and (b) help determine ingredient price for the purposes of calculating payment. In August 2014, CMS officials told us that the agency was working to implement HHS OIG's recommendation regarding a compounded drug indicator for Part B claims. Without specific information indicating whether a beneficiary received a compounded drug in an outpatient setting or what ingredients made up the compounded drug, CMS may be paying for such drugs in a manner that is inconsistent with its policy.
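The two NDC uses that HHS OIG identified can be sketched as follows. This is a hypothetical illustration only; the NDC sets, benchmark prices, and function names are assumptions, not data from the report or from any payer's system:

```python
# Hypothetical sketch of the two NDC uses HHS OIG identified: (a) classifying
# each ingredient as an FDA-approved product or a bulk drug substance, and
# (b) pricing ingredients from a benchmark such as wholesale acquisition cost
# (WAC). All NDCs and prices below are illustrative assumptions.

FDA_APPROVED_NDCS = {"00000-1111-22", "00000-3333-44"}          # assumed lookup
WAC_BY_NDC = {"00000-1111-22": 12.50, "00000-3333-44": 8.00}    # assumed WACs

def classify(ndc: str) -> str:
    """(a) Label an ingredient by FDA-approval status."""
    return "fda_approved" if ndc in FDA_APPROVED_NDCS else "bulk_substance"

def price_approved_ingredients(ingredients: list) -> float:
    """(b) Sum benchmark prices for the FDA-approved ingredients only.

    `ingredients` is a list of (ndc, quantity) pairs.
    """
    return sum(WAC_BY_NDC[ndc] * qty
               for ndc, qty in ingredients
               if classify(ndc) == "fda_approved")
```

Under this sketch, a compounded drug containing one FDA-approved ingredient and one bulk drug substance would generate payment only for the approved ingredient, which is the distinction the Part B policy does not currently make explicit.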
Officials from the public programs and private health insurers we spoke with generally agreed that payment practices for compounded drugs may affect the use of these drugs. Officials from CMS, one state Medicaid program, three of the five insurers, the two Part D-only sponsors, and the three PBMs with whom we spoke stated that payment practices for compounded drugs did affect their use, specifically when public programs and private health insurers excluded payments for bulk drug substances in retail pharmacy settings. In most cases, payment exclusions for bulk drug substances resulted in a decreased use of compounded drugs in these insurers' plans, particularly for compounded drugs dispensed in pharmacy settings. For example, according to CMS, a 2012 analysis of Part D data showed that compounded drugs comprised less than one percent of all Part D claims in that year, which is likely due at least in part to Part D drug rules that exclude payment for bulk drug substances. CMS officials told us that the small number of claims for compounded drugs is a result of the law limiting Medicare Part D payment to FDA-approved drugs. The Part D sponsor that pays for bulk drug substances in compounded drugs saw its costs for these bulk drug substances increase significantly between January 2012 and March 2014 and, as a result, will cease payments for these substances by March 2015. In contrast to this sponsor, officials from two insurers that do not pay for bulk drug substances in their Medicare Advantage plans that include Part D benefits told us that payments for compounded drugs have remained generally steady, with no significant increases or decreases. The experiences of the Part D sponsor and the two insurers suggest that Part D payment practices may affect the use of compounded drugs. 
Further, officials from one of the three insurers that pay for ingredients that are FDA-approved products only and do not pay for bulk drug substances in their private health plans in the commercial market told us that these practices have resulted in a decrease in compounded drug claims and payments. For example, officials from one of the insurers told us that, in 2011, the insurer's payments for compounded drugs decreased by 205 percent in the quarter after it ceased paying for bulk drug substances. Officials from one insurer that limits payment to only FDA-approved drugs in its private health plans in the commercial market and officials from one PBM expressed concern that manufacturers of bulk drug substances and outsourcing facilities are inflating the AWP of the bulk drug substances used to make compounded drugs. Further, these officials, as well as officials from two other PBMs, told us that outsourcing facilities are actively marketing their products to physicians, who may not know what ingredients these products contain or be sure of the compounded products' clinical benefits. Officials from one PBM said several of these outsourcing facilities are pushing certain compounded drugs onto the market through partnerships they have established with physicians who own shares in these facilities. Officials from the majority of associations representing health care providers we spoke with cited factors other than payment practices that affect the use of compounded drugs--primarily individual patient need and drug shortages. Officials from CMS and some of the states and insurers we spoke with also told us that these factors affected the use of compounded drugs. 
For example, officials from CMS, 11 associations, 3 insurers, and 2 pharmaceutical standards-setting organizations told us that physicians primarily prescribe compounded drugs due to individual patient needs such as (1) the lack of a commercially available product for a patient's specific treatment needs; (2) a patient's allergy to an inactive ingredient, such as a dye or a filler, in an available FDA-approved drug; (3) a patient's need for a different delivery format for the drug, such as a patient who cannot swallow pills and needs a liquid formulation; or (4) a patient's need for custom dosage requirements, such as a pediatric patient who needs a lower dosage of a commercially available drug. In addition to individual patient need, officials from 7 of the 11 associations and 2 state Medicaid programs we spoke with also cited shortages of certain FDA-approved drugs as a significant factor contributing to the need to prescribe and use compounded drugs. Officials from one association told us that nutrition drugs that need to be administered intravenously are frequently in shortage. Patients who need these nutrition drugs sometimes require a combination of more than 20 drugs, all of which are FDA-approved. However, according to officials from this association, many of these FDA-approved nutrition drugs have been in shortage since 2010 and, therefore, clinicians have to use compounded intravenous nutrition drugs made with bulk drug substances instead. Compounded drugs account for a small but likely growing percentage of all prescription drugs dispensed in retail pharmacies, but the number of these drugs administered in outpatient settings--as well as how much public programs and private health insurers are paying for them--is unknown.
The lack of information about use and payments results from the fact that, unlike retail claims, outpatient health insurance claims, including Medicare Part B claims, may not contain information specific enough to identify whether a compounded drug was administered or what ingredients were used to make it. Medicare Part B policy for payment for compounded drugs is also unclear and instructs CMS contractors to deny payment for non-FDA-approved drugs but is silent with respect to whether payment is available for ingredients--namely, bulk drug substances--in a compounded drug that are not FDA-approved. In addition, CMS may be unable to appropriately apply Medicare payment policy because CMS's Medicare contractors do not collect information needed to determine whether each ingredient used to make a compounded drug administered in an outpatient setting is FDA-approved. As a result, CMS may have paid for compounded drugs containing bulk drug substances in outpatient settings inconsistently with its payment policy and incurred additional expenses in the process. In April 2014, HHS OIG recommended that CMS establish a method to identify Part B claims for compounded drugs, which could also help CMS to appropriately apply its payment policy. To help ensure that Medicare Part B is able to appropriately apply its payment policy for compounded drugs, we recommend that the Secretary of Health and Human Services direct the Administrator of the Centers for Medicare & Medicaid Services to clarify the Medicare Part B payment policy for compounded drugs and, as necessary, align payment practices with the policy. 
For example, CMS should consider updating the Medicare Part B payment policy to either explicitly allow or restrict payment for compounded drugs containing bulk drug substances and, as appropriate, develop a mechanism to indicate on Medicare Part B claims both whether a beneficiary received a compounded drug and the drug's individual ingredients in order to properly apply this policy and determine payment. We provided a draft of this report to HHS for review, and its comments are reprinted in appendix I. In its comments, HHS disagreed with our recommendation that CMS clarify the Medicare Part B payment policy and align payment practices with the policy as necessary. HHS also provided technical comments, which we incorporated as appropriate. In disagreeing with our recommendation to clarify the Medicare Part B payment policy, HHS stated that it did not believe that clarifying the policy to specifically address payments for bulk drug substances was necessary at this time. HHS commented that the Part B payment policy does not currently distinguish between compounded drugs that contain bulk drug substances and compounded drugs that contain FDA-approved products but, rather, recognizes differences between compounded drugs and FDA-approved manufactured drugs. According to HHS, the policy allows for payment for compounded drugs prepared in a manner that does not violate the FDCA and does not permit payment for drugs manufactured or otherwise prepared in a manner that is inconsistent with the FDCA. As we state in the report, the Part B policy indicates the agency's view that, to be eligible for Medicare coverage, FDA must have approved the drug for marketing, while at the same time indicating that payment is available for compounded drugs. We also note that neither bulk drug substances nor compounded drugs, regardless of their ingredients, are generally approved by FDA.
Based on HHS's comments, CMS does not consider the FDA-approval status of compounded drug ingredients in making Part B payment determinations and focuses on whether the drug was prepared in a manner consistent with the FDCA. We appreciate HHS's explanation of this distinction; however, we maintain that the Part B policy should be clarified to explicitly contain this exception for compounded drugs. With regard to HHS's comment that payment is not made for drugs manufactured or otherwise prepared in a manner that is inconsistent with the FDCA, including cases in which FDA has determined that a company is producing compounded drugs in violation of the FDCA--such as a company compounding drugs on a large scale that resembles manufacturing--the extent to which CMS ensures compliance with this policy is unclear. As noted by HHS OIG in its April 2014 report on Medicare Part B payments for compounded drugs, Medicare contractors generally review claims for compounded drugs to determine payment amounts and assign payments on the basis of the description of the drug given by the provider on the claim. The HHS OIG report was silent on whether the contractors also review these claims to determine who produced the compounded drug but noted that CMS contractors, whose specific policies and requirements for claims vary, do not necessarily require providers to submit this information. As we note in the report, and as the HHS OIG report confirmed, CMS does not collect information from the Medicare contractors on payments for compounded drugs, and neither CMS nor its contractors track claims or payment amounts for these drugs. Therefore, CMS does not know what compounded drugs the contractors paid for or whether the contractors were able to determine whether the company that produced the compounded drug had been found by FDA to be in violation of the FDCA for the purposes of denying payment in adherence with the Part B policy. 
In addition, it is unclear whether CMS contractors are collecting sufficient information to identify specific bulk drug substances used to make compounded drugs. This information will be necessary for the contractors to make payment determinations when FDA finalizes its lists of bulk drug substances that may or may not be used for compounding under the FDCA. In addition, HHS commented on the limitations of the Part B claims systems that would prevent CMS from collecting detailed information, such as NDCs of drug ingredients, from claims. However, HHS noted that CMS concurred with the HHS OIG recommendation to develop a modifier or other mechanism to identify claims for compounded drugs, which is consistent with our recommendation. In light of CMS's inability to obtain detailed information about compounded drug ingredients collected by its contractors, we remain concerned that the agency is unable to ensure that payments are made in accordance with the Part B policy. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. In addition to the contact named above, Rashmi Agarwal, Assistant Director; Shana R. Deitch; Sandra George; Jyoti Gupta; and Laurie Pachter made key contributions to this report. Compounded Drugs: TRICARE's Payment Practices Should Be More Consistent with Regulations. GAO-15-64. Washington, D.C.: October 2, 2014. 
Prescription Drugs: Comparison of DOD, Medicaid, and Medicare Part D Retail Reimbursement Prices. GAO-14-578. Washington, D.C.: June 30, 2014. Medicaid Prescription Drugs: CMS Should Implement Revised Federal Upper Limits and Monitor Their Relationship to Retail Pharmacy Acquisition Costs. GAO-14-68. Washington, D.C.: December 19, 2013. Drug Compounding: Clear Authority and More Reliable Data Needed to Strengthen FDA Oversight. GAO-13-702. Washington, D.C.: July 31, 2013.
Drug compounding is a process whereby a pharmacist mixes or alters ingredients to create a drug tailored to the medical needs of an individual patient. Compounded drugs make up 1 to 3 percent of the $300 billion domestic prescription drug market. Compounded drugs and some of their ingredients are not approved by FDA. Members of Congress have questioned whether federal health care programs' payment practices create incentives for providers to prescribe these drugs. GAO was asked to examine public programs' and private health insurers' payment practices for compounded drugs. GAO examined (1) Medicare's, Medicaid's, and private health insurers' payment practices for compounded drugs and (2) the extent to which these payment practices for compounded drugs affect their use. GAO reviewed the payment policies of CMS, the five largest state Medicaid programs, five of the largest insurers that offer both Medicare and Medicaid managed care plans as well as private plans, and the two largest Medicare Part D-only sponsors. GAO also interviewed officials from these entities and from provider associations. Medicare, Medicaid, and private health insurers have varying payment practices for compounded drugs, depending upon whether compounded drugs and their ingredients can be identified on health insurance claims, and Medicare's Part B payment policy for these drugs is unclear. For drugs dispensed in pharmacy settings, claims contain sufficient information for public programs and private insurers to identify compounded drugs and their ingredients. These programs and plans use claims information to determine whether compounded drug ingredients are products approved by the Food and Drug Administration (FDA) or are bulk drug substances--usually raw powders--that are generally not approved by FDA. Two of the five insurers and one of the two Medicare Part D-only sponsors we spoke with generally do not pay for these substances in their Medicare Part D plans. 
Four of the five state Medicaid programs and three of the five insurers offering private health plans we spoke with generally do not pay for ingredients that are bulk drug substances in their respective plans. For drugs administered in outpatient physician office settings, claims lack information to identify compounded drugs because there are no specific billing codes for most of these drugs. Therefore, Medicare, most state Medicaid programs, and most private health insurers pay for these compounded drugs. Some public programs and private health insurers conduct further claims reviews for compounded drugs billed under nonspecific codes, including obtaining information that can be used to determine FDA-approval status of compounded drug ingredients, and make payment decisions based on this information. Additionally, the Centers for Medicare & Medicaid Services (CMS)--the agency within the Department of Health and Human Services (HHS) responsible for administering the Medicare program--has a national payment policy for compounded drugs under Medicare Part B, but this policy is unclear. The policy generally states that drugs must be FDA-approved to be paid for under Medicare. Payment may be available for compounded drugs, but the policy does not stipulate whether payment is available for ingredients that are bulk drug substances, which are generally not FDA-approved. CMS contractors who process Part B claims do not collect information on the FDA-approval status of drug ingredients and, therefore, may be paying for ingredients that are not FDA-approved products. Thus, it is uncertain whether Medicare payments are inconsistent with Part B policy. Payment practices of public programs and private health insurers may affect the use of compounded drugs when specific payment exclusions exist, such as those for bulk drug substances; however, other factors also affect the use of compounded drugs. 
For example, insurers that restrict payment for compounded drugs dispensed in pharmacy settings in their private health plans to only ingredients that are FDA-approved products saw significant decreases in both the number of claims and the amount of payments for these drugs after they implemented these restrictions. Individual patient need, such as the need for custom dosages, and drug shortages also affect the use of compounded drugs. GAO recommends that CMS clarify its Medicare Part B payment policy to either allow or restrict payment for compounded drugs containing bulk drug substances and align payment practices with this policy. HHS disagreed with this recommendation, stating that the Part B payment policy does not depend on drug ingredients. GAO maintains that the policy needs clarification.
The electricity industry, as shown in figure 1, is composed of four distinct functions: generation, transmission, distribution, and system operations. Once electricity is generated--whether by burning fossil fuels; through nuclear fission; or by harnessing wind, solar, geothermal, or hydro energy--it is generally sent through high-voltage, high-capacity transmission lines to local electricity distributors. Once there, electricity is transformed into a lower voltage and sent through local distribution lines for consumption by industrial plants, businesses, and residential consumers. Because electric energy is generated and consumed almost instantaneously, the operation of an electric power system requires that a system operator constantly balance the generation and consumption of power. Utilities own and operate electricity assets, which may include generation plants, transmission lines, distribution lines, and substations--structures often seen in residential and commercial areas that contain technical equipment such as switches and transformers to ensure smooth, safe flow of current and regulate voltage. Utilities may be owned by investors, municipalities, and individuals (as in cooperative utilities). System operators--sometimes affiliated with a particular utility or sometimes independent and responsible for multiple utility areas--manage the electricity flows. These system operators manage and control the generation, transmission, and distribution of electric power using control systems--IT- and network-based systems that monitor and control sensitive processes and physical functions, including opening and closing circuit breakers. As we have previously reported, the effective functioning of the electricity industry is highly dependent on these control systems. 
However, for many years, aspects of the electricity network lacked (1) adequate technologies--such as sensors--to allow system operators to monitor how much electricity was flowing on distribution lines, (2) communications networks to further integrate parts of the electricity grid with control centers, and (3) computerized control devices to automate system management and recovery. As the electricity industry has matured and technology has advanced, utilities have begun taking steps to update the electricity grid--the transmission and distribution systems--by integrating new technologies and additional IT systems and networks. Though utilities have regularly taken such steps in the past, industry and government stakeholders have begun to articulate a broader, more integrated vision for transforming the electricity grid into one that is more reliable and efficient; facilitates alternative forms of generation, including renewable energy; and gives consumers real-time information about fluctuating energy costs. This vision--the smart grid--would increase the use of IT systems and networks and two-way communication to automate actions that system operators formerly had to make manually. Electricity grid modernization is an ongoing process, and initiatives have commonly involved installing advanced metering infrastructure (smart meters) on homes and commercial buildings that enable two-way communication between the utility and customer. Other initiatives include adding "smart" components to provide the system operator with more detailed data on the conditions of the transmission and distribution systems and better tools to observe the overall condition of the grid (referred to as "wide-area situational awareness"). These include advanced, smart switches on the distribution system that communicate with each other to reroute electricity around a troubled line and high-resolution, time-synchronized monitors--called phasor measurement units--on the transmission system. 
The use of smart grid systems may have a number of benefits, including improved reliability from fewer and shorter outages, downward pressure on electricity rates resulting from the ability to shift peak demand, an improved ability to shift to alternative sources of energy, and an improved ability to detect and respond to potential attacks on the grid. Both the federal government and state governments have authority for overseeing the electricity industry. For example, the Federal Energy Regulatory Commission (FERC) regulates rates for wholesale electricity sales and transmission of electricity in interstate commerce. This includes approving whether to allow utilities to recover the costs of investments they make to the transmission system, such as smart grid investments. Meanwhile, local distribution and retail sales of electricity are generally subject to regulation by state public utility commissions. State and federal authorities also play key roles in overseeing the reliability of the electric grid. State regulators generally have authority to oversee the reliability of the local distribution system. The North American Electric Reliability Corporation (NERC) is the federally designated U.S. Electric Reliability Organization, and is overseen by FERC. NERC has responsibility for conducting reliability assessments and developing and enforcing mandatory standards to ensure the reliability of the bulk power system--i.e., facilities and control systems necessary for operating the transmission network and certain generation facilities needed for reliability. NERC develops reliability standards collaboratively through a deliberative process involving utilities and others in the industry, which are then sent to FERC for approval. These standards include critical infrastructure protection standards for protecting electric utility-critical and cyber-critical assets. 
FERC has responsibility for reviewing and approving the reliability standards or directing NERC to modify them. In addition, the Energy Independence and Security Act of 2007 established federal policy to support the modernization of the electricity grid and required actions by a number of federal agencies, including the National Institute of Standards and Technology (NIST), FERC, and the Department of Energy. With regard to cybersecurity, the act required NIST and FERC to take the following actions: NIST was to coordinate development of a framework that includes protocols and model standards for information management to achieve interoperability of smart grid devices and systems. As part of its efforts to accomplish this, NIST planned to identify cybersecurity standards for these systems and also identified the need to develop guidelines for organizations such as electric companies on how to securely implement smart grid systems. In January 2011, we reported that NIST had identified 11 standards involving cybersecurity that support smart grid interoperability and had issued a first version of a cybersecurity guideline. FERC was to adopt standards resulting from NIST's efforts that it deemed necessary to ensure smart grid functionality and interoperability. However, according to FERC officials, the statute did not provide specific additional authority to allow FERC to require utilities or manufacturers of smart grid technologies to follow these standards. As a result, any standards identified and developed through the NIST-led process are voluntary unless regulators use other authorities to indirectly compel utilities and manufacturers to follow them. Threats to systems supporting critical infrastructure--which includes the electricity industry and its transmission and distribution systems--are evolving and growing.
In February 2011, the Director of National Intelligence testified that, in the past year, there had been a dramatic increase in malicious cyber activity targeting U.S. computers and networks, including a more than tripling of the volume of malicious software since 2009. Different types of cyber threats from numerous sources may adversely affect computers, software, networks, organizations, entire industries, or the Internet. Cyber threats can be unintentional or intentional. Unintentional threats can be caused by software upgrades or maintenance procedures that inadvertently disrupt systems. Intentional threats include both targeted and untargeted attacks from a variety of sources, including criminal groups, hackers, disgruntled employees, foreign nations engaged in espionage and information warfare, and terrorists. Table 1 shows common sources of cyber threats. These sources of cyber threats make use of various techniques, or exploits that may adversely affect computers, software, a network, an organization's operation, an industry, or the Internet itself. Table 2 shows common types of cyber exploits. The potential impact of these threats is amplified by the connectivity between information systems, the Internet, and other infrastructures, creating opportunities for attackers to disrupt critical services, including electrical power. In addition, the increased reliance on IT systems and networks also exposes the electric grid to potential and known cybersecurity vulnerabilities. 
These vulnerabilities include an increased number of entry points and paths that can be exploited by potential adversaries and other unauthorized users; the introduction of new, unknown vulnerabilities due to an increased use of new system and network technologies; wider access to systems and networks due to increased connectivity; and an increased amount of customer information being collected and transmitted, providing incentives for adversaries to attack these systems and potentially putting private information at risk of unauthorized disclosure and use. In May 2008, we reported that the corporate network of the Tennessee Valley Authority--the nation's largest public power company, which generates and distributes power in an area of about 80,000 square miles in the southeastern United States--contained security weaknesses that could lead to the disruption of control systems networks and devices connected to that network. We made 19 recommendations to improve the implementation of information security program activities for the control systems governing the Tennessee Valley Authority's critical infrastructures and 73 recommendations to address specific weaknesses in security controls. The Tennessee Valley Authority concurred with the recommendations and has taken steps to implement them. We and others have also reported that smart grid and related systems have known cyber vulnerabilities. For example, cybersecurity experts have demonstrated that certain smart meters can be successfully attacked, possibly resulting in disruption to the electricity grid. In addition, we have reported that control systems used in industrial settings such as electricity generation have vulnerabilities that could result in serious damages and disruption if exploited.
Further, in 2007, the Department of Homeland Security, in cooperation with the Department of Energy, ran a test that demonstrated that a vulnerability commonly referred to as "Aurora" had the potential to allow unauthorized users to remotely control, misuse, and cause damage to a small commercial electric generator. Moreover, in 2008, the Central Intelligence Agency reported that malicious activities against IT systems and networks have caused disruption of electric power capabilities in multiple regions overseas, including a case that resulted in a multicity power outage. As government, private sector, and personal activities continue to move to networked operations, the threat will continue to grow. Cyber incidents continue to affect the electricity industry. For example, the Department of Homeland Security's Industrial Control Systems Cyber Emergency Response Team recently noted that the number of reported cyber incidents affecting control systems of companies in the electricity sector increased from 3 in 2009 to 25 in 2011. In addition, we and others have reported that cyber incidents can affect the operations of energy facilities, as the following examples illustrate: Smart meter attacks. In April 2012, it was reported that sometime in 2009 an electric utility asked the FBI to help it investigate widespread incidents of power thefts through its smart meter deployment. The report indicated that the miscreants hacked into the smart meters to change the power consumption recording settings using software available on the Internet. Phishing attacks directed at energy sector. The Department of Homeland Security's Industrial Control Systems Cyber Emergency Response Team reported that, in 2011, it deployed incident response teams to an electric bulk provider and an electric utility that had been victims of broader phishing attacks. The team found three malware samples and detected evidence of a sophisticated threat actor. Stuxnet. 
In July 2010, a sophisticated computer attack known as Stuxnet was discovered. It targeted control systems used to operate industrial processes in the energy, nuclear, and other critical sectors. It is designed to exploit a combination of vulnerabilities to gain access to its target and modify code to change the process. Browns Ferry power plant. In August 2006, two circulation pumps at Unit 3 of the Browns Ferry, Alabama, nuclear power plant failed, forcing the unit to be shut down manually. The failure of the pumps was traced to excessive traffic on the control system network, possibly caused by the failure of another control system device. Northeast power blackout. In August 2003, failure of the alarm processor in the control system of FirstEnergy, an Ohio-based electric utility, prevented control room operators from having adequate situational awareness of critical operational changes to the electrical grid. When several key transmission lines in northern Ohio tripped due to contact with trees, they initiated a cascading failure of 508 generating units at 265 power plants across eight states and a Canadian province. Davis-Besse power plant. The Nuclear Regulatory Commission confirmed that in January 2003, the Microsoft SQL Server worm known as Slammer infected a private computer network at the idled Davis-Besse nuclear power plant in Oak Harbor, Ohio, disabling a safety monitoring system for nearly 5 hours. In addition, the plant's process computer failed, and it took about 6 hours for it to become available again. Multiple entities have taken steps to help secure the electricity grid, including NERC, NIST, FERC, and the Departments of Homeland Security and Energy. NERC has performed several activities that are intended to secure the grid. It has developed eight critical infrastructure standards for protecting electric utility-critical and cyber-critical assets. 
The standards established requirements for the following key cybersecurity-related controls: critical cyber asset identification, security management controls, personnel and training, electronic "security perimeters," physical security of critical cyber assets, systems security management, incident reporting and response planning, and recovery plans for critical cyber assets. In December 2011, we reported that NERC's eight cybersecurity standards, along with supplementary documents, were substantially similar to NIST guidance applicable to federal agencies. NERC also has published security guidelines for companies to consider for protecting electric infrastructure systems, although such guidelines are voluntary and typically not checked for compliance. For example, NERC's June 2010 Security Guideline for the Electricity Sector: Identifying Critical Cyber Assets is intended to assist entities in identifying and developing a list of critical cyber assets as described in the mandatory standards. NERC also has enforced compliance with mandatory cybersecurity standards through its Compliance Monitoring and Enforcement Program, subject to FERC review. NERC has assessed monetary penalties for violations of its cybersecurity standards. NIST, in implementing its responsibilities under the Energy Independence and Security Act of 2007 with regard to standards to achieve interoperability of smart grid systems, planned to identify cybersecurity standards for these systems. In January 2011, we reported that it had identified 11 standards involving cybersecurity that support smart grid interoperability and had issued a first version of a cybersecurity guideline.
NIST's cybersecurity guidelines largely addressed key cybersecurity elements, such as assessment of cybersecurity risks and identification of security requirements (i.e., controls); however, its guidelines did not address an important element essential to securing smart grid systems--the risk of attacks using both cyber and physical means. NIST officials said that they intended to update the guidelines to address this and other missing elements they identified, but their plan and schedule for doing so were still in draft form. We recommended that NIST finalize its plan and schedule for incorporating missing elements, and NIST officials agreed. We are currently working with officials to determine the status of their efforts to address these recommendations. FERC also has taken several actions to help secure the electricity grid. For example, it reviewed and approved NERC's eight critical infrastructure protection standards in 2008. Since then, in its role of overseeing the development of reliability standards, the commission has directed NERC to make numerous changes to standards to improve cybersecurity protections. However, according to the FERC Chairman's February 2012 letter in response to our report on electricity grid modernization, many of the outstanding directives have not been incorporated into the latest versions of the standards. The Chairman added that the commission would continue to work with NERC to incorporate the directives. In addition, FERC has authorized NERC to enforce mandatory reliability standards for the bulk power system, while retaining its authority to enforce the same standards and assess penalties for violations. We reported in January 2011 that FERC also had begun reviewing initial smart grid standards identified as part of NIST efforts. However, in July 2011, the commission declined to adopt the initial smart grid standards identified as a part of the NIST efforts, finding that there was insufficient consensus to do so. 
The Department of Homeland Security has been designated by federal policy as the principal federal agency to lead, integrate, and coordinate the implementation of efforts to protect cyber-critical infrastructures and key resources. Under this role, the Department's National Cyber Security Division's Control Systems Security Program has issued recommended practices to reduce risks to industrial control systems within and across all critical infrastructure and key resources sectors, including the electricity subsector. For example, in April 2011, the program issued the Catalog of Control Systems Security: Recommendations for Standards Developers, which is intended to provide a detailed listing of recommended controls from several standards related to control systems. The program also manages and operates the Industrial Control Systems Cyber Emergency Response Team to respond to and analyze control-systems-related incidents, provide onsite support for incident response and forensic analysis, provide situational awareness in the form of actionable intelligence, and share and coordinate vulnerability information and threat analysis through information products and alerts. For example, it reported providing on-site assistance to six companies in the electricity subsector, including a bulk electric power provider and multiple electric utilities, during 2009-2011. The Department of Energy is the lead federal agency responsible for coordinating critical infrastructure protection efforts with the public and private stakeholders in the energy sector, including the electricity subsector. In this regard, we have reported that officials from the Department's Office of Electricity Delivery and Energy Reliability stated that the department was involved in efforts to assist the electricity sector in the development, assessment, and sharing of cybersecurity standards.
For example, the department was working with NIST to enable state power producers to use current cybersecurity guidance. In May 2012, the department released the Electricity Subsector Cybersecurity Risk Management Process. The guideline is intended to ensure that cybersecurity risks for the electric grid are addressed at the organization, mission or business process, and information system levels. We have not evaluated this guide. In our January 2011 report, we identified a number of key challenges that industry and government stakeholders faced in ensuring the cybersecurity of the systems and networks that support our nation's electricity grid. These included the following: There was a lack of a coordinated approach to monitor whether industry follows voluntary standards. As mentioned above, under the Energy Independence and Security Act of 2007, FERC is responsible for adopting cybersecurity and other standards that it deems necessary to ensure smart grid functionality and interoperability. However, FERC had not developed an approach coordinated with other regulators to monitor, at a high level, the extent to which industry will follow the voluntary smart grid standards it adopts. There had been initial efforts by regulators to share views, through, for example, a collaborative dialogue between FERC and the National Association of Regulatory Utility Commissioners, which had discussed the standards-setting process in general terms. Nevertheless, according to officials from FERC and the National Association of Regulatory Utility Commissioners, FERC and the state public utility commissions had not established a joint approach for monitoring how widely voluntary smart grid standards are followed in the electricity industry or developed strategies for addressing any gaps.
Moreover, FERC had not coordinated in such a way with groups representing public power or cooperative utilities, which are not routinely subject to FERC's or the states' regulatory jurisdiction for rate setting. We noted that without a good understanding of whether utilities and manufacturers are following smart grid standards, it would be difficult for FERC and other regulators to know whether a voluntary approach to standards setting is effective or if changes are needed. Aspects of the current regulatory environment made it difficult to ensure the cybersecurity of smart grid systems. In particular, jurisdictional issues and the difficulties associated with responding to continually evolving cyber threats were a key regulatory challenge to ensuring the cybersecurity of smart grid systems as they are deployed. Regarding jurisdiction, experts we spoke with expressed concern that there was a lack of clarity about the division of responsibility between federal and state regulators, particularly regarding cybersecurity. While jurisdictional responsibility has historically been determined by whether a technology is located on the transmission or distribution system, experts raised concerns that smart grid technology may blur these lines. For example, devices such as smart meters deployed on parts of the grid traditionally subject to state jurisdiction could, in the aggregate, have an impact on those parts of the grid that federal regulators are responsible for--namely, the reliability of the transmission system. There was also concern about the ability of regulatory bodies to respond to evolving cybersecurity threats. For example, one expert questioned the ability of government agencies to adapt to rapidly evolving threats, while another highlighted the need for regulations to be capable of responding to the evolving cybersecurity issues.
In addition, our experts expressed concern with agencies developing regulations in the future that are overly specific in their requirements, such as those specifying the use of a particular product or technology. Consequently, unless steps are taken to mitigate these challenges, regulations may not be fully effective in protecting smart grid technology from cybersecurity threats. Utilities were focusing on regulatory compliance instead of comprehensive security. The existing federal and state regulatory environment creates a culture within the utility industry of focusing on compliance with cybersecurity requirements, instead of a culture focused on achieving comprehensive and effective cybersecurity. Specifically, experts told us that utilities focus on achieving minimum regulatory requirements rather than designing a comprehensive approach to system security. In addition, one expert stated that security requirements are inherently incomplete, and having a culture that views the security problem as being solved once those requirements are met will leave an organization vulnerable to cyber attack. Consequently, without a comprehensive approach to security, utilities leave themselves open to unnecessary risk. There was a lack of security features built into smart grid systems. Security features are not consistently built into smart grid devices. For example, experts told us that certain currently available smart meters had not been designed with a strong security architecture and lacked important security features, including event logging and forensics capabilities that are needed to detect and analyze attacks. In addition, our experts stated that smart grid home area networks--used for managing the electricity usage of appliances and other devices in the home--did not have adequate security built in, thus increasing their vulnerability to attack. 
Without securely designed smart grid systems, utilities may lack the capability to detect and analyze attacks, increasing the risk that attacks will succeed and utilities will be unable to prevent them from recurring. The electricity industry did not have an effective mechanism for sharing information on cybersecurity and other issues. The electricity industry lacked an effective mechanism to disclose information about cybersecurity vulnerabilities, incidents, threats, lessons learned, and best practices in the industry. For example, our experts stated that while the electricity industry has an information sharing center, it did not fully address these information needs. In addition, President Obama's May 2009 cyberspace policy review also identified challenges related to cybersecurity information sharing within the electric and other critical infrastructure sectors and issued recommendations to address them. According to our experts, information regarding incidents such as both unsuccessful and successful attacks must be shared in a safe and secure way that avoids publicly revealing the reporting organization and penalizing entities actively engaged in corrective action. Such information sharing across the industry could provide important information regarding the level of attempted cyber attacks and their methods, which could help grid operators better defend against them. If the industry pursued this end, it could draw upon the practices and approaches of other industries when designing an industry-led approach to cybersecurity information sharing. Without quality processes for information sharing, utilities will not have the information needed to adequately protect their assets against attackers. The electricity industry did not have metrics for evaluating cybersecurity.
The electricity industry was also challenged by a lack of cybersecurity metrics, making it difficult to measure the extent to which investments in cybersecurity improve the security of smart grid systems. Experts noted that while such metrics are difficult to develop, they could help compare the effectiveness of competing solutions and determine what mix of solutions combine to make the most secure system. Furthermore, our experts said that having metrics would help utilities develop a business case for cybersecurity by helping to show the return on a particular investment. Until such metrics are developed, there is increased risk that utilities will not invest in security in a cost-effective manner, or have the information needed to make informed decisions on their cybersecurity investments. To address these challenges, we made recommendations in our January 2011 report. To improve coordination among regulators and help Congress better assess the effectiveness of the voluntary smart grid standards process, we recommended that the Chairman of FERC develop an approach to coordinate with state regulators and with groups that represent utilities subject to less FERC and state regulation to (1) periodically evaluate the extent to which utilities and manufacturers are following voluntary interoperability and cybersecurity standards and (2) develop strategies for addressing any gaps in compliance with standards that are identified as a result of this evaluation. We also recommended that FERC, working with NERC as appropriate, assess whether commission efforts should address any of the cybersecurity challenges identified in our report. FERC agreed with these recommendations but has not yet implemented them.
According to the FERC Chairman, given the continuing evolution of standards and the lack of sufficient consensus for regulatory adoption, commission staff believe that coordinated monitoring of compliance with standards would be premature at this time, and that this may change as new standards are developed and deployed in industry. We believe that it remains important for FERC to improve coordination among regulators and to work toward consensus on standards. We will continue to monitor the status of its efforts to address these recommendations.

In summary, the evolving and growing threat from cyber-based attacks highlights the importance of securing the electricity industry's systems and networks. A successful attack could result in widespread power outages, significant monetary costs, damage to property, and loss of life. The roles of NERC and FERC remain critical in approving and disseminating cybersecurity guidance and enforcing standards, as appropriate. Moreover, more needs to be done to meet challenges facing the industry in enhancing security, particularly as the generation, transmission, and distribution of electricity comes to rely more on emerging and sophisticated technology.

Chairman Bingaman, Ranking Member Murkowski, and Members of the Committee, this concludes my statement. I would be happy to answer any questions you may have at this time.

If you have any questions regarding this statement, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected] or David C. Trimble, Director, Natural Resources and Environment Team, at (202) 512-3841 or [email protected]. Other key contributors to this statement include Michael Gilmore, Anjalique Lawrence, and Jon R. Ludwigson (Assistant Directors), Paige Gilbreath, Barbarol James, Lee McCracken, and Dana Pon.

Cybersecurity: Threats Impacting the Nation. GAO-12-666T. Washington, D.C.: April 24, 2012.
Cybersecurity: Challenges in Securing the Modernized Electricity Grid. GAO-12-507T. Washington, D.C.: February 28, 2012.
Critical Infrastructure Protection: Cybersecurity Guidance Is Available, but More Can Be Done to Promote Its Use. GAO-12-92. Washington, D.C.: December 9, 2011.
High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 2011.
Electricity Grid Modernization: Progress Being Made on Cybersecurity Guidelines, but Key Challenges Remain to Be Addressed. GAO-11-117. Washington, D.C.: January 12, 2011.
Cybersecurity: Continued Attention Needed to Protect Our Nation's Critical Infrastructure. GAO-11-865T. Washington, D.C.: July 26, 2011.
Critical Infrastructure Protection: Key Private and Public Cyber Expectations Need to Be Consistently Addressed. GAO-10-628. Washington, D.C.: July 15, 2010.
Cyberspace: United States Faces Challenges in Addressing Global Cybersecurity and Governance. GAO-10-606. Washington, D.C.: July 2, 2010.
Cybersecurity: Continued Attention Is Needed to Protect Federal Information Systems from Evolving Threats. GAO-10-834T. Washington, D.C.: June 16, 2010.
Critical Infrastructure Protection: Update to National Infrastructure Protection Plan Includes Increased Emphasis on Risk Management and Resilience. GAO-10-296. Washington, D.C.: March 5, 2010.
Cybersecurity: Progress Made but Challenges Remain in Defining and Coordinating the Comprehensive National Initiative. GAO-10-338. Washington, D.C.: March 5, 2010.
Cybersecurity: Continued Efforts Are Needed to Protect Information Systems from Evolving Threats. GAO-10-230T. Washington, D.C.: November 17, 2009.
Defense Critical Infrastructure: Actions Needed to Improve the Identification and Management of Electrical Power Risks and Vulnerabilities to DOD Critical Assets. GAO-10-147. Washington, D.C.: October 23, 2009.
Critical Infrastructure Protection: Current Cyber Sector-Specific Planning Approach Needs Reassessment. GAO-09-969. Washington, D.C.: September 24, 2009.
National Cybersecurity Strategy: Key Improvements Are Needed to Strengthen the Nation's Posture. GAO-09-432T. Washington, D.C.: March 10, 2009.
Electricity Restructuring: FERC Could Take Additional Steps to Analyze Regional Transmission Organizations' Benefits and Performance. GAO-08-987. Washington, D.C.: September 22, 2008.
Information Security: TVA Needs to Address Weaknesses in Control Systems and Networks. GAO-08-526. Washington, D.C.: May 21, 2008.
Critical Infrastructure Protection: Multiple Efforts to Secure Control Systems Are Under Way, but Challenges Remain. GAO-07-1036. Washington, D.C.: September 10, 2007.
Cybercrime: Public and Private Entities Face Challenges in Addressing Cyber Threats. GAO-07-705. Washington, D.C.: June 22, 2007.
Meeting Energy Demand in the 21st Century: Many Challenges and Key Questions. GAO-05-414T. Washington, D.C.: March 16, 2005.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The electric power industry is increasingly incorporating information technology (IT) systems and networks into its existing infrastructure (e.g., electricity networks, including power lines and customer meters). This use of IT can provide many benefits, such as greater efficiency and lower costs to consumers. However, this increased reliance on IT systems and networks also exposes the grid to cybersecurity vulnerabilities, which can be exploited by attackers. Moreover, GAO has identified protecting systems supporting our nation's critical infrastructure (which includes the electricity grid) as a governmentwide high-risk area. GAO was asked to testify on the status of actions to protect the electricity grid from cyber attacks. Accordingly, this statement discusses (1) cyber threats facing cyber-reliant critical infrastructures, which include the electricity grid, and (2) actions taken and challenges remaining to secure the grid against cyber attacks. In preparing this statement, GAO relied on previously published work in this area and reviewed reports from other federal agencies, media reports, and other publicly available sources. The threats to systems supporting critical infrastructures are evolving and growing. In testimony, the Director of National Intelligence noted a dramatic increase in cyber activity targeting U.S. computers and systems, including a more than tripling of the volume of malicious software. Varying types of threats from numerous sources can adversely affect computers, software, networks, organizations, entire industries, and the Internet itself. These include both unintentional and intentional threats, and may come in the form of targeted or untargeted attacks from criminal groups, hackers, disgruntled employees, nations, or terrorists. 
The interconnectivity between information systems, the Internet, and other infrastructures can amplify the impact of these threats, potentially affecting the operations of critical infrastructures, the security of sensitive information, and the flow of commerce. Moreover, the electricity grid's reliance on IT systems and networks exposes it to potential and known cybersecurity vulnerabilities, which could be exploited by attackers. The potential impact of such attacks has been illustrated by a number of recently reported incidents and can include fraudulent activities, damage to electricity control systems, power outages, and failures in safety equipment. To address such concerns, multiple entities have taken steps to help secure the electricity grid, including the North American Electric Reliability Corporation, the National Institute of Standards and Technology (NIST), the Federal Energy Regulatory Commission, and the Departments of Homeland Security and Energy. These include, in particular, establishing mandatory and voluntary cybersecurity standards and guidance for use by entities in the electricity industry. For example, the North American Electric Reliability Corporation and the Federal Energy Regulatory Commission, which have responsibility for regulation and oversight of part of the industry, have developed and approved mandatory cybersecurity standards and additional guidance. In addition, NIST has identified cybersecurity standards that support smart grid interoperability and has issued a cybersecurity guideline. The Departments of Homeland Security and Energy have also played roles in disseminating guidance on security practices and providing other assistance. As GAO previously reported, there were a number of ongoing challenges to securing electricity systems and networks. These included:

A lack of a coordinated approach to monitor industry compliance with voluntary standards.
Aspects of the current regulatory environment that made it difficult to ensure the cybersecurity of smart grid systems.
A focus by utilities on regulatory compliance instead of comprehensive security.
A lack of security features consistently built into smart grid systems.
A lack of an effective industry mechanism for sharing information on cybersecurity and other issues.
A lack of industry metrics for evaluating cybersecurity.

In a prior report, GAO has made recommendations related to electricity grid modernization efforts, including developing an approach to monitor compliance with voluntary standards. These recommendations have not yet been implemented.
During the Cold War, the Soviet Union established several hundred research institutes that were dedicated to the research, development, and production of weapons of mass destruction. Although precise figures are not available, science center officials estimate that at the time of the Soviet Union's collapse, from 30,000 to 75,000 highly trained senior weapons scientists worked at these institutes. These figures do not include the thousands of less experienced junior scientists and technicians who also worked in these institutes. After the collapse of the Soviet Union in 1991, many of these scientists suffered significant cuts in pay and lost their government-supported work. By early 1992, the United States and other countries were concerned that senior weapons scientists struggling to support their families could be tempted to sell their expertise to terrorists or countries of concern such as Iraq, Iran, and North Korea. To address this threat, the United States, the European Union, Japan, and Russia signed an agreement in 1992 establishing the International Science and Technology Center in Moscow. A year later, the United States, Sweden, Canada, and Ukraine signed an agreement establishing the Science and Technology Center in Ukraine, located in the city of Kiev. The science centers in Russia and Ukraine began funding research projects in 1994 and 1995, respectively. In addition, the science centers have recently begun supporting the weapons scientists' long-term transition to peaceful research by helping them identify and develop the commercial potential of their research, providing some business training, and helping fund patent applications. While the science centers operate independently of each other, they are very similar in structure and procedures (see fig. 1). Each science center has a governing board that meets two or three times a year to make administrative decisions, which includes formally approving project funding. 
Each science center also has an executive director and secretariat that carries out these decisions by conducting the center's day-to-day operations and administering the funded projects. The science centers' senior management consists mostly of representatives from the United States and the other funding parties (the European Union, Japan, and Canada). However, almost all of the secretariat's staff who are responsible for project implementation and oversight are Russian and Ukrainian nationals hired by the funding parties and the host government of Russia or Ukraine. As of December 31, 2000, the United States had funded 590 projects conducted at 431 research institutes, mostly within Russia and Ukraine, but also in Armenia, Georgia, Kazakhstan, Uzbekistan, and the Kyrgyz Republic. The projects range in length from 6 months to more than 3 years and involve basic and applied research in such areas as developing anticancer drugs, devising techniques to enhance environmental cleanup, and ensuring nuclear reactor safety. The projects employ teams of senior weapons scientists, junior scientists, and technicians according to the detailed workplans included in the project agreements. They receive cash payments for their work that are sent directly from the science centers to their personal bank accounts. According to science center officials, the average daily grant payment for senior weapons scientists is $20-$22 per day, tax free, compared to an average daily wage for all workers of about $4 in Russia or about $2 in Ukraine. While most of a project's funds are spent for the scientists' and technicians' salaries, the United States also pays for other costs associated with the project, as specified in the project agreement. These costs usually include the purchasing of computer equipment and some laboratory equipment, such as chemicals and glassware. 
In addition, the United States pays for senior scientists' travel to international conferences so that they can present their work and meet with their western counterparts. Also, the institutes receive payment for overhead costs, such as electricity and heat (not to exceed 10 percent of the project's total cost). As table 1 shows, the United States has provided more funds for projects at both centers than any other source. Since 1994, $227 million has been appropriated specifically for the science center program, of which $133.9 million had been used to fund approved projects as of March 31, 2001. In addition, U.S. agencies such as the Departments of Defense, Agriculture, Energy, and Health and Human Services have used $25.4 million in funds from other appropriations to support projects through the science center program. Finally, private sector firms from the United States, the European Union, Japan, and Canada have funded projects of commercial interest to them that they helped develop with senior weapons scientists. As figures 2 and 3 show, the United States has provided about 45 percent of the funding for projects at the science center in Russia and about 72 percent of the funding for projects at the science center in Ukraine since 1994. In addition to the science center program, the Department of Energy (DOE) funds research by weapons scientists through two similar programs. As of December 2000, DOE had obligated about $110 million for the Initiatives for Proliferation Prevention program and about $16 million for the Nuclear Cities Initiative. Like the science center program, Initiatives for Proliferation Prevention pays scientists, particularly nuclear weapons scientists, directly for peaceful research in several countries of the former Soviet Union. However, the program is also designed to commercialize technologies that utilize the scientists' expertise. 
The objectives of the Nuclear Cities Initiative are to create nonmilitary job opportunities for weapons scientists in Russia's closed nuclear cities and to help Russia accelerate the downsizing of its nuclear weapons complex. Unlike the science center program, the Nuclear Cities Initiative does not pay scientists directly. One mechanism the State Department uses to meet the program's nonproliferation objectives is its leading role in selecting which projects will receive funding. The project selection process begins after the science centers send the proposals they receive from scientists to the State Department for review. An interagency process involving the Departments of State, Defense, and Energy reviews about 1,000 project proposals during the course of a year for scientific merit and potential policy and proliferation concerns. The State Department's selection is limited to those projects approved by the national government where the scientists work and, in some instances, the State Department has not been granted access to scientists at critical biological research institutes. Since 1994, the State Department has selected for funding 590 projects that employed about 9,700 senior scientists. However, the State Department does not know how much of the total population of senior scientists it has reached because estimates of the total number of scientists vary widely. The State Department's selection process begins when scientists submit project proposals through their research institutes to their government for approval and certification of the senior weapons scientists' expertise. The State Department selects from those project proposals that have been approved by the national government where the scientists work. 
Although State Department and science center officials stated that most project proposals were approved by the national governments, not all research institutes in the former Soviet Union have had scientists put forth a project proposal to one of the science centers. For example, four biological weapons institutes under the Russian Ministry of Defense have not submitted project proposals to the science center in Russia. This effectively denies the State Department access to the senior scientists at these institutes, an issue of potential concern, since Russia's intentions regarding its inherited biological weapons capability remain unclear. Project proposals approved by their government are then sent to one of the science center secretariats to be forwarded to the United States for review. The other funding parties also receive project proposals from the science centers and conduct their own, independent selection process. After project proposals arrive from the science centers, the State Department distributes them to the various participants in the interagency review process, including the Departments of Defense and Energy, and U.S. scientists from private companies and universities. As shown in figure 4, projects undergo a variety of reviews to ensure that the State Department funds projects that meet nonproliferation objectives and program intent. The State Department chairs an interagency group, including the Departments of Defense and Energy, that conducts a policy review of all project proposals. According to State Department officials, this interagency policy review group assesses whether the proposed project contains elements that contradict U.S. policy, such as work being conducted with institutes in Belarus (where there are human rights concerns) or with institutes that are working with Iranian scientists in areas of proliferation concern. The policy group also coordinates the project proposals with other U.S. 
government programs that may involve the same institute or scientists. This process relies on the reviewers' knowledge and experience with specific institutes and scientists and their expertise on policy issues. According to State Department officials, weapons scientists submit few proposals that are contrary to U.S. policy. State Department officials and science advisers from the U.S. national laboratories and other scientists also review the proposals for scientific merit to ensure that projects employ mostly senior scientists carrying out meaningful work. The science advisers forward proposals to two or three other U.S. scientists who specialize in the proposed area of work to obtain their views on the scientific implications of the work, including what they know about the scientists who submitted the proposal. Based on this review and their own experience, the advisers develop a consensus opinion on the merits of the proposed work and whether the United States should fund it. The interagency group recommends rejecting projects where less than half of the scientists are former senior weapons scientists. According to State Department officials, the Department focuses its funding efforts on projects where the majority of participants are senior scientists whose expertise represents a more significant proliferation threat than junior scientists or technicians. However, the State Department cannot independently verify the weapons experience of the senior scientists it has employed. The State Department relies on the scientists' national governments to certify that the senior weapons scientists listed as participants in a project proposal actually have sufficient expertise to pose a proliferation risk. According to State Department officials, the group also considers the commercialization potential of the proposals as part of the review process. 
According to State Department and science center officials, although commercialization is not a primary goal, their ability to promote the sustainability of the program through the commercial application of scientific research is limited by the inherent challenges of finding commercial applications for any scientific research. In addition, the political and economic situation in Russia, Ukraine, and the other countries participating in the science centers remains very uncertain and thus deters foreign investors. Every project proposal is also reviewed for potential proliferation concerns. The State Department chairs an interagency group, including representatives from the Departments of Defense and Energy and other national security agencies, that examines each proposal to ensure that the projects the United States funds have only peaceful applications. For example, according to State Department officials, a proposal to develop a rocket that could launch several satellites at once was rejected on the grounds that this same technology could also be used to launch multiple warheads. Careful examination of the proposed work is particularly critical in the biological area, where the division between offensive and defensive research is often difficult to determine. The proliferation review group also weighs the risks that financing certain projects could help sustain a weapons institute infrastructure in the former Soviet Union by keeping institutes in operation that might have curtailed their research functions for lack of funds. After proposals are reviewed for potential policy, science, and proliferation concerns, officials from the Departments of State, Defense, and Energy meet to develop the official U.S. position on which project proposals to fund. 
During final project selection, the interagency group considers the information and recommendations developed during the other reviews, supplemented by past experience with institutes and scientists, to reach consensus on each project. The group also weighs other considerations. For example, State Department and science center staff said that they try to provide funds for projects at as many institutes as possible. A project with relatively weak scientific merit might receive funding if it is at an institute of high interest to the United States due to proliferation concerns. When the group reaches consensus on which projects to fund, it passes these instructions on to the U.S. representatives on the centers' governing boards. Representatives from the funding parties on each board then jointly decide which projects will receive funding. The next step is for a member of the science center's staff to work with the project team to fine-tune the official project agreement. The staff member and the project team will revise the project's workplan and make any modifications required by the funding party. For example, in some cases the State Department has required project teams to add a U.S.-based collaborator, agree to additional oversight, or change the project's budget to allow scientists to travel to the West more frequently during the course of the project. The funding parties are not bound to make any payments related to a project until the final project agreement has their approval and has been signed by the science center's executive director. Once the project agreement has been signed, the project can begin. According to State Department officials, they cannot fund all of the project proposals that meet the State Department's selection criteria due to funding constraints. For example, in preparation for the March 2001 meeting of the governing board for the center in Russia, the Department reviewed 148 proposals and found that 92 met U.S. funding criteria. 
However, the State Department only funded the 31 proposals with the highest number of senior scientists, greatest scientific merit, and/or the involvement of institutes of particular proliferation concern. From 1994 through the end of 2000, the United States had funded 590 projects that employed about 9,700 senior scientists. Figure 5 shows the number of senior scientists who worked on one or more U.S.-funded projects during the course of each year. These figures increased steadily from 1994 through 1999 and decreased slightly during 2000. About 6,500 senior scientists worked on U.S.-funded projects during 2000. Since 1994, more than half of the total number of people employed by U.S.-funded projects have been senior scientists. Although the State Department knows how many scientists it has employed through the projects it has funded, it does not know what portion of the target population of senior weapons scientists it has reached. The estimated number of senior weapons scientists in the Soviet Union at the time of its collapse varies from 30,000 to 75,000 scientists. During the past decade, an unknown number of senior weapons scientists left their research institutes to pursue other forms of employment, retired, or died. At some of the research institutes we visited, the institute directors told us that about half of their staff left within 2 years of the collapse, although they stated most who left were junior scientists, technicians, and support staff. Given these uncertainties, the State Department can only estimate how much of the total population of senior scientists it has reached. For example, the 9,700 senior scientists employed by U.S.-funded projects to date could represent anywhere from 12 percent to 32 percent of the target population. According to the science centers, funding from all sources, including the United States, has employed about 21,000 senior scientists to date. 
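The 12-to-32-percent range cited above follows directly from dividing the 9,700 scientists employed by the two bounds of the population estimate. A minimal sketch of that arithmetic (the figures are taken from the report; the code itself is only illustrative):

```python
# Bounds on the estimated population of Soviet senior weapons scientists
# at the time of the collapse (figures from the report).
low_estimate = 30_000
high_estimate = 75_000

# Senior scientists employed on U.S.-funded projects, 1994 through 2000.
employed = 9_700

# Share of the target population reached, under each bound of the estimate.
# Truncating to whole percentages reproduces the range stated in the report.
share_if_high = int(employed / high_estimate * 100)  # if 75,000 scientists
share_if_low = int(employed / low_estimate * 100)    # if 30,000 scientists

print(f"{share_if_high} to {share_if_low} percent")  # prints "12 to 32 percent"
```

The width of this range illustrates why the State Department cannot say what portion of the target population it has reached: the denominator itself is uncertain by a factor of 2.5.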
The State Department does not directly monitor the activities or results of the work of scientists who are participating in U.S.-funded research projects. Instead, the Department relies on the mostly Russian and Ukrainian technical specialists and accountants at the science centers, overseen by managers from the United States, the European Union, Japan, and Canada, to monitor scientists' progress in completing their research. The State Department also uses Department of Defense and outside auditors to conduct reviews of a sample of U.S.-funded projects. For the 35 projects we reviewed at nine institutes in Russia and Ukraine, the science centers were following their monitoring procedure. However, several factors limit the ability of the State Department to monitor the activities of scientists working on U.S.-funded projects. The State Department first relies on the mostly Russian and Ukrainian staff at the science centers to ensure that scientists are working on the research they are paid to produce. The science center staff do not observe the scientists on a day-to-day basis but rather (1) conduct on-site technical and financial monitoring at least once during each project, (2) review financial and technical reports submitted by the scientists, and (3) have frequent contacts with project scientists and receive input from U.S. and other western scientists who collaborate on the projects. For the 35 projects we reviewed, the science centers were following this monitoring procedure. Under the terms of the science center project agreements, science center staff have access to the locations where the research is conducted and to the personnel, equipment, and documentation associated with the projects. 
At least once during the course of a project, science center technical specialists and accountants spend a day at the institute to confirm that the research is progressing according to the project agreement by, among other things, conducting confidential interviews with individual scientists to discuss their involvement in the project; verifying that the amount of time scientists claim on their timesheets matches the financial reports submitted to the science centers; and discussing and observing project accomplishments such as results of experiments, prototypes of new technology, and computer simulations and databases. For the 35 projects we examined in detail, we found that the science center staff had generally followed their on-site monitoring procedures. The science centers had reports in their project files that documented the on-site monitoring. In addition, scientists we met with at the institutes described the on-site monitoring, including the questions asked during the confidential interviews. At one institute in Ukraine, we observed the science center staff conducting confidential interviews as part of on-site monitoring. The project agreements require the research institutes to submit quarterly financial reports and quarterly, annual, and final technical reports to the science centers. Only after performing routine checks of the financial reports do the science centers deposit the payments into the scientists' individual bank accounts. The science centers also examine the technical reports to ensure that the project is achieving the technical results specified in the project agreement and determine whether the project is on schedule. For the 35 projects we selected, we verified that the science centers had received and analyzed the financial and technical reports required under the project agreements. 
In addition, scientists we spoke with at the research institutes also confirmed that they prepare and submit the reports according to the terms of the project agreements. In addition to the monitoring procedures provided under the project agreements, the science center staff have informal contact with scientists on the project team about once a week, which allows them to check on the status of projects on an ongoing basis. These frequent contacts occur when scientists purchase equipment through the science centers, make travel arrangements to participate in international conferences, or come to the science centers to use computers or submit reports in person. Each U.S.-funded project also has a U.S. or western collaborator, either a government agency or private company, that works with the scientists on the research. For example, collaborators attend international conferences with the scientists, visit the institutes to observe the project results, host visits by scientists to the United States, and sometimes conduct part of the research. The science centers seek feedback on the projects' technical progress from the collaborators, who often have a high degree of expertise in the project area. When possible, the science centers also participate in meetings between the scientists and collaborators. Scientists at the research institutes we visited confirmed that they have frequent contact with the science center staff and collaborators. The State Department annually selects a number of U.S.-funded projects to be audited by the Defense Contract Audit Agency of the Department of Defense. During 1999 and 2000, the agency conducted 84 audits on behalf of the State Department. The auditors review financial reports submitted to the science centers and visit the institutes to interview selected scientists, examine timesheet completion procedures and individual scientists' timesheets, and check the inventory of equipment purchased under the project. 
Based on these procedures, they determine, among other things, whether the scientists' time records are reliable and maintained according to the terms of the project agreement and whether the weapons scientists working on the project are the same as those identified in the workplan. Technical auditors from U.S. industry or other government agencies accompanied the Defense Contract Audit Agency on 44 of the 84 audits conducted in 1999 and 2000. The technical auditors provided the scientific expertise necessary to evaluate the scientists' technical performance and determine whether the amount of time the scientists claim they were working was commensurate with their technical performance, as documented in their scientific logbooks and research results. Because the technical auditors have the expertise to evaluate projects' technical progress, the State Department wants technical auditors to accompany the Defense Contract Audit Agency on all future audits of science center projects. The science centers also undergo an annual external audit of their financial statements and project monitoring procedures. These external audits, conducted by international accounting firms hired by the science centers, include visits to research institutes to evaluate the science centers' monitoring procedures and make recommendations regarding the ability of the science centers to monitor the amount of time that scientists spend on the science center projects. According to State Department and science center officials, the science centers take action to address deficiencies uncovered through monitoring. Science center officials stated that the problems they have uncovered through monitoring have been generally minor, for example, errors in conforming to science centers' accounting requirements. At the science center in Ukraine, officials stated that the most serious violation they had uncovered was a scientist who was charging time to a project while he was in the hospital. 
They calculated how much he had been overpaid, and he paid the money back. External audits have found deficiencies in the timekeeping practices for a number of projects. For example, one audit found that some scientists had claimed more than the maximum amount of time they are allowed per year (220 days) and recommended additional procedures to prevent such occurrences in the future. The Defense Contract Audit Agency initially found some scientists were charging the science centers the amount of time that had been budgeted in the project workplan rather than the actual amount of time they had worked. Usually, the scientists told the auditors that they had worked more than the amount of time they had claimed on their timesheets. For many projects, the technical auditors confirmed that the scientists were probably underreporting their time spent on the projects. However, the technical auditors for two projects at an institute in Russia found that some scientists could not provide sufficient evidence that they had worked on the projects for the time they had charged. The State Department temporarily ceased funding additional projects at this institute until the problem was resolved. Overall, according to the Defense Contract Audit Agency, the science centers have implemented procedures to reinforce correct timekeeping practices among project scientists, and the problems have lessened. The scope of State Department's monitoring of scientists is limited to the implementation of science center projects. Under the terms of the project agreements, the science centers and external auditors only monitor scientists while they are working on science center projects; they cannot track what the scientists are doing while they are not working on the projects or after the projects end. Furthermore, the project agreements do not prohibit the scientists from continuing to work on research for their institutes including, in Russia, research related to nuclear weapons. 
Although scientists may volunteer information about their other research activities, the State Department has no formal way to monitor what other research these scientists are performing or for whom they are performing it. This limitation is particularly relevant for scientists who work only part-time on science center projects. As shown in figure 6, during 2000 very few senior scientists worked full-time (defined by both science centers as 220 working days per calendar year). Seventy-five percent worked 4 1/2 months (100 days) or less on a science center project during 2000, and some worked just a few days during the year. In addition, the project agreements only provide the science centers and external auditors access to institutes' records related to projects funded by the science centers. The lack of access to records related to what the scientists are doing while they are not working on science center projects limits the ability of the science centers and external auditors to independently confirm the information that the scientists do provide about their activities. For example, monitoring cannot confirm whether scientists are receiving pay from other sources for the time they claim they are working on science center projects. Finally, the project agreements require that auditors and science center staff provide the institutes with 20 to 35 days advance notice before making visits to conduct on-site monitoring. According to State Department and Defense Contract Audit Agency officials, the advance notice limits the element of surprise and gives project scientists the opportunity to cover up deficiencies in their adherence to the project agreements. In written comments provided on a draft of this report, the Department of State concurred with the report's major findings. However, the Department provided additional information to clarify specific sections of the draft report. 
Specifically, the Department agreed with our finding that it relied on Russian and Ukrainian specialists to monitor the science center projects. However, the Department stated that it is confident that the specialists' monitoring efforts comply with western standards and that the majority of these individuals are former Soviet weapons scientists who are now committed to the mission and nonproliferation objectives of the science centers. The Department also agreed with our finding that there are no reliable estimates on the total population of senior weapons scientists. However, the Department stated that anecdotal evidence suggests that the United States and other funding parties have engaged about half of the population of senior weapons scientists. Finally, the Department stated that while it would be impractical for the United States to keep track of the activities of the weapons scientists when they are not working for the science centers, the Department cited examples of how it maintains contact with current and past participants to varying degrees. The Department's comments are presented in appendix I. To review the State Department's project selection process, we met with officials from the Departments of State and Defense and the Department of Energy's national laboratories who participate in the process. We also attended one meeting of the science advisers. We discussed the program's scope and limitations with officials from the Departments of State and Defense and the U.S. national laboratories, as well as with U.S. representatives on the governing boards of both science centers. We also discussed these issues with the senior management at both centers. In addition, we reviewed the science centers' agreements, statutes, and annual reports. The statistical data were compiled from reports obtained from the Chief Financial Officers at both centers. 
To examine the monitoring procedures used to check whether scientists are working on the peaceful research they are paid to produce, we first met with State Department officials to discuss what monitoring procedures were in place. We then examined each component of the monitoring process in detail, as follows:

We met with auditors from the Defense Contract Audit Agency and science advisers from the national laboratories to learn how they conduct their monitoring activities. We then reviewed the Defense Contract Audit Agency's reports on its audits of U.S.-funded science center projects conducted during 1999 and 2000.

We reviewed the reports prepared by the external auditors for both science centers and met with representatives from the firm that conducted the most recent audit of the center in Russia.

We visited the science centers in Russia and Ukraine and met with officials at all levels of these organizations including the Executive Directors, Deputy Executive Directors, Chief Financial Officers, technical specialists, and members of the financial staff to discuss how they conduct technical and financial monitoring of projects. We compared these discussions with the centers' written guidance.

We also reviewed in detail the project documentation, including financial, technical, and monitoring reports, for 35 projects that had received U.S. funds. 
To verify that the monitoring process detailed in science center documents was actually taking place, we visited the following nine institutes in Russia and Ukraine where the 35 projects had recently been completed or were currently underway:

Paton Electric Welding Institute, Kiev, Ukraine (nuclear, chemical, and missile)
Institute of Semiconductor Physics, Kiev, Ukraine (nuclear and missile)
Frantsevich Institute for Problems of Materials Science, Kiev, Ukraine (nuclear and missile)
Moscow Engineering Physics Institute, Moscow, Russia (nuclear)
All-Russia Research Institute of Automatics, Moscow, Russia (nuclear)
State Scientific Research Institute of Organic Chemistry and Technology, Moscow, Russia (chemical)
State Scientific Institute of Immunological Engineering, Lyubuchany, Russia (biological)
State Research Center for Applied Microbiology, Obolensk, Russia (biological)
Central Aerohydrodynamic Institute, Zhukovsky, Russia (aeronautics/missile)

In selecting the 35 projects, we chose institutes that collectively did work in the four areas of proliferation concern. During our visits, we met with the institutes' directors and members of each project team. In many cases, we also toured the facilities where they conducted their work. Although we only selected projects to review that had received U.S. funds, in some cases other donors had also provided financial support. We performed our work from December 2000 through April 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to interested congressional committees and the Honorable Colin Powell, Secretary of State. Copies will also be made available to others upon request. If you or your staff have any questions about this report, please contact me on (202) 512-4128. Another GAO contact and staff acknowledgments are listed in appendix II. In addition to the person named above, Joe Cook, Dave Maurer, and Valerie Nowak made key contributions to this report.
Since 1994, the United States has appropriated $227 million to support two multilateral science centers in Russia and Ukraine. The science centers pay scientists who once developed nuclear, chemical, and biological weapons and missile systems for the Soviet Union to conduct peaceful research. By employing scientists at the science centers, the United States seeks to reduce the risks that these scientists could be tempted to sell their expertise to terrorists. This report examines the (1) selection procedures the State Department uses to fund projects that meet program objectives and (2) monitoring procedures the State Department uses to verify that scientists are working on the peaceful research they are paid to produce. GAO found that State lacks complete information on the total number and locations of senior scientists and has not been granted access to senior scientists at critical research institutes under the Russian Ministry of Defense. GAO also found that State has designed an interagency review process to select and fund research proposals submitted by weapons scientists to the science centers in Russia and Ukraine. The overall goal is to select projects that reduce proliferation risks to the United States and employ as many senior scientists as possible. The science centers were following their monitoring processes and were taking steps to address audit deficiencies.
According to our Standards for Internal Control in the Federal Government, transactions and other significant events should be authorized and executed only by persons acting within the scope of their authority. Although review of transactions by persons in authority is the principal means of assuring that transactions are valid, we found that the review and approval process for purchase card purchases was inadequate in all the agencies reviewed. At the Department of Education, we found that 10 of its 14 offices did not require cardholders to obtain authorization prior to making some or all purchases, although Education's policy required that all requests to purchase items over $1,000 be made in writing to the applicable department executive officer. We also found that approving officials did not use monitoring reports that were available from Bank of America to identify unusual or unauthorized purchases. Additionally, Education's 1990 purchase card policy, which was in effect during the time of our review (May 1998 through September 2000), stated that an approving official was to ensure that all purchase card transactions were for authorized Education purchases and in accordance with departmental and other federal regulations. The approving official signified that a cardholder's purchases were appropriate by reviewing and signing monthly statements. To test the effectiveness of Education's approving officials' review, we analyzed 5 months of cardholder statements and found that 37 percent of the 903 monthly cardholder statements we reviewed were not approved by the appropriate official. The unapproved statements totaled about $1.8 million. Further, we found that Education employees purchased computers using their purchase cards, which was a violation of Education's policy prohibiting the use of purchase cards for this purpose. 
As I will discuss later, several of the computers that were purchased with purchase cards were not entered in property records, and we could not locate them. If approving officials had been conducting a proper review of monthly statements, the computer purchases could have been identified and the practice halted, perhaps eliminating this computer accountability problem. Education implemented a new approval process during our review. We assessed this new process and found that while approving officials were generally reviewing cardholder statements, those officials were not ensuring that adequate supporting documentation existed for all purchases. Weaknesses in the approval process also existed at the two Navy units we reviewed. During our initial review, approving officials in these two units told us that they did not review support for transactions before certifying monthly statements for payment because (1) they did not have time and (2) Navy policy did not specifically require that approving officials review support. At one of the Navy units, one approving official was responsible for certifying summary billing statements covering an average of over 700 monthly statements for 1,153 cardholders. Further, Navy's policy allows the approving official to presume that all transactions are proper unless notified to the contrary by the cardholder. The policy appears to improperly assign certifying officer accountability to cardholders and is inconsistent with Department of Defense regulations, which state that certifying officers are responsible for assuring that payments are proper. During our follow-up review, we found that throughout fiscal year 2001, approving officials in the two units still did not properly review and certify the monthly purchase card statements for payment. 
Although the Department of Defense Purchase Card Program Management Office issued new guidance in July 2001 that would reduce the number of cardholders for which each approving official was responsible, neither of the two units met the suggested ratio of five to seven cardholders to one approving official until well after the start of fiscal year 2002. Further, the Department of Defense agreed with our recommendation that Navy revise its policy to assure that approving officials review the monthly statements and the supporting documentation prior to certifying the statements for payment. However, for the last quarter of fiscal year 2001, one of the Navy units continued to inappropriately certify purchase card statements for payment. The other unit issued local guidance that partially implements our recommendation. IGs at the Departments of Agriculture, the Interior, and Transportation also identified weaknesses in the review and approval processes at these agencies. For example, Agriculture's IG reported that the department has not effectively implemented an oversight tool in its Purchase Card Management System (PCMS), the system that processes purchase card transactions. This tool is an alert system that monitors the database for pre-established conditions that may indicate potential abuse by cardholders. Responsible officials are to periodically access their alert messages and review the details for questionable transactions. These reviewing officials should contact cardholders, if necessary, so that cardholders can verify any discrepancies or provide any additional information in order to resolve individual alert messages. In order to close out alert messages, reviewers must change the message status to "read" and explain any necessary details to resolve the alerts. 
According to Agriculture's IG, only about 29,600 out of 50,500 alerts in the database during fiscal years 1999 and 2000 had been read as of January 9, 2001, and only about 6,100 of the alerts that were read contained responses. The inconsistent use of this oversight tool means that Agriculture management has reduced assurance that errors and abuse are promptly detected and that cardholders are complying with purchase card and procurement regulations. Interior's IG reported that it reviewed the work of 53 reviewing officials and found that 42 of them performed inadequate reviews. The IG defined an adequate review as one in which the reviewing official, on a monthly basis, reconciled invoices and receipts to the purchase card statements to ensure that all transactions were legitimate and necessary. The IG found that several reviewing officials signed off on monthly statements indicating completed reviews where supporting documentation was not available. Another common internal control weakness we identified was lack of or inadequate training related to the use of purchase cards. Our Standards for Internal Control in the Federal Government emphasize that effective management of an organization's workforce--its human capital--is essential to achieving results and is an important part of internal control. Training is key to ensuring that the workforce has the skills necessary to achieve organizational goals. Lack of or inadequate training contributed to the weak control environments at several agencies. Navy's policies required that all cardholders and approving officials must receive initial purchase card training and refresher training every 2 years. We determined that the two Navy units lacked documentation to demonstrate that all cardholders and approving officials had received the required training. 
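The scale of the monitoring gap at Agriculture can be quantified directly from the IG's figures. The sketch below is illustrative only; the numbers come from the report, while the variable names and output format are ours:

```python
# Figures reported by Agriculture's IG for fiscal years 1999-2000
total_alerts = 50_500
alerts_read = 29_600
alerts_with_responses = 6_100

read_rate = round(alerts_read / total_alerts * 100, 1)          # share of alerts reviewed at all
response_rate = round(alerts_with_responses / alerts_read * 100, 1)  # share of read alerts resolved
unread = total_alerts - alerts_read

print(f"{read_rate}% of alerts read; {response_rate}% of read alerts "
      f"received responses; {unread:,} alerts never reviewed")
```

By this measure, roughly four in ten alerts generated by the oversight tool were never reviewed at all, and only about one in five of those that were read received a documented response.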
We tested $68 million of fiscal year 2000 purchase card transactions at the two Navy units and estimated that at least $17.7 million of transactions were made by cardholders for whom there was no documented evidence they had received either the required initial training or refresher training on purchase card policies and procedures. Although we found during our follow-up work that the two Navy units had taken steps to ensure cardholders receive training and to document the training, many cardholders at one of the units still had not completed the initial training or the required refresher training. Similarly, at Education, we found that although the policy required each cardholder and approving official to receive training on their respective responsibilities, several cardholders and at least one approving official were not trained. Interior's IG also reported a lack of training related to the purchase card program. Specifically, the IG reported that although Interior provided training to individual cardholders, it did not design or provide training to reviewing officials. According to the IG, several reviewing officials said that they did not know how to conduct a review of purchase card transactions, nor did they understand how and why to review supporting documentation. As previously mentioned, the IG found that many reviewing officials were not performing adequate reviews. Our Standards for Internal Control in the Federal Government state that internal control should generally be designed to assure that ongoing monitoring occurs in the course of normal operations. Internal control monitoring should assess the quality of performance over time and ensure that findings of audits and other reviews are promptly resolved. Program and operational managers should monitor the effectiveness of control activities as part of their regular duties. 
At the two Navy units we reviewed, we found that management had not established an effective monitoring and internal audit function for the purchase card program. The policies and procedures did not require that the results of internal reviews be documented or that corrective actions be monitored to help ensure they are effectively implemented. The NAVSUP Instruction calls for semiannual reviews of purchase card programs, including adherence to internal operating procedures, applicable training requirements, micro-purchase procedures, receipt and acceptance procedures, and statement certification and prompt payment procedures. These reviews are to serve as a basis for initiating appropriate action to improve the program and correct problem areas. Our analysis of fiscal year 2000 agency program coordinator reviews at one of the Navy units showed that the reviews identified problems with about 42 percent of the monthly cardholder statements that were reviewed. The problems identified were consistent with the control weaknesses we found. Unit management considered the findings but directed that corrective actions not be implemented because of complaints about the administrative burden associated with the procedural changes that would be needed to address the review findings. These reviews generally resulted in the reviewer counseling the cardholders or in some instances, recommending that cardholders attend purchase card training. As a result, the agency program coordinator had not used the reviews to make systematic improvements in the program. During our follow-up work, we noted that this unit had recently made some efforts to implement new policies directed at improving internal review and oversight activities. However, these efforts are not yet complete. At the time of our review, Education did not have a monitoring system in place for purchase card activity. 
However, in December 2001, the department issued new policies and procedures that, among other things, establish a quarterly quality review of a sample of purchase card transactions to ensure compliance with key aspects of the department's policy. Transportation's IG reported that the Federal Aviation Administration (FAA) had not performed required internal follow-up reviews on purchase card usage since 1998. A follow-up review is to consist of an independent official (other than the approving official) reviewing a sample of purchase card transactions to determine whether purchases were authorized and that cardholders and approving officials followed policies and procedures. The types of weaknesses that I have just described create an environment where improper purchases could be made with little risk of detection. I will now provide a few examples of how employees used their purchase cards to make fraudulent, improper, abusive, and questionable purchases. We also found that property purchased with the purchase cards was not always recorded in agencies' property records, which could have contributed to missing or stolen property. In a number of cases, the significant control weaknesses that we and the IGs identified resulted in or contributed to fraudulent, improper, abusive, and questionable purchases. We considered fraudulent purchases to be those that were unauthorized and intended for personal use. Improper purchases included those for government use that were not, or did not appear to be, for a purpose permitted by law or regulation. We defined abusive or questionable transactions as those that, while authorized, were for items purchased at an excessive cost, for a questionable government need, or both. Questionable purchases also include those for which there was insufficient documentation to determine whether they were valid. 
For example, at Education, we found an instance in which a cardholder made several fraudulent purchases from two Internet sites for pornographic services. The name of one of the sites--Slave Labor Productions.com--should have caused suspicion when it appeared on the employee's monthly statement. We obtained the statements containing the charges and noted that they contained handwritten notes next to the pornography charges indicating that these were charges for transparencies and other nondescript items. According to the approving official, he was not aware of the cardholder's day-to-day responsibilities, and therefore, could not properly review the statements. The approving official stated that the primary focus of his review was to ensure there was enough money available in that particular appropriation to pay the bill. As a result of investigations related to these pornography purchases, Education management issued a termination letter, prompting the employee to resign. We also identified questionable charges by an Education employee totaling $35,760 over several years for herself and a coworker to attend college. Some of the classes the employees took were apparently prerequisites to obtain a liberal arts degree, but were unrelated to Education's mission. The classes included biology, music, and theology, and represented $11,700 of the $35,760; these charges were improper. The Government Employees Training Act, 5 U.S.C. 4103 and 4107, requires that training be related to an employee's job and prohibits expenditures to obtain a college degree unless necessitated by retention or recruitment needs, which was not the case here. We also identified as questionable more than $152,000 in purchases for which Education could not provide any support and did not know specifically what was purchased, why it was purchased, or whether these purchases were appropriate. 
The breakdown of controls at the two Navy units we reviewed made it difficult to detect and prevent fraudulent purchases made by cardholders. We identified over $11,000 of fraudulent purchases including gifts, gift certificates, and clothing from Macy's West, Nordstrom, Mervins, Lees Men's Wear, and Footlocker, and a computer and related equipment from Circuit City. During our follow-up work, we also identified a number of improper, questionable, and abusive purchases at the Navy units, including food for employees costing $8,500; rentals of luxury cars costing $7,028; designer and high-cost leather briefcases, totes, portfolios, day planners, palm pilot cases, wallets, and purses from Louis Vuitton and Franklin Covey costing $33,054; and questionable contractor payments totaling $164,143. The designer and high-cost leather goods from Franklin Covey included leather purses costing up to $195 each and portfolios costing up to $135 each. Many of these purchases were of a questionable government need and should have been paid for by the individual. To the extent the day planners and calendar refills were proper government purchases, they were at an excessive cost and should have been purchased from certified nonprofit agencies under a program that is intended to provide employment opportunities for thousands of people with disabilities. Circumventing the requirements to buy from these nonprofit agencies and purchasing these items from commercial vendors is not only an abuse and waste of taxpayer dollars, but shows particularly poor judgment and serious internal control weaknesses. The contractor payments in question were 75 purchase card transactions with a telecommunications contractor that appeared to be advance payments for electrical engineering services. Paying for goods and services before the government has received them (with limited exceptions) is prohibited by law and Navy purchase card procedures. 
Navy employees told us the purchase card was used to expedite the procurement of goods and services from the contractor because the preparation, approval, and issuance of a delivery order was too time-consuming in certain circumstances. For all 75 transactions, we found that the contractor's estimated costs were almost always equal to or close to the $2,500 micro-purchase threshold. Because we found no documentation of independent receipt and acceptance of the services provided or any documentation that the work for these charges was performed, these charges are potentially fraudulent, and we have referred them to our Office of Special Investigations for further investigation. IGs also identified fraudulent purchases. The Transportation Department's IG reported on two cases involving employees' fraudulent use of their purchase cards. In one case, a cardholder used a government purchase card to buy computer software and other items costing over $80,000 for a personal business. In the other case, a cardholder made numerous unauthorized charges totaling more than $58,000, including a home stereo system and a new engine for his car. Additionally, Interior's IG identified fraudulent purchases such as payments for monthly rent and phone bills, household furnishings, jewelry, and repairs to personal vehicles. One type of improper purchase we identified is the "split purchase," which we defined as purchases made on the same day from the same vendor that appear to circumvent single purchase limits. The Federal Acquisition Regulation prohibits splitting a transaction into more than one segment to avoid the requirement to obtain competitive bids for purchases over the $2,500 micro-purchase threshold or to avoid other established credit limits. For example, one cardholder from Education purchased two computers from the same vendor at essentially the same time. 
Because the total cost of these computers exceeded the cardholder's $2,500 single purchase limit, the total of $4,184.90 was split into two purchases of $2,092.45 each. We found 27 additional purchases totaling almost $120,000 where Education employees made multiple purchases from a vendor on the same day. Similarly, our analysis of purchase card payments at the two Navy units identified a number of purchases from the same vendor on the same day. To determine whether these were, in fact, split purchases, we obtained and analyzed supporting documentation for 40 fiscal year 2000 purchases at the two Navy units. We found that in many instances, cardholders made multiple purchases from the same vendor within a few minutes or a few hours for items such as computers, computer-related equipment, and software, that involved the same, or sequential or nearly sequential purchase order and vendor invoice numbers. Based on our analysis, we concluded that 32 of the 40 purchases were split into two or more transactions to avoid the micro-purchase threshold. During our follow-up work, we found that 23 of 50 fiscal year 2001 purchases by the two Navy units were split into two or more transactions to avoid the micro-purchase threshold. Split purchases were also identified by the IGs at the Departments of Agriculture and Transportation. For example, Agriculture's IG reported that it investigated two employees who intentionally made multiple purchases of computer equipment with the same merchant in amounts exceeding their established single purchase limits. During 3 different months, these employees purchased computer systems totaling $121,123 by structuring their individual purchases of components in amounts less than the individual single purchase limit of $2,500. In September 1999, a computer procurement totaling $47,475 was made using 20 individual purchase card transactions during a 4-day period. 
Other computer purchases were made in November 1999 involving 15 purchase card transactions over a 3-day period totaling $36,418 and in June 2000 involving 15 individual transactions over a 5-day period totaling $37,230. The IG reported that these procurements should have been made by a warranted contracting officer. Similarly, Transportation's IG reported that it identified 13 transactions totaling about $106,000 that violated the department's policies against splitting purchases. Another problem we and the IGs identified is that some property purchased with purchase cards was not entered in agency property records. According to our Standards for Internal Control in the Federal Government, an agency must establish physical control to secure and safeguard vulnerable assets. Such assets should be periodically counted and compared to control records. Recording the items purchased in property records is an important step to ensure accountability and financial control over these assets and, along with periodic inventory counts, to prevent theft or improper use of government property. At Education and the Navy units, we identified numerous purchases of computers and computer-related equipment, cameras, and palm pilots that were not recorded in property records and for which the agencies could not provide conclusive evidence that the items were in possession of the federal government. For example, the lack of controls at Education contributed to the loss of 179 pieces of computer equipment costing over $200,000. We compared serial numbers obtained from a vendor where the computers were purchased to those in the department's asset management system and found that 384 pieces of computer equipment were not listed in the property records. We conducted an unannounced inventory to determine whether the equipment was actually missing or inadvertently omitted from the property records. We found 205 pieces of equipment. 
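The split-purchase pattern described above (same cardholder, same vendor, same day, combined total over the $2,500 micro-purchase threshold) lends itself to automated screening of transaction data. The sketch below is illustrative only; the transaction records are hypothetical and modeled on the Education example of a $4,184.90 computer buy split into two $2,092.45 charges:

```python
from collections import defaultdict

MICRO_PURCHASE_LIMIT = 2500.00  # FAR micro-purchase threshold cited in the report

def flag_split_purchases(transactions):
    """Group same-day, same-vendor, same-cardholder charges and flag
    groups of two or more whose combined total exceeds the limit."""
    groups = defaultdict(list)
    for t in transactions:
        groups[(t["cardholder"], t["vendor"], t["date"])].append(t["amount"])
    flagged = []
    for key, amounts in groups.items():
        total = sum(amounts)
        if len(amounts) >= 2 and total > MICRO_PURCHASE_LIMIT:
            flagged.append((key, round(total, 2)))
    return flagged

# Hypothetical data: two equal charges that together exceed the threshold,
# plus an unrelated small purchase that should not be flagged.
sample = [
    {"cardholder": "A", "vendor": "PC Vendor", "date": "1999-09-01", "amount": 2092.45},
    {"cardholder": "A", "vendor": "PC Vendor", "date": "1999-09-01", "amount": 2092.45},
    {"cardholder": "B", "vendor": "Office Supply", "date": "1999-09-01", "amount": 150.00},
]
print(flag_split_purchases(sample))  # flags only cardholder A's paired charges
```

A screen like this would not prove intent--legitimate same-day purchases occur--but it would have surfaced the sequential, near-threshold transactions described above for follow-up review.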
Education officials have been unable to locate the remaining 179 pieces of missing equipment. They surmised that some of these items may have been surplused; however, there is no documentation to determine whether this assertion is valid. At the Navy units, our initial analysis showed that the Navy did not record 46 of 65 sampled items in their property records. When we asked to inspect these items, the Navy units could not provide conclusive evidence that 31 of them--including laptop computers, palm pilots, and digital cameras-- were in the possession of the government. For example, for 4 items, the serial numbers of the property we were shown did not match purchase or manufacturer documentation. In addition, we were told that 5 items were at other Navy locations throughout the world. Navy officials were unable to conclusively demonstrate the existence and location of these 5 items. We were unable to conclude whether any of these 31 pieces of government property were stolen, lost, or being misused. We and the IGs have made recommendations to the various agencies that, if fully implemented, will help improve internal controls over the purchase card programs so that fraudulent and improper payments can be prevented or detected in the future and vulnerable assets can be better protected. These recommendations include (1) emphasizing policies on appropriate use of the purchase card and cardholder and approving official responsibilities, (2) ensuring that approving officials are trained on how to perform their responsibilities, and (3) ensuring that approving officials review purchases and their supporting documentation before certifying the statements for payment. Agencies have taken actions to respond to the recommendations made. However, during our follow-up work at Education and the Navy units, we found that weaknesses remain that continue to leave them vulnerable to fraudulent and improper payments and lost assets. 
Management's ongoing commitment to improving internal controls is necessary to minimize this vulnerability.
The use of government purchase cards has increased in recent years as agencies have sought to eliminate the bureaucracy and paperwork long associated with small purchases. At the same time, agencies need to have adequate internal controls in place to protect the government from waste, fraud, and abuse. GAO found significant internal control weaknesses in agency purchase card programs, including inadequate review and approval processes, a lack of training for both cardholders and approving officials, and poor monitoring. This lax environment allowed cardholders to make fraudulent, improper, abusive, and questionable purchases. Weak controls also resulted in lost, missing, or misused government property.
USPS's mail processing network consists of multiple facilities with different functions; figure 1 shows a simplified version of this complex network. USPS can receive mail into its processing network from different sources such as mail carriers, post offices, and mailing companies. Once USPS receives mail from the public and commercial entities, it processes and distributes the mail on automated equipment that cancels stamps and sorts bar-coded mail. Once mail distribution has been completed by other operations, the mail is transported between processing and distribution facilities. Depending on the mail shape and classification, USPS processes the mail through different types of facilities that perform various functions. While mail is processed mainly through these facilities, mail processing operations also occur in other facilities, such as annexes, which are temporary facilities used as overflow for mail processing. In its June 2008 Network Plan, USPS determined that it will reexamine its mail processing network on an ongoing basis given changes in mail volume and outlined several initiatives to improve management of its mail processing operations, retail operations, and workforce to increase efficiency and reduce costs. With regard to its mail processing operations specifically, USPS identified three major initiatives to improve efficiency: (1) closing Airport Mail Center (AMC) operations, (2) transforming the Bulk Mail Center (BMC) network, and (3) consolidating AMP operations. USPS's Network Plan also included criteria for evaluating decisions, the three most important of which were cost, service, and capacity. In September 2008, we reported that USPS took steps to address our prior recommendations to strengthen planning and accountability for its network initiatives, which was important as USPS began implementing them. 
However, we also found limited information on performance targets or on the costs and savings attributable to USPS's various mail processing network initiatives. In the case of consolidating AMP operations, USPS revised its guidance on the process for AMP consolidations in March 2008. The revised guidance included key steps and time frames associated with them, as well as criteria to consider when making a decision to consolidate operations. However, the AMP Handbook does not provide guidance on how to identify potential consolidation opportunities. In January 2010, the USPS OIG recommended that the Vice President of Network Operations develop and document specific criteria to identify consolidation opportunities, and USPS management agreed with this recommendation. In December 2009, USPS also updated the AMP Communication Plan, which supplements the AMP guidelines and provides specific guidance on communicating with stakeholders. USPS has realigned parts of its mail processing network and continues to seek additional opportunities to achieve its goal of creating an efficient and flexible network. For fiscal year 2009, USPS realized a cost savings of almost $30 million from eliminating all AMC operating functions, closing nine of these facilities, and reorganizing the functions of the BMCs into Network Distribution Centers (NDCs). Table 1 shows the status of USPS's three major processing network initiatives intended to lower costs and achieve savings by reducing excess capacity and fuel consumption. Specific steps taken on the three major mail processing network initiatives are as follows: Elimination of AMC operating functions. Of its three major network initiatives, USPS has taken the most action by eliminating the AMC function and closing 9 AMC facilities. In the past decade, USPS has closed 68 of 80 AMC facilities. 
Located on airport property, AMC facilities primarily processed mail to expedite its transfer to and from up to 55 different commercial passenger airlines. Over time, USPS reduced the number of commercial airlines transporting mail from 55 to 7 and, from 2001 to 2007, the volume of mail transported by commercial airlines decreased by over 87 percent. At the same time, USPS contracted with air freight carriers to transport most of the mail requiring air transfer. In response, many AMC facilities made use of the available processing space by taking on additional processing functions typically handled by local processing and distribution centers (P&DCs), such as carrier and retail operations. In 2006, in an effort to eliminate redundancy and reduce costs, USPS began transferring functions performed at AMCs to nearby P&DCs or outsourcing these operations and, in September 2008, we reported that USPS estimated a targeted total savings of $117 million for closing these AMC facilities. Since our 2008 report, USPS has closed 9 AMC facilities, avoiding an estimated $12.2 million in costs. It has also revised the total cost savings to $113 million resulting from eliminating the AMC function and closing facilities from fiscal year 2007 to fiscal year 2009. USPS officials told us that they plan to reclassify the 12 remaining facilities and determine whether some of them can be closed. Reorganization of BMC functions into NDC Network. USPS has reorganized the functions of its 21 BMCs into an NDC network with expanded functions that more efficiently use long-haul transportation and better align work hours with workload, according to the 2009 Updated Network Plan. Before the reorganization, all BMCs performed the same functions of processing local, destinating, and originating mail (e.g., Standard Mail®, Periodicals, and Package Services). 
In fiscal year 2009, USPS reorganized the BMC network, including renaming the facilities as NDCs, to reflect the type of operations that are occurring at the facilities, according to USPS officials. The NDC network is divided into three tiers of facilities with different distribution and processing roles: Tier 1 NDC facilities process local and destinating mail; Tier 2 facilities process local, destinating, and originating mail; and Tier 3 facilities handle Tier 2 functions and consolidate less-than-truckload volumes of mail from Tier 2 facilities. As a result of the reorganization, USPS reduced the number of facilities processing originating mail from 21 to 10; the remaining 11 facilities continue to process local and destinating mail. According to officials, USPS completed the reorganization of the BMC functions into the NDC network in March 2010 and plans to further integrate other mail processing operations into the network. USPS realized a cost savings of about $17.7 million for fiscal year 2009, with a projected cost savings of about $233.8 million from additional reorganization in fiscal years 2010 and 2011. According to officials, USPS also plans to integrate its Surface Transfer Center (STC) functions into the NDC network to further eliminate redundancy and move all mail traveling the same route through the same facilities. USPS officials told us they are currently identifying and assessing opportunities for consolidating STC functions into the NDC network; however, USPS has not established a definitive timeline as to when the functions of the STC are to be integrated into the NDC network because such integration depends on future mail volumes, space requirements and space availability, and necessary equipment. Consolidation of AMP operations and facilities. As shown in table 1, USPS has continued to initiate, review, and make decisions on AMP proposals to consolidate its operations and facilities. 
AMP proposals are intended to reduce costs and increase efficiency by making better use of excess capacity or underused resources, primarily at USPS's P&DC facilities; an AMP proposal consolidates originating operations, destinating operations, or both from one mail processing facility, which downsizes its operations, to other facilities nearby, which gain those operations. While local and regional USPS management is responsible for conducting a feasibility study and developing an AMP proposal, USPS headquarters approves or disapproves the AMP proposal. Upon approval from USPS headquarters, local and regional USPS management implements the consolidation of processing operations identified in an AMP proposal. According to USPS officials, the AMP initiative is an ongoing effort to identify opportunities to achieve efficiencies and, as such, USPS has not developed a program target for annual savings from AMP consolidations. As of March 2010, USPS was studying or reviewing 24 additional AMP proposals. (See app. I for a list of the AMP proposals under review.) On the basis of our analysis of 32 AMP proposals that were implemented, approved, or not approved since October 2008, USPS has followed the key steps in the AMP process. (See app. I for a list of the AMP proposals we reviewed.) As shown in figure 2, USPS has developed key steps for the AMP process, and it has established an overall goal of making an AMP decision within 5 months of the study being initiated. Our analysis found that USPS completed each step of the AMP process. It took about 6 months on average to complete the review process from initiating an AMP proposal to making a decision on the 27 AMP proposals we analyzed. As shown in figure 3, 4 of the 27 AMP proposals we reviewed were completed in less than 5 months, while others took longer because of various factors, such as resolving conflicting interests from stakeholders and staffing issues. 
According to USPS officials, the time frames are goals to ensure the process moves forward, but USPS will take the time necessary to ensure that any issues that arise from an AMP proposal are resolved and appropriate decisions are made, even if doing so means going beyond the targeted 5-month time frame. For example, while USPS headquarters completed its review in June 2009 of consolidating the Dallas, TX, P&DC into the North Texas P&DC, the AMP proposal was not approved until December 2009 partially because the OIG was concurrently reviewing the AMP proposal in response to a congressional request. Many of the interim steps in the process conducted by the local and regional management also have time frames associated with them, such as studying the feasibility of an AMP proposal within a 2-month period. However, according to officials, USPS does not centrally track all the dates associated with the interim steps in the process because reviewing AMP proposals is an ongoing, iterative process with some steps occurring concurrently among local and regional USPS management and headquarters. An important part of the process is notifying and communicating with stakeholders, and USPS completed these steps as called for in its guidance. USPS is required to notify stakeholders, including employees, employee organizations, appropriate individuals at various levels of government, local mailers, community organizations, and the local media, as to when a feasibility study is initiated and when a final decision is made on the AMP proposal. According to its guidance, USPS must also provide stakeholders with available information about any service changes that may be affected from the proposed AMP consolidation and give ample opportunities for stakeholders to provide input on the AMP proposals. USPS is also required to conduct a public meeting after the local USPS management completes and forwards the feasibility study to regional and headquarters management for their review. 
We reported in 2008 that USPS had improved communication with stakeholders with regard to AMP proposals. In our analysis, we found that USPS consistently notified the stakeholders when a feasibility study was initiated and when a final decision was made; we also found that USPS consistently held public meetings and summarized public input for each AMP proposal we reviewed. Representatives of the postal unions we spoke with also commented that the USPS has been following the process and communicating with them and that the local union representatives generally attended the public meetings and were involved with the process. The last step in the AMP process is completion of two postimplementation reviews to assess the results of the consolidation. USPS has completed two reviews of the 32 AMP proposals we reviewed and is in the process of completing five more. The postimplementation reviews are intended to evaluate and measure the actual results of consolidation decisions, including realized savings in work hours, transportation, maintenance, and facility costs. In the first postimplementation review of the consolidation of the Kansas City P&DC in Kansas into the Kansas City P&DC in Missouri, USPS identified cost savings of about $22.3 million after the consolidation--$13 million more than its original projected savings of $9.3 million. USPS officials commented that several factors unrelated to the consolidation, such as the use of in-house maintenance employees rather than outsourced labor for facility projects and incentives for retirement in the fall of 2009, contributed to the larger than expected savings. Similarly, USPS identified cost savings of about $6.3 million in the first postimplementation review of the Canton P&DC consolidation with the Akron P&DC in Ohio--$4.1 million more than its original projected savings of $2.2 million. According to USPS officials, the original projections were made based on expected savings resulting from the consolidation. 
For both postimplementation reviews, additional savings have been realized in part because mail volume has continued to decline resulting in further reductions in work hours and transportation costs. Based on our analysis of 32 AMP proposals that USPS had decided on since October 2008, USPS consistently considered the criteria in its guidance when making its decisions. According to the AMP guidance, USPS must consider the following four criteria: impacts on the service standards for all classes of mail, issues important to local customers, impacts to USPS staffing, and savings and costs associated with moving mail processing operations. We also found that USPS has standardized its AMP data sources and analytical methodologies to achieve more consistent analysis when evaluating the criteria during the decision-making process. In addition, the OIG independently reviews data and the criteria USPS has used to validate the business cases for some AMP proposals. For instance, the OIG validated the business case for some of the AMP proposals we reviewed, including the consolidations of operations at Dallas P&DC into North Texas P&DC in Texas and New Castle processing and distribution facility (P&DF) into Pittsburgh P&DC in Pennsylvania. Additionally, the OIG concurred with the business decisions for consolidating mail processing operations at the Canton P&DC with the Akron P&DC in Ohio and Lakeland P&DC and Manasota P&DC with the Tampa P&DC in Florida. While USPS consistently evaluated these criteria, a stakeholder we spoke with commented that USPS does not provide a complete set of data it uses to make its decisions. Although USPS is not required to provide complete data that are used to consider AMP proposals under the AMP guidance, the stakeholder believed that more data transparency is needed to permit validation of USPS's AMP decisions. 
According to USPS officials and USPS guidance, AMP proposals contain commercially sensitive information, and public disclosure of the information could cause competitive harm to USPS. Accordingly, sensitive data contained in AMP proposals are redacted. For the proposals we reviewed, we found that USPS assessed the impact that a consolidation would have on the service standards for all classes of mail and considered issues important to local customers. Two of the AMP proposals we reviewed--the consolidation of operations at Mansfield P&DF into Akron P&DC in Ohio and Zanesville Post Office into Columbus P&DC in Ohio--were not approved due to a potential downgrade in the delivery services for First-Class Mail®, despite potential cost savings for consolidating those facilities. In other instances, an AMP proposal was approved even though a downgrade in service for a particular class of mail, such as Package Services, was identified, because an upgrade in delivery services of other mail classes, such as First-Class Mail®, was also identified. According to USPS officials, it is the overall net effect of changes in delivery services that is considered in the decision-making process. In the case of considering issues important to local customers, USPS assessed whether the AMP proposal would affect customer service, such as any changes in mail pickup times, hours for business mail acceptance, and hours of retail operations. In many of the AMP proposals we reviewed, USPS forecasted that there would be no adverse impact on local customer service. USPS also forecasted that many of the retail hours at bulk mail entry units covered in the AMP proposals would not be changed. The impact that an AMP proposal would have on USPS staffing and the estimated savings and costs associated with the consolidation are also important criteria in the AMP decision process.
When considering the impact on staffing, USPS examined and estimated the potential number of positions that would be reduced or transferred to gaining facilities. This is a reduction in the number of positions that are allotted to a facility and not necessarily a loss of employees. Employees who are affected by the consolidation are given positions in the gaining facility, or in other facilities, in accordance with their respective collective bargaining agreements. USPS estimated a total reduction of 1,263 allotted positions for the AMPs we reviewed. In estimating potential costs and savings, USPS assessed work hour savings from staffing changes, savings associated with transportation and maintenance, as well as savings associated with space and leasing facilities. USPS also examined one-time costs associated with relocating staff, moving mail processing equipment, and changing facilities. If overall estimated cost savings were not identified, then the AMP proposal would not proceed. For example, while cost savings were identified in the AMP proposal to consolidate operations at Hattiesburg Customer Service Mail Processing Center with Gulfport P&DF in Mississippi, the proposal was not approved because one-time costs associated with moving mail processing equipment were not identified, and thus, the estimated total annual savings were insufficient. USPS estimated a total annualized cost savings of about $98.5 million for the 29 approved and implemented AMP proposals we reviewed. In 2005, we reported that because USPS did not have criteria to consider, or a process to follow, when making mail processing consolidation decisions, it was not clear whether the decisions would be made in a manner that is fair to all stakeholders or that is efficient and effective.
As such, we recommended that USPS establish a set of criteria for evaluating consolidation decisions, develop a process for implementing these decisions that includes evaluating and measuring the results, and develop a mechanism for informing stakeholders as decisions are made. In 2008, we reported that USPS had made progress on implementing our prior recommendations: USPS established criteria for evaluating consolidation decisions, developed a process for evaluating and measuring the results of its AMP decisions, modified its AMP Communication Plan to improve public notification, engagement, and transparency, and clarified its process for addressing public comments. As stated earlier, we found that USPS followed its AMP process and consistently applied its criteria for evaluating AMP proposals that we reviewed. We provided a draft of this report to USPS for official review and comment. In response, USPS provided technical comments that we incorporated where appropriate. We are sending copies of this report to the Postmaster General, appropriate congressional committees, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions concerning this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. Table 2 below lists Area Mail Processing (AMP) proposals under review by USPS, while Table 3 lists AMP proposals that we reviewed. In addition to the individual named above, Maria Edelstein, Assistant Director; Colin Fallon; Brandon Haller; Jennifer Kim; Jaclyn Nelson; and Crystal Wesco made key contributions to this report.
Deteriorating financial conditions and declining mail volume have reinforced the need for the U.S. Postal Service (USPS) to increase operational efficiency and reduce expenses in its mail processing network. This network consists of interdependent functions in nearly 600 facilities. USPS developed several initiatives to reduce costs and increase efficiency; however, moving forward on some initiatives has been challenging because of the complexities involved in consolidating operations. In response to a conference report directive, GAO assessed (1) the overall status and results of USPS's efforts to realign its mail processing network and (2) the extent to which USPS has consistently followed its guidance and applied these criteria in reviewing Area Mail Processing (AMP) proposals for consolidation since the beginning of fiscal year 2009. To conduct this assessment, GAO reviewed USPS's Network Plan, area mail processing consolidation guidance and proposals as well as other documents; compared USPS's actions related to consolidation of area mail processing facilities with its guidance, and interviewed officials from USPS, the USPS Office of Inspector General, and employee organizations. GAO provided USPS with a draft of this report for comment. In response, USPS provided technical comments that were incorporated where appropriate. USPS has realigned parts of its mail processing network since the beginning of fiscal year 2009 and continues to seek additional opportunities to achieve its goal of creating an efficient and flexible network and realize cost savings. 
Specifically, USPS: (1) eliminated all functions of the Airport Mail Centers, closed 9 of these facilities, and now uses the remaining 12 for other purposes, resulting in a realized cost savings of about $12.2 million in fiscal year 2009; (2) reorganized the functions of the 21 Bulk Mail Centers into newly developed Network Distribution Centers, resulting in a realized cost savings of about $17.7 million in fiscal year 2009; and (3) implemented 23 proposals to consolidate AMP operations and facilities and approved another 6 AMP consolidation proposals. USPS estimated an annual cost savings of about $98.5 million for the 29 approved and implemented AMP proposals. Additionally, USPS officials stated that they plan to integrate the Surface Transfer Center functions into the Network Distribution Center network to further eliminate redundancy in transporting mail. USPS has developed specific program targets for the ongoing reorganization efforts of the Network Distribution Centers and estimated a cost savings of about $233.8 million for fiscal years 2010 and 2011 from reduction in work hours and transportation costs. On the basis of GAO's analysis of 32 AMP proposals that were implemented, approved, or not approved since the beginning of fiscal year 2009, USPS has followed its realignment guidance by completing each step of the process and consistently applying its criteria in its reviews. GAO's analysis found that it took about 6 months on average--a month more than USPS's target of 5 months--to complete the review process from initiating an AMP proposal to making a decision. USPS officials noted the importance of the AMP decisions and the need to sometimes take longer than what the guidance suggests to ensure the correct decision. GAO also found that USPS consistently notified stakeholders when key steps of the AMP process were completed, such as when an AMP proposal was initiated, or public meetings were held. 
For each of the AMP proposals that GAO reviewed, USPS also consistently evaluated its four criteria related to AMP consolidations: (1) impacts on the service standards for all classes of mail, (2) issues important to local customers, (3) impacts to USPS staffing, and (4) savings and costs associated with moving mail processing operations.
As federal employees plan for their eventual retirement from government service, they often consider many financial and lifestyle issues. Agency-provided retirement education is generally the primary source of the information that employees need to plan for these issues before they retire. Retirement benefits represent an important portion of total federal compensation and employees often cite these benefits as a primary reason for staying in government service. Thus, agencies also benefit from sponsoring retirement education programs, which allow them to capitalize on their comparative advantage in competitive labor markets as well as invest in the government's human capital. The Federal Employees' Retirement System Act of 1986 (FERSA) granted the Office of Personnel Management (OPM) and federal agencies broad authority to design and implement retirement education programs for employees covered by the two largest federal civilian retirement programs--the Civil Service Retirement System (CSRS) and the Federal Employees' Retirement System (FERS). Specifically, FERSA authorizes agencies to designate retirement counselors who are responsible for providing employees with benefits information, and mandates that OPM establish a training program for these agency retirement counselors. FERSA also created the Federal Retirement Thrift Investment Board to administer the Thrift Savings Plan (TSP). The Thrift Board provides training and information on TSP to agency personnel offices and groups of employees upon agency request; however, it is not responsible for providing retirement education for the federal workforce. CSRS, which was established in 1920, currently includes an annuity and TSP. CSRS' annuity predates the Social Security system by several years. When the Social Security system was established, Congress decided that employees in CSRS would not be covered by Social Security through their federal employment.
Starting in 1987, employees covered by CSRS may also contribute up to 5 percent of their salary to TSP; however, they receive no government contributions. CSRS was closed to new entrants after December 31, 1983, and, according to OPM actuaries, is estimated to end in about 2070, when all covered employees and survivor annuitants are expected to have died. FERS was implemented in 1987 and generally covers those employees who first entered federal service after 1983. The primary impetus for the new program was the Social Security amendments of 1983, which required all federal employees hired after December 1983 to be covered by Social Security. Thus, FERS includes Social Security, an annuity that is smaller than that provided under CSRS, and TSP. The government automatically contributes an amount equal to 1 percent of salary to TSP accounts for all employees covered by FERS, regardless of whether those employees make any voluntary contributions to their accounts. In addition, employees covered by FERS may contribute up to 10 percent of their salaries, up to the current legal maximum of $10,000, and receive government matching contributions on the first 5 percent. At the beginning of fiscal year 1998, CSRS and FERS covered about 2.7 million employees, or 93 percent of the civilian workforce, including U.S. Postal Service employees. As of fiscal year 1995, FERS covered slightly more federal employees than CSRS. In response to the request of Senator Carl Levin, in his former capacity as Ranking Minority Member of the Subcommittee on International Security, Proliferation and Federal Services, Senate Committee on Governmental Affairs, our objectives in preparing this report were to provide information on what OPM officials and retirement experts view as the recommended content, presentation formats, and timing of retirement education programs and OPM's and agencies' retirement education roles, responsibilities, and practices in the context of these recommendations. 
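The FERS contribution rules described above (an automatic agency contribution of 1 percent of salary, employee contributions of up to 10 percent of salary capped at $10,000, and agency matching on the first 5 percent) can be sketched as follows. This is an illustrative simplification, not OPM's or the Thrift Board's actual computation; in particular, the match is assumed here to be dollar-for-dollar on the full 5 percent, a detail the report does not specify, and the function name is hypothetical.

```python
def fers_tsp_contributions(salary, employee_pct):
    """Sketch of the FERS TSP contribution rules as described in the report.

    - The government automatically contributes 1 percent of salary,
      regardless of whether the employee contributes anything.
    - The employee may contribute up to 10 percent of salary, subject to
      the $10,000 legal maximum cited in the report.
    - The government matches contributions on the first 5 percent of
      salary (assumed here to be a full match; the report does not give
      the exact match schedule).

    Returns (employee_contribution, automatic_contribution, match).
    """
    employee_pct = min(employee_pct, 10.0)            # 10-percent ceiling
    employee = min(salary * employee_pct / 100, 10_000)  # $10,000 cap
    automatic = salary * 0.01                         # automatic 1 percent
    matched_pct = min(employee_pct, 5.0)              # match first 5 percent
    match = salary * matched_pct / 100
    return employee, automatic, match
```

Under these assumptions, a FERS employee earning $50,000 who contributes 6 percent would contribute $3,000 and receive a $500 automatic contribution plus a $2,500 match.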
Because of time and resource constraints, we limited the scope of our review to the education provided to employees covered by CSRS and FERS, who represent the majority of federal civilian employees. To identify OPM's views on the recommended content, presentation formats, and timing of a retirement education program, we interviewed OPM officials and reviewed OPM's published guidance on how agencies are to design and implement federal retirement education programs. To identify retirement experts' views, we interviewed a judgmentally selected group of 15 retirement experts using a structured interview that had been pretested and provided in advance. The experts also responded to a close-ended questionnaire. We used a summary of the experts' responses as our principal basis for identifying the recommended content, presentation formats, and timing of a retirement education program. In summarizing the experts' responses to the close-ended questionnaire, we used a super-majority criterion (i.e., agreement on the part of 10 or more experts) to classify a list of 21 potential topics, or content, as (1) essential; (2) recommended, but not essential; or (3) optional. Specifically, we identified a topic as "essential" when 10 or more experts responded that the topic was essential. If the topic did not meet the criterion for being essential, we identified it as "recommended" when 10 or more experts responded that the topic was either essential or recommended. Similarly, if the topic did not meet the criteria for being essential or recommended, we identified it as "optional" when 10 or more experts responded that the topic was essential, recommended, or optional.
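The super-majority criterion described above is effectively a small cascading-threshold rule. The sketch below, with illustrative names, shows one way to express it; it assumes each of the 15 experts gave exactly one rating per topic, as the report's description implies.

```python
def classify_topic(responses, threshold=10):
    """Apply the super-majority classification rule described in the report.

    `responses` is a list of expert ratings for one topic, each one of
    "essential", "recommended", or "optional". A topic is classified
    "essential" if at least `threshold` experts rated it essential;
    failing that, "recommended" if at least `threshold` rated it essential
    or recommended; failing that, "optional" if at least `threshold` rated
    it essential, recommended, or optional.
    """
    essential_count = responses.count("essential")
    recommended_count = essential_count + responses.count("recommended")
    optional_count = recommended_count + responses.count("optional")
    if essential_count >= threshold:
        return "essential"
    if recommended_count >= threshold:
        return "recommended"
    if optional_count >= threshold:
        return "optional"
    return "unclassified"
```

For example, a topic rated essential by 8 experts and recommended by 3 would be classified "recommended," since the cumulative count of 11 meets the threshold of 10.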
To identify candidates who had the appropriate background and experience to serve as retirement experts, we solicited and received nominations from the following eight associations and organizations that specialize in retirement and/or financial planning issues: the American Association of Retired Persons, the Employee Benefit Research Institute, the International Association for Financial Planning, the International Foundation of Employee Benefit Plans, the National Association of State Retirement Administrators, the National Conference of Public Employee Retirement Systems, the Pension Research Council, and the Teachers Insurance and Annuity Association. For each candidate nominated, we reviewed the biographical information provided by the nominating organization(s). We selected 16 individuals who each had extensive experience with pension or retirement issues and specific expertise on retirement education. The selected experts collectively represented a breadth of professional backgrounds in both the public and private sectors, including academics, unions, financial planning, pension administration, advocacy, financial services, and human resource management consulting. We invited each of the selected candidates to share their views on retirement education, and 15 agreed to do so. Appendix I provides more information on the experts with whom we consulted. To identify OPM's and agencies' retirement education roles, responsibilities, and practices in the context of the recommendations on program content, presentation formats, and timing, we interviewed officials representing OPM, the Thrift Board, and 12 randomly selected federal agencies that had 1,000 or more employees and whose headquarters were located within the Washington, D.C., metropolitan area. We used a structured interview that had been pretested and provided to the 12 agencies in advance. We also analyzed documents and data provided by the agencies' officials. 
We used a summary of the agencies' practices as the principal basis for comparing the actual practices of the 12 agencies with the recommended content, presentation formats, and timing identified by OPM officials and the experts. We did not independently verify agencies' responses regarding the specifics of the content, presentation formats, and timing of their retirement education programs. Thus, although we used terms such as "provided" and "sponsored" to describe agencies' practices, we were generally referring to what agencies told us they did. To develop the sample of agencies for our review, we used information from the spring 1997 Central Personnel Data File (CPDF)--an automated information system that contains individual records for most federal civilian employees and is maintained by OPM. The list of agencies used in selecting this sample included 68 organizations that represented a total of 1,682,391 federal employees who were covered by CSRS or FERS. We stratified the 68 organizations according to size (1,000 to 9,999 employees; 10,000 to 99,999 employees; and 100,000 or more employees) and randomly selected 4 agencies from each group. For the Department of Defense (DOD), our list of 68 organizations included only the Departments of the Army, Air Force, and Navy. On this basis, we selected the following 12 agencies for review: the International Trade Administration and National Oceanic and Atmospheric Administration (NOAA) of the Department of Commerce; the Bureau of Reclamation of the Department of the Interior; the Internal Revenue Service (IRS), U.S. Customs Service, and U.S.
Secret Service of the Department of the Treasury; the Health Resources and Services Administration (HRSA) and the National Institutes of Health of the Department of Health and Human Services (HHS); the Department of Housing and Urban Development (HUD); the Veterans Health Administration (VHA) of the Department of Veterans Affairs (VA); and the Departments of the Navy and Air Force of DOD. The sampled agencies employed about 42 percent of the employees covered by CSRS or FERS from our sampling universe. As agreed, our analysis did not address the effectiveness of OPM's administration of federal retirement education, agencies' programs, or the retirement education that individual federal employees might receive. Also, we did not attempt to independently validate the information provided to us by OPM and the 12 agencies. Although we audited the reliability of CPDF data for fiscal year 1996 and found it sufficiently reliable for most governmentwide analyses, we did not update that audit. However, we are not aware of changes in the way that agencies submit or OPM processes CPDF data that would materially affect the reliability of the data. We used a random sample to have an objective, unbiased sample. However, as a consequence of our small sample size, the retirement education practices described in this report are not generalizable to all agencies that employ 1,000 or more employees and have headquarters in the Washington, D.C., metropolitan area. We are reporting solely on the practices of those agencies we surveyed. We did our review in Washington, D.C., from January 1998 to February 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Director of OPM; the Secretaries of the Department of Commerce, DOD, HHS, HUD, the Interior, the Treasury, and VA; the Commissioner of Internal Revenue; or their designees. OPM and Commerce provided written comments.
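The sampling approach described above (68 organizations stratified into three size bands, with 4 drawn at random from each) can be sketched as follows. The function and data structure are illustrative assumptions, not GAO's actual procedure; the report's sampling universe already excludes agencies with fewer than 1,000 employees, which the sketch mirrors by skipping them.

```python
import random

def stratified_sample(agencies, k=4, seed=None):
    """Illustrative sketch of stratified random sampling by agency size.

    `agencies` maps agency name -> number of covered employees. Agencies
    are grouped into the three size strata described in the report, and
    k agencies are drawn at random from each stratum. Agencies with fewer
    than 1,000 employees fall outside the sampling universe and are skipped.
    """
    rng = random.Random(seed)
    strata = {"1,000-9,999": [], "10,000-99,999": [], "100,000+": []}
    for name, size in sorted(agencies.items()):  # sorted for reproducibility
        if size < 1_000:
            continue  # outside the sampling universe
        elif size <= 9_999:
            strata["1,000-9,999"].append(name)
        elif size <= 99_999:
            strata["10,000-99,999"].append(name)
        else:
            strata["100,000+"].append(name)
    return {label: rng.sample(pool, min(k, len(pool)))
            for label, pool in strata.items()}
```

Stratifying before sampling guarantees that small, medium, and large agencies are all represented, which a simple random draw of 12 from 68 would not.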
DOD's and IRS' comments were provided orally by the audit liaison and legislative affairs officer, respectively. These agencies' comments are presented at the ends of chapters 2 and 3, and OPM's written comments are reprinted in appendix II. HHS, HUD, the Interior, the Treasury's Customs Service and Secret Service, and VA said they had no comments on the draft report. OPM and the experts with whom we consulted held generally consistent views regarding the recommended content, presentation formats, and timing of retirement education programs. OPM provided guidance to federal agencies on CSRS and FERS administration in its CSRS and FERS Handbook for Personnel and Payroll Offices, benefits administration letters, and other advisory documents. OPM's guidance presented various recommendations regarding the design and implementation of agency retirement education programs. The retirement experts with whom we consulted also provided recommendations regarding the content, presentation formats, and timing of a retirement education program. Although the terminology used by OPM and the experts was not identical, we considered the substance of their recommendations regarding content, presentation formats, and timing to be generally consistent. For example, OPM and the experts agreed that new employees need basic information on their retirement system's characteristics, all employees need financial planning information on a periodic basis during their careers, and employees nearing retirement need transition planning information. Table 2.1 summarizes OPM's and the experts' views regarding the content and timing of agency-provided retirement education programs. OPM's views regarding the design and implementation of agencies' retirement education programs were reflected in the guidance and support it provided to agencies. 
While allowing agencies to exercise broad flexibility in designing and implementing their retirement education programs, OPM recommended that agencies include certain key topics or content, present information through various formats, and educate employees throughout their careers. The CSRS and FERS Handbook served as the principal vehicle for communicating OPM's guidance, and OPM updated that guidance on a periodic basis through handbook revisions and benefits administration letters sent directly to the agencies. OPM's guidance recommended that federal agencies consider including certain content as part of their retirement education programs. OPM's recommendations were not intended to be exhaustive and agencies were not required to include them in their retirement education programs. OPM's recommended topics included the following: plan type, including whether an employee is covered by CSRS or FERS; eligibility, including minimum age and service requirements for employees to (1) participate in the plan and (2) retire with full benefits; employer and employee contributions allowed or required under CSRS or voluntary contribution program; financial planning, including various investment strategies; military or prior civilian service deposits, including whether an employee has prior service for which a deposit or redeposit is owed and the effects of payment or nonpayment on an annuity; TSP withdrawal options, including when a retiree may begin withdrawing TSP savings as well as the monetary advantages and tax effects of the various withdrawal options; annuity estimates; divorce or separation, including the potential effect of divorce or separation agreements on retirement benefits; designating a beneficiary, including the cost and amount of survivor benefits as well as spousal eligibility for benefits; retaining health and life insurance benefits in retirement; cost-of-living adjustments (COLA), including how retirement benefits will be adjusted periodically for 
inflation depending on CSRS or FERS coverage; and Social Security and Medicare, including whether employees are covered by these programs and how the programs integrate with their other benefits. OPM recommended that agencies include written, interactive, and electronic formats as part of their retirement education programs. For example, OPM recommended that agencies use formats such as pamphlets and brochures, periodic workshops and seminars, Intranet/Internet Web sites, and recorded telephonic information in their retirement education programs. According to OPM, agencies that use multiple educational formats are likely to increase the number of employees that they reach through their retirement education program. OPM recommended that agencies provide employees with retirement information at various stages of their career, including: early career, 5 years before retirement eligibility, 1 year before retirement eligibility, 6 months before retirement, and 2 months before retirement. OPM also recommended that agencies cover certain topics with employees throughout their careers and periodically update information about any changes occurring to federal retirement programs or benefits. Table 2.1 summarizes OPM's recommendations on when agencies may wish to introduce topics to employees. OPM recommended that agencies identify and invite employees to attend a preretirement seminar within about 5 years before their retirement eligibility and about 1 year before their actual planned retirement. Moreover, OPM believed that agencies should contact employees within 1 year of retirement eligibility and offer those employees one-on-one counseling. Consistent with OPM's guidance, the retirement experts with whom we consulted recommended specific content, presentation formats, and timing that they considered essential for a retirement education program. 
A supermajority (at least 10 of 15) of the experts considered 13 topics to be essential to a retirement education program, while they identified 6 topics as recommended, but not essential, and 2 topics as optional. The experts identified the following 13 topics as being essential to a retirement education program:

- plan type, including whether an employee is covered by CSRS or FERS;
- participation and vesting requirements, or the amount of time that employees must work before they are eligible to (1) contribute to and (2) own, or become "vested" in, accrued benefits of their plan;
- employer and employee contributions that are allowed and/or required;
- estimated assets needed to retire that reflect an individual employee's desired retirement date, income level, and lifestyle;
- investment alternatives and strategies, including information on the association between investment risk and return, the benefits of saving earlier rather than later, and the importance of diversification across different types of investment vehicles;
- debt management that provides employees with information on how to manage limited resources efficiently and enhance their ability to save;
- tax considerations, including the benefits of saving with pretax versus after-tax dollars;
- retention of agency-provided health and life insurance benefits;
- minimum voluntary retirement dates;
- projected benefit amounts and COLAs;
- disability and survivor insurance, including how these programs are integrated with their other retirement benefits and any associated costs to employees;
- Social Security and Medicare, including whether employees are covered by these programs, how the programs are integrated with their other retirement benefits, and any associated costs to employees; and
- Medigap and long-term care insurance, that is, insurance designed to provide coverage for medical costs not covered by Medicare or other federal health insurance.
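The experts' point about the benefits of saving earlier rather than later reduces to compound-growth arithmetic. A minimal sketch of that arithmetic follows; the figures (a 7% average annual return, $3,000 yearly contributions, a 30-year horizon) are illustrative assumptions, not drawn from OPM's guidance or the experts' recommendations:

```python
def future_value(contribution, rate, start_year, end_year, horizon):
    """Value at the end of `horizon` years of level annual contributions
    made at the end of years start_year..end_year, compounded yearly."""
    return sum(contribution * (1 + rate) ** (horizon - year)
               for year in range(start_year, end_year + 1))

# Hypothetical savers: "early" contributes $3,000/year in years 1-10 only;
# "late" contributes $3,000/year in years 11-30 (twice as much in total).
early = future_value(3000, 0.07, 1, 10, 30)   # roughly $160,000 at year 30
late = future_value(3000, 0.07, 11, 30, 30)   # roughly $123,000 at year 30
```

On these assumptions the early saver ends up ahead despite contributing half as much, which is the reasoning behind introducing investment topics early in employees' careers.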
The experts also identified the following six topics as recommended, but not essential, for a retirement education program:

- health maintenance, both before and after retirement;
- early or deferred retirement options, including circumstances under which employees would be eligible to receive reduced retirement benefits (1) earlier than the minimum voluntary retirement date or (2) later than the time of actual separation from an agency;
- deciding when and whether to retire;
- withdrawal options, such as taking accrued benefits as an annuity versus as a lump-sum payment;
- postretirement employment, including information on starting a new career or working part-time; and
- inheritance planning, including the preparation of wills and other methods of transferring estates to survivors.

Finally, the experts identified the following two topics as optional components of a retirement education program:

- relocation, including whether and where employees might wish to relocate; and
- planning for increased leisure time.

The experts believed that agencies should avail themselves of a broad range of presentation formats in their retirement education programs. For example, agencies could distribute written guidance, such as brochures and newsletters; present information more interactively by sponsoring seminars, workshops, or one-on-one counseling sessions; and/or provide information upon request by establishing electronic systems, such as Intranet/Internet Web sites and recorded telephonic response systems. The experts believed that each presentation format has its advantages and disadvantages. Moreover, no one format would be optimal for communicating with all employees, because individual learning styles vary.
The experts also believed that each individual employee's need for information on a specific retirement education topic at any given point in their career is influenced by multiple demographic factors, including their age, marital status, knowledge of financial planning concepts, years until they are eligible or plan to retire, and health status. Thus, agencies are challenged with designing a retirement education program that can meet the needs of all their employees over their entire careers. The experts recommended that agencies focus on their employees' needs when selecting which presentation formats to include in their programs. To address individual employee learning styles and content needs, the experts recommended that agencies design their retirement education programs to include multiple and interactive formats to the extent possible. Specifically, they viewed one-on-one counseling and seminars as the optimal methods of presenting retirement education. Although these options represent the most costly methods of providing such information, the experts told us that both formats allow agencies to expose employees to a broad range of topics that employees then can pursue further on an as-needed basis. Moreover, employees benefit from being able to get direct and immediate responses to any questions they may have. The experts told us that one-on-one counseling represents the most customized source of information for employees; however, seminars allow for group interactions that may enrich the information available to employees. To better meet the individual content needs of different employees, the experts recommended that agencies choosing to use seminars or workshops should do so by offering customized sessions for specific groups, or segments, of their workforce. For example, agencies might provide seminars that are targeted to employees at different career stages, such as early career, midcareer, and preretirement.
Agencies then could target their content to include those topics that are most relevant to the attending group of employees. This approach would also provide employees with the opportunity to attend seminars periodically throughout their careers. The experts told us that written materials also play an important role in retirement education. These materials, which can be provided in paper or on electronic Web sites, allow agencies to provide consistent and detailed information to all employees in a cost-efficient way. Employees can use such reference materials as often as they like and at their convenience. However, many of the experts with whom we consulted did not recommend that agencies rely on written materials as their primary presentation format because employees may too readily ignore, file, or throw away such materials. In particular, the experts said that younger employees may regard information on retirement planning as something to which they need not devote much attention. The experts recommended that agencies introduce many of the topics identified as essential early within employees' careers. The experts also recommended that agencies update their employees on this information on a regular basis throughout their careers--approximately once every 1 to 5 years. The table at the beginning of this chapter (see table 2.1) summarizes the experts' recommendations regarding the content that agencies may wish to present at various times in employees' careers. The experts recommended that agencies introduce basic plan information to employees within their first year of employment. Additionally, the experts recommended that agencies update employees regularly (i.e., continuously or at least once a year) on many of the topics that the experts identified as essential, recommended, or optional after the topics have first been introduced. 
The experts also recommended that agencies introduce information on minimum retirement dates to employees more than 5 years before they are eligible for full retirement benefits and information on postretirement employment, relocation, and planning for increased leisure time late in employees' careers. The experts told us that all employees need information early and often during their careers, regardless of whether they are covered by CSRS or FERS. However, the focus or content of agency-provided information to employees may need to be tailored to address the unique aspects of each retirement system. For example, the experts told us that it is particularly important for employees covered by FERS to understand the level of allowed contributions to their TSP accounts, the amounts of agency matching contributions that are available, the risk and investment returns associated with each available investment alternative, and the benefits generally associated with beginning to contribute to TSP early in one's career. While employees' decisions have a limited impact on the amount of their future annuities from CSRS and FERS, employees may benefit from receiving information early in their careers on such topics as the future projected value of their annuities, vesting requirements, and available withdrawal options. Employee decisions made with or without information on such topics could affect the amount of an employee's future retirement benefits. OPM, Commerce, DOD, and IRS agreed with our findings. In its written comments (see app. II), OPM added that it was gratified that there is agreement among our retirement experts, OPM, and agencies on the makeup of retirement education programs. OPM said it was working continually to improve the quality and comprehensiveness of benefits information employees receive and that our findings would be very useful in its efforts to enhance the products and services it makes available to agencies. 
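The FERS TSP matching structure discussed above follows a published schedule: an automatic 1% agency contribution, a dollar-for-dollar match on the first 3% of basic pay the employee contributes, and 50 cents on the dollar for the next 2%. The total agency contribution for a given employee contribution rate can be sketched as follows (an illustrative calculation, not code from OPM or the Thrift Board):

```python
def fers_agency_contribution(employee_rate):
    """Agency TSP contribution under FERS as a fraction of basic pay:
    1% automatic, 100% match on the first 3% the employee contributes,
    and 50% match on the next 2%. Employee contributions above 5%
    receive no additional match."""
    automatic = 0.01
    full_match = min(employee_rate, 0.03)
    half_match = 0.5 * max(0.0, min(employee_rate, 0.05) - 0.03)
    return automatic + full_match + half_match

# An employee contributing 5% of basic pay receives the maximum
# agency contribution of 5% (1% automatic + 3% match + 1% match).
print(round(fers_agency_contribution(0.05), 4))
```

The schedule is why the experts stress early TSP participation for FERS employees: an employee contributing nothing still receives the 1% automatic contribution but forgoes up to 4% of pay in matching.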
IRS similarly indicated agreement with OPM's and our experts' recommendations and said that it would consider them in contemplating whether improvements could be made regarding the education provided early within employees' careers. OPM and the agencies we surveyed both played a role in providing retirement education to federal employees covered by CSRS and FERS. As part of its governmentwide responsibility for federal retirement systems, OPM supplemented the guidance it provided to agencies on the design and implementation of retirement education programs by developing educational materials, sponsoring training, and providing technical advice to agencies' benefits personnel. Agencies, which had primary responsibility for developing retirement education programs, generally provided information to employees on topics such as the basic features of CSRS and FERS and financial planning issues for retirement, which were recommended by OPM and the retirement experts with whom we consulted. The agencies distributed this information to employees using a variety of written, interactive, and electronic presentation formats that were available throughout employees' careers, also as recommended by OPM and the experts. In addition to providing agencies with guidance on how to design and implement their retirement education programs (see ch. 2), OPM also provided educational materials and other support to agencies' benefits officers and federal employees. Specifically, OPM developed educational materials that updated agencies on any changes in the law or regulations affecting retirement programs and that agencies could distribute directly to federal employees as part of their programs. OPM also supported agencies by sponsoring training and providing technical assistance to resolve case-specific issues for benefits staff. 
OPM published retirement education materials that agencies could distribute to federal employees or use as guidance in developing their own customized program materials. These materials included brochures and pamphlets as well as videos and CD-ROM programs that provided detailed information on federal retirement programs, such as retirement eligibility requirements, annuity formulas, TSP contribution limits, requirements for maintaining health and life insurance in retirement, and survivor benefits. Agencies and employees could also access OPM's Web site for retirement information and links to other related Web sites, such as the Thrift Board's site for TSP participants. Although OPM indicated in its guidance that supplying retirement education to employees is primarily an agency role, officials told us that they supported agencies' efforts in these ways to help agencies cope with increased workloads and to allow agencies' staff to devote more time to such activities as providing one-on-one counseling. For example, during the 1998 open season, when employees covered by CSRS could elect to transfer to FERS, OPM provided agencies with detailed information on the specifics of each retirement program, frequently asked questions and answers for individuals considering whether to transfer to FERS, and a computer model that allowed agencies to project what an individual's benefits might be, given different scenarios. Consistent with statutory requirements, OPM also supported agencies' retirement education programs by providing training for benefits officers on a periodic basis. Specifically, OPM sponsored quarterly meetings of the interagency network for retirement and insurance, an annual Fall Festival of Training, an annual benefits officer conference, and other training courses on an as-needed basis throughout the year, all of which provided agencies' personnel with both training and networking opportunities. 
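A benefits projection model of the kind OPM supplied during the 1998 open season can be sketched in much simplified form. The sketch below applies the standard FERS basic annuity formula (1% of the high-3 average salary per year of service, rising to 1.1% at age 62 with 20 or more years of service) but ignores special provisions, early retirement reductions, and survivor elections; the scenario figures are hypothetical and the code is not drawn from OPM's model:

```python
def fers_basic_annuity(high3_salary, years_of_service, retirement_age):
    """Simplified FERS basic annuity estimate. Uses the standard
    multipliers only; a real projection model must also handle
    special categories, MRA+10 reductions, sick leave credit, etc."""
    if retirement_age >= 62 and years_of_service >= 20:
        multiplier = 0.011  # enhanced multiplier at age 62 with 20+ years
    else:
        multiplier = 0.01
    return high3_salary * multiplier * years_of_service

# Comparing two hypothetical scenarios for a $70,000 high-3 average:
print(fers_basic_annuity(70000, 25, 60))  # retire at 60 (1.0% multiplier)
print(fers_basic_annuity(70000, 27, 62))  # retire at 62 (1.1% multiplier)
```

Running different retirement dates through such a function is the "different scenarios" idea: working two more years in this example raises the annual annuity through both the extra service and the higher multiplier.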
In support of agencies' retirement counseling services, OPM provided expert advice and assistance on specific technical issues or cases. OPM officials told us that they have also provided direct support to certain agencies during times of unusual requirements, such as when OPM staff helped to facilitate the delivery of federal retirement and insurance benefits to those employees and survivors affected by the Oklahoma City bombing in 1995. At the time of our review, officials told us that OPM was developing a benefits service center that would augment agencies' retirement education programs by providing benefits officers and individual employees with customized benefits and retirement information and counseling. Most of the agencies that we surveyed indicated that OPM was effective and timely in communicating retirement information and benefits changes to a great or very great extent. Moreover, OPM officials told us that they conducted a customer satisfaction survey in fiscal year 1998 that included all agencies' human resources directors and a sample of agencies' benefits officers. They told us that the results of this survey indicated that agencies generally rated OPM guidance materials as excellent and were highly satisfied with OPM's efforts to share information and provide technical assistance. The retirement education programs of the agencies we surveyed generally included those topics recommended by OPM and the experts with whom we consulted. For example, agencies' officials told us that they included information on the basic features of CSRS and FERS, financial planning for retirement, and maintaining federal health and life insurance in retirement. Agencies also provided information to employees on whether and/or how Social Security would contribute to their retirement benefits, particularly for those employees who were covered by FERS. Officials said that agencies provided retirement planning information, but not advice, regardless of the topics included. 
Agencies we surveyed provided their employees with information on a variety of topics related to the basic features of CSRS and FERS. For example, agency materials that we reviewed typically included information on participation and vesting requirements for both the annuity and TSP components of each retirement system, required and voluntary contributions made by agencies and/or employees, minimum age and service requirements for full retirement benefits, as well as survivor and disability insurance benefits. In addition to this descriptive information on federal retirement benefits, the agencies also typically provided information that their employees could use to plan for their future retirements. For example, agencies commonly provided employees with information on their projected future benefits, tools for determining what level of assets might be needed in retirement, and general investment strategies for accumulating additional assets if desired. Because federal employees covered by CSRS and FERS are eligible for continued health and life insurance benefits in retirement, agencies we surveyed emphasized the importance of maintaining these benefits in their retirement education programs. For example, the agencies informed employees that they generally must be enrolled in the federal health and life insurance benefits programs for the full 5 years immediately preceding their retirement to qualify for these benefits. The agencies also provided information on how employees could provide these benefits for their survivors if they so choose. Agencies' officials told us that they also included information in their retirement education programs on how Social Security is integrated with federal annuity and TSP benefits. This information is particularly important to those employees covered by FERS, because Social Security represents one of the three components of their retirement plan. 
Agencies likewise provided information on Social Security to employees covered by CSRS, because a portion of these employees may also be eligible for full or reduced Social Security benefits on the basis of their spouses' work histories, work they did before joining the federal workforce, and/or work they plan on doing following their retirement from federal service. Consistent with OPM and expert recommendations, the officials representing the agencies we surveyed told us that they used a variety of presentation formats in their retirement education programs, including written publications, interactive formats such as seminars and one-on-one counseling, and electronic formats such as Web sites and automated systems. Agencies we surveyed used numerous publications, such as brochures and newsletters, to provide detailed information to employees on their retirement plans and issues to consider in planning for their retirement. Although a few agencies generated some of their own customized materials, the agencies we surveyed generally used written materials made available by OPM or the Thrift Board. According to the agencies' officials, these materials were convenient and high-quality sources of information for employees. Agencies also used Web sites to make many of these publications more readily available. Agencies' officials said that they supplemented their written reference materials by using more interactive formats, in particular, seminars and one-on-one counseling. Agencies offered seminars to expose employees to information on a wide variety of topics, which employees could then individually pursue in more detail as needed or desired. When employees requested one-on-one counseling sessions, agencies provided employees with highly customized retirement planning information, including benefits decisions that needed to be made at retirement and the specific steps needed to apply for retirement. 
To ensure that employees received expert information on a wide range of topics, agencies we surveyed generally contracted out for seminars. However, the agencies did not contract for one-on-one counseling. Agencies' officials told us that their staff were best able to provide counseling to employees, because they had access to employees' personnel records, were well-informed on the inherent complexities of the federal retirement programs, and were in a position to take personnel actions on behalf of employees, if necessary. Agencies we surveyed also used a variety of electronic media to further distribute retirement education to their employees, including videos, telephone response systems, Intranet/Internet Web sites, and computer simulation models. For example, several agencies' officials told us that they videotaped their retirement seminars (1) to make these sessions available to geographically dispersed employees who might otherwise be unable to attend and/or (2) to allow employees to view the seminars multiple times at their convenience. The agencies also commonly provided retirement information using Web sites that included links to other federal sources of retirement information, including OPM, the Thrift Board, and the Social Security Administration. The Air Force, IRS, and HUD also used a centralized and automated call center to provide retirement information to geographically dispersed employees in a manner that they considered to be consistent and cost efficient. Each of these agencies used an interactive system that allowed employees to access a variety of personnel information, including retirement education, by calling a toll-free telephone number. In addition to prerecorded information, employees could reach a benefits counselor who had access to individual personnel records and could provide answers to specific questions.
Agencies' officials said that these centralized and more automated systems were developed in response to downsizing that resulted in the agencies having fewer personnel staff available to provide retirement education to employees. Other agencies, including HRSA and VHA, told us that they were considering adopting a similar approach. OPM officials believed that such systems are likely to become more common across the federal service. Consistent with OPM and expert recommendations, the agencies we surveyed made retirement education available continuously throughout employees' careers. Agencies' officials told us that they view retirement education as a shared responsibility between the agencies and employees. That is, agencies were responsible for making such information readily available; however, employees were also responsible for determining when and how often to seek this information. Agencies' officials told us that they provided brochures and other written retirement education materials to employees early in their careers as a part of new employee orientations. Written materials were then provided periodically on an as-needed basis. For example, agencies' officials told us that they provided their employees with revised publications during the 1998 CSRS to FERS open season. The agencies' officials also told us that their payroll offices mail annual benefits statements to employees that contain information on benefits earned to date and their projected future value at the time of retirement eligibility. Agencies also provided publications on a self-serve basis using centralized benefits resource centers/libraries and/or posting these documents on their retirement Web sites. All of the agencies we surveyed sponsored retirement seminars that were designed for employees who were approximately within 5 years of being eligible to retire.
However, several agencies' officials told us that employees who had more than 5 years before becoming eligible were also allowed to attend these seminars, space permitting. Moreover, five of the surveyed agencies (i.e., the Air Force, NOAA, the Bureau of Reclamation, HRSA, and Customs) sponsored separate midcareer seminars that were designed to address topics most relevant to employees with approximately 15 years of federal service. These agencies' officials told us that they provided these additional seminars because they felt that attending a seminar for the first time at 5 years before retirement might be too late to allow some employees to fully prepare for retirement when they first become eligible. Thus, many federal employees had the option of taking more than one retirement seminar during their careers. Finally, the agencies we surveyed made retirement education available to employees throughout their careers using a variety of other formats, including the Web sites and automated information systems we previously discussed. All of the agencies we surveyed told us that one-on-one counseling was available to employees at any point in their careers upon request. OPM, Commerce, DOD, and IRS agreed with our findings. In its written comments (see app. II), OPM said it believes very strongly that employees should receive information about their benefits regularly throughout their careers so that retirement is simply the culmination of a long planning process. OPM also commented that it is very important to make information available in a variety of ways to meet the varying needs of both employing agencies and their employees. IRS said that it is currently delivering preretirement and ongoing education programs that generally include the information recommended by OPM and our retirement experts, and that it may consider whether improvements could be made to the education provided to employees early in their careers.
Pursuant to a congressional request, GAO reviewed the retirement education that the Office of Personnel Management (OPM) and agencies provide to federal civilian employees covered by the Civil Service Retirement System (CSRS) or the Federal Employees' Retirement System (FERS). GAO noted that: (1) OPM and the experts with whom GAO consulted held generally consistent views regarding the recommended content, presentation formats, and timing of retirement education programs; (2) they believed that these programs should provide employees with information on certain topics, or content, such as plan features and financial planning, and that agencies should consider using multiple formats so as to accommodate employees' varying needs; (3) they also believed that such information should be provided early and throughout employees' careers; (4) OPM provided guidance to agencies on the design and implementation of retirement education programs and supplemented the guidance with educational materials, training, and technical advice for agencies' benefits staff; (5) agencies had primary responsibility for designing and implementing their programs according to their agency-specific needs; (6) the retirement education programs of the agencies reviewed generally included those topics recommended by OPM and the experts; (7) in providing retirement education, agencies' officials said that they made information available on a variety of topics, including the specific features of CSRS and FERS, the requirements for maintaining federal health and life insurance benefits in retirement, and financial planning for retirement; (8) agencies' officials told GAO that they used a wide variety of presentation formats to communicate retirement education to their employees; (9) all of the agencies that GAO reviewed provided employees with written educational materials that were supplemented with interactive seminars and one-on-one counseling; (10) agencies provided retirement planning
information, but not advice, regardless of the presentation format used; (11) agencies' officials also said that they generally provided retirement education to employees during their initial orientation and throughout their careers; (12) all of the agencies in GAO's review sponsored seminars designed for those employees who were nearing retirement eligibility; (13) some agencies also sponsored additional seminars that were specifically designed for employees who had approximately 15 years of federal service to encourage employees to begin planning for their retirement earlier in their careers; (14) agencies also provided one-on-one counseling at any time upon request; and (15) agencies believed that retirement education is a shared responsibility between agencies and employees, and that employees must ultimately decide for themselves whether or when to seek retirement information.
Although State has not yet formally defined what constitutes a soft target, State Department travel warnings and security officers generally consider soft targets to be places where Americans and other westerners live, congregate, shop, or visit, such as hotels, clubs, restaurants, shopping centers, housing compounds, places of worship, schools, or public recreation events. Travel routes of U.S. government employees are also considered soft targets, based on their history of terrorist attacks. The State Department is responsible for protecting more than 60,000 government employees, and their family members, who work in embassies and consulates abroad in 180 countries. Although the host nation is responsible for providing protection to diplomatic personnel and missions under the 1961 Vienna Convention, State has a variety of programs and activities to further protect U.S. officials and family members both inside and outside of the embassy. Following a terrorist attack that involves serious injury or loss of life or significant destruction of a U.S. government mission, State is required to convene an Accountability Review Board (ARB). ARBs investigate the attack and issue a report with recommendations to improve security programs and practices. State is required to report to Congress on actions it has taken in response to ARB recommendations. As of March 2005, there have been 11 ARBs convened since the board's establishment in 1986. Concerned that State was not providing adequate security for U.S. officials and their families outside the embassy, the American Foreign Service Association testified on a number of occasions before the Senate Appropriations Subcommittee on Commerce, Justice, State and the Judiciary on the need for State to expand its security measures. The subcommittee, in its 2002 and subsequent reports, urged State to formulate a strategy for addressing threats to locales abroad that are frequented by U.S. officials and their families. 
It focused its concern about soft targets on schools, residences, places of worship, and other popular gathering places. In fiscal years 2003, 2004, and 2005, Congress earmarked a total of $15 million for soft target protection each year, particularly to address security vulnerabilities at overseas schools. Moreover, in 2005, the Senate appropriations report directed State to develop a comprehensive strategy for addressing the threats posed to soft targets no later than June 1, 2005. State has a number of programs and activities designed to protect U.S. officials and their families outside the embassy, including security briefings, protection at schools and residences, and surveillance detection. However, State has not developed a comprehensive strategy that clearly identifies safety and security requirements and resources needed to protect U.S. officials and their families. State officials cited several complex issues involved with protecting soft targets. As the terrorist threat grows, State is being asked to provide ever greater levels of protection to more people in more dangerous locations, and they questioned how far State's protection of soft targets should extend. They said that providing U.S. government funds to protect U.S. officials and their families at private sector locations or places of worship was unprecedented and raised a number of legal and financial challenges--including sovereignty and separation of church and state--that have not been resolved by the department. State officials also indicated they have not yet fully defined the universe of soft targets--including taking an inventory of potentially vulnerable facilities and areas where U.S. officials and their families congregate--that would be necessary to complete a strategy. Although State has not developed a comprehensive soft target strategy, some State officials told us that several existing programs could help protect soft targets.
However, they agreed that these existing programs are not tied together in an overall strategy. State officials agreed that they should undertake a formal evaluation of how existing programs can be more effectively integrated as part of a soft target strategy, and whether new programs might be needed to fill any potential gaps. A senior official with State's Bureau of Diplomatic Security (DS) told us that in January 2005, DS formed a working group to develop a comprehensive soft targets strategy to address the appropriate level of protection of U.S. officials and their families at schools, residences, and other areas outside the embassy. According to State, the strategy should be completed by June 1, 2005. To identify vulnerabilities in State's soft target protection, and determine if State had corrected these vulnerabilities, we reviewed the ARB reports conducted after U.S. officials were assassinated outside the embassy. Of the 11 ARBs conducted since 1986, more have focused on attacks against soft targets (5) than on attacks against embassies (2) or other U.S. facilities (4). We found that, 17 years after the first soft target ARB, State has still not addressed the vulnerabilities and recommendations identified in that and more recent reports: specifically, the need for hands-on counterterrorism training and accountability mechanisms to promote compliance with personal security procedures. Despite State's assurances to Congress that it would implement recommendations aimed at reducing these vulnerabilities, we found that State's hands-on training course is still not mandatory, and procedures to monitor compliance with security requirements have not been fully implemented. We also found that ambassadors, deputy chiefs of mission, and regional security officers were not trained in how to implement embassy procedures intended to protect U.S. officials outside the embassies.
Since 1988, State has reported to Congress that it agreed with ARB recommendations to provide counterterrorism training. For example, in 1995, State reported that it "re-established the Diplomatic Security Antiterrorism Course (DSAC) for those going to critical-threat posts to teach surveillance detection and avoidance, and defensive and evasive driving techniques." In 2003, State reported it agreed with the recommendations that employees from all agencies should receive security briefings and indicated that it would review the adequacy of its training and other personal security measures. Although State implemented the board's recommendation to require security briefings for all staff, hands-on counterterrorism training is still not mandatory, and few officials or family members have taken DSAC. Senior DS officials said they recognize that security briefings are no longer adequate to protect against current terrorist threats. In June 2004, DS developed a proposal to make DSAC training mandatory. DS officials said that DSAC training should be required for all officials, but that issues such as costs and adequacy of training facilities were constraining factors. As of April 18, 2005, the proposal had not been approved. Although State has agreed on the need to implement an accountability system to promote compliance with personal security procedures since 1988, there is still no such system in place. Since 2003, State has tried to incorporate some limited accountability measures to promote compliance. However, based on our work at five posts, we found that post officials are following few, if any, of these new procedures. In response to a 2003 ARB, State took a number of steps to improve compliance with State's personal security procedures for officials outside the embassy.
In June 2003, State revised its annual assessment criteria to take personal security into account when preparing performance appraisals, and in December 2003, State revised its Foreign Affairs Manual to mandate and improve implementation of personal security practices. In May 2004, State notified posts worldwide on use of a Personal Security Self-Assessment Checklist to improve security outside the embassy. However, none of the posts we visited were even aware of these and other key policy changes. For example, none of the officials we met with, including ambassadors, deputy chiefs of mission, regional security officers, or staff, were aware that the annual ratings process now includes an assessment of whether staff are following the personal security measures or that managers are now responsible for the reasonable oversight of subordinates' personal security activities. Furthermore, none of the supervisors were aware of the checklist, and we found no one was using the checklists to improve their personal security practices. In explaining why posts were not aware of the new personal security regulations, DS officials noted that posts were often overwhelmed by work and may have simply missed the cables and changes in the Foreign Affairs Manual. They also noted that changes like this take time to be implemented globally. Furthermore, State's original plan, to use the checklist as an accountability mechanism, was dropped before it was implemented. In its June 2003 report to Congress on implementation of the 2003 ARB recommendations, State stipulated that staff would be required to use the checklist periodically and that managers would review the checklists to ensure compliance. However, State never implemented this accountability mechanism out of concern it would consume too much staff time. We also found that key officials receive no training on how to promote personal security outside the embassy. 
According to a number of State officials, improvements in this area must start with the ambassador and the deputy chief of mission. Yet no ambassadors, deputy chiefs of mission, or regional security officers receive any training in how to maximize soft target protection at embassies. DS officials agreed that this critical component should be added to their training curriculum. In response to several congressional committee reports, State began developing a "Soft Targets" program in 2003 to help protect overseas schools against terrorism. The program has four proposed phases. The first two phases are focused on department-sponsored schools that have previously received grant funding from the State Department, and the third and fourth phases focus on the nondepartment-sponsored schools with American students. In phase one, department-sponsored schools were offered funding for basic security hardware such as shatter-resistant window film, two-way radios for communication between the school and the embassy, and public address systems. As of November 19, 2004, 189 department-sponsored schools had received $10.5 million in funding for security equipment in phase one of the program. The second phase provided additional security enhancements, such as perimeter fencing, walls, lighting, gates, and guard booths. As of November 2004, State has obligated over $15 million for phase two security upgrades. For phases three and four, State plans to provide similar types of security upgrades to eligible nondepartment-sponsored schools. The program also funds security enhancements for off-compound embassy employee association facilities, such as recreation centers. Security upgrades include funding for perimeter walls and shatter-resistant window film. In fiscal year 2004, almost $1 million was obligated for these enhancements.
Regional Security Officers (RSOs) said that identifying and funding security enhancements at department-sponsored schools was straightforward because of the department's pre-existing relationship with these schools. However, they said it has been difficult to identify eligible nondepartment-sponsored schools for phase three because of the vast number of schools that might qualify, the lack of any pre-existing relationship, and limited guidance on eligibility criteria. For example, some RSOs questioned how many American students should attend a school for it to be eligible for security upgrades. Some RSOs were considering funding schools with just a few American students. Moreover, one RSO was considering providing security upgrades to informal educational facilities, such as those attended by children of U.S. missionaries. State is trying to determine the appropriate scope of the program, and sent cables to posts in the summer of 2004 asking RSOs to gather data on nondepartment-sponsored schools attended by American students, particularly U.S. government dependents. State officials acknowledged that the process of gathering data has been difficult since there are hundreds of such schools worldwide. According to an Overseas Buildings Operations (OBO) official, as of December 2004, only 81 of the more than 250 posts had provided responses regarding such schools. OBO plans to use the data to develop criteria for which schools might be eligible for funding under phase three and, eventually, phase four of the program. In anticipation of any future phases of the Soft Targets program, RSOs have been asked to identify other facilities and areas that Americans frequent, beyond schools and off-compound employee association facilities, that may be vulnerable to a terrorist attack. State Department officials were concerned about the large number of sites RSOs could identify as potential soft target sites, and the department's ability to protect them.
State has a responsibility for providing a secure housing environment for U.S. officials and their families overseas. However, we found that State's primary program in place to protect U.S. officials and their families at residences, the Residential Security program, is principally designed to deter crime, not terrorism. The program includes basic security hardware and guard service; and as the crime threat increases, the hardware and guard services can be correspondingly increased at the residences. State officials said that while the Residential Security program, augmented by the local guard program, provides effective deterrence against crime, it provides limited or no deterrence against a residential terrorist attack. State officials told us that the best residential scenario for posts is to have a variety of housing options, including apartments and single-family homes, to reduce the potential for a catastrophic attack. To provide greater protection against terrorist attacks, most posts we visited used surveillance detection teams in the residential areas. The program is intended to enhance the embassies' ability to detect preoperational terrorist surveillance and stop the attack. According to State's guidance, surveillance detection units are primarily designed to protect embassies, and their use in residential areas is discouraged. However, we found RSOs at some of the posts we visited were routinely using surveillance detection units to cover areas outside the embassies, such as residences, school bus stops and routes, and schools attended by U.S. embassy dependents. RSOs told us that the Surveillance Detection program is instrumental in providing deterrence against potential terrorist attacks, and argued that the current program guidelines are too restrictive.
Senior State officials agreed that the use of surveillance detection in soft target areas could be beneficial, but noted that the program is labor intensive and expensive, and any expansion of the program could require significant funding. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have. For questions regarding this testimony, please call Diana Glod at (202) 512-8945. Individuals making key contributions to this testimony included Edward George and Andrea Miller. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
U.S. government officials working overseas are at risk from terrorist threats. Since 1968, 32 embassy officials have been attacked--23 fatally--by terrorists outside the embassy. As the State Department continues to improve security at U.S. embassies, terrorist groups are likely to focus on "soft" targets--such as homes, schools, and places of worship. GAO was asked to determine whether State has a strategy for soft target protection; assess State's efforts to protect U.S. officials and their families while traveling to and from work; assess State's efforts overseas to improve security at schools attended by the children of U.S. officials; and describe issues related to protection at their residences. State has a number of programs and activities designed to protect U.S. officials and their families outside the embassy, including security briefings, protection at schools and residences, and surveillance detection. However, State has not developed a comprehensive strategy that clearly identifies safety and security requirements and resources needed to protect U.S. officials and their families abroad from terrorist threats outside the embassy. State officials raised a number of challenges related to developing and implementing such a strategy. They also indicated that they have recently initiated an effort to develop a soft targets strategy. As part of this effort, State officials said they will need to address and resolve a number of legal and financial issues. Three State-initiated investigations into terrorist attacks against U.S. officials outside of embassies found that the officials lacked the necessary hands-on training to help counter the attack. The investigations recommended that State provide hands-on counterterrorism training and implement accountability measures to ensure compliance with personal security procedures.
After each of these investigations, State reported to Congress that it planned to implement the recommendations, yet we found that State's hands-on training course is not required, the accountability procedures have not been effectively implemented, and key embassy officials are not trained to implement State's counterterrorism procedures. State instituted a program in 2003 to improve security at schools, but its scope has not yet been fully determined. In fiscal years 2003 and 2004, Congress earmarked $29.8 million for State to address security vulnerabilities against soft targets, particularly at overseas schools. The multiphase program provides basic security hardware to protect U.S. officials and their families at schools and some off-compound employee association facilities from terrorist threats. However, during our visits to posts, regional security officers were unclear about which schools could qualify for security assistance under phase three of the program. State's program to protect U.S. officials and their families at their residences is primarily designed to deter crime, not terrorism. The Residential Security program includes basic security hardware and local guards, which State officials said provide effective deterrence against crime, though only limited deterrence against a terrorist attack. To minimize the risk and consequences of a residential terrorist attack, some posts we visited limited the number of U.S. officials living in specific apartment buildings. To provide greater protection against terrorist attacks, some posts we visited used surveillance detection teams in residential areas.
Pipeline transportation for hazardous liquids and natural gas is the safest form of freight transportation. By one measure, the annual number of accidents, the hazardous liquid pipeline industry's safety record has greatly improved over the past 10 years. (See fig. 1.) From 1994 through 2003, accidents on interstate hazardous liquid pipelines decreased by almost 49 percent from 245 in 1994 to 126 in 2003. However, the industry's safety record for these pipelines has not improved for accidents with the greatest consequences--those resulting in a fatality, injury, or property damage totaling $50,000 or more--which we term serious accidents. The number of serious accidents stayed about the same over the 10-year period--about 88 every year. The overall accident rate for hazardous liquid pipelines--which considers both the amounts of products and the distances shipped--decreased from about 0.41 accidents per billion ton-miles shipped in 1994 to about 0.25 accidents per billion ton-miles shipped in 2002. The accident rate for serious interstate hazardous liquid pipeline accidents stayed the same, averaging about 0.15 accidents per billion ton-miles shipped from 1994 through 2002. In contrast to the decreasing number of accidents overall for hazardous liquid pipelines, the annual number of accidents on interstate natural gas pipelines increased by almost 20 percent from 81 in 1994 to 97 in 2003. (See fig. 2.) The number of serious accidents on interstate natural gas pipelines also increased, from 64 in 1994 to 84 in 2003, though they have fluctuated considerably over this time. Information on accident rates for natural gas pipelines is not available because of the lack of data on the amount of natural gas shipped through pipelines. For both hazardous liquid and natural gas pipelines, the lack of improvement in the number of serious accidents may be due in part to the relatively small number of these accidents. 
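The percentage changes quoted above follow directly from the reported accident counts. As a minimal arithmetic check (using only the figures cited in this testimony, not OPS's underlying accident database):

```python
# Check of the percentage changes cited in the testimony; the raw counts
# are the figures quoted above, not values drawn from OPS accident files.
hl_1994, hl_2003 = 245, 126  # interstate hazardous liquid pipeline accidents
ng_1994, ng_2003 = 81, 97    # interstate natural gas pipeline accidents

hl_decrease = (hl_1994 - hl_2003) / hl_1994 * 100  # percent decrease
ng_increase = (ng_2003 - ng_1994) / ng_1994 * 100  # percent increase

print(f"Hazardous liquid accidents fell about {hl_decrease:.0f}%")  # ~49%
print(f"Natural gas accidents rose about {ng_increase:.0f}%")       # ~20%
```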
OPS, within the Department of Transportation's Research and Special Programs Administration (RSPA), administers the national regulatory program to ensure the safe transportation of natural gas and hazardous liquids by pipeline. The office attempts to ensure the safe operation of pipelines through regulation, national consensus standards, research, education (e.g., to prevent excavation-related damage), oversight of the industry through inspections, and enforcement when safety problems are found. The office uses a variety of enforcement tools, such as compliance orders and corrective action orders that require pipeline operators to correct safety violations, notices of amendment to remedy deficiencies in operators' procedures, administrative actions to address minor safety problems, and civil penalties. OPS is a small federal agency. In fiscal year 2003, OPS employed about 150 people, about half of whom were pipeline inspectors. Before imposing a civil penalty on a pipeline operator, OPS issues a notice of probable violation that documents the alleged violation and a notice of proposed penalty that identifies the proposed civil penalty amount. Failure by an operator to inspect the pipeline for leaks or unsafe conditions is an example of a violation that may lead to a civil penalty. OPS then allows the operator to present evidence either in writing or at an informal hearing. Attorneys from RSPA's Office of Chief Counsel preside over these hearings. Following the operator's presentation, the civil penalty may be affirmed, reduced, or withdrawn. If the hearing officer determines that a violation did occur, OPS's associate administrator issues a final order that requires the operator to correct the safety violation (if a correction is needed) and pay the penalty (called the "assessed penalty"). The operator has 20 days after the final order is issued to pay the penalty. The Federal Aviation Administration (FAA) collects civil penalties for OPS.
From 1992 through 2002, federal law allowed OPS to assess up to $25,000 for each day a violation continued, not to exceed $500,000 for any related series of violations. In December 2002, the Pipeline Safety Improvement Act increased these amounts to $100,000 and $1 million, respectively. The effectiveness of OPS's enforcement strategy cannot be determined because OPS has not incorporated three key elements of effective program management--clear performance goals for the enforcement program, a fully defined strategy for achieving these goals, and performance measures linked to goals that would allow an assessment of the enforcement strategy's impact on pipeline safety. OPS's enforcement strategy has undergone significant changes in the last 5 years. Before 2000, the agency emphasized partnering with the pipeline industry to improve pipeline safety rather than punishing noncompliance. In 2000, in response to concerns that its enforcement was weak and ineffective, the agency decided to institute a "tough but fair" enforcement approach and to make greater use of all its enforcement tools, including larger and more frequent civil penalties. In 2001, to further strengthen its enforcement, OPS began issuing more corrective action orders requiring operators to address safety problems that led or could lead to pipeline accidents. In 2002, OPS created a new Enforcement Office to focus more on enforcement and help ensure consistency in enforcement decisions. However, this new office is not yet fully staffed, and key positions remain vacant. In 2002, OPS began to enforce its new integrity management and operator qualification standards in addition to its minimum safety standards. Initially, while operators were gaining experience with the new, complex integrity management standards, OPS primarily used notices of amendment, which require improvements in procedures, rather than stronger enforcement actions. 
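The statutory limits described above amount to a per-day accrual with a ceiling for any related series of violations. A small illustrative function (an assumption for exposition only; OPS's actual penalty assessments weigh many case-specific factors) shows how the two caps interact:

```python
def max_civil_penalty(days: int, per_day: int, series_cap: int) -> int:
    """Maximum statutory penalty for a violation continuing `days` days,
    capped for any related series of violations."""
    return min(days * per_day, series_cap)

# Limits in effect 1992 through 2002: $25,000 per day, $500,000 per series
print(max_civil_penalty(10, 25_000, 500_000))   # 250000
print(max_civil_penalty(30, 25_000, 500_000))   # 500000 (series cap binds)

# Pipeline Safety Improvement Act of 2002: $100,000 per day, $1 million cap
print(max_civil_penalty(30, 100_000, 1_000_000))  # 1000000 (cap binds)
```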
Now that operators have this experience, OPS has begun to make greater use of civil penalties in enforcing these standards. OPS has also recently begun to reengineer its enforcement program. Efforts are under way to develop a new enforcement policy and guidelines, develop a streamlined process for handling enforcement cases, modernize and integrate the agency's inspection and enforcement databases, and hire additional enforcement staff. However, as I will now discuss, OPS has not put in place key elements of effective management that would allow it to determine the impact of its evolving enforcement program on pipeline safety. Although OPS has overall performance goals, it has not established specific goals for its enforcement program. According to OPS officials, the agency's enforcement program is designed to help achieve the agency's overall performance goals of (1) reducing the number of pipeline accidents by 5 percent annually and (2) reducing the amount of hazardous liquid spills by 6 percent annually. Other agency efforts--including the development of a risk-based approach to finding and addressing significant threats to pipeline safety and of education to prevent excavation-related damage to pipelines--are also designed to help achieve these goals. OPS's overall performance goals are useful because they identify the end outcomes, or ultimate results, that OPS seeks to achieve through all its efforts. However, OPS has not established performance goals that identify the intermediate outcomes, or direct results, that OPS seeks to achieve through its enforcement program. Intermediate outcomes show progress toward achieving end outcomes. For example, enforcement actions can result in improvements in pipeline operators' safety performance--an intermediate outcome that can then result in the end outcome of fewer pipeline accidents and spills. OPS is considering establishing a goal to reduce the time it takes the agency to issue final enforcement actions. 
While such a goal could help OPS improve the management of the enforcement program, it does not reflect the various intermediate outcomes the agency hopes to achieve through enforcement. Without clear goals for the enforcement program that specify intended intermediate outcomes, agency staff and external stakeholders may not be aware of what direct results OPS is seeking to achieve or how enforcement efforts contribute to pipeline safety. OPS has not fully defined its strategy for using enforcement to achieve its overall performance goals. According to OPS officials, the agency's increased use of civil penalties and corrective action orders reflects a major change in its enforcement strategy. Although OPS began to implement these changes in 2000, it has not yet developed a policy that defines this new, more aggressive enforcement strategy or describes how it will contribute to the achievement of its performance goals. In addition, OPS does not have up-to-date, detailed internal guidelines on the use of its enforcement tools that reflect its current strategy. Furthermore, although OPS began enforcing its integrity management standards in 2002 and received greater enforcement authority under the 2002 pipeline safety act, it does not yet have guidelines in place for enforcing these standards or implementing the new authority provided by the act. According to agency officials, OPS management communicates enforcement priorities and ensures consistency in enforcement decisions through frequent internal meetings and detailed inspection protocols and guidance. Agency officials recognize the need to develop an enforcement policy and up-to-date detailed enforcement guidelines and have been working to do so. To date, the agency has completed an initial set of enforcement guidelines for its operator qualification standards and has developed other draft guidelines. 
However, because of the complexity of the task, agency officials do not expect that the new enforcement policy and remaining guidelines will be finalized until sometime in 2005. The development of an enforcement policy and guidelines should help define OPS's enforcement strategy; however, it is not clear whether this effort will link OPS's enforcement strategy with intermediate outcomes, since agency officials have not established performance goals specifically for their enforcement efforts. We have reported that such a link is important. According to OPS officials, the agency currently uses three performance measures and is considering three additional measures to determine the effectiveness of its enforcement activities and other oversight efforts. (See table 1.) The three current measures provide useful information about the agency's overall efforts to improve pipeline safety, but do not clearly indicate the effectiveness of OPS's enforcement strategy because they do not measure the intermediate outcomes of enforcement actions that can contribute to pipeline safety, such as improved compliance. The three measures that OPS is considering could provide more information on the intermediate outcomes of the agency's enforcement strategy, such as the frequency of repeat violations and the number of repairs made in response to corrective action orders, as well as other aspects of program performance, such as the timeliness of enforcement actions. We have found that agencies that are successful in measuring performance strive to establish measures that demonstrate results, address important aspects of program performance, and provide useful information for decision-making. 
While OPS's new measures may produce better information on the performance of its enforcement program than is currently available, OPS has not adopted key practices for achieving these characteristics of successful performance measurement systems:

Measures should demonstrate results (outcomes) that are directly linked to program goals. Measures of program results can be used to hold agencies accountable for the performance of their programs and can facilitate congressional oversight. If OPS does not set clear goals that identify the desired results (intermediate outcomes) of enforcement, it may not choose the most appropriate performance measures. OPS officials acknowledge the importance of developing such goals and related measures but emphasize that the diversity of pipeline operations and the complexity of OPS's regulations make this a challenging task.

Measures should address important aspects of program performance and take priorities into account. An agency official told us that a key factor in choosing final measures would be the availability of supporting data. However, the most essential measures may require the development of new data. For example, OPS has developed databases that will track the status of safety issues identified in integrity management and operator qualification inspections, but it cannot centrally track the status of safety issues identified in enforcing its minimum safety standards. Agency officials told us that they are considering how to add this capability as part of an effort to modernize and integrate their inspection and enforcement databases.

Measures should provide useful information for decision-making, including adjusting policies and priorities. OPS uses its current measures of enforcement performance in a number of ways, including monitoring pipeline operators' safety performance and planning inspections. While these uses are important, they are of limited help to OPS in making decisions about its enforcement strategy.
OPS has acknowledged that it has not used performance measurement information in making decisions about its enforcement strategy. OPS has made progress in this area by identifying possible new measures of enforcement results (outcomes) and other aspects of program performance, such as indicators of the timeliness of enforcement actions, that may prove more useful for managing the enforcement program. In 2000, in response to criticism that its enforcement activities were weak and ineffective, OPS increased both the number and the size of the civil monetary penalties it assessed. Pipeline safety stakeholders expressed differing opinions about whether OPS's civil penalties are effective in deterring noncompliance with pipeline safety regulations. OPS assessed more civil penalties during the past 4 years under its current "tough but fair" enforcement approach than it did in the previous 5 years, when it took a more lenient enforcement approach. (See fig. 3.) From 2000 through 2003, OPS assessed 88 civil penalties (22 per year on average) compared with 70 civil penalties from 1995 through 1999 (about 14 per year on average). For the first 5 months of 2004, OPS proposed 38 civil penalties. While the recent increase in the number and the size of civil penalties may reflect OPS's new "tough but fair" enforcement approach, other factors, such as more severe violations, may be contributing to the increase as well. Overall, OPS does not use civil penalties extensively. Civil penalties represent about 14 percent (216 out of 1,530) of all enforcement actions taken over the past 10 years. OPS makes more extensive use of other types of enforcement actions that require pipeline operators to fix unsafe conditions and improve inadequate procedures, among other things. In contrast, civil penalties represent monetary sanctions for violating safety regulations but do not require safety improvements. 
OPS may increase its use of civil penalties as it begins to use them to a greater degree for violations of its integrity management standards. The average size of the civil penalties has increased. For example, from 1995 through 1999, the average assessed civil penalty was about $18,000. From 2000 through 2003, the average assessed civil penalty increased by 62 percent to about $29,000. Assessed penalty amounts ranged from $500 to $400,000. In some instances, OPS reduces proposed civil penalties when it issues its final order. We found that penalties were reduced 31 percent of the time during the 10-year period covered by our work (66 of 216 instances). These penalties were reduced by about 37 percent (from a total of $2.8 million to $1.7 million). This analysis does not include the extraordinarily large penalty of $3.05 million that OPS proposed as a result of the Bellingham, Washington, accident because including it would have skewed our results by making the average penalty appear to be larger than it actually is. OPS has assessed the operator $250,000 as of July 2004. Had we included this penalty in our analysis, we would find that over this period OPS reduced total proposed penalties by about two-thirds, from about $5.8 million to about $2 million. OPS's database does not provide summary information on why penalties are reduced. According to an OPS official, the agency reduces penalties when an operator presents evidence that the OPS inspector's finding is weak or wrong or when the pipeline's ownership changes during the period between the proposed and the assessed penalty. It was not practical for us to gather information on a large number of penalties that were reduced, but we did review several to determine the reasons for the reductions. OPS reduced one of the civil penalties we reviewed because the operator provided evidence that OPS inspectors had miscounted the number of pipeline valves that OPS said the operator had not inspected.
Since the violation was not as severe as OPS had stated, OPS reduced the proposed penalty from $177,000 to $67,000. Because we reviewed only a small number of instances in which penalties were reduced, we cannot say whether these examples are typical. Of the 216 penalties that OPS assessed from 1994 through 2003, pipeline operators paid the full amount 93 percent of the time (200 instances) and reduced amounts 1 percent of the time (2 instances). (See fig. 4.) Fourteen penalties (6 percent) remain unpaid, totaling about $836,700 (or 18 percent of penalty amounts). In two instances, operators paid reduced amounts. We followed up on one of these assessed penalties. In this case, the operator requested that OPS reconsider the assessed civil penalty and OPS reduced it from $5,000 to $3,000 because the operator had a history of cooperation and OPS wanted to encourage future cooperation. Neither FAA's nor OPS's data show why the 14 unpaid penalties have not been collected. From the information provided by both agencies, we determined that OPS closed 2 of the penalty cases without collecting the penalties, operators are appealing 5 penalties, OPS recently assessed 3 penalties, and OPS acknowledged that 4 penalties (totaling $45,200) should have been collected. Although OPS has increased both the number and the size of the civil penalties it has imposed, the effect of this change on deterring noncompliance with safety regulations, if any, is not clear. The stakeholders we spoke with expressed differing views on whether the civil penalties deter noncompliance. The pipeline industry officials we contacted believed that, to a certain extent, OPS's civil penalties encourage pipeline operators to comply with pipeline safety regulations because they view all of OPS's enforcement actions as deterrents to noncompliance. However, some industry officials said that OPS's enforcement actions are not their primary motivation for safety. 
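The penalty statistics in the two preceding paragraphs are internally consistent, as a short sketch confirms. The counts and the rounded dollar totals are those quoted in this testimony; note that the rounded $2.8 million and $1.7 million figures yield roughly a 39 percent cut, close to the "about 37 percent" computed from unrounded amounts:

```python
# Civil penalty statistics quoted above (1994-2003, excluding Bellingham).
total_assessed = 216
paid_full, paid_reduced, unpaid = 200, 2, 14
assert paid_full + paid_reduced + unpaid == total_assessed  # categories sum

reduced_cases = 66
share_reduced = reduced_cases / total_assessed * 100  # ~31% of penalties cut

proposed, assessed = 2.8e6, 1.7e6  # rounded totals for the reduced penalties
dollar_cut = (proposed - assessed) / proposed * 100  # ~39% with rounded inputs

print(f"{share_reduced:.0f}% of assessed penalties were reduced")
print(f"reduced penalties were cut by about {dollar_cut:.0f}% in dollar terms")
```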
Instead, they said that pipeline operators are motivated to operate safely because they need to avoid any type of accident, incident, or OPS enforcement action that impedes the flow of products through the pipeline and hinders their ability to provide good service to their customers. Pipeline industry officials also said that they want to operate safely and avoid pipeline accidents because accidents generate negative publicity and may result in costly private litigation against the operator. Most of the interstate agents, representatives of their associations, and insurance company officials expressed views similar to those of the pipeline industry officials, saying that they believe civil penalties deter operators' noncompliance with regulations to a certain extent. However, a few disagreed with this point of view. For example, the state agency representatives and a local government official said that OPS's civil penalties are too small to be deterrents. Pipeline safety advocacy groups that we talked to also said that the civil penalty amounts OPS imposes are too small to have any deterrent effect on pipeline operators. As discussed earlier, for 2000 through 2003, the average assessed penalty was about $29,000. According to economic literature on deterrence, pipeline operators may be deterred if they expect a sanction, such as a civil penalty, to exceed any benefits of noncompliance. Such benefits could, in some cases, be lower operating costs. The literature also recognizes that the negative consequences of noncompliance--such as those stemming from lawsuits, bad publicity, and the value of the product lost from accidents--can deter noncompliance along with regulatory agency oversight. Thus, for example, the expected costs of a legal settlement could overshadow the lower operating costs expected from noncompliance, and noncompliance might be deterred. Mr. Chairman, this concludes my prepared statement. 
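The deterrence condition described in the economic literature reduces to an expected-value comparison: noncompliance is deterred when the expected cost of a sanction, plus other expected consequences such as litigation and lost product, exceeds the benefit of not complying. A hypothetical sketch (the detection probability and dollar amounts are illustrative assumptions, not figures from our work):

```python
def is_deterred(p_detection, penalty, other_costs, benefit_of_noncompliance):
    """An operator is deterred when the expected cost of noncompliance
    (detection probability times the sanction, plus other expected
    consequences such as litigation, bad publicity, and lost product)
    exceeds the expected benefit of noncompliance."""
    expected_cost = p_detection * penalty + other_costs
    return expected_cost > benefit_of_noncompliance

# Illustrative assumption: a $29,000 average penalty with a 50% chance of
# detection does not by itself offset $100,000 in avoided operating costs...
print(is_deterred(0.5, 29_000, 0, 100_000))          # False
# ...but adding expected litigation and accident costs can tip the balance.
print(is_deterred(0.5, 29_000, 500_000, 100_000))    # True
```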
We expect to report more fully on these and other issues in our report that we expect to issue later this week. We also anticipate making recommendations to improve OPS's ability to demonstrate the effectiveness of its enforcement strategy and to improve OPS's and FAA's management controls over the collection of civil penalties. I would be pleased to respond to any questions that you or Members of the Subcommittee might have. For information on this testimony, please contact Katherine Siggerud at (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony are Jennifer Clayborne, Judy Guilliams- Tapia, Bonnie Pignatiello Leer, Gail Marnik, James Ratzenberger, and Gregory Wilmoth. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Interstate pipelines carrying natural gas and hazardous liquids (such as petroleum products) are safer for the public than other modes of freight transportation. The Office of Pipeline Safety (OPS), the federal agency that administers the national regulatory program to ensure safe pipeline transportation, has been undertaking a broad range of activities to make pipeline transportation safer. However, the number of serious accidents--those involving deaths, injuries, and property damage of $50,000 or more--has not fallen. When safety problems are found, OPS can take enforcement action against pipeline operators, including requiring the correction of safety violations and assessing monetary sanctions (civil penalties). This testimony is based on ongoing work for the House Committee on Energy and Commerce and for other committees, as required by the Pipeline Safety Improvement Act of 2002. The testimony provides preliminary results on (1) the effectiveness of OPS's enforcement strategy and (2) OPS's assessment of civil penalties. The effectiveness of OPS's enforcement strategy cannot be determined because the agency has not incorporated three key elements of effective program management--clear program goals, a well-defined strategy for achieving goals, and performance measures that are linked to program goals. Without these key elements, the agency cannot determine whether recent and planned changes in its strategy will have the desired effects on pipeline safety. Over the past several years, OPS has focused primarily on other efforts--such as developing a new risk-based regulatory approach--that it believes will change the safety culture of the industry. OPS has also become more aggressive in enforcing its regulations and now plans to further strengthen the management of its enforcement program. In particular, OPS is developing an enforcement policy that will help define its enforcement strategy and has taken initial steps toward identifying new performance measures.
However, OPS does not plan to finalize the policy until 2005 and has not adopted key practices for achieving successful performance measurement systems, such as linking measures to goals. OPS increased both the number and the size of the civil penalties it assessed against pipeline operators over the last 4 years (2000-2003) following a decision to be "tough but fair" in assessing penalties. OPS assessed an average of 22 penalties per year during this period, compared with an average of 14 per year for the previous 5 years (1995-1999), a period of more lenient "partnering" with industry. In addition, the average penalty increased from $18,000 to $29,000 over the two periods. About 94 percent of the 216 penalties levied from 1994 through 2003 have been paid. The civil penalty is one of several actions OPS can take when it finds a violation, and these penalties represent about 14 percent of all enforcement actions over the past 10 years. While OPS has increased the number and the size of its civil penalties, stakeholders--including industry, state, and insurance company officials and public advocacy groups--expressed differing views on whether these penalties deter noncompliance with safety regulations. Some, such as pipeline operators, thought that any penalty was a deterrent if it kept the pipeline operator in the public eye, while others, such as safety advocates, told us that the penalties were too small to be effective sanctions.
Our proactive testing found ineffective HUBZone program eligibility controls, exposing the federal government to fraud and abuse. In a related report and testimony released concurrently with this testimony, we reported that SBA generally did not verify the data entered by firms in its online application system. We found that SBA was therefore vulnerable to certifying firms based on fraudulent application information. Our use of bogus firms, fictitious employees, and fabricated explanations and documents to obtain HUBZone certification demonstrated the ease with which HUBZone certification could be obtained by providing fraudulent information to SBA's online application system. In all four instances, we successfully obtained HUBZone certification from SBA for the bogus firms represented by our applications. See figure 1 for an example of one of the acceptance letters we received. Although SBA requested documentation to support one of our applications, the agency failed to recognize that the information we provided in all four applications represented bogus firms that did not actually meet HUBZone requirements. For instance, the principal office addresses we used included a virtual office suite from which we leased part-time access to office space and mail delivery services for $250 a month, two different retail postal service centers from which we leased mailboxes for less than $24 a month, and a Starbucks coffee store. An Internet search on any of the addresses we provided would have raised "red flags" and should have led to further investigation by SBA, such as a site visit, to determine whether the principal office address met program eligibility requirements.
Because HUBZone certification provides an opening to billions of dollars in federal contracts, approval of ineligible firms for participation in the program exposes the federal government to contracting fraud and abuse, and moreover, can result in the exclusion of legitimate HUBZone firms from obtaining government contracts. We provide specific details regarding each application below. Fictitious Application One: Our investigators submitted this fictitious application and received HUBZone certification 3 weeks later. To support the application, we leased, at a cost of $250 a month, virtual office services from an office suite located in a HUBZone and gave this address as our principal office location. Specifically, the terms of the lease allowed us to schedule use of an office space up to 16 hours per month and to have mail delivered to the suite. Our HUBZone application also indicated that our bogus firm employed two individuals, with one of the employees residing in a HUBZone. Two business days after submitting the application, an SBA official emailed us requesting a copy of the lease for our principal office location and proof of residency for our employee. We created the documentation using publicly available hardware and software and faxed copies to SBA to comply with the request. SBA then requested additional supporting documentation related to utilities and cancelled checks. After we fabricated this documentation and provided it to SBA, no further documentation was requested before SBA certified our bogus firm. Fictitious Application Two: Four weeks after our investigators submitted this fictitious application, SBA certified the bogus firm to participate in the HUBZone program. For this bogus firm, our "principal office" was a mailbox located in a HUBZone that our investigators leased from a retail postal service provider for less than $24 a month. The application noted that our bogus firm had nine employees, four of whom lived in a HUBZone area.
SBA requested a clarification regarding a discrepancy in the application information, but no further contact was made before we received our HUBZone certification. Fictitious Application Three: Our investigators completed this fictitious application and received HUBZone certification 2 weeks later. For the principal office address, our investigators used a Starbucks coffee store located in a HUBZone. In addition, our investigators indicated that our bogus firm employed two individuals, with one of the employees residing in a HUBZone area. SBA did not request any supporting documentation or explanations for this bogus firm prior to granting HUBZone certification. Fictitious Application Four: Within 5 weeks of submitting this fictitious application, SBA certified our bogus firm. As with fictitious application two, our investigators used the address for a mailbox leased from a retail postal service provider located in a HUBZone for the principal office. Our rental cost for the "principal office" was less than $10 per month. Our application indicated that two of the three employees that worked for the bogus firm lived in a HUBZone. SBA requested a clarification regarding a small discrepancy in the application information, but no further contact was made before we received the HUBZone certification. We were also able to identify 10 firms from the Washington, D.C., metro area that were participating in the HUBZone program even though they clearly did not meet eligibility requirements. Since 2006, federal agencies have obligated a total of more than $105 million to these firms for performance as the prime contractor on federal contracts. Of the 10 firms, 6 met neither the principal office requirement nor the employee residency requirement, while 4 met the principal office requirement but significantly failed the employee residency requirement. We also found other HUBZone firms that use virtual office suites to fulfill SBA's principal office requirement.
We investigated two of these virtual office suites and identified examples of firms that could not possibly meet principal office requirements given the nature of their leases. According to HUBZone regulations, persons or firms are subject to criminal penalties for knowingly making false statements or misrepresentations in connection with the HUBZone program including failure to correct "continuing representations" that are no longer true. During the application process, applicants are not only reminded of the program requirements, but are required to agree to the statement that anyone failing to correct "continuing representations" shall be subject to fines, imprisonment, and penalties. Further, the Federal Acquisition Regulation (FAR) requires all prospective contractors to update ORCA-- the government's Online Representations and Certifications Application-- which includes certifying whether the firm is currently a HUBZone firm and that there have been "no material changes in ownership and control, principal office, or HUBZone employee percentage since it was certified by the SBA." However, we found that all 10 of these case-study firms continued to represent themselves to SBA, ORCA, GAO, and the general public as eligible to participate in the HUBZone program. Because the 10 case study examples clearly are not eligible, we consider each firm's continued representation indicative of fraud. We referred the 10 firms to SBA OIG for further investigation. We determined that 10 case study examples from the Washington, D.C., metropolitan area failed to meet the program's requirements. Specifically, we found that 6 out of the 10 failed both HUBZone requirements to operate a principal office in a HUBZone and to ensure that 35 percent or more of employees resided in a HUBZone. Our review of payroll records also found that the remaining four firms failed to meet the 35 percent HUBZone employee residency requirement by at least 15 percent. 
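The two eligibility tests applied in these case studies can be expressed as a simple check. A minimal sketch (the helper function is hypothetical; the 35 percent threshold and the payroll figures of 15 of 72 employees come from this testimony):

```python
RESIDENCY_THRESHOLD = 0.35  # at least 35 percent of employees must live in a HUBZone

def meets_hubzone_requirements(office_in_hubzone, employees_in_hubzone, total_employees):
    """Hypothetical check mirroring the two HUBZone eligibility tests:
    principal office location and the 35 percent employee residency rule."""
    residency_ok = employees_in_hubzone / total_employees >= RESIDENCY_THRESHOLD
    return office_in_hubzone and residency_ok

# A firm whose claimed principal office fails the location test and whose
# payroll shows 15 of 72 employees (about 21 percent) living in a HUBZone
# fails both requirements.
print(meets_hubzone_requirements(False, 15, 72))  # False
```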
In addition, all 10 of the case study examples continued to represent themselves to SBA, ORCA, GAO, and the general public as HUBZone program-eligible. One HUBZone firm self-certified in ORCA that it met HUBZone requirements in March 2008 despite the fact that we had spoken with its owner about 3 weeks earlier about her firm's noncompliance with both the principal office and HUBZone residency requirements. Table 1 highlights the 10 case-study firms we investigated. Case 1: Our investigation clearly showed that this firm represented itself as HUBZone-eligible even though it did not meet HUBZone requirements at the time of our investigation. This firm, which provided business management, engineering, information technology, logistics, and technical support services, self-certified in July 2007 in ORCA that it was a HUBZone firm and that there had been "no material changes in ownership and control, principal office, or HUBZone employee percentage since it was certified by the SBA." We also interviewed the president in March 2008 and she claimed that her firm met the HUBZone requirements. However, the firm failed the principal office requirement. Our site visits to the address identified by the firm as its principal office found that it was a small room that had been rented on the upper floor of a dentist's office where no more than two people could work comfortably. No employees were present, and the only business equipment in the rented room was a computer and filing cabinet. The building owner stated that the president of the firm used to conduct some business from the office, but that nobody had worked there "for some time." Moreover, the president indicated that instead of paying rent at the HUBZone location, she provided accounting services to the owner in a no-cost exchange for use of the space. See figure 2 for a picture of the building the firm claimed as its principal office (arrow indicates where the office is located).
Further investigation revealed that the firm listed its real principal office (called the firm's "headquarters" on its Web site) at an address in McLean, Virginia. In addition to not being in a HUBZone, McLean, Virginia, is in one of the wealthiest jurisdictions in the United States. Our site visit to this second location revealed that the majority of the firm's officers, along with about half of the qualifying employees, worked there, indicating that this location was the firm's actual principal office. When we interviewed the president, she claimed that the McLean, Virginia, office was maintained "only for appearance." See figure 3 for a picture of the McLean, Virginia, building where the firm rented office space. Based on our review of payroll documents we received directly from the firm, we also determined the firm failed the 35 percent HUBZone residency requirement. The payroll documents indicated that only 15 of the firm's 72 employees (21 percent) lived in a HUBZone as of December 2007. We also found that in January 2007, during SBA's HUBZone recertification process, the president self-certified that 38 percent of the firm's employees lived in a HUBZone. However, the payroll documents received directly from the firm showed only 24 percent of the firm's employees lived in a HUBZone at that time. In 2006 the Department of the Army, National Guard Bureau, awarded a HUBZone set-aside contract with a $40 million ceiling to this firm based on its HUBZone status. Although only $3.9 million has been obligated to date on the contract, because the firm remains HUBZone-certified, it can continue to receive payments up to the $40 million ceiling based on its HUBZone status until 2011. We referred this firm to SBA OIG for further investigation. Case 2: Our investigation determined that this firm, a general contractor specializing in roofing and sheet metal, continued to represent itself as HUBZone-eligible even though it did not meet HUBZone requirements.
While he self-certified to the firm's HUBZone status in ORCA in September 2007, the vice president admitted during our interview in April 2008 that the firm did not meet HUBZone requirements. Nonetheless, after our interview, the firm actively continued to represent that it was a HUBZone firm--including a message in large letters on its Web site and business cards declaring that the firm was "HUBZone certified." The firm's vice president self-certified during the SBA's HUBZone certification process in March 2007 that, as shown in figure 4, the firm's principal office was one-half of a residential duplex in Landover, Maryland. We visited this location during normal business hours and found no employees present. Our investigative work also found that the vice president owned another firm, which did not participate in the HUBZone program. A visit to this firm, which was located in Capitol Heights, Maryland--not in a HUBZone--revealed that both it and the HUBZone firm operated out of the same location. Further, payroll documents we received from the HUBZone firm indicated that it had 34 employees but that only 4 employees (or 12 percent) lived in a HUBZone as of December 2007. Based on our analysis of FPDS-NG data, between fiscal years 2006 and 2007 federal agencies obligated about $12.2 million for payment to the firm. Of this, about $4 million in HUBZone contracts were obligated by the Department of the Air Force. Because this firm clearly did not meet either principal office or employee HUBZone requirements at the time of our investigation but continued to represent itself as HUBZone-certified, we referred it to SBA OIG for further investigation. Case 3: Our investigation demonstrated that this firm continued to represent itself as HUBZone-eligible while failing to meet HUBZone requirements.
This firm, which specializes in the design and installation of fire alarm systems, self-certified in May 2007 in ORCA that it was a HUBZone firm and that there had been "no material changes in ownership and control, principal office, or HUBZone employee percentage since it was certified by the SBA." However, when we interviewed the president in April 2008, he acknowledged that the firm "technically" did not meet the principal office requirement. For its HUBZone certification in April 2006, an address in a HUBZone in Rockville, Maryland, was identified as its principal office location. We visited this location during normal business hours and found the address was for an office suite that provided virtual office services. According to the lease between the HUBZone firm and the office suite's management, the firm did not rent office space, but paid $325 a month to use a conference room on a scheduled basis for up to 4 hours each month. Absent additional services provided by the virtual office suite, it would be impossible for this firm to meet the principal office requirement under this lease arrangement. Moreover, the president of the firm told us that no employees typically worked at the virtual office. Additional investigative work revealed that the firm's Web site listed a second address for the firm in McLean, Virginia, which as noted above is not in a HUBZone. Our site visit determined this location to be where the firm's president and all qualifying employees worked. In addition, the payroll documents we received from the firm revealed that the percentage of employees living in a HUBZone during calendar year 2007 ranged from a low of 6 percent to a high of 15 percent--far below the required 35 percent. Based on our analysis of FPDS-NG data, between fiscal years 2006 and 2007 federal agencies obligated about $3.3 million for payment to the firm. Of this, over $460,000 in HUBZone contracts were obligated by the Department of Veterans Affairs. 
Further, in addition to admitting the firm did not meet the principal office requirement, the president was also very candid about having received subcontracting opportunities from large prime contracting firms based solely on the firm's HUBZone certification. According to the president, the prime contractors listed the HUBZone firm as part of their "team" to satisfy their HUBZone subcontracting goals. However, he contended that these teaming arrangements only occasionally resulted in the prime contractor purchasing equipment from his firm. Because it continued to represent itself as HUBZone-eligible, we referred it to SBA OIG for further investigation. Virtual offices are located nationwide and provide a range of services for individuals and firms, including part-time use of office space or conference rooms, telephone answering services, and mail forwarding. During our proactive testing discussed above, we leased virtual office services from an office suite located in a HUBZone and fraudulently submitted this address to SBA as our principal office location. The terms of the lease allowed us to schedule use of an office space for up to 16 hours per month, but did not provide permanent office space. Even though we never used the virtual office space we rented, we still obtained HUBZone certification from SBA. Our subsequent investigation of two virtual office suites located in HUBZones--one of which we used to obtain our certification--found that other firms had retained HUBZone certification using virtual office services. Based on our review of lease agreements, we found that, absent additional services provided by the virtual office suites, some of these firms could not possibly meet principal office requirements. For example: One HUBZone firm that claimed its principal office was a virtual office address had a lease agreement providing only mail-forwarding services. The mail was forwarded to a different address not located in a HUBZone. 
Absent additional services provided by the virtual office suite, it would be impossible for this firm to perform any work at the virtual office location with only a mail-forwarding agreement. Five HUBZone firms that claimed their principal office was a virtual office address leased less than 10 hours of conference room usage per month at the same time they maintained at least one other office outside of a HUBZone. Absent additional services provided by the virtual office suite, it would be impossible for these firms to meet principal office requirements with only 10 hours of conference room time per month, leading us to conclude that the majority of work at these companies was performed in the other office locations. Five other firms claimed their principal office was a virtual office address but leased office space for less than 20 hours a month. These firms simultaneously maintained at least one other office outside of a HUBZone. Absent additional services provided by the virtual office suite, it would be impossible for these firms to meet principal office requirements with only 20 hours of rented office time per month, leading us to conclude that the majority of work at these companies was performed in the other office locations. The virtual office arrangements we investigated clearly violate the requirements of the HUBZone program and, in some cases, exemplify fraudulent representations. We briefed SBA officials on the results of our investigation on July 9, 2008. They were concerned about the vulnerabilities to fraud and abuse we identified. SBA officials expressed interest in pursuing action, including suspension or debarment, against our 10 case study firms and any firm that may falsely represent their eligibility for the HUBZone program. 
They were also open to suggestions to improve fraud prevention controls over the HUBZone application process, such as performing steps to identify addresses of virtual office suites and mailboxes rented from postal retail centers. Madam Chairwoman and Members of the Committee, this concludes my statement. I would be pleased to answer any questions that you or other Members of the Committee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. To proactively test whether the Small Business Administration's (SBA) controls over the Historically Underutilized Business Zone (HUBZone) application process were operating effectively, we applied for HUBZone certification using bogus firms, fictitious employees, fabricated explanations, and counterfeit documents to determine whether SBA would certify firms based on fraudulent information. We used publicly available guidance provided by SBA to create four applications. We did the minimal work required to establish the bogus small business firms represented by our applications, such as obtaining a Data Universal Numbering System (DUNS) number from Dun & Bradstreet and registering with the Central Contractor Registration database. We then applied for HUBZone certification with our four firms using SBA's online HUBZone application system. Importantly, the principal office addresses we provided to SBA, although technically located in HUBZones, were locations that would appear suspicious if investigated by SBA. When necessary (e.g., at the request of SBA application reviewers), we supplemented our applications with fabricated explanations and counterfeit supporting documentation created with publicly available computer software and hardware and other material. 
To identify examples of firms that participate in the HUBZone program even though they do not meet eligibility requirements, we first obtained and analyzed a listing of HUBZone firms from the SBA's Certification Tracking System as of January 2008 and federal procurement data from the Federal Procurement Data System-Next Generation (FPDS-NG) for fiscal years 2006 and 2007. We then performed various steps, including corresponding with SBA officials and testing the data elements used for our work electronically, to assess the reliability of the data. We concluded that the data were sufficiently reliable for the purposes of our investigation. To develop our case studies, we limited our investigation to certified HUBZone firms with a principal office located in the Washington, D.C., metropolitan area and for which federal agencies reported obligations on HUBZone preference contracts--HUBZone sole source, HUBZone set-aside, and HUBZone price preference--totaling more than $450,000 for fiscal years 2006 and 2007. We selected 16 for further investigation based on indications that they failed to operate a principal office in a HUBZone, failed to ensure that at least 35 percent of their employees resided in a HUBZone, or both. We also investigated one firm referred through GAO's FraudNet Hotline. For the selected 17 firms, we then used investigative methods, such as interviewing firm managers and reviewing firm payroll documents, to gather information about the firms and to determine whether the firms met HUBZone requirements. We also reviewed information about each firm in the Online Representations and Certifications Application system (ORCA). During our investigation, we also identified a couple of addresses for virtual office suites in the Washington, D.C., metropolitan area where several different HUBZone firms claimed to have their principal office.
We investigated two of these virtual office suites to determine whether HUBZone firms at these locations met program eligibility requirements. For the selected virtual office suites, we obtained and reviewed the lease agreements between the HUBZone firms and the virtual office suite management and verified any of the HUBZone firms' other business addresses.
The Historically Underutilized Business Zone (HUBZone) program is intended to provide federal contracting opportunities to qualified small business firms in order to stimulate development in economically distressed areas. As manager of the HUBZone program, the Small Business Administration (SBA) is responsible for certifying whether firms meet HUBZone program requirements. To participate in the HUBZone program, small business firms must certify that their principal office (i.e., the location where the greatest number of employees work) is located in a HUBZone and that at least 35 percent of the firm's employees live in HUBZones. Given the Committee's concern over fraud and abuse in the HUBZone program, GAO was asked to (1) proactively test whether SBA's controls over the HUBZone application process were operating effectively to limit program certification to eligible firms and (2) identify examples of selected firms that participate in the HUBZone program even though they do not meet eligibility requirements. To perform its proactive testing, GAO created four bogus businesses with fictitious owners and employees and applied for HUBZone certification. GAO also selected 17 HUBZone firms based on certain criteria, such as receipt of HUBZone contracts, and investigated whether they met key program eligibility requirements. GAO identified substantial vulnerabilities in SBA's application and monitoring process, clearly demonstrating that the HUBZone program is vulnerable to fraud and abuse. Considering the findings of a related report and testimony issued today, GAO's work shows that these vulnerabilities exist because SBA does not have an effective fraud-prevention program in place. Using fictitious employee information and fabricated documentation, GAO easily obtained HUBZone certification for four bogus firms. 
For example, to support one HUBZone application, GAO claimed that its principal office was the same address as a Starbucks coffee store that happened to be located in a HUBZone. If SBA had performed a simple Internet search on the address, it would have been alerted to this fact. Further, two of GAO's applications used leased mailboxes from retail postal services centers. A post office box clearly does not meet SBA's principal office requirement. We were also able to identify 10 firms from the Washington, D.C., metro area that were participating in the HUBZone program even though they clearly did not meet eligibility requirements. Since 2006, federal agencies have obligated a total of more than $105 million to these 10 firms for performance as the prime contractor on federal contracts. Of the 10 firms, 6 met neither the principal office requirement nor the employee residency requirement, while 4 met the principal office requirement but significantly failed the employee residency requirement. For example, one firm that failed both principal office and employee residency requirements had initially qualified for the HUBZone program using the address of a small room above a dentist's office. GAO's site visit to this room found only a computer and filing cabinet. No employees were present, and the building owner told GAO investigators that nobody had worked there "for some time." During its investigation, GAO also found that some HUBZone firms used virtual office suites to fulfill SBA's principal office requirement. GAO investigated two of these virtual office suites and identified examples of firms that could not possibly meet principal office requirements given the nature of their leases. For example, one firm continued to certify it was a HUBZone firm even though its lease provided only mail forwarding services at the virtual office suite.
Historically, the census has focused on counting people stateside, although various overseas population groups have been included in the census at different times. For example, as shown in table 1, over the last century, the Bureau has generally included federally affiliated individuals and their dependents, but except for the 1960 and 1970 Censuses, has excluded private citizens such as retirees, students, and business people. In addition, only the 1970, 1990, and 2000 Censuses used counts of federally affiliated personnel for purposes of apportioning Congress. As a result, although estimates exceed four million people, the precise number of Americans residing abroad is unknown. The Constitution and federal statutes give the Bureau discretion over whether to count Americans overseas. Thus, Congress would need to enact legislation if it wanted to require the Bureau to include overseas Americans in the 2010 Census. Nevertheless, in recent years, the Bureau's policy of excluding private citizens from the census has been questioned. For example, advocates of an overseas census claim that better data on this population group would be useful for a variety of policy-making and other purposes. Moreover, the overseas population could, in some instances, affect congressional apportionment. More generally, the rights and obligations of overseas Americans under various federal programs vary from activity to activity. For example, U.S. citizens residing overseas are taxed on their worldwide income, can vote in federal elections, and can receive Social Security benefits, but they are generally not entitled to Medicare benefits, or, if they reside outside of the United States for more than 30 days, Supplemental Security Income. The initial results of the overseas census test suggest that counting Americans abroad on a global basis would require enormous resources and still not yield data that are comparable in quality to the stateside count. 
Indeed, participation in the test was low and relatively costly to obtain, and on-site supervision of field activities proved difficult. The test made clear that the current approach to counting Americans abroad--a voluntary survey that relies largely on marketing to get people to participate--by itself cannot secure a successful head count. To promote the overseas census test, the Bureau relied on third parties--American organizations and businesses in the three countries--to communicate to their members and/or customers that an overseas enumeration of Americans was taking place and to make available to U.S. citizens either the paper questionnaire or the Web site address where Americans could complete their forms via the Internet. Still, the response to the overseas census test was disappointing. The 5,390 responses the Bureau received from the three test countries were far below what the Bureau planned for when it printed the questionnaires. While the Bureau ordered 520,000 paper forms for the three test sites, only 1,783 census forms were completed and returned. Of these, 35 were Spanish-language forms that were made available in Mexico. The remaining 3,607 of the 5,390 responses were completed via the Internet. Table 2 shows the number of census questionnaires that the Bureau printed for each country and the number of responses it actually received in both the paper format and via the Internet. In May, to help boost the lagging participation, the Bureau initiated a paid advertising campaign that included print and Internet ads in France, and print and radio ads in Mexico. (See fig. 1 for examples of the ads used in the paid advertising campaign.) According to a Bureau official, the ads had only a slight impact on response levels. Moreover, the Bureau's experience during the 2000 Census suggests that securing a higher return rate on an overseas census would be an enormous challenge and may not be feasible.
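The response figures reported above fit together as follows; a short calculation using the values from the testimony makes the shortfall explicit.

```python
# Overseas census test response figures as reported in the testimony.
forms_printed = 520_000      # paper questionnaires ordered for the three sites
paper_returned = 1_783       # completed paper forms (35 of them in Spanish)
internet_returned = 3_607    # responses completed via the Internet

total_responses = paper_returned + internet_returned
print(total_responses)       # 5390, matching the reported total

# Only a tiny fraction of the printed paper forms came back.
paper_return_share = paper_returned / forms_printed
print(f"{paper_return_share:.2%}")   # 0.34%
```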
The Bureau spent $374 million on a comprehensive marketing, communications, and partnership effort for the 2000 Census. The campaign began in the fall of 1999 and continued past Census Day (April 1, 2000). Specific elements included television, radio, and other mass media advertising; promotions and special events; and a census-in-schools program. Thus, over a period of several months, the American public was on the receiving end of a steady drumbeat of advertising aimed at publicizing the census and motivating people to respond. This endeavor, in concert with an ambitious partnership effort with governmental, private, social service, and other organizations, helped produce a return rate of 72 percent. Replicating this level of effort on a worldwide basis would be impractical, and still would not produce a complete count. Indeed, even after the Bureau's aggressive marketing effort in 2000, it still had to follow up with about 42 million households that did not return their census forms. Because the overseas test had such low participation levels, the unit cost of each response was high--roughly $1,450 for each returned questionnaire, based on the $7.8 million the Bureau spent preparing for, implementing, and evaluating the 2004 overseas test. Although the two surveys are not directly comparable because the 2000 Census costs covered operations not used in the overseas test, the unit cost of the 2000 Census--which was the most expensive in our nation's history--was about $56 per household. Not surprisingly, as with any operation as complex as the overseas enumeration test, various unforeseen problems arose. The difficulties included grappling with country-specific issues and overseeing the contractor responsible for raising public awareness of the census at the three test sites. While the Bureau was able to address these problems, it is doubtful that it could do so in 2010 should there be a full overseas enumeration.
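The unit-cost comparison above follows directly from the reported figures; the brief calculation below shows where the roughly $1,450 per response comes from and how it compares with the stateside unit cost.

```python
# Unit-cost comparison using the figures reported in the testimony.
# Note the caveat from the text: the two surveys are not directly
# comparable, since the 2000 Census costs covered operations not used
# in the overseas test.
overseas_test_cost = 7_800_000   # dollars spent on the 2004 overseas test
overseas_responses = 5_390       # questionnaires returned

cost_per_response = overseas_test_cost / overseas_responses
print(round(cost_per_response))  # 1447, i.e., roughly $1,450 per response

stateside_cost_per_household = 56  # reported unit cost of the 2000 Census
print(round(cost_per_response / stateside_cost_per_household))  # about 26x higher
```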
The Bureau encountered a variety of implementation problems at each of the test sites. Although such difficulties are to be expected given the magnitude of the Bureau's task, they underscore the fact that there would be no economy of scale in ramping up to a full enumeration of Americans abroad. In fact, just the opposite would be true. Because of the inevitability of country-specific problems, rather than conducting a single overseas count based on a standard set of rules and procedures (as is the case with the stateside census), the Bureau might end up administering what amounts to dozens of separate censuses--one for each of the countries it enumerates--each with its own set of procedures adapted to that country's unique requirements. The time and resources required to do this would likely be overwhelming and would detract from the Bureau's stateside efforts. For example, addressing French privacy laws that restrict the collection of personal data, such as race and ethnic information, took a considerable amount of negotiation between the two countries; the issue was ultimately resolved through a formal agreement. Likewise, in Kuwait, delivery of the census materials was delayed by several weeks because they were accidentally addressed to the wrong contractor. The Bureau hired a public relations firm to help market participation in the test. Its responsibilities included identifying private companies, religious institutions, service organizations, and other entities that have contact with Americans abroad and could thus help publicize the census test. Although the public relations firm appeared to go to great lengths to enlist the participation of these various entities--soliciting the support of hundreds of organizations in the three countries--the test revealed the difficulties of adequately overseeing a contractor operating in multiple sites overseas.
For example, the public relations firm's tracking system indicated that around 440 entities had agreed to perform one or more types of promotional activities. However, our on-site inspections of several of these organizations in Paris, France, and Guadalajara, Mexico, that had agreed to display the census materials and/or distribute the questionnaires, uncovered several glitches. Of the 36 organizations we visited that were supposed to be displaying promotional literature, we found the information was only available at 15. In those cases, as shown in Figure 2, the materials were generally displayed in prominent locations, typically on a table with posters on a nearby wall. However, at 21 sites we visited, we found various discrepancies between what the public relations firm indicated had occurred, and what actually took place. For example, while the firm's tracking system indicated that questionnaires would be available at a restaurant and an English-language bookstore in Guadalajara, none were present. Likewise, in Paris, we went to several locations where the tracking system indicated that census information would be available. None was. In fact, at some of these sites, not only was there no information about the census, but there was no indication that the organization we were looking for resided at the address we had from the database. The Bureau's longstanding experience in counting the nation's stateside population has shown that specific operations and procedures together form the building blocks of a successful census. The design of the overseas test--a voluntary survey that relies heavily on marketing to secure a complete count--lacks these building blocks largely because they are impractical to perform in other countries. Thus, the disappointing test results are not surprising. What's more, refining this basic design or adding more resources would probably not produce substantially better outcomes. 
The building blocks include the following:

Mandatory participation: Under federal law, all persons residing in the United States regardless of citizenship status are required to respond to the stateside decennial census. By contrast, participation in the overseas test was optional. The Bureau has found that response rates to mandatory surveys are higher than the response rates to voluntary surveys. This in turn yields more complete data and helps hold down costs.

Early agreement on design: Both Congress and the Bureau need to agree on the fundamental design of the overseas census to help ensure adequate planning, testing, and funding levels. The design of the census is driven in large part by the purposes for which the data will be used. Currently, no decisions have been made on whether the overseas data will be used for purposes of congressional apportionment, redistricting, allocating federal funds, or other applications. Some applications, such as apportionment, would require precise population counts and a very rigorous design that parallels the stateside count. Other applications, however, could get by with less precision and thus a less stringent approach.

A complete and accurate address list: The cornerstone of a successful census is a quality address list. For the stateside census, the Bureau goes to great lengths to develop what is essentially an inventory of all known living quarters in the United States, including sending census workers to canvass every street in the nation to verify addresses. The Bureau uses this information to deliver questionnaires, follow up with nonrespondents, determine vacancies, and identify households the Bureau may have missed or counted more than once. Because it would be impractical to develop an accurate address list for overseas Americans, these operations would be impossible and the quality of the data would suffer as a result.
Ability to detect invalid returns: Ensuring the integrity of the census data requires the Bureau to have a mechanism to screen out invalid responses. Stateside, the Bureau does this by associating an identification number on the questionnaire with a specific address in the Bureau's address list, as well as by field verification. However, the Bureau's current approach to counting overseas Americans is unable to determine whether or not a respondent does in fact reside abroad. So long as a respondent provides certain pieces of information on the census questionnaire, the questionnaire will be eligible for further processing. The Bureau is unable to confirm the point of origin for questionnaires completed on the Internet, and postmarks on a paper questionnaire only tell the location from which a form was mailed, not the place of residence of the respondent. The Bureau has acknowledged that ensuring such validity might be all but impossible for any reasonable level of effort and funding.

Ability to follow up with nonrespondents: Because participation in the decennial census is mandatory, the Bureau sends enumerators to those households that do not return their questionnaires. In cases where household members cannot be contacted or refuse to answer all or part of a census questionnaire, enumerators are to obtain data from neighbors, a building manager, or another nonhousehold member presumed to know about its residents. The Bureau also employs statistical techniques to impute data when it lacks complete information on a household. As noted above, because the Bureau lacks an address list of overseas Americans, it is unable to follow up with nonrespondents or impute information on missing households, and thus would never be able to obtain a complete count of overseas Americans.

Cost model for estimating needed resources: The Bureau uses a cost model and other baseline data to help it estimate the resources it needs to conduct the stateside census.
Key assumptions such as response levels and workload are developed based on the Bureau's experience in counting people decade after decade. However, the Bureau has only a handful of data points with which to gauge the resources necessary for an overseas census, and the tests it plans to conduct will only be of limited value in modeling the costs of conducting a worldwide enumeration in 2010. The lack of baseline data could cause the Bureau to over- or underestimate the staffing, budget, and other requirements of an overseas count.

Targeted and aggressive marketing campaign: The key to raising public awareness of the census is an intensive outreach and promotion campaign. As noted previously, the Bureau's marketing efforts for the 2000 Census were far-reaching, and consisted of more than 250 ads in 17 languages that were part of an effort to reach every household, including those in historically undercounted populations. Replicating this level of effort on a global scale would be both difficult and expensive, and the Bureau has no plans to do so.

Field infrastructure to execute the census and deal with problems: The Bureau had a vast network of 12 regional offices and 511 local census offices to implement various operations for the 2000 Census. This decentralized structure enabled the Bureau to carry out a number of activities to help ensure a more complete and accurate count, as well as deal with problems when they arose. Moreover, local census offices are an important source of intelligence on the various enumeration obstacles the Bureau faces on the ground. The absence of a field infrastructure for an overseas census means that the Bureau would have to rely heavily on contractors to conduct the enumeration, and manage the entire enterprise from its headquarters in Suitland, Maryland.

Ability to measure coverage and accuracy: Since 1980, the Bureau has measured the quality of the decennial census using statistical methods to estimate the magnitude of any errors.
The Bureau reports these estimates by specific ethnic, racial, and other groups. For methodological reasons, similar estimates cannot be generated for an overseas census. As a result, the quality of the overseas count, and thus whether the numbers should be used for specific purposes, could not be accurately determined. So far I've described the logistical hurdles to counting overseas citizens as part of the census. However, there are a series of policy and conceptual questions that need to be addressed as well. They include:

Who should be counted? U.S. citizens only? Foreign-born spouses? Children born overseas? Dual citizens? American citizens who have no intention of ever returning to the United States? Naturalized citizens?

What determines residency in another country? To determine who should be included in the stateside census, the Bureau applies its "usual residence rule," which it defines as the place where a person lives and sleeps most of the time. People who are temporarily absent from that place are still counted as residing there. One's usual residence is not necessarily the same as one's voting residence or legal residence. The Bureau has developed guidelines, which it prints on the stateside census form, to help people determine who should and should not be included. The Bureau has not yet developed similar guidance for American citizens overseas. Thus, what should determine residency in another country? Duration of stay? Legal status? Should students spending a semester abroad but who maintain a permanent residence stateside be counted overseas? What about people on business or personal trips who maintain stateside homes? Quality data will require residence rules that are transparent, clearly defined, and consistently applied.

How should overseas Americans be assigned to individual states? For certain purposes, such as apportioning Congress, the Bureau would need to assign overseas Americans to a particular state.
Should one's state be determined by the state claimed for income tax purposes? Where one is registered to vote? The last state of residence before going overseas? These and other options all have limitations that would need to be addressed.

How should the population data be used? To apportion Congress? To redistrict Congress? To allocate federal funds? To provide a count of overseas Americans only for general informational purposes? The answers to these questions have significant implications for the level of precision needed for the data and, ultimately, the enumeration methodology. Congress will need to decide whether or not to count overseas Americans, and how the results should be used. These decisions, in turn, will drive the methodology for counting this population group. As I've already mentioned, no decisions have been made on whether the overseas data will be used for purposes of congressional apportionment, redistricting, allocating federal funds, or other applications. Some uses, such as apportionment, would require precise population counts and a very rigorous design that parallels the stateside count. Other applications do not need as much precision, and thus a less rigorous approach would suffice. The basis for these determinations needs to be sound research on the cost, quality of data, and logistical feasibility of the various options. Possibilities include counting Americans via a separate survey; administrative records, such as passport and voter registration forms; and/or records maintained by other countries, such as published census records and work permits. The Bureau's initial research has shown that each of these options has coverage, accuracy, and accessibility issues, and some might introduce systemic biases into the data. Far more extensive research would be needed to determine the feasibility of these or other potential approaches.
In summary, the 2004 overseas census test was an extremely valuable exercise in that it showed how counting Americans abroad as an integral part of the decennial census would not be cost-effective. Indeed, the tools and resources available to the Bureau cannot successfully overcome the inherent barriers to counting this population group, and produce data comparable to the stateside enumeration. Further, an overseas census would introduce new resource demands, risks, and uncertainties to a stateside endeavor that is already costly, complex, and controversial. Securing a successful count of Americans in Vienna, Virginia, is challenging enough; a complete count of Americans in Vienna, Austria, and in scores of other countries around the globe, would only add to the difficulties facing the Bureau as it looks toward the next national head count. Consequently, the report we released today suggests that Congress should continue to fund the evaluation of the 2004 test as planned, but eliminate funding for any additional tests related to counting Americans abroad as part of the decennial census. However, this is not to say that overseas citizens should not be counted. Indeed, to the extent that Congress desires better data on the number and characteristics of Americans abroad for various policy-making and other nonapportionment purposes that do not need as much precision, such information does not necessarily need to be collected as part of the decennial census, and could, in fact, be acquired through a separate survey or other means. To facilitate congressional decision-making on this issue, our report recommends that the Bureau, in consultation with Congress, research such options as counting people via a separate survey; administrative records such as passport data; and/or data exchanges with other countries' statistical agencies subject to applicable confidentiality considerations. 
Once Congress knows the tradeoffs of these various alternatives, it would be better positioned to provide the Bureau with the direction it needs so that the Bureau could then develop and test an approach that meets congressional requirements at reasonable resource levels. The Bureau agreed with our conclusions and recommendations. Successfully counting the nation's population is a near-daunting task. As the countdown to the next census approaches the 5-year mark, the question of enumerating Americans overseas is just one of a number of issues the Bureau needs to resolve. On behalf of the Subcommittee, we will continue to assess the Bureau's progress in planning and implementing the 2010 Census and identify opportunities to increase its cost-effectiveness. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittee might have. For further information regarding this testimony, please contact Patricia A. Dalton on (202) 512-6806, or by e-mail at [email protected]. Individuals making contributions to this testimony included Jennifer Cook, Robert Goldenkoff, Ellen Grady, Andrea Levine, Lisa Pearson, and Timothy Wexler.
The U.S. Census Bureau (Bureau) has typically excluded from the census private citizens residing abroad, but included overseas members of the military, federal civilian employees, and their dependents (in the 1990 and 2000 Censuses, these individuals were included in the numbers used for apportioning Congress). The Bureau recently tested the practicality of counting all overseas Americans. GAO was asked to testify on the test's initial results. Our statement is based on our published reports, one of which is being released at today's hearing. The test results suggest that counting all American citizens overseas as part of the census would require enormous resources, but still not yield data at the level of quality needed for purposes of congressional apportionment. Participation in the test was poor, with just 5,390 questionnaires returned from the three test sites. Moreover, as the Bureau's experience during the 2000 Census shows, securing better participation in a global count might not be practical. The Bureau spent $374 million on a months-long publicity campaign that consisted of television and other advertising that helped produce a 72-percent return rate. Replicating the same level of effort on a worldwide basis would be difficult, and still would not produce a complete count. Further, the low participation levels in the test made the unit cost of each response relatively high at around $1,450. The test results highlighted other obstacles to a cost-effective count, including the resources needed to address country-specific problems and the difficulties associated with managing a complex operation from thousands of miles away. The approach used to count the overseas population in the 2004 test--a voluntary survey that largely relies on marketing to secure a complete count--lacks the basic building blocks of a successful census, such as a complete and accurate address list and the ability to follow up with nonrespondents.
As the Bureau already faces the near-daunting task of securing a successful stateside count in 2010, having to simultaneously count Americans abroad would only add to the challenges it faces.
FDA regulates the content of all prescription drug advertising, whether directed to consumers or medical professionals. Advertising that is targeted to consumers includes both DTC and "consumer-directed" materials. DTC advertising includes, for example, broadcast advertisements (such as those on television and radio), print advertisements (such as those in magazines and newspapers), and Internet advertisements (such as consumer advertising on drug companies' Web sites). In contrast, consumer-directed advertisements are designed to be given by medical professionals to consumers and include, for example, patient brochures provided in doctors' offices. Advertising materials must contain a "true statement" of information including a brief summary of side effects, contraindications, and the effectiveness of the drug. To meet this requirement, advertising materials must not be false or misleading, must present a fair balance of the risks and benefits of the drug, and must present any facts that are material to the use of the drug or claims made in the advertising. With the exception of broadcast advertisements, materials must present all of the risks described in the drug's approved labeling. Broadcast materials may present only the major side effects and contraindications, provided the materials make "adequate provision" to give consumers access to the information in the drug's approved or permitted package labeling. Within FDA, DDMAC is responsible for implementing the laws and regulations that apply to prescription drug advertising. In March 2002, DDMAC created a DTC Review Group, which is responsible for oversight of advertising materials that are directed to consumers. As of May 2008, the group had a total of two group leaders, seven reviewers, and two social scientists. This group's responsibilities include reviewing final DTC materials and reviewing and providing advisory comments on draft DTC materials. 
The group also monitors television, magazines, and consumer advertising on drug companies' Web sites to identify advertising materials that were not submitted to FDA at the time they were first disseminated and reviews advertising materials cited in complaints submitted by competitors, consumers, and others. Once submitted to FDA, final and draft DTC advertising materials are distributed to a DTC reviewer. For final materials, if the reviewer identifies a concern, the agency determines whether it represents a violation and merits a regulatory letter. For draft materials submitted by drug companies, FDA may provide the drug company with advisory comments to consider before the materials are disseminated to consumers if, for example, the reviewers identify claims in materials that could violate applicable laws and regulations. If FDA identifies violations in disseminated DTC materials, the agency may issue two types of regulatory letters--either a "warning letter" or an "untitled letter." Warning letters are typically issued for violations that may lead FDA to pursue additional enforcement actions if not corrected; untitled letters are issued for violations that do not meet this threshold. Both types of letters cite the type of violation identified in the company's advertising material, request that the company submit a written response to FDA within 14 days, and request that the company take specific actions. Untitled letters request that companies stop disseminating the cited advertising materials and other advertising materials with the same or similar claims. Warning letters further request that the company issue advertising materials to correct the misleading impressions left by the violative advertising materials. The draft regulatory letters are subsequently reviewed by officials in DDMAC, FDA's Office of Medical Policy (which oversees DDMAC), and OCC. 
FDA has stated that it instituted OCC review for the purpose of promoting voluntary compliance by ensuring that drug companies that receive a regulatory letter understand that the letter has undergone legal review and the agency is prepared to go to court if necessary. As of 2006, FDA reviewed a small portion of the increasingly large number of DTC materials it received. FDA attempted to target available resources by focusing its reviews on the DTC advertising materials that had the greatest potential to negatively affect public health, but the agency did not document criteria for prioritizing the materials it received for review. Agency reviewers considered several informal criteria when prioritizing the materials, but these were not systematically applied and the agency did not document if a particular DTC material was reviewed. As a result, the agency could not ensure that it was identifying or reviewing the materials that were the highest priority. FDA officials told us at the time of our 2006 report that the agency received substantially more final and draft materials than the DTC Review Group could review. In 2005, FDA received 4,600 final DTC materials (excluding Internet materials) and 6,168 final Internet materials. FDA also received 4,690 final consumer-directed materials--such as brochures given to consumers by medical professionals. FDA received a steadily increasing number of final materials from 1999 through 2005. We found that, in 2006 and 2007, the total number of final DTC, Internet, and consumer-directed materials FDA received continued to increase. (See fig. 1.) FDA officials estimated that reviewers spent the majority of their time reviewing and commenting on draft materials. However, we were unable to determine the number of final or draft materials FDA reviewed, because FDA did not track this information. 
In the case of final and draft broadcast materials, FDA officials told us that the DTC group reviewed all of the materials it received; in 2005, it received 337 final and 146 draft broadcast materials. However, FDA did not document whether these or other materials it received had been reviewed. As a result, FDA could not determine how many materials it reviewed in a given year. We recommended in our 2006 report that the agency track which DTC materials had been reviewed. FDA officials indicated to us in May 2008 that the agency still did not track this information. At the time of our 2006 report, FDA officials identified informal criteria that the agency used to prioritize its reviews. FDA officials told us that, to target available resources, the agency prioritized the review of the DTC advertising materials that had the greatest potential to negatively affect public health. We recommended that FDA document its criteria for prioritizing its reviews of DTC advertising materials. FDA informed us in May 2008 that it now has documented criteria to prioritize reviews. For example, its first priority is to review materials with "egregious" violations, such as those identified through complaints. In addition, FDA places a high priority on reviewing television advertising materials. FDA officials also told us that the agency places a high priority on reviewing draft materials because they provide the agency with an opportunity to identify problems and ask drug companies to correct them before the materials are disseminated to consumers. We reported in 2006 that FDA did not systematically apply its criteria for prioritizing reviews to all of the materials that it received. Specifically, we found in 2006 that, at the time FDA received the materials, it recorded information about the drug being advertised and the type of material being submitted but did not screen the DTC materials to identify those that met its various informal criteria. 
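A systematic screen of the kind GAO recommended would amount to scoring each incoming material against the documented criteria at intake, rather than relying on individual reviewers' awareness. The sketch below is purely illustrative: the field names and scoring weights are hypothetical assumptions, not FDA's actual system; it only shows how the documented priorities (complaint-driven "egregious" violations first, then television materials, then drafts) could be applied uniformly to a queue.

```python
# Illustrative triage of incoming DTC materials against documented
# priority criteria. Field names and weights are hypothetical, not FDA's.

def priority_score(material):
    """Return a numeric priority; higher means review sooner."""
    score = 0
    if material.get("cited_in_complaint"):      # "egregious" violations first
        score += 100
    if material.get("medium") == "television":  # TV ads are high priority
        score += 50
    if material.get("status") == "draft":       # drafts can be fixed pre-dissemination
        score += 25
    return score

def screen(materials):
    """Sort all received materials so reviewers work the queue top-down."""
    return sorted(materials, key=priority_score, reverse=True)

queue = screen([
    {"id": "A", "medium": "print", "status": "final"},
    {"id": "B", "medium": "television", "status": "final", "cited_in_complaint": True},
    {"id": "C", "medium": "internet", "status": "draft"},
])
print([m["id"] for m in queue])  # ['B', 'C', 'A']
```

Logging each material's score and review status at intake would also give the agency the review-tracking record that GAO found missing.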
FDA officials told us that the agency did identify all final and draft broadcast materials that it received, but it did not have a system for identifying any other high-priority materials. Absent such a system for all materials, FDA relied on each of the reviewers--in consultation with other DDMAC officials--to be aware of the materials that had been submitted and to accurately apply the criteria to determine the specific materials to review. This created the potential for reviewers to miss materials that the agency would consider to be a high priority for review. Furthermore, because FDA did not track information on its reviews, the agency could not determine whether a particular material had been reviewed. As a result, the agency could not ensure that it identified and reviewed the highest-priority materials. We recommended that the agency systematically screen the DTC materials it received against its criteria to identify those that are the highest priority for review. As of May 2008, FDA still did not have such a process. In 2006 we reported that, after the 2002 policy change requiring legal review by OCC of all draft regulatory letters, the agency's process for drafting and issuing letters citing violative DTC materials had stretched to several months and FDA had issued fewer regulatory letters per year. As a result of the policy change, draft regulatory letters received additional levels of review and the DTC reviewers who drafted the letters did substantially more work to prepare for and respond to comments resulting from review by OCC. FDA officials told us that the agency issued letters for only the violative DTC materials that it considered the most serious and most likely to negatively affect consumers' health. Once FDA identified a violation in a DTC advertising material and determined that it merited a regulatory letter, FDA took several months to draft and issue a letter. 
For letters issued from 2002 through 2005, once DDMAC began drafting the letter for violative DTC materials it took an average of about 4 months to issue the letter. The length of this process varied substantially across these regulatory letters--one letter took around 3 weeks from drafting to issuance, while another took almost 19 months. In comparison, for regulatory letters issued from 1997 through 2001, it took an average of 2 weeks from drafting to issuance. We recommended in 2002 that the agency reduce the amount of time to draft and issue letters and the agency agreed. We found in 2006, however, that the review time had increased and we again urged the agency to issue the letters more quickly. In 2006 and 2007, it took an average of more than 5 months from drafting to issuance. One letter took less than 2 months to issue while another took about 11 months. (See fig. 2 for the average months from 1997 through 2007.) The primary factor that contributed to the increase in the length of FDA's process for issuing regulatory letters was the additional work that resulted from the 2002 policy change. All DDMAC regulatory letters were reviewed by both OCC staff and OCC's Chief Counsel. In addition to the time required of OCC, DDMAC officials told us that the policy change created the need for substantially more work on their part to prepare the necessary documentation for legal review. After meeting with OCC and revising the draft regulatory letter to reflect the comments from OCC, DDMAC would formally submit a draft letter to OCC for legal review and approval. OCC often required additional revisions before it would concur that a letter was legally supportable and could be issued. 
While OCC officials told us that the office had given regulatory letters that cited violative DTC materials higher priority than other types of regulatory letters, their review of DDMAC's draft regulatory letters was a small portion of their other responsibilities and had to be balanced with other requests, such as the examination of legal issues surrounding the approval of a new drug. Recently, FDA informed us that it now allows some steps to be eliminated--if deemed unnecessary for a particular letter--in an attempt to make the legal review process more efficient. The number of regulatory letters FDA issued per year for violative DTC materials decreased after the 2002 policy change lengthened the agency's process for issuing letters. From 2002 to 2005, the agency issued between 8 and 11 regulatory letters per year that cited DTC materials. Prior to the policy change, from 1997 through 2001, FDA issued between 15 and 25 letters citing DTC materials per year. An FDA official told us that both the lengthened review time resulting from the 2002 policy change and staff turnover within the DTC Review Group contributed to the decline in the number of issued regulatory letters. More recently, we found that the number of letters issued that cite DTC materials has continued to decline--FDA issued 4 letters in 2006 and 2 letters in 2007. (See fig. 3 for the number of letters issued from 1997 through 2007.) Although the total number of regulatory letters FDA issued for violative DTC materials has decreased, the agency has issued in recent years proportionately more warning letters--which cite violations FDA considers to be more serious. Historically, almost all of the regulatory letters that FDA issued for DTC materials were untitled letters for less serious violations. From 1997 through 2001, FDA issued 98 regulatory letters citing DTC advertising materials, 6 of which were warning letters. From 2002 through 2005, 8 of the 37 regulatory letters were warning letters. 
Of the 6 letters FDA issued for DTC materials in 2006 and 2007, 4 were warning letters. FDA regulatory letters may cite more than one DTC material or type of violation for a given drug. Of the 19 regulatory letters FDA issued from 2004 through 2005, 7 cited more than 1 DTC material, for a total of 31 different materials. These 31 materials appeared in a range of media, including television, radio, print, direct mail, and the Internet. Further, FDA identified multiple violations in 21 of the 31 DTC materials cited in the letters. The most commonly cited violations related to a failure of the material to accurately communicate information about the safety of the drug. The letters also often cited materials for overstating the effectiveness of the drug or using misleading comparative claims. Of the 6 regulatory letters FDA issued in 2006 or 2007 that cited DTC materials, 2 cited more than 1 DTC material and all identified multiple violations in each of the cited materials. For our 2006 report, FDA officials told us that the agency issued regulatory letters for DTC materials that it believed were the most likely to negatively affect consumers and that it did not act on all of the concerns that its reviewers identified. For example, they said the agency may be more likely to issue a letter when a false or misleading material was broadly disseminated. When reviewers had concerns about DTC materials, they discussed them with others in DDMAC and may have met with OCC and medical officers in FDA's Office of New Drugs to determine whether a regulatory letter was warranted. However, because FDA did not document decisions made at the various stages of its review process about whether to pursue a violation, officials were unable to provide us with an estimate of the number of materials about which concerns were raised but the agency did not issue a letter. 
At the time of our 2006 report, we found that FDA regulatory letters were limited in their effectiveness at halting the dissemination of false and misleading DTC advertising materials. We found that, from 2004 through 2005, FDA issued regulatory letters an average of about 8 months after the violative DTC materials they cited were first disseminated, by which time more than half of the materials had already been discontinued. Although drug companies complied with FDA's requests to create materials to correct the misimpressions left by the cited materials, these corrections were not disseminated until 5 months or more after FDA issued the regulatory letter. Furthermore, FDA's regulatory letters did not always prevent drug companies from later disseminating similar violative materials for the same drugs. Because of the length of time it took FDA to issue these letters, violative advertisements were often disseminated for several months before the letters were issued. From 2004 through 2005, FDA issued regulatory letters citing DTC materials an average of about 8 months after the violative materials were first disseminated. FDA issued one letter less than 1 month after the material was first disseminated, while another letter took over 3 years. The cited materials were usually disseminated for 3 or more months, and of the 31 violative DTC materials cited in these letters, 16 were no longer being disseminated by the time the letter was issued. On average, these letters were issued more than 4 months after the drug company stopped disseminating these materials and therefore had no effect on their dissemination. For the 14 DTC materials that were still in use when FDA issued the letter, the drug companies complied with FDA's request to stop disseminating the violative materials. However, by the time the letters were issued, these 14 materials had been disseminated for an average of about 7 months. 
As requested by FDA in the regulatory letters, drug companies often identified and stopped disseminating other materials with claims similar to those in the violative materials. For 18 of the 19 regulatory letters issued from 2004 through 2005, the drug companies indicated to FDA that they had either identified additional similar materials or that they were reviewing all materials to ensure compliance. In addition to halting materials directed to consumers, companies responding to 11 letters also stopped disseminating materials with similar claims that were targeted directly to medical professionals. Drug companies disseminated the corrective advertising materials requested in FDA warning letters, but took 5 months or more to do so. In each of the six warning letters FDA issued in 2004 and 2005 that cited DTC materials, the agency asked the drug company to disseminate truthful, nonmisleading, and complete corrective messages about the issues discussed in the regulatory letter to the audiences that received the violative promotional materials. In each case, the drug company complied with this request by disseminating corrective advertising materials. For the six warning letters FDA issued in 2004 and 2005 that cited DTC materials, the corrective advertising materials were initially disseminated from just over 5 months to almost 12 months after FDA issued the letter. For example, for one allergy medication, the violative advertisements ran from April through October 2004, FDA issued the regulatory letter in April 2005, and the corrective advertisement was not issued until January 2006. FDA regulatory letters did not always prevent the same drug companies from later disseminating violative DTC materials for the same drug, sometimes using the same or similar claims. From 1997 through 2005, FDA issued regulatory letters for violative DTC materials used to promote 89 different drugs. 
Of these 89 drugs, 25 had DTC materials that FDA cited in more than one regulatory letter, and one drug had DTC materials cited in eight regulatory letters. For 15 of the 25 drugs, FDA cited similar broad categories of violations in multiple regulatory letters. For example, FDA issued regulatory letters citing DTC materials for a particular drug in 2000 and again in 2005 for "overstating the effectiveness of the drug." For 4 of the 15 drugs, FDA cited the same specific violative claim for the same drug in more than one regulatory letter. For example, in 1999 FDA cited a DTC direct mail piece for failing to convey important information about the limitations of the studies used to approve the promoted drug. In 2001, FDA cited a DTC broadcast advertisement for the same drug for failing to include that same information. Given substantial growth in the number of DTC advertising materials submitted to FDA in recent years, FDA's role in limiting the dissemination of false or misleading advertising to the American public has become increasingly important. Fulfilling this responsibility requires that the agency, among other things, review those DTC advertising materials that are highest priority and take timely action to limit the dissemination of those that are false or misleading. We found in 2006 that FDA did not have a complete and systematic process for tracking and prioritizing all materials that it received for review. FDA's development of documented criteria to prioritize reviews is a step in the right direction. However, as we recommended in 2006, we believe that FDA should take the next step of systematically applying those criteria to the DTC materials it receives to determine which are highest priority for review. While the agency said that it would require vastly increased staff to systematically screen materials, we found in 2006 that FDA already has most of the information it would need to do so. 
Finally, despite FDA agreeing in 2002 that it is important to issue regulatory letters more quickly, the amount of time it takes to draft and issue letters has continued to lengthen. We believe that delays in issuing regulatory letters limit FDA's effectiveness in overseeing DTC advertising and in reducing consumers' exposure to false and misleading advertising. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other members of the subcommittee may have at this time. For further information about this statement, please contact Marcia Crosse, at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Martin T. Gahart, Assistant Director; Chad Davenport; William Hadley; Cathy Hamann; Julian Klazkin; and Eden Savino made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Food and Drug Administration (FDA) is responsible for overseeing direct-to-consumer (DTC) advertising of prescription drugs, which includes a range of media, such as television, magazines, and the Internet. If FDA identifies a violation of laws or regulations in a DTC advertising material, the agency may issue a regulatory letter asking the drug company to take specific actions. In 2002, GAO reported on delays in FDA's issuance of regulatory letters. GAO was asked to discuss trends in FDA's oversight of DTC advertising and the actions FDA has taken when it identifies violations. This statement is based on GAO's 2006 report, Prescription Drugs: Improvements Needed in FDA's Oversight of Direct-to-Consumer Advertising, GAO-07-54 (November 16, 2006). In this statement, GAO discusses the (1) DTC advertising materials FDA reviews, (2) FDA's process for issuing regulatory letters citing DTC advertising materials and the number of letters issued, and (3) the effectiveness of FDA's regulatory letters at limiting the dissemination of false or misleading DTC advertising. For its 2006 report, GAO examined FDA data on the advertising materials the agency received and reviewed the regulatory letters it issued citing prescription drug promotion from 1997 through 2005. For this statement, GAO also reviewed data from FDA to update selected information from the 2006 report. Since 1999, FDA has received a steadily increasing number of advertising materials directed to consumers. In 2006, GAO found that FDA reviewed a small portion of the DTC materials it received, and the agency could not ensure that it was identifying for review the materials it considered to be highest priority. While FDA officials told GAO that the agency prioritized the review of materials that had the greatest potential to negatively affect public health, the agency had not documented criteria to make this prioritization. 
GAO recommended that FDA document and systematically apply criteria for prioritizing its reviews of DTC advertising materials. In May 2008, FDA indicated that it had documented criteria to prioritize reviews. However, FDA still does not systematically apply its criteria to all of the DTC materials it receives. Furthermore, GAO noted in its 2006 report that FDA could not determine whether a particular material had been reviewed. GAO recommended in that report that the agency track which DTC materials had been reviewed. FDA officials indicated to GAO in May 2008 that the agency still did not track this information. As a result, the agency cannot ensure that it is identifying and reviewing the highest-priority materials. GAO found in 2006 that, since a 2002 policy change requiring legal review of all draft regulatory letters, FDA's process for drafting and issuing letters was taking longer and the agency was issuing fewer letters per year. FDA officials told GAO that the policy change contributed to the lengthened review. In 2006, GAO found that the effectiveness of FDA's regulatory letters at halting the dissemination of violative DTC materials had been limited. By the time the agency issued regulatory letters, drug companies had already discontinued use of more than half of the violative advertising materials identified in each letter. In addition, FDA's issuance of regulatory letters had not always prevented drug companies from later disseminating similar violative materials for the same drugs.
Drug applications--including NDAs, BLAs, and efficacy supplements--are reviewed primarily by FDA's Center for Drug Evaluation and Research (CDER), with a smaller proportion reviewed by the Center for Biologics Evaluation and Research (CBER). Prior to submission of an application, sponsors may choose to seek accelerated approval status if the drug is intended to treat a serious or life-threatening illness (such as cancer) and has the potential to provide meaningful therapeutic benefit to patients over existing treatments. Sponsors of a drug with accelerated approval status may be granted approval on the basis of clinical trials conducted using a surrogate endpoint--such as a laboratory measurement or physical sign--as an indirect or substitute measurement for a clinically meaningful outcome such as survival. According to FDA, the agency generally also speeds its review of drug applications with accelerated approval status by granting them priority review, although priority review can also be granted to an application without accelerated approval status. FDA grants priority review for applications that it expects, if approved, would provide significant therapeutic benefits, compared to available drugs, in the treatment, diagnosis, or prevention of a disease. Applications for which there are no perceived significant therapeutic benefits beyond those for available drugs are granted standard review. See 21 U.S.C. § 355(d); 42 U.S.C. § 262(j). During its review, FDA may identify deficiencies in an application that prevent FDA from approving the application. In response, sponsors can submit additional information to FDA in the form of amendments to the application. Certain applications are also subject to review by an independent advisory committee. FDA convenes advisory committees to provide independent expertise and technical assistance to help the agency make decisions about drug products. (When we refer to consumer advocacy groups, we are referring to groups that advocate on behalf of consumers and patients.) 
Additionally, FDA might require the sponsor to submit a Risk Evaluation and Mitigation Strategy (REMS) for the drug under review to ensure that the benefits of the drug outweigh its risks. FDA review time for an original application is calculated as the time elapsed from the date FDA receives the application and associated user fee to the date it issues an action letter; it is calculated using only the first review cycle and therefore does not include any time that may elapse while FDA is waiting for a sponsor to respond to FDA's first-cycle action letter or any review time that elapses during subsequent review cycles. In order to close the review cycle for NDAs, BLAs, and efficacy supplements, FDA must complete its review and issue an approval letter, a denial letter, or a "complete response" letter (i.e., a letter delineating any problems FDA identified in the application that prevented it from being approved). The review cycle will also be closed if the application is withdrawn by the sponsor. The date on which one of these actions occurs is used to determine whether the review was completed within the PDUFA goal time frame. If FDA issues a complete response letter, the sponsor may choose to submit a revised application to FDA. These are known as resubmissions and their review is covered under the user fee paid with the original submission. Resubmissions are classified as Class 1 or Class 2 according to the complexity of the information they contain, with Class 2 being the more complex. Although the prescription drug performance goals have continued to evolve with each reauthorization of the prescription drug user fee program, the goals for NDAs, BLAs, and efficacy supplements have remained fairly stable for recent cohorts--a cohort comprising all the submissions of a certain type filed in the same fiscal year (see table 1). 
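The review-time definition above is simple date arithmetic: receipt date of the application to the date of the first-cycle action letter, with any time spent waiting on the sponsor excluded by construction. As a rough illustration (the dates are hypothetical, not FDA data):

```python
from datetime import date

def review_time_days(received, action_letter):
    """First-cycle FDA review time: receipt date to action-letter date.
    Time waiting for a sponsor's resubmission, and any later review
    cycles, are deliberately excluded from this measure."""
    return (action_letter - received).days

# Hypothetical example: application received Oct 1, first-cycle action
# letter the following July 28 -- roughly a 10-month standard review.
print(review_time_days(date(2009, 10, 1), date(2010, 7, 28)))  # 300
```

A separate "time to final decision" measure would instead run to the approval, denial, or withdrawal in the last completed cycle, which is why it can only be computed once a cohort is largely closed.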
For standard NDAs, BLAs, and efficacy supplements, the current goal was phased in until it reached the current level (90 percent of reviews completed within 10 months) in FY 2002. Similarly, the goal for Class 1 NDA and BLA resubmissions was phased in, reaching its current level of 90 percent of reviews completed within 2 months in FY 2001. FDA can extend the review time frame for NDAs, BLAs, or Class 2 resubmissions by 3 months if it receives a major amendment to the application from the sponsor within 3 months of the goal date. FDA met most of its performance goals for priority and standard original NDA and BLA submissions for the FYs 2000 through 2010 cohorts. However, the average FDA review time increased slightly during this period for both priority and standard NDAs and BLAs. The percentage of FDA first-cycle approvals for both priority and standard NDAs and BLAs generally increased from FY 2000 through FY 2010; however, the percentage of first-cycle approvals has decreased for priority NDAs and BLAs since FY 2007. FDA met most of its performance goals for priority and standard original NDA and BLA submissions during our analysis period by issuing the proportion of action letters specified in the performance goals within the goal time frames. Specifically, for priority original NDAs and BLAs, FDA met the performance goals for 10 of the 11 completed cohorts we examined (see fig. 1). FDA also met the performance goals for 10 of the 11 completed standard NDA and BLA cohorts we examined. However, FDA did not meet the goals (i.e., issue the specified proportion of action letters within the goal time frames) for priority or standard NDAs and BLAs in the FY 2008 cohort. FDA and industry stakeholders we interviewed suggested that the reason FDA did not meet the goals for this cohort was that extra time was required for implementation of REMS requirements, which were introduced as part of the implementation of FDAAA. 
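Whether a cohort meets a PDUFA performance goal reduces to a percentage test: did FDA issue an action letter within the goal time frame for the required share of the cohort? The sketch below illustrates this arithmetic under stated assumptions: the data are synthetic, a 10-month goal is approximated as 304 days, the 3-month major-amendment extension as 91 days, and the 90-percent threshold matches the standard-review goal described above.

```python
def goal_met(cohort, goal_days=304, required_share=0.90):
    """True if the share of submissions acted on within the goal time
    frame meets the PDUFA target. Each entry is (review days, whether a
    major amendment arrived late enough to trigger a 3-month extension)."""
    on_time = sum(1 for days, extended in cohort
                  if days <= goal_days + (91 if extended else 0))
    return on_time / len(cohort) >= required_share

# Synthetic cohort of 10 standard submissions: (review days, extended?)
cohort = [(300, False), (310, True), (290, False), (350, False),
          (295, False), (280, False), (305, True), (301, False),
          (260, False), (299, False)]
print(goal_met(cohort))  # True: 9 of 10 were on time
```

Note how the extension matters: the (310, True) and (305, True) submissions count as on time only because their goal dates were pushed out by the major amendments.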
Although the FY 2011 cohort was still incomplete at the time we received FDA's data, FDA was meeting the goals for both priority and standard original NDAs and BLAs on which it had taken action. For the subset of priority NDAs and BLAs that were for innovative drugs, FDA met the performance goals for 9 of the 11 completed cohorts--all cohorts except FYs 2008 and 2009. For the subset of standard NDAs and BLAs that were for innovative drugs, FDA also met the performance goals for 9 of the 11 completed cohorts--all cohorts except FYs 2007 and 2008. For the incomplete FY 2011 cohort, FDA was meeting the goals for the subsets of both priority and standard NDAs and BLAs that were for innovative drugs. If FDA issues a complete response letter to the sponsor noting deficiencies with the original submission, the sponsor can resubmit the application with the deficiencies addressed. For Class 1 NDA and BLA resubmissions, FDA met the performance goals for 8 of the 11 completed cohorts we examined. For Class 2 NDA and BLA resubmissions, FDA met the performance goals for 10 of the 11 completed cohorts we examined. Although the FY 2011 cohort was still incomplete at the time we received FDA's data, FDA was meeting the goals for both the Class 1 resubmissions and the Class 2 resubmissions on which it had taken action. Overall, average FDA review time--the time elapsed from when FDA received a submission until it issued an action letter--increased slightly from FY 2000 through FY 2010 for both priority and standard NDAs and BLAs. There was a larger increase in average review time for both types of applications beginning in FY 2006. However, average review time began decreasing after FY 2007 for standard applications and after FY 2008 for priority applications, bringing the review times back near the FY 2000 levels (see fig. 2). 
As mentioned previously, FDA and industry stakeholder groups noted the implementation of REMS requirements as a contributing factor to increased review times for the FY 2008 cohort. Although the FY 2011 cohort was still incomplete at the time we received FDA's data, average FDA review time for applications on which FDA had taken action was 186 days for priority NDAs and BLAs and 308 days for standard NDAs and BLAs. Trends in average FDA review time for the subset of NDAs and BLAs that were for innovative drugs were similar to trends for all priority or standard NDAs and BLAs. For the subset of priority NDAs and BLAs that were for innovative drugs, average FDA review times were sometimes longer and sometimes shorter than those for all priority NDAs and BLAs; review times for the subset of standard NDAs and BLAs that were for innovative drugs were generally slightly longer than review times for all standard NDAs and BLAs. We were unable to calculate the average time to final decision for original NDAs and BLAs--that is, the average time elapsed between submission of an application and the sponsor's withdrawal of the application or FDA's issuance of an approval or denial action letter in the last completed review cycle. Time to final decision includes FDA review time as well as time that elapsed between review cycles while FDA was waiting for the sponsor to resubmit the application. We were unable to complete this calculation because most cohorts were still open for these purposes (i.e., fewer than 90 percent of submissions had received a final action such as approval, denial, or withdrawal). Specifically, for priority NDAs and BLAs, only four cohorts (FYs 2001, 2002, 2005, and 2006) had at least 90 percent of submissions closed, and for standard NDAs and BLAs, only one cohort (FY 2002) had at least 90 percent of submissions closed. (See app. I, table 4 for details.) 
As a result, there were too few completed cohorts available to calculate the time to final decision in a meaningful way. FDA may opt to consider an application withdrawn (and thus closed) if the sponsor fails to resubmit the application within 1 year after FDA issues a complete response letter. When we examined the open applications using this criterion, we identified 194 open NDAs and BLAs in FYs 2000 through 2010 for which FDA had issued a complete response letter in the most recent review cycle but had not yet received a resubmission from the sponsor. FDA had issued the complete response letter more than 1 year earlier for 162 (84 percent) of these applications. The percentage of priority NDAs and BLAs receiving an approval letter at the end of the first review cycle exhibited a sharp 1-year decline from FY 2000 to FY 2001, then increased substantially from FY 2001 through FY 2007, before decreasing again from FY 2007 through FY 2010 (see fig. 3). The percentage of first-cycle approvals for standard NDAs and BLAs showed a similar 1-year decline from FY 2000 to FY 2001, then varied somewhat but generally increased from FY 2002 through FY 2010. Although review of the FY 2011 cohort was incomplete at the time we received FDA's data, 93 percent of the priority NDAs and BLAs that had received a first-cycle action letter had been approved, as had 42 percent of the standard NDAs and BLAs. Trends for FYs 2000 through 2010 in the percentage of first-cycle approvals were similar for the subset of NDAs and BLAs that were for innovative drugs when compared to trends for all priority or standard NDAs and BLAs. For the subset of priority NDAs and BLAs for innovative drugs, the percentage of first-cycle approvals was generally higher than for all priority NDAs and BLAs. 
For standard submissions, the percentage of first-cycle approvals for innovative drugs was generally lower than for all standard NDAs and BLAs; for some cohorts (e.g., FYs 2000, 2004-2006, and 2008) this difference was substantial. FDA met most of its performance goals for priority and standard original efficacy supplements to approved NDAs and BLAs for the FYs 2000 through 2010 cohorts. However, the average FDA review time generally increased during this period for both priority and standard efficacy supplements. The percentage of FDA first-cycle approvals fluctuated for priority efficacy supplements but generally increased for standard efficacy supplements for the FYs 2000 through 2010 cohorts. FDA met most of its performance goals for efficacy supplements to approved NDAs and BLAs during our analysis period. Specifically, FDA met the performance goals for both priority and standard efficacy supplements for 10 of the 11 completed cohorts we examined (see fig. 4). Although the FY 2011 cohort was still incomplete at the time we received FDA's data, based on efficacy supplements on which it had taken action, FDA was meeting the goal for both priority and standard efficacy supplements. Average FDA review time generally increased during our analysis period for both priority and standard efficacy supplements. Specifically, average FDA review time for priority efficacy supplements increased from 173 days in the FY 2000 cohort to a peak of 205 days in the FY 2009 cohort and then fell in the FY 2010 cohort to 191 days (see fig. 5). For standard efficacy supplements, average FDA review time rose from 285 days in the FY 2000 cohort to a peak of 316 days in the FY 2008 cohort and then fell in the FY 2010 cohort to 308 days. Although the FY 2011 cohort was still incomplete at the time we received FDA's data, average FDA review time for efficacy supplements on which FDA had taken action was 195 days for priority submissions and 284 days for standard submissions. 
As with NDA and BLA submissions, we were unable to calculate the average time to final decision for efficacy supplements in any meaningful way because there were too few completed cohorts. Specifically, for priority efficacy supplements, only four cohorts (FYs 2000, 2001, 2004, and 2007) had at least 90 percent of submissions closed, and for standard efficacy supplements, only one cohort (FY 2005) had at least 90 percent of submissions closed. (See app. II, table 9 for details.) FDA may opt to consider an application withdrawn (and thus closed) if the sponsor fails to resubmit the application within 1 year after FDA issues a complete response letter. When we examined the open applications using this criterion, we identified 196 open efficacy supplements in FYs 2000 through 2010 for which FDA had issued a complete response letter in the most recent review cycle but had not yet received a resubmission from the sponsor. FDA had issued the complete response letter more than 1 year earlier for 168 (86 percent) of these submissions. The percentage of priority efficacy supplements receiving an approval decision at the end of the first review cycle fluctuated for FYs 2000 through 2010, ranging between 47 percent and 80 percent during this time (see fig. 6). The results for standard efficacy supplements showed a steadier increase than for priority submissions. Specifically, the percentage of first-cycle approvals rose from 43 percent in the FY 2000 cohort to 69 percent in the FY 2010 cohort. Although the FY 2011 cohort was still incomplete at the time we received FDA's data, 63 percent of first-cycle action letters for standard submissions and 92 percent of first-cycle action letters for priority submissions issued by that time were approvals. The industry groups and consumer advocacy groups we interviewed noted a number of issues related to FDA's review of prescription drug applications. 
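The 1-year withdrawal criterion applied above amounts to a simple filter over open submissions: flag those whose most recent action was a complete response letter issued more than 1 year before the data cutoff. The field names and sample records below are hypothetical, not FDA's actual data layout:

```python
from datetime import date, timedelta

def eligible_for_withdrawal(open_submissions, as_of: date):
    """Return open submissions whose complete response letter is more than
    1 year old as of the given date (per the criterion FDA may apply)."""
    cutoff = as_of - timedelta(days=365)
    return [s for s in open_submissions
            if s["last_action"] == "complete response"
            and s["action_date"] < cutoff]

# Hypothetical open submissions as of the data cutoff used in the report.
sample = [
    {"id": "NDA-1", "last_action": "complete response", "action_date": date(2010, 5, 1)},
    {"id": "NDA-2", "last_action": "complete response", "action_date": date(2011, 9, 1)},
    {"id": "BLA-3", "last_action": "pending", "action_date": None},
]
flagged = eligible_for_withdrawal(sample, as_of=date(2011, 11, 30))
print([s["id"] for s in flagged])  # ['NDA-1']
```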
The most commonly mentioned issues raised by industry and consumer advocacy stakeholder groups were actions or requirements that stakeholders believe can increase review times and insufficient communication between FDA and stakeholders throughout the review process. Industry stakeholders also noted a lack of predictability and consistency in reviews. Consumer advocacy group stakeholders noted issues related to inadequate assurance of the safety and efficacy of approved drugs. FDA is taking steps that may address many of these issues. Most of the seven stakeholder groups we interviewed told us that there are actions and requirements that can lengthen FDA's review process. For example, four of the five consumer advocacy group stakeholders noted that FDA does not require sponsors to submit electronic applications; three of these stakeholders noted that requiring electronic applications could make the review process faster. Additionally, the two industry stakeholders told us that they believe FDA should approve more applications during the first review cycle. We found that an average of 44 percent of all original NDAs and BLAs submitted in FYs 2000 through 2010 were approved during the first review cycle, while 75 percent were ultimately approved. In addition, the two industry stakeholders that we interviewed pointed to requirements that they believe lengthen review times, although the consumer advocacy group stakeholders did not agree with these points. For example, both industry stakeholders noted that working out the implementation of REMS requirements introduced in FDAAA slowed FDA's review process. One industry stakeholder stated that discussions about REMS often happened late in the review process, resulting in an increase in review times; another noted that REMS requirements have not been standardized, contributing to longer review times.
In contrast, one consumer advocacy group stakeholder that we interviewed suggested that standardized REMS requirements or a "one size fits all" approach would not be meaningful as a risk management strategy. The industry and consumer advocacy group stakeholders also disagreed on another issue that can potentially lengthen the review process--FDA's process for using outside scientific expertise for the review of applications. The two industry stakeholders we interviewed stated that the rules surrounding consultation with an advisory committee-- particularly those related to conflicts of interest--can extend the time it takes FDA to complete the review process. In contrast, two of the consumer advocacy group stakeholders we interviewed specifically stated that FDA should be concerned with issues of conflict of interest in advisory committees used during the drug review process. FDA has taken or plans to take several steps that may address issues stakeholders noted can lengthen the review process, including issuing new guidance, commissioning and issuing assessments of the review process, training staff, and establishing programs aimed at helping sponsors. For example, according to the draft agreement with industry for the upcoming prescription drug user fee program reauthorization, FDA would issue guidance on the standards and format for submitting electronic applications and would begin tracking and reporting on the number of electronic applications received. In addition, according to the draft agreement, FDA would publish both an interim and a final assessment of the review process for innovative drugs and then hold public meetings for stakeholders to present their views on the success of the program, including its effect on the efficiency and effectiveness of first-cycle reviews. FDA would also provide training to staff on reviewing applications containing complex scientific issues, which may improve FDA's ability to grant first-cycle approvals where appropriate. 
In addition, FDA would issue guidance on assessing the effectiveness of REMS for a particular drug and would hold public meetings to explore strategies to standardize REMS, where appropriate. However, we did not identify any examples of steps FDA has taken to address industry stakeholder issues with leveraging outside expertise during the drug review process in any of the recently released strategy, assessment, and guidance documents we reviewed. Most of the seven stakeholder groups we interviewed (two industry and five consumer advocacy groups) told us that there is insufficient communication between FDA and stakeholders throughout the review process. For example, both of the industry stakeholders noted that FDA does not clearly communicate the regulatory standards that it uses to evaluate applications. In particular, the industry stakeholders noted that the regulatory guidance documents issued by FDA are often out of date or the necessary documents have not yet been developed. Additionally, both industry stakeholders and two consumer advocacy group stakeholders noted that after sponsors submit their applications, insufficient communication from FDA prevents sponsors from learning about deficiencies in their applications early in FDA's review process. According to these four stakeholders, if FDA communicated these deficiencies earlier in the process, sponsors would have more time to address them; this would increase the likelihood of first-cycle approvals. Finally, three consumer advocacy group stakeholders also noted that FDA does not sufficiently seek patient input during reviews. One stakeholder noted that it is important for FDA to incorporate patient perspectives into its reviews of drugs because patients might weigh the benefits and risks of a certain drug differently than FDA reviewers.
FDA has taken or plans to take several steps that may address stakeholders' issues with the frequency and quality of its communications with stakeholders, including conducting a review of its regulations, establishing new review programs and communication-related performance goals, providing additional staff training, and increasing its efforts to incorporate patient input into the review process. FDA is in the process of reviewing its regulations to identify burdensome, unclear, obsolete, ineffective, or inefficient regulations and is soliciting stakeholder input on additional rules that could be improved. In addition, according to the draft agreement with industry, FDA would establish a review model with enhanced communication requirements for innovative drugs, including requirements to hold pre- and late-cycle submission meetings with sponsors as well as to update sponsors following FDA's internal midcycle review meetings. Additionally, under the draft user fee agreement, FDA would inform sponsors of the planned review timeline and any substantive review issues identified thus far within 74 days of receipt for 90 percent of original NDAs, BLAs, and efficacy supplements. FDA would also issue guidance, develop a dedicated drug development training staff, and provide training on communication for all CDER staff involved in the review of investigational new drugs. In addition, FDA would increase its utilization of patient representatives as consultants to provide patient views early in the product development process and to ensure those perspectives are considered in regulatory discussions. More specifically, FDA would expect to start with a selected set of disease areas and meet with the relevant patient advocacy groups and other interested stakeholders to determine how to incorporate patient perspectives into FDA's decision making. The two industry stakeholders that we interviewed also told us that there is a lack of predictability and consistency in FDA's reviews of drug applications.
For example, both stakeholders noted that there is sometimes inconsistent application of criteria across review divisions or offices. Further, both industry stakeholders we interviewed noted that FDA lacks a structured benefit-risk framework to refer to when making decisions, which they believe would improve the predictability of the review process. FDA has taken or plans to take steps that may address stakeholders' issues with the predictability and consistency of its reviews of drug applications. For example, FDA plans to provide training related to the development, review, and approval of drugs for rare diseases, which may help to improve the consistency of FDA's review of those drugs. In addition, FDA has appointed a Deputy Commissioner for Medical Products to oversee and manage CBER, CDER, and the Center for Devices and Radiological Health (CDRH) in an attempt to improve integration and consistency between the centers. Furthermore, FDA has agreed to create a 5-year plan to develop and implement a structured benefit-risk framework in the review process. FDA will also revise its internal guidance to incorporate a structured benefit-risk framework and then train its review staff on these revisions. Three of the five consumer advocacy group stakeholders that we spoke with raised issues about whether FDA is adequately ensuring the safety and efficacy of the drugs it approves for marketing. All three of these stakeholders told us that FDA should place greater priority on safety and efficacy over review speed. In addition, three stakeholders told us that FDA does not gather enough data on long-term drug safety and efficacy through methods such as postmarket surveillance. One stakeholder suggested that FDA should more effectively utilize its Sentinel System for adverse event reporting. These concerns have also been extensively discussed elsewhere. 
FDA has taken or plans to take steps that may address stakeholders' issues with the safety and efficacy of approved drugs, including publishing a regulatory science strategic plan. This document describes various plans FDA has for emphasizing safety and efficacy, such as developing assessment tools for novel therapies, assuring safe and effective medical innovation, and integrating complex data (including postmarket data) to allow for better analyses. FDA has also published a report identifying needs that, if addressed, would enhance scientific decision making in CDER. Some of the needs identified included improving access to postmarket data sources and exploring the feasibility of different postmarket analyses; improving risk assessment and management strategies to reinforce the safe use of drugs; and developing and improving predictive models of safety and efficacy in humans. Finally, in the draft agreement with industry, FDA has committed to conducting both an interim and a final assessment of the strengths, limitations, and appropriate use of the Sentinel System for helping FDA determine the regulatory actions necessary to manage safety issues. FDA met most of the performance goals for the agency to review and issue action letters for original NDA and BLA submissions, Class 1 and Class 2 resubmissions, and original efficacy supplements for the FYs 2000 through 2010 cohorts. FDA review times increased slightly for original NDAs, BLAs, and efficacy supplements during this period while changes in the percentage of first-cycle approvals varied by application type. While FDA has met most of the performance goals we examined, stakeholders we spoke with point to a number of issues that the agency could consider to improve the drug review process; FDA is taking or has agreed to take steps that may address these issues, such as issuing new guidance, establishing new communication-related performance goals, training staff, and enhancing scientific decision making. 
It is important for the agency to continue monitoring these efforts in order to increase the efficiency and effectiveness of the review process and thereby help ensure that safe and effective drugs are reaching the market in a timely manner. HHS reviewed a draft of this report and provided written comments, which are reprinted in appendix IV. HHS generally agreed with our findings and noted that they reflect what the agency reported for the same time period. HHS also called attention to activities FDA has undertaken to improve the prescription drug review process. It highlighted FDA's performance in approving innovative drugs in FY 2011. HHS also noted steps FDA will take to contribute to medical product innovation including expediting the drug development pathway and streamlining and reforming FDA regulations. Finally, HHS discussed enhancements to the drug review program that were included in the proposed recommendations for the 2012 reauthorization of the prescription drug user fee program, such as establishing a new review program for innovative drugs, enhancing benefit-risk assessment, and requiring electronic submissions and standardization of electronic application data to improve efficiency. HHS also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Commissioner of the Food and Drug Administration, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix V. Includes only those submissions that had received a final FDA action letter (i.e., approval) in their last completed review cycle or were withdrawn by the sponsor at the time we received FDA's data, which include reviews by CBER and CDER through November 30, 2011. We defined a submission as open if the most recent review cycle was still underway (i.e., pending) or if FDA had issued a complete response letter in the most recent review cycle and the sponsor still had the option of resubmitting the application under the original user fee. Submissions that have received a complete response letter are considered complete for purposes of determining whether FDA met the PDUFA performance goals, but the review is not closed. Prior to August 2008, FDA also issued "approvable" and "not approvable" letters, which served the same purpose as the complete response letters currently used. We grouped these three types of letters together in our analysis. The FY 2011 cohort was not complete at the time we received FDA's data, which include reviews by CBER and CDER through November 30, 2011. Therefore, values indicated for FY 2011 in the table above may change as these reviews are completed. Our analysis was limited to resubmissions made in FYs 2000 through 2011 for original NDAs and BLAs that were also submitted in FYs 2000 through 2011. Resubmissions made in FYs 2000 through 2011 for original NDAs and BLAs submitted prior to FY 2000 were not captured by our analysis. "I.D." stands for innovative drugs, a subset of all priority original NDAs and BLAs that includes nearly all BLAs and those NDAs designated as new molecular entities (NMEs).
In FY 2000, Class 1 resubmissions were also subject to a 4-month goal time frame which is not shown in our analysis. Our calculations include extensions of the PDUFA goal time frame, where applicable. PDUFA goal time frames for Class 2 resubmissions can be extended for 3 months if the sponsor submits a major amendment to the resubmission within 3 months of the goal date. For Class 2 NDA/BLA resubmissions in these cohorts, 45 out of 463 submissions (9.7 percent) received goal extensions. For the FY 2011 cohort, 55 out of 79 standard NDA and BLA submissions (70 percent) and 7 out of 22 priority submissions (32 percent) were still under review at the time we received FDA's data, which include reviews by CBER and CDER through November 30, 2011. Therefore, values indicated for FY 2011 in the table above may change as these reviews are completed. Includes only those submissions that had received an approval letter in their last completed review cycle at the time we received FDA's data, which include reviews by CBER and CDER through November 30, 2011. Dashes (--) indicate cohorts for which no submissions met the criteria. Our calculations include extensions of the PDUFA goal time frame, where applicable. PDUFA goal time frames can be extended for 3 months if the sponsor submits a major amendment to the application within 3 months of the goal date. For priority efficacy supplements in FYs 2000 through 2011, 24 out of 400 submissions (6 percent) received PDUFA goal extensions. Average review time for the first review cycle for original submissions. Resubmissions are subject to different PDUFA goal time frames. Dashes (--) indicate cohorts for which no submissions met the criteria. Includes only those submissions that had received a first-cycle FDA action letter at the time we received FDA's data, which include reviews by CBER and CDER through November 30, 2011. 
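The 3-month goal extension rule described in these notes (the goal time frame is extended by 3 months when the sponsor submits a major amendment within 3 months of the goal date) can be expressed as a small date calculation. The month arithmetic and function names below are illustrative assumptions; FDA's actual goal-date bookkeeping may differ:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, clamping the day to the target month's length."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30,
                     31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, days_in_month))

def goal_date_with_extension(goal: date, amendment=None) -> date:
    """Extend the PDUFA goal date by 3 months if a major amendment arrives
    within 3 months of the original goal date (per the rule in the text)."""
    if amendment is not None and add_months(goal, -3) <= amendment <= goal:
        return add_months(goal, 3)
    return goal

# Hypothetical goal date and amendment dates.
original_goal = date(2010, 6, 10)
print(goal_date_with_extension(original_goal, date(2010, 5, 1)))   # 2010-09-10 (extended)
print(goal_date_with_extension(original_goal, date(2009, 12, 1)))  # 2010-06-10 (unchanged)
```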
Prior to August 2008, FDA also issued "approvable" and "not approvable" letters, which served the same purpose as the complete response letters currently used. We grouped these three types of letters together in our analysis. Includes only those submissions that had received a final FDA approval letter in their last completed review cycle or were withdrawn by the sponsor at the time we received FDA's data, which include reviews by CBER and CDER through November 30, 2011. Our calculations include extensions of the PDUFA goal time frame, where applicable. PDUFA goal time frames can be extended for 3 months if the sponsor submits a major amendment to the application within 3 months of the goal date. For standard efficacy supplements in FYs 2000 through 2011, 90 out of 1,528 submissions (6 percent) received PDUFA goal extensions. FYs 2008 through 2011 calculations exclude submissions for which FDA had not yet issued an action letter.
FDA centers and offices: Center for Drug Evaluation and Research (CDER); Office of the Center Director (OCD); Office of Information Technology (OIT/OIM); Office of Planning and Informatics (OPI); Office of Counter-Terrorism and Emergency Coordination (OCTEC); Office of Pharmaceutical Science (OPS); Office of Regulatory Affairs (ORA); Office of the Commissioner (OC); Shared Service (SS). SS FTEs were not separated from the center FTEs until FY 2004. In addition to the contact named above, Robert Copeland, Assistant Director; Carolyn Fitzgerald; Cathleen Hamann; Karen Howard; Hannah Marston Minter; Lisa Motley; Aubrey Naffis; and Rachel Schulman made key contributions to this report.
The Food and Drug Administration (FDA) within the Department of Health and Human Services (HHS) is responsible for overseeing the safety and efficacy of drugs and biologics sold in the United States. New drugs and biologics must be reviewed by FDA before they can be marketed, and the Prescription Drug User Fee Act (PDUFA) authorizes FDA to collect user fees from the pharmaceutical industry to support its review of prescription drug applications, including new drug applications (NDA), biologic license applications (BLA), and efficacy supplements that propose changes to the way approved drugs and biologics are marketed or used. Under each authorization of PDUFA since 1992, FDA committed to performance goals for its drug and biologic reviews. In preparation for the next PDUFA reauthorization, GAO was asked to examine FDA's drug and biologic review processes. In this report, we (1) examine trends in FDA's NDA and BLA review performance for fiscal years (FY) 2000 through 2010, (2) examine trends in FDA's efficacy supplement review performance for FYs 2000 through 2010, and (3) describe issues stakeholders have raised about the drug and biologic review processes and steps FDA is taking that may address these issues. To do this work, GAO examined FDA drug and biologic review data, reviewed FDA user fee data, interviewed FDA officials, and interviewed two industry groups and five consumer advocacy groups. All of the stakeholder groups participated in at least half of the meetings held by FDA to discuss the reauthorization of the prescription drug user fee program. FDA met most performance goals for priority and standard NDAs and BLAs received from FY 2000 through FY 2010. FDA meets its performance goals by completing its review and issuing an action letter--such as an approval or a response detailing deficiencies that are preventing the application from being approved--for a specified percentage of applications within a designated period of time. 
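The goal-attainment test described above reduces to a percentage check: did FDA issue action letters within the goal time frame for at least the required share of the cohort? A minimal sketch; the 300-day window and 90 percent threshold in the example are representative PDUFA-style values used for illustration, not the actual goals for any particular cohort:

```python
def met_performance_goal(review_times_days, goal_days, required_share=0.90):
    """True if the share of submissions acted on within goal_days meets or
    exceeds the required share (e.g., 90 percent under several PDUFA goals)."""
    if not review_times_days:
        return True  # no submissions in the cohort; goal is vacuously met
    on_time = sum(1 for t in review_times_days if t <= goal_days)
    return on_time / len(review_times_days) >= required_share

# Hypothetical cohort of five standard submissions against a 300-day goal:
cohort = [250, 290, 310, 280, 295]
print(met_performance_goal(cohort, goal_days=300))  # False (4 of 5 = 80 percent)
```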
FDA designates NDAs and BLAs as either priority--if the product would provide significant therapeutic benefits when compared to available drugs--or standard. FDA met the performance goals for both priority and standard NDAs and BLAs for 10 of the 11 fiscal years GAO examined; FDA did not meet either of the goals for FY 2008. Although FDA had not yet issued an action letter for all of the applications it received in FY 2011 and results are therefore preliminary, FDA was meeting the goals for both priority and standard NDAs and BLAs on which it had taken action. Meanwhile, FDA review time for NDAs and BLAs--the time elapsed between FDA's receipt of an application and issuance of an action letter--increased slightly from FY 2000 through FY 2010. In addition, the percentage of NDAs and BLAs receiving an approval letter at the end of the first review cycle generally increased, although that percentage has decreased for priority NDAs and BLAs since FY 2007. FDA met most of its performance goals for efficacy supplements from FY 2000 through FY 2010. Specifically, FDA met the performance goals for both priority and standard efficacy supplements for 10 of the 11 fiscal years GAO examined. FDA review time generally increased during the analysis period for both priority and standard efficacy supplements. The percentage of priority efficacy supplements receiving an approval letter at the end of the first review cycle fluctuated from FY 2000 through FY 2010, ranging between 47 percent and 80 percent during this time. The results for standard efficacy supplements showed a steadier increase with the percentage of first-cycle approval letters rising from 43 percent for FY 2000 applications to 69 percent for FY 2010 applications. The industry groups and consumer advocacy groups we interviewed noted a number of perceived issues related to FDA's review of drug and biologic applications. 
The most commonly mentioned issues raised by industry and consumer advocacy stakeholder groups were actions or requirements that can increase review times (such as taking more than one cycle to approve applications) and insufficient communication between FDA and stakeholders throughout the review process. Industry stakeholders also noted a perceived lack of predictability and consistency in reviews. Consumer advocacy group stakeholders noted issues related to inadequate assurance of the safety and effectiveness of approved drugs. FDA is taking steps that may address many of these issues, including issuing new guidance, establishing new communication-related performance goals, training staff, and enhancing scientific decision making. In commenting on a draft of this report, HHS generally agreed with GAO's findings and noted that they reflect what the agency reported for the same time period. HHS also called attention to activities FDA has undertaken to improve the prescription drug review process.
EPA was established in 1970 to protect human health and safeguard the natural environment. EPA is staffed with large numbers of technically trained personnel; more than half of its employees are engineers, scientists, and environmental protection specialists. Today, it employs 18,000 people. EPA is headquartered in Washington, D.C., and has 10 regional offices and laboratories across the country. EPA's OCR, a staff office in the Office of the Administrator, is responsible for managing the agency's discrimination complaints program. This program is intended to ensure that all EPA employees and applicants for employment are afforded equal employment and advancement opportunities free of discrimination. Moreover, OCR is responsible for the timely processing and resolution of discrimination complaints. Specifically, discrimination complaints are processed by OCR's Compliance and Internal Resolution Team. Over the years, allegations and complaints have been made that EPA tolerates discrimination, retaliates against whistleblowers, and fails to take corrective action on these matters. The agency's policies and practices were further questioned when an employee won a high-profile court case in 2000. EPA's EEO practices have also attracted congressional interest in general and in untimely complaint processing in particular. Hearings before the House Committee on Science in October 2000 highlighted alleged discriminatory conduct. EPA, like other federal agencies, is required to comply with the nation's civil rights laws. Title VII of the Civil Rights Act of 1964, as amended, makes it illegal for employers to discriminate against their employees or job applicants on the basis of race, color, religion, sex, or national origin (42 U.S.C. 2000e et seq.). The Equal Pay Act of 1963 protects men and women who perform substantially equal work in the same establishment from sex-based wage discrimination (29 U.S.C. 206(b)).
The Age Discrimination in Employment Act of 1967, as amended, prohibits employment discrimination against individuals who are 40 years of age and older (29 U.S.C. 621 et seq.). Sections 501 and 505 of the Rehabilitation Act of 1973, as amended, prohibit discrimination against qualified individuals with disabilities who work or apply to work in the federal government (29 U.S.C. 791 and 794a). Federal agencies are required to make reasonable accommodations to qualified employees or applicants with disabilities except when such accommodation would cause an undue hardship. EEOC is responsible for enforcing all of these laws. In addition, a person who files a complaint or participates in an investigation of an EEO complaint or who opposes an employment practice made illegal under any of the statutes enforced by EEOC is protected from retaliation or reprisal. EPA's EEO program, like those in other agencies, is subject to several regulations. EPA is responsible for developing and implementing its own equal employment program, including establishing or making available alternative dispute resolution programs and adopting complaint processing procedures as required by 29 C.F.R. Part 1614. EEOC Management Directive 110 (Federal Complaints Processing Manual) provides general guidance on how agencies should process employment discrimination complaints. Agencies are also required to provide EEO discrimination complaint data to EEOC (29 C.F.R. 1614.602). EEOC compiles these data and reports them to Congress each year in the EEOC Annual Report on the Federal Workforce. Information contained in EPA's discrimination complaint data system was unreliable because of data entry problems. EPA officials also maintain that the computer software, which was obtained from a now-defunct supplier, was flawed and not able to report data accurately.
Reliable discrimination complaint data are necessary for EPA's OCR to track complaints and look for trends that might indicate the need for specific actions and to respond to EEOC reporting requirements. EPA recently implemented a new EEO data system and is taking steps to train staff members and hold them accountable for maintaining the data system. Officials attributed data system weaknesses in part to a now-defunct data management company whose data system was used to track and process discrimination complaint information. Officials said the system was flawed and was further compromised because EPA's EEO specialists did not always enter, update, or maintain discrimination complaint data. As a result, EPA had difficulty providing accurate EEO information. Moreover, EPA had trouble discerning if there are trends in workplace problems that lead to EEO complaints; this in turn has inhibited understanding sources of conflict and planning corrective actions. EEOC regulations point out that agencies should make every effort to ensure accurate record keeping and reporting of EEO data. Data foster transparency, which provides an incentive to improve performance and enhances the image of the agency in the eyes of both its employees and the public. We initially requested discrimination complaint data for a 10-year period (1991-2000). However, OCR officials said they had no confidence in discrimination complaint data prior to fiscal year 1995 because the data are unreliable and source documents were not available to permit their reconstruction. OCR provided discrimination complaint data for fiscal years 1995 through 2002; however, in reviewing these data, we found that the information was incorrect. These data understated the actual number of discrimination complaints on hand, the number of new discrimination complaints filed, the number of complaints closed, and the year ending numbers.
Also, the data provided to us differed from the discrimination complaint data reported to EEOC. For example, the number of discrimination complaints on hand at the end of fiscal year 2000 was reported to us as 176, but EPA reported to EEOC that the number was 264. The number of new discrimination complaints filed in 2000 was reported to us as 79, but the number reported to EEOC was 75. After we pointed out some problems with the data, OCR manually reviewed source documents and revised these numbers. We did not verify the accuracy of the revised numbers because doing so would have required considerable effort to reconstruct all the data. To determine if the numbers provided for complaints on hand, new, closed, and ending were supportable, we reviewed the information EPA reconstructed, including handwritten notes. We also selected a number of supporting documents for review and found that the data reported agreed with the supporting documentation. These documents were also reviewed to determine if the numbers of complaints reported to us matched those reported to EEOC. Although we believe the reconstructed numbers are indicative of the situation at EPA, we cannot attest to the overall accuracy of these data. Table 1 shows the number of complaints on hand at the start of the year and the numbers of complaints filed, closed, and on hand at the end of the year for fiscal years 1995 through 2002 as reported to EEOC. The number of complaints closed fluctuated from a low of 44 in 1999 to a high of 123 in 2001. For fiscal years 1995 through 2002, a total of 548 people filed 679 complaints. The number of discrimination complainants is usually less than the number of complaints filed because more than one complaint can be made by a complainant. As table 2 shows, the number of complainants and discrimination complaints filed spiked in fiscal years 1998 and 2002. OCR officials could not provide any explanation for the increased numbers of complainants and complaints filed in these years. 
The agency closed 588 complaints during this period, including 125 dismissals; 48 withdrawals; 222 agency decisions, none of which found for the complainant; and 178 settlements. Settlements represented 30 percent of all discrimination complaints closed over the period. In each year from fiscal year 1996 to 2000, the number of cases settled at the agency was fewer than 20, while 54 cases were settled in 2001. These settlements represented 44 percent of all discrimination complaint cases closed in 2001. According to agency officials, a number of settlements were reached during 2001 as part of an effort to eliminate the large number of backlogged complaints. Settlements can be achieved by different methods. For example, for the years 1996 through 2001, a total of 29 discrimination complaint cases were settled at the EEOC hearing stage while another 7 cases were settled while pending before federal district courts. Beginning in 2000, as required by EEOC, EPA began a program to make Alternative Dispute Resolution (ADR) available in the precomplaint and formal complaint processes. The agency uses mediation as its alternative method to resolve EEO complaints and administrative grievances. During the first 6 months of fiscal year 2003, there were 18 requests for mediation, of which 14 EEO cases were accepted for mediation, 1 case was under review, and 3 cases were pending further action. The data showed that headquarters discrimination complaints focused mainly on race, reprisal, gender, and age. The specific issues addressed in these complaints were non-selection for promotion, appraisal, and harassment. Similarly, in regional offices the most often cited bases for discrimination complaints were race, reprisal, and gender. The specific issues most cited in the regional complaints were non-selection for promotion, appraisal, harassment, and time and attendance. Table 3 lists the percentages of complaints by the bases of complaint. 
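The settlement shares cited above follow directly from the closure counts reported in this section; a quick arithmetic check (all figures are from this section):

```python
# Illustrative check of the settlement percentages cited above, using
# the closure counts reported for fiscal years 1995-2002.
closed_total = 588    # complaints closed over the period
settled_total = 178   # of those, closed via settlement

share_all_years = round(100 * settled_total / closed_total)  # about 30 percent

closed_2001 = 123     # complaints closed in fiscal year 2001 (the period high)
settled_2001 = 54     # settlements reached in fiscal year 2001

share_2001 = round(100 * settled_2001 / closed_2001)         # about 44 percent

print(f"settlements, all years: {share_all_years}%")
print(f"settlements, 2001: {share_2001}%")
```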
Table 4 lists the percentages of complaints by the issues of the complaint. EPA takes a long time to process complaints. Over the fiscal year 1995-2002 period, it took an average of 663 days from the time a complaint was filed until it was closed. A major contributing factor to this lengthy process was the time used to investigate complaints. Over the same 8-year period, the average time to complete an investigation was 465 days. EEOC regulations require EPA and other agencies to complete investigations within 180 days of receiving discrimination complaints unless the period is extended. In 2002, the average number of days for completed investigations was 427 days, in comparison to the 180-day standard. Discrimination complaint cases closed in 2002 took an average of 839 days to process. When compared to the other 23 agencies that are required to comply with the CFO Act, EPA's total number of days to process a complaint from filing to closing ranked fifth highest in 2002. EPA is taking steps to improve data system reliability. It contracted with a company to procure an EEO data system and to train employees on how to use the new software program. This software (EEO-Net) is designed to automate data entry, case tracking, and reporting requirements. The procurement process began in February 2002, and it was originally estimated that the new system would be in place and fully operational in June 2002. An EPA official told us that the EEO-Net system became operational on January 15, 2003. OCR is depending on this new system to alleviate many of the inaccuracy and inconsistency problems with discrimination complaint data. Its implementation is also expected to permit identification of trends, to alert both regional and headquarters staff members to problem areas, and to serve as an early warning system. 
According to EPA officials, the new system is expected to automatically and accurately generate data for completing EEOC's Annual Federal Equal Employment Opportunity Statistical Report of Discrimination Complaints. The Air Force has successfully used the EEO-Net software program for over 3 years for military personnel and is installing the program for use with its civilian workforce. Officials at the National Labor Relations Board, Broadcasting Board of Governors, Government Printing Office, and EEOC have all recently installed the system and are pleased with the results thus far. As discussed previously, data in the old system were not accurately entered, updated, or maintained by EEO specialists. In an interim effort to resolve these data problems, OCR hired a person whose responsibilities include entering, updating, and maintaining the data. OCR is also developing new performance standards for EEO specialists that rate them on inputting and maintaining the data. The new performance standards are intended to ensure that the data problems do not recur. Specialists are to be held accountable for maintaining accurate discrimination complaint data as part of their assigned duties. According to OCR officials, EPA has never adopted standard operating procedures for processing internal complaints of discrimination, but it developed draft procedures in July 2001. Although these procedures are in draft form, OCR's staff uses them as guidance. EPA officials said they were waiting until the EEO-Net software is fully operational to finalize the standard operating procedures. The system became operational in January 2003, but as of May 2003, the procedures were still in draft form. The draft standard operating procedures provide detailed step-by-step instructions for OCR's staff to follow, from when a complaint is filed through final resolution. 
For example, Section II, "Checklist for Preparing Correspondence," includes instructions on when and how to prepare mailings related to discrimination complaints. Section IV of the procedures addresses the steps necessary for OCR to process individual complaints, including steps to follow upon complaint receipt, complaint acknowledgment, request for the EEO Counselor's Report, and all subsequent steps of the process up to the complaint's resolution at the formal stage. The draft standard operating procedures also identify data that can be used by OCR for trend analysis and address management and tracking of counselor assignments. OCR's staffing has increased from four to nine in the past 8 years, and the office plans to hire additional staff members. (See table 3.) EEOC regulations require that agencies provide sufficient resources to their EEO programs to ensure efficient and successful operation. EPA's 2001 Federal Managers' Financial Integrity Act Report stated that EPA was unable to process complaints in a timely manner and identified this situation as a material weakness and an agency weakness. The most recent report states that OCR had hired additional staff members and made other changes, such as changing the contractors who conduct investigations, and now believes it can ensure the timely processing of discrimination complaints; the report recommends that this material weakness be closed. OCR officials told us that additional staffing would help facilitate timely processing of discrimination complaints. In June 2002, they said that they had two vacancy announcements out to recruit an additional GS-13 Equal Employment Specialist to process complaints and one GS-14 Senior Equal Employment Specialist to develop final agency decisions, prepare appeal briefs, and process complex complaint cases. OCR is currently planning to fill only the GS-14 position and, as of May 2003, the selection process was still under way. 
In addition, OCR embarked on a training effort in 2001 to increase the number of collateral duty counselors. As a result, an additional 20 counselors were trained to serve as first points of contact for employees considering filing discrimination complaints. These counselors are not full-time; they perform counseling duties in addition to their other assigned duties. The EEO counselors' responsibility is to ensure that complainants understand their rights and responsibilities under the EEO process. Specifically, the counselor must let complainants know that they can opt for precomplaint resolution through participation in ADR or EEO counseling. Counselors also determine the claim and bases raised by the potential complaint, determine the complainant's timeliness in contacting the counselor, and advise the complainant of the right to file a formal complaint if ADR or counseling fails to resolve the dispute. EPA has not processed complaints in a timely manner and has had a long-standing backlog of overdue cases. The backlog was caused in part by problems with contractors whose investigations did not meet the evidence standards outlined in EEOC regulations. According to OCR officials, some of the investigations performed by companies formerly used by the office failed to provide the adequate factual records required by EEOC regulations. As a result, these inadequate investigations did not contain the facts needed, and the investigations were reassigned and redone, adding more time to complaint processing. Because of these problems with incomplete and poorly done investigations, OCR terminated contracts with certain investigative firms. In June 2002, OCR contracted with a new company to conduct discrimination complaint investigations. An OCR official told us that the company has demonstrated its ability to perform thorough and complete investigations that meet EEOC's standards for investigations. 
OCR now contracts with six companies to investigate complaints and is satisfied overall with the investigations performed. Also, OCR's draft standard operating procedures for processing complaints of discrimination require that, prior to starting an investigation, OCR provide each investigator a copy of its guidelines for conducting EEO investigations to ensure that investigators understand what is required of them. The office currently has a blanket purchase agreement in place to hire four additional companies to perform investigations. Because of the relatively recent start of the contract, an OCR official said that OCR did not have enough statistical data to evaluate contractor effectiveness. However, OCR said that the situation regarding investigations was satisfactory. In addition, EPA helped speed adjudication of backlogged cases by creating a special task team in May 2001. The initial focus of the team's efforts was on the completion of investigations and preparation of final agency decisions on backlogged complaints. Officials provided a final report that discussed the team's actions and how its stated mission was accomplished. At the beginning of the team's work, 139 discrimination complaints were identified as active with investigations not completed for 180 days or more as of June 1, 2001. The report said that 45 reports of investigation were completed and 17 were drafted and under review; 18 final agency decisions were issued and an additional 11 were drafted and under review; 10 cases were settled; 9 cases were withdrawn or dismissed; and 27 complainants had requested EEOC hearings. Only 12 of the 139 complaints were still awaiting completion of an investigation. In February 2002, OCR also selected a contractor to augment OCR's staff by providing EEO counseling, performing EEO investigations, and writing draft agency decisions. 
All draft agency decisions written by the contractor are to be reviewed and revised, if necessary, by the Office of General Counsel. OCR officials said that OCR staff members are required to review draft decisions written by the contractor within 48 hours. EPA officials said that they hope this policy will help prevent discrimination complaint case backlogs from occurring as they had in the past. Moreover, OCR says it now works during the early stages of the complaint process to move discrimination complaints to the ADR process, as appropriate. If ADR is successful, this can obviate the need for investigations. In the event that a manager or employee is formally found to have discriminated, EPA is supposed to determine on a case-by-case basis whether individual employees should be disciplined. However, EPA does not have a process in place to review discrimination complaint settlements to determine if any manager or employee has participated in improper conduct and should be disciplined. Agency officials said that settlements are no-fault, meaning that no one admits to any wrongdoing, and that no process is in place to make such determinations. We recognize that EEO complaints can be settled without there having been discriminatory conduct involved in the case. For example, an employee who is not promoted may believe the reason was his or her race and file an EEO complaint on this basis. When the case is reviewed, the agency could find that while race was not a factor, the manager did not adhere to other requirements of the merit promotion system. As a result, the agency could settle the complaint by agreeing to recompete the promotion and ensuring that all rules are followed and that the complainant receives fair consideration in the recompetition. 
However, the possibility that a settlement is unrelated to discriminatory conduct does not alter the fact that, without a process to determine whether discrimination was involved, any settlements that do involve discrimination may never be identified as such. EPA officials said that they provide managers the opportunity to change their behavior through training rather than taking disciplinary action. For example, in 2001 senior agency officials expressed concerns about managers' conduct and their compliance with Title VII of the Civil Rights Act of 1964, as amended. These concerns led to a contract with EEOC to conduct a 2-day mandatory training program for all 1,600 EPA managers in June 2002. EPA officials said that the training has improved managers' interaction with employees. However, it is unclear whether the improved management interaction with employees will result in fewer discrimination complaint filings. Officials also said that the agency has EEO performance standards for Senior Executive Service managers. Managers are evaluated according to their efforts to support EEO and fairness as part of the process for determining who gets awards. In addition, since 2001 EPA has required all employees to sign statements acknowledging the agency's zero-tolerance policy toward discrimination or harassment by managers, supervisors, or employees. Accountability is a cornerstone of results-oriented management. Because EPA's managers set the conditions and terms of work, they should be accountable for providing fair and equitable workplaces, free of discrimination and reprisal. If EPA's managers are not held accountable for their actions in cases in which discrimination has occurred, employees may not have confidence in the agency's EEO disciplinary process and may be unwilling to report cases of discrimination. Further, our past work has found that agencies that promote and achieve a diverse workplace attract and retain high-quality employees. 
For public organizations, this translates into effective delivery of essential services to communities with diverse needs. Leading organizations understand that they must support their employees in learning how to effectively interact with and manage people in a diverse workplace. Fostering an environment that is responsive to the needs of diverse groups of employees requires identification of opportunities to train managers in techniques that create a work environment that maximizes the ability of all employees to fully contribute to the organization's mission. A high-performing agency maintains an inclusive workplace in which perceptions of unfairness are minimized and workplace disputes are resolved by fair and efficient means. One way to foster openness and trust by employees is to have in place systems that hold employees responsible for discriminatory actions. Agriculture Process: In February 2003, EEOC issued a report on Agriculture's EEO program. In this report, EEOC applauded Agriculture for "holding managers accountable for their actions and disciplining them where appropriate." Since January 1998, Agriculture has reviewed cases in which discrimination was found or in which there were settlement agreements to determine if employees should be disciplined. The agency's regulations state that managers, supervisors, and other employees are to be held accountable for discrimination, civil rights violations, and related misconduct, as well as for ensuring that Agriculture's customers and employees are treated fairly and equitably. Agriculture agencies are to take appropriate corrective or disciplinary action, such as reprimands, suspensions, reductions in grade and pay, or removal. Final decisions containing a finding of discrimination and settlement and conciliation agreements are referred to the agency's Human Resources Management Office for appropriate action. 
This office monitors corrective and disciplinary actions taken in EEO and program discrimination matters. As a result of its process, Agriculture has taken over 200 corrective and disciplinary actions against managers and other employees since 1998, including removals, suspensions, and letters of reprimand. IRS Process: IRS offers another example of an agency process to review settled EEO complaints to assess whether employees should be held accountable. Since July 1998, IRS has been reviewing cases in which discrimination was found or in which there were settlement agreements to determine if the discrimination was intentional. Where an employee has been found to have discriminated against another employee (or against a taxpayer or a taxpayer's representative), the Internal Revenue Service Restructuring and Reform Act of 1998 provides that the individual be terminated (Pub. L. 105-206, Section 1203, July 22, 1998). Only the IRS Commissioner has the authority to reduce termination to a lesser penalty. If there is a finding of discrimination, a settlement agreement is reached, or EEO issues are raised during the negotiated grievance process, IRS's Office of Labor Relations refers the matter to the National Director, EEO Diversity, Discrimination Complaint Review Unit. Local and headquarters EEO offices can also refer cases to the unit. This review is designed to alert management of any EEO-related misconduct regardless of the formal pursuit of a remedy by an employee. When it receives a case, the unit determines whether formal review and fact-finding is required before making a decision. If so, the case file is forwarded to the Department of the Treasury's Inspector General for Tax Administration, with a copy of the allegation referral form to Labor Relations. Formal reviews are to be completed within 60 days. Labor Relations coordinates with the head of the involved office if the unit finds no potential violations. 
The office head is responsible for determining the appropriate administrative disposition. The office conducts a limited review of referred cases at the precomplaint stage; after a formal complaint, formal withdrawal, or lapsed case due to employee inaction; or if there was no finding of discrimination. This review makes management aware of any EEO-related misconduct regardless of the formal remedy sought by an employee. Besides not having a process to determine whether managers discriminated in settled cases, EPA does not have a process to track or routinely report data on disciplinary actions taken against managers for discrimination or other types of misconduct. Data of this nature are important because they can be a starting point for agency decision makers to understand the nature and scope of issues in the workplace involving discrimination, reprisal, and other conflicts and problems, and can help in developing strategies for dealing with those issues. Under the No FEAR Act signed into law in May 2002, agencies are required to accumulate additional information about discrimination cases. The provisions of this act are to take effect October 1, 2003, and will require EPA to begin tracking and accumulating data on disciplinary actions resulting from discrimination. Specifically, the act requires that federal agencies file annual reports with Congress detailing, among other things, the number of discrimination or whistleblower cases filed with them, how the cases are resolved, and the number of agency employees disciplined for discrimination, retaliation, or harassment. These data requirements should alert agencies and employees that they are accountable for their actions in cases involving discrimination, retaliation, or harassment. This legislation demonstrates Congress's high level of interest in discouraging discriminatory conduct and reprisal at federal agencies and the need for managers to be held accountable for such conduct. 
EPA did not have accurate data on the numbers and types of discrimination complaints made by its employees, and this in turn made discerning trends in workplace conflicts, understanding the sources of conflict, and planning corrective actions difficult. These types of data are useful in helping to measure an agency's success in adhering to merit system principles, treating its people fairly and equitably, and achieving a diverse and inclusive workforce. Having a data software system that can track cases and provide EEO managers with the information needed to discern trends to enable the development of policies is critical. EPA is relying on its newly procured EEO data system to overcome its data accumulation and reporting problems. Moreover, the agency is relying on that system to provide it the capability to track cases and identify trends that may indicate problem areas. This, in turn, illustrates the importance of the new system's effective operation. EPA has never had standard operating procedures for EEO complaint processing and has been using draft procedures prepared in July 2001. The agency should finalize the draft procedures to help ensure that OCR staff members know what they are to do and that a uniform process is used nationwide. EPA does not have a process to determine whether managers should be disciplined for their actions in settled EEO complaint cases. If agency employees have the impression that EPA's discrimination complaint process does not discipline managers who participate in discriminatory conduct, employees may be less willing to participate in the process. Employees are less likely to file discrimination complaints if they perceive that there is no benefit from doing so or if they fear reprisal. A specific process that holds managers accountable for discriminatory conduct may enhance employee confidence in the EEO environment and demonstrate the agency's commitment to providing a fair and discrimination-free environment. 
We recommend that the EPA Administrator direct that OCR evaluate its new EEO software system to ensure that it provides a reliable system for tracking cases and accumulating accountability data for EEOC. In addition, the Administrator should direct that the draft standard operating procedures for handling EEO complaints be finalized. The Administrator should also direct that a process be developed to assess every case in which discrimination is found or allegations of discrimination are settled to determine whether managers or other employees should be disciplined. In a June 11, 2003, letter (see app. I), the Director of EPA's Office of Civil Rights commented on a draft of this report. EPA generally agreed with the report's findings. EPA said that the report shows that the agency has made considerable progress in addressing the backlog of cases involving alleged discrimination and that it believes it has in place the procedures and resources to ensure that current and future complaints are processed in a timely manner. EPA's comments did not mention our recommendation to evaluate its new EEO software system to ensure that it meets the agency's need to track cases and accumulate accountability data. The comments also did not address our second recommendation about finalizing the standard operating procedures for handling EEO complaints, which have been in draft for 2 years and would be EPA's first set of official procedures. As we discussed in the report, action on both of these recommendations is important to ensuring an effective EEO program at EPA. Regarding the recommendation to establish a process to assess whether managers or other employees should be disciplined in cases in which discrimination is found or allegations are settled, EPA said that it would develop policies and procedures that will allow it to address effectively the issue of disciplinary action against any manager or employee found to have discriminated. 
This action should, when completed, address the part of the recommendation related to disciplinary action when discrimination has been found. However, it does not address the part of the recommendation dealing with the need to assess whether disciplinary action should be taken in cases where allegations of discrimination are settled. As discussed above, a process that holds managers accountable for discriminatory conduct should enhance employee confidence in the EEO environment and demonstrate the agency's commitment to providing a fair and discrimination-free environment. EPA also made several technical comments, which we incorporated in the report where appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we will make no further distribution of this report until 30 days after its date. At that time, we will send copies to the Administrator of EPA and to interested committees and members of Congress. We will also make copies available to others upon request. In addition, the report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you have questions, please contact me on (202) 512-6082 or at [email protected] or contact Thomas Dowdal, Assistant Director, at (202) 512-6588 or [email protected]. Jeffery Bass, Karin Fangman, and Anthony Lofaro made key contributions to this report. 
Minority employees at the EPA reported for a number of years that the agency had discriminated against them based on their race and retaliated against them for filing complaints. These issues were aired at hearings held by the House Committee on Science, at which EPA said it would take actions to ensure a fair and discrimination-free workplace. GAO was asked to review (1) the accuracy of EPA's equal employment opportunity (EEO) data, (2) various issues about the processes used to resolve discrimination complaints, and (3) the disciplinary actions taken for managers who discriminate. EPA had difficulty providing accurate EEO data because of a data system that the agency believes was unreliable and was further compromised by data entry problems. When GAO identified problems with the information EPA provided, the agency manually reconstructed data for fiscal years 1995 through 2002. The reconstructed data indicate that during this period 548 EPA employees filed 679 discrimination complaints, and the agency closed 588 complaints. Complaints were closed with 125 dismissals, 48 withdrawals, 178 settlements, 5 remands, and 222 agency decisions not supporting the complainant. GAO cannot attest to the accuracy of these numbers but believes they are indicative of the situation at EPA. EPA recently procured new software to facilitate accurate tracking and reporting of EEO information and believes the software will rectify data problems. EPA has never had official standard operating procedures for complaint processing, which are required by regulation. Rather, EPA said that complaints were processed under general guidance provided by the Equal Employment Opportunity Commission (EEOC) until draft procedures, prepared in July 2001, were put into use. EPA has taken a long time to process discrimination complaints, with cases averaging 650 days from filing to closing over fiscal years 1995-2002. 
A major contributing factor was that investigations, which are supposed to be done in 180 days, averaged a total of 465 days. The firms used by EPA failed to conduct thorough investigations and their reports did not provide complete or factual accounts of the incidents leading to the complaints. As a result, investigations often had to be redone, adding to the amount of time needed to complete them. Over the last year, EPA has discontinued the use of these firms and contracted with new ones that it believes are doing a much better job. EPA has also increased its own staffing for EEO matters to try to reduce processing times. EPA does not have a specific process for determining whether managers involved in discrimination complaints did in fact discriminate and, if so, whether managers should be disciplined. EPA officials told us that they have relied on training to rectify and prevent discriminatory conduct. Other agencies have formal processes to evaluate each case in which discrimination is found or a complaint is settled to determine whether discipline is warranted. EPA will be required to collect and report the number of agency employees disciplined for discrimination or harassment under the provisions of the Notification and Federal Employee Anti-Discrimination and Retaliation Act, effective in October 2003. A process like those in place at other agencies should also help EPA meet this requirement.
To address the extent to which CMS implemented control procedures over contract actions, we focused on contracts that were generally subject to the FAR (i.e., FAR-based), which represented about $2.5 billion, or about 70 percent, of total obligations awarded in fiscal year 2008. The FAR is the governmentwide regulation containing the rules, standards, and requirements for the award, administration, and termination of government contracts. Based on the standards for internal control, FAR requirements, and agency policies, we identified and evaluated 11 key internal control procedures over contract actions, ranging from ensuring contractors had adequate accounting systems prior to the use of a cost reimbursement contract to certifying invoices for payment. Contract actions include new contract awards and modifications to existing contracts. We conducted our tests on a statistically random sample of 102 FAR-based contract actions CMS made in fiscal year 2008 and projected the results of our statistical sample conservatively by reporting the lower bound of our two-sided, 95 percent confidence interval. We tested a variety of contract actions including a range of dollars obligated, different contract types (fixed price, cost reimbursement, etc.), and the types of goods and services procured. The actions in the sample ranged from a $1,000 firm-fixed price contract for newspapers to a $17.5 million modification of an information technology contract valued at over $500 million. For each contract action in the sample, we determined if the 11 key internal control procedures were implemented by reviewing the contract file supporting the action and, where applicable, by obtaining additional information from the contracting officer or specialist or senior acquisition management. We also tested the reliability of the data contained in CMS's two acquisition databases. 
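The "lower bound of a two-sided, 95 percent confidence interval" projection described above can be illustrated with a standard interval for a sample proportion. The sketch below uses the Wilson score interval and a made-up deficiency count of 90; it is not GAO's actual estimator, which would reflect the specifics of its sample design.

```python
import math

def wilson_lower_bound(successes: int, n: int, z: float = 1.96) -> float:
    """Lower limit of a two-sided 95% Wilson score interval for a proportion.

    Illustrative only: the tally below is hypothetical, and GAO's actual
    projection would account for its specific sample design.
    """
    p_hat = successes / n
    z2 = z * z
    center = p_hat + z2 / (2 * n)
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n + z2 / (4 * n * n))
    return (center - margin) / (1 + z2 / n)

# Hypothetical tally: 90 of the 102 sampled contract actions show a deficiency.
point_estimate = 90 / 102            # about 88.2%
lower = wilson_lower_bound(90, 102)  # about 80.6%
print(f"point estimate {point_estimate:.1%}, reported lower bound {lower:.1%}")
```

Reporting the lower bound rather than the point estimate is the conservative choice: the true population rate is very unlikely to fall below it.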
To address the extent to which CMS established a strong control environment for contract management, we obtained and reviewed documentation regarding contract closeout, acquisition planning, and other management information and interviewed officials in the Office of Acquisition and Grants Management (OAGM) about its contract management processes. We also evaluated the extent to which CMS had addressed recommendations we made in our 2007 report. We used the internal control standards as a basis for our evaluation of CMS's contract management control environment. Appendix I of our October 2009 report provides additional details of our scope and methodology. This testimony is based on our October 2009 performance audit, which was conducted from July 2008 to September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Except for certain Medicare claims processing contracts, CMS contracts are generally required to be awarded and administered in accordance with general government procurement laws and regulations such as the FAR; the Health and Human Services Acquisition Regulations (HHSAR); the Cost Accounting Standards (CAS); and the terms of the contract. Since 1998, CMS's obligations to fiscal intermediaries, carriers, and Medicare Administrative Contractors (contractors that primarily process Medicare claims) have decreased approximately 16 percent. In contrast, obligations for other-than-claims processing contract activities, such as the 1-800 help line, information technology and financial management initiatives, and program management and consulting services, have increased 466 percent. 
These trends may be explained in part by recent changes to the Medicare program, including the movement of functions, such as the help line, data centers, and certain financial management activities, from the fiscal intermediaries and carriers to specialized contractors. MMA required CMS to transition its Medicare claims processing contracts, which generally did not follow the FAR, to the FAR environment through the award of contracts to Medicare Administrative Contractors. CMS projected that the transition, referred to as Medicare contracting reform, would produce administrative cost savings due to the effects of competition and contract consolidation as well as produce Medicare trust fund savings due to a reduction in the amount of improper benefit payments. Additionally, the transition would subject millions of dollars of CMS acquisitions to the rules, standards, and requirements for the award, administration, and termination of government contracts in the FAR. Obligations to the new Medicare Administrative Contractors were first made in fiscal year 2007. CMS is required to complete Medicare contracting reform by 2011. As of September 1, 2009, 19 contracts had been awarded to Medicare Administrative Contractors, totaling about $1 billion in obligations. The Standards for Internal Control in the Federal Government provide the overall framework for establishing and maintaining internal control and for identifying and addressing areas at greatest risk of fraud, waste, abuse, and mismanagement. These standards provide that--to be effective--an entity's management should establish both a supportive overall control environment and specific control activities directed at carrying out its objectives. As such, an entity's management should establish and maintain an environment that sets a positive and supportive attitude towards control and conscientious management. 
A positive control environment provides discipline and structure as well as the climate supportive of quality internal control, and includes an assessment of the risks the agency faces from both external and internal sources. Control activities are the policies, procedures, techniques, and mechanisms that enforce management's directives and help ensure that actions are taken to address risks. The standards further provide that information should be recorded and communicated to management and oversight officials in a form and within a time frame that enables them to carry out their responsibilities. Finally, an entity should have internal control monitoring activities in place to assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. Control activities include both preventive and detective controls. Preventive controls--such as invoice review prior to payment--are controls designed to prevent errors, improper payments, or waste, while detective controls--such as incurred cost audits--are designed to identify errors or improper payments after the payment is made. A sound system of internal control contains a balance of both preventive and detective controls that is appropriate for the agency's operations. While detective controls are beneficial in that they identify funds that may have been inappropriately paid and should be returned to the government, preventive controls such as accounting system reviews and invoice reviews help to reduce the risk of improper payments or waste before they occur. A key concept in the standards is that control activities selected for implementation be cost beneficial. Generally it is more effective and efficient to prevent improper payments. A control activity can be preventive, detective, or both based on when the control occurs in the contract life cycle. Additional, detailed background information is available in our related report, GAO-10-60. 
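The preventive/detective distinction described above can be made concrete with a toy sketch. Everything here is hypothetical (the invoice fields, the approved rate, and the check itself); it only shows how a preventive control blocks a bad payment before it happens, while a detective control finds it afterward.

```python
# Assumed approved provisional indirect rate -- purely illustrative.
APPROVED_RATE = 0.50

def preventive_check(invoice: dict) -> bool:
    """Preventive control: approve an invoice for payment only if its billed
    indirect rate does not exceed the approved provisional rate."""
    return invoice["billed_rate"] <= APPROVED_RATE

def detective_scan(paid_invoices: list) -> list:
    """Detective control: after payment, flag invoices that billed above the
    approved rate so the overpayment can be recovered."""
    return [inv for inv in paid_invoices if inv["billed_rate"] > APPROVED_RATE]

invoices = [
    {"id": "A-1", "billed_rate": 0.48},
    {"id": "A-2", "billed_rate": 0.55},  # bills above the approved rate
]

paid = [inv for inv in invoices if preventive_check(inv)]  # A-2 is never paid
flagged = detective_scan(invoices)                         # A-2 found after the fact
print("paid:", [i["id"] for i in paid], "flagged:", [i["id"] for i in flagged])
```

The preventive path stops the improper payment entirely; the detective path only identifies money that must then be recovered, which is the trade-off the standards describe.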
Our October 2009 report identified pervasive deficiencies in internal control over contracting and payments to contractors. Specifically, as a result of our work, we estimated that at least 84.3 percent of FAR-based contract actions made by CMS in fiscal year 2008 contained at least one instance in which 1 of 11 key controls was not adequately implemented. Not only were internal control deficiencies widespread, but many contract actions also had more than one deficiency. We estimated that at least 37.2 percent of FAR-based contract actions made in fiscal year 2008 had three or more instances in which a key control was not adequately implemented. The internal control deficiencies occurred throughout the contracting process and increased the risk of improper payments or waste. These deficiencies were due in part to a lack of agency-specific policies and procedures to ensure that FAR requirements and other control objectives were met. CMS also did not take appropriate steps to ensure that existing policies were properly implemented or maintain adequate documentation in its contract files. Further, CMS's Contract Review Board process had not been properly or effectively implemented to help ensure proper contract award actions. These internal control deficiencies are a manifestation of CMS's weak overall control environment, which is discussed later. Additional, detailed information on our testing of key internal controls is available in our October 2009 report. The high percentage of deficiencies indicates a serious failure of control procedures over FAR-based acquisitions, thereby creating a heightened risk of improper payments or waste. Highlights of the control deficiencies we noted included the following. We estimated that at least 46.0 percent of fiscal year 2008 CMS contract actions did not meet the FAR requirements applicable to the specific contract type awarded.
For example, we found that CMS used cost reimbursement contracts without first ensuring that the contractor had an adequate accounting system. According to the FAR, a cost reimbursement contract may be used only when the contractor's accounting system is adequate for determining costs applicable to the contract. To illustrate, of the contract awards in our sample, we found nine cases in which cost reimbursement contracts were used without first ensuring that the contractor had an adequate accounting system. In addition to these nine cases, during our review of contract modifications we observed another six cases in which cost reimbursement contracts were used even though CMS was aware that the contractor's accounting system was inadequate at the time of award. In one instance, the contracting officer was aware that a contractor had an inadequate accounting system resulting from numerous instances of noncompliance with applicable Cost Accounting Standards. Using a cost reimbursement contract when a contractor does not have an adequate accounting system hinders the government's ability to fulfill its oversight duties throughout the contract life cycle and increases the risk of improper payments and the risk that costs billed cannot be substantiated during an audit. We estimated that for at least 40.4 percent of fiscal year 2008 contract actions, CMS did not have sufficient support for provisional indirect cost rates, nor did it identify instances when a contractor billed rates higher than the rates that were approved for use. Provisional indirect cost rates provide agencies with a mechanism by which to determine if the indirect costs billed on invoices are reasonable for the services provided until such time that final indirect cost rates can be established, generally at the end of the contractor's fiscal year. When the agency does not maintain adequate support for provisional indirect rates, it increases its risk of making improper payments.
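The mechanics behind the provisional-rate risk can be shown with a small worked example. All figures are hypothetical; the point is only that indirect costs billed at a provisional rate during the year must later be trued up once the final rate is known.

```python
def indirect_rate_adjustment(direct_base: float, provisional_rate: float,
                             final_rate: float) -> float:
    """Amount owed back to the government (positive) or to the contractor
    (negative) once the final indirect rate replaces the provisional one.
    All rates and amounts here are hypothetical."""
    billed = direct_base * provisional_rate   # indirect costs billed during the year
    allowed = direct_base * final_rate        # indirect costs actually allowable
    return billed - allowed

# Hypothetical contract year: $2,000,000 of direct costs billed at a 50%
# provisional overhead rate; the final audited rate comes in at 46%.
due_back = indirect_rate_adjustment(2_000_000, 0.50, 0.46)
print(f"overbilled indirect costs due back: ${due_back:,.0f}")
```

Deferring this true-up until contract closeout, rather than adjusting annually, leaves such overpayments outstanding for years and makes recovery harder.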
We estimated that for at least 52.6 percent of fiscal year 2008 contract actions, CMS did not have support for final indirect cost rates or support for the prompt request of an audit of indirect costs. The FAR states that final indirect cost rates, which are based on a contractor's actual indirect costs incurred during a given fiscal year, shall be used in reimbursing indirect costs under cost reimbursement contracts. The amounts a contractor billed using provisional indirect cost rates are adjusted annually for final indirect cost rates, thereby providing a mechanism for the government to timely ensure that indirect costs are allowable and allocable to the contract. CMS officials told us that they generally adjust for final indirect cost rates during contract closeout at the end of the contract performance rather than annually mainly due to the cost and effort the adjustment takes. However, CMS did not promptly close out its contracts and had not made progress in reducing the backlog of contracts eligible for closeout. Specifically, in 2007, we reported that CMS's backlog was 1,300 contracts, of which 407 were overdue for closeout as of September 30, 2007. This backlog continued to increase, and CMS officials stated that as of July 29, 2009, the total backlog of contracts eligible for closeout was 1,611, with 594 overdue based on FAR timing standards. Not annually adjusting for final indirect cost rates increases the risk that CMS is paying for costs that are not allowable or allocable to the contract. Furthermore, putting off the control activity until the end of contract performance increases the risk of overpaying for indirect costs during contract performance and may make identification or recovery of any unallowable costs during contract closeout more difficult due to the passage of time. We estimated that for at least 54.9 percent of fiscal year 2008 contract actions, CMS did not promptly perform or request an audit of direct costs.
Similar to the audit of indirect costs, audits of direct costs allow the government to verify that the costs billed by the contractor were allowable, reasonable, and allocable to the contract. Not annually auditing direct costs increases the risk that CMS is paying for costs that are not allowable or allocable to the contract. We estimated that for at least 59.0 percent of fiscal year 2008 contract actions, the project officer did not always certify the invoices. CMS's Acquisition Policy Notice 16-01 requires the project officer to review each contractor invoice and recommend payment approval or disapproval to the contracting officer. This review is to determine, among other things, if the expenditure rate is commensurate with technical progress and whether all direct cost elements are appropriate, including subcontracts, travel, and equipment. We noted in our 2007 report that CMS used negative certification--a process whereby it paid contractor invoices without knowing whether they were reviewed and approved--in order to ensure invoices were paid in a timely fashion. In October 2009 we reported that negative certification continued to be CMS's policy to process contractor invoices for payment. This approach, however, significantly reduces the incentive for contracting officers, specialists, and project officers to review the invoice prior to payment. For example, in one case, although a contractor submitted over 100 invoices for fiscal year 2008, only 8 were certified by the project officer. The total value of the contract through January 2009 was about $64 million. In addition, based on a cursory review of the fiscal year 2008 invoices submitted for payment, we found instances in which the contracting officer or specialist did not identify items that were inconsistent with the terms of the contract or acquisition regulations. For example, we found two instances where the contractor billed, and CMS paid, for items generally disallowed by HHSAR. 
Reviewing invoices prior to payment is a preventive control that may result in the identification of unallowable billings, especially on cost reimbursement and time and materials invoices, before the invoices are paid. CMS increases its risk of improper payments when it does not properly review and approve invoices prior to payment. The control deficiencies we identified in the statistical sample discussed in our October 2009 report stemmed from a weak overall control environment. CMS's control environment was characterized by the lack of (1) strategic planning to identify necessary staffing and funding; (2) reliable data for effectively carrying out contract management responsibilities; and (3) follow-up to track, investigate, and resolve contract audit and evaluation findings for purposes of cost recovery and future award decisions. A positive control environment sets the tone for the overall quality of an entity's internal control, and provides the foundation for an entity to effectively manage contracts and payments to contractors. Without a strong control environment, the control deficiencies we identified will likely persist. Following is a summary of the weaknesses we found in CMS's overall control environment: Limited analysis of contract management workforce and related funding needs. OAGM management had not analyzed its contract management workforce and related funding needs through a comprehensive, strategic acquisition workforce plan. Such a plan is critical to help manage the increasing acquisition workload and meet its contracting oversight needs. We reported in November 2007 that staff resources allocated to contract oversight had not kept pace with the increase in CMS contract awards. In our 2009 report, we found a similar trend continued into 2008. While the obligated amount of contract awards had increased 71 percent since 1998, OAGM staffing resources--its number of full-time equivalents (FTE)--had increased 26 percent.
This trend presents a major challenge to contract award and administration personnel who must deal with a significantly increased workload without additional support and resources. In addition, according to its staff and management, OAGM faced challenges in meeting the various audit requirements necessary to ensure adequate oversight of contracts that pose more risk to the government, specifically cost reimbursement contracts, as well as in performing the activities required of a cognizant federal agency (CFA). Although officials told us they could use more audit funding, we found that OAGM management had yet to determine what an appropriate funding level should be. Without knowing for which contractors additional CFA oversight was needed, CMS did not have reliable information on the number of audits and reviews that must be performed annually or the depth and complexity of those audits. Without this key information, CMS could not estimate an adequate level of needed audit funding. The risks of not performing CFA duties are increased by the fact that other federal agencies that use the same contractors rely on the oversight and monitoring work of the CFA. A shortage of financial and human resources creates an environment that introduces vulnerabilities to the contracting process, hinders management's ability to sustain an effective overall control environment, and ultimately increases risk in the contracting process. Lack of reliable contract management data. Although CMS had generally reliable information on the basic attributes of each contract action, such as vendor name and obligation amount, CMS lacked reliable management information on other key aspects of its FAR-based contracting operations. For example, in our October 2009 report we identified acquisition data errors related to the number of certain contract types awarded, the extent of competition achieved, and total contract value. 
Standards for internal control provide that for an agency to manage its operations, it must have relevant, reliable, and timely information relating to the extent and nature of its operations, including both operational and financial data, and such information should be recorded and communicated to management and others within the agency who need it and in a form and within a time frame that enables them to carry out their internal control and operational responsibilities. The acquisition data errors were due in part to a lack of sufficient quality assurance activities over the data entered into the acquisition databases. Without accurate data, CMS program managers did not have adequate information to identify, monitor, and correct or mitigate areas that posed a high risk of improper payments or waste. Lack of follow-up to resolve contract audit and evaluation findings. CMS did not track, investigate, and resolve contract audit and evaluation findings for purposes of cost recovery and future award decisions. Tracking audit and evaluation findings strengthens the control environment in part because it can help assure management that the agency's objectives are being met through the efficient and effective use of the agency's resources. It can also help management determine whether the entity is complying with applicable acquisition laws and regulations. Contract audits and evaluations can add significant value to an organization's oversight and accountability structure, but only if management ensures that the results of these audits and evaluations are promptly investigated and resolved. For example, in an audit report dated September 30, 2008, the Defense Contract Audit Agency questioned approximately $2.1 million of costs that CMS paid to a contractor in fiscal year 2006. As discussed in our October 2009 report, OAGM management confirmed that no action had been taken at that time to investigate and recover the challenged costs. 
As we reported in October 2009, CMS management had not taken substantial actions to address our 2007 recommendations to improve internal control in the contracting process. Only two of our nine 2007 recommendations had been fully addressed. Table 1 summarizes our assessment of the status of CMS's actions to address our recommendations. In addition to reaffirming the seven substantially unresolved 2007 recommendations, our October 2009 report included 10 recommendations to further improve oversight and strengthen CMS's control environment. Specifically, we made recommendations for additional procedures or plans to address the following 10 areas: document compliance with FAR requirements for different contract types; document provisional indirect cost rates in the contract file; specify what constitutes timely performance of (or request for) audits of contractors' billed costs; specify circumstances for the use and content of negotiation memorandums, including any required secondary reviews; specify Contract Review Board documentation, including resolution of issues identified during the CRB reviews; conduct periodic reviews of contract files to ensure invoices were properly reviewed by both the project officer and contracting officer or specialist; develop a comprehensive strategic acquisition workforce plan, with resource needs to fulfill FAR requirements for comprehensive oversight, including CFA duties; revise the verification and validation plan to require all relevant acquisition data errors be corrected and their resolution documented; develop procedures for tracking contract audit requests and the resolution of audit findings; and develop procedures that clearly assign roles and responsibilities for the timely fulfillment of CFA duties. In commenting on a draft of our October 2009 report, CMS and HHS agreed with each of our 10 new recommendations and described steps planned to address them.
CMS also stated that the recommendations will serve as a catalyst for improvements to the internal controls for its contracting function. CMS also expressed concerns about our assessment of key internal controls and disagreed with our conclusions on the status of CMS's actions to address our November 2007 recommendations. CMS stated its belief that "virtually all" of the errors we identified in our statistical sample related to "perceived documentation deficiencies." CMS also expressed concern that a reasonable amount of time had not yet elapsed since the issuance of our November 2007 report to allow for corrective actions to have taken place. However, as discussed in greater detail in our October 2009 report response to agency comments, nearly 2 years had elapsed between our November 2007 and October 2009 reports and CMS had made little progress in addressing the recommendations from our November 2007 report. Further, a significant number of our October 2009 report findings, including weaknesses in the control environment, were based on observations and interviews with OAGM officials and reviews of related documentation such as policies and strategic plans. Finally, the deficiencies we identified negatively impact the key controls intended to help ensure compliance with agency acquisition regulations and the FAR. In conclusion, Madam Chairman, while we have not updated the status of any CMS actions to address our October 2009 findings and recommendations, the extent to which control weaknesses in CMS's contracting activities continue raises questions concerning whether CMS management has established an appropriate "tone at the top" to effectively manage these key activities. Until CMS management addresses our previous recommendations in this area, along with taking action to address the additional deficiencies identified in our October 2009 report, its contracting activities will continue to pose significant risk of improper payments, waste, and mismanagement.
Further, the deficiencies we identified are likely to be exacerbated by the rise in obligations for non-claims processing contract awards as well as CMS's extensive reliance on contractors to help achieve its mission objectives. It is imperative that CMS address its serious contract-level control deficiencies and take action on our recommendations to improve its overall control environment, or CMS will continue to place billions of taxpayer dollars at risk of fraud or otherwise improper contract payments. We commend the Subcommittee for its continuing oversight and leadership in this important area and believe that hearings such as the one being held today will be critical to ensuring that CMS's continuing contract management weaknesses are resolved without further delay and that overall risks to the government are substantially reduced. Madam Chairman and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions that you may have at this time. For further information regarding this testimony, please contact Kay L. Daly at (202) 512-9095 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Marcia Carlsen and Phil McIntyre (Assistant Directors), Sharon Byrd, Richard Cambosos, Francine DelVecchio, Abe Dymond, John Lopez, Ron Schwenn, Omar Torres, Ruth Walk, and Danietta Williams. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In November 2007, GAO reported significant deficiencies in internal control over certain contracts the Centers for Medicare and Medicaid Services (CMS) awarded under the Federal Acquisition Regulation (FAR). This Subcommittee and others in Congress asked GAO to perform an in-depth review of CMS's contract management practices. This testimony is based on GAO's October 2009 report on these issues and summarizes GAO's findings on the extent to which CMS (1) implemented effective control procedures over contract actions, (2) established a strong contract management control environment, and (3) implemented GAO's 2007 recommendations. GAO used a statistical random sample of 2008 CMS contract actions to assess CMS internal control procedures. The results were projected to the population of 2008 CMS contract actions. GAO reviewed contract file documentation and interviewed senior acquisition management officials. GAO reported in October 2009 that pervasive deficiencies in CMS contract management internal control increased the risk of improper payments or waste. Specifically, based on a statistical random sample of 2008 CMS contract actions, GAO estimated that at least 84.3 percent of fiscal year 2008 contract actions contained at least one instance where a key control was not adequately implemented. For example, CMS used cost reimbursement contracts without first ensuring that the contractor had an adequate accounting system, as required by the FAR. These deficiencies were due in part to a lack of agency-specific policies and procedures to help ensure proper contracting expenditures. These control deficiencies stemmed from a weak overall control environment characterized primarily by inadequate strategic planning for staffing and funding resources. CMS also did not accurately capture data on the nature and extent of its contracting, hindering CMS's ability to manage its acquisition function by identifying areas of risk. 
Finally, CMS did not track, investigate, and resolve contract audit and evaluation findings for purposes of cost recovery and future award decisions. A positive control environment sets the tone for the overall quality of internal control and provides the foundation for effective contract management. Without a strong control environment, the specific control deficiencies GAO identified will likely persist. As of the date of GAO's October 2009 report, CMS had not substantially addressed seven of the nine recommendations made by GAO in 2007 to improve internal control over contracting and payments to contractors. To the extent that CMS has continuing weaknesses in contracting activities, it will continue to put billions of taxpayer dollars at risk of improper payments or waste.
IRS has two major programs to collect tax debts: telephone collection and field collection. If taxpayers become delinquent (that is, do not pay their taxes after being notified of amounts owed), IRS staff assigned to the telephone collection program may attempt collection over the phone or in writing. According to IRS officials, IRS collection staff who make phone calls have not been initiating many calls to ask taxpayers to pay their tax debts but rather have been responding to phone calls from taxpayers about mailed tax due notices. If more in-depth collection action or analysis of the taxpayer's ability to pay the tax debt is required, telephone collection staff may refer the case to field collections, where staff may visit delinquent taxpayers at their homes or businesses as well as contact them by telephone and mail. Under certain circumstances, the telephone or field staff are authorized to initiate enforced collection action, such as recording liens on taxpayer property and sending notices to levy taxpayer wages, bank accounts, and other financial assets held by third parties. Field staff also can be authorized to seize other assets owned by the taxpayer to satisfy the tax debt. As we have previously reported, in recent years IRS has deferred collection action on billions of dollars of delinquent tax debt and IRS collection program performance indicators have declined. By the end of fiscal year 2003, IRS's inventory of tax debt with some collection potential was $120 billion (up from $112 billion in the previous year). As we reported in May 2002, from fiscal years 1996 through 2001, IRS had almost universal declines in collection performance, including declines in coverage of workload, cases closed, direct staff time used, productivity, and dollars of unpaid taxes collected.
Although IRS's collection workload declined, the collection cases closed declined more rapidly, increasing the gap between the number of cases assigned for collection action and the number of cases closed each year (see fig. 2 in app. I). As a result, in March 1999, IRS started deferring collection action on billions of dollars in delinquencies. By the end of fiscal year 2002, IRS had deferred collection action on about $15 billion, and, as of May 2003, was deferring action on about one of every three collection cases. Furthermore, IRS's collection staffing has declined overall from 1996 to 2003 (see fig. 3 in app. I) despite IRS's efforts to increase collection staffing in its budget requests since 2001. As we previously reported, IRS officials have said that collection staffing declines and delays in hiring have been caused by increased workload in other essential operations (such as processing returns, issuing refunds, and answering taxpayer mail), other priorities (such as taxpayer service), and unbudgeted cost increases (such as rent and pay increases). According to statements by the previous and current IRS commissioners, IRS's growing workload has outpaced its resources. The former IRS Commissioner's report to the IRS Oversight Board during September 2002 made a case for additional staff to check tax compliance and collect taxes owed. The Commissioner recognized that IRS needed to improve the productive use of its current resources, but also cited a need for an annual 2 percent staffing increase over 5 years to help reverse the trends. According to the Commissioner, IRS would require 5,450 new full-time collection staff. IRS officials said that the PCA program proposal was undertaken because it is unlikely that IRS will receive funding adequate to handle the growing collection workload.
Because current law requires IRS to collect tax debts, legislation has been proposed to authorize IRS to use PCAs to collect simpler tax debts under defined activities--including locating taxpayers, requesting full payment of the tax debt or offering taxpayers an installment agreement if full payment cannot be made, and obtaining financial information from taxpayers. Given the limited authorities proposed for PCAs, IRS would refer those cases that are simplest to collect and have no need for IRS enforcement action, including cases in which (1) taxpayers filed a tax return showing taxes due that have not been paid and (2) taxpayers made three or more voluntary payments to satisfy an additional tax assessed by IRS but have stopped the payments. In 1996, Congress directed IRS to test the use of PCAs, earmarking $13 million for that purpose. IRS canceled the pilot project in 1997, in part, because it resulted in significantly lower amounts of collections and contacted significantly fewer taxpayers than expected (about 14,000 of 153,000 taxpayers). IRS reported that through January 1997, this program accounted for about $3.1 million in collections and about $4.1 million in expenses (about $3.1 million in design, start-up, and administrative expenses and about $1 million in PCA payments). IRS also reported lost opportunity costs of about $17 million because IRS collection staff shifted from collecting taxes to helping with the pilot. The current proposal to use PCAs has some significant differences from the 1996 pilot test of PCAs. First, PCAs in the current proposal will actually try to resolve collection cases within certain guidelines. In the 1996 test, PCAs only contacted taxpayers to remind them of their outstanding tax debt and suggest payment options. Second, PCAs under the current proposal will be paid a percentage of dollars they help collect from a revolving fund of all PCA collections.
In the 1996 test, PCAs were paid a fixed fee for such actions as successfully locating and contacting taxpayers, even if payments were not received. Third, IRS will electronically transmit cases and data about the taxpayer and taxes owed to PCAs. In 1996, IRS's computers were not set up to electronically transmit the cases and data to PCAs. For the current proposal, IRS intends to develop the capability to make secure transmissions to PCAs and protect confidentiality. To identify the critical success factors for contracting with PCAs for tax debt collection, we used multiple sources. We reviewed three of our reports on leading practices in contracting and interviewed our staff who review government contracting. We also interviewed parties with experience in contracting for government debt collection, including both tax and non-tax debt, to identify any factors common to both debt types. Specifically, we interviewed officials from 11 state revenue departments that, according to officials from the Federation of Tax Administrators (FTA), represented a mix--in aspects such as amount of resources and PCA roles--of experience in contracting with PCAs for tax debt collection and provided examples of program practices in such areas as case selection and use of performance data; the Department of the Treasury's Financial Management Service and Department of Education--two federal agencies with large-scale, non-tax debt collection contracting; and the three PCA firms that IRS selected as subject matter experts to assist in drafting the provisions of a contract for PCA collection services. To help corroborate the factors that others identified, we interviewed officials from the IRS office that is developing the proposed PCA program, the IRS Office of Taxpayer Advocate, and the National Treasury Employees Union, which represents IRS employees.
To summarize and categorize the critical success factors identified, we grouped together similar factors that were most frequently cited by the officials with experience in government debt collection contracting. We first grouped factors associated with the start of a program and with a maturing program into two broad time-oriented factors, including topics we identified as implicit in the interviews and documents cited above. Between these two time-oriented factors, we categorized three other factors according to the broad topics that were most frequently cited. To validate our summarization and categorization, we asked for comments on our draft list of critical success factors from those who we had consulted to identify the factors as well as from officials at four additional PCA firms that, according to interviewed officials from two state revenue departments and the two federal agencies, had experience in government debt collection. In commenting on the draft list of factors, some officials stressed certain factors more than others or elaborated on selected factors or subfactors, but generally did not suggest factors beyond those encompassed in our draft list. We made changes based on their comments where appropriate. To determine whether IRS has addressed the critical success factors in developing the PCA contracting program and, if not, what is left to be done, we interviewed IRS program officials. We analyzed program documents, including the draft PCA contract as outlined in IRS's Request for Quotes (RFQ) and the Office of Management and Budget (OMB) Form E-300 budgetary document that describes goals and plans for the program. We did not attempt to analyze how well or to what extent IRS addressed the factors, or whether IRS made the right decisions on issues such as the program goals or measures. 
To determine whether, if IRS receives authority to use PCAs, it will do a study that will enable policymakers to judge whether contracting with PCAs is the best use of federal funds to achieve IRS's collection objectives, we interviewed IRS program officials. We reviewed any studies IRS had done to compare the use of PCAs with other strategies and assessed IRS's intended approach for any future studies. We also applied our knowledge of how to study the cost-effectiveness of options to meet a desired result or benefit. We did our work from June 2003 through March 2004 in accordance with generally accepted government auditing standards. Our work identified and validated five broad factors that are critical to the success of a proposed program for contracting with PCAs to collect tax debt. A general description of each critical success factor follows: Results orientation involves establishing expectations, measures, and desired results for the program. Agency resources involve obtaining and deploying various resources. Workload involves ensuring that the appropriate cases and case information are provided to PCAs. Taxpayer issues involve ensuring that taxpayer privacy and other rights are protected. Evaluation involves monitoring performance and collecting data to assess the performance of PCAs and the overall program. As figure 1 illustrates, the factors are considered "success" factors because each one, if adequately addressed, can help ensure that the PCA program achieves desired results, such as in collecting tax debts. Although addressing all factors during program design and implementation does not guarantee success, doing so could improve the chances. Table 1 further describes the critical success factors by showing their related subfactors that we identified and validated. IRS has taken steps to address the critical success factors and developed a project plan to help finish addressing the factors if Congress authorizes use of PCAs. 
Officials recognize that much work needs to be done to sufficiently address each factor, which they estimate will take 18 to 24 months after any legislation passes. Table 2 shows examples of the key actions taken to address the critical success factors and major tasks remaining. Discussion after table 2 elaborates on some of these major tasks. IRS officials are aware of these major tasks that must be completed to address the critical success factors and implement the PCA program. In discussing their intent to address them, IRS officials elaborated on some of the major tasks. Under "results orientation," IRS is aware that it has to clarify a goal on how much it expects to collect. IRS had estimated originally that the PCA program would result in $9 billion in tax collections and produce $7.2 billion in net revenue over 10 years. The Department of the Treasury estimated that $1.5 billion in net revenue would be produced over 10 years. IRS officials said the differences arise because each estimate was done differently. IRS acknowledged that its original estimate may be too high and is reworking it in light of the Treasury estimate. Under "workload," IRS officials said that they are aware of the importance of selecting the right cases to send to PCAs for collection and plan to use consumer credit history data on delinquent taxpayers to identify those that would be more likely to pay if contacted. IRS officials said that the new case collection system will extend beyond selecting cases for PCAs, and that the experience and knowledge IRS will gain would contribute to IRS's broader modernization program for using data to improve how IRS does collection work. For example, IRS officials said that, in the future, the case selection data might be used to help determine which collection method--such as sending notices, using PCAs, or making in-person contact--might be more effective in attempting collection from a given taxpayer. 
Under "evaluation," IRS officials said that they were aware that they had not developed plans or dates for evaluating the program to assess how well the PCA program achieves its results. IRS officials said that developing the evaluation was premature given the other work needed to develop the program and lack of legislative authority. IRS officials said they intend to start developing the evaluation plan after they receive this authority and to finish it before sending cases to PCAs. Evaluation plans developed before program implementation increase the likelihood that the necessary data and resources for proper evaluation will be available when needed. Many of the factors involve the development of an information system. Testing of information systems being developed for the PCA program is an important task left to do. Our interviews with IRS officials and our reviews of IRS documents indicate that IRS plans on testing the information systems to be used in the PCA program. IRS officials informed us that they have slowed development of the program due to funding constraints and uncertainty over whether and when legislation will pass to authorize contracts with PCAs. Because IRS's fiscal year 2004 budget was not passed until January 2004, IRS officials said that, since September 2003, IRS slowed work on the PCA program. These officials said that, because of various budgetary procedures, the appropriated funds were not released to the PCA program until March 2004. However, the officials explained that IRS, intending to be fiscally prudent, is delaying spending of the funds until passage of the legislation appears to be more imminent. IRS officials stated that if legislation to authorize the program was not passed during 2004, IRS eventually would suspend work on developing the program. These officials said that they have been balancing and managing their existing funds and the timing of their work given that the authorizing legislation might not pass. 
If this legislation passes, IRS officials said that they would need another 18 to 24 months afterwards to complete the many tasks remaining, as shown in table 2. IRS officials said that, if Congress passes authorizing legislation in summer 2004, the estimated date for starting to send cases to PCAs is July 2006. Although IRS officials intend to study the relative performance of PCAs and IRS employees in collecting delinquent taxes, the study approach under initial consideration would provide policymakers limited information to judge whether and when the PCA strategy is the best use of resources. The tentative idea for a design--comparing PCA and IRS performance for similar types of simple cases that would be sent to PCAs--does not recognize that IRS officials believe that using employees on these cases would not be their best use given the need to work on other, higher priority cases. Among other issues concerning the proposed use of PCAs, policymakers and others have questioned whether using PCAs to collect tax debts is more efficient or effective than having IRS employees do so. During consideration of IRS's proposal, some members of Congress questioned whether IRS could collect the taxes that IRS plans to assign to PCAs at less cost or whether IRS would be able to collect a higher portion of the taxes that are due. During hearings, some witnesses raised similar concerns. IRS officials have said that IRS employees might be more effective than PCAs in collecting delinquent taxes because IRS employees have greater powers to enforce collections. These powers (such as tax liens and wage levies) may enable IRS employees to collect a higher portion of the taxes from the same types of cases on which PCAs would work. IRS officials said that the proposal to use PCAs to collect simpler tax debts was not based on a judgment that PCAs would necessarily be more efficient or effective in collecting delinquent tax debt. 
Rather, the proposal was based on a judgment that Congress was unlikely to approve a substantial increase in IRS's budget to fund additional staff for the collection function. Officials believed that the growing inventory of tax debts was not a good signal to taxpayers about the importance of complying with their tax obligations. Given constraints in hiring staff, IRS officials said that using PCAs was the only practical means available to begin working on significantly more collection cases that otherwise would not be worked on due to IRS staffing constraints. Although this policy judgment served as the rationale behind the PCA proposal, in March 2004, IRS provided us with projections of revenues and federal government costs for the proposed PCA program compared to projections for an alternative approach under which IRS would hire additional staff to work the same volume of the selected types of cases on which the PCAs would work. According to the analysis, PCAs would generate $4.6 in revenue for every dollar in cost and IRS employees would generate $4.1. We did not review the data and assumptions that underlie these revenue and cost projections because the comparison that IRS constructed did not address the relevant economic question for policymakers seeking to reduce the backlog of uncollected taxes--namely, what is the least costly approach for reaching a certain revenue collection goal. IRS's analysis did not examine other feasible approaches that IRS might be able to use, if given additional resources, to collect the same amount of revenue that the PCAs would bring in, but at lower cost. Assuming IRS receives authority to use PCAs, IRS officials said they would design a study to compare the performance of PCAs versus IRS employees. However, the study approach under initial consideration would provide policymakers limited information to help determine whether the use of PCAs as currently proposed is the best use of federal resources to collect tax debts.
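The distinction drawn here, between comparing benefit-cost ratios on a fixed set of cases and finding the least costly way to reach a revenue goal, can be made concrete with a small sketch. The greedy allocation below is purely illustrative: the $4.6 and $4.1 revenue-per-dollar figures are IRS's reported projections, but the spending caps, the third approach, and the revenue target are hypothetical assumptions, not figures from the report.

```python
# Illustrative sketch (not IRS's analysis): choosing the least costly mix of
# collection approaches to reach a revenue target, rather than ranking
# approaches by revenue-per-dollar ratio alone. Caps and target are hypothetical.

def least_cost_mix(approaches, revenue_target):
    """Greedy allocation: spend first on the highest-yield approach.

    approaches: list of (name, revenue_per_dollar, max_spend) tuples.
    Returns (total_cost, spending_plan) or None if the target is infeasible.
    """
    plan, cost, collected = [], 0.0, 0.0
    for name, yield_, cap in sorted(approaches, key=lambda a: -a[1]):
        if collected >= revenue_target:
            break
        # Spend up to the cap, or just enough to close the remaining gap.
        spend = min(cap, (revenue_target - collected) / yield_)
        plan.append((name, spend))
        cost += spend
        collected += spend * yield_
    return (cost, plan) if collected >= revenue_target - 1e-9 else None

# Hypothetical example: reach $100M in collections with capped spending
# ($ millions) per approach; only the 4.6 and 4.1 ratios come from the report.
approaches = [
    ("PCAs", 4.6, 10.0),
    ("IRS telephone staff", 4.1, 20.0),
    ("IRS field staff", 3.0, 50.0),
]
result = least_cost_mix(approaches, 100.0)
```

Under these assumptions the highest-yield option is exhausted first and the rest of the target is met by the next-best option; the least-cost answer depends on the capacity limits and the revenue goal, not on the ratios alone, which is the gap the report identifies in IRS's comparison.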
IRS's approach might show whether PCAs or IRS employees are best at working on certain types of collection cases, but would not show whether the use of PCAs as planned would be the best use of resources to deal with the overall collection workload. IRS officials said that although they believe they should conduct a study that compares PCA results to results achieved by IRS employees, they have not designed such a study. They expect to design the study after authorization to use PCAs is enacted and before sending cases to PCAs. Although the study approach will evolve, officials said that they are considering selecting a sample of the same type of simpler cases that will be sent to PCAs and having such cases also sent to a group of IRS telephone collection employees. The results generated by these IRS employees and by PCAs would be compared to see which option is more effective; how effectiveness would be defined and measured would be determined in designing the study. This potential design would help answer the relatively narrow--but important--question of whether and when PCAs or IRS employees are a better choice for working on the specific types of cases to be sent to PCAs. However, IRS officials told us that using IRS employees on these simpler cases would be less productive than assigning them to work on a different mix of collection cases. These officials said that the simpler cases IRS plans to assign to PCAs are generally not those cases that IRS would assign to any additional collection employees, if hired. IRS employees would work on more complex cases that fit their skills and enforcement powers and that have a higher priority due to such factors as the type and amount of tax debt or length of the delinquency. Generally, federal officials are responsible for ensuring that they are carrying out their responsibilities as efficiently and effectively as possible. Various federal and IRS guidance reinforces this responsibility. 
For example, according to OMB Circular A-123, "the proper stewardship of Federal resources is a fundamental responsibility of agency managers and staff. Federal employees must ensure that government resources are used efficiently and effectively to achieve intended program results." OMB Circular A-94 states that agencies should have a plan for periodic, results-oriented studies of program effectiveness to, among other purposes, help determine whether the anticipated benefits and costs have been realized and program corrections are needed. IRS guidance states that in selecting among courses of action, IRS managers should determine which is the most realistic and most cost effective. Further, IRS has adopted a critical job responsibility for its managers that specifies their responsibility to achieve goals by leveraging available resources to maximize efficiency and produce high-quality results. A study that focuses on the least costly approach to collecting a desired amount of tax debts would be more in line with federal guidance than the study that officials anticipate performing. Such a study would more likely answer the broader question of how IRS can be the most efficient and effective in achieving its collection goals. One alternative design might entail comparing the results of using PCAs to the results from using the same amount of funds to be paid to PCAs in an unconstrained manner that IRS determines to be the most effective overall way of achieving its collection goals. Determining the most effective and efficient overall way of achieving collection goals would undoubtedly require some judgment. However, because IRS is developing a new case selection model for its own use, after some experience is gained both with using PCAs and with new IRS case selection processes, IRS should have better data to use in determining the best way of achieving its collection goals.
If using PCAs as expected under the current proposal meets IRS's collection goals at less cost than the best unconstrained alternative, policymakers could be comfortable with continuing their use. If not, policymakers would have information available to consider whether changes in the use of PCAs would be appropriate. Regardless of the approach chosen, IRS would have to address several challenges in designing a study to compare the use of PCAs and IRS employees. For instance, contracting for PCA assistance may provide flexibility over hiring additional IRS staff. To recruit, select, and train new staff, IRS could need many months or more and, if experienced staff assist in training newly hired staff, the experienced staff would not be able to handle normal workloads. Further, if the collection workload were to decrease, IRS may be able to reduce contract commitments more rapidly than it could reassign and, if needed, retrain IRS staff. To some extent, the study would have to account for similar types of direct and opportunity costs to hire, train, assign, and release employees of the PCA contractor. Accounting for these and other factors raises challenges to the design of a comparative study. Because IRS would not assign cases to PCAs for collection until 2006, it will have time to take these challenges into account and to better ensure that its study would be useful to policymakers. Further, in designing the study, IRS would have time to identify the data that would be needed for the study and develop systems or processes for collecting the data. IRS has an inventory of over $100 billion of tax debts that has some potential for being collected. In recent years, IRS has deferred collection actions on billions of dollars of debt because it lacked collection staff to do the work.
The growth in the backlog of unpaid taxes poses a risk to our voluntary tax system, particularly as IRS has fallen further behind in pursuing existing as well as new tax debt cases. We have placed the collection of unpaid taxes on our high-risk list since 1990 due to the potential revenue losses and the threat to voluntary compliance with our tax laws. Accordingly, we believe that effective steps need to be taken to improve the collection of these unpaid taxes. Because we did not analyze available options in this review, we are not taking a position on whether the use of PCAs is a preferable option. However, doing nothing more than has been done recently is not preferable. The compliance signals sent to taxpayers from the backlog of delinquent tax debts are not appropriate. When the majority of taxpayers receiving phone calls from IRS are those who respond to written IRS notices, taxpayers and practitioners may conclude that failing to respond to IRS is an effective tactic for avoiding tax responsibilities. If Congress does authorize PCA use, IRS's planning and preparations to address the critical success factors for PCA contracting provide greater assurance that the PCA program is heading in the right direction to meet its goals and achieve desired results. Nevertheless, much work and many challenges remain in addressing the critical success factors and helping to maximize the likelihood that a PCA program would be successful. Although IRS did an analysis that suggests that using PCAs may be a somewhat more efficient means to collect certain types of delinquent debts, that analysis was not done in a manner that informs policymakers whether the proposed use of PCAs is the least costly option to achieve IRS's collection goals. Further, given the lack of experience in using PCAs to collect tax debts, key assumptions are untested. 
Accordingly, if Congress authorizes the use of PCAs, Congress and IRS would benefit from a study that uses the experience gained with PCAs and by IRS itself in using new case selection processes to better determine whether and how the use of PCAs fits into an overall collection strategy that is designed to most effectively and efficiently collect delinquent taxes. Although IRS officials have preliminary plans to do a study that compares the use of PCAs and IRS employees to work the same type of cases, this study design would not help policymakers in Congress and the executive branch judge whether using PCAs as currently proposed is the best use of scarce federal resources. If Congress authorizes the use of PCAs, as soon as practical after experience is gained using PCAs, the IRS Commissioner should ensure that a study is completed that compares the use of PCAs to a collection strategy that officials determine to be the most effective and efficient overall way of achieving collection goals. The Commissioner of Internal Revenue provided written comments on a draft of this report in a letter dated May 14, 2004 (see app. III). In the letter, the Commissioner said that our findings would help IRS focus its PCA program development efforts on those areas most critical to success of the program if Congress authorizes IRS's use of PCAs. He agreed that IRS had taken actions to address the critical success factors we identified and acknowledged that significant actions are yet to be done, referring to several key PCA program project plan steps that have not been completed. 
In response to our recommendation that, if Congress authorizes IRS's use of PCAs, IRS do a study that compares the use of PCAs to a collection strategy that officials determine to be the most effective and efficient overall way of achieving collection goals, the Commissioner agreed that IRS would need to analyze the PCA program to determine its effectiveness and impact on the overall collection of delinquent taxes. He said that the detailed design for evaluating the PCA program will include a study to ensure that IRS is making the most effective and cost-efficient use of total resources available. We are also sending copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, the Director, Office of Management and Budget, and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. This report was prepared under the direction of Thomas D. Short, Assistant Director. Appendix IV also lists major contributors to this report. If you have any questions about this report, contact me at [email protected] or Tom Short at [email protected], or either of us at (202) 512-9110. Figure 2 below shows the annual gap between the number of cases assigned to field and telephone collections and the number of delinquent accounts worked to closure (excluding accounts for which collection workload was deferred) expressed as a percentage of the number of cases assigned. The following appendix provides some detail on various IRS actions to address the critical success factors. Critical Success Factor--Results Orientation: IRS envisions that the PCA program will meet the following goals: increase the collection of tax debts by $9.2 billion; increase the closure of tax debt cases by 17 million taxpayers; reduce the tax debt backlog; and increase taxpayer satisfaction by 12.5 percent.
To motivate PCAs to achieve these results, IRS is devising a balanced set of measures--the "balanced scorecard"--and a related performance-based compensation system. The performance scores on these measures also are to be used in determining financial bonuses and future case allocations to PCAs. Specifically, PCAs with above-average performance scores are to be eligible for monetary bonuses if they meet minimum thresholds for five of six performance measures. Also, the performance score is to be translated into a value for each PCA that is to be used to determine a proportionate allocation of cases for the next quarter. IRS's intent is that the balanced scorecard will ensure that collection efforts are balanced appropriately in providing quality service; ensuring adherence to taxpayer rights; and complying with IRS policies, procedures, and regulations. The performance measures are to include the following. Collection effectiveness: Dollars collected as a percentage of dollars assigned to be collected over the contract period. Case resolution: Resolving cases assigned through the payment of the tax debts immediately or through installment payments over 3 years, identification of bankrupt or deceased taxpayers, or identification of hardships that affect the taxpayers' ability to pay. Taxpayer satisfaction: Satisfaction will be measured through random surveys of taxpayers on the accuracy and quality of actions taken by PCA employees and their adherence to various standards, and through taxpayer complaints. PCA employee satisfaction: Satisfaction will be measured through surveys of employees and their retention rates. Work quality: Quality will be measured through audits of PCA cases and telephone monitoring of interactions with taxpayers. Validated taxpayer complaints: Financial penalties will be assessed and points will be subtracted from PCA performance scores if taxpayer complaints are validated. 
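The scoring mechanics described above (a composite balanced-scorecard score, bonus eligibility for above-average performers meeting thresholds on five of six measures, and case allocation proportional to scores) can be sketched as follows. This is a hedged illustration, not IRS's actual formula: the measure names follow the report, but the weights and thresholds are assumptions, and all six measures are treated as scored so that higher is better (validated complaints, which subtract points, would be inverted before scoring).

```python
# Illustrative sketch of the balanced-scorecard mechanics described in the
# report; weights and thresholds below are hypothetical, not IRS's values.

MEASURES = [
    "collection_effectiveness",  # dollars collected / dollars assigned
    "case_resolution",           # cases resolved (payment, hardship, etc.)
    "taxpayer_satisfaction",     # random surveys and complaints
    "employee_satisfaction",     # PCA employee surveys and retention
    "work_quality",              # case audits and telephone monitoring
    "validated_complaints",      # assumed inverted so higher is better
]

def composite_score(scores, weights):
    """Weighted composite of the six balanced measures."""
    return sum(scores[m] * weights[m] for m in MEASURES)

def bonus_eligible(scores, thresholds, score, average_score):
    """Bonus requires an above-average composite score and meeting
    minimum thresholds on at least five of the six measures."""
    met = sum(1 for m in MEASURES if scores[m] >= thresholds[m])
    return score > average_score and met >= 5

def allocate_cases(pca_scores, total_cases):
    """Next-quarter cases allocated in proportion to each PCA's score."""
    total = sum(pca_scores.values())
    return {pca: round(total_cases * s / total)
            for pca, s in pca_scores.items()}
```

For example, under the proportional rule two PCAs scoring 80 and 120 would split a 1,000-case quarter 400 and 600.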
Critical Success Factor--Agency Resources: IRS has set up an infrastructure to administer the PCA program, oversee PCA contractors, and work on cases referred back to IRS from PCAs. IRS has identified initial staffing needs for the PCA program. IRS has estimated that 100 full-time equivalent (FTE) positions will be needed to initially staff the three elements of the program. IRS estimates that it will need 30 FTEs to administer the program and do oversight, and 70 FTEs to work on the cases referred back to IRS from PCAs for the first round of PCAs selected to work on cases. As IRS learns about its staffing needs and sends cases to more PCAs over time, IRS plans to adjust its staffing accordingly. IRS has informed PCAs that the number of cases that they receive over a set time period is to be based on their performance scores against balanced measures. IRS plans to oversee the assigned workload to ensure that PCAs work on the full range of simpler cases. To motivate PCAs to work on the full range of cases, IRS plans to measure, among other things, the extent to which PCAs resolve cases sent to them, including those that PCAs refer back to IRS without resolving the tax debt. IRS also is working on systems to help it identify the best cases to send to PCAs and to help it transmit and manage those cases. Critical Success Factor--Taxpayer Issues: IRS has drafted provisions to ensure that PCAs know that they have to treat taxpayers properly and make them aware of the consequences of not treating taxpayers properly. Proper treatment of taxpayers is one of the performance measures used to determine a performance score for use in granting monetary bonuses and case allocations for PCAs. The following provides examples of the draft provisions on proper taxpayer treatment. PCAs shall comply with all applicable federal and state laws. The principal federal statutes and regulations currently governing collection activities are to be followed.
Further, IRS plans to monitor PCA collection activities and treatment of taxpayers; any behavior that is not in conformance with cited federal and state laws and regulations will be considered a breach of contract. IRS has informed PCAs that it will be conducting customer satisfaction surveys and that customer satisfaction is one of the key components of the balanced scorecard to be used to determine financial bonuses and future case allocation. IRS plans to require that PCAs inform taxpayers orally and in writing on how to report improper treatment by PCA employees to IRS. IRS has established preliminary plans for monitoring and measuring PCA performance through such means as conducting site visits and compensating PCAs according to their performance reflected in the balanced measures scorecard. However, IRS has deferred doing much work on evaluating program performance overall given the other work that had to be done and the resources that were available. In addition to those named above, Evan Gilman, Ronald Jones, John Lesser, Cheryl Peterson, and Jim Wozny made key contributions to this report.
The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as "Today's Reports," on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select "Subscribe to e-mail alerts" under the "Order GAO Products" heading.
Congress is considering legislation to authorize IRS to contract with private collection agencies (PCA) and to pay them out of the tax revenue that they collect. Some have expressed concerns that this proposal might be unsuccessful or inefficient, or might result in taxpayers being mistreated or having their private tax information compromised. This report discusses (1) the critical success factors for contracting with PCAs for tax debt collection; (2) IRS's actions to address these factors in developing the PCA program and the actions left to be done; and (3) whether IRS, if it receives the authority to use PCAs, plans to do a study that will help policymakers judge whether PCAs are the best use of funds to meet IRS's collection objectives.

Based on our analysis of information from various parties, including officials from selected state revenue departments and federal agencies that use PCAs, five factors are critical to the success of a PCA collection program. Together, these factors increase the chances for success and help the program achieve desired results. Although its work is incomplete, IRS has taken actions to address these factors. For example, IRS has been developing (1) program performance measures and goals, (2) plans for a computer system to transmit data to PCAs, (3) a method to select cases for PCAs, and (4) contract provisions to govern data security and PCAs' interactions with taxpayers. IRS officials recognize that major development work remains and have plans to finish it. Officials said they would suspend work if PCA authorizing legislation is not passed during 2004. If legislation passes, officials estimated that it would take 18 to 24 months to send the first cases to PCAs. Aware of concerns about the efficiency of using PCAs, IRS intends to study the relative performance of PCAs and IRS employees in collecting tax debts after gaining some experience with them.
However, the initial idea for a study would provide limited information to judge whether or when the PCA approach is the best use of resources. The tentative idea--comparing PCA and IRS performance for the same type of simpler cases to be sent to PCAs--does not recognize that IRS officials believe that using IRS employees on such cases would not be the best use of staff. Federal guidance emphasizes efficiently and effectively using resources to achieve results and identifying the most realistic and cost-effective program option. Experience gained in using PCAs and a new IRS case selection process would help officials design such a study.
As shown in figure 1, INS has benefited from significant increases in its regular appropriations and appropriations from its fee accounts. Funding increases have continued in fiscal year 1999, with Congress providing over $3.9 billion. When funding from the Working Capital Fund, carryover balances, and certain reimbursements is added to this figure, INS' operating budget totals approximately $4.0 billion for fiscal year 1999. INS divides its operating budget into four categories of spending: (1) mandatory expenses, e.g., rent; (2) personal services and benefits; (3) set-asides, such as employee relocations, vehicle acquisitions, and background investigations; and (4) discretionary funding. For purposes of this review, the first three categories can be grouped together as expenses that either have first claim on a budget because they must be paid or are considered integral to an agency's operations. Although many of these expenses directly benefit field operations, most are centrally funded at headquarters. The last category--discretionary funding--funds personnel costs for other-than-permanent employees; discretionary overtime; travel; cash awards; some types of procurements; and day-to-day operating expenses, such as equipment maintenance and lease of copiers. Table 1 shows data provided by INS on its end-of-year allocation for fiscal year 1998 compared with its current allocation for fiscal year 1999, by spending categories.

To determine (1) INS' overall fiscal condition and (2) how factors such as overhiring and a decline in Examinations Fee applications have affected INS' fiscal situation, we interviewed officials in INS' Offices of Budget, Personnel, Facilities, and Field Operations. To get additional perspectives on INS' funding status, we interviewed officials in DOJ's Justice Management Division and OMB's Justice and General Services Administration Branch.
We reviewed INS budget documents prepared for fiscal year 1999 that were submitted to the Justice Department, OMB, and Congress, as well as those prepared for internal use, to document and analyze changes in funding. In addition, INS provided memorandums and briefing documents relevant to our work and additional supporting material prepared specifically for our review. Our work was performed in Washington, D.C., during February and March 1999, in accordance with generally accepted government auditing standards.

Since 1996, INS has been making a concerted effort to fill both its existing vacancies and the many new positions authorized by Congress each year. However, throughout this period, attrition of staff already on board and reported difficulties in hiring new staff have impeded INS from filling many positions. In an attempt to remedy this situation, INS allowed field offices to hire 4 percent more than their number of funded positions during fiscal year 1998. As discussed below, this policy, combined with other fiscal pressures, resulted in most INS programs having less discretionary funding in fiscal year 1999 than in fiscal year 1998.

Between the end of fiscal years 1995 and 1998, INS' on-board staff increased from 18,823 to 27,941. INS anticipates adding another 3,000 staff by the end of fiscal year 1999. However, according to INS officials, throughout this period the number of staff on board generally lagged behind authorized levels. INS officials attribute the lag to (1) significant new hiring authority provided by Congress each year, (2) high rates of attrition of on-board staff throughout the year, and (3) difficulty in recruiting and retaining qualified candidates from outside INS to fill vacancies as they arise. Since 1996, INS has taken several steps to overcome these difficulties. First, to ensure that its workforce would expand rather than shift internally, INS directed field staff to hire for only entry-level positions.
Second, INS allowed field managers to select a larger pool of candidates to consider for employment than they were authorized to hire, because it was anticipated that a number of candidates would (1) not make it through the pre-appointment process or (2) no longer be available by the time INS could make an offer of employment. Third, with approval, field managers were permitted to hire 2 percent more than their number of funded positions. The over-hiring was supposed to occur in field offices where attrition or new hiring authority was anticipated, and the over-hired positions were supposed to be used to fill vacancies as soon as they occurred so that field office hiring would not exceed funded levels for the year. At the start of fiscal year 1998, regional directors requested, and the Commissioner approved, an increase in the over-hire authority to 4 percent. During fiscal year 1998, the number of INS staff on board increased from 86 to nearly 97 percent of INS' funded level.

The large amount of fiscal year 1998 hiring created fiscal stress for the agency by increasing certain payroll costs beyond budgeted levels. According to INS officials, beginning in fiscal year 1998 there was a rapid acceleration in the on-board rate of Border Patrol agents, Investigators, and Detention and Deportation officers. These positions were over-hired for substantial periods during fiscal year 1998. This created a funding problem because INS allocated personal services and benefits (PS&B) funds for funded positions only--not over-hired ones. As of May 1998, INS projected that the PS&B portion of one of its accounts--Salaries and Expenses--would have a deficit of $16.1 million by the end of the fiscal year. The Border Patrol program accounted for most of the projected deficit. The nine other accounts that also provide funding for PS&B were projected to have surpluses or negligible deficits.
INS officials attributed the deficit in part to previous and projected over-hiring by field offices. INS officials told us that some field offices would over-hire but then not use the over-hired positions to fill their vacancies. In some cases, they said, this occurred because there was a mismatch between the positions that had been over-hired and the vacancies that occurred. They said another reason for the deficit was a miscoding of $2.5 million in obligations for newly hired personnel to the Salaries and Expenses account instead of the Violent Crime Reduction Trust Fund (VCRTF) account.

In response to the anticipated deficit, in May 1998, the Office of Budget issued guidance to executive staff. The guidance said the over-hire policy was not intended to permit field offices to remain up to 4 percent over the authorized number of positions for extended periods of time. The guidance listed four actions to be taken: (1) correct the miscoding of new hires from the Salaries and Expenses account to the VCRTF account; (2) ensure all new hires are coded to the correct account; (3) manage subsequent hiring to resolve over-hiring of officer positions; and (4) redirect, by the Office of Budget, $6.5 million to cover the remainder of the anticipated year-end PS&B deficit. The guidance warned that if hiring continued to exceed authorized levels, discretionary funds would have to be used to cover the projected deficit in PS&B funds.

However, as of August 1998, the projected deficit of PS&B funds in the Salaries and Expenses account had increased to $20 million. In response, according to budget officials, field staff were directed to reduce staff on board to funded levels. At the end of fiscal year 1998, however, certain enforcement positions were still over-hired. According to an INS budget official, the over-hired positions accounted for about $12 million in PS&B deficits.
Approximately 50 percent of that amount was covered by unobligated discretionary funds that were reallocated by INS regions to PS&B. In the past, according to INS and Justice Department officials, PS&B funding that was not used to pay personnel costs was reallocated to help fund other spending.

To implement the policy of hiring up to funded levels during fiscal year 1998, INS had to commit a larger share of its budget to personnel costs, leaving a smaller share of funds available to address other needs. For example, to pay an $80 million settlement with the Investigation Union, INS has been paying annual $10 million installments from its Investigations lapsed PS&B funds. As a result of the increased hiring in fiscal year 1998, the Investigations program reportedly did not have sufficient lapsed dollars to fund the $10 million installment. Consequently, the Office of Budget set aside $10 million of Investigations funding at the beginning of fiscal year 1999 to pay the current year installment, which meant that the Investigations program received substantially fewer dollars for discretionary spending.

To illustrate the impact of hiring up to funded levels on INS' budget: if INS had remained at the 86 percent on-board level that existed at the beginning of fiscal year 1998, about $250 million in PS&B funds would have been available to spend on other needs. But INS finished fiscal year 1998 with nearly 97 percent of its funded positions filled. If INS remains at the 97 percent on-board level throughout fiscal year 1999, it will have $60 million in PS&B funds after meeting payroll costs--$190 million less than would be available at an 86 percent on-board level.

After meeting payroll expenses, mandatory costs, and other expenses set aside for centrally funded items that support service-wide needs, INS currently has about $71.8 million more in discretionary funds, overall, than it had in fiscal year 1998.
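The relationship between on-board staffing levels and lapsed PS&B funds described above can be checked with simple arithmetic. The dollar figures below are those stated in the report; the calculation is only an illustration of the reported relationship, not an independent estimate.

```python
# Reported PS&B funds (dollars) left over for other needs at two
# staffing levels, as stated in the report.
available_at_86_percent = 250_000_000  # on board at 86% of funded positions
available_at_97_percent = 60_000_000   # on board at 97% of funded positions

# Hiring closer to the funded level reduces the funds free for
# other spending by the difference between the two figures.
reduction = available_at_86_percent - available_at_97_percent
print(f"${reduction / 1e6:.0f} million less available")  # → $190 million less available
```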
Within INS, the Office of Field Operations, which distributes funding to field offices, had more discretionary funds than it had in fiscal year 1998, while all other headquarters offices received less. Although the Office of Field Operations overall received more discretionary funds than in fiscal year 1998, some programs within it received less. Table 2 provides a breakdown of how the 11 programs under the Office of Field Operations fared.

Initially, in December 1998, when the Office of Budget communicated to the Office of Field Operations how much it would have available in discretionary funds, the total amount appeared to be $199 million less than was allocated in fiscal year 1998. However, according to INS officials, this amount did not yet include $270.7 million that was available from Examinations Fee funds, Salaries and Expenses funds for Adjudications and Naturalization program initiatives, and Working Capital funds. The amounts available from these funds had not yet been allocated because detailed spending plans needed to be developed first. Including the $270.7 million, the Office of Field Operations would have had $71.7 million more in discretionary funds, overall. In January 1999, the $270.7 million was allocated and, following feedback from the field about the inadequacy of the funds initially communicated, headquarters executive staff redirected $47.7 million to field operations for discretionary funds. These actions resulted in a total allocation that was $120.6 million more than in fiscal year 1998. Nevertheless, 5 of the 11 programs under the Office of Field Operations had less discretionary funding than in fiscal year 1998. According to a DOJ official, these problems were not communicated to Congress until January 22, 1999.
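The first step of the discretionary-fund reconciliation described above can be verified directly (figures in millions of dollars, as reported; the later $47.7 million redirection and the final $120.6 million figure reflect the January 1999 allocation decisions and are not derived here):

```python
# Office of Field Operations FY 1999 discretionary funds vs. FY 1998,
# in millions of dollars, as reported.
initial_apparent_change = -199.0  # December 1998: appeared $199 million below FY 1998
later_allocated_funds = 270.7     # Examinations Fee, S&E initiative, and Working Capital funds

# Including the not-yet-allocated funds turns the apparent shortfall
# into a net increase over FY 1998.
net_change = initial_apparent_change + later_allocated_funds
print(f"net change vs. FY 1998: +${net_change:.1f} million")  # → +$71.7 million
```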
According to INS Office of Budget officials, the potentially difficult fiscal situation for fiscal year 1999 was conveyed internally at meetings with (1) resource management staff in July 1998, (2) executive staff and regional directors in August 1998 during the third quarterly financial review, and (3) INS managers in October 1998 at the annual Commissioner's conference. However, initial budget allocations were not made until December 11, 1998, nearly the end of the first quarter of fiscal year 1999. According to budget officials, the allocations were made in December because of the complicated nature of the appropriation. Office of Field Operations officials said they were surprised by the magnitude of the reductions in discretionary funds.

INS continues to pursue the goal of hiring to its authorized level. However, as of January 6, 1999, the Executive Associate Commissioner for Field Operations cancelled the over-hire authority for all programs except, in certain circumstances, those funded by the Examinations Fee account.

In formulating its fiscal year 1999 budget, INS projected in November 1997 that it would receive 6.9 million Adjudications and Naturalization applications and that these would produce $862 million in revenues for its Examinations Fee account. By July 1998, INS was projecting 5.6 million applications and $560 million in revenues for this account. INS overestimated the number of applications--in particular, the number of naturalization applications--that would be submitted, and because of computer problems, it was not able to detect the downturn in applications in a timely fashion. In August 1998, DOJ submitted a reprogramming request for $171 million, of which $88 million was to help cover the decline in Examinations Fee revenues.
None of these sources anticipated the decline in applications and revenues that occurred in fiscal year 1998.

A specific type of naturalization application, referred to by INS as the N-400, made up the single largest component of the Examinations Fee account, both in the number of applications (estimated at 21 percent in fiscal year 1999) and in revenues generated (estimated at 39 percent in fiscal year 1999). INS projected in November 1997 that in fiscal year 1999 it would receive nearly 1.5 million N-400 applications and that these would produce approximately $334 million in Examinations Fee revenue. In June 1998, INS lowered its fiscal year 1999 projections to 700,000 applications and $127 million in revenue.

INS officials have developed some hypotheses, including the following, to explain the unanticipated drop in applications:

- Based on contacts with several community-based organizations (CBOs), INS believed that CBOs were stockpiling naturalization applications in an effort to help eligible aliens meet a January 1998 deadline for filing certain types of adjustment of status applications. INS officials expected that naturalization applications would surge after the deadline. However, it turned out that CBOs were not stockpiling naturalization applications, and the expected surge did not occur.

- Legislative changes restored some benefits for aliens, reportedly causing a reduction in the demand for naturalization.

- Applications from among the 2.7 million aliens who were granted amnesty by the Immigration Reform and Control Act of 1986 have peaked. However, evidence of this did not become clear until well into fiscal year 1998.

- INS had a large backlog of N-400 applications, perhaps creating a disincentive for applicants to apply for naturalization.

INS did not have timely information to determine that the number of N-400 applications had begun to decline.
The key reason for this was that computer programming errors were not detected and resolved for an 8-month period in fiscal year 1998. During this period, INS did not know how many N-400 applications were received. In December 1997, INS tried to change its naturalization case processing and tracking system, the Redesigned Naturalization Application Casework System (RNACS), to show the date that naturalization applications were received at INS, not the date that they began to be processed by INS adjudicators. However, when INS began to use RNACS with the application receipt date incorporated into it, the system recognized only those applications that were received and processed in the same month. If an application was received in one month and processed in another, the end-of-month summary report produced by INS' Office of Information Resources Management did not capture the information on date of receipt.

INS headquarters officials were reportedly skeptical of the low naturalization numbers derived from RNACS. However, it took several months for INS officials to determine that there was a problem with RNACS because (1) it generally takes 5 to 6 weeks for INS field offices to generate statistical information for headquarters, which, in turn, is compiled and reported by headquarters' Office of Statistics, and (2) INS headquarters officials were not certain whether the unexpectedly low numbers of naturalization applications represented real behavior or a reporting error. It then took several months to correct the computer problem and generate new reports. As a result, between October 1997 and May 1998, INS' Examinations Fee Working Group did not have reliable data on which to base revised estimates of N-400 applications for fiscal years 1998 and 1999.

We also examined whether and why INS' rental payment to the General Services Administration (GSA) for fiscal year 1999 may exceed the amount of INS' appropriation identified for rent.
We found that INS' rental payment is expected to exceed the amount appropriated by $13.2 million. For several reasons, Justice officials said, it is difficult to accurately project rent costs, and the shortfall in INS' funds for rent is not inconsistent with what it has incurred in prior years. As of March 1999, the anticipated GSA rental payment for INS for the current fiscal year is $160.1 million. This is $9.9 million above what was requested in the President's Budget for rent and $13.2 million higher than the $146.9 million appropriated by Congress.

Arriving at an accurate projection of rental payments is difficult for INS and other Justice components, according to Justice officials. INS' GSA rental payment exceeded its appropriation by $15 million in fiscal year 1998, $9 million in fiscal year 1997, and $5 million in fiscal year 1996. According to INS and Justice officials, year-to-year fluctuations in the accuracy of rent estimates could be caused by such factors as (1) the actual GSA rental payment for fiscal year 1999 being higher than that anticipated by INS at the time it formulated its budget; (2) changes in INS programs after the end of the budget cycle (e.g., information on new projects requiring space becoming available after the budget cycle has ended); and (3) the difficulty of projecting requirements in an environment of high growth, such as that experienced by INS in recent years.
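The rent figures reported for fiscal year 1999 are internally consistent and also imply the President's Budget request for rent, which the report does not state directly. The arithmetic below uses only the reported figures; the implied request is an inference from them.

```python
# INS fiscal year 1999 rent figures (millions of dollars), as reported.
anticipated_gsa_payment = 160.1   # anticipated GSA rental payment as of March 1999
appropriated_for_rent = 146.9     # amount appropriated by Congress
above_presidents_budget = 9.9     # payment's excess over the President's Budget request

shortfall = anticipated_gsa_payment - appropriated_for_rent
print(f"shortfall: ${shortfall:.1f} million")  # → shortfall: $13.2 million

# The President's Budget request for rent is not stated in the report;
# it is implied by the two figures above.
implied_request = anticipated_gsa_payment - above_presidents_budget
print(f"implied request: ${implied_request:.1f} million")  # → implied request: $150.2 million
```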
Pursuant to a congressional request, GAO discussed the fiscal year (FY) 2000 budget request for the Immigration and Naturalization Service (INS), focusing on: (1) INS' overall fiscal condition in FY 1999; and (2) how factors such as overhiring and a decline in Examinations Fee applications have affected INS' fiscal situation. GAO noted that: (1) after discussions with officials in INS, the Department of Justice, and the Office of Management and Budget, and based on GAO's analysis of INS budget documents, GAO concluded that INS is not experiencing an overall budget shortfall at this time; (2) the hiring policy that INS followed in FY 1998 in an attempt to meet congressional and administrative expectations resulted in INS having to commit a greater share of its FY 1999 budget to salaries and benefits than in prior years; (3) overall, however, INS has more discretionary funds than it had in FY 1998; (4) with respect to the Examinations Fee account, INS overestimated the number of applications it would receive and did not detect the consequent revenue shortfall for months because of computer programming errors; (5) when it became apparent that the anticipated revenues would not be realized, INS decided to seek reprogramming of funds from other accounts to cover the costs; (6) the overhiring and reduced Examinations Fee revenues contributed to most INS programs having less discretionary funding in FY 1999 than in FY 1998; and (7) although INS has not experienced an overall budget shortfall, the combination of higher personnel costs, declining Examinations Fee revenues, and the resultant need to reduce discretionary funding allocations to most programs has created fiscal stress for the agency.
On June 10, 1975, the U.S. government executed a Memorandum of Understanding with the governments of Belgium, Denmark, the Netherlands, and Norway to produce F-16 aircraft under a program known as the F-16 Multinational Fighter Program. Of the 998 aircraft produced under this program, the U.S. Air Force purchased 650 and the European participating governments purchased the remaining 348. Under the ongoing MLU program, the Europeans are upgrading their F-16 aircraft by equipping them with new cockpits and avionics systems. On behalf of the four European participating governments, the U.S. Air Force awarded prime contracts to Lockheed Martin Tactical Aircraft Systems and Northrop Grumman Corporation, valued at $622.7 million and $106.5 million, respectively, to provide the aircraft upgrades. The U.S. government participated in the development phase of the MLU program, but it withdrew from the production phase in November 1992.

The European countries' Supreme Audit Institutions (SAIs) have raised a number of issues regarding the pricing of the MLU contracts. The U.S. and European participating governments agreed that they would "endeavor to establish the same price for the same articles when they were procured under the same conditions from the same source." Because of the proprietary nature of the information affecting the negotiation of the contracts, the SAIs are precluded from having access to this information. On December 15, 1994, representatives of the U.S. and the European participating governments met and reached agreement on providing assurance that the MLU contract prices were fair and reasonable. Among the issues discussed were the rates and factors used to price the MLU contracts. According to the minutes of the meeting, the European representatives were assured that the ". . . rates and factors that are used for MLU contracts are the same for all other LFWC [Lockheed Fort Worth Company] F-16 contracts with the U.S. 
Government." Since these rates and factors are proprietary, the Netherlands representative asked if the United States could provide certification that the same rates are used on all U.S. government contracts. The Defense Plant Representative Office Commander agreed to provide the certification and did so on March 24, 1995.

Lockheed Martin and Northrop Grumman proposed, and Air Force negotiators used, rates and factors to price the two MLU prime contracts that were different from those used to price contemporaneous U.S. government contracts. Also, Air Force negotiators used two incorrect rates in pricing the Northrop Grumman prime contract. These two conditions increased the prime contract prices by a total of $9.4 million.

The rates and factors used to price the Lockheed Martin MLU contract were not the same as those used to price U.S. government contracts. Instead, on December 23, 1994, Lockheed Martin proposed a "special" set of rates to price the MLU contract rather than using the lower forward pricing rate agreement (FPRA) rates in effect at that time. The Air Force used the special rates in negotiating the MLU contract prices. This action increased the contract price by $8 million.

During the December 1994 working group meeting involving U.S. and European representatives, the Defense Plant Representative Office Commander stated he would certify that the rates used to price the MLU contract would be the same as those used to price all U.S. government contracts. Subsequently, in a March 24, 1995, written certification, the Commander stated ". . . that the applicable FPRA rates and factors used in the MLU program are the same as all other programs negotiated between the LFWC and the U.S. Government." However, contrary to the Commander's certification, the Air Force negotiated two other contracts with Lockheed Martin on the same day the MLU contract was negotiated using lower FPRA rates and factors.
Neither Lockheed Martin nor the Air Force withdrew from the FPRA that was in effect at the time the MLU contract price was agreed to. The Defense Federal Acquisition Regulation Supplement stipulates that FPRA rates must be used to price contracts unless waived by the head of the contracting activity. No such waiver was requested or obtained for the special rates used to price the MLU contract. Furthermore, there was no evidence in the contract negotiation records or files that the special rates were audited by DCAA or approved for use by the Defense Plant Representative Office. Lockheed Martin proposed and Air Force negotiators used the lower FPRA rates to establish the negotiation objective for the contract price. Before contract price agreement was reached, however, Lockheed Martin provided Air Force negotiators the special set of rates and factors that they accepted and used to price the contract. Lockheed Martin officials told us a special set of rates and factors was required to negotiate the MLU contract because the existing FPRA was only valid through calendar year 1997. They explained that the MLU contract performance period covered calendar years 1993 through 2001 and that rates and factors for the outyears were required. They believe that the special rates benefited the MLU customers because a new FPRA, negotiated shortly after the MLU contract, included higher rates than those used for the MLU contract. In responding to a draft of this report, the Air Force agreed a special set of rates and factors was used to price the MLU contract, but it believed the use of those rates and factors was in the best interest of the European participating governments. The Air Force also stated that the Defense Plant Representative Office Commander signed the certification in good faith, based on his knowledge at that time, and with full intention of being consistent with the pricing agreement between the U.S. and the European participating governments. 
The Air Force further stated that the Defense Plant Representative Office was negotiating a new FPRA while MLU contract negotiations were going on and had already offered Lockheed Martin higher rates and factors than were in the existing FPRA. The Air Force pointed out that Lockheed Martin would never have accepted the lower existing FPRA rates and factors, which covered the period 1993 through 1997. We agree that the certification was signed in good faith. We also agree that the existing FPRA extended only through 1997 and that rates and factors were needed to cover the MLU contract performance period. However, when changing conditions cause rates in an FPRA to be no longer valid, defense procurement regulations provide approved methods for dealing with the situation--either withdraw from the rate agreement or obtain a waiver from the head of the contracting activity. Air Force negotiators did neither. We found that the Defense Plant Representative Office had issued recommended rates and factors covering 1998 and 1999. Thus, Air Force negotiators--using the existing FPRA and recommended rates--had rates and factors covering 1993 through 1999. According to negotiation records, this period accounted for 99 percent of the MLU contract value. Furthermore, the $8-million increase to the MLU contract is not due to higher rates and factors for the years beyond the FPRA period. Rather, the increase is due to increased rates and factors for 1993 through 1997--the same period covered by the existing FPRA. In addition, the MLU contract awarded to Northrop Grumman for radar systems encountered the same situation as the Lockheed Martin contract--that is, it extended beyond the period covered by the existing FPRA. However, in contrast to the Lockheed Martin situation, the Air Force used existing FPRA rates and factors to price the radar contract. The contract performance period extended into the year 2002, while the existing FPRA went through only 1996. 
Northrop Grumman proposed and the Air Force used the existing FPRA rates and factors and projected these rates and factors over the remaining contract performance period. Northrop Grumman proposed and the Air Force accepted a G&A overhead rate established for pricing foreign military sales contracts rather than a lower domestic rate established for pricing U.S. government contracts. Use of the G&A rate for foreign military sales contracts increased the MLU contract price by $1.3 million. Northrop Grumman officials told us they used the G&A rate for foreign military sales contracts because of the additional costs in doing business with foreign customers. They also stated they were unaware of any requirement to use the same rates applied to U.S. government contracts. They further stated that such a requirement was not made known to the corporation in the Air Force's request for proposal or subsequent contract award. In commenting on a draft of this report, the Air Force pointed out that use of the foreign military sales G&A rate was proper on the Northrop Grumman MLU contract. The Air Force advised us that the contractor could not use and the Air Force could not accept the domestic G&A rate for pricing the contract because it would be a misallocation of costs. The Air Force also pointed out that use of the foreign military sales G&A rate did not violate the intent or the spirit of the agreement between the U.S. and the European participating governments. It should be noted that while the Air Force contends that it would have been improper to use the domestic G&A rate for pricing the Northrop Grumman contract, the Air Force used a domestic G&A rate to price the Lockheed Martin MLU contract. The Air Force did not explain this inconsistency. In addition to using the higher G&A rate for foreign military sales contracts, Air Force negotiators used two incorrect rates in pricing the MLU contract, which caused its price to be increased by $163,600. 
The Air Force concurred that use of the incorrect rates was an oversight. In total, the MLU contract price was increased by $1.4 million as a result of using the higher G&A rate for foreign military sales contracts and two incorrect rates. DCAA conducted preaward audits of both prime contract proposals and questioned various costs. DCAA also reported large amounts of proposed subcontract costs as unresolved because several subcontractor price proposals had not been audited at the time of its preaward audits. Price negotiation memorandums showed DCAA helped the Air Force evaluate updated contractor proposals during fact-finding prior to contract price negotiations. In addition to making specific recommendations on proposed costs, DCAA also provided Air Force negotiators with information on deficiencies in the contractors' estimating systems, material management and accounting systems, and other operations. The price negotiation memorandums clearly show that Air Force negotiators used DCAA recommendations to assist in establishing objectives and negotiating lower prices for the two prime contracts. The memorandum for the Lockheed Martin contract, for example, shows DCAA reported a substantial amount of proposed subcontract costs as unresolved because audits of the subcontracts had not been completed at the time of DCAA's review. DCAA reported the same condition for the Northrop Grumman contract. Audits of the subcontractor proposals were subsequently obtained, and Air Force negotiators used the information in negotiating the contract prices. Air Force negotiators also used other DCAA recommendations in negotiating the prices of the contracts. On the Northrop Grumman contract, for example, they extensively used DCAA's recommendations on proposed material costs. The price negotiation memorandum showed Air Force negotiators were able to obtain most of DCAA's recommended cost reductions for material. 
We reviewed the fairness and reasonableness of subcontract and material costs negotiated in the prime contracts because these costs comprised about 88 percent of the combined negotiated contract prices. Subcontracts and material under the Lockheed Martin contract totaled $572.7 million, or about 92 percent, of the $622.7-million contract price. Subcontracts and material under the Northrop Grumman contract comprised $66.2 million, or about 62 percent, of the $106.5-million price. For competitively priced subcontracts, we examined the supporting records and, if adequate competition occurred, we accepted the prices as fair and reasonable. For noncompetitively priced subcontracts, we examined the negotiation records to determine if appropriate safeguard techniques were used to negotiate the prices. At the time of the prime contract price agreement dates, Lockheed Martin had negotiated firm prices for 10 of its 11 major subcontracts, and Northrop Grumman had negotiated firm prices for both of its major subcontracts. The contractors used the pricing techniques required by the Federal Acquisition Regulation in negotiating subcontract prices. Subcontract files and other records showed that Lockheed Martin and Northrop Grumman (1) obtained cost or pricing data, (2) conducted cost analyses, (3) conducted price negotiations, and (4) obtained certificates of current cost or pricing data. The cognizant Defense Plant Representative Offices also obtained audits from DCAA or the participating governments' audit agencies of the subcontractor price proposals and provided the audit reports to Air Force negotiators. For the subcontract that was not priced at the time of prime contract price agreement, Lockheed Martin, as required by the Federal Acquisition Regulation, obtained cost or pricing data from the subcontractor and prepared a cost analysis of the subcontract proposal.
Air Force negotiators accepted the proposed and negotiated subcontract prices as fair and reasonable based on the prime contractors' evaluation and negotiation efforts. We did not examine material items on the Lockheed Martin contract because they comprised less than 1 percent of the contract price. As for the Northrop Grumman contract, we examined the pricing of selected material items because material costs comprised about 9 percent of the contract price. Northrop Grumman used appropriate safeguard techniques to price material items. None of the eight high dollar items we selected for review were priced at the time of prime contract price agreement. Northrop Grumman based its proposed prices for four of the items on supplier competitive quotations. Northrop Grumman received multiple quotations for the four items; therefore, we accepted the competitive prices as fair and reasonable. Northrop Grumman based its proposed prices for the other four items on noncompetitive quotations, and it conducted price analyses for the items. For two of the items, the price quotations fell below the maximum prices established by the price analyses, and Northrop Grumman accepted the proposed prices as fair and reasonable. Quotations for the other two items were higher than the maximum price established by the price analyses, and Northrop Grumman decremented the quotations and submitted the lower prices to Air Force negotiators. During prime contract price negotiations, Air Force negotiators applied an additional decrement against the proposed prices for all eight items. There are indications that material is overpriced by as much as $947,000 under the two prime contracts because the prime contractors did not provide government negotiators with accurate, complete, and current data available for the items at the time of the contract price agreement dates. 
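The cost shares quoted above follow directly from the reported dollar amounts. As a quick arithmetic check of the rounding (all inputs are the figures from this report, in millions of dollars; the helper function is ours, a sketch for verification only):

```python
# Subcontract and material costs as a share of the negotiated contract prices,
# using the dollar figures reported for the two MLU prime contracts.

def share(part_millions: float, total_millions: float) -> int:
    """Return the percentage share, rounded to the nearest whole percent."""
    return round(part_millions / total_millions * 100)

lockheed_share = share(572.7, 622.7)                    # subcontracts and material vs. contract price
northrop_share = share(66.2, 106.5)
combined_share = share(572.7 + 66.2, 622.7 + 106.5)     # both contracts together

print(lockheed_share, northrop_share, combined_share)   # 92 62 88
```

The combined 88-percent figure is why the review concentrated on subcontract and material pricing rather than the contractors' in-house costs.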
We provided this information to the cognizant DCAA offices, and they are reviewing material prices in both prime contracts to determine the extent of overpricing. The amount of overpricing may change as DCAA continues its review. As requested, we reviewed the pricing of the subcontracts Lockheed Martin negotiated with Hazeltine for the advanced identification friend or foe system and with Honeywell for the color multifunction display system. The Hazeltine subcontract was awarded on a competitive basis, while the Honeywell subcontract was awarded on a noncompetitive basis. The subcontract awarded to Hazeltine was competed between Hazeltine and three other vendors. Lockheed Martin subjected the responsive proposals to a technical evaluation, management evaluation, risk analysis, and cost evaluation and determined that Hazeltine had the lowest risk approach with the highest probability of successful completion. Hazeltine was the only supplier that proposed to meet all of the technical requirements. Lockheed Martin concluded Hazeltine's proposed price was fair and reasonable and awarded the subcontract. Air Force negotiators also accepted the subcontract price as fair and reasonable. Lockheed Martin used the same safeguard techniques in negotiating the Honeywell subcontract that are required to be used in negotiating subcontracts under U.S. government prime contracts. There was not an FPRA with Honeywell at the time the subcontract price was negotiated; however, recommended rates and factors had been issued for Honeywell contracts. Lockheed Martin used the recommended rates and factors in negotiating the subcontract price. Air Force negotiators accepted the negotiated price as fair and reasonable. Air Force and contractor officials reviewed a draft of this report, and their comments have been incorporated in the text where appropriate. Their comments are presented in their entirety in appendixes I, II, and III. The European countries' Supreme Audit Institutions (SAIs) selected two prime contracts for review.
The first prime contract involved the letter contract the Air Force awarded to Lockheed Martin on August 17, 1993. The contract provides for the production of modification kits to upgrade the cockpit and avionics systems on the F-16 aircraft. The Air Force and Lockheed Martin agreed on the contract price on April 21, 1995, and the final contract was signed on June 13, 1995. The second prime contract involved a letter contract the Air Force awarded to Northrop Grumman on December 3, 1993. The contract provides for the production of modification kits for the AN/APG-66(V)2 fire control radar. The Air Force and Northrop Grumman agreed on the contract price on July 15, 1994, and the final contract was signed on September 27, 1994. SAIs also selected two subcontracts for review. Both were awarded under the prime contract to Lockheed Martin. The first involved the subcontract Lockheed Martin awarded to Honeywell (purchase order 354) on October 30, 1995, for the production of the F-16 color multifunction displays. The second involved the subcontract Lockheed Martin awarded to Hazeltine (purchase order 4XU) on September 24, 1993, for the production of the advanced identification friend or foe combined interrogator/transponder system. To determine whether the rates and factors used to price the two MLU prime contracts were the same as those used to price U.S. government contracts, we reviewed Air Force negotiation records to identify the rates and factors used for the MLU contracts. We then compared the MLU rates and factors to those included in FPRAs and forward pricing rate recommendations in effect at the time the MLU contracts were negotiated. Where differences were identified, we determined the effect on contract prices. We performed similar work on the Honeywell subcontract. We discussed the rates and factors with contractor, Air Force, DCAA, and Defense Plant Representative Office officials. 
To determine how Air Force officials used DCAA audit recommendations in negotiating prices for the prime contracts, we reviewed the DCAA preaward audit reports and recommendations. We evaluated contract negotiation records to determine how Air Force negotiators used DCAA's work in establishing negotiation objectives and negotiating the contract prices. We discussed the use of the audit recommendations with DCAA and Air Force officials. To determine whether subcontract and material costs included in the contract prices were fair and reasonable, we compared the pricing safeguard techniques used by the contractors with those required by the Federal Acquisition Regulation and the Defense Federal Acquisition Regulation Supplement. We verified that, when required, the contractors obtained cost or pricing data, conducted cost or price analyses, carried out negotiations with subcontractors and vendors, and obtained certificates of current cost or pricing data. We also determined whether DCAA or audit agencies of the European participating governments made audits of the subcontractor price proposals. In addition, we examined negotiation records for the subcontracts and material items and discussed them with contractor and Air Force officials. We performed our work between May and August 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense and the Air Force; the F-16 System Program Director; the Director, Defense Contract Audit Agency; the Commander, Defense Contract Management Command; and the Chief Executive Officers of Lockheed Martin and Northrop Grumman Corporations. Copies will be made available to others upon request. If you or your staff have questions about this report, please contact me at (202) 512-4841 or David E. Cooper at (202) 512-4587. Major contributors to this report are listed in appendix IV.

Joe D. Quicksall, Assistant Director
Jeffrey A. Kans, Evaluator
Kimberly S. Carson, Evaluator

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Orders may also be placed in person at Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC; by calling (202) 512-6000; by using fax number (301) 258-4066; or by TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO reviewed the pricing of selected contracts and subcontracts awarded under the F-16 Aircraft Mid-Life Update (MLU) Program, designed to develop, produce, and install upgrades to F-16 fighter aircraft owned by Belgium, Denmark, the Netherlands, and Norway, focusing on: (1) differences between the rates and factors used to price two selected prime contracts and those used to price contemporaneous U.S. contracts; (2) how the Air Force used Defense Contract Audit Agency (DCAA) recommendations in negotiating prime contract prices; and (3) whether the prime contracts' prices for material and subcontract costs were fair and reasonable. GAO found that: (1) the prime contractors proposed and Air Force negotiators accepted rates and factors to price the two MLU contracts that were different from those used to price contemporaneous U.S. government contracts; (2) the contract prices for the European participating governments were $9.4 million higher due to the use of different rates and factors; (3) the Defense Plant Representative Office Commander certified that the forward pricing rate agreement (FPRA) rates and factors used to price the Lockheed Martin MLU contract were the same as those used to price all other contracts awarded to Lockheed Martin during the effective period of the agreement; (4) despite this certification, a special set of higher rates and factors was used to price the MLU contract rather than those called for in the FPRA; (5) for the Northrop Grumman contract, Air Force negotiators used a general and administrative overhead rate established for use in pricing foreign military sales rather than a lower domestic rate established for pricing U.S. 
government contracts; (6) Air Force negotiators also used two incorrect rates in pricing the MLU contract; (7) DCAA conducted preaward audits of the prime contractors' price proposals, questioned various costs, and reported large amounts of unresolved costs because audits had not been made of several subcontractor price proposals; (8) except for the rates and factors used for the Lockheed Martin contract, Air Force negotiators used DCAA's audit results to assist them in negotiating lower prices for the prime contracts; (9) Lockheed Martin and Northrop Grumman employed safeguard techniques required by U.S. procurement regulations to evaluate and negotiate subcontract and material prices for the prime contracts, and Air Force negotiators accepted the proposed and negotiated subcontract prices as fair and reasonable; (10) there are indications that material in the two prime contracts may be overpriced by as much as $947,000; (11) as for the two subcontracts selected by the European countries' Supreme Audit Institutions for review, Lockheed Martin awarded the Hazeltine subcontract competitively and the Honeywell subcontract noncompetitively; (12) in negotiating the price of the Honeywell subcontract, Lockheed Martin used rates and factors recommended by the cognizant U.S. government contract administration activity and employed the required safeguard techniques; and (13) the Air Force accepted the prices of these two subcontracts as fair and reasonable.
VA began providing formal treatment for alcohol dependency in the late 1960s and treatment for drug dependency in the early 1970s. According to VA, the guiding principle behind its national substance abuse treatment program has been the development of a comprehensive system of care for veterans. In accordance with this principle, VA has developed a network system of care that is supposed to afford veterans access to facilities offering a range of substance abuse treatment services, including inpatient, residential, and ambulatory care. VA requires its medical centers to maintain quality assurance programs so that veterans receive quality care. Such care is defined as the degree to which health services increase the likelihood of desired health outcomes and are consistent with current professional knowledge. Quality assurance programs measure whether quality care is provided and use performance indicators to measure whether established standards have been met. VA's substance abuse treatment programs serve a population characterized as psychologically and economically devastated. For example, in fiscal year 1995, nearly one-half of veterans in substance abuse treatment inpatient units were homeless at the time of admission, and 35 percent had both substance abuse and one or more psychiatric disorders. In addition, veterans treated in substance abuse treatment units were chronically unemployed, had problems maintaining relationships, reported low incomes, or were criminal offenders. In fiscal year 1995, VA treated 57,776 veterans in inpatient substance abuse treatment units and 121,812 veterans in outpatient substance abuse treatment units (see table 1). About 70 percent of these veterans were eligible for VA health care because of their low incomes rather than because of a service-connected disability. More than 50 percent of the veterans were Vietnam War-era veterans and another 25 percent served after that time. 
Only 6 percent of the inpatients and 9 percent of the outpatients had a service-connected disability of 50 percent or more. Characteristics of veterans treated in inpatient and outpatient substance abuse treatment units differed somewhat from veterans treated in VA's medical and surgical units. Veterans in the medical and surgical units were older than those in the treatment units. Their median age was about 59, compared with veterans in all substance abuse treatment units, whose median age was 43. Furthermore, more veterans in medical and surgical units were eligible for VA treatment because of their service-connected disability than were veterans being treated in substance abuse treatment units. About 34 percent of the inpatients and 47 percent of the outpatients seen in medical and surgical units had a service-connected disability, compared with 25 percent and 31 percent, respectively, for veterans in all substance abuse treatment units. VA strives to offer a continuum of services to treat veterans nationwide with substance abuse disorders. Since fiscal year 1990, VA has used additional funds to expand the number of substance abuse treatment programs, patients treated, and staff. The additional funds, accompanied by an increased emphasis on outpatient treatment, have resulted in significantly increasing the number of outpatients served at VA medical centers. VA operates 389 substance abuse treatment programs at more than 160 medical centers throughout the United States and Puerto Rico. These programs include 203 inpatient or extended-care programs, 152 outpatient programs, 22 methadone maintenance clinics, 9 residential rehabilitation programs, and 3 early intervention programs. Typically, these medical centers provide a combination of treatment settings, incorporating inpatient or extended-care programs, outpatient clinics, and residential rehabilitation programs. VA provides most substance abuse programs directly. 
However, it does rely on some non-VA facilities, such as community residential facilities, to provide some services. Figure 1 shows the locations and types of VA substance abuse programs provided as of October 1, 1994. Like other providers, VA uses a variety of approaches in treating veterans with substance abuse disorders. Table 2 describes the treatment approaches used in VA programs. As part of the President's national drug policy program, VA received $105 million annually in recurring funds in fiscal years 1990 to 1993. VA used these funds to expand substance abuse treatment services to more eligible veterans. The additional funds and emphasis on outpatient treatment resulted in significantly increasing the number of outpatients served at VA medical centers. As shown in figure 2, obligations for VA substance abuse treatment programs increased about 45 percent, from $407 million to $589 million from fiscal years 1991 to 1996. As shown in figures 3 and 4, the number of inpatients and inpatient programs has remained fairly stable over the years; the number of outpatients and outpatient programs has grown significantly, however. According to VA, the number of inpatients served in VA substance abuse treatment units declined slightly from 58,500 to 55,200 patients in fiscal years 1988 to 1995. The number of outpatients in substance abuse treatment in those same fiscal years rose dramatically, however, from 38,300 to 68,300 patients--about a 78-percent increase. A similar trend has occurred in the number of inpatient and outpatient treatment programs. The number of inpatient programs increased from 174 to 180 (about 4 percent) between fiscal years 1991 and 1994. However, the number of outpatient programs increased from 111 to 152--about a 37-percent increase. Traditionally, medical center directors determined the extent to which their centers offered substance abuse treatment services. This may change, however, under the VISN structure. 
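The growth rates cited above follow directly from the reported counts. As a quick arithmetic check (all inputs are figures from this report; the helper function is ours, a sketch for verification only):

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

outpatients = pct_change(38_300, 68_300)   # veterans in outpatient treatment, FY 1988-95
out_programs = pct_change(111, 152)        # outpatient treatment programs, FY 1991-94
obligations = pct_change(407, 589)         # program obligations in $ millions, FY 1991-96

print(round(outpatients), round(out_programs), round(obligations))  # 78 37 45
```

The contrast with the nearly flat inpatient figures (58,500 down to 55,200 patients, and 174 up to 180 programs) is what the report means by VA's shift in emphasis toward outpatient care.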
The VISN directors, who are accountable to the Under Secretary for Health for their VISNs' performance, are charged with providing coordinated services for all eligible veterans living within their network areas. Although VISN directors and the respective medical center directors have discussed possible changes to the substance abuse treatment programs, no changes had yet been made during the time of our study. On the basis of discussions with VA officials, however, some current programs will likely be consolidated and others will likely change focus. VA currently lacks the necessary data to adequately measure and fully evaluate the efficacy of its many treatment programs. VA is therefore developing a new performance monitoring system, using new outcome measures, to compare treatment and program effectiveness both internally and with non-VA substance abuse treatment providers. VA's efforts compare with outcome measurement approaches used by non-VA providers of substance abuse treatment services. Substance abuse treatment staff at VA medical centers monitor program quality through the accreditation process and internal studies. VA medical center substance abuse treatment programs must meet the standards promulgated by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO). Through its review process, JCAHO determines whether each medical center has the necessary programs in place that should result in good care. In addition, medical centers have instituted quality improvement programs, in part to satisfy accreditation requirements, using a variety of measures. The medical centers we visited track readmissions, length of stay, and patient satisfaction. At the VA medical center in Denver, for example, recidivism rates have been monitored since 1988. At a VA medical center in Chicago, discharged inpatients are monitored to determine whether they show up for outpatient follow-up care. 
VA's quality management philosophy and staffing resources have constrained the central office staff's monitoring role. Central office officials have primarily played a consultant role on quality assurance matters. This role has been based on VA's philosophy that, because care takes place at the medical centers, staff at the centers are the best suited to monitor their programs and take the appropriate actions to improve care. Central office officials do, however, monitor the many substance abuse treatment programs by reviewing (1) annual reports on the substance abuse treatment programs at each medical center; (2) reports on program services, staffing, and utilization from VA's Program Evaluation and Research Center; (3) the Quality Improvement Checklist, a systemwide quality improvement tool that includes one indicator about the rate of readmission for alcohol- and drug-related disorders for patients discharged from inpatient substance abuse treatment units; and (4) the results of patient satisfaction surveys. These officials also work with staff from the Center for Excellence in Substance Abuse Treatment and Education to test models of care, help identify best practices, train students, and provide continuing education in substance abuse treatment. Except for the Center's reviews, however, none of these reviews focuses on the outcomes of the specific treatments provided. In November 1995, in a shift in philosophy, VA central office officials proposed a systemwide approach to quality management using a variety of performance indicators, including treatment outcome measures. Believing substance abuse to be a chronic disease that frequently recurs, VA has dropped two previously used indicators, recidivism and discharge disposition, because staff felt that these indicators did not adequately measure program success. The new indicators will rely on data currently collected but not aggregated. 
Three indicators relate to the number of veterans starting substance abuse treatment programs and visiting outpatient units. Two indicators compare the number of patients in and visits to outpatient substance abuse treatment units with the number of all patients in and visits to these units as well as the number of patients in all VA substance abuse treatment units as a percentage of the total number of patients in care. In the future, VA plans to develop other performance indicators based on data not currently available to assess treatment effectiveness. These indicators will be based on data collected through a standardized data collection instrument, the Addiction Severity Index (ASI). The indicators will measure treatment outcomes that include changes in medical status, employment, alcohol use, drug use, criminal activity, family and social relationships, and psychiatric symptoms. VA is considering administering a comprehensive ASI to all patients within 3 days of entering any substance abuse treatment setting and then annually while the patient remains in treatment. An abbreviated ASI would be administered after 1 month and again after 6 months of treatment. Although both VA and non-VA substance abuse treatment officials agree that patient data collected through the ASI would be useful in determining the proper treatment and its efficacy, some are concerned that it may be too expensive and time consuming to administer. The revised performance measures will be used to evaluate individual substance abuse treatment programs and compare them with each other as well as with non-VA programs. For example, VA is already piloting a performance monitoring system developed by its Program Evaluation and Research Center. The system ranks, according to cost and utilization data, the relative performance of mental health and substance abuse units among the medical centers and 22 VISNs. 
To ensure that the comparisons fairly assess program performance, VA intends to account for veteran characteristics, such as other coexisting medical or psychiatric diseases, that might affect the outcome of the substance abuse treatment. VA's current and planned initiatives to monitor program performance compare with those used or planned by non-VA providers and managed behavioral health care organizations we contacted. For example, one large managed behavioral health company that has used outcome measures since 1993 collects information about readmission, complaints, and patient and provider satisfaction, among other data. A large local provider had no systematic outcome measurement efforts under way at the time of our study, but it would provide data for requested state or federal studies. Such data might include detoxification use, employment, housing, and treatment service use. Comparisons of VA's programs with publicly supported non-VA substance abuse programs should be possible once VA's various programs' treatment outcomes are known and the data are properly adjusted to account for any differences in patient characteristics. Non-VA substance abuse providers and programs are also available to and used by veterans. In Colorado, for instance, approximately 400 facilities that receive some public funding to treat patients with low incomes served five times the number of veterans treated at the Denver VA medical center in fiscal year 1995. The 10,000 veterans treated by state-funded facilities in Colorado represent about 18 percent of the patients seen at the facilities. Similarly, in Illinois, we found that 8,200 patients, about 8 percent of those treated in facilities receiving state funds, were veterans. According to VA officials and officials of the non-VA programs we visited, veterans who qualify for publicly supported treatments are like those treated at the VA medical centers. 
For example, in Colorado and Illinois, we found that the veterans treated by state-funded providers have low incomes and high levels of unemployment; many were homeless. Moreover, the vast majority of the veterans were male--97 percent in both Colorado and Illinois--and most did not have insurance. Although non-VA providers told us they were willing to treat more veterans, they currently do not have enough staff to do so. Therefore, these providers would need additional funding to hire staff capable of treating a significant number of low-income veterans with multiple problems. The number and health status of eligible veterans, potential demand for substance abuse treatment services, and the cost of specific programs are just some of the data needed to determine the implications of changing VA's service delivery methods. However, VA currently has neither this information nor the systems in place to gather it. This situation and the decisions VISN directors might make about what and where services will be offered make it difficult to estimate the effects of VA's changing its current delivery structure. One possible change to VA's services you asked us to explore is VA's reducing its substance abuse treatment program. If VA were to stop treating veterans for substance abuse, societal costs would likely increase. Researchers have indicated that the costs of treating people with substance abuse disorders tend to shift to other sectors, including welfare and other social services, other medical providers, and the criminal justice system, when people go untreated. Although we expect that many of VA's substance abuse patients would qualify for publicly supported treatment programs if VA ended its services, VA officials told us that some veterans would surely "fall through the cracks." These officials are concerned about the uneven distribution of care now provided through state-assisted programs and about how VA patients would fare in a managed care environment. 
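The state-program figures reported earlier (10,000 veterans making up about 18 percent of patients in Colorado's state-funded facilities; 8,200 making up about 8 percent in Illinois) imply rough total caseloads for those systems. The derivation below is our back-of-the-envelope sketch, not data stated in the report:

```python
def implied_total(veterans: int, share_pct: float) -> int:
    """Total caseload implied if `veterans` make up `share_pct` percent of patients."""
    return round(veterans / (share_pct / 100))

colorado_total = implied_total(10_000, 18)   # ~55,600 patients in Colorado state-funded care
illinois_total = implied_total(8_200, 8)     # ~102,500 patients in Illinois
denver_va = 10_000 // 5                      # report: state programs served 5x the Denver VA count

print(colorado_total, illinois_total, denver_va)
```

The implied Denver VA caseload of roughly 2,000 veterans gives a sense of how much larger the publicly funded non-VA systems are than a single VA medical center's substance abuse program.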
You asked us to look at the implications of VA's contracting out for substance abuse treatment services instead of eliminating or reducing them. The implications of this approach for VA and the community are difficult to determine at this time. VA lacks information on the health care needs of eligible veterans, the number of veterans who might seek care if it were more accessible, the actual cost of treating such veterans, and the outcomes of specific treatments. Before contracting out substance abuse treatment services, VA would have to better understand its patients, treatment outcomes, and costs. Only then could it define a number of key contractual elements, such as the preferred service delivery model, the services it would and could afford to cover, the treatment philosophy to be employed, responsibilities for program monitoring, and the distribution of financial risks. The lack of this information limits our ability to evaluate the cost-effectiveness of contracting out program services and the implications of this action for the relative quality of services veterans might receive.

VA reviewed a draft of this report and commented that it was a fair and accurate assessment of its substance abuse program and the initiatives it has under way. This report was prepared under the direction of Sandra Isaacson, Assistant Director; Tom Laetz; Mary Needham; and Bill Temmler. Should you have any questions, please call me at (202) 512-7111 or Sandra Isaacson at (202) 512-7174.

Stephen P. Backhus
Associate Director, Veterans' Affairs and Military Health Care
Pursuant to a congressional request, GAO reviewed the Department of Veterans Affairs' (VA) substance abuse program and the effect of VA reorganization on this program, focusing on: (1) characteristics of veterans who receive substance abuse treatment; (2) services VA offers to veterans with substance abuse disorders; (3) methods VA uses to monitor the effectiveness of its substance abuse treatment programs; (4) community services available to veterans who suffer from substance abuse disorders; and (5) implications of changing VA methods for delivering substance abuse treatment services.

GAO found that: (1) in fiscal year 1995, VA substance abuse treatment units served about 180,000 veterans; (2) about one half of the inpatients were homeless at the time of admission and about one third had psychiatric disorders; (3) many of these veterans were chronically unemployed, had problems maintaining relationships, reported low incomes, or were criminal offenders; (4) VA provides a variety of treatment settings and approaches; (5) between fiscal years 1991 and 1996, VA funding for treatment increased from $407 million to $589 million to accommodate growth in the substance abuse treatment program; (6) VA lacks the necessary data to adequately measure and fully evaluate the efficacy of its many treatment programs and has primarily relied on utilization information and recidivism rates to monitor the quality of its substance abuse treatment programs; (7) VA is developing a performance monitoring system based on treatment outcome measures; (8) numerous non-VA substance abuse treatment programs are also available to and used by veterans; (9) many veterans treated in community-based public programs are like those treated in VA programs; (10) if VA stopped treating veterans for substance abuse, resulting societal costs may shift to welfare or other social services, other federal or state substance abuse treatment programs, and the criminal justice system; (11) VA cannot ascertain the implications of contracting for these services, since it lacks critical information on the health care needs of eligible veterans, the number of veterans who might seek care, and the actual cost of treating veterans with substance abuse disorders; and (12) VA officials have not decided how substance abuse treatment services will be delivered and what outcome measures will be used to evaluate treatment and program effectiveness.
GAO remains one of the best investments in the federal government, and our dedicated staff continues to deliver high-quality results. In FY 2013 alone, GAO provided services that spanned the broad range of federal programs and activities. We received requests for our work from 95 percent of the standing committees of Congress and almost two-thirds of their subcommittees. We reviewed a wide range of government programs and operations, including those that are at high risk for fraud, waste, abuse, and mismanagement. GAO also reviewed agencies' budgets as requested to help support congressional decision-making. Last year, our work yielded significant results across the government, including $51.5 billion in financial benefits--a return of about $100 for every dollar invested in GAO. Also in FY 2013, we issued 709 reports and made 1,430 new recommendations. The findings of our work were often cited in House and Senate deliberations and committee reports to support congressional action, including improving federal programs on our High Risk list; addressing overlap, duplication, and fragmentation; and assessing defense, border security, and immigration issues. Our findings also supported the Bipartisan Budget Act of 2013 in areas such as aviation security fees, unemployment insurance, improper payments to inmates, the strategic petroleum reserve, and the contractor compensation cap. Senior GAO officials also testified 114 times before 60 committees or subcommittees on a wide range of issues that touched virtually all major federal agencies. A list of selected topics addressed is included in Appendix I. GAO's findings and recommendations produce measurable financial benefits through congressional action or agency implementation.
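The "$100 for every dollar" figure is a simple ratio of reported financial benefits to GAO's annual appropriation. A minimal sketch of that arithmetic follows; the appropriation figure used here is an illustrative assumption (roughly $505 million for FY 2013), since this statement gives only the benefit total:

```python
# Rough check of GAO's stated FY 2013 return on investment.
# The appropriation amount is assumed for illustration; the statement
# itself reports only the $51.5 billion in financial benefits.
financial_benefits = 51.5e9   # FY 2013 financial benefits, per the statement
appropriation = 505e6         # assumed FY 2013 appropriation (illustrative)

return_per_dollar = financial_benefits / appropriation
print(f"about ${return_per_dollar:.0f} per dollar invested")
```

Any appropriation in the neighborhood of half a billion dollars yields a ratio of roughly 100, consistent with the "about $100 per dollar" characterization.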
Examples of FY 2013 financial benefits resulting from congressional or federal agency implementation of GAO recommendations include:

$8.7 billion from reducing procurement quantities of the Joint Strike Fighter program: DOD decreased near-term procurement quantities in three successive budget submissions to lessen concurrency and the associated cost risks, in light of our numerous recommendations citing the F-35 Joint Strike Fighter program's very aggressive and risky acquisition strategy, including substantial overlap among development, testing, and production activities.

$2.6 billion from revising the approach for the Navy's Next Generation Enterprise Network (NGEN) acquisition: Our recommendations led the Navy to revise its NGEN acquisition strategy--which was riskier and potentially costlier than other alternatives identified because of a higher number of contractual relationships--thus significantly reducing program costs between 2013 and 2017.

$2.5 billion from eliminating seller-funded payment assistance for FHA-insured mortgages: The Department of Housing and Urban Development and Congress took steps to prohibit seller-funded down payment assistance, citing our findings that loans with such assistance had substantially higher delinquency and insurance claim rates than similar loans without it and were contributing to the Federal Housing Administration's deteriorating financial performance.

$2.3 billion from consolidating U.S. forces stationed in Europe: DOD removed two brigade combat teams and support units from Europe, allowing it to further consolidate and close facilities, based in part on our work showing significant costs related to maintaining permanent Army forces in Europe and our recommendations that DOD identify alternatives that might lead to savings.
$1.3 billion through improved tax compliance: Our recommendations on the use of information reporting to reduce the tax gap contributed to legislation requiring banks and others to report income that merchants receive through credit cards, third-party networks, and other means to help IRS verify information reported on merchants' income tax returns. The estimated increase in revenue from improved tax compliance is expected over the provision's first 3 fiscal years.

GAO has generated recommendations that save resources, increase government revenue, improve the accountability, operations, and services of government agencies, and increase the effectiveness of federal spending, as well as provide other benefits. Since FY 2003, GAO's work has resulted in substantial financial and other benefits for the American people, including: over one-half trillion dollars in financial benefits; about 14,500 program and operational benefits that helped to change laws, improve public services, and promote sound management throughout government; and about 12,000 reports, testimonies, and other GAO products that included over 22,000 recommendations.

In FY 2013, GAO also contributed to 1,314 program and operational benefits that helped to change laws, improve public services, and promote sound management throughout government. Thirty-six percent of these benefits related to business process and management, 31 percent to public safety and security, 17 percent to program efficiency and effectiveness, 8 percent to acquisition and contract management, 5 percent to public insurance and benefits, and 3 percent to tax law administration.
Examples include: enhancing coordination between DOD and the Social Security Administration (SSA) on more timely delivery of military medical records through electronic transfer; improving Department of Veterans Affairs (VA) oversight of its medical equipment and supply purchasing; increasing collaboration between the Army and VA through a joint working group to improve management of military cemeteries and help eliminate burial errors and other past problems; updating Federal Emergency Management Agency (FEMA) National Flood Insurance Program contract monitoring policies to reduce the likelihood that contractor performance problems would go unnoticed; and establishing National Oceanic and Atmospheric Administration policies outlining the processes, roles, and responsibilities for transitioning tsunami research into operations at tsunami warning centers.

In FY 2013, GAO issued its third annual report on overlap, duplication, and fragmentation. In it, we identified 31 new areas where agencies may be able to achieve greater efficiency or effectiveness. Within these 31 areas, we identified 81 actions that the executive branch and Congress could take to reduce fragmentation, overlap, and duplication, as well as other cost savings and revenue enhancement opportunities. This work identifies opportunities for the federal government to save billions of dollars. We also maintain a scorecard and action tracker on our external website where Congress, federal agencies, and the public can monitor progress in addressing our findings. Federal agencies and Congress have made some progress in addressing the 131 areas we identified and taking the 300 actions we recommended in our 2011 and 2012 reports.

In February 2013, GAO issued the biennial update of our High Risk report, which focuses attention on government operations that are at high risk of fraud, waste, abuse, and mismanagement, or that need transformation to address economy, efficiency, or effectiveness challenges.
This report, which will be updated in 2015, offers solutions to 30 identified high-risk problems and the potential to save billions of dollars, improve service to the public, and strengthen the performance and accountability of the U.S. government. Our 2013 High Risk work produced 164 reports, 35 testimonies, $17 billion in financial benefits, and 411 program and operational benefits. The major cross-cutting High Risk program areas identified as of September 2013 range from transforming DOD program management and managing federal contracting more effectively to assessing the efficiency and effectiveness of tax law administration and modernizing and safeguarding insurance and benefit programs. The complete list of high-risk areas is shown in Appendix II. Details on each high-risk area can be found at http://www.gao.gov/highrisk/overview.

GAO's FY 2014 budget request sought statutory authority for a new electronic docketing system to be funded by a filing fee collected from companies filing bid protests. The sole purpose of the filing fee would be to offset the cost of developing, implementing, and maintaining the system. We appreciate that the Consolidated Appropriations Act, 2014, directed GAO to develop an electronic filing and document dissemination system under which persons may electronically file bid protests and documents may be electronically disseminated to the parties. GAO is making progress in establishing the electronic protest docketing system. We have convened an interdisciplinary team of experts within GAO to examine matters such as technical requirements, the potential for commercially available systems, fee structure, cost-benefit analysis, and outreach to stakeholders, including representatives from the small business community. GAO will be reporting regularly to the House and Senate Committees on Appropriations on its progress in implementing the system.
In September 2013, GAO launched the Watchdog website, which provides information exclusively to Members and congressional staff through the House and Senate intranets. The new site is designed to provide a more interactive interface for Members and their staff to request our assistance and to access our ongoing work. In addition, Watchdog can help users quickly find GAO's issued reports and legal decisions as well as key contact information.

In December 2013, Members and their staff were invited to comment on our draft Strategic Plan for Serving Congress in FYs 2014-2019. The draft plan was issued in February 2014 and outlines our proposed goals and strategies for supporting Congress's top priorities. Our strategic plan framework (Appendix III) summarizes the global trends, as well as the strategic goals and objectives, that guide our work. GAO's strategic goals and objectives are shown in Figure 1. The draft strategic plan also summarizes the trends shaping the United States and its place in the world: U.S. National Security Interests; Fiscal Sustainability and Challenges; Global Interdependence and Multinational Cooperation; Science and Technology; Communication Networks and Information Technology; Shifting Roles in Governance and Government; and Demographic and Societal Changes. The plan reflects the areas of work we plan to undertake, including science and technology, weapons systems, the environment, and energy. We also will increase collaboration with other national audit offices to better understand global issues that directly affect the United States, including international financial markets, food safety, and medical and pharmaceutical products. In the upcoming decade, for example, the United States will face demographic changes that will have significant fiscal impacts on both the federal budget and the economy.
The number of baby boomers turning 65 is projected to grow from an average of about 7,600 per day in 2011 to more than 11,600 per day in 2025, driving spending for major health and retirement programs. To ensure the updated strategic plan reflects the needs of Congress and the nation, we have solicited comments from stakeholders in addition to Congress, including GAO advisory entities, the Congressional Budget Office, and the Congressional Research Service.

To manage our congressional workload, we continue to take steps to ensure our work supports congressional legislative and oversight priorities and focuses on areas with the greatest potential for results, such as cost savings and improved government performance. Ways that we actively work with congressional committees in advance of new statutory mandates include: (1) identifying mandates in real time as bills are introduced; (2) participating in ongoing discussions with congressional staff; and (3) collaborating to ensure that the work is properly scoped and is consistent with the committee's highest priorities. In FY 2013, 35 percent of our audit resources were devoted to mandates and 61 percent to congressional requests. I have met with the chairs and ranking members of many of the standing committees and their subcommittees to hear firsthand feedback on our performance, as well as to highlight the need to prioritize requests for our services to maximize the return on investment. GAO also appreciates Congress's assistance in repealing or revising statutory mandates that are either outdated or need revision; this helps streamline GAO's workload and ensures we are better able to meet current congressional priorities. During the second session of the 112th Congress, based on our input, 16 of GAO's mandated reporting requirements were revised or repealed because over time they had lost relevance or usefulness.
In addition, GAO worked with responsible committees to have 6 more mandates repealed or revised as part of the 2014 National Defense Authorization Act. GAO has identified 11 additional mandates for revision or repeal and is currently working with the appropriate committees to implement these changes. For example, our request includes language to repeal a requirement for GAO to conduct bimonthly reviews of state and local use of Recovery Act funds. As the vast majority of Recovery Act funds have been spent, GAO's reviews in this area are providing diminishing returns for Congress.

GAO is seeking authority to establish a Center for Audit Excellence to improve domestic and international auditing capabilities. The Center also will provide an important tool for promoting good governance, transparency, and accountability. There is a worldwide demand for an organization with GAO's expertise and stature to assume a greater leadership role in developing institutional capacity in other audit offices and to provide training and technical assistance throughout the domestic and international auditing communities. The proposed Center would operate on a fee basis, generating revenue to sustain its ongoing operation, including the cost of personnel and instructors. The Center would be staffed primarily with retired GAO and other auditors and thus would not detract from the service GAO provides to Congress. In a similar vein, to provide staff from other federal agencies with developmental experiences, GAO is requesting authority to accept, on a nonreimbursable basis, staff from other agencies who can learn about GAO's work. This would allow people to develop expertise and gain experience that will enhance their work at their own agencies. We take great pride in reporting that we continue to be recognized as an employer of choice, and we have been consistently ranked near the top on "best places to work" lists.
In 2013, we ranked third overall among mid-sized federal agencies on the Partnership for Public Service's "Best Places to Work" list and again ranked number one in our support of diversity. Also, in November 2013, Washingtonian Magazine named us one of the "50 Great Places to Work" in the Washington, D.C., region among public and private entities. In addition, earlier this year, O.C. Tanner, a company that develops employee recognition programs, cited us in its article "Top 10 Coolest Companies to Work for in Washington, D.C." Our management continues to work with our union (IFPTE, Local 1921), the Employee Advisory Council, and the Diversity Advisory Council to make GAO a preferred place to work.

GAO's FY 2015 budget request will preserve staff capacity and continue critical infrastructure investments. Offsetting receipts and reimbursements, primarily from program and financial audits and rental income, totaling $30.9 million are expected in FY 2015. The requested resources provide the funds necessary to ensure that GAO can meet the highest priority needs of Congress and produce results to help the federal government deal effectively with its serious fiscal and other challenges. A summary of GAO's appropriations for our FY 2010 baseline and FYs 2013 to 2015 is shown in Figure 2. The requested funding supports a staffing level of 2,945 FTEs and provides funding for mandatory pay costs, staff recognition and benefits programs, and activities to support congressional engagements and operations. These funds are essential to ensure GAO can address succession planning challenges, provide staff meaningful benefits and appropriate resources, and compete with other agencies, nonprofit institutions, and private firms that offer these benefits to the talent GAO seeks. To address the priorities of Congress, GAO needs a talented, diverse, high-performing, knowledgeable workforce.
However, a significant proportion of our employees are currently retirement eligible, including 34 percent of our executive leadership and 21 percent of our supervisory analysts. Therefore, workforce and succession planning remain a priority for GAO. Moreover, for the first time in several years, our budget allows us to replenish the much-needed pipeline of entry-level and experienced analysts to meet future workload challenges. In FY 2014, through a targeted recruiting strategy, GAO plans to hire entry-level staff and student interns, boosting our staff capacity for the first time in 3 years to 2,945 FTEs. This will allow GAO to reverse the downward trend in our FTEs, achieve some progress toward our optimal staffing level of 3,250 FTEs, and develop a talent pool for the future. Our FY 2015 budget request seeks funding to maintain the 2,945 FTE level. In FY 2015, pending final OPM guidance, we also plan to implement a phased retirement program to encourage retirement-eligible staff to remain with GAO and assist in mentoring and sharing knowledge with staff.

Efforts to address challenges related to GAO's internal operations primarily concern our engagement efficiency, information technology, and building infrastructure needs. To better serve Congress and the public, we expanded our presence in digital and social media, releasing GAO iPhone and Android applications and launching streaming video web chats with the public. During the past year, 7,600 additional people began receiving our reports and legal decisions through our Twitter feed; more than 26,600 people now get our reports, testimonies, and legal decisions daily on Twitter. GAO remains focused on improving the efficiency of our engagements by streamlining or standardizing processes without sacrificing quality. In FYs 2012 and 2013, we continued our improvements in this area.
For example, with active involvement from GAO's managing directors, we identified changes to key steps and decision points in our engagement process and now have a revised engagement process that we began implementing on a pilot basis in January 2014. We also piloted and revised a tool to help teams better estimate expected staff days required for engagements. In FY 2014, we plan to implement a series of process changes that will transform the management of engagements, the use of resources, and message communication.

More Efficient Content Creation, Review, and Publication

GAO will strive to dramatically improve the efficiency of our content creation and management processes by standardizing, automating, and streamlining the currently cumbersome and manually intensive processes for creating, fact-checking, and publishing GAO products. In FY 2014, we plan to request proposals to acquire a technical solution and phase implementation in FYs 2014 and 2015. The proposed system will automate document routing and approvals, incorporate management and quality assurance steps, and generate required documentation. To ensure our message is available to both our clients and the public, the proposed system will also enable GAO to routinely publish content on GAO.gov, GAO's mobile site, and various social media platforms.

Greater Transparency of Engagement Information

To promote transparency, increase management capabilities, and reduce duplicate data entry and costs, in FY 2014 GAO will begin implementing a modernized, one-stop engagement management system. This system automates key business rules and decision points, improves resource management, eliminates rework, and provides increased visibility for all participants. In FY 2015, we will retire legacy databases as the new system becomes fully operational.
The FY 2015 budget also provides funds to maintain our information technology (IT) systems, which are a critical element in our goal to maintain efficient and effective business operations and to provide the data needed to inform timely management decisions. Improvements to our aging IT infrastructure will allow GAO to further streamline business operations, reduce redundant efforts, increase staff efficiency and productivity, improve access to information, and enhance our technology infrastructure to support an array of engagement management, human capital, and financial management systems. GAO also plans to continue upgrading aging building systems to ensure more efficient operations and security. To support these requirements, our FY 2015 budget request includes resources to: begin upgrading the heating, ventilation, and air conditioning system to increase energy efficiency and reliability; repair items identified in our long-range asset management plan, such as the water heater, chiller plant, and cooling fans; enhance continuity planning and emergency preparedness; and address bomb blast impact mitigation efforts.

In conclusion, GAO values the opportunity to provide Congress and the nation with timely, insightful analysis. The FY 2015 budget requests the resources to ensure that we can continue to address the highest priorities of Congress. Our request seeks an increase to maintain our staffing level and provide employees with the appropriate resources and support needed to effectively serve Congress. The funding level will also allow us to continue efforts to promote operational efficiency and begin addressing long-deferred investments and maintenance. This concludes my prepared statement. I appreciate, as always, your continued support and careful consideration of our budget. I look forward to discussing our FY 2015 request with you.
Limiting the Federal Government's Fiscal Exposure by Better Managing Climate Change Risks (new)
Management of Federal Oil and Gas Resources
Modernizing the U.S. Financial Regulatory System and Federal Role in Housing Finance
Restructuring the U.S. Postal Service to Achieve Sustainable Financial Viability
Funding the Nation's Surface Transportation System
Strategic Human Capital Management
Transforming DOD Program Management
DOD Approach to Business Transformation
DOD Business Systems Modernization
DOD Support Infrastructure Management
DOD Financial Management
DOD Supply Chain Management
DOD Weapon Systems Acquisition
Ensuring Public Safety and Security
Mitigating Gaps in Weather Satellite Data (new)

Appendix III: GAO's Strategic Plan Framework

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO's mission is to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the benefit of the American people. GAO provides nonpartisan, objective, and reliable information to Congress, federal agencies, and the public, and recommends improvements, when appropriate, across the full breadth and scope of the federal government's responsibilities. GAO's work supports a broad range of interests throughout Congress. In FY 2013, GAO received requests for our work from 95 percent of the standing committees of Congress and almost two-thirds of their subcommittees. Additionally, senior GAO officials testified at 114 hearings on national and international issues before 60 committees and subcommittees that touch on virtually all major federal agencies. GAO remains one of the best investments in the federal government, and GAO's dedicated staff continues to deliver high-quality results. In FY 2013 alone, GAO's work yielded $51.5 billion in financial benefits--a return of about $100 for every dollar invested in GAO. Since FY 2003, GAO's work has resulted in: over one-half trillion dollars in financial benefits; and about 14,500 program and operational benefits that helped to change laws, improve public services, and promote sound management throughout government.

GAO is requesting a budget of $525.1 million to preserve its staff capacity and continue critical information technology and building infrastructure investments. GAO's fiscal year (FY) 2015 budget request of $525.1 million seeks an increase of 3.9 percent to maintain staff capacity as well as continue necessary maintenance and improvements to our information technology (IT) and building infrastructure. Additionally, receipts and reimbursements, primarily from program and financial audits and rental income, totaling $30.9 million are expected in FY 2015.
GAO recently issued our draft Strategic Plan for Serving Congress in FYs 2014-2019. The plan outlines our proposed goals and strategies for supporting Congress's top priorities. I also have met with the chairs and ranking members of many of the standing committees and their subcommittees to hear firsthand feedback on our performance, as well as to prioritize requests for our services to maximize the return on investment.

To address congressional priorities and fulfill GAO's mission, a talented, diverse, high-performing, knowledgeable workforce is essential. Workforce and succession planning remain a priority for GAO. A significant proportion of our employees are currently retirement eligible, including 34 percent of our executive leadership and 21 percent of our supervisory analysts. In 2014, through a targeted recruiting strategy to address critical skills gaps, GAO plans to boost our employment level for the first time in 3 years to 2,945 full-time equivalents (FTEs). The requested FY 2015 funding level will preserve the strides planned for FY 2014 to increase our staff capacity. In conjunction with ongoing recruiting efforts and planning, we will revive our intern program and hire and train an increased number of entry-level employees. This will reverse the downward staffing trajectory, develop a talented cadre of analysts and leaders for the future, achieve progress toward an optimal level of 3,250 FTEs, and assist GAO in meeting the high-priority needs of Congress. We also take great pride in reporting that we continue to be recognized as an employer of choice and have been consistently ranked near the top on "best places to work" lists. Improvements to our aging IT infrastructure will allow GAO to further streamline business operations, increase staff efficiency and productivity, and improve access to information.
Planned investments in IT will address deferred upgrades and enhance our technology infrastructure to support an array of engagement management, human capital, and financial management systems. We also plan to continue upgrading aging building systems to ensure more efficient operations and security. Areas of focus include increasing the energy efficiency and reliability of the heating, ventilation, and air conditioning system; enhancing continuity planning and emergency preparedness capabilities; and addressing bomb blast impact mitigation efforts.
To assist New York in recovering from the September 11, 2001, terrorist attacks, Congress passed Public Law 107-147, the Job Creation and Worker Assistance Act of 2002. The act was signed into law on March 9, 2002, and created seven tax benefits that focus on the New York Liberty Zone. The Liberty Zone tax benefits include treating employees in the Liberty Zone as a targeted group for purposes of the work opportunity tax credit (WOTC), which IRS refers to as the business employee credit; a special depreciation allowance; an increase in section 179 expensing; special treatment of leasehold improvement property; an extension of the replacement period for involuntarily converted property; authority to issue tax-exempt private activity bonds; and authority to issue advance refunding bonds. An explanation of each benefit, an example of how it can be used, and the period each benefit is in effect are included in appendix II. Under the Congressional Budget Act of 1974, as amended, JCT provides estimates of the revenue consequences of tax legislation. In March 2002, JCT estimated that the New York Liberty Zone tax benefits would reduce federal revenues by $5.029 billion over the period 2002 through 2012. For one of the seven Liberty Zone tax benefits, the business employee credit, IRS is collecting but not planning to report some information about use--the number of taxpayers claiming the credit and the amount of credit claimed--nor is it planning to use this information to report on how the benefit has reduced taxpayers' tax liabilities. IRS is not planning to collect or report information about the use of the other six benefits or how using these benefits has reduced taxpayers' tax liabilities. IRS collects information on how many taxpayers use the business employee credit and the amount of the credit claimed on Form 8884 (New York Liberty Zone Business Employee Credit).
Submission processing officials in the Small Business/Self-Employed (SB/SE) Division began entering information from this form into IRS's computer system in January 2003. Some taxpayers claiming the business employee credit may have their returns processed by the Wage and Investment (W&I) Division, which is not planning to enter information from the form into the computer system. However, IRS officials said that the bulk of the taxpayers who would claim this credit would submit their returns to the SB/SE Division. IRS can collect information on the use of the business employee credit because it developed a new form to administer this credit. Although the business employee credit was included in the WOTC provisions, IRS officials said they needed to track business employee credits separately because the business employee credit can be used to offset any alternative minimum taxes owed but the general WOTC provisions cannot. IRS currently cannot collect information on the remaining six Liberty Zone benefits because it is using existing forms to administer them, and taxpayers do not report these six benefits as separate items on their returns. For example, taxpayers add the amount of depreciation they are allowed under the Liberty Zone special depreciation allowance benefit to other depreciation expenses and report their total depreciation expenses on their returns. Since taxpayers do not report their use of six of the seven benefits separately on their returns, IRS cannot report on how extensively these six benefits were used. IRS officials said that although they are collecting information on the amount of business employee credits claimed by taxpayers, they are not planning on reporting information on the extent to which the benefit reduced taxpayers' tax liabilities. For the other six benefits, IRS officials said that without information about use, they cannot collect or report on the extent to which the benefits reduced taxpayers' tax liabilities. 
According to IRS officials, the agency followed its usual procedures in determining the type of information to collect about the Liberty Zone tax benefits. They added that IRS would collect and report information that would help it to administer the tax laws or if it was legislatively mandated to collect or report information. IRS officials said they do not need information about the use of the Liberty Zone tax benefits or the resulting reductions in taxpayers' tax liabilities in order to administer the tax laws. For example, IRS officials said that they do not need information on each specific benefit claimed to properly target their enforcement efforts. Instead, they target their enforcement efforts based on taxpayers claiming various credits, deductions, and so forth that fall outside of expected amounts. In addition, IRS officials noted that the agency has not been legislatively mandated to collect or report information on the benefits. IRS would need to make several changes if it were to collect more information on taxpayers' use of the benefits and their effect on reducing taxpayers' tax liabilities. IRS would need to change forms used to collect information from taxpayers, change how it processes information from tax returns, and revise computer programming, which would add to taxpayer burden and IRS's workload. Even if it were to make these changes, IRS would not have information for two of the years the benefits were available. Also, although the additional information would enable IRS to make an estimate of the revenue loss due to the benefits, it would not be able to produce a verifiable measure of the loss. To produce the estimate, IRS would have to make assumptions about how taxpayers would have behaved in the absence of the benefits. 
For six of seven of the Liberty Zone tax benefits, IRS would need to revise forms, tax return processing procedures, and computer programming if it were to collect and report information about the number of taxpayers claiming the benefit and the amount they claimed. It would also need to take most of these steps to report on the use of the seventh benefit--the business employee credit. According to IRS officials, they would need to make staff available to revise forms, review returns for completeness and accuracy, transcribe the additional data, and write the necessary computer programs for entering and extracting data. They would also need to allocate computer resources to process the additional information collected and prepare reports on the use of the benefits. For example, for the special depreciation allowance benefit, IRS would need to revise Form 4562 (Depreciation and Amortization) so that taxpayers reported the amount of depreciation they claimed specifically due to this benefit, tax return processing procedures so that processing staff reviewed Form 4562 for completeness and accuracy and transcribed information about the special depreciation allowance, and computer programming so that information about the special depreciation allowance could be entered into IRS's information systems and extracted in order to prepare reports about the use of the benefit. For the seventh benefit--the business employee credit--taxpayers already separately report the amount of the credit they are claiming, and IRS is already reviewing these forms for accuracy and completeness, transcribing data from them, and entering this information into the agency's computer system for those returns that are processed by the SB/SE Division. However, computer programming would need to be changed to extract information to prepare reports about benefit use. For any returns processed by the W&I Division, IRS would also need to revise W&I processing procedures and computer programming. 
Since IRS currently does not have any plans to make these changes, officials were unable to estimate the costs involved in accomplishing these actions or the number of staff needed to do so. However, IRS officials estimated they added one full-time equivalent (FTE) primarily to review the Form 8884s for completeness and accuracy and for data transcription--part of the process to collect information about the use of the business employee credit. If IRS collected information about the use of the benefits, IRS could then develop some information on the reduction in taxpayers' tax liabilities due to the benefits. For example, IRS could determine how much lower each taxpayer's tax liability is due to the use of the tax benefits, assuming that taxpayer behavior would be the same whether the benefits existed or not. Table 1 is an example of such a computation for claiming the Liberty Zone section 179 expensing benefit. In this example, a taxpayer with $100,000 in income bought $40,000 worth of office equipment in 2002 and placed this equipment in service in the Liberty Zone in 2002. After applying the Liberty Zone section 179 expensing benefit, taxable income would be $60,000. Since the equipment has been completely expensed, the taxpayer cannot claim any further deductions for this equipment. To recalculate the taxpayer's taxable income as if the special Liberty Zone expensing benefit did not exist, IRS could assume that the taxpayer would make the same investment, even without the Liberty Zone tax benefit, and still claim the $24,000 section 179 deduction available to all taxpayers in 2002 and any other available deductions, such as the special depreciation allowance. In our example, the special depreciation allowance would be worth $4,800, and the amount otherwise available as a depreciation deduction (regular depreciation) would be worth $1,600, which would reduce the taxpayer's taxable income to $69,600. The total reduction in taxable income would be $9,600.
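Purely as an illustration, the table 1 recalculation described above can be expressed as a few lines of arithmetic. The sketch below uses the report's example figures; the variable names are ours, and the fixed $1,600 regular depreciation figure is taken directly from the report's illustration rather than computed from depreciation schedules:

```python
# Sketch of the report's table 1 example: the reduction in taxable income
# from the Liberty Zone section 179 expensing benefit, assuming the taxpayer
# would have made the same $40,000 investment either way.
income = 100_000
equipment_cost = 40_000

# With the benefit: the full cost is expensed under the $59,000
# Liberty Zone section 179 limit for 2002.
taxable_with_benefit = income - equipment_cost  # $60,000

# Without the benefit: the general $24,000 section 179 deduction, plus the
# 30 percent special depreciation allowance on the remaining basis, plus
# first-year regular depreciation ($1,600 in the report's example).
sec179_general = 24_000
special_allowance = 0.30 * (equipment_cost - sec179_general)  # $4,800
regular_depreciation = 1_600
taxable_without_benefit = income - (
    sec179_general + special_allowance + regular_depreciation
)  # $69,600

reduction_in_taxable_income = taxable_without_benefit - taxable_with_benefit  # $9,600
```

Any such computation still rests on the behavioral assumption the report flags: that the investment would have been made even without the benefit.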
Once all the adjustments to taxable income were made, IRS would then need to apply the appropriate marginal tax rate to arrive at the taxpayer's recalculated tax liability. If IRS were to begin collecting information on the number of taxpayers using the Liberty Zone tax benefits and the amounts they claimed, the information would not be complete. In addition, although the information would enable IRS to make an estimate of the revenue loss due to the benefits, the information would not result in a verifiable measure of the loss. To produce the estimate, IRS would have to make assumptions about how taxpayers would have behaved in the absence of the benefits. IRS said the earliest it would be able to collect information on the number of taxpayers using the benefits and the amounts each claimed would be for tax year 2004 returns, which IRS would not process until calendar year 2005. As a result, IRS would not have information for two of the years that the benefits were in effect, which is significant because most of the benefits expire by the end of 2006. IRS could not reconstruct information on tax liability for those 2 years because returns already filed would not indicate whether taxpayers used the Liberty Zone benefits and would not show the amount claimed through benefit use. Although IRS could ask for information about past benefit use since taxpayers are instructed to keep tax records for 3 years, this would require taxpayers to provide additional information and increase taxpayer burden. Also, it would be difficult for IRS to use current year information to estimate the amount claimed through benefit use retroactively because the pattern of using the benefits could have changed over time. In addition to not being complete, the data that IRS could collect on the number of taxpayers using the Liberty Zone benefits and the amounts each claimed would not be sufficient for actually measuring how much revenue those benefits cost the federal government. 
The reduction in revenues due to the Liberty Zone tax benefits is equal to the difference between the amount of revenue that the federal government would collect with the benefits in place and the amount it would collect in the absence of those benefits. There are two reasons why revenues would be different with and without the benefits. First, the rules for computing tax liabilities are different in the two cases (as shown in table 1). Second, the behavior of many taxpayers is likely to be different in the two cases. In fact, a primary purpose of the tax benefits is to influence taxpayer behavior. For example, in the case of the Liberty Zone section 179 benefit, some taxpayers who claim this benefit would have made different investment decisions if that particular benefit were not available. In our simplified example shown in table 1, this difference in behavior might be that the taxpayer invested less than $40,000 in office equipment--perhaps even nothing--because the Liberty Zone benefit did not exist. As a consequence, the taxpayer's taxable income would have been different than the $69,600 shown in table 1. Given that IRS cannot know what taxpayers would have done in the absence of the benefits, the best it could do is estimate revenue losses based on assumptions about that alternative behavior. The Commissioner of Internal Revenue was provided a draft of this report for his review and comment. The IRS Director of Tax Administration Coordination agreed with the contents of the report. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days from its date. 
At that time, we will send copies to the Chairman and Ranking Minority Member of the Senate Committee on Finance; the Chairman of the House Committee on Ways and Means and the Chairman and Ranking Minority Member of its Subcommittee on Oversight; the Secretary of the Treasury; the Commissioner of Internal Revenue; the Director of the Office of Management and Budget; and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. This report was prepared under the direction of Jonda Van Pelt, Assistant Director. If you have any questions regarding this report, please contact her at (415) 904-2186 or [email protected] or me at (202) 512-9110 or [email protected]. Key contributors to this report were Evan Gilman, Edward Nannenhorn, Lynne Schoenauer, Shellee Soliday, Anne Stevens, and James Wozny. Our first objective was to determine the extent to which the Internal Revenue Service (IRS) is collecting and reporting information about the use and value of the seven Liberty Zone tax benefits. We defined use as the number of taxpayers who claimed each benefit and the amount each claimed. In analyzing value, we examined what information IRS could provide about reductions in taxpayers' tax liabilities when they used the Liberty Zone tax benefits, and then examined whether this information could be used to measure the actual reduction in federal tax revenues. To address the first objective, we interviewed IRS officials from Legal Counsel, the Wage and Investment (W&I) Division's and the Small Business/Self-Employed (SB/SE) Division's submission processing groups, Statistics of Income (SOI), Forms and Publications, and the Tax Exempt Government Entities (TEGE) Division to determine if they were collecting and reporting any information about the use of the Liberty Zone tax benefits and how the benefits reduced taxpayers' tax liabilities. 
We analyzed the documents they provided about collecting and reporting on the use of the benefits and the reduction in taxpayers' tax liabilities. We also analyzed the data the Joint Committee on Taxation (JCT) provided about its estimate of the reduction in federal tax revenues. Finally, we interviewed New York City and state officials to determine if they were collecting and reporting information on the benefits. Our second objective was to determine what steps IRS would need to take and the resources it would need to collect and report information on the use and value of the Liberty Zone tax benefits if it is not already doing so. We used the same definition of use and value as we used for the first objective. To address the second objective, we interviewed IRS officials from Legal Counsel, the W&I Division's and the SB/SE Division's submission processing groups, SOI, Forms and Publications, and the TEGE Division to determine what steps they would need to take and the resources they would need to collect and report information on the use of the Liberty Zone tax benefits and the reduction in taxpayers' tax liabilities if they used the benefits. We also analyzed IRS documents related to the steps that would need to be taken to collect and report on the use of the benefits and on the reduction in taxpayers' tax liabilities. We performed our work from April 2003 through August 2003 in accordance with generally accepted government auditing standards. The work opportunity tax credit (WOTC) was expanded to include a new targeted group for employees who perform substantially all their services for a business in the Liberty Zone, or for a business that relocated from the Liberty Zone elsewhere within New York City due to the physical destruction or damage of its workplace by the September 11, 2001, terrorist attacks.
The New York Liberty Zone business employee credit allows eligible businesses with an average of 200 or fewer employees to take a maximum credit of 40 percent of the first $6,000 in wages paid or incurred for work performed by each qualified employee during calendar years 2002 and 2003. Unlike the other targeted groups under WOTC, the credit for the new group is available for wages paid to both new hires and existing employees. For example, a company hires a qualified employee in 2002 who receives $3,000 in wages a month. The company can claim a credit for 40 percent of the first $6,000 in wages paid ($2,400). The special depreciation allowance provides an additional deduction for eligible properties. Eligible Liberty Zone properties include new tangible property (e.g., new office equipment); used tangible property (e.g., used office equipment); and residential rental property (e.g., an apartment complex) and nonresidential real property (e.g., an office building) if the property rehabilitates real property damaged, or replaces real property destroyed or condemned, as a result of the September 11, 2001, terrorist attacks. On December 1, 2002, a real estate development firm purchases an office building in the New York Liberty Zone that costs $10 million and places it in service on June 1, 2003. The building replaces real property damaged as a result of the September 11, 2001, terrorist attacks. Under the provision, the taxpayer is allowed an additional first-year depreciation deduction of 30 percent ($3 million). For property inside the Liberty Zone, the special depreciation allowance allows taxpayers to deduct 30 percent of the adjusted basis of qualified property acquired by purchase after September 10, 2001, and placed in service on or before December 31, 2006 (December 31, 2009, in the case of nonresidential real property and residential rental property).
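The business employee credit arithmetic described above (40 percent of the first $6,000 in qualified wages per employee) reduces to a one-line calculation. The sketch below is illustrative only; the helper name is ours, not an IRS form or system:

```python
def liberty_zone_employee_credit(qualified_wages):
    """Business employee credit as described in the report:
    40 percent of the first $6,000 in qualified wages per employee."""
    return 0.40 * min(qualified_wages, 6_000)

# An employee paid $3,000 a month reaches the $6,000 wage cap in two months,
# so a full year of wages still yields the maximum $2,400 credit.
credit = liberty_zone_employee_credit(12 * 3_000)  # $2,400
```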
For property outside the Liberty Zone, a special depreciation allowance is available for taxpayers but only with regard to qualified property--such as new tangible property and non-Liberty Zone leasehold improvement property--that is acquired after September 10, 2001, and before September 11, 2004, and is placed in service on or before December 31, 2004. However, recent legislation (the Jobs and Growth Tax Relief Reconciliation Act of 2003, Pub. L. No. 108-27) has increased the deduction to 50 percent for qualified property both within and outside the Liberty Zone that is acquired after May 5, 2003, and placed in service on or before December 31, 2004. Taxpayers with a sufficiently small investment in qualified section 179 business property in the Liberty Zone can elect to deduct rather than capitalize the amount of their investment and are eligible for an increased amount over other taxpayers. For qualified Liberty Zone property placed in service during 2001 and 2002, under section 179 taxpayers could deduct up to $59,000 ($24,000 under the general provision plus an additional $35,000) of the cost. The investment limit (phase-out range) in the property was $200,000. For qualified Liberty Zone property placed in service after 2002 and before 2007, taxpayers could deduct $60,000 ($25,000 under the general provision plus the additional $35,000) of the cost. In 2002, a taxpayer purchases and places in service in his or her Liberty Zone business several qualified items of equipment costing a total of $260,000. Because 50 percent of the cost of the property ($130,000) is less than $200,000, the investment limit, the section 179 deduction of $59,000 is not reduced, and the taxpayer can deduct this amount. However, recent legislation (Pub. L. No. 108-27) has further increased the maximum deduction for qualified Liberty Zone property placed in service after 2002 and before 2006 to $135,000 and has increased the investment limit to $400,000. 
For 2006, the maximum section 179 deduction allowed for qualified Liberty Zone property returns to $60,000, and the investment limit is $200,000. To calculate the available expensing treatment deduction amount for qualified Liberty Zone property, every dollar by which 50 percent of the cost of the property exceeds the investment limit is subtracted from the maximum deduction allowed. Taxpayers outside of the Liberty Zone may also expense qualified property under section 179. However, the maximum deduction for non-Liberty Zone property is $35,000 less than the maximum deduction allowed for Liberty Zone property. The investment limits for Liberty Zone and non-Liberty Zone property are similar. However, in contrast, in calculating the available expensing treatment deduction amount for non-Liberty Zone properties, every dollar invested in the property that exceeds the investment limit is subtracted from the maximum deduction allowed. Qualified Liberty Zone leasehold improvement property can be depreciated over a 5-year period using the straight-line method of depreciation. The term "qualified Liberty Zone leasehold improvement property" means property as defined in section 168(k)(3) and may include items such as additional walls and plumbing and electrical improvements made to an interior portion of a building that is nonresidential real property. Qualified Liberty Zone leasehold improvements must be placed in service in a nonresidential building that is located in the Liberty Zone after September 10, 2001, and on or before December 31, 2006. The class life for qualified New York Liberty Zone leasehold improvement property is 9 years for purposes of the alternative depreciation system. Taxpayers can also depreciate leasehold improvements outside of the Liberty Zone. These taxpayers can depreciate an addition or improvement to leased nonresidential real property using the straight-line method of depreciation over 39 years.
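The two phase-out rules above differ only in whether 50 percent or 100 percent of the property's cost is compared against the investment limit. A minimal sketch under those stated rules, using the 2002 figures ($59,000 Liberty Zone maximum, $24,000 general maximum, $200,000 limit), might look like the following; the function names are illustrative:

```python
def liberty_zone_sec179(total_cost, max_deduction=59_000, investment_limit=200_000):
    # Liberty Zone rule: reduce the maximum deduction by every dollar by which
    # 50 percent of the property's cost exceeds the investment limit.
    excess = max(0, 0.5 * total_cost - investment_limit)
    return max(0, max_deduction - excess)

def general_sec179(total_cost, max_deduction=24_000, investment_limit=200_000):
    # General (non-Liberty Zone) rule: reduce the maximum deduction by every
    # dollar of cost that exceeds the investment limit.
    excess = max(0, total_cost - investment_limit)
    return max(0, max_deduction - excess)

# The report's example: $260,000 of qualified Liberty Zone equipment in 2002.
# 50 percent of the cost ($130,000) is below the $200,000 limit, so the full
# $59,000 deduction is available.
deduction = liberty_zone_sec179(260_000)  # $59,000
```

Note that the same $260,000 purchase outside the Liberty Zone would fully phase out the general $24,000 deduction, since the whole cost, not half of it, counts against the limit.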
Qualified leasehold improvement properties outside the Liberty Zone can qualify for both the 39-year depreciation deduction and the special depreciation allowance. However, leasehold improvements inside the Liberty Zone do not qualify for the special depreciation allowance. A taxpayer may elect not to recognize gain with respect to property that is involuntarily converted if the taxpayer acquires qualified replacement property within an applicable period. The replacement period for property that was involuntarily converted in the Liberty Zone as a result of the September 11, 2001, terrorist attacks is 5 years after the end of the taxable year in which a gain is realized, provided that substantially all of the use of the replacement property is in New York City. The involuntarily converted Liberty Zone property can be replaced with any tangible property held for productive use in a trade or business because taxpayers in presidentially declared disaster areas such as the Liberty Zone can use any tangible, productive use property to replace property that was involuntarily converted. Outside of the Liberty Zone, the replacement period for involuntarily converted property is 2 years (3 years if the converted property is real property held for productive use in a trade or business or for investment), and the converted property must be replaced with replacement property that is similar in service or use. For example, a taxpayer used a truck in his Liberty Zone business, but it was destroyed in the September 11, 2001, terrorist attacks. Several years ago, the taxpayer paid $50,000 for the truck and, over time, depreciated the basis in the truck to $30,000. If the insurance company paid $35,000 in reimbursement for the truck and the taxpayer used the $35,000 to purchase replacement property of any type that is held for productive use in a trade or business within 5 years after the close of the tax year of payment by the insurance company, the taxpayer would not recognize a gain.
An aggregate of $8 billion of tax-exempt private activity bonds, called qualified New York Liberty bonds, are authorized to finance the acquisition, construction, reconstruction, and renovation of certain property that is primarily located in the Liberty Zone. Qualified New York Liberty bonds must finance nonresidential real property, residential rental property, or public utility property and must also satisfy certain other requirements. The benefit is effective for bonds issued after March 9, 2002 (the date of enactment of the Job Creation and Worker Assistance Act of 2002), and on or before December 31, 2004. The Mayor of New York City and the Governor of New York State may each designate up to $4 billion in qualified New York Liberty bonds. For example, the Mayor of New York City designates $120 million of qualified New York Liberty bonds to finance the construction of an office building in the Liberty Zone. An aggregate of $9 billion of advance refunding bonds may be issued to pay principal, interest, or redemption price on certain prior issues of bonds issued for facilities located in New York City (and certain water facilities located outside of New York City). Under this benefit, certain qualified bonds, which were outstanding on September 11, 2001, and had exhausted existing advance refunding authority before September 12, 2001, are eligible for one additional advance refunding. This benefit is effective for advance refunding bonds issued after March 9, 2002, and on or before December 31, 2004. The Mayor of New York City and the Governor of New York State may each designate up to $4.5 billion in advance refunding bonds. For example, the Governor of New York State designates $70 million of advance refunding bonds to refinance bonds that financed the construction of hospital facilities in New York City. The Liberty Zone tax benefits were enacted as part of the Job Creation and Worker Assistance Act of 2002, Pub. L. No. 107-147.
The President pledged a minimum of $20 billion in assistance to New York for response and recovery efforts after the September 11, 2001, terrorist attacks. This includes tax benefits, commonly referred to as the Liberty Zone tax benefits, that the Joint Committee on Taxation (JCT) estimated would reduce federal tax revenues by about $5 billion. The actual amount of benefits realized, however, will depend on the extent to which taxpayers and the city and state of New York take advantage of them. GAO was asked to determine (1) the extent to which the Internal Revenue Service (IRS) is collecting and reporting information about the number of taxpayers using each of the seven Liberty Zone tax benefits and the revenue loss associated with those benefits and (2) if IRS is not collecting and reporting this information, what steps it would need to take and what resources would be needed to do so. For one of the seven Liberty Zone tax benefits, the business employee credit, IRS is collecting but not planning to report some information about use--the number of taxpayers claiming the credit and the amount of credit claimed--nor is it planning to use this information to report the revenue loss associated with that benefit. IRS is not planning to collect or report information about the use of the other six benefits or the revenue loss associated with those benefits. According to IRS officials, the agency followed its usual procedures in determining whether to collect information about benefit use and revenue loss. IRS officials said they would collect and report these data if (1) it would help the agency administer the tax laws or (2) IRS was legislatively mandated to do so. IRS would need to make several changes if it were to collect more information on the use of the benefits and the associated revenue loss, and this information would not be complete or lead to a verifiable measure of the reduction in federal tax revenues due to the benefits. 
IRS would need to change forms, processing procedures, and computer programming, which would add to taxpayer burden and IRS's workload. IRS officials were unable to estimate the costs involved in accomplishing these actions or the number of staff needed to do so. The officials said that the earliest they could make these changes would be for tax year 2004 returns. As a result, IRS would not have information for two of the years that the benefits were in effect, which is significant because most of the benefits expire by the end of 2006. In addition, if IRS were to collect data on the use of the Liberty Zone benefits, it would be able to make an estimate, but could not produce a verifiable measure, of the revenue loss due to the benefits because, for example, IRS would have to make assumptions about how taxpayers would have behaved in the absence of the benefits.
HUD is the principal government agency responsible for programs dealing with housing, community development, and fair housing opportunities. HUD's missions include making housing affordable through FHA's mortgage insurance for multifamily housing and the provision of rental assistance for about 4.5 million lower-income residents, helping revitalize over 4,000 localities through community development programs, and encouraging homeownership by providing mortgage insurance. HUD is one of the nation's largest financial institutions, responsible for managing more than $426 billion in mortgage insurance and $497 billion in guarantees of mortgage-backed securities, as of September 30, 1996. The agency's budget authority for fiscal year 1998 is about $24 billion. HUD's major program areas are Housing, which includes FHA insurance and project-based rental assistance programs; Community Planning and Development (CPD), which includes programs for Community Development Block Grants, empowerment zones/enterprise communities, and assistance for the homeless; Public and Indian Housing (PIH), which provides funds to help operate and modernize public housing and administers tenant-based rental assistance programs; and Fair Housing and Equal Opportunity (FHEO), which is responsible for investigating complaints and ensuring compliance with fair housing laws. HUD has been the subject of sustained criticism for weaknesses in its management and oversight abilities, which has made it vulnerable to fraud, waste, abuse, and mismanagement. In 1994, we designated HUD as a high-risk area because of four long-standing Department-wide management deficiencies: weak internal controls, inadequate information and financial management systems, an ineffective organizational structure, and an insufficient mix of staff with the proper skills.
In February 1997, we reported that HUD had formulated approaches and initiated actions to address these deficiencies but that its efforts were far from reaching fruition. HUD began a number of reform and downsizing efforts prior to the 2020 plan. In February 1993, then-Secretary Cisneros initiated a "reinvention" process in which task forces were established to review and refocus HUD's mission and identify improvements in the delivery of program services. HUD also took measures in response to the National Performance Review's September 1993 report, which recommended that HUD eliminate its regional offices, realign and consolidate its field office structure, and reduce its field workforce by 1,500 by the close of fiscal year 1999. Following a July 1994 report by the National Academy of Public Administration that criticized HUD's performance and capabilities, Secretary Cisneros issued a reinvention proposal in December 1994 that called for major reforms, including a consolidation and streamlining of HUD's programs coupled with a reduction in staff to about 7,500 by the year 2000. Secretary Cuomo initiated the 2020 planning process in early 1997 to address, among other things, HUD's needs for downsizing and correcting management deficiencies. The process included, for each major program area, (1) management reform teams that outlined each area's business and organizational structure, proposed functional changes, identified resource requirements, and allocated staff based on downsizing targets; (2) "change agent" teams that recommended consolidations and other process changes while meeting downsizing targets; and (3) review of these teams' reports by the Secretary and principal staff. Members of the management reform and change agent teams were drawn from all levels of the agency. The plan has continued to evolve since June 1997, as implementation teams proceed with their work. 
HUD's principal documents supporting the 2020 plan are management reform and change agent reports covering each of the agency's major program areas and functions. Prepared in the spring of 1997, these reports identify a number of potential efficiencies from consolidating and centralizing processes. Beyond allowing the agency to operate with a reduced workforce, other efficiencies include reducing the processing time for single-family housing insurance endorsements and multifamily housing development applications and reducing paperwork requirements for grant programs. The potential efficiencies are generally not based on detailed empirical analyses or studies, but rather on a variety of factors, including some workload data, limited results of a pilot project, identified best practices in HUD field offices, benchmarks from other organizations, and managers' and staff's experiences and judgment. In addition to increased efficiency, HUD expects the planned consolidation of functions and other process changes to result in increased effectiveness. For example, fewer public housing authorities and FHA multifamily projects may become "troubled" because staff can better focus on monitoring and improving the performance of the authorities and projects that are potentially troubled. The following sections discuss, for each of HUD's major program areas--Housing, Community Planning and Development, Public and Indian Housing, and Fair Housing and Equal Opportunity--the specific process changes proposed in the 2020 plan, the potential efficiencies and other benefits expected from the changes, and the studies or other information HUD provided as support for the changes. HUD's 2020 plan calls for significant organizational and process changes in three of FHA's primary functions--single-family housing activities, multifamily housing activities, and the FHA Comptroller's activities. 
As discussed below, the nature and detail of the studies and analyses supporting the process changes vary among the offices. Process changes proposed for single-family housing include consolidating functions, such as insurance endorsements, that were previously carried out in 81 field offices into four homeownership centers; privatizing or contracting out most property disposition activities (HUD has to dispose of FHA-insured single-family properties that it owns as a result of lenders' foreclosures on defaulted mortgages); and eliminating most loan-servicing functions by selling the inventory of HUD-held mortgages. HUD expects the reforms to permit a significant reduction in staffing requirements, reduce insurance endorsement processing time to as little as 1 day (compared with an average of about 2 weeks), improve underwriting and loss mitigation, and increase loans to targeted populations through outreach. HUD also expects the reforms to address problems such as poor control and monitoring of HUD-owned properties and inconsistent delivery of quality services. According to the Deputy Assistant Secretary for Single Family Housing, an in-house team of senior managers developed the homeownership center concept based upon the regional office structure of the Federal National Mortgage Association (Fannie Mae). Fannie Mae serves the entire United States through offices in Atlanta, Georgia; Chicago, Illinois; Dallas, Texas; Pasadena, California; and Philadelphia, Pennsylvania. Certain functions performed by FHA generally parallel some of those performed by other organizations in the single-family mortgage industry such as Fannie Mae. In 1994, as a pilot project, FHA began consolidating its single-family loan-processing operations that were performed in 17 of its field offices into the Denver Homeownership Center. 
According to HUD, the pilot showed that consolidating work at one site and increasing the use of technology could reduce insurance endorsement processing time from 2 weeks to as little as 1 day. In addition, according to the change agent report, the functions in the Denver Homeownership Center were carried out with half the staff who were responsible for the functions in the 17 field offices. Process changes in FHA's multifamily housing activities include consolidating the asset development and management functions into 18 hubs supported by staff in 33 program centers; implementing a fast-track loan development process, which allows field offices to waive certain loan-processing requirements and tailor processing options to local needs and requires lenders to order and pay for the appraisals and inspections; and consolidating financial and physical assessments of properties, enforcement, and rental assistance functions--along with similar functions in other program areas--into three nationwide centers. (The three are the Assessment Center, the Enforcement Center, and the Section 8 Financial Management Center.) Efficiencies projected from the changes, according to HUD, include (1) reducing the processing time for housing development applications from 360 days to 35 days and (2) using nonfederal experience as a model, reducing individual asset managers' average workloads from 55 projects to 35 (primarily because some functions such as inspections and enforcement actions will be handled in part by the enforcement and assessment centers). 
In addition, HUD expects the changes to address problems such as inconsistency in processing loan development applications, in terms of both time and procedures; a failure to hold mortgagees accountable, which puts HUD at greater risk; asset managers overburdened with unrelated responsibilities; the lack of an efficient system to identify, assess, and respond to troubled properties; and an inefficient and burdensome administration system for Section 8 rental assistance. Multifamily housing officials provided some empirical data for the projected efficiencies. For example, support for the reduction in asset managers' workload included some data on workloads in nonfederal organizations that perform similar functions and HUD's own workload analysis, which is based on its current inventory of properties. The nonfederal workload ratios varied from 18 to 37 projects per project manager. Multifamily housing officials allocated staffing to the field offices (hubs and centers) based, in part, upon the following ratios: 35 insured projects with subsidies per staff person, 55 insured projects without subsidies per staff person, and 16 projects per staff person for preventing the projects from becoming troubled. A HUD survey of multifamily housing field offices showed reductions in processing time and costs using the fast-track process. Anecdotal responses from 14 offices included comments such as, "The old way took 60 to 90 days, some time longer. Processing at any one stage typically takes 30 to 40 days often much shorter;" "FAST-TRACK cut staff time from 120 hours per case to 40 hours per case;" and "Estimated savings $17,000 to $20,000 per case in contracting costs." Other factors that influenced the restructuring of multifamily housing offices and functions were the experiences of cross-functional teams (staffed from different offices to assist in the handling of workload problems) and field office staff's experiences. 
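The staffing ratios cited above lend themselves to a simple workload computation. The sketch below applies them to a hypothetical office inventory; only the three ratios come from the report, and the inventory counts are invented purely for illustration:

```python
import math

# Staff-to-workload ratios cited by multifamily housing officials
# (projects per staff person).
RATIOS = {
    "insured_with_subsidy": 35,
    "insured_without_subsidy": 55,
    "troubled_prevention": 16,
}

# Illustrative inventory for a hypothetical hub -- not HUD data.
inventory = {
    "insured_with_subsidy": 700,
    "insured_without_subsidy": 550,
    "troubled_prevention": 160,
}

# Round each category up to whole staff positions, then total.
staff = sum(math.ceil(inventory[k] / RATIOS[k]) for k in RATIOS)
print(f"estimated staff: {staff}")  # 20 + 10 + 10 positions
```

Under these assumed counts the formula yields 40 positions; HUD's actual allocations also reflected supervisor/staff ratios and other factors noted in the report.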
In accordance with the 2020 plan, the FHA Comptroller has redesigned the Title I debt collection process and consolidated operations from three centers into one center (Albany, New York). In addition, the Comptroller plans to transfer routine debt collection to the Treasury Department or, if this does not prove to be feasible, to a private contractor. The process changes are being made to address two major problems: (1) the recovery processes were cumbersome and poorly integrated with other processes, such as insurance premium collection from lenders and claims examination, and (2) the resources invested were not justified by the level of assets recovered. The FHA Comptroller believes that the changes will result in increased debt collection with significantly fewer staff. The changes and benefits identified are based upon a business process redesign effort, including a workforce study, that was completed in January 1997. The process redesign showed that over a 10-year period, debt collection could increase 23 percent using fewer than half the existing number of staff. The process redesign team included a staff-level team; a management and stakeholder steering committee; and a contractor that provided consultant services. Prior to the 2020 plan, CPD consolidated the process of grantee planning and reporting for four formula grant programs and initiated a new automated system for the process. Additional changes proposed by the 2020 plan include using advanced mapping software to aid community planning, converting competitive grants providing assistance for the homeless to formula grants, and aligning resource needs and responsibilities within a new Economic Development and Empowerment Service. The reforms are meant to address problems such as fragmented approaches for solving community concerns, limited resources for managing the over 1,300 competitive grants CPD approves in a year, and limited staffing for local monitoring of programs. 
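Taken together, the redesign's two headline figures imply a substantial per-staff productivity gain, which back-of-the-envelope arithmetic makes explicit. The sketch below treats "fewer than half the existing number of staff" as exactly half, so the result is a lower bound:

```python
# Implied per-staff productivity gain from the Title I redesign figures.
collection_growth = 1.23  # debt collection rises 23 percent over 10 years
staff_fraction = 0.5      # staffing assumed at exactly half (upper bound)

# Collections per staff person relative to the pre-redesign baseline.
productivity_ratio = collection_growth / staff_fraction
print(f"per-staff recoveries rise by at least {productivity_ratio:.2f}x")
```

That is, each remaining staff member would need to recover at least about 2.5 times as much debt as before the redesign.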
From the reforms, CPD expects to (1) continue to reduce paperwork requirements; (2) improve the monitoring and review of grantees by targeting its resources to high-risk projects; and (3) reduce its workload for processing, awarding, and monitoring grant applications and grantees' activities. CPD did not provide empirical or analytical studies supporting the efficiencies expected from the reforms. CPD officials said, however, that their operations demonstrate the viability of the process changes because many of the changes are already in place and personnel reductions had occurred prior to the 2020 plan. However, the conversion of the competitive grants to formula grants requires legislation, and if this does not occur, some monitoring activities may have to be contracted out. Process changes in PIH include consolidating some of the functions previously performed in 52 public housing field offices into 27 hubs and 16 program centers; centralizing and consolidating enforcement, real estate assessment, and Section 8 payment functions into three nationwide centers along with other program areas; centralizing the management of competitive grants and public housing operating and capital funds into one PIH Grants center; centralizing applications for PIH demolition/disposition, designated housing plans, and homeownership plans into one Special Applications center; centralizing functions to improve the performance of troubled public housing authorities into two Troubled Agency Recovery centers; and deregulating (reducing monitoring and reporting requirements for) small and high-performing public housing authorities. HUD envisions that the consolidation of the field offices will even out the public housing authority workload across offices, while the specialization of functions will result in less time and fewer staff needed to carry out the functions. 
The reforms are meant to address problems such as a lack of monitoring and coordination of PIH programs, staffing imbalances among PIH field offices, and difficulty in identifying and resolving housing authorities' problems early because intensive field resources were consumed in dealing with already troubled authorities. PIH did not provide empirical data or analyses that show how the changes will produce the expected efficiencies. As discussed further in this report, PIH used workload and staffing data to redistribute the workload across its field offices. Other support for the changes, according to PIH officials, rests on managers' and staff's past experiences. Process changes in FHEO include consolidating its existing field structure of 48 offices into 10 hubs, 9 project centers, and 23 program offices; consolidating, within both its headquarters and field offices, program compliance monitoring and enforcement functions; and cross-training field staff. HUD intends the changes to result in more flexibility to shift resources to meet priorities or handle workload demands; improved communication and cooperation among FHEO staff; an organizational structure that will be clearer to the public; and better integration of fair housing into HUD's other programs. The changes address problems such as fragmentation of responsibility and accountability in areas such as policy development, planning, and program evaluation; duplication of field oversight functions; and a split in field management between enforcement and program compliance functions, resulting in a "two FHEO" phenomenon. FHEO did not perform analytical studies to support the changes. Rather, the reforms and benefits identified were based on FHEO's self-analysis, brainstorming sessions, the findings of a change agent team, a review of workload data, and discussions with employees and customers. 
According to the Deputy Secretary, the process changes proposed by the 2020 plan, along with partnerships with states and local entities and the use of contractors, will allow the agency to operate with 7,500 staff--a staffing target level established prior to the plan. Proposed staffing levels for each program area, as outlined in the management reform team and change agent team reports, are generally not based upon systematic workload analyses to determine needs. While the teams were instructed by the Deputy Secretary to determine staffing requirements on the basis of workload, they were also instructed to work within targeted staffing levels and HUD's staffing constraints. The teams relied on a variety of factors, including workload data, to show whether they could carry out their responsibilities within assigned targeted staffing levels. The 2020 plan proposes a staffing target of 2,900 for the Office of Housing, a reduction of about 44 percent from fiscal year 1996 staffing of 5,157. The 2,900 figure includes some positions that will be transferred to the Department-wide Assessment, Enforcement, and Section 8 Financial Management centers; the exact numbers are still evolving as implementation plans are developed for the three centers. The following sections discuss some of the factors considered in assessing the Housing Office's staffing needs. FHA's proposal to carry out single-family housing activities with the reduced staffing level of 764 (as of January 1998) stems primarily from the elimination of most loan servicing and property disposition activities. According to the Deputy Assistant Secretary for Single Family Housing, the proposed staffing level is based on past experience, input from the change agent team and the managers of the 2020 reorganization project, and staffing levels at the Denver Homeownership Center pilot. 
Staffing for the Title I Asset Recovery Center, part of the FHA Comptroller's office, was based in part on a workload analysis performed as part of the business process reengineering project. The workload analysis showed a need for a staffing level of 62. This number was reduced to 50, according to FHA officials, after (1) discussions with Department of Treasury officials who, based on their experience with debt collection activities, believed the operations could be performed more efficiently and (2) higher level reviews, which concluded that further reductions were needed. When assessing multifamily housing staffing needs, FHA considered factors such as job functions, types of housing projects (subsidy or nonsubsidy, troubled or nontroubled), supervisor/staff ratios recommended by the National Performance Review, and nonfederal workloads for asset managers. As part of its assessment, FHA assumed that it will reduce troubled projects to 10 percent of the inventory (from an estimated 20 percent currently) by year 2000. The 2020 plan proposes a staffing target of 770 for Community Planning and Development, a reduction of 8.8 percent from fiscal year 1996 staffing of 844. However, the CPD management reform plan states that an additional 200 personnel may be needed to fully implement its grants management system and undertake adequately staffed on-site monitoring for high-risk projects. This staffing level need is based, according to a CPD official, on staffing and workload data from 1992 and 1996. According to the official, the analysis used a formula that takes into consideration the number of grants, dollar amount of grants, and staffing levels and compared workloads for the 2 years. CPD was unable to provide documentation of the detailed analysis. For Public and Indian Housing, the 2020 plan proposes a staffing target of 1,165, a reduction of 14 percent from fiscal year 1996 staffing of 1,355. 
After receiving its staffing target, PIH first identified the needs of the processing and operations centers. It then allocated the remaining staff to field office sites using a formula that incorporated the number of public housing authorities with 250 or more low-income housing units and/or 500 or more Section 8 rental assistance units within each office's jurisdiction. The 2020 plan proposes a staffing target of 591 for Fair Housing and Equal Opportunity, a reduction of about 11 percent from fiscal year 1996 staffing of 663. Of the 591 staff, 475 will be in field offices. In 1996, FHEO reviewed field office workload data and estimated that it needed from about 150 to about 250 more staff than the 474 then on board. However, officials told us that the Office's legislatively established missions can be accomplished with the allotted personnel level. In its latest semiannual report, HUD's Inspector General raised concerns about the 2020 plan, including the agency's capacity to implement the reforms. The report noted that the downsizing target of 7,500 was adopted without first performing a detailed analysis of HUD's mission and projected workload under its proposed reforms. The report also noted that although HUD is downsizing, implementation plans are not final, and the proposed legislation to streamline and consolidate programs has not been enacted. In commenting on a draft of this report, HUD's Acting Deputy Secretary stated that the Department plans to achieve its downsizing goal of 7,500 full-time employees by 2002 in two phases. During the first phase, HUD has reduced staff to approximately 9,000 employees who are being deployed to enhance the delivery of HUD's programs and services. 
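The reduction percentages cited for the four program areas can be cross-checked directly from the staffing figures themselves. All of the numbers below are taken from the report; the computation simply confirms the stated percentages:

```python
# (fiscal year 1996 staffing, proposed 2020 plan target) per program area.
targets = {
    "Housing": (5157, 2900),                          # "about 44 percent"
    "Community Planning and Development": (844, 770), # "8.8 percent"
    "Public and Indian Housing": (1355, 1165),        # "14 percent"
    "Fair Housing and Equal Opportunity": (663, 591), # "about 11 percent"
}

for office, (fy1996, target) in targets.items():
    pct = 100 * (fy1996 - target) / fy1996
    print(f"{office}: {pct:.1f}% reduction")
```

The computed values (43.8, 8.8, 14.0, and 10.9 percent) match the figures quoted in the plan.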
According to the Acting Deputy Secretary, HUD now plans to continue downsizing to 7,500 by 2002--the second phase--only if (1) the Congress enacts legislation to consolidate HUD's program structure and (2) there has been a substantial reduction in the number of troubled multifamily assisted properties and troubled public housing authorities. On August 10, 1997, HUD and the American Federation of Government Employees National Council of HUD Locals 222 signed an implementation agreement to carry out the 2020 plan. The agreement, among other things, stated that buyouts, attrition, and aggressive outplacement services would be used in lieu of reductions in force through year 2002. The agreement identified two types of positions that would be filled to implement the reforms: substantially similar positions (those that entail similar duties, critical elements, and qualification requirements and can be performed by the incumbent with little loss in productivity) and new positions. The procedures outlined in the agreement to fill substantially similar positions are as follows: Reassignments to similar positions will be in the local commuting area. Positions not filled by reassignments will be filled by merit selection. Any positions still vacant will be filled by management's directed reassignment of an employee. (Because of employees' concerns, HUD has decided not to use this procedure.) Any position still vacant will be filled by outside hires. The procedures outlined in the agreement to fill new positions are as follows: For HUD's new consolidated centers, positions will be filled using merit selection procedures. Except for positions that require special skills--for example, HUD attorneys and some Community Builders--merit staffing will be restricted to HUD employees. Any positions still vacant will be filled by management's directed reassignments. (Because of employees' concerns, HUD has decided not to use this procedure.) 
Any positions still vacant will be filled by outside hires. HUD initiated personnel actions to implement the 2020 reforms in September 1997. A buyout was held that closed September 30, 1997, in which 771 employees were approved to leave the agency. In October 1997, HUD mailed letters to each of its employees regarding their status under the reforms. HUD sent letters to 3,024 employees notifying them that their jobs were unaffected by the reforms. HUD sent letters to 3,184 employees notifying them that they would be voluntarily reassigned to substantially similar positions within the same geographical area. HUD sent letters to approximately 3,000 employees notifying them that they had not been placed in a position in HUD's new organization. The letters also stated that these employees would remain in their current positions if they did not obtain a position through merit staffing or voluntary reassignment, or pursue a career outside of HUD. The letters stated that HUD would not implement a reduction in force until 2002 if one was necessary. On October 16, 1997, according to HUD, it announced 1,676 merit staffing vacancies. The announcements closed November 3, 1997. In November, an Office of Personnel Management team reviewed HUD's merit staffing guidance for filling these vacancies and made several suggestions for revising the language in the guidance. Also, in November, HUD announced a second buyout that employees had to take advantage of by December 23, 1997. An additional 230 employees were approved to leave the agency under the buyout. In January 1998, HUD announced additional voluntary reassignments for positions that remained unfilled. Any positions still vacant after the voluntary reassignments will be advertised for outside hires. The HUD 2020 Management Reform Plan is the latest in a series of recent proposals to overhaul a department long criticized for its management weaknesses--including those that contributed to our designation of HUD as a high-risk area. 
The plan is directed, in part, towards correcting the management deficiencies that we and others, including the Inspector General and the National Academy of Public Administration, have identified. The plan also incorporates steps for simultaneously reducing the agency's workforce. The 2020 plan is still evolving. Because the reforms are not yet complete and some of the plan's approaches are untested, the extent to which its proposed reforms will result in the plan's intended benefits is unknown. In addition, because the downsizing target of 7,500 staff is not based upon a systematic workload analysis to determine needs, it is uncertain whether HUD will have the capacity to carry out its responsibilities once the reforms are in place. Furthermore, the plan references legislative proposals, some of which, if not enacted, could affect workloads and staffing needs. Moreover, the process changes and downsizing suggest a greater reliance on contractors to help carry out HUD's mission. These uncertainties heighten the need for HUD, as it moves forward with implementing the 2020 plan's reforms, to carefully monitor its performance, assess the impact of the reforms, and amend the plan if necessary--including its staffing targets. Consulting with the Congress, its customers, and other stakeholders through a mechanism such as the Government Performance and Results Act could enhance the success of these efforts. HUD provided comments on a draft of this report (see app. I). HUD said that the report did not consider the agency's need for management reform and whether the plan focuses on the right areas. 
HUD also said that (1) due to its focus on the role of empirical analysis, the draft report did not adequately acknowledge other methods used to develop specific management reforms, (2) the draft report did not reflect that HUD undertook substantial workload analyses to plan for reaching the goal of 7,500 employees, and (3) the draft report failed to discuss any of the benefits likely to emerge from the plan's systemic changes. In its comments, HUD also included information on the 2020 plan's implementation status and how certain of its specific reforms are expected to address problems identified by its Inspector General, GAO, and others. Our draft report did not specifically assess HUD's need for management reform and whether the plan focuses on the right areas because they were outside the scope of our objectives. However, the report contains background information on the agency's history of management problems and its reform and downsizing efforts prior to the 2020 plan. We agree that there was a need for HUD to take action and that some actions included in the 2020 plan may help to correct deficiencies that we and others have identified. The 2020 plan seeks to solve many of the critical problems facing the Department. HUD's recognition that it needs to establish Department-wide capacities for real estate assessment and enforcement activities; improve internal controls; and improve systems and staffing for monitoring funds and multifamily project and public housing authority activities is consistent with the long-standing concerns that we and others have had. In this regard, our report was not intended to fault HUD's attempts to correct these deficiencies, and we have made changes where appropriate to reflect a proper tone. 
Regarding HUD's comment about a focus on empirical analysis, two of our three objectives concerned the studies and analyses underlying (1) the efficiencies derived from centralizing and consolidating certain programs and activities and (2) the Department's ability to carry out its responsibilities with the plan's target staffing level of 7,500. By their nature, these questions encompass the role of empirical analysis. The draft report did acknowledge the role played by other factors--including the change agent and management reform teams, the experience of HUD managers and staff, the practices of other organizations, and the experience of the Denver Homeownership Center pilot project--in setting out the efficiencies HUD expects from centralizing and consolidating certain activities. In its comments, HUD said that, in addition to the factors cited in our draft report, it consulted with recognized management experts prior to the June 1997 release of the 2020 plan; consulted with affected constituent groups and the Congress since the plan's release; and incorporated the Inspector General's suggestions into its implementation plans. We agree that such steps may be useful in building support for HUD's reforms. However, as noted, our objectives were to provide information on HUD's analytical support for the efficiencies it expects from the reforms--that is, the extent of data supporting the anticipated quantitative and qualitative benefits stated in the 2020 plan. HUD said that it undertook substantial workload analyses to plan for reaching the goal of 7,500 employees and that the workload analyses--along with the reengineering of numerous processes--formed the foundation for staffing size and allocation decisions. As we noted in our draft report, HUD's management reform and change agent teams relied on a variety of factors, including workload data, to show whether each program area could carry out its responsibilities within assigned targeted staffing levels. 
However, we draw a distinction between (1) analysis that is directed at determining how many staff are needed to carry out a given responsibility or function and (2) the use of historical workload data to apportion, or allocate, a predetermined target number of staff among different locations or functions. While HUD clearly used the latter approach, at least within some program areas, it provided us with no evidence during our review or in its comments that it used the former. Rather, as our report states, the management reform and change agent teams were instructed by the Deputy Secretary to work within targeted staffing levels; the predetermined target level for the entire Department was 7,500, a number established prior to the 2020 planning process. As is also noted in our report, HUD's Inspector General reported in December 1997 that the downsizing target of 7,500 was adopted without first performing a detailed analysis of HUD's mission and projected workload under its proposed reforms. We have revised the language in our report where appropriate to make this distinction clear. We also added information that HUD provided in its comments concerning future downsizing to the 7,500 level from the current level of about 9,000. Concerning HUD's comment that the draft report did not acknowledge potential benefits from the 2020 reform plan, the report noted that the plan is directed in part towards correcting management deficiencies that we and others have identified. Furthermore, the report noted that, in addition to increased efficiency, HUD expects the planned consolidation of functions and other process changes to result in increased effectiveness, such as fewer troubled public housing agencies and troubled FHA multifamily projects. For the reasons stated in the report, we continue to believe that the extent to which these benefits will be realized is as yet uncertain. 
HUD implicitly acknowledges this uncertainty in its comments by conditioning its further downsizing in part on a "substantial" reduction in troubled public housing agencies and multifamily projects. To identify HUD's analyses supporting the (1) prospective efficiencies from centralizing and consolidating major programs and activities and (2) agency's ability to carry out its responsibilities with 7,500 employees, we reviewed the management reform and change agent reports for each of HUD's major program areas. We also interviewed officials in each program area who had participated in, or were familiar with, the process of developing the 2020 plan. We asked officials in each program area to provide any empirical studies or analyses underlying the proposed reforms that did not appear in the management reform or change agent reports. In addition, we spoke with officials in HUD's Office of the Assistant Secretary for Administration and obtained the Inspector General's report on the 2020 planning process. To identify how HUD plans to manage the personnel changes that will result from the reforms and downsizing, we interviewed officials responsible for the changes and obtained copies of union agreements and other relevant documents. We performed our work from September 1997 through February 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to appropriate congressional committees; the Secretary of Housing and Urban Development; and the Director, Office of Management and Budget. We will make copies available to others upon request. If you or your staff have any questions, please call me on (202) 512-7631. Major contributors to this report are listed in appendix II. Results Act: Observations on the Department of Housing and Urban Development's Draft Strategic Plan (GAO/RCED-97-224R, Aug. 8, 1997). High-Risk Series: Department of Housing and Urban Development (GAO/HR-97-12, Feb. 1997). 
The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

or visit:

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (202) 512-6061, or TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed aspects of the management reform proposals outlined in the Department of Housing and Urban Development's (HUD) 2020 Management Reform Plan, focusing on: (1) studies and analyses that HUD performed to determine the efficiencies derived from the centralization and consolidation of the Federal Housing Administration (FHA) and other major programs and activities; (2) studies and workload analyses that were conducted to show that HUD would be able to carry out its responsibilities with 7,500 employees; and (3) HUD's plan to manage the personnel changes that will result from its reforms and downsizing. GAO noted that: (1) reports covering each of HUD's major program areas and functions, prepared by teams of HUD employees in the spring of 1997, are the principal documents supporting the 2020 plan; (2) the reports identify a number of prospective efficiencies from consolidating and centralizing certain processes; (3) in addition to allowing the agency to operate with a reduced workforce, HUD intends the changes to reduce the time or paperwork required for various processes; (4) the efficiencies cited are generally not based upon detailed empirical analyses or studies, but rather on a variety of information, including some workload data, limited results from a pilot project, identified best practices in HUD field offices, benchmarks from other organizations, and managers' and staff's experiences and judgment; (5) the plan is directed in part towards correcting the management deficiencies that have been identified; (6) because the reforms are not yet complete and some of the plan's approaches are untested, the extent to which they will result in the intended benefits is unknown; (7) according to HUD's Deputy Secretary, the process changes proposed by the 2020 plan, along with states and local entities and the use of contractors, will allow the agency to operate with 7,500 staff--a staffing target level established prior to 
the plan; (8) however, proposed staffing levels for each program area are generally not based upon systematic workload analyses to determine needs; (9) while the reform teams were instructed by the Deputy Secretary to determine staffing requirements based upon workload, they were also instructed to work within targeted staffing levels and the Department's staffing constraints; (10) the reform teams relied on a variety of factors, including some workload data, to show whether responsibilities could be carried out within targeted staffing levels; (11) because the downsizing target of 7,500 staff is not based upon a systematic assessment of needs and because proposed legislation could affect those needs, it is uncertain that HUD will have the capacity to carry out its responsibilities once the reforms are in place; (12) an August 1997 agreement between HUD and the American Federation of Government Employees National Council of HUD Locals 222 established the framework for managing personnel changes to implement the 2020 plan; (13) this agreement includes buyouts, reassignments, and an outplacement program for HUD employees and provides that a reduction in force may be used if necessary, but not before 2002; and (14) this agreement also provides for hiring new employees for some positions.
IRS' telephone assistors are located at 25 call sites around the country. In the 1999 filing season, IRS made major changes to its telephone customer service program. For example, IRS extended its hours of service to 24 hours a day, 7 days a week. IRS officials said they believed around-the- clock assistance would improve the level of service by distributing demand more evenly and support IRS' efforts to provide world-class service by making assistance available anytime. Also in 1999, IRS began managing its telephone operations centrally at the Customer Service Operations Center in Atlanta by using new call-routing technology. IRS' call router was designed to improve the overall level of service, as well as lessen disparities in the level of service across sites by sending each call to the first available assistor nationwide who had the necessary skills to answer the taxpayer's question. As part of this centralized management, IRS developed its first national call schedule that projected the volume of calls, for each half-hour, at each of IRS' 25 call sites, and the staff resources necessary to handle that volume. As in previous years, in the 2000 filing season, IRS had three toll-free telephone numbers taxpayers could call with questions about tax law, taxpayer accounts, and refunds. The three primary measures IRS used to evaluate its telephone performance were level of service, tax law accuracy, and account accuracy. IRS measures its level of service by determining the rate at which taxpayers that call IRS actually get through and receive assistance. Level of service is calculated by dividing the number of calls answered by the total call attempts. Calls answered is defined as calls that received service, either from assistors or telephone interactive applications. Total call attempts includes repeat calls and is the sum of calls answered, calls abandoned by the caller before receiving assistance, and calls that received a busy signal. 
IRS' tax law accuracy and account accuracy rates are based on a sample of nationwide calls that quality assurance staff listen in on and score for accuracy. Using IRS' Centralized Quality Review System, staff in Philadelphia listen to sample calls from beginning to end and determine whether the assistors provide accurate answers, follow procedural guidance to ensure a complete response, and are courteous to the taxpayers. If the assistors fail to adhere to any part of the guidance, or are not courteous to the taxpayers, the calls are counted as inaccurate. IRS began centrally monitoring calls to measure tax law accuracy in fiscal year 1999 and account accuracy in fiscal year 2000. To address our objectives, we examined documents and interviewed IRS officials. Specifically: to assess IRS' performance in the three main telephone assistance toll-free numbers, we compared its 2000 filing season level of service, tax law accuracy, and account accuracy with its performance in the 1998 and 1999 filing seasons and its performance targets, and discussed with IRS officials how its performance compared with world-class customer service; to identify the key factors and describe how they affected performance in the 1999 and 2000 filing seasons, we interviewed IRS officials, including executives, division chiefs, and first-line supervisors in Customer Service Field Operations and at call sites, and analyzed documents, including various reports that described and analyzed the factors that affected IRS' performance; to assess IRS' process for analyzing its performance in the 1999 and 2000 filing seasons in order to make improvements, we interviewed IRS officials, including National Office and Customer Service Field Operations officials responsible for collecting and analyzing data on IRS performance, and analyzed documents, including various reports related to the process, such as the 1999 National Office business review and statistical analyses of 2000 filing season performance;
and to determine the basis for restricting supervisors from using productivity data to evaluate or discuss telephone assistor performance, we interviewed IRS officials, including officials in the Organizational Performance Division and Customer Service Field Operations, and analyzed documents related to the restriction, including the Internal Revenue Manual and materials used to train supervisors on the use of statistics. We performed our work at IRS' National Office in Washington, D.C.; the Office of the Chief, Customer Service Field Operations, and Customer Service Operations Center in Atlanta; and the telephone assistance call sites in Atlanta, Dallas, and Kansas City, KS. We chose these three sites in order to include sites of various sizes, hours of operation, and work. We did not independently assess the accuracy of IRS' performance data; however, we verified that IRS had procedures in place intended to ensure data reliability. We did our work from January 2000 through February 2001 in accordance with generally accepted government auditing standards. We obtained written comments on a draft of this report from the Commissioner of Internal Revenue in a letter dated April 2, 2001. The comments are discussed at the end of this report and reprinted in appendix I.

IRS telephone assistance showed mixed results in the 2000 filing season. Performance improved somewhat in the 2000 filing season as compared with 1999, but according to IRS officials, fell short of IRS' long-term goal to provide world-class customer service. While IRS had not established specific measures and goals for world-class service, it was considering adopting some of those used by leading telephone customer service organizations.
In the 2000 filing season, IRS answered 36.1 million of the 61 million calls taxpayers made, resulting in a 59-percent level of service--better than the 50 percent IRS achieved in the 1999 filing season and its target of 58 percent, but short of the 69 percent IRS achieved in the 1998 filing season. IRS provided accurate responses in 73 percent of the tax law calls it answered--unchanged from 1999 and lower than its 2000 target of 80 percent. Account accuracy in the 2000 filing season was slightly lower than IRS' target of 63 percent. Table 1 shows IRS' performance during the 1998-2000 filing seasons. IRS officials in National Office and Customer Service Field Operations recognized that telephone performance in the 2000 filing season fell short of its long-term goal of providing world-class customer service--assistance comparable to that provided by leading public and private telephone customer service organizations. IRS has not defined world-class service in terms of specific measures and goals. However, IRS officials have acknowledged the need to change their performance measures to be more consistent with leading telephone customer service organizations. IRS' level of service measures the percentage of call attempts that receive assistance, with no consideration of how long callers wait for it. Some leading organizations measure service level as the percentage of calls answered within a specified period of time, such as answering 90 percent of calls within 30 seconds. IRS was considering adopting a similar measure and goal. However, IRS' performance in fiscal year 2000 fell substantially short of this level, with only 31 percent of calls being answered within 30 seconds. A number of interrelated factors influenced IRS' telephone assistance performance in the 2000 filing season. According to IRS, some of the key factors were the demand for assistance, staffing levels, assistor productivity, assistor skills, and IRS' guidance for assistors. 
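The level-of-service arithmetic described above, and the within-30-seconds measure used by some leading organizations, can be sketched as follows. Python is used purely for illustration; the per-call wait times in the second example are hypothetical, not IRS data.

```python
# Level of service as the report defines it: calls answered divided by
# total call attempts (answered + abandoned + busy, including repeats).
def level_of_service(calls_answered, total_call_attempts):
    return calls_answered / total_call_attempts

# 2000 filing season figures cited in the text: 36.1M of 61M answered.
print(f"Level of service: {level_of_service(36.1e6, 61e6):.0%}")  # ~59%

# The alternative measure some leading organizations use: the share of
# calls answered within a threshold (e.g., a goal of 90% within 30s).
def service_level(wait_times_sec, threshold_sec=30):
    within = sum(1 for w in wait_times_sec if w <= threshold_sec)
    return within / len(wait_times_sec)

# Hypothetical per-call wait times, for illustration only.
print(f"Within 30 seconds: {service_level([5, 12, 45, 28, 90, 15, 33, 8]):.1%}")
```

The two measures can diverge sharply: a call answered after a 20-minute wait raises level of service but not the within-threshold rate, which is why only 31 percent of IRS' answered calls met a 30-second threshold despite the 59-percent level of service.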
Additionally, many of the factors were interrelated--changes in one factor could cause changes in others. According to an analysis by Customer Service Field Operations officials, IRS was able to answer a greater percentage of calls in the 2000 filing season compared with 1999 because demand for service substantially decreased. IRS measured demand in two ways: total call attempts and unique telephone number attempts. Total call attempts includes repeat calls and is the sum of calls answered, calls abandoned by the caller before receiving assistance, and calls that received a busy signal. The unique telephone number measure is designed to count the number of taxpayers who called, rather than the number of calls. It measures the number of calls from identifiable telephone numbers, and counts all call attempts from each telephone number as one call until it reaches IRS and is served, or until a 1-week window expires. Total call attempts decreased from 83.5 million in 1999 to 62.8 million, a 25-percent decrease, while unique number attempts decreased from 33.2 million to 25.9 million, a 22-percent decrease. According to IRS, demand declined partly because IRS issued 1.8 million fewer notices to taxpayers asking them to call IRS about such issues as math errors IRS detected while processing returns. Also, fewer taxpayers called about the status of their refunds because IRS processed returns more quickly. Additionally, the timing of notices IRS sends taxpayers influences demand for assistance. For example, as we previously reported, in the 2000 filing season, because of contract delays, a contractor mailed the bulk of over 1 million notices to taxpayers over a 2-week period, rather than over a 7-week period as intended. When taxpayers called about the notices, IRS was unprepared for the unexpected increase in the number of telephone calls, which caused level of service to decline during this period.
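The unique-telephone-number measure described above is essentially a windowed deduplication of call attempts. A minimal sketch follows; the event format (phone number, timestamp in seconds, whether the call was served) is an assumption for illustration, since IRS' actual data layout is not given in the report.

```python
# Repeat attempts from one number count as a single call until the
# caller is served or a 1-week window expires.
WEEK = 7 * 24 * 3600  # seconds

def unique_attempts(events):
    """events: (phone_number, timestamp_sec, served_bool) tuples,
    sorted by timestamp. Returns the unique-attempt count."""
    window_start = {}  # phone number -> start of its open window
    count = 0
    for number, ts, served in events:
        start = window_start.get(number)
        if start is None or ts - start > WEEK:
            count += 1                  # a new unique attempt begins
            window_start[number] = ts
        if served:
            window_start.pop(number, None)  # service closes the window
    return count

# The percent decreases cited in the text (figures in millions):
decrease = lambda old, new: (old - new) / old
print(f"Total attempts:  {decrease(83.5, 62.8):.0%} decrease")   # ~25%
print(f"Unique attempts: {decrease(33.2, 25.9):.0%} decrease")   # ~22%
```

Under this counting, a taxpayer who redials three times before getting through registers three total call attempts but only one unique attempt, which is why the two demand measures move by different amounts.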
According to IRS officials, a factor that may have prevented the level of service from being higher in the 2000 filing season was IRS' decision to reduce the staff dedicated to telephone assistance as compared with 1999. Specifically, in the 2000 filing season, IRS dedicated 4,912 staff years to telephone assistance as compared to 5,339 staff years in 1999, an 8-percent decline. According to IRS officials, IRS dedicated fewer resources to telephone assistance to increase staffing in other programs, including the telephone collection system, adjustments, and service center compliance. IRS managers were concerned that in 1999, when IRS redirected resources from these other programs to telephone assistance, the backlog in these programs increased to unacceptable levels, causing uneven service and a decline in collection revenues. Assistor productivity is another factor that affects the level of service taxpayers receive from IRS. According to IRS officials, the level of service would have been higher had assistor productivity not declined in the 2000 filing season. This decline was in addition to a productivity decline that occurred in the 1999 filing season. According to analysts and officials in Customer Service Field Operations, a key indicator of productivity is the average time for an assistor to handle a call. Handle time is the total of the time an assistor spends talking to the taxpayer, the taxpayer is on hold, and the assistor spends in "wrap status", which is the time between hanging up at the end of a call and indicating readiness to receive another call. An IRS analysis showed that the average handle time increased from 318.5 seconds in the 1999 filing season to 371.5 seconds in the 2000 filing season, or about a 17-percent decline in productivity. According to a Treasury Inspector General for Tax Administration report, an increase in the number of calls an assistor handles has a profound effect on level of service. 
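The handle-time arithmetic above can be laid out as a short sketch. The approximately 17-percent figure the report cites corresponds to the relative increase in average handle time between the two seasons; the component values passed to `handle_time` are hypothetical.

```python
# Handle time as the report defines it: talk time + hold time + time
# in "wrap status", all in seconds.
def handle_time(talk_sec, hold_sec, wrap_sec):
    return talk_sec + hold_sec + wrap_sec

# Average handle times cited in the report: 318.5s (1999), 371.5s (2000).
h_1999, h_2000 = 318.5, 371.5
increase = (h_2000 - h_1999) / h_1999
print(f"Handle-time increase: {increase:.1%}")  # ~16.6%, the ~17% cited

# The same change expressed as calls an assistor could handle per hour
# of continuous answering:
per_hour = lambda h: 3600 / h
print(f"{per_hour(h_1999):.1f} -> {per_hour(h_2000):.1f} calls per hour")
```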
For example, if assistors had handled one more call per hour, IRS would have answered more than 8.5 million additional calls during the first 6 months of fiscal year 1999. While IRS had not determined all the causes of the decline in productivity since 1998, according to a July 2000 IRS study, approximately 58 percent of the productivity decline from 1999 to 2000 was due to assistors' receiving a greater percentage of calls that took longer to handle. For example, screening calls, in which the assistor talked with the taxpayer for only a short time to determine the taxpayer's question and where the call should be routed, decreased from 35 percent of the calls assistors handled in 1999 to 21 percent in 2000. The study concluded that assistors likely handled fewer of these calls because IRS changed its telephone message to discourage callers from posing as rotary dialers without a touch-tone telephone, allowing them to bypass the menu system and go directly to an assistor. This study did not identify what caused the remaining 42 percent of the productivity decline in 2000. According to IRS officials, four policy changes that lowered productivity in the 1999 filing season continued to adversely affect productivity in the 2000 filing season. Specifically, in 1999, IRS discontinued automatically routing another call to an assistor immediately upon completion of a call; increased restrictions on using productivity data when evaluating assistors' performance; disproportionately diverted staff from the peak demand shifts to shifts when fewer taxpayers call when it implemented its 24-hour-a-day, 7-day-a-week assistance; and discontinued measuring productivity of individual call sites. First, as part of its November 1998 agreement with the National Treasury Employees Union, IRS discontinued using a call management tool--"auto-available"--that automatically routed another telephone call to an assistor as soon as a call was completed.
Instead, assistors were placed in "wrap status" after each call and were unavailable until they pressed a keyboard button that made them available. Wrap status was designed to allow assistors time to document the results of a call or to allow them to take a momentary break after a stressful call. According to IRS officials, allowing assistors to determine when they were ready to take another call added time to each call, causing other callers to wait longer for service. With longer wait times, many taxpayers hung up before reaching an assistor, thereby reducing level of service. According to IRS statistics, for its tax law, account, and refund assistance lines, the average wrap times increased 94, 204, and 176 percent, respectively, from 1998 to 1999. Second, 1999 was the first filing season with increased restrictions on supervisors using productivity data to evaluate assistors' performance or discuss their performance. Some IRS studies of the 1999 filing season concluded that the restrictions negatively affected productivity. For example, one IRS study found that many site managers were concerned about their inability to properly manage assistors' use of wrap time without using productivity data. Five of the seven supervisors we spoke to about the 2000 filing season said they were dissatisfied with the restrictions. They said assistors know supervisors are restricted from using productivity data to evaluate employees' performance and that supervisors do not have adequate time to devote to monitoring and physical observation. Therefore, they said assistors are free to spend more time than necessary in wrap status. Our conversations with IRS officials, including supervisors at call sites and officials in the Organizational Performance Division, and review of related documents indicated officials were uncertain about the basis for the restriction, and some thought that it was mandated by the IRS Restructuring and Reform Act. 
We discuss this issue near the end of this report. Third, increasing the hours of telephone assistance to 24 hours a day, 7 days a week for the 1999 filing season may have decreased overall productivity because IRS disproportionately shifted staffing away from the hours when most taxpayers call. According to an IRS review, the diversion of staff away from hours when most taxpayers called resulted in a lower level of service because taxpayers waited longer for assistance, more taxpayers hung up while waiting, and demand increased because taxpayers redialed more. Limited data from a week in the 2000 filing season indicated that IRS continued to overstaff the night shift when compared to the other shifts. For example, for the week of April 2, 2000, through April 8, 2000, assistors working the night shift spent, on average, 44 percent of their time waiting to receive a call, whereas assistors working the day and evening shift spent 15 percent of their time waiting to receive a call. An IRS Customer Service Field Operations official responsible for scheduling staff said assistors spent more time waiting for calls at night because, when compared with the demand for assistance, IRS scheduled disproportionately more assistors during the night shift than other shifts. Assistors working nights generally had fewer skills, which required a disproportionate level of staffing to ensure that all needed skills were available. According to the official, IRS' attempts to attract more skilled assistors to work off-peak hours were unsuccessful. To counter the negative effects of staffing the extended hours, for fiscal year 2000, IRS limited its staffing of tax law assistance to 16 hours a day, 6 days a week after the filing deadline, when fewer taxpayers call with tax law questions. Fourth, beginning in 1999, IRS no longer had a performance measure that held sites accountable for productivity. 
Instead of measuring level of service as it had in the past, IRS measured a site's performance on the number of assistors assigned to answer telephone calls each half-hour as compared to the number of assistors specified in the site's half-hour work schedule. IRS made this change, in part, because the sites were no longer responsible for predicting and meeting demand. According to an IRS assessment of the 1999 filing season, replacing the site level of service measure with the measure of assistor presence diminished the focus on productivity and the extent to which sites sought opportunities to improve productivity. IRS Customer Service Field Operations officials added that, despite the decline in productivity, taxpayers might have received better service overall if assistors took the time needed to fully resolve each taxpayer's call, rather than being concerned about the number of calls answered. However, IRS had not determined if the decline in productivity had improved the quality of service. According to IRS officials, including the Commissioner, Customer Service Field Operations officials, and supervisors at call sites, the accuracy rates IRS achieved in the 2000 filing season continued to be adversely affected by assistor skill gaps--the difference between the skills assistors had and the skills needed by IRS. Skill gaps were caused, in part, when IRS implemented its new call router in 1999. With the call router, individual assistors were required to answer calls on a broader range of topics, often without adequate training or experience. Before the 1999 filing season, each call site decided how it would group topics for routing and assistor specialization. According to a cognizant official, the number of topic groups at sites ranged from 40 to 125, which allowed assistors to typically specialize in only one or two topics. 
Because the new call router could not handle differences in topic groups among call sites, nor efficiently route calls to that many groups, the topic groups had to be standardized and were reduced to 31. This increased the number of topics in each group, which typically required an assistor to answer calls on five or more tax law topics, creating a skill gap. IRS officials recognized that assistors had struggled with the amount of information they were required to know in 1999, so for the 2000 filing season IRS increased the number of topic groups to 46, which decreased the number of topics in each group. However, according to IRS officials, the loss of specialization continued to affect accuracy in the 2000 filing season. IRS officials said they were aware of how skill gaps had negatively affected the accuracy of the assistance taxpayers received in 1999 and, in August 1999, IRS began to revise its training materials to better prepare assistors to answer questions in their assigned topic groups. However, according to IRS officials, much of the new training material was not developed in time for the 2000 filing season. Furthermore, a cognizant IRS official said the first attempt to revise the training did not separate each topic into a self-contained course. For the 2001 filing season, IRS revised its training material so that each course contained only one topic, enabling IRS to provide assistors with just-in-time training on the specific topics they were assigned to work. IRS officials said organizational changes are needed to further reduce the number of topics assistors are expected to know. In a May 2000 memo, the Commissioner cited low accuracy scores and employee survey comments as evidence that IRS was expecting its assistors and managers to have knowledge in areas that are far too broad and that IRS was "attempting the impossible" by trying to fill skill gaps solely with training. 
IRS officials said IRS' reorganization would allow specialization by taxpayer group, but that even greater levels of specialization were needed. Accordingly, as part of its restructuring efforts, in June 2000, IRS began long-term planning efforts to create greater specialization at both the call site and assistor levels. The quality of the guidance assistors used also affected whether they provided accurate assistance. IRS officials at National Office and call sites said the guidance assistors used in the 2000 filing season to respond to account questions was confusing and difficult to use, causing assistors to make mistakes, thereby lowering the accuracy rate. IRS officials said that over the years, the Internal Revenue Manual--the assistors' guide for account questions--had grown from a collection of handbooks to a large, unwieldy document with duplicative and erroneous information. According to IRS officials, errors in the Manual had long been a problem for which sites had developed local "workaround" procedures. IRS established a task force to correct these problems, and issued a new draft version at the end of the 1999 filing season. While the draft Manual was smaller and contained less duplicative and erroneous information, it was missing some needed information and cross-references. However, IRS did not realize the extent of the problems with the Manual until October 1999, when it began holding assistors accountable for strictly adhering to the Manual as part of its central monitoring of account accuracy. As a result, the draft was recalled, and the task force continued to make corrections to the Manual throughout the filing season. The task force issued two new versions in February 2000 and May 2000. According to IRS officials, the frequent changes in the Manual made it difficult for assistors to know which version to use, sometimes leading to inaccurate answers. 
According to IRS officials responsible for Manual revision, as of October 1, 2000, the task force had corrected problems with the Manual and related training material in time for the 2001 filing season. Additionally, IRS officials said they implemented a new guide in October 2000 to make it easier for assistors to follow the proper steps and provide accurate assistance to taxpayers with account questions. Determining how each factor affects level of service and accuracy is made even more difficult because many of the factors are interrelated; changes in one can affect another. For example, the demand for assistance, or the number of call attempts, is influenced by the level of productivity. Fewer incoming calls make it easier for a given number of assistors to answer a greater percentage of incoming calls. Answering a greater percentage of incoming calls--a higher productivity level--reduces the number of repeat calls, which reduces the number of calls overall. Similarly, the quality of guidance assistors use affects not only accuracy, but also demand. While step-by-step guidance on how to respond to questions would likely improve accuracy levels and service for some taxpayers, it could also cause assistors to take more time answering the call, lower productivity, and increase the number of taxpayers who are unable to get through, causing them to redial, and thereby increase demand.

IRS' analysis of its telephone assistance performance in the 1999 and 2000 filing seasons was incomplete. Although IRS collected various data and conducted several analyses of performance, the approach either did not assess or assessed incompletely some of the key management decisions and other factors that affected performance. As a consequence, IRS management had less information than it could have on which to make decisions intended to improve future performance. IRS undertook many efforts in 1999 and 2000 intended to identify factors that affected performance.
For example, IRS conducted a best practices productivity study in 1999 to identify best practices among IRS call sites and why productivity varied among them; reviewed its implementation of 24-hour-a-day, 7-day-a-week assistance to determine its effects on such things as costs and quality of assistance; conducted local and centralized monitoring of telephone calls to determine what errors assistors made and why; conducted a study in 2000 to determine why productivity had declined; established a filing season critique program in 2000 to solicit information from field staff about their problems and successes during the filing season; and conducted a 1999 fiscal year business review that addressed many of the factors that affected telephone performance. In some of its efforts, IRS began analyzing the data made available through management information systems at its Customer Service Operations Center, which opened in December 1998. For example, as a part of the 2000 productivity study noted above, IRS used statistical analysis to assess how productivity was affected by such factors as the complexity of calls handled and assistor experience and education. In a similar analysis, IRS assessed how call demand was affected by such factors as returns filed, notices issued, refunds issued, refund cycle times, and electronic filing return rates. Although IRS now has better quantitative data to assess its performance and make decisions about ways to improve performance, IRS officials said much work still needs to be done to understand the factors that affect performance. Other leading telephone customer service organizations we studied see the importance of continuous evaluation and incorporating evaluation results to make improvements. As we said in a recent report on management reform, "an organization cannot improve performance and customer satisfaction if it does not know what it does that causes current levels of performance and customer satisfaction." 
IRS' efforts to evaluate the factors affecting telephone assistance were incomplete and failed to provide IRS management with some significant information that could have been used to improve performance. For example, while IRS did several studies of productivity, the studies relied on handle time as the measure of productivity. Other segments of assistors' time that would affect overall productivity, including time spent waiting to receive a call, time spent away from the telephone (in meetings, breaks, and training), and time assistors were not assigned to answer calls, were not studied. In another example, the most extensive single review of the factors that affected performance--the 1999 National Office business review--did not assess how extending the hours of service to 24 hours, 7 days a week affected level of service. Earlier, we described how IRS' disproportionate move of assistors to the night shift created differentials between shifts in the time spent waiting for a call. Furthermore, while the National Office review examined the effects of demand on service, it did not examine why demand increased in 1999. Also, IRS did not evaluate the effectiveness of its management decision not to automatically route calls to assistors as soon as they completed a call, or the several other policy changes noted above, even though they were intended to significantly improve overall performance. The gaps in IRS' information about the factors affecting past performance impaired IRS' efforts to improve performance. An important example is the decline in productivity, as measured by handle time. As discussed earlier, some IRS officials believe that taxpayers may have received better service overall if assistors took the time needed to fully resolve taxpayers' calls. However, IRS had not determined whether overall service improved as a result of increased handle time. Also discussed earlier was the quality of guidance provided assistors. 
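The time segments listed above, none of which entered the handle-time-based productivity studies, can be laid out as a simple utilization breakdown. All minute values below are hypothetical illustrations, not IRS data; only the 371.5-second average handle time comes from the report.

```python
# Hypothetical breakdown of one assistor staff-hour into the time
# segments the report says a complete productivity analysis would cover.
segments = {
    "talking/hold/wrap (handle time)": 32,   # minutes; illustrative
    "waiting to receive a call": 10,
    "meetings, breaks, and training": 12,
    "not assigned to answer calls": 6,
}
total_minutes = sum(segments.values())
assert total_minutes == 60  # one staff-hour

for name, minutes in segments.items():
    print(f"{name:34s} {minutes:3d} min ({minutes / total_minutes:.0%})")

# Calls handled per staff-hour depend both on handle time and on the
# share of the hour actually spent handling calls (371.5s is the 2000
# average handle time cited in the report):
calls_per_staff_hour = segments["talking/hold/wrap (handle time)"] * 60 / 371.5
print(f"Calls per staff-hour: {calls_per_staff_hour:.1f}")
```

A study that tracks only handle time would miss changes in the other three segments, which is the gap the report identifies.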
IRS did not realize the extent of the problems in the Internal Revenue Manual until October 1999--too late for fixes to be made for the 2000 filing season--and assistors therefore sometimes gave taxpayers inaccurate answers. It was IRS' "balanced measures" performance management system, not the IRS Restructuring and Reform Act of 1998, that was the basis for IRS restricting the use of productivity data to evaluate employee performance. The Act, and subsequent regulation, prohibited supervisors from using records of tax enforcement results, or other quantity measures, to impose production quotas on or evaluate employees who make judgments about the enforcement of tax laws. When designing and implementing the balanced measures system, IRS management decided to prohibit telephone assistance supervisors from using productivity data when evaluating all assistors, even those who do not make tax enforcement judgments. The prohibition was intended to promote a more balanced focus by assistors on efficiency, quality, and service. According to Organizational Performance Division officials, the balanced measures system does not prohibit supervisors from using productivity data to monitor employee performance. However, it requires supervisors to "get behind the numbers" and base discussions and evaluations of employee performance solely on the direct review of employees' work. Officials said IRS' design of the balanced measures system was heavily influenced by IRS' environment in 1997 and 1998, when IRS was under intense pressure from Congress, the administration, and stakeholders to improve service to taxpayers. 
The National Performance Review Customer Service Task Force and National Commission on Restructuring the IRS had found that IRS' overall environment and performance measurement focused on productivity to the detriment of service to taxpayers, making employees strive to meet short-term performance and efficiency goals rather than have a balanced focus on efficiency, quality, and taxpayer service. IRS officials said the overemphasis on level of service and other productivity measures had resulted in employees perceiving that productivity was more important than quality, so assistors hurried through telephone calls and served taxpayers poorly, rather than taking the time necessary to give the taxpayer full, quality service. Also, officials said supervisors tended to consider measures as ends in themselves, rather than determining the causes behind employee performance and taking action to improve performance. IRS must significantly improve telephone assistance if it is to meet its long-term goal of providing world-class customer service to the tens of millions of taxpayers who call. While IRS has undertaken efforts to analyze its performance and identify ways to improve, these efforts have been incomplete. IRS' analyses did not cover all of the key management decisions and other key factors that affect telephone performance. Designing and conducting a comprehensive analysis of the key management decisions and other key factors that affect telephone performance in each filing season will be a difficult task because the factors that affect performance are multiple and interrelated. However, without a more comprehensive analysis of the factors that affect performance, IRS management lacks the information it needs to make decisions to improve performance. 
We recommend that the IRS Commissioner ensure, as part of its analysis of telephone assistance performance each filing season, that IRS take into account all key management decisions and other key factors that can affect performance, such as implementing 24-hour, 7-day assistance and the decline in assistor productivity, to determine their impact on the quality of service and to make improvements. The Commissioner of Internal Revenue provided written comments on a draft of this report in an April 2, 2001, letter, which is reprinted in appendix I. The Commissioner agreed with our assessment of IRS' telephone performance during the 2000 filing season and with our recommendation. The Commissioner stated that the assessment of key management decisions and direction should be fully integrated into both the planning process and performance review. He recognized that IRS needed to improve its performance analysis to take into account all key management decisions and other factors that can affect performance. He stated that this would be done as a part of IRS' annual filing season evaluation. The Commissioner again expressed concern with our comparison of IRS' performance in the 2000 filing season with its performance in the 1998 filing season, commenting that "comparisons to 1998 are not valid due to the changes made to accommodate our technological advance to a national networked system." As stated in our evaluation of the Commissioner's comments on our earlier report, we believe it is appropriate to compare IRS' performance before and after such operational changes. The changes made after 1998 were intended to improve IRS' telephone service. The only way to tell if service improved is to compare performance levels before and after the changes. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. 
At that time, we will send copies to Representative William J. Coyne, Ranking Minority Member of the Subcommittee; Representative William Thomas, Chairman, and Representative Charles B. Rangel, Ranking Minority Member, Committee on Ways and Means; the Honorable Paul H. O'Neill, Secretary of the Treasury; the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; and the Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget. We will also make copies available to others upon request. If you have any questions or would like additional information, please call James R. White at (202) 512-9110 or Carl Harris at (404) 679-1900. Key contributors to this report are Ronald W. Jones, Julie Schneiberg, and Sally Gilley.
The Internal Revenue Service (IRS) must significantly improve telephone assistance if it is to meet its long-term goal of providing world-class customer service to the tens of millions of taxpayers who call. Although IRS has tried to analyze its performance and identify ways to improve, these efforts have been incomplete. IRS' analyses did not cover all of the key management decisions and other key factors that affect telephone performance. Designing and conducting a comprehensive analysis of the key management decisions and other key factors that affect telephone performance in each filing season will be difficult because the factors that affect performance are multiple and interrelated. However, without a more comprehensive analysis of the factors that affect performance, IRS lacks the information it needs to make decisions to improve performance.
Child pornography is prohibited by federal statutes, which provide for civil and criminal penalties for its production, advertising, possession, receipt, distribution, and sale. Defined by statute as the visual depiction of a minor--a person under 18 years of age--engaged in sexually explicit conduct, child pornography is unprotected by the First Amendment, as it is intrinsically related to the sexual abuse of children. In the Child Pornography Prevention Act of 1996, Congress sought to prohibit images that are or appear to be "of a minor engaging in sexually explicit conduct" or are "advertised, promoted, presented, described, or distributed in such a manner that conveys the impression that the material is or contains a visual depiction of a minor engaging in sexually explicit conduct." In 2002, the Supreme Court struck down this legislative attempt to ban "virtual" child pornography in Ashcroft v. The Free Speech Coalition, ruling that the expansion of the act to material that did not involve and thus harm actual children in its creation is an unconstitutional violation of free speech rights. According to government officials, this ruling may increase the difficulty of prosecuting those who produce and possess child pornography. Defendants may claim that pornographic images are of "virtual" children, thus requiring the government to establish that the children shown in these digital images are real. Recently, Congress enacted the PROTECT Act, which attempts to address the constitutional issues raised in The Free Speech Coalition decision. Historically, pornography, including child pornography, tended to be found mainly in photographs, magazines, and videos. With the advent of the Internet, however, both the volume and the nature of available child pornography have changed significantly. 
The rapid expansion of the Internet and its technologies, the increased availability of broadband Internet services, advances in digital imaging technologies, and the availability of powerful digital graphic programs have led to a proliferation of child pornography on the Internet. According to experts, pornographers have traditionally exploited--and sometimes pioneered--emerging communication technologies--from the dial-in bulletin board systems of the 1970s to the World Wide Web--to access, trade, and distribute pornography, including child pornography. Today, child pornography is available through virtually every Internet technology (see table 1). Among the principal channels for the distribution of child pornography are commercial Web sites, Usenet newsgroups, and peer-to-peer networks. Web sites. According to recent estimates, there are about 400,000 commercial pornography Web sites worldwide, with some of the sites selling pornographic images of children. The child pornography trade on the Internet is not only profitable, it has a worldwide reach: recently a child pornography ring was uncovered that included a Texas-based firm providing credit card billing and password access services for one Russian and two Indonesian child pornography Web sites. According to the U.S. Postal Inspection Service, the ring grossed as much as $1.4 million in just 1 month selling child pornography to paying customers. Usenet. Usenet newsgroups also provide access to pornography, with several of the image-oriented newsgroups being focused on child erotica and child pornography. These newsgroups are frequently used by commercial pornographers who post "free" images to advertise adult and child pornography available for a fee from their Web sites. Peer-to-peer networks. 
Although peer-to-peer file-sharing programs are largely known for the extensive sharing of copyrighted digital music, they are emerging as a conduit for the sharing of pornographic images and videos, including child pornography. In a recent study by congressional staff, a single search for the term "porn" using a file-sharing program yielded over 25,000 files. In another study, focused on the availability of pornographic video files on peer-to-peer sharing networks, a sample of 507 pornographic video files retrieved with a file-sharing program included about 3.7 percent child pornography videos. Table 2 shows the key national organizations and agencies that are currently involved in efforts to combat child pornography on peer-to-peer networks. The National Center for Missing and Exploited Children (NCMEC), a federally funded nonprofit organization, serves as a national resource center for information related to crimes against children. Its mission is to find missing children and prevent child victimization. The center's Exploited Child Unit operates the CyberTipline, which receives child pornography tips provided by the public; its CyberTipline II also receives tips from Internet service providers. The Exploited Child Unit investigates and processes tips to determine if the images in question constitute a violation of child pornography laws. The CyberTipline provides investigative leads to the Federal Bureau of Investigation (FBI), U.S. Customs, the Postal Inspection Service, and state and local law enforcement agencies. The FBI and the U.S. Customs also investigate leads from Internet service providers via the Exploited Child Unit's CyberTipline II. The FBI, Customs Service, Postal Inspection Service, and Secret Service have staff assigned directly to NCMEC as analysts. Two organizations in the Department of Justice have responsibilities regarding child pornography: the FBI and the Justice Criminal Division's Child Exploitation and Obscenity Section (CEOS). 
The FBI investigates various crimes against children, including federal child pornography crimes involving interstate or foreign commerce. It deals with violations of child pornography laws related to the production of child pornography; selling or buying children for use in child pornography; and the transportation, shipment, or distribution of child pornography by any means, including by computer. CEOS prosecutes child sex offenses and trafficking in women and children for sexual exploitation. Its mission includes prosecution of individuals who possess, manufacture, produce, or distribute child pornography; use the Internet to lure children to engage in prohibited sexual conduct; or traffic in women and children interstate or internationally to engage in sexually explicit conduct. Two other organizations have responsibilities regarding child pornography: the Customs Service (now part of the Department of Homeland Security) and the Secret Service in the Department of the Treasury. The Customs Service targets illegal importation and trafficking in child pornography and is the country's front line of defense in combating child pornography distributed through various channels, including the Internet. Customs is involved in cases with international links, focusing on pornography that enters the United States from foreign countries. The Customs CyberSmuggling Center has the lead in the investigation of international and domestic criminal activities conducted on or facilitated by the Internet, including the sharing and distribution of child pornography on peer-to-peer networks. Customs maintains a reporting link with NCMEC, and it acts on tips received via the CyberTipline from callers reporting instances of child pornography on Web sites, Usenet newsgroups, chat rooms, or the computers of users of peer-to-peer networks. The center also investigates leads from Internet service providers via the Exploited Child Unit's CyberTipline II. The U.S. 
Secret Service does not investigate child pornography cases on peer-to-peer networks; however, it does provide forensic and technical support to NCMEC, as well as to state and local agencies involved in cases of missing and exploited children. Child pornography is easily shared and accessed through peer-to-peer file-sharing programs. Our analysis of 1,286 titles and file names identified through KaZaA searches on 12 keywords showed that 543 (about 42 percent) of the files had titles and file names associated with child pornography images. Of the remaining files, 34 percent were classified as adult pornography, and 24 percent as nonpornographic (see fig. 1). No files were downloaded for this analysis. The ease of access to child pornography files was further documented by retrieval and analysis of image files, performed on our behalf by the Customs CyberSmuggling Center. Using 3 of the 12 keywords that we used to document the availability of child pornography files, a CyberSmuggling Center analyst used KaZaA to search, identify, and download 305 files, including files containing multiple images and duplicates. The analyst was able to download 341 images from the 305 files identified through the KaZaA search. The CyberSmuggling Center analysis of the 341 downloaded images showed that 149 (about 44 percent) of the downloaded images contained child pornography (see fig. 2). The center classified the remaining images as child erotica (13 percent), adult pornography (29 percent), or nonpornographic (14 percent). These results are consistent with the observations of NCMEC, which has stated that peer-to-peer technology is increasingly popular for the dissemination of child pornography. However, it is not the most prominent source for child pornography. As shown in table 3, since 1998, most of the child pornography referred by the public to the CyberTipline was found on Internet Web sites. 
Since 1998, the center has received over 76,000 reports of child pornography, of which 77 percent concerned Web sites, and only 1 percent concerned peer-to-peer networks. Web site referrals have grown from about 1,400 in 1998 to over 26,000 in 2002--or about a nineteenfold increase. NCMEC did not track peer-to-peer referrals until 2001. In 2002, peer-to-peer referrals increased more than fourfold, from 156 to 757, reflecting the increased popularity of file-sharing programs. Juvenile users of peer-to-peer networks face a significant risk of inadvertent exposure to pornography when searching and downloading images. In a search using innocuous keywords likely to be used by juveniles searching peer-to-peer networks (such as names of popular singers, actors, and cartoon characters), almost half the images downloaded were classified as adult or cartoon pornography. Juvenile users may also be inadvertently exposed to child pornography through such searches, but the risk of such exposure is smaller than that of exposure to pornography in general. To document the risk of inadvertent exposure of juvenile users to pornography, the Customs CyberSmuggling Center performed KaZaA searches using innocuous keywords likely to be used by juveniles. The center's image searches used three keywords representing the names of a popular female singer, child actors, and a cartoon character. A center analyst performed the search, retrieval, and analysis of the images. These searches produced 157 files, some of which were duplicates. From these 157 files, the analyst was able to download 177 images. Figure 3 shows our analysis of the CyberSmuggling Center's classification of the 177 downloaded images. We determined that 61 images contained adult pornography (34 percent), 24 images consisted of cartoon pornography (14 percent), 13 images contained child erotica (7 percent), and 2 images (1 percent) contained child pornography. The remaining 77 images were classified as nonpornographic. 
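The classification percentages quoted above follow directly from the reported image counts. As a check, this brief sketch (counts taken from the text; the code itself is illustrative and not part of GAO's methodology) reproduces the rounded shares of the 177 downloaded images:

```python
# Image counts as reported in the text for the 177 downloaded images.
counts = {
    "adult pornography": 61,
    "cartoon pornography": 24,
    "child erotica": 13,
    "child pornography": 2,
    "nonpornographic": 77,
}

total = sum(counts.values())
assert total == 177  # counts reconcile with the reported total

# Rounded percentage share of each category.
shares = {category: round(100 * n / total) for category, n in counts.items()}
```

The computed shares match the figures in the text: 34, 14, 7, and 1 percent, with the remaining 44 percent nonpornographic.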
Because law enforcement agencies do not track the resources dedicated to specific technologies used to access and download child pornography on the Internet, we were unable to quantify the resources devoted to investigations concerning peer-to-peer networks. These agencies (including the FBI, CEOS, and Customs) do devote significant resources to combating child exploitation and child pornography in general. Law enforcement officials told us, however, that as tips concerning child pornography on the peer-to-peer networks increase, they are beginning to focus more law enforcement resources on this issue. Table 4 shows the levels of funding related to child pornography issues that the primary organizations reported for fiscal year 2002, as well as a description of their efforts regarding peer-to-peer networks in particular. An important new resource to facilitate the identification of the victims of child pornographers is the National Child Victim Identification Program, run by the CyberSmuggling Center. This resource is a consolidated information system containing seized images that is designed to allow law enforcement officials to quickly identify and combat the current abuse of children associated with the production of child pornography. The system's database is being populated with all known and unique child pornographic images obtained from national and international law enforcement sources and from CyberTipline reports filed with NCMEC. It will initially hold over 100,000 images collected by federal law enforcement agencies from various sources, including old child pornography magazines. According to Customs officials, this information will help, among other things, to determine whether actual children were used to produce child pornography images by matching them with images of children from magazines published before modern imaging technology was invented. Such evidence can be used to counter the assertion that only virtual children appear in certain images. 
The system, which became operational in January 2003, is housed at the Customs CyberSmuggling Center and can be accessed remotely in "read only" format by the FBI, CEOS, the U.S. Postal Inspection Service, and NCMEC. In summary, Mr. Chairman, our work shows that child pornography as well as adult pornography is widely available and accessible on peer-to-peer networks. Even more disturbing, we found that peer-to-peer searches using seemingly innocent terms that clearly would be of interest to children produced a high proportion of pornographic material, including child pornography. The increase in reports of child pornography on peer-to-peer networks suggests that this problem is increasing. As a result, it will be important for law enforcement agencies to follow through on their plans to devote more resources to this technology and continue their efforts to develop effective strategies for addressing this problem. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other Members of the Committee may have at this time. If you should have any questions about this testimony, please contact me at (202) 512-6240 or by E-mail at [email protected]. Key contributors to this testimony were Barbara S. Collier, Mirko Dolak, James M. Lager, Neelaxi V. Lakhmani, James R. Sweetman, Jr., and Jessie Thomas. Peer-to-peer file-sharing programs represent a major change in the way Internet users find and exchange information. Under the traditional Internet client/server model, access to information and services is accomplished by interaction between clients--users who request services--and servers--providers of services, usually Web sites or portals. Unlike this traditional model, the peer-to-peer model enables consenting users--or peers--to directly interact and share information with each other, without the intervention of a server. 
A common characteristic of peer-to-peer programs is that they build virtual networks with their own mechanisms for routing message traffic. The ability of peer-to-peer networks to provide services and connect users directly has resulted in a large number of powerful applications built around this model. These range from the SETI@home network (where users share the computing power of their computers to search for extraterrestrial life) to the popular KaZaA file-sharing program (used to share music and other files). As shown in figure 4, there are two main models of peer-to-peer networks: (1) the centralized model, in which a central server or broker directs traffic between individual registered users, and (2) the decentralized model, based on the Gnutella network, in which individuals find each other and interact directly. As shown in figure 4, in the centralized model, a central server/broker maintains directories of shared files stored on the computers of registered users. When Bob submits a request for a particular file, the server/broker creates a list of files matching the search request by checking it against its database of files belonging to users currently connected to the network. The broker then displays that list to Bob, who can then select the desired file from the list and open a direct link with Alice's computer, which currently has the file. The download of the actual file takes place directly from Alice to Bob. This broker model was used by Napster, the original peer-to-peer network, facilitating mass sharing of material by combining the file names held by thousands of users into a searchable directory that enabled users to connect with each other and download MP3 encoded music files. Because much of this material was copyrighted, Napster as the broker of these exchanges was vulnerable to legal challenges, which eventually led to its demise in September 2002. In contrast to Napster, most current-generation peer-to-peer networks are decentralized. 
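The centralized (broker) model described above can be sketched in a few lines. The class and method names below are illustrative assumptions, not an actual Napster protocol: the point is that the broker only answers searches from its directory, while the file transfer itself runs directly between peers.

```python
class Broker:
    """Central server that indexes the files shared by registered peers."""
    def __init__(self):
        self.directory = {}  # file name -> set of peer names holding it

    def register(self, peer):
        for name in peer.shared_files:
            self.directory.setdefault(name, set()).add(peer.name)

    def search(self, query):
        # Return {file name: holders} for every indexed file matching the query.
        return {name: holders for name, holders in self.directory.items()
                if query in name}

class Peer:
    def __init__(self, name, shared_files=()):
        self.name = name
        self.shared_files = set(shared_files)

    def download(self, other, file_name):
        # The transfer is direct, peer to peer; the broker only supplied
        # the directory listing.
        assert file_name in other.shared_files
        return f"{file_name} (from {other.name})"

broker = Broker()
alice = Peer("Alice", ["song.mp3", "notes.txt"])
bob = Peer("Bob")
broker.register(alice)

hits = broker.search("song")            # Bob asks the broker, not Alice
assert hits == {"song.mp3": {"Alice"}}
result = bob.download(alice, "song.mp3")  # direct Alice -> Bob transfer
```

Because every search passes through the broker's directory, the operator of the central server is visible and accountable--which, as noted above, is what made Napster vulnerable to legal challenge.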
Because they do not depend on the server/broker that was the central feature of the Napster service, these networks are less vulnerable to litigation from copyright owners, as pointed out by Gartner. In the decentralized model, no brokers keep track of users and their files. To share files using the decentralized model, Ted starts with a networked computer equipped with a Gnutella file-sharing program such as KaZaA or BearShare. Ted connects to Carol, Carol to Bob, Bob to Alice, and so on. Once Ted's computer has announced that it is "alive" to the various members of the peer network, it can search the contents of the shared directories of the peer network members. The search request is sent to all members of the network, starting with Carol; members will in turn send the request to the computers to which they are connected, and so forth. If one of the computers in the peer network (say, for example, Alice's) has a file that matches the request, it transmits the file information (name, size, type, etc.) back through all the computers in the pathway toward Ted, where a list of files matching the search request appears on Ted's computer through the file-sharing program. Ted can then open a connection with Alice and download the file directly from Alice's computer. The file-sharing networks that result from the use of peer-to-peer technology are both extensive and complex. Figure 5 shows a map or topology of a Gnutella network whose connections were mapped by a network visualization tool. The map, created in December 2000, shows 1,026 nodes (computers connected to more than one computer) and 3,752 edges (computers on the edge of the network connected to a single computer). This map is a snapshot showing a network in existence at a given moment; these networks change constantly as users join and depart them. One of the key features of many peer-to-peer technologies is their use of a virtual name space (VNS). 
A VNS dynamically associates user-created names with the Internet address of whatever Internet-connected computer users happen to be using when they log on. The VNS facilitates point-to-point interaction between individuals, because it removes the need for users and their computers to know the addresses and locations of other users; the VNS can, to a certain extent, preserve users' anonymity and provide information on whether a user is or is not connected to the Internet at a given moment. Peer-to-peer users thus may appear to be anonymous; they are not, however. Law enforcement agents may identify users' Internet addresses during the file-sharing process and obtain, under a court order, their identities from their Internet service providers. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
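The decentralized search described above--Ted's query flooding from Carol to Bob to Alice, with the hit reported back so that Ted can download directly from Alice--can be sketched as follows. The hop limit (`ttl`) and the class structure are illustrative assumptions; real Gnutella clients also dedupe queries by message ID, which the `seen` set stands in for here.

```python
class Peer:
    def __init__(self, name, shared_files=()):
        self.name = name
        self.shared_files = set(shared_files)
        self.neighbors = []  # direct connections in the virtual network

    def connect(self, other):
        self.neighbors.append(other)
        other.neighbors.append(self)

    def search(self, query, ttl=7, seen=None):
        """Flood the query to neighbors; return the peers holding a match."""
        seen = seen if seen is not None else set()
        if self.name in seen or ttl < 0:
            return []
        seen.add(self.name)
        hits = [self] if any(query in f for f in self.shared_files) else []
        for peer in self.neighbors:
            hits.extend(peer.search(query, ttl - 1, seen))
        return hits

# Ted -> Carol -> Bob -> Alice, as in the description above.
ted = Peer("Ted")
carol = Peer("Carol")
bob = Peer("Bob")
alice = Peer("Alice", ["report.pdf"])
ted.connect(carol)
carol.connect(bob)
bob.connect(alice)

holders = ted.search("report")
assert [p.name for p in holders] == ["Alice"]
# Ted would now open a direct connection to Alice to download the file.
```

No central directory exists anywhere in this flow, which is why, as noted above, such networks present no single broker for copyright owners to sue.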
The availability of child pornography has dramatically increased in recent years as it has migrated from printed material to the World Wide Web, becoming accessible through Web sites, chatrooms, newsgroups, and now the increasingly popular peer-to-peer file-sharing programs. These programs enable direct communication between users, allowing users to access each other's files and share digital music, images, and video. GAO was requested to determine the ease of access to child pornography on peer-to-peer networks; the risk of inadvertent exposure of juvenile users of peer-to-peer networks to pornography, including child pornography; and the extent of federal law enforcement resources available for combating child pornography on peer-to-peer networks. Today's testimony is based on GAO's report on the results of that work (GAO-03-351). Because child pornography cannot be accessed legally other than by law enforcement agencies, GAO worked with the Customs CyberSmuggling Center in performing searches: Customs downloaded and analyzed image files, and GAO performed analyses based on keywords and file names only. Child pornography is easily found and downloaded from peer-to-peer networks. In one search, using 12 keywords known to be associated with child pornography on the Internet, GAO identified 1,286 titles and file names, determining that 543 (about 42 percent) were associated with child pornography images. Of the remaining, 34 percent were classified as adult pornography and 24 percent as non-pornographic. In another search using three keywords, a Customs analyst downloaded 341 images, of which 149 (about 44 percent) contained child pornography. These results are in accord with increased reports of child pornography on peer-to-peer networks; since it began tracking these in 2001, the National Center for Missing and Exploited Children has seen a fourfold increase--from 156 in 2001 to 757 in 2002. 
Although the numbers are as yet small by comparison to those for other sources (26,759 reports of child pornography on Web sites in 2002), the increase is significant. Juvenile users of peer-to-peer networks are at significant risk of inadvertent exposure to pornography, including child pornography. Searches on innocuous keywords likely to be used by juveniles (such as names of cartoon characters or celebrities) produced a high proportion of pornographic images: in our searches, the retrieved images included adult pornography (34 percent), cartoon pornography (14 percent), child erotica (7 percent), and child pornography (1 percent). While federal law enforcement agencies--including the FBI, Justice's Child Exploitation and Obscenity Section, and Customs--are devoting resources to combating child exploitation and child pornography in general, these agencies do not track the resources dedicated to specific technologies used to access and download child pornography on the Internet. Therefore, GAO was unable to quantify the resources devoted to investigating cases on peer-to-peer networks. According to law enforcement officials, however, as tips concerning child pornography on peer-to-peer networks escalate, law enforcement resources are increasingly being focused on this area.
Roughly half of all workers participate in an employer-sponsored retirement, or pension, plan. Private sector pension plans are classified as either defined benefit or defined contribution plans. Defined benefit plans promise to provide, generally, a fixed level of monthly retirement income that is based on salary, years of service, and age at retirement, regardless of how the plan's investments perform. In contrast, benefits from defined contribution plans are based on the contributions to and the performance of the investments in individual accounts, which may fluctuate in value. The Employee Retirement Income Security Act of 1974 (ERISA) establishes the responsibilities of employee benefit plan decision makers and the requirements for disclosing and reporting plan fees. Typically, the plan sponsor is a fiduciary. A plan fiduciary includes a person who has discretionary authority or control over plan management or any authority or control over the management or disposition of plan assets. ERISA requires that plan sponsors responsible for managing employee benefit plans carry out their plan responsibilities prudently and solely in the interest of the plan's participants and beneficiaries. Plan sponsors, as fiduciaries, are required to act on behalf of plan participants and their beneficiaries. These responsibilities include selecting and monitoring service providers to the plan, reporting plan information to the government and to participants, adhering to the plan's investment policy statement and other plan documents (unless inconsistent with ERISA), identifying parties-in-interest to the plan and taking steps to monitor transactions with them, selecting investment options the plan will offer and diversifying plan investments, and ensuring that the services provided to their plan are necessary and that the cost of those services is reasonable. 
Plan sponsors may receive some information on an investment option's expenses that includes management fees, distribution and/or service fees, and certain other fees, such as accounting and legal fees. These fees are usually disclosed in the fund's prospectus or fund profile. To better enable the agency to effectively oversee 401(k) plan fees, we recommended in November 2006 that the Secretary of Labor should require plan sponsors to report to Labor a summary of all fees that are paid out of plan assets or by participants. This summary should list fees by type, particularly investment fees that are being indirectly incurred by participants. In addition to receiving information about investment fees, sponsors may receive information about expenses for administration and other aspects of plan operations. Sponsors can also have providers fill out the Form 5500, which ultimately gets filed with Labor, and includes information about the financial condition and operation of their plans. Generally, information on 401(k) expenses is reported on two sections of the Form 5500, Schedule A and Schedule C. However, our November 2006 study reported that the form is of little use to plan sponsors and others in terms of understanding the cost of a plan. While plan sponsors may receive information on investment and other fees, they may not be receiving information on certain relevant business arrangements. In November 2006, we reported that several opportunities exist for such business arrangements to go undisclosed, given the various parties involved in creating and administering 401(k) plans. Problems may occur when pension consultants or other companies providing services to a plan also receive compensation from other service providers. Service providers may be steering plan sponsors toward investment products or services in which they have a direct business interest themselves without disclosing such arrangements. 
In addition, plan sponsors, being unaware, are often unable to report information about these arrangements to Labor on Form 5500 Schedule C. Our November 2006 report also recommended that Congress consider amending ERISA to require that service providers disclose to plan sponsors the compensation that providers receive from other service providers. In our prior report on 401(k) fees, we found that the fee information that ERISA requires 401(k) plan sponsors to disclose is limited and does not provide participants with an easy way to compare investment options. All 401(k) plans are required to provide disclosures on plan operations, participant accounts, and the plan's financial status. Although they often contain some information on fees, these documents are not required to disclose the fees borne by individual participants. Overall, we found that the information currently provided to participants does not offer a simple way for them to compare plan investment options and their fees and is provided in a piecemeal fashion. Additional fee disclosures are required for certain--but not all--plans in which participants direct their investments. ERISA requires disclosure of fee information to participants where plan sponsors seek liability protection from investment losses resulting from participants' investment decisions. Such plans--known as 404(c) plans--are required to provide participants with a broad range of investment alternatives, descriptions of the risks and historical performance of such investment alternatives, and information about any transaction fees and expenses in connection with buying or selling interests in such alternatives. Upon request, 404(c) plans must also provide participants with, among other information, the expense ratio for each investment option. Plan sponsors may voluntarily provide participants with more information on fees than ERISA requires, according to industry professionals.
For example, plan sponsors that do not elect to be 404(c) often distribute prospectuses or fund profiles when employees become eligible for the plan, just as 404(c) sponsors do. Still, absent requirements to do so, some plan sponsors may not identify all the fees participants pay. Some participants may be able to make comparisons across investment options by piecing together the fees that they pay, but doing so requires an awareness of fees that most participants do not have. Assessing fees across investment options can be difficult for participants because the data are typically not presented in a single document that facilitates comparison. However, most 401(k) investment options have expense ratios that are provided in prospectuses or fund profiles and can be compared; based on industry data, expenses for the majority of 401(k) assets, which are in investment options such as mutual funds, can be expressed as an expense ratio. Plan sponsors, as fiduciaries, must consider plan fee information related to a broad range of functions. According to Labor, ERISA requires that sponsors evaluate fee information associated with the investment options offered to participants and the providers they hire to perform plan services and consider the reasonableness of the expenses charged by the various providers of services to the plan. In addition, the sponsor must understand information concerning certain arrangements, such as when a service provider receives some share of its revenue from a third party. While industry professionals might agree about some of the information that sponsors need, they disagree about how much information is needed about individual expense components when a package of plan services, known as a bundled arrangement, is sold to a sponsor for a single price. Some pension plan associations and practitioners have made various suggestions to help plan sponsors collect meaningful information on expenses. 
Labor has also undertaken a number of activities related to the information on plan expenses that sponsors should consider. In order to carry out their duties, plan sponsors have an obligation under ERISA to prudently select and monitor plan investment options made available to the plan's participants and beneficiaries and the persons providing services to the plan. Understanding and evaluating the fees and expenses associated with a plan's investments and services is an important part of a fiduciary's responsibility. Plan sponsors need to monitor the fees and expenses associated with the plan's investment options and the services provided by outside vendors, including any revenue sharing arrangements, to determine whether the expenses continue to be reasonable for the services provided. Industry experts have suggested that plan sponsors be required to obtain complete information about investment options before adding them to the plan's menu and obtain information concerning arrangements where a service provider receives some share of its revenue from a third party. A number of associations recently put together a list of service- and fee-related data elements they believe defined contribution plan sponsors and service providers should discuss when entering into agreements. The data elements include such information as payments received by plan service providers from affiliates in connection with services to the plan, float revenue, and investment-related consulting services. The list is meant as a reference tool for plan sponsors and providers to use to determine the extent to which a service provider receives compensation in connection with its services to the plan from other service providers or plan investment products (e.g., revenue sharing or finders' fees).
According to the associations that formulated this tool, the information can help plan sponsors evaluate any potential conflicts of interest that may arise in how fees are allocated among service providers. In our prior work, we noted that plan sponsors may not have information on arrangements among service providers that, according to Labor officials, could steer plan sponsors toward offering investment options that benefit service providers but may not be in the best interest of participants. For example, the Securities and Exchange Commission (SEC) released a report in May 2005 that raised questions about whether some pension consultants are fully disclosing potential conflicts of interest that may affect the objectivity of the advice. In addition, specific fees that are considered to be "hidden" may mask the existence of a conflict of interest. Hidden fees are usually related to business arrangements where one service provider to a 401(k) plan pays a third-party provider for services, such as record keeping, but does not disclose this compensation to the plan sponsor. The problem with hidden fees is not how much is being paid to the service provider, but whether the plan sponsor knows what entity is receiving the compensation and whether the compensation fairly represents the value of the service being rendered. While there is general agreement that understanding the fees and expenses associated with a plan's services is an important part of a fiduciary's responsibility, pension professionals disagree about how much information is needed about the expense components of bundled fee arrangements.
One representative speaking on behalf of five industry associations stated he did not believe that the requirement to "unbundle" bundled services and provide individual costs in many detailed categories was particularly helpful because the information provided would not be very meaningful and the costs of providing this information would ultimately be passed on to plan participants through higher administrative fees. He also raised concerns about how a service provider would disclose component costs for services that are not offered outside a bundled contract. In addition, he said that posting such information could force public disclosure of proprietary information regarding contracts between service providers and plan sponsors. Finally, he stated that as long as they are fully informed of the services being provided, many plan sponsors might prefer reviewing aggregate costs so that they can compare and evaluate whether the overall fees are reasonable without analyzing each itemized fee. On the other hand, a representative of another pension association contended that it is possible with very little cost to develop an allocation methodology to provide a reasonable breakdown of fees for plan services. He believes that not disclosing component pricing provides a competitive advantage, enabling bundled providers to tell plan sponsors that they can offer certain retirement plan services for free--when fees are deducted from investment returns--while unbundled providers are required to disclose the fees for the same services. He further stated that any disclosure requirements should apply uniformly to all service providers. In his view this would allow plan fiduciaries to assess the reasonableness of fees by comparison and thereby allow fiduciaries to determine whether certain services are needed, which could lead to lower fees. 
Industry professionals have suggested that, before hiring a service provider or adding investment options to the plan's menu, plan sponsors should obtain complete fee information, including information concerning arrangements in which a service provider receives some share of its revenue from a third party. Pension plan associations and practitioners have made various suggestions to help plan sponsors collect meaningful information on expenses. In 2004 the ERISA Advisory Council on Employee Welfare and Pension Benefit Plans created a Working Group to study retirement plan investment management fees and expenses as they were currently reported to Labor. In addition to issues related to annual reporting, the Working Group was also interested in determining whether plan sponsors currently receive adequate data from the service providers in order to both understand and report fees. In its final report, the Working Group made the following recommendations, among others, in an effort to further educate plan sponsors and fiduciaries about plan fees: Plan sponsors should avoid entering transactions with vendors who refuse to disclose the amount and sources of all fees and compensation received in connection with the plan. Plan sponsors should require plan providers to provide a detailed written analysis of all fees and compensation (whether directly or indirectly) to be received for its services to the plan prior to retention. Plan sponsors should obtain all information on fees and expenses as well as revenue sharing arrangements with each investment option. Plan sponsors should also determine the availability of other mutual funds or share classes within a mutual fund with lower revenue sharing arrangements prior to selecting an investment option. Plan sponsors should require vendors to provide annual written statements with respect to all compensation, both direct and indirect, received by the provider in connection with its services to the plan.
Plan sponsors need to be aware that with asset-based fees, fees can grow just as the size of the asset pool grows, regardless of whether any additional services are provided by the vendor, and as a result, asset-based fees should be monitored periodically. Plan sponsors should calculate the total plan costs annually. More recently in 2007, one witness before the ERISA Advisory Council recommended further that plan sponsors should evaluate fees associated with three categories of services: Net investment expenses would not only include investment expenses, such as the expense ratio of a mutual fund, but would also subtract any fees or commissions paid to a broker, consultant, or advisor for services in the categories below. Administrative expenses would include specific charges for operational services, such as record keeping, administration, compliance, and communication, as well as revenue sharing or other payments from investments. Advisory expenses would include amounts paid directly by the plan to consultants, advisors, or brokers, as well as indirect payments from sources such as investments or related companies. In addition, some industry professionals believe that plan sponsors, as they monitor investment alternatives, should review investment alternative results against appropriate benchmarks and compare their plans' options to competing funds with similar investment goals. A benchmark is used to compare specific investment results with that of the market or economy. Industry professionals also noted that although there are appropriate benchmarks for mutual funds, benchmarks are not as readily available for other types of investment products. According to one industry professional that we spoke with, plan sponsors do not have good benchmarks to assess the reasonableness of investment options' expense ratios. Only limited information is available, and a national database of funds and their expense ratios does not exist.
He further stated that without such a source, selecting which funds constitute a meaningful comparison set is not an easy task, and may be open to interpretation. Disclosure encourages price competition, but in his opinion, because of the lack of available information, the 401(k) market is relatively ineffective at fostering price competition. Labor, in its comments on our November 2006 report, stated that the agency has proposed a number of changes to the Form 5500, including changes that would expand the information required to be reported on the Schedule C. The changes are intended to assist plan sponsors in assessing the reasonableness of compensation paid for services and potential conflicts of interest that might affect those services. According to testimony earlier this month from the Assistant Secretary of Labor, the agency will be issuing a final regulation requiring additional public disclosure of fee and expense information on the Form 5500 within the next few weeks. This change will be helpful to plan sponsors as they look retrospectively at the preceding plan year. In addition, Labor was considering an amendment to its regulation under section 408(b)(2) of ERISA, expected to be issued this year. This amendment would help to ensure that plan sponsors have sufficient information on the compensation to be paid to the service provider and the revenue sharing compensation paid by the plan for the specific services and potential conflicts of interest that may exist on the part of the service provider. Labor's ERISA Advisory Council currently has a working group focusing on fiduciary responsibility and revenue sharing. One area of focus is what service providers should be required to provide when they enter into a revenue sharing or rebate arrangement. Labor also provides a model form on its Web site specifically designed to assist plan fiduciaries and service providers in exchanging complete disclosures concerning the costs involved in service arrangements. 
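The monitoring point raised above, that an asset-based fee grows in dollar terms as the asset pool grows even when the services provided do not change, reduces to simple arithmetic. A minimal sketch (the fee rate and asset levels are assumptions for illustration, not figures from this testimony):

```python
def annual_fee_dollars(plan_assets, fee_rate):
    """Dollar fee charged for one year under an asset-based arrangement."""
    return plan_assets * fee_rate

fee_rate = 0.005  # 50 basis points; an assumed rate
for assets in (10_000_000, 20_000_000, 40_000_000):
    # Same rate, same services, but the dollar charge scales with assets.
    print(f"${assets:,} in assets -> ${annual_fee_dollars(assets, fee_rate):,.0f} per year")
```

At a constant rate, a doubling of plan assets doubles the vendor's dollar compensation, which is why periodic monitoring of asset-based arrangements is recommended.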
Other associations and entities continue to develop model fee disclosure forms for plan sponsors. We are currently conducting work in the area of 401(k) plan sponsor practices, identifying how plan sponsors decide which features to include in the plans they establish and how plan sponsors oversee plan operations. Part of our work will consider how plan sponsors monitor the fees charged to their plans. We expect to issue a report in 2008. Before making informed decisions about their 401(k) plan investments, participants must first be made aware of the types of plan fees that they pay. For example, according to one nationwide survey, some participants do not even know that they pay plan fees. In 2006, we reported that investment fees constitute the majority of fees in 401(k) plans and are typically borne by participants. Most industry professionals agree that information about investment fees--such as the expense ratio, a fund's operating fees as a percentage of its assets--is fundamental for plan participants. Participants also need to be aware of other types of fees-- such as record-keeping fees and redemption fees or surrender charges imposed for changing or selling investments--to gain a more complete understanding of all the fees that can affect their account balances. Whether participants receive only basic expense ratio information or more detailed information on various fees, presenting the information in a clear, easily comparable format can help participants understand the content of the disclosure. Currently, most participants are responsible for directing their investments among the choices offered by their 401(k) plans, but may not be aware of the different fees that they pay. According to industry professionals, participants are often unaware that they pay any fees associated with their 401(k) plan. In fact, studies have shown that 401(k) participants often lack the most basic knowledge--that there are fees associated with their plan. 
When asked in a recent nationwide survey whether they pay any fees for the 401(k) plan, as figure 1 shows, 65 percent of 401(k) participants responded that they do not pay fees. Seventeen percent said they do pay fees, and 18 percent stated that they do not know. When this same group was asked how much they pay in fees, as shown in figure 2, 83 percent reported not knowing. Although it is clear that participants require fee information to make informed decisions, it is not so clear what fee information is most relevant. In 2006, we reported that investment fees constitute the majority of fees in 401(k) plans and are typically borne by participants. Investment fees are, for example, fees charged by companies that manage a mutual fund for all services related to operating the fund. These fees pay for selecting a mutual fund's portfolio of securities and managing the fund; marketing the fund and compensating brokers who sell the fund; and providing other shareholder services, such as distributing the fund prospectus. These fees are charged regardless of whether the mutual fund or other investment product, such as collective investment funds or group annuity contracts, is part of a 401(k) plan or purchased by individual investors in the retail market. As such, the fees are usually different for each investment option available to participants in a 401(k) plan. In our previous report, we recommended that Congress consider amending ERISA to require all sponsors of participant-directed plans to disclose fee information on 401(k) investment options to participants in a way that facilitates comparison among the options, such as via expense ratios. As mentioned earlier, there have been at least two bills recently introduced in Congress on the subject. Industry professionals have also suggested that comparing the expense ratio across investment options is the most effective way to compare options' fees. 
They generally agree that an expense ratio provides valuable information that participants need and can be used to compare investment options because it includes investment fees, which constitute most of the total fees borne by participants. According to an industry official, the disclosure of expense ratios might include a general description of how expense ratios vary depending on the type and style of investment. For example, investment options with relatively high fees, such as actively managed funds, tend to have larger expense ratios than funds that are not actively managed. Also, investment options that are only available to institutional investors tend to have lower expense ratios than other types of funds. Most of the investment options offered in 401(k) plans have expense ratios that can be compared, but this information is not always provided to participants. In addition, investment options other than mutual funds may not be required to produce prospectuses that include expense ratios, but according to industry professionals, most options have expense ratio equivalents that investment industry professionals can identify. Industry professionals also believe that participants need information on other fees that are not included in the expense ratio but still affect their account balances. For example, annual fees or fees on a per transaction basis that can be deducted from account balances should be disclosed, such as administrative and record-keeping fees, participant loan origination fees, and annual loan charges. 
In addition, industry professionals also recommended that certain investment-specific fees be disclosed, including redemption fees or sales charges--fees that may be imposed by the provider as a result of changing investments in a given period; surrender charges--fees that may be imposed as a result of selling or withdrawing money from the investment within a given number of years after investing; and wrap fees--fees that are assessed on the total assets in a participant's account. Some industry professionals recommended that plan participants be provided information on their returns net of all fees so that they can clearly see what their investments have earned after fees. Others recommended that information be disclosed that explains how the investment and administrative costs of the plan affect their investment returns and their overall retirement savings in the plan. These officials believed that such information would help participants understand that fees are an important factor to consider when directing their investments. Whether participants are provided with basic expense ratio information or more detailed information on various fees, or both, providing the information in a clear, easily comparable format can assist participants in understanding the information disclosed. In our prior reports on helping the public understand Social Security information and on more effective disclosures for credit cards, we found that certain practices help people understand complicated information. These practices include: language--writing information in clear language; layout--using straightforward layout and graphics; length--providing a short document; comparability--making options easy to compare in a single document; and distribution--offering a choice of paper or electronic distribution.
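The expense ratio that recurs throughout this discussion, a fund's annual operating expenses expressed as a percentage of its assets, is what makes investment options comparable on a single number. A hedged sketch with assumed figures (the two funds and all dollar amounts are hypothetical):

```python
def expense_ratio(annual_operating_expenses, average_net_assets):
    """Operating expenses expressed as a fraction of fund assets."""
    return annual_operating_expenses / average_net_assets

# Two hypothetical options of equal size: an actively managed fund,
# which tends to carry a larger expense ratio, and an index fund.
options = {
    "actively managed fund": expense_ratio(1_200_000, 100_000_000),
    "index fund": expense_ratio(200_000, 100_000_000),
}
for name, ratio in options.items():
    print(f"{name}: {ratio:.2%}")
```

Because both figures are stated per dollar of assets, a participant can rank options directly, which is why disclosure in expense-ratio form is the comparison method most industry professionals favor.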
In our prior work, we noted that Labor is considering the development of a new rule regarding the fee information required to be furnished to participants under its section 404(c) regulation. According to Labor officials, they are attempting to identify the critical information on fees that plan sponsors should disclose to participants of 404(c) plans (but not all participant-directed plans) and the best way to do so. The initiative is intended to explore what steps might be taken to ensure that participants have the information they need about their plan and available investment options, without imposing additional costs, given that such costs are likely to be charged against the individual accounts of participants and affect their retirement savings. The officials are currently considering what fee information should be provided to participants and what format would enable participants to easily compare the fees across a plan's various investment options. Labor is also currently evaluating comments received from consumer groups, plan sponsors, service providers, and others as it develops its regulation. Labor also has ongoing efforts designed to help participants and plan sponsors understand the importance of plan fees and the effect of those fees on retirement savings. Labor has developed and makes available on its Web site a variety of educational materials specifically designed to help plan participants understand the complexities of the various fee and compensation arrangements involved in 401(k) plans. Its brochure titled A Look at 401(k) Plan Fees is targeted to participants and beneficiaries of 401(k) plans who are responsible for directing their own investments. Both 401(k) plan sponsors and participants need fee information in order to make the most informed decisions. 
For plan sponsors, requiring that certain information on fees be disclosed can help them understand what services they are paying for, who is benefiting, and whether their current arrangements are in the best interest of plan participants. Requiring plan sponsors to report more complete information to Labor on fees--including those paid out of plan assets by participants--would put the agency in a better position to effectively oversee 401(k) plans and, in doing so, to protect an increasing number of participants. The mere act of requiring such information may actually promote competition among the entities that provide services to plans and possibly reduce the fees service providers charge. For plan participants, given the voluminous amount of information that could be disclosed, determining the relevant information that participants most need is key. At a minimum, providing information such as expense ratios or other investment-specific fee information could be the place to start. Also, making sure that the information is accessible in terms of the language, layout, length, comparability, and distribution can ensure that participants actively utilize the information disclosed. As participants become more sophisticated or demand more information, decisions can then be made about the type and format of additional fee information. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other members of the committee may have at this time. For further information regarding this testimony, please contact Barbara D. Bovbjerg, Director, Education, Workforce, and Income Security Issues, at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Tamara E. Cross, Assistant Director; Daniel F. Alspaugh; Monika R. Gomez; Matthew J. Saradjian; Susannah L. 
Compton; Craig H. Winslow; and Walter K. Vance. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Employers are increasingly moving away from traditional pension plans to what has become the most dominant and fastest growing type of plan, the 401(k). For 401(k) plan sponsors, understanding the fees being charged helps fulfill their fiduciary responsibility to act in the best interest of plan participants. Participants should consider fees as well as the historical performance and investment risk for each plan option when investing in a 401(k) plan because fees can significantly decrease retirement savings over the course of a career. GAO's prior work found that information on 401(k) fees is limited. GAO previously made recommendations to both Congress and the Department of Labor (Labor) on ways to improve the disclosure of fee information to plan participants and sponsors and reporting of fee information by sponsors to Labor. Both Labor and Congress now have efforts under way to ensure that both participants and sponsors receive the necessary fee information to make informed decisions. These efforts on the subject have generated significant debate. This testimony provides information on 401(k) plan fees that (1) sponsors need to carry out their responsibilities to the plan and (2) plan participants need to make informed investment decisions. To complete this statement, GAO relied on previous work and additional information from Labor and industry professionals regarding information about plan fees. Information on 401(k) plan fee disclosure serves different functions for plan sponsors and participants. Plan sponsors need to understand a broad range of information on expenses associated with their plans to fulfill their fiduciary responsibilities. Sponsors need information on expenses associated with the investment options that they offer to participants and the providers they hire to perform plan services. Such information would help them meet their fiduciary duty to determine if expenses are reasonable for the services provided. 
In addition, sponsors also need to understand the implication of certain business arrangements between service providers, such as revenue sharing. Despite some disagreements about how much information is needed, industry professionals have made various suggestions to help plan sponsors collect meaningful information on expenses. Labor has also undertaken a number of activities related to the information on plan fees that sponsors should consider. Participants need fee information to make informed decisions about their investments--primarily, whether to contribute to the plan and how to allocate their contributions among the investment options the plan sponsor has selected. However, many participants are not aware that they pay any fees, and those who are may not know how much they are paying. Most industry professionals agree that information about an investment option's relative risk, its historic performance, and the associated fees is fundamental for plan participants. Some industry professionals also believe that other fees that are also charged to participants should be understood, so that participants can clearly see the effect these fees can have on their account balances.
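The effect that fees can have on account balances, noted above, compounds over a working career. A minimal sketch, with every figure (starting balance, contribution, return, fee rates, horizon) assumed for illustration rather than drawn from GAO data:

```python
def ending_balance(initial, annual_contribution, gross_return, fee, years):
    """Grow an account for `years`, netting the annual fee out of each year's return."""
    balance = initial
    for _ in range(years):
        balance = balance * (1 + gross_return - fee) + annual_contribution
    return balance

# Identical saving behavior; only the annual fee differs.
low_fee = ending_balance(20_000, 5_000, 0.07, 0.005, 30)   # 0.5 percent fee
high_fee = ending_balance(20_000, 5_000, 0.07, 0.015, 30)  # 1.5 percent fee
print(f"0.5% fee: ${low_fee:,.0f}; 1.5% fee: ${high_fee:,.0f}; "
      f"difference: ${low_fee - high_fee:,.0f}")
```

Under these assumptions, a one-percentage-point difference in annual fees leaves a gap of roughly a fifth of the ending balance after 30 years, which is the sense in which fees "can significantly decrease retirement savings over the course of a career."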
Countries provide food aid through either in-kind donations or cash donations. In-kind food aid is food procured and delivered to vulnerable populations, while cash donations are given to implementing organizations to purchase food in local, regional, or global markets. U.S. food aid programs are all in-kind, and no cash donations are allowed under current legislation. However, the administration has recently proposed legislation to allow up to 25 percent of appropriated food aid funds to be used to purchase commodities in locations closer to where they are needed. Other food aid donors have also recently moved from providing primarily in-kind aid to more or all cash donations for local procurement. Despite ongoing debates as to which form of assistance is more effective and efficient, the largest international food aid organization, the United Nations (UN) World Food Program (WFP), continues to accept both. The United States is both the largest overall and in-kind provider of food aid to WFP, supplying about 43 percent of WFP's total contributions in 2006 and 70 percent of WFP's in-kind contributions in 2005. Other major donors of in-kind food aid in 2005 included China, the Republic of Korea, Japan, and Canada. In fiscal year 2006, the United States delivered food aid through its largest program to over 50 countries, with about 80 percent of its funding allocations for in-kind food donations going to Africa, 12 percent to Asia and the Near East, 7 percent to Latin America, and 1 percent to Eurasia. Of the 80 percent of the food aid funding going to Africa, 30 percent went to Sudan, 27 percent to the Horn of Africa, 18 percent to southern Africa, 14 percent to West Africa, and 11 percent to Central Africa. Over the last several years, funding for nonemergency U.S. food aid programs has declined. For example, in fiscal year 2001, the United States directed approximately $1.2 billion of funding for international food aid programs to nonemergencies.
In contrast, in fiscal year 2006, the United States directed approximately $698 million for international food aid programs to nonemergencies. U.S. food aid is funded under four program authorities and delivered through six programs administered by USAID and USDA; these programs serve a range of objectives, including humanitarian goals, economic assistance, foreign policy, market development, and international trade. (For a summary of the six programs, see app. I.) The largest program, P.L. 480 Title II, is managed by USAID and represents approximately 74 percent of total in-kind food aid allocations over the past 4 years, mostly to fund emergency programs. The Bill Emerson Humanitarian Trust, a reserve of up to 4 million metric tons of grain, can be used to fulfill P.L. 480 food aid commitments to meet unanticipated emergency needs in developing countries or when U.S. domestic supplies are short. U.S. food aid programs also have multiple legislative and regulatory mandates that affect their operations. One mandate that governs U.S. food aid transportation is cargo preference, which is designed to support a U.S.-flag commercial fleet for national defense purposes. Cargo preference requires that 75 percent of the gross tonnage of all government-generated cargo be transported on U.S.-flag vessels. A second transportation mandate, known as the Great Lakes Set-Aside, requires that up to 25 percent of Title II bagged food aid tonnage be allocated to Great Lakes ports each month. Multiple challenges in logistics hinder the efficiency of U.S. food aid programs by reducing the amount, timeliness, and quality of food provided. While in some cases agencies have tried to expedite food aid delivery, most food aid program expenditures are for logistics, and the delivery of food from vendor to village is generally too time-consuming to be responsive in emergencies. 
Factors that increase logistical costs and lengthen time frames include uncertain funding processes and inadequate planning, ocean transportation contracting practices, legal requirements, and inadequate coordination in tracking and responding to food delivery problems. While U.S. agencies are pursuing initiatives to improve food aid logistics, such as prepositioning food commodities and using a new transportation bid process, their long-term cost-effectiveness has not yet been measured. In addition, the current practice of selling commodities to generate cash resources for development projects--monetization--is an inherently inefficient yet expanding use of food aid. Monetization entails not only the costs of procuring, shipping, and handling food, but also the costs of marketing and selling it in recipient countries. Furthermore, the time and expertise needed to market and sell food abroad requires NGOs to divert resources from their core missions. However, the permissible use of revenues generated from this practice and the minimum level of monetization allowed by the law have expanded. The monetization rate for Title II nonemergency food aid has far exceeded the minimum requirement of 15 percent, reaching close to 70 percent in 2001 but declining to about 50 percent in 2005. Despite these inefficiencies, U.S. agencies do not collect or maintain data electronically on monetization revenues, and the lack of such data impedes the agencies' ability to fully monitor the degree to which revenues can cover the costs related to monetization. USAID used to require that monetization revenues cover at least 80 percent of costs associated with delivering food to recipient countries, but this requirement no longer exists. 
Neither USDA nor USAID was able to provide us with data on the revenues generated through monetization. These agencies told us that the information should be in the results reports, which are in individual hard copies and not available in any electronic database. Various challenges to implementation, improving nutritional quality, and monitoring reduce the effectiveness of food aid programs in alleviating hunger. Since U.S. food aid assists only about 11 percent of the estimated hungry population worldwide, it is critical that donors and implementers use it effectively by ensuring that it reaches the most vulnerable populations and does not cause negative market impact. However, challenging operating environments and resource constraints limit implementation efforts in terms of developing reliable estimates of food needs and responding to crises in a timely manner with sufficient food and complementary assistance. Furthermore, some impediments to improving the nutritional quality of U.S. food aid, including lack of interagency coordination in updating food aid products and specifications, may prevent the most nutritious or appropriate food from reaching intended recipients. Despite these concerns, USAID and USDA do not sufficiently monitor food aid programs, particularly in recipient countries, as they have limited staff and competing priorities and face legal restrictions on the use of food aid resources. Some impediments to improving nutritional quality further reduce the effectiveness of food aid. Although U.S. agencies have made efforts to improve the nutritional quality of food aid, the appropriate nutritional value of the food and the readiness of U.S. agencies to address nutrition-related quality issues remain uncertain. Further, existing interagency food aid working groups have not resolved coordination problems on nutrition issues. Moreover, USAID and USDA do not have a central interagency mechanism to update food aid products and their specifications. 
As a result, vulnerable populations may not be receiving the most nutritious or appropriate food from the agencies, and disputes may occur when either agency attempts to update the products. Although USAID and USDA require implementing organizations to regularly monitor and report on the use of food aid, these agencies have undertaken limited field-level monitoring of food aid programs. Agency inspectors general have reported that monitoring has not been regular and systematic, that in some cases intended recipients have not received food aid, or that the number of recipients could not be verified. Our audit work also indicates that monitoring has been insufficient due to various factors including limited staff, competing priorities, and legal restrictions on the use of food aid resources. In fiscal year 2006, although USAID had some non-Title II-funded staff assigned to monitoring, it had only 23 Title II-funded USAID staff assigned to missions and regional offices in 10 countries to monitor programs costing about $1.7 billion in 55 countries. USDA administers a smaller proportion of food aid programs than USAID and its field-level monitoring of food aid programs is more limited. Without adequate monitoring from U.S. agencies, food aid programs may not effectively direct limited food aid resources to those populations most in need. As a result, agencies may not be accomplishing their goal of getting the right food to the right people at the right time. U.S. international food aid programs have helped hundreds of millions of people around the world survive and recover from crises since the Agricultural Trade Development and Assistance Act (P.L. 480) was signed into law in 1954. Nevertheless, in an environment of increasing emergencies, tight budget constraints, and rising transportation and business costs, U.S. agencies must explore ways to optimize the delivery and use of food aid. U.S. 
agencies have taken some measures to enhance their ability to respond to emergencies and streamline the myriad processes involved in delivering food aid. However, opportunities for further improvement remain to ensure that limited resources for U.S. food aid are not vulnerable to waste, are put to their most effective use, and reach the most vulnerable populations on a timely basis. To improve the efficiency of U.S. food aid--in terms of its amount, timeliness, and quality--we recommended in our previous report that the Administrator of USAID and the Secretaries of Agriculture and Transportation (1) improve food aid logistical planning through cost- benefit analysis of supply-management options; (2) work together and with stakeholders to modernize ocean transportation and contracting practices; (3) seek to minimize the cost impact of cargo preference regulations on food aid transportation expenditures by updating implementation and reimbursement methodologies to account for new supply practices; (4) establish a coordinated system for tracking and resolving food quality complaints; and (5) develop an information collection system to track monetization transactions. 
To improve the effective use of food aid, we recommended that the Administrator of USAID and the Secretary of Agriculture (1) enhance the reliability and use of needs assessments for new and existing food aid programs through better coordination among implementing organizations, make assessments a priority in informing funding decisions, and more effectively build on lessons from past targeting experiences; (2) determine ways to provide adequate nonfood resources in situations where there is sufficient evidence that such assistance will enhance the effectiveness of food aid; (3) develop a coordinated interagency mechanism to update food aid specifications and products to improve food quality and nutritional standards; and (4) improve monitoring of food aid programs to ensure proper management and implementation. DOT, USAID, and USDA--the three U.S. agencies to whom we directed our recommendations--have submitted written statements to congressional committees, as required by law, to report actions they have taken or begun to take to address our recommendations. In May 2007, these agencies established an interagency Executive Working Group to identify ways to respond to several of our recommendations. DOT stated that it strongly supported the transportation-related initiatives we recommended, noting that they offer the potential to help U.S. agencies achieve efficiencies and reduce ocean transportation costs while supporting the U.S. merchant fleet. USAID outlined actions it is considering, has initiated, or intends to take to address each of our nine recommendations. USDA stated that in general it found our recommendations to be helpful and cited some of its ongoing efforts to improve its food aid programs. However, USDA questioned some of our conclusions that it believed were the result of weaknesses in our methodology. 
For example, USDA does not agree that the current practice of monetization as a means to generate cash for development projects is an inherently inefficient use of resources. We maintain that it is an inherently inefficient use of resources because it requires food to be procured, shipped, and eventually sold, and the revenues from monetization may not recover shipping, handling, and other costs. Furthermore, U.S. agencies do not electronically collect data on monetization revenues, without which their ability to adequately monitor the degree to which revenues cover costs is impeded. We stand by our conclusions and recommendations, which are based on a rigorous and systematic review of multiple sources of evidence, including procurement and budget data, site visits, previous audits, agency studies, economic literature, and testimonial evidence collected in both structured and unstructured formats. Madam Chair and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you may have. Should you have any questions about this testimony, please contact Thomas Melito, Director, at (202) 512-9601 or [email protected]. Other major contributors to this testimony were Phillip Thomas (Assistant Director), Carol Bray, Ming Chen, Debbie Chung, Martin De Alteriis, Leah DeWolf, Mark Dowling, Etana Finkler, Kristy Kennedy, Joy Labez, Kendall Schaefer, and Mona Sehgal. The United States has principally employed six programs to deliver food aid: Public Law (P.L.) 480 Titles I, II, and III; Food for Progress; the McGovern-Dole Food for Education and Child Nutrition; and Section 416(b). Table 1 provides a summary of these food aid programs.
The United States is the largest global food aid donor, accounting for over half of all food aid supplies to alleviate hunger and support development. Since 2002, Congress has appropriated an average of $2 billion per year for U.S. food aid programs, which delivered an average of 4 million metric tons of food commodities per year. Despite growing demand for food aid, rising business and transportation costs have contributed to a 52 percent decline in average tonnage delivered between 2001 and 2006. These costs represent 65 percent of total emergency food aid expenditures, highlighting the need to maximize the efficiency and effectiveness of food aid. This testimony is based on a recent GAO report that examined some key challenges to the (1) efficiency of U.S. food aid programs and (2) effective use of U.S. food aid. Multiple challenges hinder the efficiency of U.S. food aid programs by reducing the amount, timeliness, and quality of food provided. Factors that cause inefficiencies include (1) insufficiently planned food and transportation procurement, reflecting uncertain funding processes, that increases delivery costs and time frames; (2) ocean transportation and contracting practices that create high levels of risk for ocean carriers, resulting in increased rates; (3) legal requirements that result in awarding of food aid contracts to more expensive service providers; and (4) inadequate coordination between U.S. agencies and food aid stakeholders in tracking and responding to food and delivery problems. U.S. agencies have taken some steps to address timeliness concerns. USAID has been stocking or prepositioning food domestically and abroad, and USDA has implemented a new transportation bid process, but the long-term cost effectiveness of these initiatives has not yet been measured. The current practice of using food aid to generate cash for development projects--monetization--is also inherently inefficient. Furthermore, since U.S. 
agencies do not collect monetization revenue data electronically, they are unable to adequately monitor the degree to which revenues cover costs. Numerous challenges limit the effective use of U.S. food aid. Factors contributing to limitations in targeting the most vulnerable populations include (1) challenging operating environments in recipient countries; (2) insufficient coordination among key stakeholders, resulting in disparate estimates of food needs; (3) difficulties in identifying vulnerable groups and causes of their food insecurity; and (4) resource constraints that adversely affect the timing and quality of assessments, as well as the quantity of food and other assistance. Furthermore, some impediments to improving the nutritional quality of U.S. food aid may reduce its benefits to recipients. Finally, U.S. agencies do not adequately monitor food aid programs due to limited staff, competing priorities, and restrictions on the use of food aid resources. As a result, these programs are vulnerable to not getting the right food to the right people at the right time.
The SSI program provides eligible aged, blind, or disabled persons with monthly cash payments to meet basic needs for food, clothing, and shelter. State Disability Determination Services determine whether SSI applicants are medically disabled, and SSA field office staff determine whether applicants meet the program's nonmedical (age and financial) eligibility requirements. To be eligible for SSI in 2002, persons may not have income greater than $545 per month ($817 for a couple) or resources worth more than $2,000 ($3,000 for a couple). When applying for SSI, persons must report information about their income, financial resources and living arrangements that affect their eligibility. Similarly, once approved, recipients must report changes to these factors in a timely manner. To a significant extent, SSA depends on program applicants and recipients to report changes in their medical or financial circumstances that may affect eligibility. To verify this information, SSA generally uses computer matching to compare SSI payment records with similar information contained in other federal and state government agencies' records. To determine whether recipients remain financially eligible for SSI benefits, SSA also conducts periodic redetermination reviews to verify eligibility factors such as income, resources and living arrangements. Recipients are reviewed at least every 6 years, but reviews may be more frequent if SSA determines that changes in eligibility are likely. In general, the SSI program is difficult and costly to administer because even small changes in monthly income, available resources, or living arrangements can affect benefit amounts and eligibility. Complicated policies and procedures determine how to treat various types of income, resources, and support that a recipient may receive. SSA must constantly monitor these situations to ensure benefit payments are accurate. 
After reviewing work spanning more than a decade, we designated SSI a high-risk program in 1997 and initiated work to document the underlying causes of long-standing problems and their impact on program integrity. In 1998, we reported on a variety of management issues related to the deterrence, detection, and recovery of SSI overpayments. Over the last several years, we also issued a number of reports and testimonies documenting SSA's progress in addressing these issues. Over the last several years, SSA has demonstrated a stronger management commitment to SSI program integrity issues, and today SSA has a much improved capability to verify program eligibility and detect payment errors than it did several years ago. However, weaknesses remain. SSA has made limited progress toward simplifying complex program rules that contribute to payment errors and is not fully utilizing several overpayment prevention tools, such as penalties and the suspension of benefits for recipients who fail to report eligibility information as required. SSA issued a report in 1998 outlining its strategy for addressing SSI program integrity problems and submitted proposals to Congress requesting new authorities and tools to implement its strategy. The Foster Care Independence Act of 1999 gave SSA new authority to deter fraudulent or abusive actions, better detect changes in recipient income and financial resources, and improve its ability to recover overpayments. Of particular note is a provision in the act that strengthened SSA's authority to obtain applicant resource information from banks and other financial institutions, since unreported financial resources are the second largest source of SSI overpayments. SSA also sought and received legislative authority to impose a period of benefit ineligibility ranging from 6 to 24 months for individuals who knowingly misrepresent facts. 
In addition to seeking and obtaining new legislative authority, SSA also began requiring its field offices to complete 99 percent of their assigned financial redetermination reviews and other cases where computer matching identified a potential overpayment situation caused by unreported wages, changes in living arrangements, or other factors. To further increase staff attention to program integrity issues, SSA also revised its work measurement system--used for estimating resource needs, gauging productivity, and justifying staffing levels--to include staff time spent developing information for referrals of potentially fraudulent cases to its Office of Inspector General (OIG). Consistent with this new emphasis, the OIG also increased the level of resources and staff devoted to investigating SSI fraud and abuse, in order to detect, and prevent, overpayments earlier in the disability determination process. The OIG reported that its investigative teams saved almost $53 million in fiscal year 2001 in improper benefit payments by providing information that led to denial of a claim or the cessation of benefits. Further, in a June 2002 SSI corrective action plan, SSA reaffirmed its commitment to taking actions to facilitate the removal of the SSI program from our high-risk list. To ensure effective implementation of this plan, SSA has assigned senior managers responsibility for overseeing additional planned initiatives, which include piloting new quality assurance systems, testing whether touchtone telephone technology can improve the reporting of wages, and using credit bureau data and public databases to better detect underreported income and unreported resources (automobiles and real property). To assist field staff in verifying the identity of recipients, SSA is also exploring the feasibility of requiring new SSI claimants to be photographed as a condition of receiving benefits. 
SSA has made several automation improvements over the last several years to help field managers and staff control overpayments. Last year, the agency distributed software nationwide that automatically scans multiple internal and external databases containing recipient financial and employment information and identifies potential changes in income and resources. This examination of financial data occurs automatically whenever a recipient's Social Security number (SSN) is entered into the system. SSA also made systems enhancements to better identify newly entitled recipients with unresolved overpayments from a prior SSI coverage period. Now, the process of detecting overpayments from a prior eligibility period and updating recipient records occurs automatically. Thus, a substantial amount of outstanding overpayments that SSA might not have detected under prior processes is now subject to collection action. In fact, the monthly amount of outstanding overpayments transferred to current records has increased on average by nearly 200 percent, from $12.9 million a month in 1999 to more than $36 million per month in 2002. In addition to systems and software upgrades, SSA now uses more timely and comprehensive data to identify information that can affect SSI eligibility and benefit amounts. In accordance with our prior report recommendation, SSA obtained access to the Office of Child Support Enforcement's National Directory of New Hires (NDNH), which is a comprehensive source of unemployment insurance and wage and new hires data for the nation. In January 2001, SSA field staff received access to NDNH for use in verifying applicant eligibility during the initial claims process. Recently, SSA also began requiring staff to use NDNH as a post- eligibility tool for verifying current recipients' continuing eligibility. 
With NDNH, SSA field staff now have access to more comprehensive and timely employment and wage information essential to verifying factors affecting SSI eligibility. SSA has estimated that using NDNH will result in about $200 million in overpayment preventions and recoveries per year. SSA has also enhanced existing computer data matches to better verify continuing financial eligibility. For example, SSA now matches SSI recipient SSNs against its master earnings record semiannually. In 2001, SSA flagged over 206,000 cases for investigation of unreported earnings, a three-fold increase over 1997 levels. To better identify individuals receiving income from unemployment insurance benefits, quarterly data matches have also replaced annual matches. Accordingly, the number of unemployment insurance detections has increased from 10,400 in 1997 to 19,000 last year. Further, SSA's ability to detect nursing home admissions, which can affect SSI benefits, has improved. SSA now conducts monthly matches with all states, and the number of overpayment detections related to nursing home admissions has increased substantially from 2,700 in 1997 to more than 75,000 in 2001. SSA's ability to detect recipients residing in prisons has also improved. Over the past several years, SSA has established agreements with prisons that house 99 percent of the inmate population, and last year it reported suspending benefits to 54,000 prisoners. Lastly, SSA has increased the frequency with which it matches recipient SSNs against tax records and other data essential to identify any unreported interest, income, dividends, and pension income individuals may be receiving. These matching efforts have also resulted in thousands of additional overpayment detections over the last few years. To obtain more current information on the income and resources of SSI recipients, SSA has also increased its use of on-line access to various state program data, such as unemployment insurance and workers' compensation. 
As a tool for verifying SSI eligibility, direct on-line connections are generally more effective than using periodic computer matches, because the information is more timely. Thus, SSA staff can quickly identify potential disqualifying income or resources at the time of application and before overpayments occur. In many instances, this allows the agency to avoid having to go through the difficult and often unsuccessful task of recovering overpaid SSI benefits. Field staff can directly query various state records to quickly identify workers' compensation, unemployment insurance, or other state benefits individuals may be receiving. As of January 2002, SSA had access to 73 agencies in 42 states, as compared with 43 agencies in 26 states in 1998. Finally, to further strengthen program integrity, SSA took steps to improve its SSI financial redetermination review process. It increased the number of annual reviews from 1.8 million in fiscal year 1997 to 2.4 million in fiscal year 2001 and substantially increased the number of reviews conducted through personal contact with recipients, from 237,000 in 1997 to almost 700,000 in fiscal year 2002. SSA also refined its profiling methodology in 1998 to better target recipients that are most likely to have payment errors. SSA's data show that estimated overpayment benefits--amounts detected and future amounts prevented--increased by $99 million over the prior year. Agency officials indicated that limited resources would affect SSA's ability to do more reviews and still meet other agency priorities. In June 2002, SSA informed us that the Commissioner of SSA recently decided to make an additional $21 million available to increase the number of redeterminations this year. Despite its increased emphasis on overpayment detection and deterrence, SSA is not meeting its payment accuracy goals. 
In 1998, SSA pledged to increase its SSI overpayment accuracy rate from 93.5 percent to 96 percent by fiscal year 2002; however, the latest payment accuracy rate is 93.6 percent, and SSA does not anticipate achieving the 96 percent target until 2005. Various factors may account for SSA's inability to achieve its SSI accuracy goals, including the fact that key initiatives that might improve SSI overpayment accuracy have only recently begun. For example, field offices started to access NDNH wage data in 2001. This could eventually help address the number one source of overpayments--unreported wages, which in fiscal year 2000 accounted for $477 million in overpayments, or about 22 percent of overpayment errors. Further, SSA's data show that unreported financial resources, such as bank accounts, are the second largest source of SSI overpayments. Last year, overpayments attributable to this category totaled about $394 million, or 18 percent of all overpayments detected. SSA now has enhanced authority to obtain applicant resource information from financial institutions and plans to implement a pilot program later this year. Thus, when fully implemented, this tool may also help improve the SSI payment accuracy rate. SSA has made only limited progress toward addressing excessively complex rules for assessing recipients' living arrangements, which have been a significant and long-standing source of payment errors. SSA staff must apply a complex set of policies to document an individual's living arrangements and the value of in-kind support and maintenance (ISM) being received, which are essential to determining benefit amounts. Details such as usable cooking and food storage facilities with separate temperature controls, availability of bathing services, and whether a shelter is publicly operated can affect benefits. 
These benefit determination policies depend heavily on recipients to accurately report whether they live alone or with others; the relationships involved; the extent to which rent, food, utilities, and other household expenses are shared; and exactly what portion of those expenses an individual pays. Over the life of the SSI program, these policies have become increasingly complex as a result of new legislation, court decisions, and SSA's own efforts to achieve benefit equity for all recipients. The complexity of SSI program rules pertaining to living arrangements, ISM, and other areas of benefit determination is reflected in the program's administrative costs. In fiscal year 2001, SSI benefit payments represented about 6 percent of benefits paid under all SSA-administered programs, but the SSI program accounted for 31 percent of the agency's administrative expenses. Although SSA has examined various options for simplifying rules concerning living arrangements and ISM over the last several years, it has yet to take action to implement a cost-effective strategy for change. During our recent fieldwork, staff and managers continued to cite program complexity as a problem leading to payment errors, program abuse, and excessive administrative burdens. In addition, overpayments associated with living arrangements and ISM remain among the leading causes of overpayments after unreported wages and resources, respectively. SSA's lack of progress in addressing program simplification issues may limit its overall effectiveness at reducing payment errors and achieving its long-range 96 percent payment accuracy goal. SSA's fiscal year 2000 payment accuracy report noted that it would be difficult to achieve SSI accuracy goals without some policy simplification initiatives. In its recently issued SSI Corrective Action Plan, SSA stated that within the next several years it plans to conduct analyses of alternative program simplification options beyond those already assessed. 
Our work shows that administrative penalties and sanctions remain underutilized in the SSI program. Under the law, SSA may impose administrative penalties on recipients who do not file timely reports about factors or events that can lead to reductions in benefits--changes in wages, resources, living arrangements, and other support being received. Penalty amounts are $25 for a first occurrence, $50 for a second occurrence, and $100 for the third and subsequent occurrences. The penalties are meant to encourage recipients to file accurate and timely reports of information so that SSA can adjust its records to correctly pay benefits. The Foster Care Independence Act also gave SSA authority to impose benefit sanctions on persons who make representations of material facts that they knew, or should have known, were false or misleading. In such circumstances, SSA may suspend benefits for 6 months for the initial violation, 12 months for the second violation, and 24 months for subsequent violations. SSA issued interim regulations to implement these sanction provisions in July 2000. Currently, however, staff rarely use penalties to encourage recipient compliance with reporting policies. SSA data show that, over the last several years, the failure of recipients to report key information accounted for 71 to 76 percent of overpayment errors and that these errors involved about 1 million recipients annually. Based on SSA records, we estimate that at most about 3,500 recipients were penalized for reporting failures in fiscal year 2001. SSA staff we interviewed cited a number of obstacles or impediments to imposing penalties, as noted in our 1998 report, such as: (1) penalty amounts are too low to be effective; (2) imposition of penalties is too administratively burdensome; and (3) SSA management does not encourage the use of penalties. 
Although SSA has issued guidance to field office staff emphasizing the importance of assessing penalties, this action alone does not sufficiently address the obstacles cited by SSA staff. SSA's administrative sanction authority also remains rarely used. SSA data indicate that, between June 2000 and February 2002, SSA field office staff referred about 3,000 SSI cases to the OIG because of concerns about fraudulent activity. In most instances, the OIG returned the referred cases to the field office because they did not meet prosecutorial requirements, such as high amounts of benefits erroneously paid. Despite the large number of cases where staff believed fraud and abuse might be occurring, as of January 2002, field staff had actually imposed sanctions in only 21 SSI cases. Our interviews with field staff identified insufficient awareness of the new sanction authority and some confusion about when to impose sanctions. In one region, for example, staff and managers told us that they often referred cases to the OIG when fraud was suspected, but that it had not occurred to them that these cases could be considered for benefit sanctions if the OIG did not pursue investigation and prosecution. In our prior work, we reported that SSA had historically placed insufficient emphasis on recovering SSI overpayments. Over the past several years, SSA has been working to implement new legislative provisions to improve the recovery of overpayments. However, a number of key initiatives are still in the early planning or implementation stages, and it is too soon to gauge what effect they will have on SSI collections. Moreover, we are also concerned that SSA's current waiver policies and practices may be preventing the collection of millions of dollars in outstanding debt. In 1998, SSA began seizing the tax refunds from former SSI recipients with outstanding overpayments. 
SSA reported that this initiative has yielded $221 million in additional overpayment recoveries at the end of calendar year 2001. In 2002, SSA also began recovering SSI overpayments by reducing the Social Security retirement and disability benefits of former recipients without first obtaining their consent. SSA expects that this initiative will produce about $115 million in additional overpayment collections over the next several years. SSA also recently began reporting former recipients with outstanding debts to credit bureaus and to the Department of the Treasury. Credit bureau referrals are intended to encourage individuals to voluntarily begin repaying their outstanding debts. The referrals to Treasury will provide SSA with an opportunity to seize other federal benefit payments individuals may be receiving. While overpayment recovery practices have been strengthened, SSA has not yet implemented some key recovery initiatives that have been available to the agency for several years. Although regulations have been drafted, SSA has not yet implemented administrative wage garnishment, which was authorized in the Debt Collection Improvement Act of 1996. In addition, SSA has not implemented several provisions in the Foster Care Independence Act of 1999. These provisions allow SSA to offset federal salaries of former recipients, use collection agencies to recover overpayments, and levy interest on outstanding debt. According to SSA, draft regulations for several of these initiatives are being reviewed internally. SSA officials said that they could not estimate when these additional recovery tools will be fully operational. Our work showed that SSI overpayment waivers have increased significantly over the last decade and that current waiver policies and practices may cause SSA to unnecessarily forego millions of dollars in additional overpayment recoveries annually. 
Waivers are requests by current and former SSI recipients for relief from the obligation to repay SSI benefits to which they were not entitled. Under the law, SSA field staff may waive an SSI overpayment when the recipient is without fault and the collection of the overpayment either defeats the purpose of the program, is against equity and good conscience, or impedes effective and efficient administration of the program. To be deemed without fault, and thus eligible for a waiver, recipients are expected to have exercised good faith in reporting information to prevent overpayments. If SSA determines a person is without fault in causing the overpayment, it then must determine if one of the other three requirements also exists to grant a waiver. Specifically, SSA staff must determine whether denying a waiver request and recovering the overpayment would defeat the purpose of the program because the affected individual needs all of his/her current income to meet ordinary and necessary living expenses. To determine whether a waiver denial in some instances would be against equity and good conscience, SSA staff must decide if an individual incurred additional expenses in relying on the benefit, and thus requiring repayment would affect his/her economic condition. Finally, SSA may grant a waiver when recovery of an overpayment may impede the effective or efficient administration of the program--for example, when the overpayment amount is equal to or less than the average administrative cost of recovering an overpayment, which SSA currently estimates to be $500. Thus, field staff we interviewed generally automatically waive overpayments of $500 or less. In December 1993, SSA markedly increased the threshold for automatic SSI overpayment waivers from $100 to $500. 
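The two-part waiver test described above (the recipient must be without fault, and at least one of three further conditions must hold) can be sketched as follows. This is an illustrative simplification, not SSA's actual decision logic; the $500 default reflects SSA's current estimate of the average administrative cost of recovering an overpayment:

```python
def waiver_may_be_granted(without_fault: bool,
                          defeats_program_purpose: bool,
                          against_equity_and_good_conscience: bool,
                          overpayment_amount: float,
                          avg_recovery_cost: float = 500.0) -> bool:
    """Illustrative sketch of the SSI waiver test: being without fault
    is necessary but not sufficient; one of three further conditions
    must also hold. Recovery is treated as impeding efficient
    administration when the overpayment is at or below the average
    cost of recovering one."""
    if not without_fault:
        return False
    impedes_efficient_admin = overpayment_amount <= avg_recovery_cost
    return (defeats_program_purpose
            or against_equity_and_good_conscience
            or impedes_efficient_admin)
```

Under this sketch, any overpayment of $500 or less to a without-fault recipient is waivable, which matches the field practice of automatically waiving such amounts.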
Officials told us that this change was based on an internal study of administrative costs related to investigating and processing waiver requests for SSA's Title II disability and retirement programs, but not on SSI waivers directly. They were unable to locate the study for our review and evaluation. While staff and managers had varying opinions regarding the time and administrative costs associated with denying waiver requests, they also acknowledged that numerous recent automation upgrades may be cause for reexamining the current $500 waiver threshold. Our analysis of waiver data indicated that since the automatic waiver threshold was changed, the amount of SSI overpayments waived increased 400 percent, from $32 million in fiscal year 1993 to $161 million in fiscal year 2001. This increase has significantly outpaced the growth in both the number of SSI recipients served and total annual benefits paid, which increased by 12 and 35 percent respectively during this same period. Furthermore, the ratio of waived overpayments to total SSI collections has also increased. In fiscal year 1993, the overpayments SSA waived were equivalent to about 13 percent of its SSI collections. By 1995, waiver amounts more than doubled, to $66 million, and were equivalent to about 20 percent of SSI collections for that year. By fiscal year 2001, SSI waivers represented nearly 23 percent of SSI collections. While not conclusive, the data indicate that liberalization of the SSI waiver threshold may be a factor in the increase in waived overpayments. SSA has not studied the impact of the increased threshold. However, officials believe that the trend in waived SSI overpayments is more likely due to annual increases in the number of periodic reviews of recipients' medical eligibility. These reviews have resulted in an increase in benefit terminations and subsequent recipient appeals. During the appeals process, recipients have the right to request that their benefits be continued.
Those who lose their appeal can then request a waiver of any overpayments that occurred during the appeal period. SSA will usually grant these requests under its current waiver policies. Another factor affecting trends in waivers may be staff application of waiver policies and procedures. Although SSA has developed guidance to assist field staff in deciding whether to deny or grant waivers, we found that field staff have considerable leeway to grant waivers based on an individual's claim that he or she reported information to SSA that would have prevented an overpayment. In addition, waivers granted for amounts of less than $2,000 are not subject to second-party review, while another employee in the office--not necessarily a supervisor--must review those above $2,000. During our field visits, we also identified variation among staff in their understanding of how waiver decisions should be processed, including the extent to which they receive supervisory review and approval. In some offices, review was often minimal or nonexistent regardless of the waiver amount, while other offices required stricter peer or supervisory review. In 1999, SSA's OIG reported that the complex and subjective nature of SSA's Title II waiver process, as well as clerical errors and misapplication of policies by staff, resulted in SSA's incorrectly waiving overpayments in 9 percent of 26,000 cases it reviewed. The report also noted that 50 percent of the waivers reviewed were unsupported and that the OIG could not make a judgment as to the appropriateness of the decision. While the OIG only examined waivers under the Title II programs and for amounts over $500, the criteria for granting SSI waivers are generally the same. Thus, we are concerned that similar problems with the application of waiver policies could be occurring in the SSI program. Mr. Chairman, this concludes my prepared statement. I will be happy to respond to any questions you or other Members of the Subcommittee may have. 
For information regarding this testimony, please contact Robert E. Robertson, Director, or Dan Bertoni, Assistant Director, Education, Workforce, and Income Security at (202) 512-7215. Individuals making contributions to this testimony include Barbara Alsip, Gerard Grant, William Staab, Vanessa Taylor, and Mark Trapani.
As the nation's largest cash assistance program for the poor, the Supplemental Security Income (SSI) program provided $33 billion in benefits to 6.8 million aged, blind, and disabled persons last year. In 2001, the outstanding SSI debt and newly detected overpayments totaled $4.7 billion. To deter and detect overpayments, SSA obtained legislative authority to use additional tools to verify recipients' financial eligibility for benefits, enhanced its processes for monitoring and holding staff accountable for completing assigned SSI workloads, and improved its use of automation to strengthen its overpayment detection capabilities. However, because a number of initiatives are still in the planning or early implementation stages, it is too soon to assess their ultimate impact on SSI payment accuracy. In addition to improving its overpayment deterrence and detection capabilities, SSA has made recovery of overpaid benefits a high priority.
Head Start is administered by HHS' Administration for Children and Families (ACF). Services are provided at the local level by public and private nonprofit agencies that receive their funding directly from HHS. These agencies include public and private school systems, community action agencies, government agencies, and Indian tribes. Grantees may contract with one or more other public or private nonprofit organizations--commonly referred to as delegate agencies--in the community to run all or part of their local Head Start programs. Grantees may choose to provide center-based programs, home-based programs, or a combination of both. Once approved for funding as a result of a competitive application process, Head Start grantees do not compete for funding in succeeding years. However, they are required to submit applications for continuation awards (hereafter called awards) to support their programs beyond the initial grantee budget year. After Head Start receives its annual appropriation from the Congress, the respective HHS regional offices make awards to grantees in their administrative service areas at the beginning of each grantee's budget year as shown in table 1. Grantees use their awards for the following purposes, among others, to: purchase or rent a facility if providing a center-based program; hire qualified teachers, aides, and support staff; coordinate or contract with public health agencies and local health providers to deliver medical and dental services; buy or lease vehicles to transport children to Head Start centers; purchase utilities, services, and supplies needed to operate a center and administer the program; and comply with program standards and local building and health codes that ensure quality and safety.
During a grantee budget year, grantees may also receive supplemental awards for specific purposes (such as expanding enrollment) or to cover normal, though sometimes unexpected, expenses such as repairing a roof or purchasing a new heating system. In addition, grantee accounts may be adjusted as the result of a routine financial audit or Head Start regional office review of grantees' files. These activities sometimes identify unspent funds that the grantee did not report due to an error or oversight. HHS requires grantees to get their Head Start accounts audited every 2 years, though many grantees hire accountants to perform an audit every year. As shown in figure 1, grantees, as expected, may not necessarily spend all of their award by the end of their budget year. HHS permits grantees to carry over unspent funds into the next grantee budget year to complete any program objectives that remain unmet from the previous year. HHS regional offices generally handle carryover funds in two ways: 1. Carryover balances from a previous year or years are added to an award that a grantee receives in a subsequent year. This procedure is known as "reprogramming" funds, and the amount of carryover funds added to a grantee's award is called total obligating authority (TOA). 2. Carryover balances from a previous year or years offset or reduce the award that a grantee receives in a subsequent year. This procedure is known as "offsetting" funds, and the amount of carryover deducted from the award is called new obligating authority (NOA). The growth in Head Start funding since 1990 (see fig. 2) reflects the federal government's commitment to expanding the number of children in the program and to ensuring program quality. Overall program funding increased from about $1.5 billion in fiscal year 1990 to about $3.5 billion in fiscal year 1995.
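The reprogramming and offsetting treatments described above differ only in whether carryover adds to or substitutes for new federal dollars. A minimal sketch of that distinction (the function name, and the reading of "offset" as leaving total spendable funds at the planned award level, are illustrative assumptions, not HHS terminology):

```python
def funds_available(new_award: float, carryover: float, method: str) -> float:
    """Total a grantee can spend in the new budget year under the
    two carryover treatments described in the report:
      - "reprogram": carryover is added on top of the award (TOA)
      - "offset":    carryover reduces the award, so the total
                     available stays at the planned award level (NOA)
    """
    if method == "reprogram":
        return new_award + carryover
    if method == "offset":
        return new_award  # new federal dollars = new_award - carryover
    raise ValueError("method must be 'reprogram' or 'offset'")
```

Either way the grantee keeps its unspent funds; the difference is whether the federal government obligates the full new award on top of them (reprogramming) or correspondingly fewer new dollars (offsetting).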
Twice in fiscal year 1990 and once each in fiscal years 1991, 1992, and 1993, the Congress appropriated additional funding for Head Start to, among other things, increase local enrollments; strengthen the program's social, health, and parent involvement components; improve services for disabled children; initiate and improve literacy programs; and enhance salaries, benefits, training, and technical assistance for program staff. ACF allocated these expansion funds on the basis of a formula as required by statute. Despite this dramatic growth in Head Start appropriations, HHS awarded virtually all program funding to eligible grantees. Head Start's program obligation rates for each of these years stayed at or above 99 percent, while the total number of grantees increased from 1,321 in fiscal year 1990 to about 1,400 in fiscal year 1994. Overall program outlay rates (that is, the ratio of outlays to budget authority) during this period indicate that outlays remained stable as grantees received infusions of Head Start expansion or quality improvement funding. However, at the grantee level, this funding growth increased grantee awards and unspent balances for the grantees included in our universe during the grantee budget years we examined. We found that total grantee awards for the 1,197 Head Start grantees covered by our review increased from $1.4 billion to $2.3 billion from grantee budget years 1992 through 1994, while mean awards rose from $1.2 million to $1.9 million in these same years. (See table 2.) During grantee budget years 1992, 1993, and 1994--a period of intense growth--about two-thirds of the 1,197 grantees had unspent balances at the end of each budget year. Almost 40 percent of these 1,197 grantees had unspent balances every year. As shown in table 2, these balances totaled approximately $54 million, $101 million, and $130 million in grantee budget years 1992, 1993, and 1994, respectively, and varied greatly by grantee.
However, these unspent balances were a small part of grantees' total awards. On the basis of our analysis, unspent balances represented from about 5 to 8 percent of the award for those grantees with unspent balances and from 4 to 6 percent of total awards for all grantees in the aggregate. (See app. II for the reported unspent balances of the 108 grantees included in our sample.) Unspent balances resulted from (1) small differences between the amount of a grantee's annual award and its actual expenditures at the end of its grantee budget year, (2) situations that delay a grantee's expenditure of funds or that hamper a grantee's ability to spend funds before the year's end, and (3) a combination of these and other reasons. We found that almost two-thirds of grantees in grantee budget year 1992 and about half in grantee budget years 1993 and 1994 had small differences between their total award approved at the beginning of a grantee budget year and the amount spent at year's end. We considered these spending variances small if the amount of unspent funds was 5 percent or less of a grantee's award in a given year. These small budget variances could have occurred because, for example, (1) grantees' projected budgets--upon which grant awards are based--did not equal their actual expenditures or (2) grantees did not purchase an item or service as originally planned. For example, a grantee in Ohio had ordered two buses and playground equipment for its Head Start center. However, these items were neither delivered nor paid for before the grantee's budget year ended, resulting in an unspent balance of $84,762. We found that from 10 to 24 percent of grantees with unspent balances in grantee budget years 1992 through 1994 (1) had problems renovating or building a center, which delayed planned expenditures until subsequent years, or (2) received additional funding late in a grantee budget year, making it difficult for grantees to spend all of their funds before year's end.
For example, a Head Start grantee in Colorado received funding to increase its program enrollment in early September 1991--about 2 months before the grantee's budget year was to end on October 30. Due to the short time remaining, the grantee could not spend $89,980 of the amount awarded for expanding program enrollment. This same grantee had agreed verbally with a private company to prepare a site so that the grantee could place a modular unit on it to serve as a Head Start center. Site preparation would have involved establishing water, sewer, gas, and electrical hookups at the site. Before any work began, however, new owners took over the company and did not honor the verbal agreement between the grantee and the previous owner. It took the grantee 2 years to find another site suitable for the center, and that facility required extensive renovations. HHS' Office of Inspector General reported in 1991 and 1993 that acquiring adequate, affordable space was a major problem for Head Start grantees attempting to expand program enrollments. Grantees told the Inspector General's office that it can take up to a year to find suitable space that then may have to be renovated. Strict construction licensing requirements and delays in license approval could also slow spending for center construction or renovation. The Inspector General reported that space problems were most prevalent among grantees funded to increase enrollment by more than 200 children. The grantees believed that being notified at least 6 months in advance of funding disbursements would help to alleviate this problem. Head Start grantees interviewed by the Inspector General's staff also said that receiving expansion funding late in the budget year results in carryover fund balances. After expansion, more than twice as many grantees interviewed had carryover balances of over $50,000. Many grantees believe that even with adequate lead time large expansions should not occur annually. 
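The "small budget variance" test described earlier (unspent funds of 5 percent or less of a grantee's award) reduces to a one-line check. A minimal sketch, with an illustrative helper name:

```python
def is_small_variance(unspent: float, award: float,
                      threshold: float = 0.05) -> bool:
    """True when unspent funds are 5 percent or less of the grantee's
    award for the year -- the 'small budget variance' category used
    in this analysis."""
    if award <= 0:
        raise ValueError("award must be positive")
    return unspent / award <= threshold
```

By this test, the Ohio grantee's $84,762 balance would count as a small variance only if that grantee's award exceeded roughly $1.7 million.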
According to the grantee files we reviewed, unspent balances sometimes occurred for reasons other than small budget variances or timing issues. On the basis of information included in grantee files and discussions with regional office program officials, we found, for example, that unspent balances occurred because grantees experienced accounting or management problems during 1 or more years; depended on large government bureaucracies, such as New York City's, to provide certain goods and services, which often slowed program expenditures; or assumed the program operations and accounts of a former grantee. Also, unspent balances may have occurred for a combination of reasons described above. In other cases we could not determine the reason for grantees' unspent balances on the basis of file information or discussions with Head Start regional office officials. Unspent balances occur when a grantee's total award differs from the amount the grantee spent during its budget year. As previously stated, these unspent funds may be carried over into a subsequent grantee budget year. For our analysis, we defined carryover funds as any unspent funds used to either offset or add to a grantee's award during a subsequent budget year. Carryover funds are not always added to or offset in the year immediately following the year the unspent funds occurred. For example, a grantee in Florida with $45,913 in unspent funds in grantee budget year 1992 did not have this amount totally added to or offset as carryover funds in grantee budget year 1993. In fact, $45,759 was added to its budget year 1993 award and the remaining $154 was used to offset the grantee's budget year 1994 award. A grantee in Minnesota, on the other hand, had $3,840 from grantee budget year 1993 added to its budget year 1995 award. Yet, a Michigan grantee had its entire grantee budget year 1992 unspent balance of $1,568 offset as carryover funds in 1993.
On the basis of our analysis of grantee files, we found that in grantee budget year 1993 HHS added about half of all carryover funds to grantees' awards as TOA and the remaining proportion of carryover funds was offset as NOA. Of the grantees in our sample with TOA in grantee budget year 1993, the unspent funds added to grantee awards ranged from $10,900 to $533,500 and averaged approximately $96,000. If we had included the grantee representing New York City in our calculation, the upper end of this range would have been about $4.2 million. NOA for the same period ranged from $59 to $664,700 and averaged about $39,000. In grantee budget year 1994, we found that about three-fourths of carryover funding was added to awards as TOA, and the remainder was offset as NOA. Of the grantees in our sample with TOA in grantee budget year 1994, the amount of unspent funds added to grantee awards ranged from $3,200 to $2.4 million and averaged about $197,400. NOA for the same period ranged from $17 to $621,000 and averaged approximately $58,600. This trend appears to continue in grantee budget year 1995, though data for this year were incomplete when we performed our final calculations in October 1995. We found that HHS generally adds to or offsets grantee carryover funds within 2 grantee budget years after an unspent balance occurs. For example, for both grantee budget years 1993 and 1994, we found that about 90 percent of carryover funds added to grantee awards was 1 year old, and the remainder was from 2 to 3 years old; and from about 70 to 90 percent of carryover funds offsetting grantee awards was from 1 to 2 years old, and the remainder was 3 or more years old. Because Head Start carryover funds are generally spent in 2 grantee budget years but are available for up to 5 fiscal years following the fiscal year in which they are initially awarded (31 U.S.C., sec. 
1552(a)), we asked Head Start regional office officials why certain carryover balances were reprogrammed or offset as long as 3 or more years after an unspent balance occurred. Regional office officials gave the following administrative and grantee-specific reasons: Regional office staff may not process grantee files in a timely manner due to grantee or staff errors, delays in data entry, staff turnover, large workloads, and differences in staff competence. Final forms documenting carryover balances are not due from grantees until 90 days after the budget year's end. Incorrect carryover balances may not be caught immediately because independent auditors may take up to 13 months to complete an audit of a grantee's program accounts for a given year. Actions, such as reprogramming or offsetting carryover balances, could be suspended if a grantee appeals an HHS decision to disallow funding. A grantee's bankruptcy proceedings delayed a regional office from offsetting certain carryover funds. For grantee budget years 1993 and 1994 combined, we estimated that carryover funds totaled $139 million. Of this amount, carryover funds added to grantee awards (TOA) totaled $97 million and those offsetting grantee awards (NOA) totaled $42 million. We focused our analysis of intended use on the TOA portion because NOA has no identifiable intended purpose. On the basis of our review of Head Start grantee files, the intended use of a large proportion of Head Start carryover funds from grantee budget years 1993 and 1994 combined was to be used for expanding program enrollments and renovating or buying facilities. Of the $97 million of TOA carryover funds, the intended use of 40 percent of these funds was for expansion and 37 percent was for facilities. Data from the files indicated that about 23 percent of the total TOA for these years was reportedly to be used for capital equipment, supplies, and other purposes such as staff training and moving expenses. 
Data were incomplete for grantee budget year 1995. We found that, for grantees in our sample, TOA in grantee budget years 1993 and 1994 combined that was to be used for facilities ranged from $901 to $611,000 and averaged approximately $116,000. TOA reportedly to be used for expansion ranged from $4,200 to $2.4 million and averaged about $296,000. In summary, although overall program outlay rates remained stable during a period of intense program growth (fiscal years 1990-95), Head Start grantees accrued increasingly larger average unspent balances in grantee budget years 1992 through 1994. Depending on the size of grantees' awards, their reported unspent balances in those years ranged from as little as $2 to about $2 million. On the basis of Head Start files, we determined in most cases that these unspent balances resulted from (1) small differences between grantees' budget estimates and actual expenditures; (2) grantee problems renovating or constructing facilities, which delayed planned expenditures; and (3) the receipt of supplemental funding by grantees late in their budget year, which made it difficult for grantees to spend their funds before year's end. Of the unspent funds added to grantee awards in budget years 1993 and 1994 combined, we found that grantees planned to use these dollars for increasing local program enrollments and buying or improving program facilities--activities that grantees often do not complete in a single year. As arranged with your office, we will make copies available to the Secretary of Health and Human Services and other interested parties. We will also make copies available to others on request. Please contact Fred E. Yohey, Assistant Director, at (202) 512-7218 or Karen A. Whiten, Evaluator-in-Charge, if you or your staff have any questions. Other GAO contributors to this report are listed in appendix III. We designed our study to collect information about the extent and nature of Head Start carryover funds.
To do so, we visited a sample of Head Start regional offices and examined key documents in selected grantee files. Results are generalizable to Head Start grantees that (1) were at least 3 years old in 1994, (2) had at least some but less than $60 million in new funding in 1994, and (3) were located in 10 of the 12 Head Start regions. Our work was performed between June and October 1995 in accordance with generally accepted government auditing standards. We reviewed grantee files for a nationally representative sample of Head Start grantees. We focused our efforts on grantee budget years that ended in 1992 through 1995, examining file documents at selected Head Start regional offices. To generate national estimates, we employed a two-stage cluster sampling strategy. The Head Start regions constituted the first stage of the sample. Of the 12 Head Start regions, 2 are operated from the Department of Health and Human Services headquarters in Washington, D.C.--1 for Native Americans and the other for migrant workers. Because these regional offices share a unique relationship with headquarters, they were not included in the regions to be sampled. We organized the 10 remaining regions by the amount of grantee new funding received in federal fiscal year 1994, separating them into three groups or strata: regions with new funding of $500 million or more; regions with new funding of $200 to $499 million; and regions with new funding of less than $200 million. Table I.1 shows our population of regions. We then selected a sample of regions in each stratum using a random number generator program. Table I.2 shows the regions selected in our sample. Stage two of the sample consisted of individual Head Start grantees. Head Start had 1,270 grantees in the 10 regions in fiscal year 1994.
Because we were reviewing 2 to 3 years of data, we excluded any grantee not in existence at least 3 years. We also excluded all grantees with no new funding in fiscal year 1994. This reduced the number of grantees in our population to 1,201. We organized grantees in our sample regions by fiscal year 1994 new funding and put them into four strata: those with fiscal year 1994 new funding of less than $1 million; those with $1 million or more but less than $3 million; those with $3 million or more but less than $5 million; and those with $5 million or more. We then selected a random sample of grantees in each stratum. Table I.3 shows the distribution of grantees by strata of our population and sample. Once the fieldwork was completed and records evaluated, we determined that one very large grantee with fiscal year 1994 new funding of $60 million or more was, because of its complexity, unique and required special handling. Therefore, we set aside this one grantee--The City of New York Human Resources Administration, Agency for Child Development. We did not include data collected from this site in our overall estimates but used the data as a case study of a very large grantee. By eliminating the very large grantees, we reduced our population further by 4 grantees to 1,197, thereby reducing our sample from 108 to 107 grantees. Our findings, therefore, are representative of grantees in the 10 Head Start regions that are at least 3 years old with at least some but less than $60 million in fiscal year 1994 new funding. We provided the list of sample grantees to each selected regional office, which collected records for our review. We examined key documents from the files and summarized the information using a data collection instrument.
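The stage-two selection described above can be sketched generically. In this simplified illustration, the stratum boundaries are the four funding strata given in the text; the draw within each stratum is a plain seeded random sample standing in for the random number generator program mentioned earlier, and the grantee records are made up:

```python
import random


def assign_stratum(new_funding_millions: float) -> int:
    """Place a grantee into one of the four stage-two strata:
    <$1M, $1M-<$3M, $3M-<$5M, and $5M or more."""
    for i, cutoff in enumerate([1.0, 3.0, 5.0]):
        if new_funding_millions < cutoff:
            return i
    return 3


def stratified_sample(grantees, sizes, seed=0):
    """Draw a simple random sample of the requested size from
    each stratum and combine the draws."""
    rng = random.Random(seed)
    strata = {}
    for g in grantees:
        strata.setdefault(assign_stratum(g["funding"]), []).append(g)
    sample = []
    for stratum, members in sorted(strata.items()):
        sample.extend(rng.sample(members, min(sizes.get(stratum, 0), len(members))))
    return sample
```

Sampling within funding strata ensures that both the many small grantees and the few very large ones are represented, which is why the one $60 million grantee later had to be set aside as an outlier.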
Data elements we collected included the number of service years for a selected grantee; total federal funds authorized for specific funding periods; the unspent balance of federal funds for specific funding periods and its intended usage; and the amount of carryover funds added to or offsetting grantee awards in grantee budget years 1993, 1994, and 1995 by type and source year. To link source year with carryover funds, we gathered information from the Financial Assistance Award form, which identifies the grantee service year in which the unspent funds occurred. Once data collection was complete, we compiled and merged the data. Data elements were verified and traced to documents maintained in the grantee files for 91 percent of the cases. We then computed weights to produce national estimates from our sample and calculated analytic variables. To calculate the age of carryover funds, we subtracted the source year from the grantee's current service year. The Head Start grantee funding process presented unique data collection challenges. We made no attempt to capture the fiscal year funding. Rather, we used each grantee's budget year ending date to guide our compilation of financial data. Because our analysis is based on data from a sample of grantees, each reported estimate has an associated sampling error. The size of the sampling error reflects the estimate's precision; the smaller the error, the more precise the estimate. The magnitude of the sampling error depends largely on the size of the obtained sample and the amount of data variability. Our sampling errors for the estimates were calculated at the 95-percent confidence level. This means that in 95 out of 100 instances, the sampling procedure we used would produce a confidence interval containing the population value we are estimating. Some sampling errors for our dollar estimates are relatively high because dollar amounts vary substantially.
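Two of the calculations described in this appendix, the age of carryover funds and the 95-percent confidence intervals, can be sketched as follows (function names are illustrative):

```python
def carryover_age(current_service_year: int, source_year: int) -> int:
    """Age of carryover funds, as computed in our analysis: the
    grantee's current service year minus the service year in which
    the unspent balance occurred (taken from the Financial
    Assistance Award form)."""
    return current_service_year - source_year


def confidence_interval(estimate: float, sampling_error: float) -> tuple:
    """95-percent confidence interval: the estimate plus or minus
    its sampling error. In 95 of 100 samples drawn the same way,
    an interval built this way would contain the population value
    being estimated."""
    return (estimate - sampling_error, estimate + sampling_error)
```

The interval width scales directly with the sampling error, which is why estimates built on only a subset of the 107 sample grantees must be used with caution.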
Sampling errors also tend to be higher for those estimates based on a subset of sample cases. For example, estimates of the mean and total amounts of grantee unspent balances are based on fewer than the 107 grantees in our sample and have large sampling errors. Therefore, these estimates must be used with extreme caution. For a complete list of sampling errors for dollar estimates and proportions in this report, see tables I.4 and I.5, respectively. (Table I.4 reports, for each dollar estimate--such as unspent balances and the 1993/1994 carryover funds added to or offsetting grantee awards--the sampling error and the number of sample grantees contributing to the estimate. Table I.5 reports sampling errors, in percentage points, for estimated proportions, such as grantees with unspent balances in all 3 years and unspent balances due to small budget variances, timing issues, other reasons, and unknown reasons.) Because we wanted to obtain general information about the extent and frequency of Head Start carryover funds, we limited our investigation to reviewing grantee records maintained at HHS' Atlanta, Chicago, Dallas, Denver, and New York regional offices. We gave officials at these regional offices an opportunity to review the accuracy of the data we collected and subsequently used to develop our estimates. We did not contact individual grantees to verify records nor did we visit grantee sites. We did not follow the flow of funds to determine if program abuses had occurred nor did we make any attempt to determine whether program grantees actually used the funds for the purposes intended. North Wilkesboro, N.C. Hardinsburg, Ky. Fort Lauderdale, Fla.
Huntsville, Ala. Cheraw, S.C. Chattanooga, Tenn. Tuscaloosa, Ala. Jacksonville, N.C. Savannah, Ga. Monticello, Ga. Williamston, N.C. Brooksville, Fla. Montgomery, Ala. Jacksonville, Fla. La Grange, Ky. Florence, S.C. Eatonton, Ga. Lucedale, Miss. Cartersville, Ga. Ashland, Miss. Logansport, Ind. Coldwater, Mich. Washington Court House, Ohio Stevens Point, Wis. Rockford, Ill. Greenville, Mich. Scottville, Mich. Oklee, Minn. Grand Rapids, Mich. Alpena, Mich. Janesville, Wis. Port Huron, Mich. Rushford, Minn. East St. Louis, Ill. Zumbrota, Minn. Rock Falls, Ill. Stonewall, Tex. Winnsboro, La. Bay City, Tex. Kingston, N.Y. Brooklyn, N.Y. NA - Information not available.

The following individuals made important contributions to this report: Robert Rogers and Karen Barry planned this review, and Karen managed the data collection. David Porter and Lawrence Kubiak collected much of the data from the HHS regional offices. Patricia Bundy also helped to collect data, conducted follow-up discussions with HHS headquarters and regional office officials, and assisted with report processing. Dianne Murphy drew the sample and performed the analysis. Steve Machlin calculated sampling errors. Harry Conley and Michael Curro provided technical assistance, and Demaris Delgado-Vega provided legal advice.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015, or Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the: (1) amount of Head Start funding unspent by program grantees at the end of budget years 1992 to 1994 and the reasons for these unspent funds; (2) proportion of carryover funds that were added to grantee awards and that are 1 or more budget years old; and (3) grantees' intended use of carryover funds. GAO found that: (1) about two-thirds of the grantees reviewed had unspent balances of $69,000 to $177,000 during budget years 1992 through 1994; (2) most of the unspent balances resulted from small differences between grantees' budget estimates and actual expenditures, problems related to building Head Start centers, and grantees' inability to spend their awards because of the Department of Health and Human Services (HHS) disbursement problems; (3) one-half of all the carryover funds in budget year 1993 and about three-fourths of the carryover funds in budget year 1994 were added to grantee awards in subsequent budget years; (4) about one-half and about one-fourth of the carryover funds in grantee budget years 1993 and 1994, respectively, offset grantee awards; (5) Head Start offset 70 to 90 percent of its grantee awards with carryover funds within 2 budget years of an unspent balance; and (6) carryover funds added to grantee awards were used to expand Head Start enrollments, build new facilities, purchase capital equipment, and train staff.
The federal government relies heavily on contractors to provide a range of goods and services. In fiscal year 2007, about 160,000 contractors provided support to federal agencies. A large portion of these contractors was concentrated in five agencies: DOD, DHS, DOE, NASA, and GSA. Among these five agencies, DOD accounted for 72 percent of all contract obligations, across about 77,000 contractors, in fiscal year 2007 (see table 1). These five agencies often rely on the same contractors. Table 2 shows the number and percentage of contractors DHS, NASA, DOE, and GSA had in common with DOD in fiscal year 2007. The FAR requires agencies to consider past performance information as an evaluation factor in certain negotiated competitive procurements--along with other evaluation factors such as price, management capability, and technical excellence. Contractor past performance information may include the contractor's record of conforming to contract requirements and to standards of good workmanship; record of forecasting and controlling costs; adherence to contract schedules; and history of reasonable and cooperative behavior and commitment to customer satisfaction. Although the FAR requires officials to consider past performance as an evaluation factor when selecting contractors in certain negotiated procurements, agencies have broad discretion in deciding its importance relative to other factors in the evaluation scheme. Agencies determine which of the contractor's past contracts are similar to the contract to be awarded in terms of size, scope, complexity, or contract type and the relative importance of past performance. For procurements with clearly defined requirements and minimal risk of unsuccessful contract performance, cost or price may play a more important role than past performance in selecting contractors.
For procurements with less clearly defined requirements and a higher risk of unsuccessful contract performance, it may be in the government's best interest to consider past performance, technical capability, and other factors as more important than cost or price. The FAR requires that solicitations disclose the evaluation factors that will be used in selecting a contractor and their relative importance. In evaluating past performance information, agencies must consider, among other things, the 1) currency and relevancy, 2) source and context, and 3) general trends in the contractor's performance. The solicitation must also describe how offerors with no performance history will be evaluated. Once a contract is awarded, the government should monitor a contractor's performance throughout the performance period. Surveillance includes oversight of a contractor's work to provide assurance that the contractor is providing timely and quality goods or services and to help mitigate any contractor performance problems. An agency's monitoring of a contractor's performance may serve as a basis for past performance evaluations. The FAR requires agencies to prepare an evaluation of contractor performance for each contract that exceeds the simplified acquisition threshold at the time the work is completed and gives agencies discretion to include interim evaluations for contracts with a performance period exceeding one year. DOD generally has higher thresholds, which vary by business sector. A number of systems across the government are used to capture contractor performance information, which is eventually passed on to PPIRS. DOD maintains three systems for its military departments and agencies--Architect-Engineer Contract Administration Support System (ACASS), Construction Contractor Appraisal Support System (CCASS), and Contractor Performance Assessment Reporting System (CPARS). NASA has its own system, the Past Performance Database (PPDB).
DHS and DOE are transitioning to using DOD's CPARS. Other civilian departments use the Contractor Performance System (CPS) managed by the National Institutes of Health. Effective July 1, 2002, all federal contractor past performance information captured through these disparate systems was to be made centrally available for use by all federal agency contracting officials through PPIRS--a Web-enabled, governmentwide application for consolidating federal contractor performance information. Since its implementation, concerns have been raised about the completeness of the information in PPIRS. In February 2008, a DOD Inspector General report noted that the information in CPARS, which feeds information into PPIRS, was incomplete and questioned whether acquisition officials had access to all the information they needed to make business decisions. Specifically, in reviewing performance assessment reports in CPARS, the Inspector General reported that for DOD contracts valued at more than $5 million, 82 percent did not contain detailed narratives sufficient to establish that ratings were credible and justifiable; 68 percent had performance reports that were overdue; and 39 percent were registered more than a year late. In addition, the report identified material internal control weaknesses in the Air Force, Army, and Navy procedures for documenting and reporting contractor performance information. Agencies considered past performance information in evaluating contractors for the contract solicitations we reviewed, but many of the officials we spoke with noted that past performance rarely, if ever, was the deciding factor in their contract award decisions. Their reluctance to base award decisions on past performance was due, in part, to their skepticism about the comprehensiveness and reliability of past performance information and difficulty assessing its relevance to specific acquisitions.
For the 62 contract solicitations we reviewed, the ranking of past performance as an evaluation factor relative to other non-cost factors varied. The company's technical approach was the non-cost factor considered most important for most solicitations. Past performance as an evaluation factor was ranked first in order of importance in about 38 percent of solicitations (appendix I provides more details on the methodology for selecting and reviewing contract solicitations). Contracting officials who viewed past performance as an important evaluation factor noted that basing contract award decisions, in part, on past performance encourages companies to achieve better acquisition outcomes over the long term. For example, according to officials at one Air Force location, an incumbent contractor was not awarded a follow-on contract worth over $1 billion primarily because of poor performance on the prior contract. As a result, the contractor implemented several management and procedural changes to improve its performance on future contracts. Despite the fact that past performance was an evaluation factor in all the solicitations we reviewed, over 60 percent of the contracting officers we talked with stated that past performance is rarely or never a deciding factor in selecting a contractor. Many contracting officers stated they preferred to rely on other more objective factors such as technical approach or price. Officials cited several reasons for their reluctance to rely more on past performance in making award decisions including difficulty obtaining objective and candid past performance information. For example, over half of the contracting managers we met with noted that officials who are assessing a contractor's performance have difficulty separating problems caused by the contractor from those caused by the government, such as changing or poorly defined government requirements. 
Fear of damaging contractor relations may also influence assessments of contractor performance, particularly in areas where there are a limited number of contractors that can provide a particular good or service. Some contracting officials told us there may also be a tendency to "water down" assessments if they perceive a contractor may contest a negative rating. Contracting officials also cited other reasons for not relying more on past performance information, including 1) difficulty assessing its relevance to the specific acquisition or evaluating offerors with no relevant past performance information, 2) lack of documented examples of past performance, and 3) lack of adequate time to identify, obtain, and analyze past performance information. Contracting officials often rely on multiple sources of past performance information. Most officials told us they found information from the prospective contractor's prior government or industry customer references--gathered through interviews or questionnaires--the most useful source of past performance information. Moreover, several contracting officials noted that they use questionnaires to obtain past performance information on major subcontractors. Officials noted, however, that questionnaires are time-consuming and the performance information collected through them is not shared governmentwide. Other sources of past performance information include informal contacts, such as other contracting officers who have dealt with the contractor in the past. Most contracting officials we spoke with also used PPIRS, but cited the absence of information in PPIRS as one reason for typically relying on other sources, along with challenges in ascertaining information that was relevant to the specific acquisitions. Several contracting officials stated a governmentwide system like PPIRS, if populated, could reduce the time and effort to collect past performance information for use in selecting contractors.
Regardless of the source used, contracting officials agreed that for past performance information to be meaningful in contract award decisions, it must be documented, relevant, and reliable. Our review of PPIRS data for fiscal years 2006 and 2007 found relatively little past performance information available for sharing and potential use in contract award decisions. One reason is that agencies are not documenting contractor performance information that feeds into PPIRS, including, in some cases, contract actions involving task or delivery orders placed against GSA's Multiple Award Schedule (MAS). Other information that could provide key insights into a contractor's performance, such as information on contract terminations for default and a prime contractor's management of subcontractors, was also not systematically documented. Contracting managers also lack tools and metrics to monitor the completeness of past performance data in the systems agencies use to record past performance information. Further, the lack of standardized evaluation factors and rating scales in the systems that collect past performance information has limited the systems' usefulness in providing an aggregate-level picture of how contractors are performing. Finally, lack of central oversight of PPIRS has undermined efforts to capture adequate past performance information. The FAR requires agencies to prepare an evaluation of contractor performance for each contract that exceeds the simplified acquisition threshold ($100,000 in most cases) when the contract work is completed. While the FAR definition of a contract can be read to include orders placed against the MAS, the FAR does not specifically state whether this requirement applies to such orders or to task or delivery orders placed against contracts awarded by another agency.
While DOD and many agencies we reviewed have issued supplemental guidance reiterating the FAR requirement to evaluate and document contractor performance--information that ultimately should be fed into PPIRS--the agencies generally did not comply with the requirement. We estimated that the number of contracts that required a performance assessment in fiscal year 2007 for agencies we reviewed would have totaled about 23,000. For the same period, we found about 7,000 assessments in PPIRS--about 31 percent of those contracts requiring an assessment (see table 3). About 75 percent of all past performance reports in PPIRS were from DOD, with the Air Force accounting for the highest percent of completed assessments; however, there were relatively few for some military services--a finding consistent with the DOD IG's February 2008 report. For the civilian agencies we reviewed, there were relatively few performance reports in PPIRS compared to the number we estimated. For example, for fiscal year 2007, an estimated 13 percent of DHS contracts that would potentially require a performance assessment were documented in PPIRS. For specific types of contract actions, such as task and delivery orders placed against GSA's MAS, we found little contractor performance information in PPIRS. Between fiscal years 1998 and 2008, purchases made against MAS have grown from over $7 billion to $37 billion. Similarly, the number of MAS contracts has increased from 5,200 in the mid-1990s to 18,000 in fiscal year 2008. Despite this significant growth, the number of performance reports in PPIRS for orders placed against MAS contracts is minimal. For example, about 5 percent of the DHS orders and none of NASA's were assessed in fiscal year 2007. Contracting officials we spoke with confirmed that these assessments were generally not being done; some told us that they believed GSA was collecting this information. 
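The completion rate above is straightforward arithmetic; a minimal sketch using the report's rounded figures (about 7,000 assessments found in PPIRS against an estimated 23,000 contracts requiring one) yields roughly 30 percent, consistent with the 31 percent computed from unrounded counts. The function name is hypothetical.

```python
# Sketch of the completion-rate arithmetic above, using the report's
# rounded fiscal year 2007 figures; the actual 31 percent reflects
# unrounded counts.

def assessment_completion_rate(assessments_found, assessments_required):
    """Share of required contractor performance assessments actually documented."""
    return assessments_found / assessments_required

rate = assessment_completion_rate(7_000, 23_000)
print(f"{rate:.0%}")  # prints 30%
```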
According to GSA officials, however, agencies are responsible for documenting and reporting MAS contractor performance, and GSA does not generally request feedback on performance for MAS contractors. Without this information, GSA is in no position to know how a contractor is performing when deciding whether or not to continue doing business with that contractor. Currently, there is no governmentwide requirement for agencies to document in PPIRS when a contract has been terminated because the contractor defaulted on the terms of the contract. Consequently, contracting officers may not have access to all information on a contractor's past performance that could factor into a contract award decision. The recent awarding of contracts to defaulted contractors highlights the need for information on contract terminations when making contracting decisions. For example, a $280-million Army munitions contract was awarded to a contractor that had previously been terminated for default on several different contracts. The contracting officer told us that this information, if available, would have factored into the contract award decision. Subsequently, this same contractor defaulted under that contract. Similarly, an October 2008 report issued by the Office of the Special Inspector General for Iraq Reconstruction documented that at least eight contractors that had one or more of their projects terminated for default received new contracts and purchase orders. As part of this audit, the office examined whether the agencies had evaluated the contractors' prior performance before awarding contracts and whether they had considered suspending or debarring the poor performing contractors. Although the report found that the awards to defaulted contractors were within the authority provided by the FAR, it raised questions about the degree to which the contractors' prior performance was considered. In June 2008, the FAR Council opened a case to address termination for default reporting. 
In addition, DOD issued policy in July 2008 on the need for departmentwide centralized knowledge of all contracts that have been terminated, regardless of dollar amount. At the subcontractor level, apart from evaluating a prime contractor's management of its subcontractors, the federal government has historically had limited visibility into subcontractor performance despite the increased use of subcontractors. In January 2008, we reported that total subcontract awards from DOD contracts had increased by 27 percent over a 4-year period--from $86.5 billion in fiscal year 2002 to $109.5 billion in fiscal year 2006. As we reported, federal contractors must manage contract performance, including planning and administering subcontracts as necessary, to ensure the lowest overall cost and minimize technical risk to the government. The FAR provides that the agency's past performance evaluation should take into account past performance information regarding a prospective contractor's subcontractors that will perform major or critical aspects of a requirement when such information is relevant to an acquisition. Agency contracting officials informed us that they do not assess the performance of these subcontractors. Rather, if they collect any information, it is in their assessments of the prime contractor's subcontract management. However, not all collection systems used by agencies allow for systematically capturing subcontract management information, even when it is applicable in a procurement. DOD's CPARS system has a separate rating factor for subcontract management for systems contracts, whereas systems used by NASA and other civilian agencies do not have a separate factor. DOD guidance states assessments must not be done on subcontractors, but CPARS allows the assessing official to address the prime contractor's ability to manage and coordinate subcontractor efforts. Beyond this, no additional information on subcontractors is routinely collected.
In addition, the FAR was recently revised to explain that information on contractor ethics can be considered past performance information. The FAR now states that a contractor's history of reasonable and cooperative behavior and commitment to customer satisfaction may be considered part of a contractor's past performance. This type of data is not currently being systematically captured and documented for use in contract award decisions. Several contracting officials acknowledged that documenting contractor performance was generally not a priority, and less than half of the contracting managers we talked with tracked performance assessment completeness. Some agency officials we spoke with said that a lack of readily accessible system tools and metrics on completeness has made it difficult to manage the assessment process. CPARS and CPS--assessment reporting systems used by DOD and DHS--do not have readily accessible system tools and metrics on completeness for managers to track compliance. According to officials who manage CPARS, a team is developing requirements for system tools and metrics but has been challenged to develop useful measures because of a lack of complete and reliable contract information from FPDS. OFPP officials similarly acknowledged there was a lack of tools and metrics for agency contracting officials to monitor and manage the process of documenting contractor performance. For example, managers currently do not have the ability to readily identify contracts that require an assessment, how many are due and past due, and who is responsible for completing assessments. According to these officials, holding managers accountable for outcomes without adequate tools to manage the assessment process would be difficult. However, a few contracting managers we spoke with placed a high priority on documenting contractor performance, noting that doing so tended to improve communication with contractors and encourage good performance. 
One Air Force Commander issued guidance reiterating that CPARS is a key component in selecting contractors; that Commander personally oversees the performance reporting system, requiring a meeting with responsible officials when a CPARS report is overdue. DHS officials recognized that more emphasis is needed on documenting performance assessments and told us they have included a past performance review as part of their chief procurement officer oversight program for fiscal year 2009. Other indicators that some management officials placed a high priority on documenting performance include the following: Assigning past performance focal points--some activities assigned focal points, individuals with specific responsibilities that included providing training and oversight. At two Air Force locations, focal points also reviewed performance narratives for quality. Designating assessing officials--some activities designated managers as the official assessor of contractor performance rather than contracting officers or program office officials. Deciding who should be accountable is another challenge. OFPP generally views the completion of contractor performance assessments as a contracting officer function. However, many contracting officials we talked with stated they often do not have the required information to complete an assessment and have to rely on program officials to provide the information. Some contracting offices delegated responsibility for completing assessments to the program office but acknowledged program office officials have little incentive to complete assessments because they often did not see the value in them. We previously reported in 2005 that conducting contractor surveillance at DOD, which includes documenting contractor performance, was not a high priority and that accountability for performing contractor surveillance was lacking.
Differences in the number and type of rating factors and in the rating scales agencies use to document contractor performance limit the usefulness of the information in PPIRS. NASA's PPDB system has four rating factors, and the CPS database, which is used by other civilian agencies, has five rating factors. In contrast, DOD's CPARS system has a total of 16 rating factors. Each system also uses a different rating scale. Table 4 highlights these differences. Officials from GSA's Integrated Acquisition Environment, which has oversight of governmentwide acquisition systems, acknowledged the utility of PPIRS is currently limited by the differences in rating factors and scales. Because the ratings are brought into PPIRS as-is, aggregate ratings for contractors cannot be developed--the data are too disparate. As a result, contracting officials making contract award decisions may have to open and read through many ratings to piece together an overall picture of a contractor's performance. Ultimately, the lack of this information hinders the federal government's ability to readily assess a contractor's performance at an aggregate level or to see how overall performance is trending over time. No one agency oversees, monitors, manages, or funds PPIRS to ensure agency data fed into the system are adequate, complete, and useful for sharing governmentwide. While GSA is responsible for overseeing and consolidating governmentwide acquisition-related systems, including PPIRS, OFPP is responsible for overall policy concerning past performance, and DOD funds and manages the technical support of the system. In May 2000, OFPP published discretionary guidance entitled "Best Practices for Collecting and Using Current and Past Performance Information." Consistent with the FAR, this guidance stated that agencies are required to assess contractor performance and emphasized the need for an automated means to document and share this information.
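One way to see why as-is ratings cannot be aggregated, and what standardization would enable, is a normalization sketch. The scale labels below are invented for illustration; they are not the actual CPARS, PPDB, or CPS rating factors or scales.

```python
# Hypothetical sketch of normalizing disparate rating scales onto a common
# 0-to-1 score so ratings could be aggregated. The scale labels below are
# invented; they are not the actual CPARS, PPDB, or CPS scales.

RATING_SCALES = {
    "system_a": ["unsatisfactory", "marginal", "satisfactory", "very good", "exceptional"],
    "system_b": ["poor", "fair", "good", "excellent"],
}

def normalize(system, rating):
    """Map a rating to 0.0 (worst) through 1.0 (best) on its own system's scale."""
    scale = RATING_SCALES[system]
    return scale.index(rating) / (len(scale) - 1)

def aggregate_score(ratings):
    """Average the normalized scores for one contractor across systems."""
    scores = [normalize(system, rating) for system, rating in ratings]
    return sum(scores) / len(scores)
```

With a single standardized governmentwide scale, this mapping step would be unnecessary and ratings could be averaged or trended directly.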
Subsequently, OFPP issued a draft contractor performance guide in 2006 designed to help agencies know their role in addressing and using contractor performance information. However, the guide was not intended to, nor does it, establish governmentwide roles and responsibilities for managing and overseeing PPIRS data. Since 2005, several efforts have been initiated to improve PPIRS and provide pertinent and timely performance information, but little progress has been made. Several broad goals for system improvement, established in 2005 by an OFPP interagency group, have yet to be met. Likewise, a short-term goal of revising the FAR to mandate the use of PPIRS by all government agencies has yet to be achieved. OFPP acknowledges that PPIRS falls short of its goal to provide useful information to contracting officials making contracting decisions. When PPIRS was established in 2002, OFPP officials envisioned it would simplify the task of collecting past performance information by eliminating redundancies among the various systems. In 2005, the Chief Acquisition Officers Council, through an OFPP interagency work group, established several broad goals for documenting, sharing, and using past performance information, including the following: Standardize the different contractor ratings used by various agencies. Provide more meaningful past performance information, including terminations for default. Develop a centralized questionnaire system for sharing information governmentwide. Possibly eliminate the multiple systems that feed performance information into PPIRS. However, little progress has been made in addressing these goals. According to OFPP officials, funding needs to be dedicated to address these goals and realize long-term improvements to the current past performance system. GSA officials who oversee acquisition-related systems, including PPIRS, told us that as of February 27, 2009, efforts remained unfunded and no further action had been taken to make needed improvements.
The first step in securing funding, according to OFPP and GSA officials, is mandating the use of PPIRS. However, proposed changes to the FAR that would clarify past performance documentation requirements and require the use of PPIRS have been stalled. The proposed rule provides clearer instruction to contracting officers by delineating the requirement to document contractor performance for orders that exceed the simplified acquisition threshold, including those placed against GSA MAS contracts, or for orders against contracts awarded by another agency. In proposing FAR changes, OFPP focused, in part, on accountability by requiring agencies to identify individuals responsible for preparing contractor performance assessments. While the comment period for the proposed changes closed in June 2008, the changes have not been finalized. An OFPP policy official stated that the final rule is expected to be published by June 2009. With the federal government relying on many of the same contractors to provide goods and services across agencies, the need to share information on contractors' past performance in making contract award decisions is critical. While the need for a centralized repository of reliable performance information on federal contractors was identified in 2002 when OFPP implemented PPIRS, we identified several underlying problems that limit the usefulness of information in PPIRS for governmentwide sharing. These problems include the lack of accountability or incentive at agencies to document assessments in the system, lack of standard evaluation factors and rating scales across agencies, and a lack of central oversight to ensure the adequacy of information fed into the system. Any efforts to improve sharing and use of contractor performance information must, at a minimum, address these deficiencies. Until then, PPIRS will likely remain an inadequate information source for contracting officers. 
More important, the government cannot be assured that it has adequate performance information needed to make sound contract award decisions and investments. To facilitate governmentwide sharing and use of past performance information, we recommend that the Administrator of OFPP, in conjunction with agency chief acquisition officers, take the following actions:

- Standardize evaluation factors and rating scales governmentwide for documenting contractor performance.
- Establish policy for documenting performance-related information that is currently not captured systematically across agencies, such as contract terminations for default and a prime contractor's management of its subcontractors.
- Specify that agencies are to establish procedures and management controls, including accountability, for documenting past performance in PPIRS.
- Define governmentwide roles and responsibilities for managing and overseeing PPIRS data.
- Develop system tools and metrics for agencies to use in monitoring and managing the documenting of contractor performance, such as contracts requiring an evaluation and information on delinquent reports.
- Take appropriate action to finalize proposed changes to the FAR that clarify responsibilities and performance documentation requirements for contract actions that involve orders placed against GSA's Multiple Award Schedule.

To improve management and accountability for timely documenting of contractor past performance information at the agency level, we recommend that the departments of Defense, Energy, and Homeland Security and NASA establish management controls and appropriate management review of past performance evaluations, as required and in line with any OFPP policy changes. We provided a draft of this report to OFPP and the departments of Defense, Energy, Homeland Security, GSA, and NASA. We received e-mail comments from OFPP, in which OFPP concurred with the recommendations.
We received written comments from the other five agencies, which are included as appendixes III through VII. In their written comments, the agencies agreed with the recommendation on improving management controls, and most outlined specific actions planned or taken to address it. In its written comments on the draft of this report, DHS did not agree with the figures contained in table 3 regarding estimated contracts requiring an assessment and the number of assessments in PPIRS for selected agencies. DHS stated that our numbers significantly understate the percentage of DHS contracts for which assessments were performed and are possibly inaccurate or misleading in how DHS compares to other agencies. DHS presented its own data and requested that we revise ours. We applied the same methodology across all civilian agencies, including DHS, and found no basis for using the numbers or methodology provided by DHS. For example, while DHS indicates we should not have included delivery orders, as we state in the note under table 3, our estimates did not include individual orders that exceeded the threshold. Therefore, we stand by our methodology and data, which, as we stated in the report, present a conservative estimate of the contracts that required an assessment. Also, we assessed the reliability of the data we used and found it sufficiently reliable for the purposes of our analyses. As a result, we are not revising the figures in table 3. As noted in our report, improvements are needed across agencies in the management and accountability of timely documenting contractor past performance information. In its response, DHS agreed that significant strides need to be made in this area.
In written comments to the draft of this report, GSA stated that our recommendation should be changed to show that the FAR Council in lieu of agency chief acquisition officers would be involved in developing and disseminating governmentwide acquisition policy through the FAR. According to an OFPP policy official, while the FAR Council would be involved in evaluating policy and making changes to the FAR, OFPP is responsible for overall policy concerning past performance and can make policy changes without involving the FAR Council. In line with our recommendations, this would include standards for evaluating past performance and policies for collecting and maintaining the information. As we state in the report, the Chief Acquisition Officers Council, through an OFPP interagency work group, has already established several broad goals for documenting, sharing, and using past performance information. Our recommendations to OFPP, in coordination with this Council, are in part aimed at actions necessary to address these goals. These recommendations could be implemented through an OFPP policy memorandum and could result in changes to the FAR, which we recognize would need to be coordinated through the FAR Council as appropriate. As a result, we are not making changes to the recommendation. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. We will then send copies of this report to interested congressional committees; the Director of the Office of Management and Budget, the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Secretary of the Department of Homeland Security; the Secretary of the Department of Energy; the Secretary of the National Aeronautics and Space Administration; and the Administrator of the General Services Administration. 
We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you have questions about this report or need additional information, please contact me at (202) 512-4146 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. See appendix VIII for a list of key contributors to this report. To assess agencies' use of information on contractors' past performance in awarding contracts, we reviewed and analyzed the Federal Acquisition Regulation (FAR) and Office of Federal Procurement Policy (OFPP) guidance on the use of past performance. We also reviewed source selection guidance for the Department of Defense (DOD), Department of Energy (DOE), Department of Homeland Security (DHS), National Aeronautics and Space Administration (NASA), and the General Services Administration (GSA)--agencies accounting for a large percentage of federal contracting. To obtain agency contracting officials' views on using past performance, we used Federal Procurement Data System-Next Generation (FPDS-NG) data to select 11 buying offices across the agencies to provide a cross-section of buying activities. At these locations, we interviewed 121 contracting officials, including supervisory contract personnel such as division/branch contracting managers, contracting officers, and contract specialists, to discuss 1) how past performance factored into the contract award decision, 2) sources upon which they rely for the information, 3) completing contractor performance assessments, and 4) challenges in using and sharing past performance information. To identify the importance of past performance relative to other non-cost factors in specific solicitations, we used FPDS-NG data from fiscal year 2007 and the first eight months of fiscal year 2008 to identify 62 competitively awarded contracts--49 definitive contracts and 13 orders placed against indefinite delivery vehicle contracts.
We selected these contracts to represent a range of contracts across different buying activities; though not generalizable to all contract actions within these agencies, they represented a range of products and services, types of contracts, and dollar values, as shown in appendix II. We obtained contract documents to verify the fields used in FPDS-NG to select the contracts, including type of contract and product service code, and found the data reliable enough for the purpose of selecting the contracts. For these contracts, we obtained source selection documents, including section M of the requests for proposals, which described the evaluation factors for award, and the source selection decision documents that described how past performance was evaluated for each offeror. We reviewed the evaluation factors for each solicitation to identify how past performance ranked in order of importance relative to other non-cost factors in the evaluation scheme and summarized the results. To assess the extent to which selected agencies in our review complied with requirements for documenting contractor performance, we analyzed FPDS-NG and PPIRS data and used information provided by the DOD CPARS program office. In estimating the number of contracts requiring an assessment for fiscal years 2006 and 2007 for civilian agencies in our review, we aggregated contract actions in FPDS-NG for each year to identify the number of contracts that exceeded the reporting thresholds of $550,000 for construction contracts (FAR § 36.201), $30,000 for architect and engineering contracts (FAR § 36.604), and generally $100,000 for most other contracts (FAR § 2.101). We excluded contracts that are exempt from performance assessments under FAR subpart 8.7--acquisitions from nonprofit agencies employing people who are blind or severely disabled.
For indefinite delivery contracts, including GSA's multiple award schedule, orders were accumulated against the base contract for each agency and counted as one contract if the cumulative orders exceeded the reporting thresholds. This analysis provides a conservative estimate of the number of contracts that require an assessment because it does not include individual orders that may exceed the threshold or contract actions that span fiscal years. For this analysis, we used contract number and dollar obligation fields from FPDS-NG and found them reliable enough for the purpose of this analysis. Because DOD uses different reporting thresholds based on business sectors--information that is not available in FPDS-NG--we obtained compliance reports from the CPARS program office for fiscal years 2006 and 2007, which included estimates of the number of performance assessments that would have been required for DOD components and the number of those contracts with completed assessments. To determine the number of fiscal year 2006 and 2007 contracts with performance assessments for civilian agencies, we obtained and analyzed data from the PPIRS program office on contracts with assessments, including the number of assessments against GSA MAS contracts, as of February 26, 2009. To assess the reliability of data provided, we accessed the PPIRS system and compared the number of contracts with assessments with those provided by the CPARS and PPIRS program offices, and found the data sufficiently reliable for the purpose of our analysis. To assess the usefulness of PPIRS for governmentwide sharing of past performance information, we compared information in each of the three systems used to document contractor performance information, including rating factors and rating scales.
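The counting method described above--roll obligations up to the base contract number, then compare the cumulative total to the applicable threshold--can be sketched in a few lines. This is an illustrative sketch only: the field names, record layout, and sample data are our assumptions, not actual FPDS-NG structures; the dollar thresholds are the ones cited in the report.

```python
# Illustrative sketch of the contract-counting methodology described above.
# Record layout and sample data are hypothetical; thresholds are those
# cited in the report (FAR 36.201, 36.604, and 2.101).
from collections import defaultdict

THRESHOLDS = {
    "construction": 550_000,            # FAR 36.201
    "architect_engineering": 30_000,    # FAR 36.604
    "other": 100_000,                   # generally, most other contracts
}

def contracts_requiring_assessment(actions):
    """Aggregate obligations by contract number and count contracts whose
    cumulative obligations exceed the applicable threshold. Orders against
    an indefinite delivery vehicle roll up to the base contract, so each
    base contract is counted at most once."""
    totals = defaultdict(float)
    category = {}
    for contract_number, cat, dollars in actions:
        totals[contract_number] += dollars
        category[contract_number] = cat
    return sum(1 for num, total in totals.items()
               if total > THRESHOLDS[category[num]])

# Three orders on one base contract cumulatively exceed $100,000, so the
# base contract counts once; a single $40,000 contract does not count.
sample = [
    ("IDV-001", "other", 50_000),
    ("IDV-001", "other", 40_000),
    ("IDV-001", "other", 30_000),
    ("C-002", "other", 40_000),
]
print(contracts_requiring_assessment(sample))  # 1
```

As the report notes, counting this way understates the true population: an individual order that itself exceeds the threshold, or actions spanning fiscal years, would not be picked up, which is why the estimate is conservative.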
In addition, we met with agency officials who have responsibilities for managing the various systems--including the Naval Sea Logistics Center Detachment, Portsmouth, which administers CPARS and PPIRS, and officials at NASA who administer the Past Performance Database. To identify challenges that may hinder the systematic governmentwide sharing of past performance information, we interviewed contracting officials from 11 buying offices regarding a number of issues, including 1) roles in the assessment process, 2) challenges in completing assessments, 3) performance information not currently captured that might be useful for selecting contractors, and 4) use of metrics for managing and monitoring compliance with reporting requirements. Finally, we met with OFPP, GSA, and DOD to discuss the extent of oversight of PPIRS data and roles and responsibilities as applicable. To assess efforts under way or planned to improve the sharing of information on contractor performance, we obtained and reviewed memorandums, plans, and other documents produced by OFPP, including proposed FAR changes and any proposed past performance guidelines. We met with officials from these offices to discuss challenges already identified in sharing and using past performance information, goals they may have established for improving the system, and the status of efforts to address them.
Our work was conducted at the following locations: OFPP, Washington, D.C.; GSA, Arlington, Va.; the Air Force Space and Missile Systems Center, El Segundo, Calif.; Hill Air Force Base, Ogden, Utah; the Army Communications and Electronics Command, Fort Monmouth, N.J.; the Army Sustainment Command, Rock Island, Ill.; the Army Contracting Command, Fort Belvoir, Va.; the Naval Air Systems Command, Patuxent River, Md.; the Naval Sea Systems Command, Washington, D.C.; the Defense Contract Management Agency, Arlington, Va.; DHS, including Customs and Border Protection, Washington, D.C., and the Transportation Security Administration, Arlington, Va.; NASA, including the Goddard Space Flight Center, Greenbelt, Md., and the Johnson Space Center, Houston, Tex.; and DOE, including the National Nuclear Security Administration Service Center, Albuquerque, N.M. We conducted this performance audit from February 2008 to February 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Ann Calvaresi Barr, Director; James Fuquay, Assistant Director; Usman Ahmad; Jeffrey Barron; Barry DeWeese; Julia Kennon; Flavio Martinez; Susan Neill; Karen Sloan; Sylvia Schatz; and Bradley Terry made key contributions to this report.
In fiscal year 2007, federal agencies worked with over 160,000 contractors, obligating over $456 billion, to help accomplish federal missions. This reliance on contractors makes it critical that agencies have the information necessary to properly evaluate a contractor's prior history of performance and better inform contract award decisions. While actions have been taken to improve the sharing and use of past performance information--including the development of the Past Performance Information Retrieval System (PPIRS)--concerns remain about this information. This report assesses agencies' use of past performance information in awarding contracts, identifies challenges that hinder systematic sharing of past performance information, and describes efforts to improve contractor performance information. In conducting this work, GAO analyzed 62 contract solicitations from fiscal years 2007 and 2008 and met with 121 contracting officials. While the solicitations represent a range of contracts and contractors, GAO's findings cannot be generalized to all federal contracts. Agencies considered past performance information in evaluating contractors for each of the 62 solicitations GAO reviewed. Generally, factors other than past performance, such as technical approach or cost, were the primary factors in contract award decisions. A majority of officials told GAO that their reluctance to rely more on past performance was due, in part, to skepticism about the reliability of the information and difficulty assessing its relevance to specific acquisitions. Contracting officials agreed that for past performance information to be useful for sharing, it must be documented, relevant, and reliable.
However, GAO's review of PPIRS data for fiscal years 2006 and 2007 indicates that only a small percentage of contracts had a documented performance assessment; in particular, GAO found little contractor performance information for orders against the General Services Administration's (GSA) Multiple Award Schedule. Other performance information that could be useful in award decisions, such as contract terminations for default and subcontract management, was not systematically captured across agencies. Some officials noted that a lack of accountability and a lack of system tools and metrics made it difficult for managers to ensure timely performance reports. Variations in evaluation and rating factors have also limited the usefulness of past performance information. Finally, a lack of central oversight and management of PPIRS data has hindered efforts to address these and other shortcomings. Several efforts have been initiated to improve PPIRS, but little progress has been made. In 2005, an interagency work group established several broad goals for improving past performance information, including standardizing the performance ratings used by various agencies. However, these goals have yet to be met, and no funding has been dedicated for this purpose. In April 2008, changes to federal regulations were proposed that would clarify past performance documentation requirements and require the use of PPIRS. However, as of February 2009, the proposed changes had not been finalized.
NRC is responsible for ensuring that the nation's 103 operating commercial nuclear power plants pose no undue risk to public health and safety. Now, however, the electric utility industry is faced with an unprecedented, overarching development: the economic restructuring of the nation's electric power system, from a regulated industry to one driven by competition. According to one study, as many as 26 of the nation's nuclear power plant sites are vulnerable to shutdown because production costs are higher than the projected market prices of electricity. As the electric utility industry is deregulated, operating and maintenance costs will affect the competitiveness of nuclear power plants. NRC acknowledges that competition will challenge it to reduce unnecessary regulatory burden while ensuring that safety margins are not compromised by utilities' cost-cutting measures. Since the early 1980s, NRC has been considering the role of risk in the regulatory process, and in August 1995, NRC issued a policy statement that advocated certain changes in the development and implementation of its regulations through an approach more focused on risk assessment. Under such an approach, NRC and the utilities would give more emphasis to those structures, systems, and components deemed more significant to safety. The following example illustrates the difference between NRC's existing and a risk-informed approach. One particular nuclear plant has about 635 valves and 33 pumps that the utility must operate, maintain, and periodically replace according to NRC's existing regulations. Under a risk-informed approach, the utility found that about 515 valves and 12 pumps presented a low safety risk. The utility identified 25 components that were a high risk but would have been treated the same as other components under the existing regulations. 
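The counts in this example are internally consistent; as an illustrative check (the figures come from the report's plant example, the arithmetic below is ours):

```python
# Arithmetic check of the plant example above; the valve, pump, and
# component counts come from the report, and the derivation is simple
# subtraction.
total_valves, total_pumps = 635, 33
low_risk_valves, low_risk_pumps = 515, 12

high_risk_valves = total_valves - low_risk_valves  # valves still high risk
high_risk_pumps = total_pumps - low_risk_pumps     # pumps still high risk
other_high_risk = 25  # components the existing rules treat like all others

focus_items = high_risk_valves + high_risk_pumps + other_high_risk
print(high_risk_valves, high_risk_pumps, focus_items)  # 120 21 166
```

Under a risk-informed approach, the utility's attention would thus narrow from all 668 valves and pumps to 166 high-risk items.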
If the utility concentrated on the 120 valves, 21 pumps, and 25 components that have been identified as having a high safety risk, it could reduce its regulatory compliance burden and costs. NRC staff estimate that it could take 4 to 8 years to implement a risk-informed regulatory approach and are working to resolve many issues to ensure that the new approach does not endanger public health and safety. Although NRC has issued guidance for utilities to use risk assessments to meet regulatory requirements for specific activities and has undertaken many activities to implement a risk-informed approach, more is needed to

- ensure that utilities have current and accurate documentation on the design of the plant and the structures, systems, and components within it and final safety analysis reports that reflect changes to the design and other analyses conducted after NRC issued the operating license;
- ensure that utilities make changes to their plants based on complete and accurate design and final safety analysis information;
- determine whether, how, and what aspects of NRC's regulations to change;
- develop standards on the scope and detail of the risk assessments needed for utilities to determine that changes to their plants' design will not negatively affect safety; and
- determine whether compliance with risk-informed regulations should be mandatory or voluntary.

Furthermore, NRC has not developed a comprehensive strategy that would move its regulation of nuclear plant safety from its traditional approach to an approach that considers risk. Design information provides one of the bases for NRC's safety regulation. Yet, for more than 10 years, NRC has questioned whether utilities had accurate design information for their plants.
Inspections of 26 plants that NRC completed early in fiscal year 1999 confirmed that for some plants (1) utilities had not maintained accurate design documentation, (2) NRC did not have assurance that safety systems would perform as intended at all times, and (3) NRC needed to clarify what constitutes design information subject to NRC's regulations. As of November 1998, NRC had taken escalated enforcement actions for violations found at five plants--Three Mile Island, Perry, H.B. Robinson, Vermont Yankee, and D.C. Cook. NRC took these actions because it did not have assurance that the plants' safety systems would perform as intended. One utility, American Electric Power, shut down its D.C. Cook plant as a result of the inspection findings. NRC does not plan additional design team inspections because it concluded that the industry did not have serious safety problems. NRC's Chairman disagreed with this broad conclusion, noting that (1) the inspection results for the five plants indicate the importance of maintaining current and accurate design and facility configuration information, (2) the inspections did not apply to the industry as a whole but only to certain utilities and plants within the industry, and (3) other NRC inspections identified design problems at such other nuclear power plants as Crystal River 3, Millstone, Haddam Neck, and Maine Yankee. The Commissioners and staff agreed that NRC would oversee design information issues using such tools as safety system engineering inspections. The 26 inspections also identified a need for NRC to better define the elements of a plant's design that are subject to NRC's regulations. NRC staff acknowledge that the existing regulation is a very broad, general statement that has been interpreted differently among NRC staff and among utility and industry officials. According to NRC staff, it is very difficult to develop guidance describing what constitutes adequate design information.
Therefore, NRC has agreed that the Nuclear Energy Institute (NEI) would provide explicit examples of what falls within design parameters. NEI plans to draft guidance that will include examples of design information and provide it to NRC in January 1999. Concurrently, NRC is developing regulatory guidance on design information. NRC staff expect to recommend to the Commission in February 1999 that it endorse either NRC's or NEI's guidance and seek approval to obtain public comments in March or April 1999. NRC staff could not estimate when the agency would complete this effort. At the time NRC licenses a plant, the utility prepares a safety analysis report; NRC regulations require the utility to update the report to reflect changes to the plant design and the results of analyses that support modifying the plants without prior NRC approval. As such, the report provides one of the foundations to support a risk-informed approach. Yet, NRC does not have confidence that utilities make the required updates, which results in poor documentation of the safety basis for the plants. NRC published guidance for the organization and contents of safety analysis reports in June 1966 and updated the guidance in December 1980. NRC acknowledges that the guidance is limited, resulting in poorly articulated staff comments on the quality of the safety analysis reports and a lack of understanding among utilities about the specific aspects of the safety analysis reports that should be updated. On June 30, 1998, NRC directed its staff to continue working with NEI to finalize the industry's guidelines on safety analysis report updates, which NRC could then endorse. Once the agency endorses the guidelines, it will obtain public comments and revise them, if appropriate. NRC expects to issue final guidelines in September 1999. 
According to NRC documents, if a utility does not have complete and accurate design information, the evaluations conducted to determine whether it can modify a plant without prior NRC approval can lead to erroneous conclusions and jeopardize safety. For more than 30 years, NRC's regulations have provided a set of criteria that utilities must use to determine whether they may change their facilities (as described in the final safety analysis report) or procedures or conduct tests and experiments without NRC's prior review and approval. However, in 1993, NRC became aware that Northeast Nuclear Energy Company had refueled Millstone Unit 1 in a manner contrary to that allowed in the updated final safety analysis and its operating license. This led NRC to question the regulatory framework that allows licensees to change their facilities without prior NRC approval. As a result, NRC staff initiated a review to identify the short- and long-term actions needed to improve the process. For example, in October 1998, NRC published a proposed regulation regarding plant changes in the Federal Register for comment; the comment period ended on December 21, 1998. NRC requested comments on criteria for identifying changes that require a license amendment and on a range of options, several of which would allow utilities to make changes without prior NRC approval despite a potential increase in the probability or consequences of an accident. NRC expects to issue a final regulation in June 1999. In addition, in February 1999, NRC staff expect to provide their views to the Commission on changing the scope of the regulation to consider risk. NRC's memorandum that tracks the various tasks related to a risk-informed approach and other initiatives did not show when NRC would resolve this issue. Until recently, NRC did not consider whether and to what extent the agency should revise all its regulations pertaining to commercial nuclear plants to make them risk-informed. 
Revising the regulations will be a formidable task because, according to NRC staff, inconsistencies exist among the regulations and because a risk-informed approach focuses on the potential risk of structures, systems, or components, regardless of whether they are located in the plant's primary (radiological) or secondary (electricity-producing) systems. With one exception, NRC has not attempted to extend its regulatory authority to the secondary systems. NRC staff and NEI officials agree that the first priority in revising the regulations will be to define their scope as well as the meaning of such concepts as "important to safety" and "risk significant" and to integrate the traditional and risk-informed approaches into a cohesive regulatory context. In October 1998, NEI proposed a phased approach to revising the regulations. Under the proposal, by the end of 1999, NRC would define "important to safety" and "risk significant." By the end of 2000, NRC would use the definitions in proposed rulemakings for such regulations as the definition of design information and the environmental qualification of electrical equipment. By the end of 2003, NEI proposes that NRC address other regulatory issues, such as the change process, the content of technical specifications, and license amendments. After 2003, NEI proposes that NRC address other regulations on a case-by-case basis. NRC staff agreed that the agency must take a phased approach when revising its regulations. The Director, Office of Nuclear Regulatory Research, said that, if NRC attempted to revise all provisions of the regulations simultaneously, it is conceivable that the agency would accomplish very little. The Director said that NRC needs to address one issue at a time while concurrently working on longer-term actions. He cautioned, however, that once NRC starts, it should be committed to completing the process.
At a January 1999 meeting, NRC's Chairman suggested a more aggressive approach that would entail risk-informing all regulations across the board. NRC's memorandum that tracks the various tasks related to a risk-informed approach and other initiatives did not show when the agency would resolve this issue. NRC and the industry view risk assessments as one of the main tools to be used to identify and focus on those structures, systems, or components of nuclear plant operations having the greatest risk. Yet, neither NRC nor the industry has a standard or guidance that defines the quality, scope, or adequacy of risk assessments. NRC staff are working with the American Society of Mechanical Engineers to develop such a standard. However, this issue is far from being resolved. The Society is developing the standard for risk assessments in two phases (internal events and emergency preparedness). NRC staff estimate that the agency would have a final standard on the first phase by June 2000 but could not estimate when the second phase would be complete. To ensure consistency with other initiatives, in December 1998, NRC staff requested the Commission's direction on the quality of risk assessments needed to implement a risk-informed approach. Since it may be several years until NRC has a standard, the Commission should also consider the effect that the lack of a standard could have on its efforts to implement a risk-informed regulatory approach. NRC has not determined whether compliance with revised risk-informed regulations would be mandatory or voluntary for utilities. In December 1998, NRC's staff provided its recommendations to the Commission. The staff recommended that implementation be voluntary, noting that it would be very difficult to show that requiring mandatory compliance would increase public health and safety and that a mandate could create the impression that current plants are less safe.
In its analysis, the staff did not provide the Commission with information on the number of plants that would be interested in such an approach. In January 1999, the Commissioners expressed concern about a voluntary approach, believing that it would create two classes of plants operating under two different sets of regulations. Utilities may be reluctant to shift to a risk-informed regulatory approach for various reasons. First, the number of years remaining on a plant's operating license is likely to influence the utility's views. NRC acknowledged that if a plant's license is due to expire in 10 years or less, then the utility may not have anything to gain by changing from the traditional approach. Second, the costs to comply may outweigh the benefits of doing so. Considering the investment that will be needed to develop risk-informed procedures and operations and identify safety-significant structures, systems, or components, utilities question whether a switch will be worth the reduction in regulatory burden and cost savings that may result. Third, design differences and age disparities among plants make it difficult for NRC and the industry to determine how, or to what extent, a standardized risk-informed approach can be implemented across the industry. Although utilities built one of two types of plants--boiling water or pressurized water--each has design and operational differences. Thus, each plant is unique, and a risk-informed approach would require plant-specific tailoring. Since the early 1980s, NRC has considered applying risk to the regulatory process. NRC staff estimate that it will be at least 4 to 8 years before the agency implements a risk-informed approach. However, NRC has not developed a strategic plan that includes objectives, time lines, and performance measures for such an approach. 
Rather, NRC has developed an implementation plan, in conjunction with its policy statement on considering risk, that is a catalog of about 150 separate tasks and milestones for their completion. It has also developed guidance for some activities, such as pilot projects in the four areas where the industry wanted to test the application of a risk-informed approach. In one case, NRC approved a pilot project for Houston Lighting and Power Company at its South Texas plant, but the utility found that it could not implement the project because it would conflict with other NRC regulations. Given the complexity and interdependence of NRC's requirements--such as regulations, plant design, and safety documents--and the results of ongoing activities, it is critical that NRC clearly articulate how the various initiatives will help achieve the goals set out in the 1995 policy statement. Although NRC's implementation plan sets out tasks and expected completion dates, it does not ensure that short-term efforts are building toward NRC's longer-term goals; does not link the various ongoing initiatives; does not help the agency determine the appropriate staff levels, training, skills, and technology needed, and the timing of those activities, to implement a risk-informed approach; does not provide a link between the day-to-day activities of program managers and staff and the objectives set out in the policy statement; and does not address the manner in which NRC would establish baseline information about the plants to assess the safety impact of a risk-informed approach. In a December 1998 memorandum, NRC staff said that once the Commission provides direction on whether and how to risk-inform the regulations and guidance on the quality of risk assessments to support their decisions for specific regulations, they would develop a plan to implement the direction provided. The staff did not provide an estimated time frame for completing the plan. 
For many years, the nuclear industry and public interest groups have criticized NRC's plant assessment and enforcement processes because they lacked objectivity, consistency, and predictability. In January 1999, NRC proposed a new process to assess overall plant performance based on generic and plant-specific safety thresholds and performance indicators. NRC is also reviewing its enforcement process to ensure consistency with the staff's recommended direction for the assessment process and other programs. In 1997 and 1998, we noted that NRC's process to focus attention on plants with declining safety performance needed substantial revisions to achieve its purpose as an early warning tool and that NRC did not consistently apply the process across the industry. We also noted that this inconsistency had been attributed, in part, to the lack of specific criteria, the subjective nature of the process, and the confusion of some NRC managers about their role in the process. NRC acknowledged that it should do a better job of identifying plants deserving increased regulatory attention and said that it was developing a new process that would be predictable, nonredundant, efficient, and risk-informed. In January 1999, NRC proposed a new plant assessment process that includes seven "cornerstones." For each cornerstone, NRC will identify the desired result, important attributes that contribute to achieving the desired result, areas to be measured, and the various ways that exist to measure the identified areas. Three issues cut across the seven cornerstones: human performance, a safety-conscious work environment, and problem identification and resolution. As proposed, NRC's plant assessment process would use performance indicators, inspection results, other information such as utility self-assessments, and clearly defined, objective decision thresholds. 
The process is anchored in a number of principles, including that: (1) a level of safety performance exists that could warrant decreased NRC oversight, (2) performance thresholds should be set high enough to permit NRC to arrest declining performance, (3) NRC must assess both performance indicators and inspection findings, and (4) NRC will establish a minimum level of inspections for all plants (regardless of performance). Although some performance indicators would be generic to the industry, others would be plant-specific based, in part, on the results that utilities derive from their risk assessments. However, the quality of risk assessments and the number of staff devoted to maintaining them vary considerably among utilities. NRC expects to use a phased approach to implement the revised plant assessment process. Beginning in June 1999, NRC expects to pilot test the use of risk-informed performance indicators at eight plants; by January 2000, to fully implement the process; and by June 2001, to complete an evaluation and propose any adjustments or modifications needed. Between January 1999 and January 2001, NRC expects to work with the industry and other stakeholders to develop a comprehensive set of performance indicators to more directly assess plant performance relative to the cornerstones. For those cornerstones or aspects of cornerstones where it is impractical or impossible to develop performance indicators, NRC would use its inspections and utilities' self-assessments to reach a conclusion about plant performance. NRC's proposed process illustrates an effort by the current Chairman and other Commissioners to improve NRC's ability to help ensure safe operations of the nation's nuclear plants as well as address industry concerns regarding excessive regulation. NRC's ensuring consistent implementation of the process ultimately established would further illustrate the Commissioners' commitment. 
NRC has revised its enforcement policy more than 30 times since its implementation in 1980. Although NRC has attempted to make the policy more equitable, the industry has had longstanding problems with it. Specifically, NEI believes that the policy is not safety-related, timely, or objective. Among the more contentious issues are NRC's practice of aggregating lesser violations into an enforcement action that results in civil penalties and its use of the term "regulatory significance." At NRC's request, and to facilitate a discussion about the enforcement program (including the use of regulatory significance and the practice of aggregating lesser violations), NEI and the Union of Concerned Scientists reviewed 56 enforcement actions taken by the agency during fiscal year 1998. For example, NEI reviewed the escalated enforcement actions against specific criteria, such as whether the violation that resulted in an enforcement action could cause an offsite release of radiation, onsite or offsite radiation exposures, or core damage. From an overall perspective, the Union concluded that NRC's actions are neither consistent nor repeatable and that the enforcement actions did not always reflect the severity of the offense. According to NRC staff, they plan to meet with various stakeholders in January and February 1999 to discuss issues related to the enforcement program. Another issue is the use of the term "regulatory significance" by NRC inspectors. NRC, according to NEI and the Union of Concerned Scientists, uses "regulatory significance" when inspectors cannot define the safety significance of violations. However, when the use of regulatory significance results in financial penalties, neither NRC nor the utility can explain to the public the reasons for the violation. As a result, the public cannot determine whether the violation presented a safety concern. NEI has proposed a revised enforcement process. 
NRC is reviewing the proposal as well as other changes to the enforcement process to ensure consistency with the draft plant safety assessment process and other changes being proposed as NRC moves to risk-informed regulation. NRC's memorandum of tasks shows that the staff expect to provide recommendations to the Commission in March 1999 on the use of the term "regulatory significance" and in May 1999 on considering risk in the enforcement process. In January 1999, we provided the Congress with our views on the major management challenges that NRC faces. We believe that the management challenges we identified have limited NRC's effectiveness. In summary, we reported that: NRC lacks assurance that its current regulatory approach ensures safety. NRC assumes that plants are safe if they operate as designed and follow NRC's regulations. However, NRC's regulations and other guidance do not define, for either a licensee or the public, the conditions necessary for a plant's safety; therefore, determining a plant's safety is subjective. NRC's oversight has been inadequate and slow. Although NRC's indicators show that conditions throughout the nuclear energy industry have generally improved, they also show that some nuclear plants are chronically poor performers. At three nuclear plants with long-standing safety problems that we reviewed, NRC did not take aggressive action to ensure that the utilities corrected the problems. As a result of NRC's inaction, the conditions at the plants worsened, reducing safety margins. NRC's culture and organizational structure have made the process of addressing concerns with the agency's regulatory approach slow and ineffective. Since 1979, various reviews have concluded that NRC's organizational structure, inadequate management control, and inability to oversee itself have impeded its effectiveness. Some of the initiatives that NRC has underway have the potential to address the first two management challenges. 
However, the need to ensure that NRC's regulatory programs work as effectively as possible is extremely important, particularly in light of major changes taking place in the electric utility industry and in NRC. Yet changing NRC's culture will not be easy. In a June 1998 report, the Office of the Inspector General noted that NRC's staff had a strong commitment to protecting public health and safety. However, the staff expressed high levels of uncertainty and confusion about the new directions in regulatory practices and challenges facing the agency. The employees said that, in their view, they spend too much time on paperwork that may not contribute to NRC's safety mission. The Inspector General concluded that without significant and meaningful improvement in management's leadership, employees' involvement, and communication, NRC's current climate could eventually erode the employees' outlook and commitment to doing their job. This climate could also erode NRC's progress in moving forward with a risk-informed regulatory approach. According to staff, NRC recognizes the need to effectively communicate with its staff and other stakeholders and is developing plans to do so. Mr. Chairman and Members of the Subcommittee, this concludes our statement. We would be pleased to respond to any questions you may have. 
Pursuant to a congressional request, GAO discussed the actions the Nuclear Regulatory Commission (NRC) has taken to move from its traditional regulatory approach to an approach that considers risk in conjunction with engineering analyses and operating experience--termed risk-informed regulation--focusing on the: (1) issues that NRC needs to resolve to implement a risk-informed regulatory approach; (2) status of NRC's efforts to make two of its oversight programs--overall plant safety assessments and enforcement--risk-informed; and (3) major management challenges that NRC faces. GAO noted that: (1) since July 1998, NRC has accelerated some activities needed to implement a risk-informed regulatory approach and has established and set milestones for others; (2) however, NRC has not resolved the most basic of issues; (3) that is, that some utilities do not have current and accurate design information for their nuclear power plants, which is needed for a risk-informed approach; (4) also, neither NRC nor the nuclear utility industry has standards or guidance that define the quality or adequacy of the risk assessments that utilities use to identify and measure the risks to public health and the environment; (5) furthermore, NRC has not determined if compliance with risk-informed regulations will be voluntary or mandatory for the nuclear utility industry; (6) more fundamentally, NRC has not developed a comprehensive strategy that would move its regulation of the safety of nuclear power plants from its traditional approach to an approach that considers risk; (7) in January 1999, NRC released for comment a proposed process to assess the overall safety of nuclear power plants; (8) the process would establish generic and plant-specific safety thresholds and indicators to help NRC assess overall plant safety; (9) NRC expects to phase in the new process over the next 2 years and evaluate it by June 2001, at which time NRC would propose any adjustments or modifications needed; 
(10) in addition, NRC has been examining the changes needed to its enforcement program to make it consistent with, among other things, the proposed plant safety assessment process; (11) for many years, the nuclear industry and public interest groups have criticized the enforcement program as subjective; (12) in the spring of 1999, NRC staff expect to provide the Commission recommendations for revising the enforcement program; (13) in January 1999, GAO identified major management challenges that limit NRC's effectiveness; (14) the challenges include the lack of a definition of safety and lack of aggressiveness in requiring utilities to comply with safety regulations; and (15) NRC's revised plant safety assessment and enforcement initiatives may ultimately help the agency address these management challenges and carry out its safety mission more effectively and efficiently.
The growing sophistication and effectiveness of cyber attacks, and the increase in information assurance and information assurance-enabled information technology (IT) products available for use on national security systems, have heightened federal attention to the need for information assurance. As a result of these trends, acquiring commercial IT products for national security systems that perform as vendors claim has become a governmentwide challenge. While not a complete solution, an important way to increase confidence in commercial IT products is through independent testing and evaluation of their security features and functions during design and development. In 1997, NIST and the National Security Agency collaborated to form the NIAP. The purpose of the partnership is to boost consumers' and federal agencies' confidence in information security products and enhance the ability of U.S. companies to gain international recognition and acceptance for their products. The five main goals of NIAP are to: promote the development and use of evaluated IT products and systems; champion the development and use of national and international standards for IT security; foster research and development in IT security requirements definition, test methods, tools, techniques, and assurance metrics; support a framework for international recognition and acceptance of IT security testing and evaluations; and facilitate the development and growth of a commercial security testing industry within the U.S. To facilitate achievement of these goals, NIAP developed a national program called the Common Criteria Evaluation and Validation Scheme. The program is based on an international standard of general concepts and principles of IT security evaluations for the international community. The program evaluates, through various evaluation assurance levels (see app. II), commercial-off-the-shelf information assurance and information assurance-enabled products for the federal government. 
These products can be items of hardware, software, or firmware. As part of the evaluation, agencies can specify a degree of confidence desired in a product through protection profiles. While a protection profile is not required in order to have a product evaluated, a vendor is required to develop a security target. NIAP evaluations are performed by accredited Common Criteria testing laboratories. While a product is undergoing evaluation, the NIAP validation body--an activity currently managed by the National Security Agency--approves participation of security testing laboratories in accordance with accreditation policies and procedures. It also reviews the results of the security evaluations performed by the laboratories and issues a validation report, which summarizes and provides independent validation of the results. A product is considered NIAP-certified only after it is both evaluated by an accredited laboratory and validated by the validation body. Upon successful completion of these requirements, the validation body issues a Common Criteria certificate for the evaluated product. All evaluated products that receive a NIAP Common Criteria certificate appear on a validated products list available on NIAP's Web site. According to the Committee on National Security Systems--a forum for the discussion of policy issues that sets federal policy and promulgates direction, operational procedures, and guidance for the security of national security systems--the fact that a product appears on the validated products list does not by itself mean that it is secure. A product's listing on any Common Criteria validated products list means that the product was evaluated against its security claims and that it has met those claims. Figure 1 outlines the NIAP evaluation process. 
In order to maintain the validity of an evaluation when a product upgrades to its next version, a vendor can request either a re-evaluation of the entire new product version or validation of only the changes in the product. To request the latter, a vendor must participate in the NIAP Assurance Maintenance Program. To participate in this program, a vendor must submit a request that addresses how it plans to maintain the product and a report of what will be maintained. Vendors can select any one of the 10 accredited commercial testing laboratories to perform product evaluations. The vendor and testing laboratory negotiate evaluation costs, which can vary according to: the laboratory selected; the assurance level the product is tested against (see fig. 2); the scope of evaluation--the tendency of vendors to include elements in their security target that agencies may not require introduces additional costs; and the design of the product--if a product is designed so that its security functions are performed by a small number of modules, it may be possible to limit the portion of the product that must be examined. NVLAP identifies NVLAP-accredited laboratories on its Web site. Accreditation criteria are established in accordance with the U.S. Code of Federal Regulations (CFR, Title 15, Part 285) and NVLAP Procedures and General Requirements, and encompass the requirements of ISO/IEC 17025 and the relevant requirements of ISO 9002. In January 2000, a federal policy, revised in June 2003, was established that required the use of evaluated products for national security systems. Specifically, the Committee on National Security Systems established National Security Telecommunications and Information Systems Security Policy Number 11. 
The policy required, effective July 1, 2002, that all commercial-off-the-shelf information assurance and information assurance-enabled IT products acquired for use on national security systems be evaluated and validated in accordance with one of the following criteria: (1) the International Common Criteria for Information Security Technology Evaluation Recognition Arrangement; (2) the NIAP Common Criteria Evaluation and Validation Scheme; or (3) the NIST Federal Information Processing Standards Cryptographic Module Validation Program. The objective of the policy is to ensure that these products, which are acquired by the federal government, undergo a standardized evaluation validating that a product either performs as claimed or meets the user's security requirements. The policy requires that the evaluation and validation of such products be conducted by accredited commercial laboratories or, for government off-the-shelf products, by the National Security Agency. It does not require mandatory compliance for information assurance products acquired prior to July 1, 2002, and includes a provision for deferred compliance, on a case-by-case basis, when evaluated information assurance products do not cover the full range of potential user applications or do not incorporate the most current technology. Moreover, while not a requirement, the federal policy includes provisions for departments and agencies that may wish to consider using the NIAP process for the acquisition and appropriate implementation of evaluated and validated products for non-national security systems. The use of commercial products that have been independently tested and evaluated is only a part of a security solution that contributes to the overall information assurance of a product. 
Other complementary controls are needed, including sound operating procedures, adequate information security training, overall system certification and accreditation, sound security policies, and well-designed system architectures. According to the Committee on National Security Systems, the protection of systems encompasses more than just acquiring the right product. The committee notes that once acquired, these products must be integrated properly and subjected to a system accreditation process, as discussed above, which will help to ensure the integrity of the information and systems to be protected. For federal agencies, such an overall security solution is spelled out by the Federal Information Security Management Act. The act requires federal agencies to protect and maintain the confidentiality, integrity, and availability of their information and information systems. Among other things, the act requires each agency (including agencies with national security systems) to develop, document, and implement agencywide information security programs to provide information security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. More specifically, the Federal Information Security Management Act stipulates that the head of each agency operating or exercising control of a national security system is responsible for providing information security protections commensurate with the risk and magnitude of harm that could result should a security breach occur. The act also stipulates that agency heads are responsible for implementing information security policies and practices as required by standards and guidelines for national security systems. The Department of Defense and the Director of Central Intelligence have authority under the act to develop policies, guidelines, and standards for national security systems. 
The Federal Information Security Management Act also requires NIST, among other things, to provide technical assistance to agencies; to evaluate private sector security policies and practices; to evaluate commercially available IT, as well as practices developed for national security systems; and to assess their potential application by agencies to strengthen information security for non-national security systems. While the NIAP evaluation process offers benefits to national security systems, its effectiveness has not been measured or documented, and considerable challenges to acquiring and using NIAP-evaluated products exist. NIAP process participants--vendors, laboratories, federal agencies, and NIAP officials--identified benefits to using the process for national security systems, including independent testing and evaluation of IT products and accreditation of the performing laboratories, which can give agencies confidence that the products will perform as claimed; international recognition of evaluated products, which provides agencies broader product selection and reduces vendor burden; discovery of software flaws in product security features and functions, which can cause vendors to fix them; and improvements to vendor development processes, which help to improve the overall quality of current and future products. Independent testing and evaluation of commercial IT products and accreditation of the laboratories that perform the tests and evaluations can give agencies increased assurance that the products will perform as vendors claim. Independent testing is a best practice for assuring conformance to functional, performance, reliability, and interoperability specifications--especially for systems requiring elevated levels of security or trust. As discussed previously, NIAP requires vendors to obtain independent testing and evaluation of specific security features and functions that are built into their products. 
Agencies are able to use the results of validation reports to distinguish between competing products and thus make better-informed IT procurement decisions. Further, the Committee on National Security Systems encourages agencies to review the security target of a product and determine its appropriateness for the environment in which the product will operate. In our survey, 15 of 18 federal agencies reported that they have derived benefits from acquiring and using products evaluated by the NIAP process. Of these 15 agencies, 11 reported that the availability of evaluated products helped the agency make IT procurement decisions; 9 reported that the process provided their agency with thorough and accurate product documentation; and 1 reported that evaluated products provided a common method of performing a particular security service that is implemented in different types of security or security-enabled devices, potentially resulting in a greater degree of standardization of elements (such as audit entries). Moreover, the NIST-administered National Voluntary Laboratory Accreditation Program (NVLAP) reviews laboratories annually to ensure competence and compliance with standards. Accreditation is granted to laboratories following their successful completion of a process that includes an application submission and fee payment by the laboratory, an on-site assessment, participation in proficiency testing, resolution of any deficiencies identified during the process, and a technical evaluation. The issuance of a certificate formally signifies that a laboratory has demonstrated that it meets all NVLAP requirements and operates in accordance with the management and technical requirements of the relevant standards. However, the accreditation does not imply any guarantee of laboratory performance or test and calibration data; it is solely a finding of laboratory competence and compliance with standards. Figure 3 shows the laboratory accreditation process. 
Another benefit of the NIAP evaluation process is NIAP's membership in the Arrangement on the Recognition of Common Criteria Certificates in the Field of IT Security. As part of the goals of the arrangement, members can increase the availability of evaluated IT products and protection profiles for national use and eliminate duplicate evaluations of IT products and protection profiles, thus giving agencies a broader selection of evaluated products from which to choose. Agencies have the ability to acquire products that have been evaluated at evaluation assurance levels 1 through 4 from any of the countries that have an evaluation scheme. As of February 2006, there were 22 global signatories to the recognition arrangement, and 247 evaluated products available. The recognition arrangement also reduces the burden on vendors by limiting the number of criteria to which their products must conform and the number of evaluations that a vendor needs to complete in order to sell a product internationally. Because NIAP evaluations (evaluation assurance levels 1-4) are accepted by the arrangement, vendors that go through the NIAP process can sell their evaluated products in any of the 22 member countries. Vendors are able to save time and money since they do not need to complete multiple evaluations to sell their product in different countries. Another benefit of the NIAP process is that it uncovers flaws during product evaluations and can cause vendors to fix them. NIAP, vendor, and laboratory officials stated that the NIAP evaluation process has uncovered flaws and vulnerabilities in evaluated products. According to NIAP officials, software flaws are found in nearly all evaluated products, with an evaluation resulting in an average of two to three fixes. According to the four vendors included in our review, the NIAP evaluation process discovered flaws or vulnerabilities in their products or their product documentation. 
Also, officials from one of the laboratories included in our review stated that out of the 90 products they have evaluated, all of them had documentation flaws. Although vendors have the option of removing from the evaluation security features or functions in which flaws have been identified, any flaws in the remaining security features or functions must be fixed in order to successfully complete the product evaluation. Nonetheless, agencies procuring NIAP-evaluated products have a higher level of assurance that the product's security features and functions will perform as claimed in the validation report. Product evaluations can influence vendors to make improvements to their development processes that raise the overall quality of their current and future products. To complete a successful evaluation, vendors submit to laboratories their development documentation, which describes various processes related to security, such as software configuration controls. Officials at six of the seven vendors we visited stated that product evaluations had a positive influence on their development process. According to one of the six vendors, completed product evaluations that result in improvements to their development process would likely transfer to the development process of other products and help improve the overall quality of their products. Laboratory officials also stated that NIAP evaluations often result in vendors improving their software development process because vendors adopt some of the methodologies used to pass evaluation, such as test methods and documentation, for their own quality assurance processes. Additionally, we previously reported that vendors who are proactive and adopt effective development processes and practices can drastically reduce the number of flaws in their products. NIAP process participants--NIAP officials and selected vendors, laboratories, and federal agencies--identified challenges to acquiring and using NIAP-evaluated products. 
NIAP-evaluated products do not always meet agencies' needs, which limits agencies' acquisition and use of these products. A lack of vendor awareness of the NIAP evaluation process impacts the timely completion of the evaluation and validation of products. A reduction in the number of validators available to certify products could contribute to delays in validating products for agency use. And a lack of performance measures and difficulty in documenting the effectiveness of the NIAP process make it difficult to demonstrate the program's usefulness, improvements made to products' security features and functions, or improvements to vendors' development processes. Collectively, these challenges hinder the effective use of the NIAP evaluation process by vendors and agencies. Meeting agency needs for NIAP-evaluated products for use in national security systems can be a challenge. According to agency responses to our survey, 10 of 18 agencies that purchased NIAP-evaluated products reported experiencing challenges in acquiring those products. Specifically, 10 agencies noted that products on the NIAP-evaluated product list were not the most current versions; and 7 agencies noted that products needed by their agency were not included on the NIAP-evaluated product list. Agencies also reported additional challenges for acquiring NIAP-evaluated products. Specifically, choices for evaluated products are somewhat limited compared to the general product marketplace; and the length of time required for a product to complete the evaluation process can delay availability of the most up-to-date technology. However, opportunities exist to better match agency needs with the availability of NIAP-evaluated products: Agencies can write protection profiles to define the exact security parameter specifications that they need. 
For example, two of the vendors we visited stated that they had their products evaluated against the Controlled Access Protection Profile, which provides agencies with a set of security functional and assurance requirements for their IT products and also provides a level of protection against threats of inadvertent or casual attempts to breach the system security. Vendors can enter the evaluation process before their products are publicly released, which can allow consumers to acquire the most up-to-date technology. One vendor we visited had taken such a proactive approach. Agencies can use the NIAP-validated products list to identify products that meet their needs. Because the number of available NIAP-evaluated products is increasing, agencies now have a variety of products from which to choose. In January 2002, there were about 20 evaluated products. As of February 2006, there were 127 evaluated products and 142 products in evaluation. These evaluated products span 26 categories of information assurance products and information assurance-enabled products, including operating systems and firewalls. As products continue to enter evaluation, agencies' needs may be better met. Vendors can, by participating in the NIAP Assurance Maintenance Program, maintain the validity of an evaluation when a product upgrades to its next version by either requesting a re-evaluation of the entire new product version or validation of only the changes in the product. Vendors' participation in this program may allow agencies to have the most recent products available to them. Agencies can increase their selection of products through the Common Criteria Recognition Arrangement--available on the Common Criteria portal Web site--which currently has 247 evaluated products available. The products listed on the Web site give agencies more choices of products evaluated at evaluation assurance levels 4 and below. 
Another challenge faced by the NIAP process is the lack of vendor awareness regarding the requirements of the evaluation process. For example, vendors who are new to the evaluation process are not aware of the extensive documentation requirements. Creating documentation to meet evaluation requirements can be an expensive and time-consuming process. According to laboratory officials, about six months is the average time for vendors to complete the required documentation before test and evaluation can begin. However, if vendors consistently maintain their documentation, subsequent evaluations can be faster and less expensive since the vendor has previously produced the documentation and is already familiar with the process. Also, some vendors are not as active as others in the evaluation process, which can cause varying lengths of time for completing the evaluation. Vendors who are actively involved in the process are usually able to complete the process more quickly, including fixing flaws, than those who are not actively involved. According to one laboratory, the more active a vendor is in the evaluation process, the faster and less expensive it will be for the vendor. As such, the amount of involvement by the vendor during the process and the timeliness with which it fixes discovered flaws affect the length of time the product is in evaluation. Furthermore, some vendors and laboratories do not have the same perception of the length of time required to perform the evaluation. According to laboratory officials, the length of time needed for conducting product evaluations varies depending on the type of product being evaluated and the evaluation assurance level (see fig. 4). Vendors are often not aware of these requirements and tend to underestimate the length of time required for evaluations. Vendors and laboratories also perceive the length of evaluations differently because they mark the start and end dates of an evaluation differently. 
Vendors measure the length of an evaluation from the day they decide to go into evaluation to the day they receive their product certificate. Their measurement includes selecting and negotiating with a laboratory, preparing required documentation, and testing the security features and functions. Laboratories, on the other hand, consider the length of an evaluation to be from the day they sign a contract with the vendor to the day they complete testing. While Common Criteria user forums for program participants have been held, in which NIAP participated, NIAP itself has not developed education and training workshops that focus on educating participants on specific requirements--such as the documentation requirements. These workshops could help ensure that vendors and laboratories are aware of the NIAP process and could contribute to the efficiency of product evaluations. NIAP officials acknowledge that such educational offerings could be beneficial. Over the last year, NIAP has seen a reduction in the number of qualified validators. NIAP officials stated that one of the most significant challenges the NIAP process faces is hiring and maintaining qualified personnel to validate products. In fiscal year 2005, the NIAP program lost approximately four government validators and six contractor validators. According to the NIAP Director, maintaining qualified personnel to perform validation tasks is difficult largely because many validators are nearing retirement age and the job is not an attractive position for recent college graduates. Validators have a complex job with tasks that span the entire evaluation process; they incrementally review the results of the various tests of functional and assurance requirements as they are completed by the laboratory. As a result, once validators are hired, it typically takes 12 to 24 months to train new validators to become proficient in performing validation tasks. 
If the NIAP program continues to see a reduction in validators, there could be an increased risk that a backlog of products needing to obtain NIAP certifications will develop, which could also impact the already lengthy evaluation process. The number of products entering evaluation is steadily increasing (in fiscal year 2002 there were approximately 20 products in evaluation and as of February 2006, there were 142 products in evaluation). Additionally, approximately five to seven products enter into evaluation each month. To address the widening gap between the number of products entering the process and the number of validators available to review products, NIAP intends to pursue legislation allowing it to recoup the costs of validations and hire additional staff. A best practice in public and private organizations is the use of performance measurements to gain insight into--and make adjustments to--the effectiveness and efficiency of programs, processes, and people. Performance measurement is a process of assessing progress toward achieving predetermined goals, and includes gathering information on the efficiency with which resources are transformed into goods and services, the quality of those outputs, and the effectiveness of government operations in terms of their specific contributions to program objectives. Establishing, updating, and collecting performance metrics to measure and track progress can assist organizations in determining whether they are fulfilling their vision and meeting their customer-focused strategic goals. The NIAP program lacks performance metrics to measure process effectiveness and thus faces difficulty in documenting its effectiveness. The program has not collected and analyzed data on the findings, flaws, and fixes resulting from product tests and evaluations. NIAP officials pointed out that nondisclosure agreements between laboratories and vendors make it difficult to collect and document such data. 
According to NIAP officials, there is existing laboratory information on findings, flaws, and fixes, but it has not been collected because of nondisclosure agreements. Nondisclosure agreements are important for protecting vendors' proprietary data from being released to the public and competitors. However, releasing summary laboratory information on findings, flaws, and fixes, while at the same time considering the requirements of nondisclosure agreements, could be beneficial to determining the effectiveness of the NIAP program. Without this type of information, NIAP will have difficulty demonstrating its effectiveness and will be challenged to know and to demonstrate whether the process is meeting its goals. While the National Security Telecommunications and Information Systems Security Policy Number 11 already allows agencies with non-national security systems to acquire NIAP-evaluated products, expanding the policy to mandate that such systems acquire NIAP-evaluated products may yield many of the same benefits and challenges experienced by current process participants, and could further strain program resources. For example, one identified benefit for national security systems--independent testing and evaluation of IT products--gives agencies confidence that validated features of a product, whether acquired for national or non-national security systems, will perform as claimed by the vendor. Similarly, one challenge--a reduction in the number of validators for certifying products--could contribute to delays in validating products, whether for national or non-national security systems. Further, expanding the requirement to mandate the policy for non-national security systems may further exacerbate current resource constraints related to hiring and maintaining qualified personnel to validate products. Nevertheless, agencies with non-national security systems have in fact acquired NIAP-evaluated products. 
Specifically, 10 of the federal agencies we surveyed indicated that they have used the NIAP process to acquire evaluated products for non-national security systems, even though they are not required to do so. One agency is considering the use of NIAP-evaluated products during its product reviews, and is also considering including NIAP-evaluated products as part of its procurement strategy. Moreover, agencies seeking information assurance for their non-national security systems, but that do not acquire NIAP-evaluated products, have guidance and standards available to them. Specifically, as required by the Federal Information Security Management Act, NIST has developed and issued standards and guidelines, including minimum information security requirements, for the acquisition and use of security-related IT products for non-national security systems. These standards and guidelines are to be complementary with those established for the protection of national security systems and information contained in such systems. Further, NIST issued additional guidance to agencies for incorporating security into all phases of the system development life cycle process as a framework for selecting and acquiring cost-effective security controls. In August 2000, NIST also issued guidance on security assurance for non-national security systems in NIST Special Publication 800-23: Guideline to Federal Organizations on Security Assurance and Acquisition/Use of Tested/Evaluated Products. While a range of controls are needed to protect national security systems against increasingly sophisticated cyber attacks, establishing effective policies and processes for acquiring products that have been validated by an independent party is important to the federal government's ability to procure and deploy the right technologies. Acquiring NIAP-evaluated products can increase the federal government's confidence that its IT products and systems will perform security features and functions as claimed. 
Despite the benefits of acquiring and using IT products that have gone through the rigorous tests and evaluations of NIAP, the program faces considerable challenges that hinder its effective use by vendors and agencies. These challenges include the difficulty in matching agencies' needs with the availability of NIAP-evaluated products, vendors' lack of awareness regarding the evaluation process, a reduction in the number of validators to certify products, and difficulty in measuring and documenting the effectiveness of the NIAP process. Until these challenges are addressed, they will continue to undermine the efficacy of NIAP. Regarding expanding the NIAP requirement to non-national security systems, pursuing this approach may further exacerbate current resource constraints. To assist the NIAP in documenting the effectiveness of the NIAP evaluation process, we recommend that the Secretary of Defense direct the Director of the National Security Agency, in coordination with NIST under the provisions of the NIAP partnership, to take the following two actions: 1. Coordinate with vendors, laboratories, and various industry associations that have knowledge of the evaluation process to develop awareness training workshops for program participants. 2. Consider collecting, analyzing, and reporting metrics on the effectiveness of NIAP tests and evaluations. Such metrics could include summary information on the number of findings, flaws, and associated fixes. In providing written comments on a draft of this report (reprinted in app. III), the Deputy Assistant Secretary of Defense (Deputy Chief Information Officer) partially agreed with one of our recommendations, agreed with the other, and described ongoing and planned efforts to address them. 
While the Deputy Assistant Secretary agreed with our recommendation to develop awareness training workshops for NIAP program participants, she stated that the NIAP must also live with the realities of the challenges that we identified in our report. The Deputy Assistant Secretary noted that, as our report highlights, the NIAP program is facing considerable challenges with resources and funding to sustain the current day-to-day running of the program and that it is not feasible for the NIAP office to increase its current efforts in developing and hosting the recommended training and education. Nonetheless, she also noted that the Secretary of Defense should direct the Director of the National Security Agency, in coordination with the NIST under the provisions of the NIAP, to coordinate with the vendors, laboratories, and various industry associations that have knowledge of the evaluation process to develop awareness training workshops for program participants within the current constraints and to work with the commercial laboratories, vendors, and others to identify ways that organizations outside of NIAP can further this initiative. We agree that NIAP should continue its efforts in awareness and education training, and endorse increasing such efforts as resources permit. The Deputy Assistant Secretary agreed with our recommendation to collect, analyze, and report metrics on the effectiveness of NIAP tests and evaluations, and stated that the NIAP has already started researching ways to institute metrics to help determine the effectiveness of the evaluation program. She noted that the goal of collecting metrics is to demonstrate to the NIAP constituency that NIAP evaluations do provide value by improving the security of the evaluated products and by providing the end customer with assurance that these products perform their security functions as intended even when faced with adverse conditions. 
The Department of Defense and the Department of Homeland Security also provided technical comments, which we considered and addressed in our report, as appropriate. We are sending copies of this report to the Departments of Commerce (National Institute of Standards and Technology), Defense, and Homeland Security; the Office of Management and Budget; the General Services Administration; and to other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6244 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to identify (1) the governmentwide benefits and challenges of the National Information Assurance Partnership (NIAP) evaluation process; and (2) the potential benefits and challenges of expanding the requirement of NIAP to non-national security systems, including sensitive but unclassified systems. To determine the benefits and challenges for both objectives, we analyzed and reviewed a number of policy documents and reports from both industry and government. We also reviewed relevant federal policies relating to information security issues. To gain insight into the NIAP evaluation process, we met with software vendors and certification laboratories to discuss their experiences with NIAP, their applicable processes, and reviewed their relevant documentation. We selected vendors based on broad or distinguishing product capabilities demonstrating a range of features, brand recognition based on high ratings received in reviews conducted by information security magazines, and vendors mentioned more frequently in various discussions with industry experts and in information security literature. 
Vendors selected represented different information technology (IT) market sectors, are considered leaders in their field, and varied in size. To determine the industrywide perspective on NIAP, we met with two IT industry groups: The Information Technology Association of America and Cyber Security Industry Alliance. We selected these industry groups because they represent a cross-section of the IT industry as a whole. To gain insight into the program's functions and usefulness to agencies, we spoke with government officials from the Department of Commerce (specifically the National Institute of Standards and Technology), Department of Defense, Department of Homeland Security, General Services Administration, and the Office of Management and Budget. We also surveyed officials from the 24 federal agencies designated under the Chief Financial Officers Act of 1990 to determine their current use of NIAP- evaluated products, the perceived usefulness of the program, and the benefits and challenges associated with acquiring and using NIAP- evaluated products. For each agency survey, we identified the office of the chief information officer, notified them of our work, and distributed the survey instrument to each via an e-mail attachment. In addition, we discussed the purpose and content of the survey instrument with agency officials when requested. All 24 agencies responded to our survey. We did not verify the accuracy of the agencies' responses; however, we reviewed supporting documentation that agencies provided to validate their responses. We contacted agency officials when necessary for follow-up information. We then analyzed the agencies' responses. Although this was not a sample survey, and, therefore, there were no sampling errors, conducting any survey may introduce other kinds of errors. 
For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database (or were analyzed) can introduce unwanted variability into the survey results. We took steps in the development of the survey instrument, the data collection, and the data analysis to minimize these survey-related errors. For example, we developed the questionnaire in two stages. First, we had a survey specialist design the survey instrument in collaboration with subject-matter experts. Then, we pretested the instrument at two federal departments and internally at GAO to ensure that questions were relevant, clearly stated, and easy to answer. We conducted our work in Washington, D.C., from May 2005 through February 2006, in accordance with generally accepted government auditing standards. In addition to the individual named above, Jenniffer Wilson (Assistant Director), Neil Doherty, Jennifer Franks, Joel Grossman, Matthew Grote, Min Hyun, Anjalique Lawrence, J. Paul Nicholas, Karen Talley, and Amos Tevelow were key contributors to this report.
In 1997, the National Security Agency and the National Institute of Standards and Technology formed the National Information Assurance Partnership (NIAP) to boost federal agencies' and consumers' confidence in information security products manufactured by vendors. To facilitate this goal, NIAP developed a national program that requires accredited laboratories to independently evaluate and validate the security of these products for use in national security systems. These systems are those under control of the U.S. government that contain classified information or involve intelligence activities. GAO was asked to identify (1) the governmentwide benefits and challenges of the NIAP evaluation process on national security systems, and (2) the potential benefits and challenges of expanding the requirement of NIAP to non-national security systems, including sensitive but unclassified systems. While NIAP process participants--vendors, laboratories, and federal agencies--indicated that the process offers benefits for use in national security systems, its effectiveness has not been measured or documented, and considerable challenges to acquiring and using NIAP-evaluated products exist. Specific benefits included independent testing and evaluation of products and accreditation of the performing laboratories, the discovery and correction of product flaws, and improvements to vendor development processes. However, process participants also face several challenges, including difficulty in matching agencies' needs with the availability of NIAP-evaluated products, vendors' lack of awareness regarding the evaluation process, and a lack of performance measures and difficulty in documenting the effectiveness of the NIAP evaluation process. Collectively, these challenges hinder the effective use of the NIAP evaluation process by vendors and agencies. 
Expanding the requirement of the NIAP evaluation process to non-national security systems is likely to yield similar benefits and challenges as those experienced by current process participants. For example, a current benefit--independent testing and evaluation of IT products--gives agencies confidence that validated features of a product will perform as claimed by the vendor. However, federal policy already allows agencies with non-national security systems to consider acquiring NIAP-evaluated products for those systems, and requiring that they do so may further exacerbate current resource constraints related to the evaluation and validation of products. In the absence of such a requirement, agencies seeking information assurance (measures that defend and protect information and information systems by ensuring their confidentiality, integrity, authenticity, availability, and utility) for their non-national security systems have other federal guidance and standards available to them.
The DHS Privacy Office was established with the appointment of the first Chief Privacy Officer in April 2003. The Chief Privacy Officer is appointed by the Secretary and reports directly to him. The Chief Privacy Officer serves as the designated senior agency official for privacy, as has been required by the Office of Management and Budget (OMB) of all major departments and agencies since 2005. As a part of the DHS organizational structure, the Chief Privacy Officer has the ability to serve as a consultant on privacy issues to other departmental entities that may not have adequate expertise on privacy issues. In addition, there are also component-level and program-level privacy officers at the Transportation Security Administration (TSA), U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program, and U.S. Citizenship and Immigration Services. When the Privacy Office was initially established, it had 5 full-time employees, including the Chief Privacy Officer. Since then, the staff has expanded to 16 full-time employees. As of February 2007, the Privacy Office also had 9 full-time and 3 half-time contractor staff. The first Chief Privacy Officer served from April 2003 to September 2005, followed by an Acting Chief Privacy Officer who served through July 2006. In July 2006, the Secretary appointed a second permanent Chief Privacy Officer. The Privacy Office is responsible for ensuring that DHS is in compliance with federal laws that govern the use of personal information by the federal government. Among these laws are the Homeland Security Act of 2002 (as amended by the Intelligence Reform and Terrorism Prevention Act of 2004), the Privacy Act of 1974, and the E-Gov Act of 2002. Based on these laws, the Privacy Office's major responsibilities can be summarized into these four broad categories: 1. reviewing and approving PIAs, 2. integrating privacy considerations into DHS decision making, 3. 
reviewing and approving public notices required by the Privacy Act, 4. preparing and issuing reports. The Privacy Office is responsible for ensuring departmental compliance with the privacy provisions of the E-Gov Act. Specifically, section 208 of the E-Gov Act is designed to enhance protection of personally identifiable information in government information systems and information collections by requiring that agencies conduct PIAs. In addition, the Homeland Security Act requires the Chief Privacy Officer to conduct a PIA for proposed rules of the department on the privacy of personal information. According to OMB guidance, a PIA is an analysis of how information is handled: (1) to ensure that handling conforms to applicable legal, regulatory, and policy requirements regarding privacy; (2) to determine the risks and effects of collecting, maintaining, and disseminating personally identifiable information in an electronic information system; and (3) to examine and evaluate protections and alternative processes for handling information to mitigate potential risks to privacy. Agencies must conduct PIAs before they (1) develop or procure information technology that collects, maintains, or disseminates personally identifiable information or (2) initiate any new data collections of personal information that will be collected, maintained, or disseminated using information technology--if the same questions are asked of 10 or more people. To the extent that PIAs are made publicly available, they provide explanations to the public about such things as what information will be collected, why it is being collected, how it is to be used, and how the system and data will be maintained and protected.

Integrating privacy considerations into the DHS decision-making process

Several of the Privacy Office's statutory responsibilities involve ensuring that the major decisions and operations of the department do not have an adverse impact on privacy. 
Specifically, the Homeland Security Act requires that the Privacy Office assure that the use of technologies by the department sustains, and does not erode, privacy protections relating to the use, collection, and disclosure of personal information. The act further requires that the Privacy Office evaluate legislative and regulatory proposals involving the collection, use, and disclosure of personal information by the federal government. It also requires the office to coordinate with the DHS Officer for Civil Rights and Civil Liberties on those issues.

Reviewing and approving public notices required by the Privacy Act

The Privacy Office is required by the Homeland Security Act to assure that personal information contained in Privacy Act systems of records is handled in full compliance with fair information practices as set out in the Privacy Act of 1974. The Privacy Act places limitations on agencies' collection, disclosure, and use of personally identifiable information that is maintained in their systems of records. The act defines a record as any item, collection, or grouping of information about an individual that is maintained by an agency and contains that individual's name or other personal identifier, such as a Social Security number. It defines "system of records" as a group of records under the control of any agency from which information is retrieved by the name of the individual or by an individual identifier. The Privacy Act requires agencies to notify the public, via a notice in the Federal Register, when they create or modify a system of records. This notice must include information such as the type of information collected, the types of individuals about whom information is collected, the intended "routine" uses of the information, and procedures that individuals can use to review and correct their personal information. The act also requires agencies to define--and limit themselves to--specific purposes for collecting the information. 
The Homeland Security Act requires the Privacy Office to prepare annual reports to Congress detailing the department's activities affecting privacy, including complaints of privacy violations and implementation of the Privacy Act of 1974. In addition to the reporting requirements under the Homeland Security Act, Congress has occasionally directed the Privacy Office to report on specific technologies and programs. For example, in the conference report for the DHS appropriations act for fiscal year 2005, Congress directed the Privacy Office to report on DHS's use of data mining technologies. The Intelligence Reform and Terrorism Prevention Act of 2004 also required the Chief Privacy Officer to submit a report to Congress on the impact on privacy and civil liberties of the DHS-maintained Automatic Selectee and No-Fly lists, which contain names of potential airline passengers who are to be selected for secondary screening or not allowed to board aircraft. In addition, the Privacy Office can initiate its own investigations and produce reports under its Homeland Security Act authority to report on complaints of privacy violations and assure technologies sustain and do not erode privacy protections. One of the Privacy Office's primary responsibilities is to review and approve PIAs to ensure departmental compliance with the privacy provisions (section 208) of the E-Gov Act of 2002. The Privacy Office has established a PIA compliance framework to carry out this responsibility. The centerpiece of the Privacy Office's compliance framework is its written guidance on when a PIA must be conducted, how the associated analysis should be performed, and how the final document should be written. Although based on OMB's guidance, the Privacy Office's guidance goes further in several areas. For example, the guidance does not exempt national security systems and also clarifies that systems in the pilot testing phase are not exempt. 
The DHS guidance also provides more detailed instructions than OMB's guidance on the level of detail to be provided. For example, the DHS guidance requires a discussion of a system's data retention period; procedures for allowing individual access, redress, and correction of information; and technologies used in the system, such as biometrics or radio frequency identification (RFID). The Privacy Office has taken steps to continually improve its PIA guidance. Initially released in February 2004, the guidance has been updated each year since then. These updates have increased the emphasis on describing the privacy analysis that should take place in making system design decisions that affect privacy. For example, regarding information collection, the latest guidance requires program officials to explain how the collection supports the purpose(s) of the system or program and the mission of the organization. The guidance also reminds agencies that the information collected should be relevant and necessary to accomplish the stated purpose(s) and mission. To accompany its written guidance, the Privacy Office has also developed a PIA template and conducted a number of training sessions to further assist DHS personnel. Our analysis of published DHS PIAs shows significant quality improvements in those completed recently compared with those from 2 or 3 years ago. Overall, there is a greater emphasis on analysis of system development decisions that impact privacy, because the guidance now requires that such analysis be performed and described. For example, the most recent PIAs include assessments of planned uses of the system and information, plans for data retention, and the extent to which the information is to be shared outside of DHS. Earlier PIAs did not include any of these analyses. The emphasis on analysis should allow the public to more easily understand a system and its impact on privacy. 
Further, our analysis found that use of the template has resulted in a more standardized structure, format, and content, making the PIAs more easily understandable to the general reader. In addition to written guidance, the Privacy Office has also taken steps to integrate PIA development into the department's established operational processes. For example, the Privacy Office is using the OMB Exhibit 300 budget process as an opportunity to ensure that systems containing personal information are identified and that PIAs are conducted when needed. OMB requires agencies to submit an Exhibit 300 Capital Asset Plan and Business Case for their major information technology systems in order to receive funding. The Exhibit 300 template asks whether a system has a PIA and if it is publicly available. Because the Privacy Office gives final departmental approval for all such assessments, it is able to use the Exhibit 300 process to ensure the assessments are completed. According to Privacy Office officials, the threat of losing funds has helped to encourage components to conduct PIAs. Integration of the PIA requirement into these management processes is beneficial in that it provides an opportunity to address privacy considerations during systems development, as envisioned by OMB's guidance. Because of concerns expressed by component officials that the Privacy Office's review process takes a long time and is difficult to understand, the office has made efforts to improve the process and make it more transparent to DHS components. Specifically, the office has established a five-stage review process. Under this process, a PIA must satisfy all the requirements of a given stage before it can progress to the next one. The review process is intended to take 5 to 6 weeks, with each stage intended to take 1 week. Figure 1 illustrates the stages of the review process. 
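The gated, sequential character of this review process can be sketched in code. The stage names and pass/fail checks below are purely illustrative assumptions; the Privacy Office's actual review criteria are not specified at this level of detail.

```python
# Illustrative sketch of a five-stage, gated review pipeline.
# Stage names and checks are hypothetical, not actual DHS criteria.

STAGES = ["intake", "component review", "privacy office review",
          "revision", "final approval"]

def review(pia, checks):
    """Advance a PIA through each stage in order.

    `checks` maps each stage name to a predicate on the document.
    Returns "published" if every stage passes, otherwise the name of
    the stage where the review stalled.
    """
    for stage in STAGES:
        if not checks[stage](pia):
            return stage  # document waits here until requirements are met
    return "published"

# A document that satisfies every stage's requirements is published.
always_pass = {s: (lambda doc: True) for s in STAGES}
print(review({"title": "Sample PIA"}, always_pass))  # -> published
```

The essential property this models is that a document cannot skip ahead: it must clear each stage, in order, before progressing to the next.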
Through efforts such as the compliance framework, the Privacy Office has steadily increased the number of PIAs it has approved and published each year. Since 2004, PIA output by the Privacy Office has more than doubled. According to Privacy Office officials, the increase in output was aided by the development and implementation of the Privacy Office's structured guidance and review process. In addition, Privacy Office officials stated that as DHS components gain more experience, the output should continue to increase. Because the Privacy Office has focused departmental attention on the development and review process and established a structured framework for identifying systems that need PIAs, the number of identified DHS systems requiring a PIA has increased dramatically. According to its annual Federal Information Security Management Act reports, DHS identified 46 systems as requiring a PIA in fiscal year 2005 and 143 systems in fiscal year 2006. Based on the privacy threshold analysis process, the Privacy Office estimates that 188 systems will require a PIA in fiscal year 2007. Considering that only 25 were published in fiscal year 2006, it will likely be very difficult for DHS to expeditiously develop and issue PIAs for all of these systems because developing and approving them can be a lengthy process. According to estimates by Privacy Office officials, it takes approximately six months to develop and approve a PIA, but the office is working to reduce this time. The Privacy Office is examining several potential changes to the development process that would allow it to process an increased number of PIAs. One such option is to allow DHS components to quickly amend preexisting PIAs. An amendment would only need to contain information on changes to the system and would allow for quicker development and review. The Privacy Office is also considering developing standardized PIAs for commonly-used types of systems or uses. 
For example, such an assessment may be developed for local area networks. Systems intended to collect or use information outside what is specified in the standardized PIA would need approval from the Privacy Office. The Privacy Office has also taken steps to integrate privacy considerations in the DHS decision-making process. These actions are intended to address a number of statutory requirements, including that the Privacy Office assure that the use of technologies sustain, and do not erode, privacy protections; that it evaluate legislative and regulatory proposals involving the collection, use, and disclosure of personal information by the federal government; and that it coordinate with the DHS Officer for Civil Rights and Civil Liberties. For example, in 2004, the first Chief Privacy Officer established the DHS Data Privacy and Integrity Advisory Committee to advise her and the Secretary on issues within the department that affect individual privacy, as well as data integrity, interoperability, and other privacy-related issues. The committee has examined a variety of privacy issues, produced reports, and made recommendations. In December 2006, the committee adopted two reports; one on the use of RFID for identity verification and another on the use of commercial data. According to Privacy Office officials, the additional instructions on the use of commercial data contained in the May 2007 PIA guidance update were based, in part, on the advisory committee's report on commercial data. In addition to its reports, which are publicly available, the committee meets quarterly in Washington, D.C., and in other parts of the country where DHS programs operate. These meetings are open to the public and transcripts of the meetings are posted on the Privacy Office's Web site. DHS officials from major programs and initiatives involving the use of personal data such as US-VISIT, Secure Flight, and the Western Hemisphere Travel Initiative, have testified before the committee. 
Private sector officials have also testified on topics such as data integrity, identity authentication, and RFID. Because the committee is made up of experts from the private sector and the academic community, it brings an outside perspective to privacy issues through its reports and recommendations. In addition, because it was established as a federal advisory committee, its products and proceedings are publicly available and thus provide a public forum for the analysis of privacy issues that affect DHS operations. The Privacy Office has also taken steps to raise awareness of privacy issues by holding a series of public workshops. The first workshop, on the use of commercial data for homeland security, was held in September 2005. Panel participants consisted of representatives from academia, the private sector, and government. In April 2006, a second workshop addressed the concept of public notices and freedom of information frameworks. In June 2006, a workshop was held on the policy, legal, and operational frameworks for PIAs and privacy threshold analyses and included a tutorial for conducting PIAs. Hosting public workshops is beneficial in that it allows for communication between the Privacy Office and those who may be affected by DHS programs, including the privacy advocacy community and the general public. Another part of the Privacy Office's efforts to carry out its Homeland Security Act requirements is its participation in departmental policy development for initiatives that have a potential impact on privacy. The Privacy Office has been involved in policy discussions related to several major DHS initiatives and, according to department officials, the office has provided input on several privacy-related decisions. The following are major initiatives in which the Privacy Office has participated. 
Passenger name record negotiations with the European Union

United States law requires airlines operating flights to or from the United States to provide the Bureau of Customs and Border Protection (CBP) with certain passenger reservation information for purposes of combating terrorism and other serious criminal offenses. In May 2004, an international agreement on the processing of this information was signed by DHS and the European Union. Prior to the agreement, CBP established a set of terms for acquiring and protecting data on European Union citizens, referred to as the "Undertakings". In September 2005, under the direction of the first Chief Privacy Officer, the Privacy Office issued a report on CBP's compliance with the Undertakings in which it provided guidance on necessary compliance measures and also required certain remediation steps. For example, the Privacy Office required CBP to review and delete data outside the 34 data elements permitted by the agreement. According to the report, the deletion of these extraneous elements was completed in August 2005 and was verified by the Privacy Office. In October 2006, DHS and the European Union completed negotiations on a new interim agreement concerning the transfer and processing of passenger reservation information. The Director of International Privacy Policy within the Privacy Office participated in these negotiations along with others from DHS in the Policy Office, Office of General Counsel, and CBP.

The Western Hemisphere Travel Initiative is a joint effort between DHS and the Department of State to implement new documentation requirements for certain U.S. citizens and nonimmigrant aliens entering the United States. DHS and State have proposed the creation of a special identification card that would serve as an alternative to a traditional passport for use by U.S. citizens who cross land borders or travel by sea between the United States, Canada, Mexico, the Caribbean, or Bermuda.
The card is to use a technology called vicinity RFID to transmit information on travelers to CBP officers at land and sea ports of entry. Advocacy groups have raised concerns about the proposed use of vicinity RFID because of privacy and security risks, due primarily to the ability to read information from these cards from distances of up to 20 feet. The Privacy Office was consulted on the choice of identification technology for the cards. According to the DHS Policy Office, Privacy Office input led to a decision not to store or transmit personally identifiable information on the RFID chip on the card. Instead, DHS is planning to transmit a randomly generated identifier for each individual, which is to be used by DHS to retrieve information about the individual from a centralized database.

REAL ID Act of 2005

Among other things, the REAL ID Act requires DHS to consult with the Department of Transportation and the states in issuing regulations that set minimum standards for state-issued REAL ID drivers' licenses and identification cards to be accepted for official purposes after May 11, 2008. Advocacy groups have raised a number of privacy concerns about REAL ID, chiefly that it creates a de facto national ID that could be used in the future for privacy-infringing purposes and that it puts individuals at increased risk of identity theft. The DHS Policy Office reported that it included Privacy Office officials, as well as officials from the Office of Civil Rights and Civil Liberties, in developing its implementing rule for REAL ID. The Privacy Office's participation in REAL ID also served to address its requirement to evaluate legislative and regulatory proposals concerning the collection, use, and disclosure of personal information by the federal government.
According to its November 2006 annual report, the Privacy Office championed the need for privacy protections regarding the collection and use of the personal information that will be stored on the REAL ID drivers' licenses. Further, the office reported that it funded a contract to examine the creation of a state federation to implement the information sharing required by the act in a privacy-sensitive manner. As we have previously reported, DHS has used personal information obtained from commercial data providers for immigration, fraud detection, and border screening programs but, like other agencies, does not have policies in place concerning its uses of these data. Accordingly, we recommended that DHS, as well as other agencies, develop such policies. In response to the concerns raised in our report and by privacy advocacy groups, Privacy Office officials said they were drafting a departmentwide policy on the use of commercial data. Once drafted by the Privacy Office, this policy is to undergo a departmental review process (including review by the Policy Office, General Counsel, and Office of the Secretary), followed by a review by OMB prior to adoption. These examples demonstrate specific involvement of the Privacy Office in major DHS initiatives. However, Privacy Office input is only one factor that DHS officials consider in formulating decisions about major programs, and Privacy Office participation does not guarantee that privacy concerns will be fully addressed. For example, our previous work has highlighted problems in implementing privacy protections in specific DHS programs, including Secure Flight and the ADVISE program. Nevertheless, the Privacy Office's participation in policy decisions provides an opportunity for privacy concerns to be raised explicitly and considered in the development of DHS policies. 
The Privacy Office has also taken steps to address its mandate to coordinate with the DHS Officer for Civil Rights and Civil Liberties on programs, policies, and procedures that involve civil rights, civil liberties, and privacy considerations, and to ensure that Congress receives appropriate reports. The DHS Officer for Civil Rights and Civil Liberties cited three specific instances where the offices have collaborated. First, as stated previously, both offices have participated in the working group involved in drafting the implementing regulations for REAL ID. Second, the two offices coordinated in preparing the Privacy Office's report to Congress assessing the privacy and civil liberties impact of the No-Fly and Selectee lists used by DHS for passenger prescreening. Third, the two offices coordinated on providing input for the "One-Stop Redress" initiative, a joint initiative between the Department of State and DHS to implement a streamlined redress center for travelers who have concerns about their treatment in the screening process. The DHS Privacy Office is responsible for reviewing and approving DHS system-of-records notices to ensure that the department complies with the Privacy Act of 1974. Specifically, the Homeland Security Act requires the Privacy Office to assure "that personal information contained in Privacy Act systems of records is handled in full compliance with fair information practices as set out in the Privacy Act of 1974." The Privacy Act requires that federal agencies publish notices in the Federal Register on the establishment or revision of systems of records. These notices must describe the nature of a system of records and the information it maintains. Additionally, OMB has issued various guidance documents for implementing the Privacy Act.
OMB Circular A-130, for example, outlines agency responsibilities for maintaining records on individuals and directs government agencies to conduct biennial reviews of each system-of-records notice to ensure that it accurately describes the system of records. The Privacy Office has taken steps to establish a departmental process for complying with the Privacy Act. It issued a management directive that outlines its own responsibilities as well as those of component-level officials. Under this policy, the Privacy Office is to act as the department's representative for matters relating to the Privacy Act. The Privacy Office is to issue and revise, as needed, departmental regulations implementing the Privacy Act and approve all system-of-records notices before they are published in the Federal Register. DHS components are responsible for drafting system-of-records notices and submitting them to the Privacy Office for review and approval. The management directive was in addition to system-of-records notice guidance published by the Privacy Office in August 2005. The guidance discusses the requirements of the Privacy Act and provides instructions on how to prepare system-of-records notices by listing key elements and explaining how they must be addressed. The guidance also lists common routine uses and provides standard language that DHS components may incorporate into their notices. As of February 2007, the Privacy Office had approved and published 56 system-of-records notices, including updates and revisions as well as new documents. However, the Privacy Office has not yet established a process for conducting a biennial review of system-of-records notices, as required by OMB. OMB Circular A-130 directs federal agencies to review their notices biennially to ensure that they accurately describe all systems of records. Where changes are needed, the agencies are to publish amended notices in the Federal Register.
The establishment of DHS involved the consolidation of a number of preexisting agencies; thus, a substantial number of systems are operating under preexisting, or "legacy," system-of-records notices--218, as of February 2007. These documents may not reflect changes that have occurred since they were prepared. For example, the system-of-records notice for the Treasury Enforcement and Communication System has not been updated to reflect changes in how personal information is used that have occurred since the system was taken over by DHS from the Department of the Treasury. The Privacy Office acknowledges that identifying, coordinating, and updating legacy system-of-records notices is the biggest challenge it faces in ensuring DHS compliance with the Privacy Act. Because it focused its initial efforts on PIAs and gave priority to DHS systems of records that were not covered by preexisting notices, the office did not give the same priority to performing a comprehensive review of existing notices. According to Privacy Office officials, the office is encouraging DHS components to update legacy system-of-records notices and is developing new guidance intended to be more closely integrated with its PIA guidance. However, no significant reduction has yet been made in the number of legacy system-of-records notices that need to be updated. By not reviewing notices biennially, the department is not in compliance with OMB direction. Further, by not keeping its notices up to date, DHS hinders the public's ability to understand the nature of DHS systems of records and how their personal information is being used and protected. Inaccurate system-of-records notices may make it difficult for individuals to determine whether their information is being used in a way that is incompatible with the purpose for which it was originally collected.
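Operationally, OMB's biennial-review requirement reduces to a date comparison: any notice last reviewed more than two years ago is overdue. The following is a minimal sketch of that check, with hypothetical notice records (real system-of-records notices are published in the Federal Register, and the two-year threshold is drawn from OMB Circular A-130's biennial requirement):

```python
from datetime import date

# Minimal sketch: flag system-of-records notices due for the biennial
# review that OMB Circular A-130 requires. Sample records are hypothetical.

REVIEW_INTERVAL_DAYS = 2 * 365  # "biennial" = every two years

def notices_due_for_review(notices, today):
    """Return notices whose last review is more than two years old."""
    return [n for n in notices
            if (today - n["last_reviewed"]).days > REVIEW_INTERVAL_DAYS]

sorns = [
    {"name": "Legacy System A", "last_reviewed": date(2004, 8, 1)},
    {"name": "New System B",    "last_reviewed": date(2006, 6, 1)},
]

due = notices_due_for_review(sorns, today=date(2007, 2, 1))
print([n["name"] for n in due])  # -> ['Legacy System A']
```

In practice the harder problem, as the Privacy Office's experience shows, is not the date arithmetic but building and maintaining an accurate inventory of legacy notices to run such a check against.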
Section 222 of the Homeland Security Act requires that the Privacy Officer report annually to Congress on "activities of the Department that affect privacy, including complaints of privacy violations, implementation of the Privacy Act of 1974, internal controls, and other matters." The act does not prescribe a deadline for submission of these reports; however, the requirement to report "on an annual basis" suggests that each report should cover a 1-year time period and that subsequent annual reports should be provided to Congress 1 year after the previous report was submitted. Congress has also required that the Privacy Office report on specific departmental activities and programs, including data mining and passenger prescreening programs. In addition, the first Chief Privacy Officer initiated several investigations and prepared reports on them to address requirements to report on complaints of privacy violations and to assure that technologies sustain and do not erode privacy protections. In addition to satisfying legal requirements, the issuance of timely public reports helps in adhering to the fair information practices, which the Privacy Office has pledged to support. Public reports address openness-- the principle that the public should be informed about privacy policies and practices and that individuals should have a ready means of learning about the use of personal information--and the accountability principle--that individuals controlling the collection or use of personal information should be accountable for taking steps to ensure implementation of the fair information principles. The Privacy Office has not been timely and in one case has been incomplete in addressing its requirement to report annually to Congress. The Privacy Office's first annual report, issued in February 2005, covered 14 months from April 2003 through June 2004. A second annual report, for the next 12 months, was never issued. 
Instead, information about that period was combined with information about the next 12-month period, and a single report was issued in November 2006 covering the office's activities from July 2004 through July 2006. While this report generally addressed the content specified by the Homeland Security Act, it did not include the required description of complaints of privacy violations. Other reports produced by the Privacy Office have not met statutory deadlines or have been issued long after privacy concerns had been addressed. For example, although Congress required a report on the privacy and civil liberties effects of the No-Fly and Automatic Selectee Lists by June 2005, the report was not issued until April 2006, nearly a year late. In addition, although required by December 2005, the Privacy Office's report on DHS data mining activities was not provided to Congress until July 2006 and was not made available to the public on the Privacy Office Web site until November 2006. In addition, the first Chief Privacy Officer initiated four investigations of specific programs and produced reports on these reviews. Although two of the four reports were issued in a relatively timely fashion, the other two reports were issued long after privacy concerns had been raised and addressed. For example, a report on the Multi-state Anti-Terrorism Information Exchange program, initiated in response to a complaint by the American Civil Liberties Union submitted in May 2004, was not issued until two and a half years later, long after the program had been terminated. As another example, although drafts of the recommendations contained in the Secure Flight report were shared with TSA staff as early as summer 2005, the report was not released until December 2006, nearly a year and a half later. 
According to Privacy Office officials, there are a number of factors contributing to the delayed release of its reports, including time required to consult with affected DHS components as well as the departmental clearance process, which includes the Policy Office, the Office of General Counsel, and the Office of the Secretary. After that, drafts must be sent to OMB for further review. In addition, the Privacy Office did not establish schedules for completing these reports that took into account the time needed for coordination with components or departmental and OMB review. Regarding the omission of complaints of privacy violations in the latest annual report, Privacy Office officials noted that the report cites previous reports on Secure Flight and the Multi-state Anti-Terrorism Information Exchange program, which were initiated in response to alleged privacy violations, and that during the time period in question there were no additional complaints of privacy violations. However, the report itself provides no specific statements about the status of privacy complaints; it does not state that there were no privacy complaints received. Late issuance of reports has a number of negative consequences beyond noncompliance with mandated deadlines. First, the value these reports are intended to provide is reduced when the information contained is no longer timely or relevant. In addition, since these reports serve as a critical window into the operations of the Privacy Office and on DHS programs that make use of personal information, not issuing them in a timely fashion diminishes the office's credibility and can raise questions about the extent to which the office is receiving executive-level attention. For example, delays in releasing the most recent annual report led a number of privacy advocates to question whether the Privacy Office had adequate authority and executive-level support. 
Congress also voiced this concern in passing the Department of Homeland Security Appropriations Act of 2007, which states that none of the funds made available in the act may be used by any person other than the Privacy Officer to "alter, direct that changes be made to, delay, or prohibit the transmission to Congress" of its annual report. In addition, on January 5, 2007, legislation was introduced entitled the "Privacy Officer with Enhanced Rights Act of 2007". This bill, among other things, would provide the Privacy Officer with the authority to report directly to Congress without prior comment or amendment by either OMB or DHS officials outside the Privacy Office. Until its reports are issued in a timely fashion, questions about the credibility and authority of the Privacy Office will likely remain. In order to ensure that Privacy Act notices reflect current DHS activities and to help the Privacy Office meet its obligations and issue reports in a timely manner, in our report we recommended that the Secretary of Homeland Security take the following four actions:

1. Designate full-time privacy officers at key DHS components, such as Customs and Border Protection, the U.S. Coast Guard, Immigration and Customs Enforcement, and the Federal Emergency Management Agency.

2. Implement a department-wide process for the biennial review of system-of-records notices, as required by OMB.

3. Establish a schedule for the timely issuance of Privacy Office reports (including annual reports) that appropriately considers all aspects of report development, including departmental clearance.

4. Ensure that the Privacy Office's annual reports to Congress contain a specific discussion of complaints of privacy violations, as required by law.
Concerning our recommendation that it designate full-time privacy officers in key departmental components, DHS noted in comments on a draft of our report that the recommendation was consistent with a departmental management directive on compliance with the Privacy Act and stated that it would take the recommendation "under advisement." However, according to Privacy Office officials, as of July 2007, no such designations had been made. Until DHS appoints such officers, the Privacy Office will not benefit from their potential to help speed the processing of PIAs, nor will component programs be in a position to benefit from the privacy expertise these officials could provide. DHS concurred with the other three recommendations and noted actions initiated to address them. Specifically, regarding our recommendation that DHS implement a process for the biennial review of system-of-records notices required by OMB, DHS noted that it is systematically reviewing legacy system-of-records notices in order to issue updated notices on a schedule that gives priority to systems with the most sensitive personally identifiable information. DHS also noted that the Privacy Office is to issue an updated system-of-records notice guide by the end of fiscal year 2007. As of July 2007, DHS officials reported that they had 215 legacy system-of-records notices that need to be reviewed and either revised or retired. Until DHS reviews and updates all of its legacy notices as required by federal guidance, it cannot assure the public that its notices reflect current uses and protections of personal information. Concerning our recommendations related to timely reporting, DHS stated that the Privacy Office will work with necessary components and programs affected by its reports to provide for both full collaboration and coordination within DHS.
Finally, regarding our recommendation that the Privacy Office's annual reports contain a specific discussion of privacy complaints, as required by law, DHS agreed that a consolidated reporting structure for privacy complaints within the annual report would assist in assuring Congress and the public that the Privacy Office is addressing the complaints that it receives. In summary, the DHS Privacy Office has made significant progress in implementing its statutory responsibilities under the Homeland Security Act; however, more work remains to be accomplished. The office has made great strides in implementing a process for developing PIAs, contributing to greater output over time and higher quality assessments. The Privacy Office has also provided the opportunity for privacy to be considered at key stages in systems development by incorporating PIA requirements into existing management processes. The office faces continuing challenges in reducing its backlog of systems requiring PIAs, ensuring that system-of-records notices are kept up to date, and in issuing reports in a timely fashion. Mr. Chairman, this concludes my testimony today. I would be happy to answer any questions you or other members of the subcommittee may have. If you have any questions concerning this testimony, please contact Linda Koontz, Director, Information Management, at (202) 512-6240, or [email protected]. Other individuals who made key contributions include John de Ferrari, Nancy Glover, Anthony Molet, David Plocher, and Jamie Pressman. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security (DHS) Privacy Office was established with the appointment of the first Chief Privacy Officer in April 2003, as required by the Homeland Security Act of 2002. The Privacy Office's major responsibilities include: (1) reviewing and approving privacy impact assessments (PIA)--analyses of how personal information is managed in a federal system, (2) integrating privacy considerations into DHS decision making and ensuring compliance with the Privacy Act of 1974, and (3) preparing and issuing annual reports and reports on key privacy concerns. GAO was asked to testify on its recent report examining progress made by the DHS Privacy Office in carrying out its statutory responsibilities. GAO compared statutory requirements with Privacy Office processes, documents, and activities. The DHS Privacy Office has made significant progress in carrying out its statutory responsibilities under the Homeland Security Act and its related role in ensuring compliance with the Privacy Act of 1974 and E-Government Act of 2002, but more work remains to be accomplished. Specifically, the Privacy Office has established a compliance framework for conducting PIAs, which are required by the E-Gov Act. The framework includes formal written guidance, training sessions, and a process for identifying systems requiring such assessments. The framework has contributed to an increase in the quality and number of PIAs issued as well as the identification of many more affected systems. The resultant workload is likely to prove difficult to process in a timely manner. Designating privacy officers in certain DHS components could help speed processing of PIAs, but DHS has not yet taken action to make these designations. The Privacy Office has also taken actions to integrate privacy considerations into the DHS decision-making process by establishing an advisory committee, holding public workshops, and participating in policy development. 
However, limited progress has been made in one aspect of ensuring compliance with the Privacy Act--updating public notices for systems of records that were in existence prior to the creation of DHS. These notices should identify, among other things, the type of data collected, the types of individuals about whom information is collected, and the intended uses of the data. Until the notices are brought up-to-date, the department cannot assure the public that the notices reflect current uses and protections of personal information. Further, the Privacy Office has generally not been timely in issuing public reports. For example, a report on the Multi-state Anti-Terrorism Information Exchange program--a pilot project for law enforcement sharing of public records data--was not issued until long after the program had been terminated. Late issuance of reports has a number of negative consequences, including a potential reduction in the reports' value and erosion of the office's credibility.
The Federal Reserve System was created by the Federal Reserve Act in 1913 as the central bank of the United States to provide a safe and flexible banking and monetary system. The System is composed primarily of 12 FRBs with 25 branches (organized into 12 districts), the Federal Open Market Committee, and the Federal Reserve Board, which exercises broad supervisory powers over the FRBs. The primary functions of the Federal Reserve System are to (1) conduct the nation's monetary policy by influencing bank reserves and interest rates, (2) administer the nation's currency in circulation, (3) buy or sell foreign currencies to maintain stability in international currency markets, (4) provide financial services such as check clearing and electronic funds transfer to the public, financial institutions, and foreign official institutions, (5) regulate the foreign activities of all U.S. banks and the domestic activities of foreign banks, and (6) supervise bank holding companies and state chartered banks that are members of the System. The FRBs also provide various financial services to the U.S. government, including the administration of Treasury securities. The FRBs' assets consist primarily of investments in U.S. Treasury and agency securities. As of December 31, 1994, the FRBs reported a securities portfolio balance of $379 billion (87 percent of total assets). These securities primarily consist of Treasury bills, Treasury notes, and Treasury bonds that the FRBs buy and sell when conducting monetary policy. The FRBs act as Treasury's fiscal agent by creating Treasury securities in electronic (book-entry) form upon authorization by the U.S. Treasury and administering ongoing principal and interest payments on these securities. Treasury securities are maintained on electronic recordkeeping systems operated and controlled by the FRBs. The U.S. Treasury maintains an independent record of total Treasury securities outstanding but not individual ownership records.
The FRBs maintain records of securities held by depository institutions, of securities held by the central banks of other countries, and of securities they hold for their own account. These records do not indicate whether securities held by the depository institutions are for their own accounts or on behalf of their customers. The portion of these securities owned by the FRBs is maintained on recordkeeping systems that the New York FRB operates. A security's historical cost consists of the security's face value (par) and any difference between this face value and the security's purchase price. These differences are referred to as premiums when the purchase price is higher than the face value and as discounts when the price is less than the face value. These amounts are reduced over the life of the security to adjust interest income. Federal Reserve notes are the primary paper currency of the United States in circulation and the FRBs' largest liability. As of December 31, 1994, the FRBs reported a Federal Reserve note balance of $382 billion (89 percent of total liabilities). Notes are printed by the U.S. Treasury's Bureau of Engraving and Printing and shipped to the FRBs, who store them in their vaults until they are withdrawn by financial institutions. Notes do not mature or expire and are liabilities of the FRBs until they are returned to the FRBs. The amount the FRBs report as their liabilities for outstanding notes is actually a running balance of all notes issued from inception that have not been returned to the FRBs. The Federal Reserve Act designates certain assets of each FRB as eligible collateral for the reported Federal Reserve note liability. The majority of the assets pledged as collateral are each FRB's Treasury securities. In addition, the FRBs have entered into cross-collateralization agreements under whose terms the assets pledged as collateral to secure each FRB's notes are also pledged to secure the notes of all the FRBs.
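The premium and discount mechanics described above can be illustrated with a simple straight-line sketch. This is a hedged example only: the report does not specify the FRBs' actual amortization method, and the function name and all figures below are hypothetical.

```python
def amortize(face_value, purchase_price, coupon_rate, years):
    """Straight-line amortization sketch: a premium (price > face) is written
    off against interest income over the security's life; a discount
    (price < face) is added to it. Hypothetical method and figures."""
    premium = purchase_price - face_value        # negative for a discount
    annual_adjustment = premium / years          # straight-line reduction
    coupon = face_value * coupon_rate            # cash interest each year
    carrying_value = purchase_price
    schedule = []
    for year in range(1, years + 1):
        interest_income = coupon - annual_adjustment
        carrying_value -= annual_adjustment      # converges to face value
        schedule.append((year, interest_income, carrying_value))
    return schedule

# A $1,000 security bought at a $30 premium with a 5% coupon and 3 years to
# maturity: reported interest income is $40 per year rather than the $50
# coupon, and the carrying value steps down 1020 -> 1010 -> 1000.
schedule = amortize(1000, 1030, 0.05, 3)
```

By maturity the carrying value equals par, which is why retaining purchase-price support for new acquisitions would eventually let the FRBs document the entire cost of the portfolio as current holdings are sold or mature.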
Therefore, as long as total collateral assets held by the FRBs equal or exceed the FRBs' total liabilities for notes, the note liability of each individual FRB is fully secured. To conduct our work, we (1) gained an understanding of relevant accounting and reporting policies and procedures by reviewing and analyzing documentation and interviewing key FRB and Board personnel, (2) reviewed documentation supporting selected significant balance sheet amounts originating at the Dallas FRB, and (3) tested the effectiveness of certain internal controls in place at the Dallas FRB and the Federal Reserve Automation Services (FRAS) in Richmond, Virginia, and Dallas, Texas. We conducted our work primarily at the Federal Reserve Banks of Dallas and New York; the Dallas FRB's branches in Houston, San Antonio, and El Paso; the two FRAS sites mentioned above; and the Board of Governors of the Federal Reserve System in Washington, D.C., between July 1994 and November 1995 in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the Chairman, Board of Governors of the Federal Reserve System. The Secretary of the Board provided us with written comments. These comments are discussed in the "Agency Comments and Our Evaluation" section and are reprinted in appendix I. Our work at the Dallas FRB, its three branches, and the Federal Reserve Automation Services identified internal control issues that we considered to be significant enough to warrant management's attention. Our findings were detailed in separate reports to officials of the Dallas FRB and FRAS, as applicable. In these reports, we provided suggestions for improvements and documented the many corrective actions either taken, underway, or planned by Dallas FRB and FRAS officials. 
The issues we identified at the Dallas FRB include weaknesses in controls over financial reporting, those aspects of automated systems that were controlled in Dallas, check processing, and Federal Reserve note inventories. For example, (1) reconciliations of general ledger accounts and activity were not always based on independent records, (2) the automated systems did not prohibit access by all terminated employees, (3) accounting adjustments related to check processing activity were not appropriately reviewed, and (4) inventory counts of Federal Reserve notes at some branches were not always properly conducted and documented. The management of the Dallas FRB has already taken action on some of our suggestions to resolve these issues. We also identified weaknesses in general controls over the automated systems maintained and operated by FRAS and used by the Dallas FRB. These weaknesses involved controls over access to sensitive information and the computer center, changes to system software, testing the disaster recovery plan, and the use of special privileges on automated tasks. For example, (1) access to job management software was not restricted to authorized individuals, (2) access to the FRAS computer center was inappropriately granted to contractor personnel, (3) FRAS lacked policies and procedures for testing and certifying software changes prior to implementation, and (4) FRAS had not tested the communication network linking the Federal Reserve System. FRAS officials agreed with our suggestions for improvement and, in most cases, initiated corrective actions prior to the conclusion of our work. The FRBs used different practices to track new note issuances than they used to track the notes they held in their vaults, resulting in inconsistent note accounting and reporting. 
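The inconsistency can be sketched as follows: a liability is recorded against the FRB whose identifier is printed on each newly issued note, but notes held in a vault reduce that FRB's liability regardless of identifier. The banks and amounts below are hypothetical.

```python
# Cumulative new-note issuances, recorded by the identifier printed on the
# notes (hypothetical amounts).
issued_by_identifier = {"Dallas": 100, "New York": 300}

# Contents of each FRB's vault, broken out here by identifier for
# illustration (in practice vault notes are no longer sorted by identifier).
vault_holdings = {
    "Dallas":   {"Dallas": 10, "New York": 30},
    "New York": {"Dallas": 5,  "New York": 40},
}

def reported_notes_outstanding(frb):
    """Issuances bearing this FRB's identifier, reduced by ALL notes in its
    own vault -- whatever identifier those notes carry."""
    return issued_by_identifier[frb] - sum(vault_holdings[frb].values())

# Dallas reports 100 - (10 + 30) = 60, but notes bearing the Dallas
# identifier that are actually outside all vaults total 100 - 10 - 5 = 85:
# the reported figure does not reflect outstanding Dallas-identified notes.
dallas_reported = reported_notes_outstanding("Dallas")
```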
Furthermore, various changes to the Federal Reserve Act, the notes' interchangeable nature, and the way in which the FRBs meet their note collateral requirements appear to have made the tracking of note issuances by identifier unnecessary. When new notes are issued, the FRB whose identifying marking appears on the note records a liability for the note amount. Notes that are held in each FRB's vault, regardless of identifier, reduce this liability to arrive at the reported amount of notes outstanding. Consequently, for each FRB, the reported amount of notes outstanding does not accurately reflect the actual amount of outstanding notes bearing that FRB's identifier. Various changes to the act have also diminished the importance of these FRB identifiers. Originally, the act required an identifier on each note to help ensure that each FRB satisfied statutory gold reserve requirements for its notes in circulation. However, these gold reserve requirements have since been repealed. Additionally, in response to changes in the act, notes in the vault are no longer sorted and recorded by identifier. Historically, the identifiers facilitated the FRBs' sorting of notes to comply with other note-related provisions. For example, the act originally prohibited the FRBs from paying out notes with other FRBs' identifiers to customers. To comply with the act, each FRB sorted notes received from customers and returned notes to the other FRBs, as appropriate. The Congress eliminated these provisions to reduce costs and inefficiencies in the FRBs' note-related operations. Additionally, under the act's original provisions, the FRBs were required to return all excessively worn notes to the Comptroller of the Currency for destruction. Each FRB was credited with the amount of its notes to be destroyed. To further reduce costs, the Congress amended the act to modify these requirements. 
As a result, unfit notes may be destroyed at any FRB and the Board of Governors then apportions the note destructions among the FRBs. The act allows the Board to determine the method by which note destructions will be apportioned. Other factors affecting notes further diminish the importance of using identifiers to associate each note with a specific FRB for accounting and reporting purposes. As the nation's currency, all notes are accepted at any FRB and are used interchangeably, regardless of their identifiers. In addition, the FRBs comply with the act's collateral requirements by pledging each FRB's eligible assets as collateral to secure the notes of all the FRBs. Individual FRB note liabilities are less meaningful than the combined note liability because of the notes' cross-collateralization. Thus, continuing to use specific note identifiers to record note liabilities appears to be unnecessary. The FRBs have responded to the inefficiencies involved in using identifiers to track notes by automating the note accounting and reporting process. This has eliminated much of the effort involved in tracking notes manually. However, the inconsistency between how the issuances of new notes and the contents of the vault are accounted for and reported has continued. In November 1994, the Board contracted with an independent accounting firm to audit the asset accounts allocated among the FRBs for calendar years 1994 through 1999. The contract also requires audits of the combined financial statements of the FRBs as of December 31 for each of the years from 1995 through 1999. During these years, the financial statements of each individual FRB will also be audited once based on the schedule shown in table 1. Under this contract, the combined financial statements will be audited more frequently than the individual statements. This audit approach is appropriate in light of the needs of users of the combined financial statements. 
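Because of the cross-collateralization noted above, collateral adequacy is judged on combined totals rather than bank by bank. A minimal sketch of that test, with hypothetical banks and balances:

```python
def notes_fully_secured(collateral_by_frb, note_liability_by_frb):
    """Each FRB's pledged assets secure the notes of all FRBs, so the test
    is simply: combined collateral >= combined note liability."""
    return sum(collateral_by_frb.values()) >= sum(note_liability_by_frb.values())

# Hypothetical balances in $ billions: Dallas alone pledges less than its own
# note liability, yet every FRB's notes remain fully secured because the
# combined pledge (210) exceeds the combined liability (180).
collateral = {"Dallas": 20, "New York": 150, "Richmond": 40}
liabilities = {"Dallas": 25, "New York": 120, "Richmond": 35}
secured = notes_fully_secured(collateral, liabilities)
```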
The FRBs operate under agreements which specify that assets pledged as collateral by each FRB for its outstanding notes are available to secure the notes of all the FRBs. Accordingly, the combined assets of the FRBs are used to determine whether the notes are adequately collateralized, thus making this combined presentation the most meaningful. These audits of the FRBs' combined financial statements will give the Federal Reserve the opportunity to make audited financial statements publicly available. These annual audits enhance the credibility of reported information and conform to the practices of the central banks of many other major industrialized nations. Although the Federal Reserve's past annual reports have included the FRBs' financial statements, these statements were not audited and lacked adequate disclosure of key information, such as significant accounting policies followed by the FRBs. In contrast, the central banks of France, Germany, the United Kingdom, and Canada issue publicly available annual reports that include audited financial statements and the independent auditors' reports. Presently, there is no requirement that the combined financial statements of the FRBs be audited in accordance with generally accepted government auditing standards (GAGAS). Audits conducted under the contract will be performed in accordance with generally accepted auditing standards (GAAS). We believe that performing these audits under GAGAS would enhance the value of these audits. GAGAS audits incorporate the GAAS requirements, but go further by requiring additional tests of internal controls and compliance with laws and regulations and reports on these matters. The unique role of the FRBs and the nature of records underlying reported balances of Treasury securities and notes preclude full reliance on traditional auditing procedures. For example, confirming account balances with independent parties is an effective audit procedure to substantiate reported balances.
However, this procedure cannot be performed for the FRBs' Treasury security investments and Federal Reserve note liabilities. As part of functions it performs on behalf of Treasury, the New York FRB maintains the ownership records for Treasury securities, including those in the FRBs' portfolio. However, the New York FRB also maintains the related accounting records for these securities. In contrast, Federal Reserve notes are held by parties independent of the FRBs. However, records of specific note holders cannot be maintained because notes continuously circulate throughout the country and the world. Consequently, the FRBs' ownership of Treasury securities and the amount of notes outstanding cannot be independently confirmed. The FRBs retain supporting documentation for the cost of securities transactions for about 2 years. As a result, verifying the entire historical cost of securities that have been in the FRBs' portfolio for extended periods is difficult. However, by retaining support and detailed records for the price paid for new security purchases, the FRBs could eventually support the entire cost of the securities portfolio when the current holdings either are sold or mature. The portion of recorded cost that cannot be readily supported relates to security premiums and discounts. The recorded amounts of premiums and discounts were not significant to the FRBs' total Treasury security account balance as of December 31, 1994. However, auditing the completeness of these recorded amounts is complicated by the lack of supporting documentation and records. Certain Federal Reserve note characteristics affect related accounting and further complicate audit efforts. For example, notes do not mature or expire. In some countries, such as the United Kingdom and France, after a new currency issue is placed in circulation, the old issue is no longer valid for trade, and the liability for the old currency is removed after an appropriate period. 
However, the United States does not invalidate old note issues when a new note issue is placed in circulation. All notes issued are recorded as liabilities until returned to the FRBs. Additionally, many notes are held by collectors or are held in foreign countries and may never be returned to the FRBs. Destructibility, another note characteristic, also affects the note balance and complicates the FRB audits. Since notes were first issued, they have been destroyed by fires, wars, and other accidents and natural disasters beyond the FRBs' control. The value of notes destroyed in this manner in a single year is unlikely to be large relative to the balance. However, the cumulative effect of these destructions and of other notes that may not be returned to the FRBs is unknown. The existence of these factors is not disclosed in the FRBs' financial statements. We commend the Board for taking the step to contract for external, independent financial statement audits over the next 5 years. We believe that the Board's current commitment to auditing the FRBs' combined financial statements should be sustained and become a permanent part of the Board's operating practices. Presenting audited, combined FRB financial statements that contain appropriate disclosures will enhance the credibility of the Federal Reserve's annual report and will help meet the needs of financial statement users, including the Congress and the public. Institutionalizing such annual, external independent audits will also place the Federal Reserve System on a par with the central banks of other major industrialized nations with respect to financial reporting practices. In conducting these audits, the FRBs' external auditors will need to address the audit challenges posed by the FRBs' unique roles. Recording note liabilities based on bank identifiers is an inefficient use of FRB resources, and reporting this liability under the current approach does not serve a meaningful purpose. 
Discontinuing the practice of tracking and recording each FRB's note liability based on note identifiers would increase efficiency and provide a consistent basis for the note liabilities reported by the FRBs. To bring about consistency and improve the efficiency of Federal Reserve note accounting and reporting procedures, we recommend that, in conjunction with planning and implementing future changes to the automated systems used to account for and report notes, the Board of Governors of the Federal Reserve System consider:

- incorporating changes in the function of these systems to allow FRBs to account for and report notes without regard to the identifiers printed on the notes;
- directing the FRBs to discontinue using specific FRB identifiers printed on notes as the basis for recording each FRB's liability for Federal Reserve notes;
- stopping the tracking of shipments by FRB identifiers;
- directing each FRB to record its note liability based on the Federal Reserve notes it actually receives and holds without regard to FRB identifiers; and
- apportioning note destructions among FRBs on an appropriate basis without regard to FRB identifiers.

To enhance the combined financial statements as a vehicle for informing Federal Reserve management, the Congress, and the public about the operations of Federal Reserve Banks, we recommend that the Board of Governors of the Federal Reserve System do the following:

- Adopt a policy to institutionalize annual, external independent audits of the FRBs' combined financial statements as a routine operating procedure. These audits should be performed in accordance with GAGAS.
- Make the FRBs' audited combined financial statements and independent auditor's report publicly available upon issuance. For example, these documents could be included in the Federal Reserve System's annual report.
- Include disclosures in the financial statements that (1) appropriately describe the significant accounting policies followed, such as the basis for the reported note liability and the treatment of the notes held in the vault, and (2) provide the information typically included in financial statements of other central banks and private sector financial institutions.

Regarding our recommendations to bring about consistency and improve the efficiency of Federal Reserve note accounting and reporting procedures, the Board acknowledged in a letter dated January 11, 1996, that changes to the Federal Reserve Act and Federal Reserve policies have blurred the distinction among Federal Reserve notes with different unique identifiers. The Board acknowledged that the accounting process for note destructions offers an opportunity for further efficiencies to be gained in this area. The Board stated it will give consideration to the accounting method used for Federal Reserve notes as the accounting and tracking systems associated with the notes are reviewed for possible redesign. Our other recommendations were intended to enhance the Federal Reserve Banks' combined financial statements as a vehicle for informing Federal Reserve management, the Congress, and the public about the operations of the Federal Reserve Banks, and we believe implementing them would enhance management's accountability. The Board stated it will give careful consideration to our recommendations concerning the use of external auditors, presentation of financial statements, and the application of auditing standards. We are sending copies of this report to the Chairman of the Board of Governors of the Federal Reserve System; the Secretary of the Treasury; the Chairman of the House Committee on Banking and Financial Services; the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing, and Urban Affairs; and the Director of the Office of Management and Budget.
Copies will be made available to others upon request. Please contact me at (202) 512-9406 if you or your staff have any questions. Major contributors to this report are listed in appendix II. Helen T. Desaulniers, Attorney
Pursuant to a congressional request, GAO reviewed several internal control issues at the Federal Reserve Bank (FRB) of Dallas and the Federal Reserve Automation Services' (FRAS) accounting procedures, focusing on: (1) Dallas FRB financial accounting and reporting and electronic data processing (EDP) control weaknesses; (2) the efficiency and consistency of Federal Reserve note accounting; and (3) auditing issues that need the attention of the Federal Reserve System's Board of Governors and its auditor. GAO found that: (1) at the Dallas FRB, its 3 branches, and FRAS, weaknesses exist in accounting records, asset accountability, and the use of automated systems; (2) Dallas FRB control weaknesses include failure to use independent records to verify and reconcile general ledger accounts and activity, limit access to FRB automated systems, review accounting adjustments related to check processing activity, and properly conduct and document Federal Reserve note inventories; (3) FRAS and Dallas FRB general EDP weaknesses include inadequate control over access to sensitive information, system software changes, disaster recovery plan testing, and the use of special privileges on automated tasks; (4) the Federal Reserve could improve the consistency and efficiency of its note accounting procedures by eliminating the use of the FRB identifier on each note for recording liabilities for notes in circulation; (5) the Board of Governors has contracted for annual independent external audits of the combined FRB asset accounts and financial statements over the next 5 years and one audit of each FRB during the same period to enhance the credibility of reported information; and (6) the auditor will face challenges identifying the ownership and original cost of U.S. Treasury securities, confirming amounts held by note holders, and addressing the notes' unique characteristics of nonmaturity and destructibility.
Shortages of chemical and biological defense equipment are a long-standing problem. After the Persian Gulf Conflict, the Army changed its regulations in an attempt to ensure that early-deploying units would have sufficient equipment on hand upon deployment. This direction, contained in U.S. Forces Command Regulation 700-2, has not been universally implemented. Presently, neither the Army's more than five active divisions composing the crisis response force nor the early-deploying Army reserve units we visited had complied with the new stocking level requirements. All had shortages of critical equipment; three of the more than five active divisions had 50 percent or greater shortages of protective suits, and shortages of other critical items were as high as 84 percent, depending on the unit and the item. This equipment is normally procured with operation and maintenance funds. These shortages occurred primarily because unit commanders consistently diverted operation and maintenance funds to meet what they considered higher priority requirements, such as base operating costs, quality-of-life considerations, and costs associated with other-than-war deployments such as those to Haiti and Somalia. Relative to the DOD budget, the cost of purchasing this protective equipment is low. Early-deploying active divisions in the continental United States could meet current stocking requirements for an additional cost of about $15 million. However, unless funds are specifically designated for chemical and biological defense equipment, we do not believe unit commanders will spend operation and maintenance funds for this purpose. The shortages of on-hand stock are exacerbated by inadequate installation warehouse space for equipment storage, poor inventorying and reordering techniques, shelf-life limitations, and difficulty in maintaining appropriate protective clothing sizes. 
The Army is presently considering decreasing units' stocking requirements to the levels needed to support only each early-deploying division's ready brigade and relying on depots to provide the additional equipment needed on a "just-in-time" basis before deployment. Other approaches under consideration by the Army include funding these equipment purchases through procurement accounts, and transferring responsibility for purchasing and storing this material on Army installations to the Defense Logistics Agency. New and improved equipment is needed to overcome some DOD defensive shortfalls, and DOD is having difficulty meeting all of its planned chemical and biological defense research goals. Efforts to improve the management of the materiel development and acquisition process have so far had limited results and will not attain their full effect until at least fiscal year 1998. In response to lessons learned in the Gulf War, Congress directed DOD to improve the coordination of chemical and biological doctrine, requirements, research, development, and acquisition among DOD and the military services. DOD has acted. During 1994 and 1995, it established the Joint Service Integration Group to prioritize chemical and biological defense research efforts and develop a modernization plan and the Joint Service Materiel Group to develop research, development, acquisition, and logistics support plans. The activities of these two groups are overseen by a single DOD office--the Assistant Secretary of Defense (Atomic Energy) (Chemical and Biological Matters). While these groups have begun to implement the congressional requirements of P.L. 103-160, progress has been slower than expected. At the time of our review, the Joint Service Integration Group expected to produce during 1996 its proposed (1) list of chemical and biological defense research priorities and (2) joint service modernization plan and operational strategy.
The Joint Service Materiel Group expects to deliver its proposed plan to guide chemical and biological defense research, development, and acquisition in October 1996. Consolidated research and modernization plans are important for avoiding duplication among the services and otherwise achieving the most effective use of limited resources. It is unclear whether or when DOD will approve these plans. However, DOD officials acknowledged that it will be fiscal year 1998 at the earliest, about 5 years after the law was passed, before DOD can begin formal budgetary implementation of these plans. DOD officials told us progress by these groups has been adversely affected by personnel shortages and collateral duties assigned to the staff. DOD efforts to field specific equipment and conduct research to address chemical and biological defense deficiencies have produced mixed results. On the positive side, DOD began to field the Biological Integrated Detection System in January 1996 and expects to complete the initial purchase of 38 systems by September 1996. However, DOD has not succeeded in fielding other needed equipment and systems designed to address critical battlefield deficiencies identified during the Persian Gulf Conflict and earlier. For example, work initiated in 1978 to develop an Automatic Chemical Agent Alarm to provide visual, audio, and command-communicated warnings of chemical agents remains incomplete. Because of service decisions to fund other priorities, DOD has approved and acquired only 103 of the more than 200 FOX mobile reconnaissance systems originally planned. Of the 11 chemical and biological defense research goals listed in DOD's 1995 Annual Report to the Congress, DOD met 5 by their expected completion date of January 1996; the remaining goals were not met by that date. For example, a DOD attempt to develop a less corrosive and labor-intensive decontaminant solution is now not expected to be completed until 2002.
Chemical and biological defense training at all levels has been a constant problem for many years. For example, in 1986, DOD studies found that its forces were inadequately trained to conduct critical tasks. It took 6 months during the Persian Gulf Conflict to prepare forces in theater to defend against chemical and biological agents. However, these skills declined again after this conflict. A 1993 Army Chemical School study found that a combined arms force of infantry, artillery, and support units would have extreme difficulty performing its mission and suffer needless casualties if forced to operate in a chemical or biological environment because the force was only marginally trained. Army studies conducted from 1991 to 1995 showed serious weaknesses at all levels in chemical and biological defense skills. Our analysis of Army readiness evaluations, trend data, and lessons learned reports from this period also showed individuals, units, and commanders alike had problems performing basic tasks critical to surviving and operating in a chemical or biological environment. Despite DOD efforts-- such as doctrinal changes and command directives--designed to improve training in defense against chemical and biological warfare since the Gulf War, U.S. forces continue to experience serious weaknesses in (1) donning protective masks, (2) deploying detection equipment, (3) providing medical care, (4) planning for the evacuation of casualties, and (5) including chemical and biological issues in operational plans. The Marine Corps also continues to experience similar problems. In addition to individual service training problems, the ability of joint forces to operate in a contaminated environment is questionable. In 1995, only 10 percent of the joint exercises conducted by four major CINCs included training to defend against chemical and biological agents. 
None of this training included all 23 required chemical/biological training tasks, and the majority included fewer than half of these tasks. Furthermore, these CINCs plan to include chemical/biological training in only 15 percent of the joint exercises for 1996. This clearly demonstrates the lack of chemical and biological warfare training at the joint service level. There are two fundamental reasons for this. First, CINCs generally consider chemical and biological training and preparedness to be the responsibility of the individual services. Second, CINCs believe that chemical and biological defense training is a low priority relative to their other needs. We examined the ability of U.S. Army medical units that support early-deploying Army divisions to provide treatment to casualties in a chemically and biologically contaminated environment. We found that these units often lacked needed equipment and training. The medical units we visited often lacked critical equipment needed to treat casualties in a chemically or biologically contaminated environment. For example, these units had only about 50 to 60 percent of their authorized patient treatment and decontamination kits. Some of the patient treatment kits on hand were missing critical items such as drugs used to treat casualties. Also, none of the units had any type of collective shelter to treat casualties in a contaminated environment. Army officials acknowledged that the inability to provide treatment in the forward area of battle would result in greater rates of injury and death. Old versions of collective shelters are unsuitable, unserviceable, and no longer in use; new shelters are not expected to be available until fiscal year 1997 at the earliest. Few Army physicians in the units we visited had received formal training on chemical and biological patient treatment beyond that provided by the Basic Medical Officer course.
Further instruction on chemical and biological patient treatment is provided by the medical advanced course and the chemical and biological casualty management course. The latter course provides 6-1/2 days of classroom and field instruction needed to save lives, minimize injury, and conserve fighting strength in a chemical or biological warfare environment. During the Persian Gulf Conflict, this course was provided on an emergency basis to medical units already deployed to the Gulf. In 1995, 47 to 81 percent of Army physicians assigned to early-deploying units had not attended the medical advanced course, and 70 to 97 percent had not attended the casualty management course. Both the advanced and casualty management courses are optional, and according to Army medical officials, peacetime demands to provide care to service members and their dependents often prevented attendance. Also, the Army does not monitor those who attend the casualty management course, nor does it target this course toward those who need it most, such as those assigned to early-deploying units. DOD has inadequate stocks of vaccines for known threat agents and has so far chosen not to implement the immunization policy it established in 1993. DOD's program to vaccinate the force against biological agents will not be fully effective until these problems are resolved. Though DOD has identified which biological agents are critical threats and determined the amount of vaccines that should be stocked, we found that the amount of vaccines stocked remains insufficient to protect U.S. forces, as it was during the Persian Gulf Conflict. Problems also exist with regard to the vaccines available to DOD. Only a few biological agent vaccines have been approved by the Food and Drug Administration (FDA). Many remain in Investigational New Drug (IND) status.
Although IND vaccines have long been safely administered to personnel working in DOD vaccine research and development programs, the FDA usually requires large-scale field trials in humans to demonstrate new drug safety and effectiveness before approval. DOD has not performed such field trials due to ethical and legal considerations. DOD officials said that they hoped to acquire a prime contractor during 1996 to subcontract vaccine production and do what is needed to obtain FDA approval of vaccines currently under investigation. Since the Persian Gulf Conflict, DOD has consolidated the funding and management of several biological warfare defense activities, including vaccines, under the new Joint Program Office for Biological Defense. In November 1993, DOD established a policy to stockpile sufficient biological agent vaccines and to inoculate service members assigned to high-threat areas or to early-deploying units before deployment. The JCS and other high-ranking DOD officials have not yet approved implementation of the immunization policy. The draft policy implementation plan is completed and is currently under review within DOD. However, this issue is highly controversial within DOD, and whether the implementation plan will be approved and carried out is unclear. Until that happens, service members in high-threat areas or designated for early deployment in a crisis will not be protected by approved vaccines against biological agents. The primary cause for the deficiencies in chemical and biological defense preparedness is a lack of emphasis up and down the line of command in DOD. In the final analysis, it is a matter of commanders' military judgment to decide the relative significance of risks and to apply resources to counter those risks that the commander finds most compelling. DOD has decided to concentrate on other priorities and consequently to accept a greater risk regarding preparedness for operations on a contaminated battlefield. 
Chemical and biological defense funding allocations are being targeted by the Joint Staff and DOD for reduction in their attempts to fund other, higher priority programs. DOD allocates less than 1 percent of its total budget to chemical and biological defense. Annual funding for this area has decreased by over 30 percent in constant dollars since fiscal year 1992, from approximately $750 million in that fiscal year to $504 million in 1995. This reduction has occurred in spite of the current U.S. intelligence assessment that the chemical and biological warfare threat to U.S. forces is increasing and the importance of defending against the use of such agents in the changing worldwide military environment. Funding could decrease even further. On October 26, 1995, the Joint Requirements Oversight Council and the JCS Chairman proposed to the Office of the Secretary of Defense (OSD) a cut of $200 million per year for the next 5 years ($1 billion total) to the counterproliferation budget. The counterproliferation program element in the DOD budget includes funding for the joint nuclear, chemical, and biological defense program as well as vaccine procurement and other related counterproliferation support activities. If implemented, this cut would severely impair planned chemical and biological defense research and development efforts and reverse the progress that has been made in several areas, according to DOD sources. OSD supported only an $800 million cut over 5 years and sent the recommendation to the Secretary of Defense. On March 7, 1996, we were told that DOD was now considering a proposed funding reduction of $33 million. The battle staff chemical officer/chemical noncommissioned officers are a commander's principal trainers and advisers on chemical and biological defense operations and equipment operations and maintenance.
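As a quick check of the arithmetic behind these figures (the dollar amounts are those cited above; treating the proposed cut as $200 million in each of 5 years follows from the $1 billion total):

```python
# Funding figures cited in the testimony, in nominal dollars.
fy1992_funding = 750e6   # approximately $750 million in fiscal year 1992
fy1995_funding = 504e6   # $504 million in 1995

# Nominal decline; the constant-dollar decline would be somewhat larger,
# consistent with the "over 30 percent in constant dollars" statement.
decline = (fy1992_funding - fy1995_funding) / fy1992_funding
print(f"nominal decline: {decline:.1%}")  # nominal decline: 32.8%

# Proposed counterproliferation cut: $200 million per year over 5 years.
total_cut = 200e6 * 5
print(f"proposed cut: ${total_cut / 1e9:.0f} billion")  # proposed cut: $1 billion
```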
We found that chemical and biological staff officer positions are being eliminated and that, when the positions are filled, the officers occupying them are frequently assigned collateral tasks that reduce the time available to manage chemical and biological defense activities. At U.S. Army Forces Command and U.S. Army III Corps headquarters, for example, chemical staff positions are being reduced. Also, DOD officials told us that the Joint Service Integration and Joint Service Materiel Groups have made limited progress largely because not enough personnel are assigned to them and collateral duties are assigned to the staff. We also found that chemical officers assigned to a CINC's staff were frequently tasked with duties not related to chemical and biological defense. The lower emphasis given to chemical and biological matters is also demonstrated by weaknesses in the methods used to monitor their status. DOD's current system for reporting readiness to the Joint Staff is the Status of Resources and Training System (SORTS). We found that the effectiveness of SORTS for evaluating unit chemical and biological defense readiness is limited largely because (1) it allows commanders to be subjective in their evaluations, (2) it allows commanders to determine for themselves which equipment is critical, and (3) reporting remains optional at the division level. We also found that after-action and lessons-learned reports and operational readiness evaluations of chemical and biological training are flawed. At the U.S. Army Reserve Command there is no chemical or biological defense position. Consequently, the U.S. Army Reserve Command does not effectively monitor the chemical and biological defense status of reserve forces. The priority given to chemical and biological defense varied widely. Most CINCs assign chemical and biological defense a lower priority than other threats.
Even though the Joint Staff has tasked CINCs to ensure that their forces are trained in certain joint chemical and biological defense tasks, the CINCs we visited considered such training a service responsibility. Several DOD officials said that U.S. forces still face a generally limited, although increasing, threat of chemical and biological warfare. At Army corps, division, and unit levels, the priority given to this area depended on the commander's opinion of its relative importance. At one early-deploying division we visited, the commander had an aggressive system for chemical and biological training, monitoring, and reporting. At another, the commander had made a conscious decision to emphasize other areas, such as other-than-war deployments and quality-of-life considerations. As this unit was increasingly being asked to conduct operations other than war, the commander's emphasis on the chemical and biological warfare threat declined. Officials at all levels said training in chemical and biological preparedness was not emphasized because of higher priority taskings, low levels of interest by higher headquarters, difficulty working in cumbersome and uncomfortable protective clothing and masks, the time-consuming nature of the training, and a heavy reliance on post-mobilization training and preparation. We have no means to determine whether increased emphasis on chemical and biological warfare defense is warranted at the expense of other priorities. This is a matter of military judgment by DOD and of funding priorities by DOD and the Congress. We anticipate that in our report due in April 1996, we will recommend that the Secretary of Defense reevaluate the low priority given to chemical and biological defense and consider adopting a single manager concept for the execution of the chemical and biological program given the increasing chemical and biological warfare threat and the continuing weakness in the military's defense capability. 
Further, we anticipate recommending that the Secretary consider elevating the office currently responsible for oversight to its own Assistant Secretary of Defense level, rather than leaving it in its present position as part of the Office of the Assistant Secretary for Atomic Energy. We may make other recommendations concerning opportunities to improve the effectiveness of existing DOD chemical and biological activities. We would be pleased to respond to any questions you may have. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO discussed its assessment of U.S. forces' capability to fight and survive chemical and biological warfare. GAO noted that: (1) none of the Army's crisis-response or early-deployment units have complied with requirements for stocking equipment critical for fighting under chemical or biological warfare; (2) the Department of Defense (DOD) has established two joint service groups to prioritize chemical and biological defense research efforts, develop a modernization plan, and develop support plans; (3) although DOD has begun to field a biological agent detection system, it has not successfully fielded other needed equipment and systems to address critical battlefield deficiencies; (4) ground forces are inadequately trained to conduct critical tasks related to biological and chemical warfare, and there are serious weaknesses at all levels in chemical and biological defense skills; (5) medical units often lack the equipment and training needed to treat casualties resulting from chemical or biological contamination; (6) DOD has inadequate stocks of vaccines for known threat agents and has not implemented an immunization policy established in 1993; and (7) the primary cause for these deficiencies is a lack of emphasis along the DOD command chain, with DOD focusing its efforts and resources on other priorities.
VA is responsible for administering health care and other benefits, such as compensation and pensions, life insurance protection, and home mortgage loan guarantees, that affect the lives of more than 25 million veterans and approximately 44 million members of their families. In providing these benefits and services, VA collects and maintains sensitive medical record and benefit payment information for veterans and their family members. AAC is one of VA's three centralized data centers. It maintains the department's financial management and other departmentwide systems, including centralized accounting, payroll, vendor payment, debt collection, benefits delivery, and medical systems. AAC also provides, for a fee, information technology services to other government agencies. As of November 1998, the center either provided or had entered into contracts to provide information technology services, including batch and online processing and workers' compensation and financial management computer applications, for nine other federal agencies. In fiscal year 1998, the VA's payroll was more than $11 billion and the centralized accounting system processed more than $7 billion in administrative payments. AAC also maintains medical information for both inpatient and outpatient care. For example, AAC systems document admission, diagnosis, surgical procedure, and discharge information for each stay in a VA hospital, nursing home, or domiciliary. In addition, AAC systems contain information concerning each of the guaranteed or insured loans closed by VA since 1944, including about 3.5 million active loans. As one of VA's three centralized data centers, AAC is part of a vast array of computer systems and telecommunication networks that VA relies on to support its operations and store the sensitive information the department collects in carrying out its mission. The remaining two data centers support VA's compensation, pension, education, and life insurance benefit programs. 
In addition to the three centralized data centers, the Veterans Health Administration operates 172 hospitals at locations across the country that operate local financial management and medical support systems on their own computer systems. These data centers and hospitals are interconnected, along with 58 Veterans Benefits Administration regional offices, the VA headquarters office, and customer organizations such as non-VA hospitals and medical universities, through a wide area network. Altogether, VA's network services over 700 locations nationwide, including Puerto Rico and the Philippines. Our objective was to evaluate and test the effectiveness of information system general controls over the financial systems maintained and operated by VA at AAC. General controls, however, also affect the security and reliability of nonfinancial information, such as veteran medical and loan data, maintained at this processing center. Specifically, we evaluated information system general controls intended to protect data, files, programs, and equipment from unauthorized access, modification, and destruction; prevent the introduction of unauthorized changes to application and system software; provide adequate segregation of duties involving application programming, system programming, computer operations, security, and quality assurance; ensure recovery of computer processing operations in case of a disaster or other unexpected interruption; and ensure that an effective computer security planning and management program is in place. We restricted our evaluation to AAC because VA's Office of Inspector General was planning to review information system general controls for fiscal year 1998 at the Hines and Philadelphia benefits delivery centers. To evaluate information system general controls, we identified and reviewed AAC's general control policies and procedures.
We also tested and observed the operation of information system general controls over AAC's information systems to determine whether they were in place, adequately designed, and operating effectively. In addition, we determined the status of previously identified computer security weaknesses, but did not perform any follow-up penetration testing. We performed our review from October 1998 through March 1999, in accordance with generally accepted government auditing standards. Our evaluation was based on the guidance provided in our Federal Information System Controls Audit Manual (FISCAM) and the results of our May 1998 study of security management best practices at leading organizations. After we completed our fieldwork, the director of AAC provided us with updated information regarding corrective actions. We did not verify these corrective actions but plan to do so as part of future reviews. VA provided us with written comments on a draft of this report, which are discussed in the "Agency Comments" section and reprinted in appendix I. AAC has made substantial progress in addressing the computer security issues we previously identified. At the time of our review in 1998, AAC had corrected 40 of the 46 weaknesses that we discussed with the director of AAC and summarized in our September 1998 report on VA computer security. AAC had addressed most of the access control, system software, segregation of duties, and service continuity weaknesses we identified in 1997 and had improved computer security planning and management. For example, AAC had reduced the number of users with access to the computer room, restricted access to certain sensitive libraries and audit information, established password and dial-in access controls, developed a formal system software change control process, expanded tests of its disaster recovery plan, and established a centralized computer security group.
AAC was also proactive in addressing additional computer security issues we identified during our current review. We identified a continuing risk of unauthorized access to financial and sensitive veteran medical and benefit information because the center had not fully implemented a comprehensive computer security planning and management program. If properly designed, such a program should identify and correct the types of additional access control and system software weaknesses that we found. In addition, AAC risks certain types of unauthorized access not being detected because it had not completely corrected the user access monitoring weaknesses we previously identified. Our May 1998 study of security management best practices found that a comprehensive computer security planning and management program is essential to ensure that information system controls work effectively on a continuing basis. Under an effective computer security planning and management program, staff (1) periodically assess risks, (2) implement comprehensive policies and procedures, (3) promote security awareness, and (4) monitor and evaluate the effectiveness of the computer security environment. In addition, a central security staff is important for providing guidance and oversight for the computer security planning and management program to ensure an effective information system control environment. AAC had established a solid foundation for its computer security planning and management program by creating a centralized computer security group, developing a comprehensive security policy, and promoting security awareness. However, AAC had not yet instituted a framework for continually assessing risks or routinely monitoring and evaluating the effectiveness of information system controls. In March 1999, the director of AAC told us that the center plans to expand its computer security planning and management program to include these aspects. 
In addition, the director told us that AAC had augmented its security management organization by hiring two additional security experts in May 1999. A comprehensive computer security planning and management program should provide AAC with a solid foundation for ensuring that appropriate controls are designed, implemented, and operating effectively. Periodically assessing risk is an important element of computer security planning because it provides the foundation for the other aspects of computer security management. Risk assessments not only help management determine which controls will most effectively mitigate risks, but also increase awareness and, thus, generate support for adopted policies and controls. An effective risk assessment framework generally includes procedures that link security to business needs and provide for continually managing risk. VA policy requires that risk assessments be performed when significant changes are made to a facility or its computer systems, but at least every 3 years. AAC had not formally reassessed risk since 1996 even though significant changes to the facility and its systems had occurred. For example, AAC management told us that the center had replaced its mainframe computer, implemented a new mainframe operating system, and expanded the facility to accommodate a VA finance center in 1998. Although the director of AAC told us in March 1999 that changes in computer security risks were considered by the implementation teams responsible for these events, documentation of such considerations was not available. Formal risk assessments should be performed for such significant changes. The director of AAC also told us that management would perform a risk assessment later in 1999 to comply with VA policy. One reason that AAC had not formally assessed risks when these significant changes occurred was that the center had not developed a framework for assessing and managing risk on a continuing basis.
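The VA reassessment policy described above reduces to a simple decision rule; the sketch below encodes it for illustration (the function and parameter names are ours, not VA's):

```python
def risk_assessment_due(years_since_last: float, significant_change: bool) -> bool:
    """VA policy as described: reassess when significant changes are made to a
    facility or its computer systems, and at least every 3 years regardless."""
    return significant_change or years_since_last >= 3

# AAC at the time of the review: last formal assessment in 1996, with a new
# mainframe, a new operating system, and a facility expansion since then.
print(risk_assessment_due(years_since_last=3, significant_change=True))  # True
```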
In March 1999, the director of AAC told us that a risk assessment framework would be developed and added to the AAC security handbook. According to the director, this planned risk assessment framework will define the types of changes that require a risk assessment; specify risk assessment procedures that can be adapted to different circumstances; indicate who should conduct the assessment, preferably a mix of individuals with knowledge of business operations, security controls, and technical aspects of the computer systems involved; and describe requirements for documenting the results of the assessment. In addition to assessing risk to identify appropriate controls, it is also important to determine if the controls in place are operating as intended to reduce risk. Our May 1998 study of security management best practices found that an effective control evaluation program includes processes for (1) monitoring compliance with established information system control policies and guidelines, (2) testing the effectiveness of information system controls, and (3) improving information system controls based on the results of these activities. AAC had not established a program to routinely monitor and evaluate the effectiveness of information system controls. Such a program would allow AAC to ensure that policies remain appropriate and that controls accomplish their intended purpose. Although AAC had substantially corrected previously identified computer security weaknesses, we tested additional access and system software controls and found weaknesses that posed risks of unauthorized modification, disclosure, or destruction of financial and sensitive veteran medical and benefit information. These weaknesses included inadequately limiting the access of authorized users to sensitive data and programs, inadequately maintaining the system software environment, and inadequately reviewing network security.
Several of these weaknesses could have been identified and corrected if AAC had been monitoring compliance with established procedures. For example, periodically reviewing AAC user access authority to ensure that it was limited to the minimum required access level based on job requirements would have allowed AAC to discover and fix the types of additional access control weaknesses we identified. Likewise, routinely evaluating the technical implementation of its system software would have permitted AAC to eliminate or mitigate the additional system software exposures we identified. A program to regularly test information system controls would also have allowed AAC to detect additional network security weaknesses. For example, using network analysis software designed to detect network vulnerabilities, we determined that intrusion attempts on 2 of the 10 network access control paths would not be detected. Although AAC fixed this problem before our fieldwork was completed, AAC staff could have identified and corrected this exposure using similar network analysis software available to them. AAC staff told us that they also plan to begin evaluating the intrusion detection system periodically. In addition, AAC had not established a process to test network security when major changes to the network occur. Although AAC had used network analysis software to detect network vulnerabilities earlier in October 1998, we determined that both a production and a development network system had a system program with vulnerabilities commonly known to the hacker community. These vulnerabilities could have provided the opportunity to bypass security controls and gain unlimited access to AAC network systems. Although AAC staff determined that the vulnerable programs were no longer needed and deleted them before our fieldwork was completed, these vulnerabilities could have been prevented had network security been reassessed when the network environment changed. 
AAC was also not adequately monitoring certain user access activity. A comprehensive user access monitoring program would include routinely reviewing user access activity to identify and investigate both failed attempts to access sensitive data and resources and unusual or suspicious patterns of successful access to sensitive data and resources. Such a program is critical to ensuring that improper access to sensitive information would be detected. Because the volume of security information available is likely to be too great to review routinely, the most effective monitoring efforts are those that selectively target unauthorized, unusual, and suspicious patterns of access to sensitive data and resources, such as security software, system software, application programs, and production data. AAC had begun reviewing failed attempts to access sensitive data and resources, but had not established a program to monitor successful access to these resources for unusual or suspicious activity. In March 1999, the director of AAC told us that the center is expanding its user access activity monitoring to identify and investigate unusual or suspicious patterns of access to sensitive resources, such as updates to security files that were not made by security staff, changes to sensitive system files that were not performed by system programmers, modifications to production application programs that were not initiated by production control staff, revisions to production data that were completed by system staff, or deviations from normal patterns of access to sensitive veteran medical and benefit data. In addition to the access activity monitoring and computer security program planning and management weaknesses that remain open from 1997, we identified 16 additional issues during our 1998 review.
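The targeted monitoring described here, flagging successful accesses to sensitive resources made outside the role that normally performs them, can be sketched as follows. The field names, role mappings, and sample records are hypothetical illustrations, not AAC's actual audit data:

```python
# Role normally responsible for each sensitive resource type (illustrative).
EXPECTED_ROLE = {
    "security_file": "security",
    "system_file": "system_programmer",
    "production_program": "production_control",
}

def suspicious_accesses(records):
    """Yield successful accesses to sensitive resources made outside the expected role."""
    for rec in records:
        expected = EXPECTED_ROLE.get(rec["resource_type"])
        if expected and rec["outcome"] == "success" and rec["role"] != expected:
            yield rec

# Hypothetical audit records: one routine update, one suspicious update,
# and one failed attempt (already covered by AAC's existing reviews).
audit_log = [
    {"resource_type": "security_file", "role": "security", "outcome": "success"},
    {"resource_type": "security_file", "role": "clerk", "outcome": "success"},
    {"resource_type": "system_file", "role": "clerk", "outcome": "failure"},
]
flagged = list(suspicious_accesses(audit_log))
print(len(flagged))  # 1 -- only the clerk's successful security-file update
```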
For example, AAC had not restricted access to certain sensitive data and programs based on job requirements, routinely reviewed access authorities granted to employees to ensure that they were still appropriate, adequately reviewed certain components of its operating system to ensure continued system integrity, adequately documented changes to network servers, documented testing of certain emergency changes to its financial systems, or issued technical security standards for maintaining the integrity of system and security software for certain operating system environments. AAC had corrected 6 of the 16 additional issues identified in 1998 before we completed our site visit in Austin. Addressing the remaining additional issues should help AAC ensure that an effective computer security environment is achieved and maintained. We discussed these issues with AAC management and staff and were told that they would be addressed by September 1999. AAC had made substantial progress in improving information system general controls. In addition to correcting most of the access control, system software, segregation of duties, and service continuity weaknesses we had previously identified, AAC had strengthened its computer security planning and management program by creating a centralized computer security group, developing a comprehensive security policy, and promoting security awareness. Until AAC completes implementing its computer security planning and management program by establishing a framework for continually assessing risks and routinely monitoring and evaluating the effectiveness of information system controls, it will not have adequate assurance that appropriate controls are established and operating effectively. We identified additional access, system software, and application change control weaknesses that continued to place financial and sensitive veteran medical and benefit information on AAC systems at risk of improper modification, disclosure, or destruction and assets at risk of loss.
Unauthorized access may not be detected because AAC had not begun identifying and investigating unusual or suspicious patterns of successful access to sensitive data and resources. AAC could have identified and corrected these types of weaknesses, which could also adversely affect other agencies that depend on AAC for computer processing support, had it fully implemented an effective computer security planning and management program. We recommend that the Acting VA Chief Information Officer (CIO) work with the director of AAC to implement policies and procedures for assessing and managing risk on a continuing basis; establish processes for (1) monitoring compliance with established information system control policies and procedures, (2) testing the effectiveness of information system controls, and (3) improving information system controls based on the results of these activities; and expand the center's user access activity monitoring program to identify and investigate unusual or suspicious patterns of successful access to sensitive data and resources. We also recommend that the Acting VA CIO coordinate with the director of AAC to ensure that the remaining computer security weaknesses are corrected. These weaknesses are summarized in this report and detailed in a separate report, designated for "Limited Official Use," also issued today. In commenting on a draft of this report, VA agreed to implement our recommendations by September 30, 1999. Specifically, VA stated that AAC would update its security handbook to include a risk assessment framework, establish a program to routinely monitor and evaluate the effectiveness of controls, and complete procedures for monitoring successful access to sensitive computer resources by the end of September 1999. VA also informed us that AAC had taken action to correct all but three of the other weaknesses we identified and plans to address the remaining weaknesses by September 30, 1999.
Within 60 days of the date of this letter, we would appreciate receiving a statement on actions taken to address our recommendations. We would like to thank AAC for the courtesy and cooperation extended to our audit team.

We are sending copies of this report to Senator Arlen Specter, Senator Ted Stevens, Senator Robert C. Byrd, Senator Fred Thompson, Senator Joseph Lieberman, Senator John D. Rockefeller IV, Representative C. W. Bill Young, Representative Lane Evans, Representative Bob Stump, Representative David Obey, Representative Dan Burton, and Representative Henry A. Waxman in their capacities as Chairmen or Ranking Minority Members of Senate and House Committees. We are also sending copies to Togo D. West, Jr., Secretary of Veterans Affairs, and the Honorable Jacob J. Lew, Director of the Office of Management and Budget. In addition, copies will be made available to others upon request.

If you have any questions or wish to discuss this report, please contact me at (202) 512-3317. Major contributors to this report are listed in appendix II.

David W. Irvin, Assistant Director
Debra M. Conner, Senior EDP Auditor
Shannon Q. Cross, Senior Evaluator
Charles M. Vrabel, Senior EDP Auditor

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD at (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony.
To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a legislative requirement, GAO assessed the effectiveness of information system general controls at the Department of Veterans Affairs' (VA) Austin Automation Center (AAC). GAO noted that: (1) AAC had made substantial progress in correcting specific computer security weaknesses that GAO identified in its previous evaluation of information system controls; (2) AAC had established a solid foundation for its computer security planning and management program by creating a centralized computer security group, developing a comprehensive security policy, and promoting security awareness; (3) however, AAC had not yet established a framework for continually assessing risks and routinely monitoring and evaluating the effectiveness of information system controls; (4) GAO also identified additional computer security weaknesses that increased the risk of inadvertent or deliberate misuse, fraudulent use, improper disclosure, and destruction of financial and sensitive veteran medical and benefit information on AAC systems; (5) an effective computer security planning and management program would have allowed AAC to identify and correct the types of additional weaknesses that GAO identified; (6) in addition, AAC continues to run the risk that unauthorized access may not be detected because it had not established a program to identify and investigate unusual or suspicious patterns of successful access to sensitive data and resources; (7) these weaknesses could also affect other agencies that depend on AAC information technology services; (8) AAC was very responsive in addressing the new security exposures identified, correcting several weaknesses before GAO's fieldwork was completed; (9) the Acting Assistant Secretary for Information Technology said VA would implement all of GAO's recommendations by September 30, 1999; and (10) addressing the remaining issues will help ensure that an effective computer security environment is achieved and maintained.