To promote aviation, FAA provides grants and land to airports. Grants for airport development were authorized under the Federal Airport Aid Program from 1946 through 1970 and the Airport Development Aid Program from 1971 through 1981. During 1982 through 1997, more than $20.5 billion in grants was awarded to commercial service and general aviation airports under the current program, the Airport Improvement Program. About 45 percent of these grants, totaling almost $4.7 billion, were made to general aviation airports for airport development, including more than $800 million for land acquisition. While commercial airports provide scheduled airline passenger service, general aviation airports primarily serve noncommercial aviation traffic, including business and recreational aircraft.

FAA’s Office of Airports administers the Airport Improvement Program and oversees both commercial and general aviation airports’ compliance with federal grant and land transfer requirements. In December 1997, FAA established an Airports Compliance Division within the Office of Airports in Washington, D.C. While 10 full-time compliance policy specialists are currently assigned to the Compliance Division to advise field offices on airport land and revenue issues, 23 field offices provide day-to-day oversight of about 2,000 general aviation airports that have received grant funds or land from the federal government. This oversight responsibility includes ensuring that the airports use airport land in accordance with federal statutes and regulations by monitoring airports’ activities and taking enforcement actions when necessary. Enforcement begins with a formal notification to an airport of its noncompliance and can include such actions as withholding aviation grants or other transportation funds and filing a lawsuit.

In order to receive federal grants, airports must certify that they will abide by the federal requirements contained in the grant agreements by providing written assurances pertaining to the airports’ operation and maintenance. For example, an airport must ensure that it will be available for public use and that airport users will be charged comparable fees. Generally, grant requirements remain in effect throughout the useful life of the facilities developed under the grant but do not exceed 20 years. However, for land acquired with grant funds, these requirements remain in effect as long as an airport is on the land.

In addition to providing financial assistance, federal agencies can transfer deeds to federal land to airports. Since World War II, the federal government has transferred land that is considered excess to the federal government’s needs to about 350 general aviation airports under the Surplus Property Act. Land that is not excess to a federal agency’s current needs—called nonsurplus land—can also be transferred for airport purposes; about 100 general aviation airports have received nonsurplus land. Unlike financial grant obligations that are a part of a contract related to the operation and maintenance of airport facilities, the statutory authorities for the transfer of surplus or nonsurplus federal land place conditions on an airport owner’s title to the land. If these conditions are not met, title to the land may revert to (i.e., be reclaimed by) the federal government. Both financial grants and land transfers restrict the use of airport land and airport revenues to airport purposes.
An airport purpose is any activity that involves, makes possible, is required for the safety of, or is otherwise directly related to the operation of aircraft, such as the use of aircraft hangars, repair facilities, or runways. An approved Airport Layout Plan reflects the agreement between FAA and the airport owner on the allocation of airport areas for specific operational and support functions. In general, land designated in the plan cannot be used, leased, or sold for purposes other than airport purposes without the consent of the Administrator of FAA. If the airport wishes to alter the use of any land designated in the plan—for city parks or departments or industrial parks, for example—FAA must agree that the land is not needed for present or foreseeable airport purposes and must grant permission regardless of whether the land was acquired by federal grant or land transfer or without federal assistance. If the altered use generates revenues, the airport must agree to reinvest the proceeds in the airport. With the consent (called a “release” of the land) of the Administrator, airport land not needed for aviation purposes may be sold or leased so that the airport can use the resulting revenues to support airport development, improvement, maintenance, and operations. FAA has released land from restrictions on its use for airport purposes at an estimated 205 airports since 1990. About 75 percent of these releases involved fewer than 20 acres of land, and about 50 percent involved fewer than 10 acres. Appendix I provides more information regarding our estimate of releases nationwide. Most releases we reviewed involved such issues as using unneeded land to generate revenues for the airport by developing an industrial park, for example, or by using the land as easements for roadways. If an airport has altered the use of (or sold) airport land without FAA’s authorization, the agency may require the airport to return the land to its former use and condition or approve the new use of the land after the fact. In general, all airport revenues must be expended for the airports’ operating expenses and other nonoperating expenditures, such as capital development. The purpose of the restriction on revenue uses is to prevent revenue losses or diversions by ensuring that airports receiving federal assistance use airport revenues for operation and development. The goal is to make airports as self-sustaining as possible and minimize the need for further federal assistance. Therefore, when FAA approves the use of land for nonairport purposes to generate revenues for an airport, its policy requires that the airport receive fair market value for the sale or lease of the land. Generally, if an airport sells or leases land for less than fair market value, the revenues are considered to be lost, or forgone. Revenues are considered to be “diverted” when an airport fails to use revenues generated from activities that take place on airport land for airport purposes. In addition, if an airport owner, typically a local government, uses airport land for nonairport purposes, such as for city parks or departments, and does not pay rent to the airport account, the revenues are also considered to be diverted from the airport. Since 1992, the Department of Transportation’s Inspector General has reported lost or diverted revenues of over $18 million at 11 general aviation airports. FAA closed eight of these cases involving about $15 million, as reported by the Inspector General—recovering $1.7 million in three cases. 
FAA officials noted that all cases were resolved with the Inspector General’s concurrence. To assist operators in avoiding revenue diversion, on February 16, 1999, FAA published a policy statement on the appropriate use of funds generated by airports. FAA’s new policy requires audits of some airports to determine if revenue diversion has occurred. In addition, the policy allows FAA to select airports to be audited where there are indications that revenue diversion may have occurred. According to an official in the Department of Transportation, this new compliance tool will allow FAA to target the airports most likely to have substantial revenue diversion—primarily commercial service airports. Only four FAA field offices have implemented internal controls to ensure that general aviation airports comply with federal requirements to use airport land for airport purposes. These offices use periodic self-certifications or limited on-site monitoring, and all the field offices rely primarily on third-party complaints to identify noncompliance. FAA headquarters staff cited an agencywide emphasis on voluntary compliance in the early 1990s and staffing reductions as reasons for not implementing FAA’s compliance policy. When airports are not monitored, the unauthorized use of airport land is more likely to occur and can lead to the loss or diversion of airport revenues and increased risks to aviation safety. Of the 23 FAA field offices that are responsible for monitoring general aviation airports’ compliance with federal requirements, only four—FAA’s field offices in Kansas City, Missouri; Renton, Washington; Denver, Colorado; and Helena, Montana—regularly monitor and document airports’ compliance with land-use requirements. Combined, these offices oversee 426—or 21 percent—of the approximately 2,000 general aviation airports that have received grant funds or land from the federal government. FAA’s compliance policy handbook—FAA Order 5190.6A, Airport Compliance Requirements—clearly requires field offices to monitor airports for compliance with federal requirements. This requirement serves as an internal control to provide assurance that the federal government’s investment in airports is protected from mismanagement, fraud, waste, and abuse. According to the handbook, FAA field offices must be continuously aware of which airports are not in compliance and conduct limited surveillance of each airport every 4 years to detect recurring deficiencies, system weaknesses, or individual abuses of federal requirements. The surveillance requirement may be met through site visits or by obtaining a written certification by the airport that it is complying with federal requirements. The handbook was last updated in 1990, and FAA officials said they are working on an update, as discussed below. The four field offices that meet FAA’s monitoring requirement did so by obtaining airport self-certifications. As required by the handbook, at least once every 4 years, the four offices require each airport to certify in writing that it is abiding by federal requirements. In addition, according to FAA officials, staff in the Renton, Helena, and Denver field offices occasionally review airports’ compliance with land-use requirements during on-site safety reviews, conducting 27 such on-site compliance inspections in 1998 (17, 7, and 3, respectively) out of about 350 airports they oversee. 
FAA headquarters staff cited an agencywide emphasis on voluntary compliance in the early 1990s and staffing reductions as reasons why field offices might choose not to implement FAA’s compliance policy. However, these reasons do not fully explain why the majority of FAA’s field offices chose not to monitor compliance. Despite an emphasis on voluntary compliance, the compliance handbook clearly states the requirement to have a monitoring system in place, as noted above. Furthermore, Airports Division staffing levels have not declined since 1985 and have actually increased slightly. In 1985, 476 positions were authorized; in 1990, Office of Airports staffing was 473; and by 1998, the office was authorized 485 positions.

Staff in the 14 field offices we visited said that they rely primarily on informal and formal third-party complaints to identify noncompliance and that third-party complaints are sufficient to identify the few cases of inadvertent noncompliance. Field offices reported that nine formal third-party complaints have been made regarding land use at general aviation airports since 1990. Relying on airports’ self-certifications and third-party complaints does not ensure compliance. For example, in 1994, the Department of Transportation’s Inspector General reported revenue losses at the Malden Municipal Airport in Malden, Missouri, and found that the airport had incorrectly responded to the Kansas City field office’s self-certification questionnaire regarding the use of surplus property. In summarizing its reviews of airports’ revenue use in a separate effort, the Inspector General reported material weaknesses in FAA’s monitoring of commercial and general aviation airports’ revenues and concluded that relying on self-certifications and third-party complaints was insufficient for ensuring compliance with federal requirements on revenue use. The report indicated that 14 of the 15 general aviation and commercial airport owners identified as not complying with revenue use requirements had previously certified they were in compliance, and third-party complaints had been filed against only 2 of the 15 airports.

FAA officials consider noncompliance with federal land-use requirements to be a problem, but they believe the incidence of the problem is limited. However, without a monitoring system in place, FAA may not know when unauthorized use of airport land occurs. Because FAA lacks an effective compliance monitoring program, the extent of unauthorized land use at general aviation airports is unknown. However, information supplied by field offices on the releases of land at general aviation airports contained 24 instances of unauthorized land use that have occurred since 1990. Some of these cases went undetected for over a decade. These examples involved 15 states under the oversight of 12 different FAA field offices. The seriousness of the land-use violations ranged from minor isolated infractions to periods of repeated unauthorized use spanning more than two decades without correction. Unauthorized land use may lead to the loss or diversion of revenues or increased safety risks at an airport. For example, the Inspector General identified almost $6.8 million in lost or diverted revenues at 5 of the 24 airports where the unauthorized use of airport land occurred, and FAA said it had recovered about $184,000 from 3 of the other general aviation airports where unauthorized use occurred.
Safety problems can also result from the unauthorized use of airport land, as when a landfill attracts wildlife and thereby increases the risk of “birdstrikes” by landing and departing aircraft. In discussing the 24 cases, FAA officials concluded that the small number of cases did not indicate that FAA’s compliance program was inadequate or that a reallocation of resources by the Office of Airports was warranted. However, as stated previously, without a monitoring system in place, the extent of noncompliance is unknown and FAA has no assurance that airports are complying with federal requirements. The following three examples illustrate how unauthorized land use can continue undetected for long periods of time when airports’ compliance is not actively monitored and how unauthorized use can result in lost or diverted revenues or safety risks. All three airports obtained surplus land and grants from the federal government that restrict the use of airport land and revenues to airport purposes, and the unauthorized use of airport land went undetected or uncorrected for decades at all three airports.

According to FAA field staff, from about 1970 until 1981, Laurel, Mississippi, constructed police and fire training centers, a building to house street maintenance equipment, a dog pound, a fire station, a water plant, a little league park, and a landfill on airport land without FAA’s authorization and without reimbursing the airport. FAA field staff identified the unauthorized use in 1981 and reported that birds attracted by the landfill constituted a safety hazard in 1982 but waited until 1994 to restrict dumping at the landfill. In July 1998, FAA allowed the city to offset the lost revenues with the value of city services provided to the airport. FAA field staff could not explain why the agency had not addressed the safety problems and the land and revenue use issues for more than 12 years and 17 years, respectively.

From approximately 1972 through 1992, Venice, Florida, constructed a mobile home facility, little league parks, and a senior center and developed the airport’s coastal land for public use without obtaining FAA’s authorization or reimbursing the airport. In response to a complaint, the Inspector General reported in 1993 that $2.4 million in revenues was lost over a 4-year period and that almost 300 acres of federal land was inappropriately used. FAA approved an after-the-fact release of airport land in June 1998 and, according to field staff, required the city to charge fair market rent on future leases but recovered no lost revenues (estimated to be at least $25 million) from leases that were below market value because the agency had no authority to terminate the leases.

In 1995, in response to a complaint, the Inspector General reported that Stuttgart, Arkansas, used a hangar for a mosquito control unit and established a landfill on airport land without obtaining FAA’s authorization or reimbursing the airport and leased large tracts of airport land for $1 per year. Although the Inspector General identified $47,000 in diverted revenues over a 3-year period, an FAA official said that the total amount diverted was much greater because the unauthorized use spanned two decades. Furthermore, because the leases of airport land were legally binding, no amounts could be recovered. The city agreed to pay $6,250 per year for the landfill, and FAA officials said that future leases will obtain fair market rents.
In addition, the Inspector General found that FAA approved airport leases that allowed hunting and farming on airport land. These activities created a safety hazard because crops were planted to attract ducks to the airport. As a result, an aircraft struck a bird in 1990. Although the city subsequently banned hunting on the airport land, another birdstrike occurred in 1994, causing $20,000 in damage to one plane. In March 1995, the Inspector General found three duck blinds at the airport as well as duck decoys near the blinds.

FAA has not revised its compliance and enforcement guidance to keep it current. FAA Order 5190.6A, Airport Compliance Requirements, was last updated in October 1989, and the section on enforcement was last updated in August 1973. While field offices rely on the handbook as a primary source of guidance, FAA officials said that numerous policy changes have been made and that they informally convey changes through training and action memos. They noted that they had drafted a policy guidance letter in March 1999 for distribution to the field offices to update information on FAA’s enforcement policy, which dates to 1973. However, these informal efforts do not provide a consolidated record of FAA’s compliance policy. FAA’s lack of timely action to publish compliance policy for airports’ revenue use was identified by the Inspector General in 1998 as a contributing factor to airports’ noncompliance. FAA’s Airports Compliance Division staff said they planned to issue a revised compliance policy order in 1999.

FAA’s order listing airports that are subject to federal requirements from land and grant agreements, last revised in April 1990, is similarly outdated. As a result, FAA headquarters does not have ready access to current summary data on the government’s interests in general aviation airports nationwide, and information that could be useful in quantifying field offices’ monitoring workloads and making resource allocation decisions is not available. In response to our request, field offices gathered data showing that since the order was issued in 1990, 14 listed general aviation airports had closed, while 81 others had become subject to federal requirements.

FAA has a variety of statutory and administrative alternatives for resolving airports’ noncompliance but has generally chosen not to use them. Instead, FAA prefers to address noncompliance through negotiation and settlement. However, in several instances FAA’s attempts at negotiation have been unsuccessful, and airports’ lack of willingness to comply with federal requirements justified greater efforts to enforce compliance. Furthermore, FAA has not followed its policy of obtaining fair market value for airport land when it knows the use will change, as in the case of a Kansas City, Missouri, general aviation airport now being closed. The federal government is entitled to the same legal remedies as any other party to a contract that has been breached, and FAA’s enforcement policy calls for FAA officials to seek injunctions or judgments in the courts should the circumstances warrant. However, FAA officials do not believe it is generally practical to take legal actions. They said that these matters must be referred to the Department of Justice for prosecution and that the dollar amounts involved are usually too low to be a high priority for Justice. Alternatively, FAA may assess civil penalties of up to $50,000 without going to court. FAA has not used this power for unauthorized land or revenue use.
For example, FAA allowed the closure of a small general aviation airport in Fall River, Massachusetts, in February 1996 but required the airport to reinvest the residual value of federal grant funding (approximately $30,000) in a nearby general aviation airport in New Bedford, Massachusetts. However, at the time of our review, almost 2 years after the agreement was signed, Fall River had not provided the funds and FAA had not pursued legal action or civil penalties. FAA headquarters staff said they had not pursued legal action because they believed that ongoing negotiations might prove to be successful. The Congress has strengthened FAA’s enforcement powers to resolve revenue diversion cases by including restrictive language in appropriations and transportation laws. For fiscal years 1994 and 1995, the Congress specified that transportation funds could be withheld from any local government that diverts revenues generated by a public airport. The Airport Revenue Protection Act of 1996 made this enforcement action permanent, giving the Secretary of Transportation the authority to withhold aviation, transit, and rail funds from local governments that fail to reimburse airports for illegally diverted funds and to assess civil penalties against those that fail to reimburse the federal government. FAA headquarters officials said that the agency has never recommended that transportation funds be withheld under these laws. FAA also has special enforcement remedies that are inherent in various types of airport agreements, such as returning to the federal government the ownership of land it provided if an airport does not comply with federal requirements. FAA officials said that FAA has never had the title to surplus or nonsurplus land revert to the agency because it could not manage the day-to-day operations of an airport. FAA also may take administrative actions, such as denying requested land releases or withholding funds provided under the Airport Improvement Program. For example, FAA field staff denied the airport’s request to release land at Scholes Field in Galveston, Texas, in May 1998 because of unresolved issues identified by the Inspector General, including lost revenues and continued attempts by the city to sell off and/or lease airport land at less than fair market value. FAA also informed the airport that a failure to take corrective action could result in the city’s becoming ineligible for federal grants. However, FAA field staff and headquarters officials noted that withholding grant funds is not a significant deterrent for communities that would rather close an airport or support it to a lesser extent. FAA’s compliance handbook requires that staff first attempt to have an airport voluntarily correct compliance deficiencies. FAA officials told us that airports are generally willing to take corrective action and FAA prefers to exhaust all avenues of voluntary corrective action and negotiate a settlement of the noncompliance before undertaking enforcement actions. For example, in September 1998 at the Waterville-R. LaFleur general aviation airport in Waterville, Maine, the airport began excavation for construction of an industrial park before requesting a release of the land from FAA. However, once airport officials realized they needed FAA’s approval, they voluntarily stopped work until the release was approved. 
FAA said that a confrontational approach using its enforcement authority would be justified only if it resulted in a higher level of compliance than maintaining a cooperative relationship with airports. FAA resolves unauthorized use of airport land by obtaining airports’ assurances that future revenue diversion or losses will not occur; allowing cities to apply the value of services they provided to the airport, such as police and fire protection, to offset the lost or diverted airport revenues; requiring that future leases call for fair market value; or working out other solutions. For example, FAA field staff drove by Frederick Municipal Airport in Maryland and discovered an 11-acre public works facility constructed on airport land. FAA arranged for the city to provide another land parcel adjoining the airport in exchange for the unauthorized conversion of the 11 acres of airport land subject to federal restrictions on its use.

If an airport does not voluntarily make corrections, the handbook requires FAA to place the airport in pending noncompliance status and notify the airport of the right to a hearing. As a result of the hearing (or if no hearing has been requested), the FAA field manager is to determine whether the airport is in noncompliance and whether FAA should impose appropriate sanctions or civil penalties. However, we found that FAA field offices very rarely place airports in pending noncompliance or noncompliance status. For example, of the 24 cases of unauthorized land use we reviewed, only 2 had been placed in noncompliance status. In the following two cases—Queen City Municipal Airport in Allentown, Pennsylvania, and Bader Field in Atlantic City, New Jersey—FAA has not initiated the enforcement process by placing the airports in pending noncompliance status despite long-standing compliance problems at both airports.

At Queen City Municipal Airport, FAA in 1965 approved the city’s request to use an airport hangar located on 6 acres of airport land as a vehicle maintenance facility without releasing the land or requiring the city to reimburse the airport, as required by the federal surplus land transfer agreement signed in 1948. FAA officials could not explain the reason for the inappropriate 1965 decision. In 1984, FAA denied the city’s request to release the land and advised the city that it had violated federal surplus property requirements. In 1987, FAA released airport property for a highway right-of-way, and between 1989 and 1992, FAA awarded four grants totaling $2.3 million to repair runways and develop a master plan. FAA officials stated that the land release and grants were provided to persuade the city not to proceed with plans to close the airport, an important regional reliever airport. In response to a complaint, the Inspector General reported in January 1997 that the city owed the airport about $2.8 million for the nonpayment of rent for the maintenance facility from 1984 through 1995. Nonetheless, the airport was scheduled to receive an Airport Improvement Program grant for almost $300,000 in February 1999. After we discussed the airports’ continuing noncompliance with FAA headquarters officials in December 1998, they sent a letter to Queen City on March 5, 1999, informing the city that FAA would consider taking steps to collect a portion of the $2.8 million. FAA officials said they took steps to delay the award of the $300,000 grant until the airport’s noncompliance was resolved.
Figure 1 shows an aerial view of the airport and the unauthorized use of airport hangars for a vehicle maintenance facility. According to FAA officials, since the early 1970s, Atlantic City has used airport property without obtaining FAA’s approval or reimbursing the airport as required by federal grants that expire in 2006. Specifically, the city constructed a high school football field on airport land and used airport buildings for a police annex and fire station without approval or reimbursement. Furthermore, the airport’s condition has gradually deteriorated during the 1990s, and the city claims that the airport is unsafe and therefore should be closed. Safety issues have resulted from unauthorized use. In May 1996, after the city allowed the unauthorized excavation of an aircraft parking area for a minor league baseball stadium being constructed on the airport land, an aircraft accident occurred. A plane hit an unmarked and unlighted excavation hole at night. No injuries occurred. In addition, during the stadium’s construction, land survey spikes were driven into the airport runways and left for an unknown period of time while the airport was still open. In 1997, in defiance of FAA’s explicit instructions that FAA’s approval was required to build the stadium on the airport land, the Mayor of Atlantic City informed FAA that construction of the baseball stadium would proceed. FAA subsequently signed a memorandum of agreement that allowed the stadium to be built, hoping that through cooperation, the city would make needed safety improvements and not close the airport. The agreement required the city to reimburse the airport for the fair market value of the baseball stadium land but did not resolve the city’s unauthorized use of and lack of compensation for airport buildings and the land for the high school football stadium. Figures 2 and 3 show the unauthorized minor league baseball stadium, police annex, and aircraft operations area at Bader Field in Atlantic City, New Jersey. Although almost 2 years had passed without corrective action at the time of our review, FAA had not cited the airport for official noncompliance, requested the Inspector General to investigate the overt revenue diversion, or used other stronger enforcement methods. In December 1998, we discussed the noncompliance with FAA headquarters officials, who agreed to determine if the situation was serious enough to warrant enforcement action. On January 8, 1999, FAA requested the city to provide financial reports showing that rent from the baseball stadium was being deposited into an airport account. However, the city did not respond to the request or repeated phone calls, and a follow-up letter was sent on March 19, 1999. We referred this matter to the Inspector General in March 1999. Even when FAA is aware of changes to an airport before they occur, it has not always followed its own policies and required that airports recoup the fair market value for the sale of surplus property. For example, in July 1998, FAA signed a memorandum of agreement to release the Richards-Gebaur Memorial Airport from grant assurances and surplus land deed requirements once Kansas City, Missouri, and the Kansas City Southern Railroad agree to establish an intermodal rail-highway depot on the runway. FAA’s agreement with Kansas City required an estimated $15 million to be reinvested in other area airports over a 20-year period. 
By signing the agreement, FAA did not follow its policy requiring an independent appraisal to determine fair market value of the airport property. The only indication of the value of the 1,300-acre property was a $33 million estimate cited in field office files, but FAA officials did not provide an appraisal to support or refute the estimate. FAA’s own guidance on the release of airport property is explicit: “This objective is not met unless an amount equal to the net sale proceeds based on the current FMV of the property is realized as a consequence of the release and such amount is committed to airport purposes . . . A sale and disposal of airport property for less than its FMV is inconsistent with the intent of the statute.” FAA’s decision to allow Kansas City to sell the Richards-Gebaur Memorial Airport without conducting an appraisal or ensuring that fair market value is obtained was improper. In commenting on our preliminary findings in March 1999, FAA headquarters officials said that they are carefully reviewing the proposed leases of the airport and will consider amending the memorandum of agreement or rejecting the deal entirely. Figure 4 shows an aerial view of the airport. The railroad tracks are parallel and to the left of the 8,700-foot runway that is to be converted to an intermodal rail/truck facility. Figure 5 shows some of the facilities on the airport land.

FAA’s compliance program is intended to ensure that the public interest and investment in general aviation airports receiving federal assistance are protected. This is not currently the case. Because none of FAA’s 23 field offices conduct regularly scheduled on-site visits to monitor general aviation airports’ compliance with federal requirements, FAA does not know the extent of noncompliance at these airports nationwide. Nor does FAA always effectively enforce federal requirements. Not surprisingly, therefore, we found unauthorized land and revenue uses and increased risks to aviation safety. These problems are compounded by the lack of current policy guidance and current summary information on the nature of airports’ federal requirements nationwide. Overall, FAA’s failure to effectively implement its compliance program leaves the nation’s federally assisted general aviation airports vulnerable to mismanagement, fraud, waste, and abuse. Although our review focused on FAA’s land-use requirements for general aviation airports receiving federal assistance, FAA’s lack of an effective compliance and enforcement program leaves compliance with other federal requirements, such as the requirement to keep airports available for public use and to charge airport users comparable fees, vulnerable as well.

Addressing noncompliance after it occurs is often difficult. Therefore, prevention is key to ensuring compliance with federal requirements. Airports’ self-certifications can also serve to educate airports about their requirements. However, self-certifications alone are not a sufficient internal control because, in some instances, noncompliance may be deliberate, suggesting the need for a more hands-on monitoring approach than that provided by self-certifications. Consequently, on-site monitoring and enforcement action are also essential elements of any preventive compliance strategy.

In instances in which negotiations with an airport are unsuccessful, FAA has not used its available enforcement actions effectively to deter violations or recoup losses to the federal government. It has not withheld transportation grants, taken back the title to airport land, or taken action through the courts.
When such actions are not taken, even in cases of long-standing noncompliance, the lack of action becomes a de facto policy of permissiveness. This de facto policy has occurred in two cases we identified—Bader Field and Queen City Municipal Airport. In the case of Richards-Gebaur Memorial Airport, however, FAA still has time to adhere to its own policy of obtaining fair market value for the land being sold for other purposes. To effectively implement the internal controls contained in FAA’s compliance policy, we recommend that the Secretary of Transportation direct the Administrator of FAA to revise current compliance policy guidance to airports to require regularly scheduled monitoring methods that provide for periodic on-site visits. In conjunction with periodic on-site visits, the monitoring component could include requiring periodic self-certifications of compliance from all airports and formally coordinating with interested parties who may have information about airports’ compliance, such as general aviation organization field representatives. In addition, FAA should provide specific criteria for initiating enforcement action and set reasonable time frames for taking progressively stronger enforcement actions in cases in which efforts to achieve voluntary corrective action are unsuccessful. In those cases of noncompliance that FAA cannot resolve in a reasonable period of time, we further recommend that the Secretary of Transportation direct the Administrator of FAA to apply the enforcement tools already provided by the Congress by holding field offices accountable for taking enforcement actions, particularly in cases of long-term, repeated, or willful unauthorized land or revenue use. Until FAA develops and implements a compliance and enforcement program that provides adequate internal control over airports’ compliance with federal requirements, the FAA Administrator should determine whether the internal control weakness disclosed in this report should be included when providing information to the Secretary for inclusion in the Secretary’s annual report to the President and the Congress, as required by the Federal Managers’ Financial Integrity Act of 1982. Finally, the FAA Administrator should resolve long-standing instances of noncompliance and revenue diversion by taking enforcement action to protect the public investment in aviation at the Queen City Municipal Airport and at Bader Field. FAA should also require that Kansas City obtain a fair market appraisal of the value of airport land and, upon closing Richards-Gebaur Memorial Airport, reinvest an amount equal to the appraised value in local area airports to promote aviation, as required by FAA’s policy. We provided copies of a draft of this report to the Department of Transportation and the Federal Aviation Administration (FAA) for review and comment. We met with FAA officials including the Director, Office of Airport Safety and Standards. We also met with Department of Transportation officials from the Office of Acquisition and Grant Management and the Office of General Counsel. FAA officials had several concerns about the information in the report but did not comment specifically on the report’s recommendations. FAA officials said that over the last several years FAA has been working to enhance its oversight of airport owners’ compliance with federal obligations. 
For example, FAA recently issued a revised airport revenue use policy that requires audits of some airports to determine if revenue diversion has occurred and allows FAA to select airports to be audited where there are indications that revenue diversion may have occurred. It also established an Airports Compliance Division within the Office of Airports in December 1997 and assigned 10 full-time compliance policy specialists to advise field offices on airport land and revenue issues and provide procedural support for compliance personnel in the field. The FAA officials said that, using the new policy and staff, the agency is working to ensure that it provides effective oversight to the airports in the national airport system. FAA officials also said that, over the past several years, the agency has enhanced its airport industry outreach and educational efforts by providing agency-sponsored seminars and participating in conferences sponsored by airport industry and airport owner associations. The efforts to improve oversight of airports’ compliance described in FAA’s comments are included in this report, and, therefore, we made no changes in response to these comments. Because FAA said that it has enhanced its airport industry outreach and educational efforts, we deleted from our recommendations a reference to a need to improve these educational efforts. The agency officials did not dispute the report’s findings on the level of monitoring and enforcement, but offered a number of reasons for why monitoring and enforcement efforts are not more extensive. FAA officials told us that providing effective oversight requires that the agency prioritize its use of the limited available staff. FAA officials indicated that its financial compliance activities are most appropriately focused on commercial service airports that receive the majority of federal funding. While the FAA officials agreed that it may be possible to further strengthen compliance oversight activities with general aviation airports, they believe that the draft report did not provide sufficient context in which to understand the relative scope of the issues described in the report. Although there are over 2,000 general aviation airports subject to federal requirements, they account for a relatively small proportion of funding and thus do not warrant a greater expenditure of FAA’s limited resources, the officials stated. With approximately one person assigned to compliance duties per region (and many of these people have multiple duties), the FAA officials said that it is appropriate to focus compliance oversight efforts on the 840 commercial service airports that account for about 80 percent of Airport Improvement Program funding. The draft report included information placing general aviation airports in context within all airports receiving federal funding. Therefore, we made no changes in response to these comments. While we acknowledge that the oversight of commercial service airports is important, this does not relieve FAA from its responsibility for ensuring that all airports comply with the requirements associated with the federal funding or land they receive. We visited 14 and surveyed 9 of the 23 FAA field offices responsible for overseeing general aviation airports. The 14 offices we visited represent seven of the nine FAA regions and manage airports in 33 of the 50 states, or about two-thirds of all public airports with land subject to federal requirements in the United States. 
Specifically, we visited the following field offices to discuss implementation of the compliance program: Atlanta, Georgia; Burlingame, California; Burlington, Massachusetts; Camp Hill, Pennsylvania; Detroit, Michigan; Dulles, Virginia; Garden City, New York; Jackson, Mississippi; Kansas City, Missouri; Lawndale, California; Orlando, Florida; and three field offices in Fort Worth, Texas. We obtained information regarding compliance monitoring and cases of noncompliance with the federal requirements for land and revenue use, complaints and investigations, and land releases and airport closures from each of the 23 FAA field offices. To estimate the incidence of releases on a nationwide basis, we took a random sample of 506 general aviation airports and obtained data on each sampled airport from the 23 field offices. We discussed the issue of airports’ compliance with general aviation interest groups, including the Aircraft Owners and Pilots Association, the National Air Transportation Association, and the American Association of Airport Executives. These groups identified a number of issues regarding land use at general aviation airports, including the diversion of airport revenues and safety concerns. We performed our review in accordance with generally accepted government auditing standards from September 1998 through April 1999.

We are providing copies of this report to the Honorable Rodney E. Slater, the Secretary of Transportation; and the Honorable Jane F. Garvey, Administrator, FAA. We will also make copies available to others on request. If you or your staff have any questions about this report, please call me at (202) 512-2834. Major contributors to this report are listed in appendix II.

As discussed in the “Scope and Methodology” section of the report, we used a probability sample of airports to estimate the incidence of releases on a nationwide basis. Since we used a sample to develop our estimates, each estimate has a measurable precision. This precision can be used to develop upper and lower bounds for each estimate. This range is called a confidence interval. Confidence intervals are stated at a certain confidence level—in this case, 95 percent. For example, a confidence interval at the 95-percent confidence level means that in 95 of 100 instances, the sampling procedure that we used would produce a confidence interval containing the universe value we are estimating. The following table contains the estimates that we made and their upper and lower bounds at the 95-percent confidence level. Note that we found 51 releases in our sample, but we were unable to obtain the number of released acres for 1 of the releases.

Dave Hooper
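To make the appendix I estimate concrete, the sketch below shows one way the nationwide figure of roughly 205 airports with releases, and a 95-percent confidence interval around it, could be derived from the sample counts reported above. It is illustrative only: the universe size of about 2,000 obligated general aviation airports and the normal approximation with a finite population correction are assumptions, not GAO's documented estimation procedure.

```python
import math

# Illustrative only: the sample counts come from the report (51 releases found in a
# random sample of 506 airports); the universe size and the simple normal
# approximation with a finite population correction are assumptions, not GAO's
# documented estimation method.

n = 506          # sampled general aviation airports
x = 51           # sampled airports with at least one release since 1990
N = 2_000        # approximate universe of obligated general aviation airports

p_hat = x / n                                   # sample proportion with releases
estimate = p_hat * N                            # point estimate of airports with releases

fpc = (N - n) / (N - 1)                         # finite population correction
se = math.sqrt(p_hat * (1 - p_hat) / n * fpc)   # standard error of the proportion
half_width = 1.96 * se * N                      # 95% half-width, expressed in airports

print(f"Point estimate: {estimate:.0f} airports")
print(f"95% CI: {estimate - half_width:.0f} to {estimate + half_width:.0f} airports")
# Roughly 200 airports, plus or minus about 45; consistent with the report's
# estimate of about 205 airports with releases since 1990.
```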
Pursuant to a legislative requirement, GAO provided information on: (1) the Federal Aviation Administration's (FAA) monitoring of general aviation airports' compliance with federal land-use requirements; and (2) FAA's use of enforcement tools to resolve cases of noncompliance. GAO noted that: (1) FAA does not adequately monitor general aviation airports' compliance with federal requirements and does not have the internal controls in place to protect the federal government's investment in the airports from mismanagement, fraud, waste, and abuse; (2) although FAA's compliance policy clearly calls for monitoring airports to ensure they meet federal requirements, only 4 of FAA's 23 field offices monitor compliance; (3) this monitoring, however, relies primarily on airports themselves certifying that they are complying with federal requirements; (4) in 1994, the Department of Transportation's Inspector General concluded that relying on such certifications was insufficient for ensuring compliance with federal requirements on revenue use, noting that 14 of the 15 airport owners identified as not complying with revenue use requirements had previously certified that they were in compliance; (5) one result of FAA's lack of monitoring is that airports' unauthorized use of land has gone undetected in some cases for over a decade; (6) unauthorized use has resulted in the loss or diversion of millions of dollars in airport revenues from general aviation airports, typically owned by a local government; (7) in some cases, increased risks to aviation safety also resulted; (8) FAA determined that birds attracted by an unauthorized landfill at Hesler-Noble Field in Laurel, Mississippi, posed a possible danger to aircraft; (9) FAA generally addresses airports' noncompliance with federal requirements through negotiation and settlement rather than the use of available enforcement actions; and (10) when negotiations are unsuccessful and persistent noncompliance occurs, FAA has not always taken appropriate enforcement action--such as withholding transportation grants, taking back the title to airport land, or taking action through the courts.
The Department of the Treasury is authorized by the Congress to borrow money on the credit of the United States to fund operations of the federal government. The Bureau of the Public Debt (BPD) is the organizational entity within Treasury that is responsible for prescribing the debt instruments and limiting and restricting the amount and composition of the debt. BPD accomplishes this by issuing marketable Treasury bills, notes, and bonds as well as nonmarketable securities, such as U.S. Savings Bonds. The bureau is also responsible for paying interest to investors and redeeming investors’ securities. In addition, BPD has been given the responsibility to issue Treasury securities to trust funds for trust fund receipts not needed for current benefits and expenses.

During fiscal year 1997, BPD issued over $2.34 trillion in Treasury securities to the public while redeeming about $2.31 trillion of debt held by the public. Most of the $2.34 trillion was raised through more than 160 securities auctions as well as the continual sale of savings securities at 40,000 locations throughout the country and investments in securities by state and local governments. Further, there was $152 billion of net borrowings from federal entities, primarily trust funds.

BPD relies on a number of financial systems to process and track the money that is borrowed and to account for the securities it issues. One of its primary systems is the Public Debt Accounting and Reporting System, which is used to account for the federal debt. BPD also relies on various other systems to track marketable securities, savings bonds, and securities issued to state and local government entities and to generate interest transactions for the different securities. All of BPD’s financial activities are processed at its data processing center in Parkersburg, West Virginia.

In carrying out its debt responsibilities, BPD receives assistance from Federal Reserve Banks (FRBs) located throughout the country, which serve as Treasury’s fiscal agents. For instance, FRBs issue Treasury securities in electronic (book entry) form upon authorization by the Treasury and administer principal and interest payments on these securities. There are 12 FRBs with 25 branches throughout the country. FRBs use a number of information systems to help process issuance and redemption activities; generate interest payments; and account for marketable Treasury securities, nonmarketable savings securities, and savings securities stock. Data are initially processed at FRBs and then forwarded to BPD’s Parkersburg, West Virginia, data center for further processing.

The overall effectiveness of the BPD computer controls depends on the controls implemented by BPD’s Assistant Commissioner for the Office of Information Technology. This person serves as Chief Information Officer and is responsible for overseeing the development, implementation, and operation of information processing systems. Our objectives were to evaluate and test the effectiveness of the controls over key financial management systems maintained and operated by BPD.
Specifically, we evaluated general controls intended to protect data, files, and programs from unauthorized access and modification; prevent the introduction of unauthorized changes to systems; ensure that system software development and maintenance, applications software development and maintenance, computer operations, security, and quality assurance are performed by different people; ensure recovery of computer processing operations in case of a disaster or other unexpected interruption; and ensure that an adequate computer security planning and management program is in place. To evaluate the general controls, we identified and reviewed BPD’s information system general control policies and procedures, conducted tests and observations of controls in operation, and held discussions with officials at the BPD data center to determine whether general controls were in place, adequately designed, and operating effectively. In addition, we attempted to obtain access to sensitive data and programs. These attempts, referred to as penetration testing, were performed with the knowledge and cooperation of BPD officials.

To evaluate certain application controls, we tested two key BPD financial applications maintained and operated at the data center. Specifically, we evaluated application controls intended to ensure that access privileges establish individual accountability and proper segregation of duties, limit the processing privileges of individuals, and prevent and detect inappropriate or unauthorized activities; data are authorized, converted to an automated form, and entered into the application accurately, completely, and promptly; data are properly processed by the computer and files are updated correctly; and files and reports generated by the application (1) represent transactions that actually occur and (2) accurately reflect the results of processing, and reports are controlled and distributed to the authorized users.

To assist in our evaluation and testing of general and application controls, we contracted with an independent public accounting firm. We determined the scope of our contractor’s audit work, monitored its progress, and reviewed related working papers to ensure that the resulting findings were adequately supported. During the course of our work, we communicated interim findings and recommended corrective actions to BPD officials who informed us of the steps they planned to take or had taken to address the vulnerabilities we identified. We performed follow-up work to assess the status of any corrective actions taken as of September 30, 1997. The results of the follow-up work were also communicated to BPD. We performed our work at the BPD data center in Parkersburg, West Virginia, from March 1997 through January 1998 in accordance with generally accepted government auditing standards. We requested oral comments on a draft of this report from the Secretary of the Treasury or his designee. On August 31, 1998, the Commissioner of the Bureau of the Public Debt provided us with oral comments, which are discussed in the “Agency Comments” section.

Our review of general controls over BPD’s financial systems did not identify any weaknesses that placed BPD’s financial information at significant risk of being accessed, compromised, or destroyed. However, we found certain vulnerabilities that warrant management’s attention and action.
Specifically, we found that BPD could improve its general controls by (1) strengthening logical access controls over the use of powerful system capabilities that can be used to access data and programs, (2) strengthening physical controls to further restrict and prevent unauthorized access, and (3) enhancing its service continuity and contingency plans. BPD could also improve its oversight and monitoring of computer security by ensuring that known security violations are investigated and resolved.

A key control used by organizations to protect and control access to information maintained in their systems is the use of logical access controls. Logical access controls consist of safeguards, such as passwords, user IDs, and security software programs, that prevent unauthorized users from gaining access to computing resources and restrict the access of legitimate users to the specific systems, programs, and files that they need to conduct their work. BPD did not adequately control powerful system capabilities to prevent unauthorized changes to data and programs that could adversely affect the integrity and availability of the on-line systems environment. We also identified vulnerabilities in certain controls that detect unauthorized access to BPD’s systems.

Another key control for safeguarding financial data and computer resources from internal and external threats is physical access controls, such as locks, guards, fences, and surveillance equipment. Our review at the data center found physical access control vulnerabilities that could allow access to sensitive areas within the BPD data center by employees whose jobs did not warrant such access.

An organization’s ability to respond to and maintain service after an emergency can be significantly affected by how well it has planned for such contingencies and tested those plans. An organizational contingency plan describes how an organization will deal with a full range of emergencies, from electrical power failures to catastrophic events, such as earthquakes, floods, and fires. The plan specifies the organization’s emergency response, backup operations, and postdisaster recovery procedures to ensure the availability of critical resources and facilitate the continuity of operations. It also identifies essential business functions and prioritizes resources in order of criticality. To be most effective, a contingency plan should be periodically tested, and employees should be trained in and familiar with its use. In reviewing BPD’s service continuity and contingency planning, we found vulnerabilities related to the close proximity of off-site storage, the currency and completeness of contingency plan testing, and the adequacy of the backup power supply.

In addition to establishing controls and preparing emergency response plans, an effective computer security management program requires that the organization be actively involved in planning and overseeing computer security activities. Such management involvement should include assigning explicit security responsibilities, regularly assessing risks, establishing and communicating security policies and procedures based on risks, and monitoring and periodically reviewing security controls. In reviewing general controls, we found security management vulnerabilities related to (1) “conflict of interest” issues in the reporting and follow-up of security violations and (2) verifying that background checks have been performed before granting employees access to systems.
We also noted additional security management vulnerabilities related to the development of BPD-specific security policies and oversight of the security violation follow-up process. However, we verified that corrective actions resolving these vulnerabilities had been completed by BPD subsequent to September 30, 1997. In addition to testing general controls, we tested application controls for two key BPD financial applications maintained and operated at the BPD data center. We identified the following areas where improvements could be made: (1) strengthening access controls by further restricting system access rights and improving security monitoring, and (2) managing accuracy controls more effectively by ensuring that established procedures are followed to prevent the unauthorized deletion of exception reports. Like general access controls, access controls for specific applications should be established to ensure individual accountability and proper segregation of duties, limit the processing privileges of individuals, and prevent and detect inappropriate or unauthorized activities. For the applications reviewed, we found that BPD granted greater access rights to users than required for their jobs, maintained inadequate documentation of access authorizations granted to users, and did not adequately monitor user activities relating to the applications. Accuracy controls are one of the processing controls used to ensure that data are valid and correctly processed. For one application, we determined that the automated controls for identifying and correcting exceptions need improvement. Specifically, established procedures were not followed to prevent inappropriate use of a powerful software utility to delete exception reports from production databases. The deletion of exception conditions may cause inaccuracies in the application’s reporting. Because FRBs are integral to the operations of BPD, we also assessed general controls over BPD financial systems operated at FRBs and application controls for four key BPD financial applications maintained and operated by FRBs. Overall, we found these controls were effective. However, we found several vulnerabilities in general and application controls that require FRB management’s attention and action. These include vulnerabilities in general controls involving (1) access to systems, programs, and data, including unauthorized external access, and (2) service continuity and contingency planning. We also found vulnerabilities in access controls over two of the applications. During our review, we communicated our interim findings and recommended corrective actions for each specific finding to FRB management, and, in most cases, FRBs have acted or are acting to resolve the vulnerabilities that we identified. We will review the status of FRBs’ other corrective actions as part of our fiscal year 1998 financial audits. Further, we are providing a separate report to the Board of Governors of the Federal Reserve System that summarizes the details of the control vulnerabilities at FRBs. BPD implemented other controls that reduce the risk that material losses or misstatements in the financial statements could occur and not be detected promptly as a result of the computer control vulnerabilities identified in this report. For instance, we determined that the assignment of duties for issuing and redeeming securities provides adequate segregation between FRB and BPD personnel, and that reconciliations of their independent records are performed daily. 
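As a hypothetical illustration of the daily reconciliation described above, the sketch below compares two independently maintained record sets and flags any item that does not agree; the record identifiers and amounts are illustrative assumptions rather than actual BPD or FRB data.

# Minimal sketch of a daily reconciliation of independently kept records
# (hypothetical identifiers and amounts): any item that disagrees between the
# two sets is flagged so that errors or misuse surface promptly.
bpd_records = {"SEC-001": 1_000_000, "SEC-002": 250_000, "SEC-003": 500_000}
frb_records = {"SEC-001": 1_000_000, "SEC-002": 255_000, "SEC-004": 75_000}

def reconcile(first, second):
    """Return (identifier, amount_in_first, amount_in_second) for every mismatch."""
    mismatches = []
    for key in sorted(set(first) | set(second)):
        if first.get(key) != second.get(key):
            mismatches.append((key, first.get(key), second.get(key)))
    return mismatches

for item in reconcile(bpd_records, frb_records):
    print("Discrepancy requiring follow-up:", item)

Because the two record sets are kept by different organizations, a discrepancy surfaced this way is unlikely to be concealed by any single person, which is the point of the segregation of duties noted above.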
In addition, although the organizational placement of the security branch function could create “conflict of interest” situations, we found that discussion of security issues at periodic Executive Board meetings provides an opportunity for management to identify any potential instances of conflicts of interest. Overall, the BPD and FRB general and application controls, combined with other effective features of their control environment, such as the clear separation of duties for issuing and redeeming securities, resulted in our opinion that management of BPD fairly stated that its related internal controls, including computer controls, were effective. As evidenced by our work on the financial audit of the Bureau of the Public Debt’s Fiscal Year 1997 Schedule of Federal Debt, we determined that the financial information presented on the schedule was materially correct. In addition, these controls have reduced BPD’s susceptibility to inadvertent or deliberate misuse, fraudulent use, alteration, or destruction of financial data by users and others gaining access to the systems. However, left uncorrected, the vulnerabilities included in this report could increase the risk of inappropriate disclosure and modification of sensitive information, misuse or damage of computer resources, and disruption of critical operations and thus warrant management’s attention and action. To improve areas of vulnerability in general controls and application controls over BPD’s financial systems cited in our July 31, 1998, “Limited Official Use” version of this report, we recommended in that report that you direct the Commissioner of the Bureau of the Public Debt to take the following actions. Correct each individual vulnerability we identified and communicated to BPD during our testing and summarized in the “Limited Official Use” report. Assign responsibility and accountability for correcting each vulnerability to designated individuals. These individuals should report regularly to the Commissioner on the status of all vulnerabilities, including actions taken to correct them. Work with FRBs to implement corrective actions to improve the computer control vulnerabilities related to BPD systems supported by FRBs that we identified and communicated to FRBs during our testing. BPD agreed with our findings and recommendations. The Commissioner of the Bureau of the Public Debt indicated that he was pleased that the review of BPD’s general controls over financial systems did not identify any reportable conditions. Further, he stated that in most cases, BPD has corrected or is already taking actions to resolve the vulnerabilities identified in this report. We are sending copies of this report to the Commissioner of the Bureau of the Public Debt; the Director of the Office of Management and Budget; the Chairmen and Ranking Minority Members of the Senate Committee on Appropriations and its Subcommittee on Treasury, General Government, and Civil Service, Senate Committee on Finance, Senate Committee on Governmental Affairs, Senate Committee on the Budget, House Committee on Appropriations and its Subcommittee on Treasury, Postal Service, and General Government, House Committee on Ways and Means, House Committee on Government Reform and Oversight and its Subcommittee on Government Management, Information and Technology, House Committee on the Budget; and other interested congressional committees. Copies will be made available to others upon request. 
Should you or members of your staff have any questions concerning this report, please contact me at (202) 512-3406. Major contributors to this report are listed in appendix I. J. Lawrence Malenich, Assistant Director; Barbara S. Oliver, Audit Manager; and Gregory C. Wilshusen, Assistant Director—Technical Advisor.
Pursuant to a legislative requirement, GAO reviewed the general and application controls that support key automated financial systems maintained and operated by the Bureau of the Public Debt (BPD). GAO noted that: (1) overall, GAO found that BPD implemented effective computer controls; however, GAO identified certain vulnerabilities in general controls involving: (a) access to data and programs; (b) physical access; (c) contingency planning; and (d) security management; (2) GAO also identified vulnerabilities in the controls for two key BPD financial applications maintained and operated at the BPD data center in Parkersburg, West Virginia; (3) addressing these vulnerabilities requires: (a) strengthening access controls by further restricting system access rights and improving security monitoring; and (b) managing accuracy controls more effectively by ensuring that established procedures are followed to prevent unauthorized deletion of exception reports; (4) in most cases, BPD has corrected or is correcting the vulnerabilities that GAO identified; (5) GAO provided a general summary of the vulnerabilities that existed on September 30, 1997; (6) those that GAO verified had been fully resolved subsequent to September 30, 1997, GAO has so noted; and (7) GAO will review the status of BPD's other corrective actions as part of its fiscal year 1998 financial audits.
The Chemical and Biological Defense Program was established in 1994 and develops defense capabilities to protect the warfighter from current and emerging chemical and biological threats. Specifically, its mission is “to enable the warfighter to deter, prevent, protect against, mitigate, respond to, and recover from CBRN threats and effects as part of a layered, integrated defense.” The CBDP Enterprise conducts research and develops defenses against chemical threats, such as cyanide and mustard gases, and biological threats, such as anthrax and Ebola, and tests and evaluates capabilities and products to protect military forces from them. The CBDP Enterprise comprises 26 organizations across DOD that determine warfighter requirements, provide science and technology expertise, conduct research and development and test and evaluation on capabilities needed to protect the warfighter, and provide oversight. Figure 1 shows the CBDP Enterprise organizations included in our review and their roles. The ability of the CBDP Enterprise to successfully implement its mission in a resource-constrained environment, according to the 2012 CBDP Business Plan, relies on the integrated management of responsibilities performed by these organizations. The following CBDP Enterprise organizations have key roles and responsibilities: The Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs, among other things, serves as the advisor to the Secretary of Defense for activities that combat current and emerging chemical and biological threats. The Deputy Assistant Secretary of Defense for Chemical and Biological Defense is responsible for Chemical and Biological Defense Program oversight activities, acquisition policy guidance, and interagency coordination. The Secretary of the Army is the Executive Agent for the Chemical and Biological Defense Program. Within the Army, the Assistant Secretary of the Army for Acquisition, Logistics and Technology and the Office of the U.S. Army Deputy Chief of Staff, G-8 serve as cochairs of the Army Executive Agent Secretariat and are responsible for, among other duties, coordinating and integrating research, development, test, and evaluation, and acquisition requirements of the military departments for DOD chemical and biological warfare defense programs and reviewing all funding requirements for the CBDP Enterprise. The Deputy Under Secretary of the Army for Test and Evaluation provides oversight, policy, governance and guidance to ensure timely, adequate, and credible test and evaluation for the Army and the CBDP Enterprise. The Director, Army Test and Evaluation Office, serves as the Test and Evaluation Executive for the CBDP Enterprise. The Program Analysis and Integration Office (PAIO) is the analytical arm of the CBDP Enterprise and is responsible for monitoring the expenditures of research, development, test, and evaluation activities. It provides analysis, review, and integration functions for the CBDP Enterprise. The Joint Program Executive Office for Chemical and Biological Defense oversees the total life-cycle acquisition management for assigned chemical and biological programs, among others. 
The Office of the Joint Chiefs of Staff, Joint Requirements Office for Chemical, Biological, Radiological, and Nuclear Defense (hereinafter referred to as the Joint Requirements Office) serves as a focal point to the Chairman of the Joint Chiefs of Staff for all chemical and biological issues, among others, associated with combating weapons of mass destruction, and supports the development of recommendations to the Secretary of Defense regarding combatant commanders’ chemical and biological requirements for operational capabilities, among others. The Joint Science and Technology Office for Chemical and Biological Defense (hereinafter referred to as Joint Science and Technology Office) oversees science and technology efforts in coordination with the military services’ research and development laboratories, to include efforts with other agencies, laboratories, and organizations. The CBDP Enterprise’s four primary research and development and test and evaluation facilities, as seen in figure 2, include the U.S. Army Edgewood Chemical Biological Center (hereinafter referred to as Edgewood), Aberdeen Proving Ground, Maryland; the U.S. Army Medical Research Institute of Infectious Diseases on the National Interagency Biodefense Campus, Ft. Detrick, Maryland; the U.S. Army Medical Research Institute of Chemical Defense, Aberdeen Proving Ground, Maryland; and the West Desert Test Center (hereinafter referred to as West Desert), Dugway Proving Ground, Utah. These facilities conduct research and development and test and evaluation of chemical and biological defense capabilities and are owned and operated by the U.S. Army and support the mission of the Chemical and Biological Defense Program. Additional information about DOD’s chemical and biological defense primary research and development and test and evaluation facilities can be found in appendix III. Figure 2 shows the location of the CBDP Enterprise’s primary research and development and test and evaluation facilities. The CBDP Enterprise’s plans—which are used as guidance to meet its mission—articulate infrastructure goals and identify the ways (i.e., the functions, roles and responsibilities, and business practices) to achieve them. These plans include the following: The 2012 Chemical Biological Defense Program Strategic Plan is intended to map the direction and articulate the outcomes that the CBDP Enterprise aims to achieve. The plan responds to evolving threats and the fiscal environment by setting a vision to align resources to meet four strategic goals: (1) equip the force to protect and respond to CBRN threats and effects; (2) prevent surprise by anticipating threats and developing new capabilities for the warfighter to counter emerging threats; (3) maintain the infrastructure—both physical and intellectual—the department requires to meet and adapt to current and future needs for personnel, equipment, and facilities within funding constraints; and (4) lead CBDP Enterprise components in integrating and aligning activities. The 2012 Chemical Biological Defense Program Business Plan describes the ways in which the CBDP Enterprise intends to meet the four strategic goals identified in the 2012 CBDP Strategic Plan. The 2012 CBDP Business Plan assigns responsibility and provides the structures and processes to implement the 2012 CBDP Strategic Plan. 
PAIO’s 2014 CBDP Infrastructure Implementation Plan, endorsed by the Office of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs, articulates the process by which the CBDP Enterprise intends to review its physical infrastructure to support the identification of required infrastructure and determine whether any potentially duplicative or redundant infrastructure capabilities exist within the CBDP Enterprise. PAIO’s 2008 Non-Medical Physical Infrastructure Capabilities Assessment evaluated the capabilities of the CBDP Enterprise’s existing infrastructure to support critical mission areas. The assessment was requested by the Special Assistant, Chemical and Biological Defense and Chemical Demilitarization Programs. The study made four recommendations to the CBDP Enterprise: 1. Identify its required research and development and test and evaluation infrastructure capabilities to support its mission. 2. Create a joint strategic vision for military construction investment across all elements of the CBDP Enterprise. 3. Establish a military construction program aligned with the joint strategy and processes integrating goals, objectives, and validation across the CBDP Enterprise. 4. Address the use of project validation, cost/benefit analysis, and investment business case issues for infrastructure decisions. ODASD (CBD) officials told us that, since the recommendations were made, they have expanded the recommendations to include all infrastructure investments, not just infrastructure funded by military construction appropriations. The CBDP Enterprise annual planning process is designed to support decision making by program leadership regarding investments in research and development. This process is intended to incorporate chemical and biological threat information and chemical and biological defense warfighter requirements into the formulation of CBDP Enterprise strategic programming guidance for research and development investment decisions. The CBDP Enterprise’s 2014 risk assessments are based on DOD’s 2001 Quadrennial Defense Review Report risk framework. The four dimensions of the risk framework are as follows: Force management—the ability to recruit, retain, train, and equip sufficient numbers of high-quality personnel and sustain the readiness of the force while accomplishing its many operational tasks. Operational—the ability to achieve military objectives in a near-term conflict or other contingency. Future challenges—the ability to invest in new capabilities and develop new operational concepts needed to dissuade or defeat mid- to long-term military challenges. Institutional—the ability to develop management practices and controls that use resources efficiently and promote the effective operation of the defense establishment. Together, the results from the four dimensions of the risk framework are expected to allow DOD to consider tradeoffs among fundamental resource constraints. The CBDP Enterprise has taken some actions, such as the development of infrastructure goals, to address its infrastructure needs; however, after nearly 7 years, the CBDP Enterprise has not fully achieved its goal to address the 2008 PAIO recommendation that it identify required infrastructure capabilities to ensure alignment of its infrastructure to its mission to address threats. At that time, the CBDP Enterprise made no plan and did not make infrastructure a priority to address the recommendation. 
CBDP Enterprise officials acknowledge the importance, validity, and necessity of addressing the 2008 recommendation and recognized these points in their 2012 CBDP Business Plan. However, the CBDP Enterprise has made limited progress in achieving this infrastructure goal because CBDP Enterprise officials told us that they were focused on higher priorities and had no CBDP Enterprise-wide impetus to address the infrastructure recommendations. OASD (NCB) previously identified the need for an entity that has the responsibility and level of authority needed to ensure achievement of this infrastructure goal, but DOD has not designated such an entity with CBDP Enterprise-wide responsibility and authority to lead this effort, nor has it established timelines and milestones for doing so. The CBDP Enterprise has taken actions, but has not fully achieved its goal to address the 2008 PAIO recommendation to identify required infrastructure (intellectual and physical) capabilities to address current and emerging chemical and biological threats. According to ODASD (CBD) officials, the CBDP Enterprise recognizes the importance, validity, and necessity of addressing this and the other PAIO recommendations from the 2008 study, which would transform the way the CBDP Enterprise manages its infrastructure. However, at that time, CBDP Enterprise officials did not make a plan or set infrastructure as a priority to address the recommendation. In addition, CBDP Enterprise officials told us that they have not addressed this recommendation because they were focused on higher priorities. Since the 2008 PAIO recommendation, OASD (NCB) issued the 2012 CBDP Strategic Plan, which, for the first time, established maintaining infrastructure as a strategic goal. Additionally, OASD (NCB) issued the 2012 CBDP Business Plan, which proposed an assessment of CBDP’s required knowledge and skill capabilities of its personnel and physical infrastructure capabilities across the CBDP Enterprise to meet this strategic goal. In addition to these actions, the Deputy Assistant Secretary of Defense for Chemical and Biological Defense requested that the National Research Council of the National Academy of Sciences conduct a study to identify the science and technology capabilities needed for the CBDP Enterprise to meet its mission. However, it was not until 2014 and 2015 that the Joint Science and Technology Office and PAIO, respectively, initiated studies to address the 2012 CBDP Business Plan proposal and 2008 recommendation to identify its required infrastructure capabilities. Figure 3 depicts the CBDP Enterprise’s limited progress, as shown by the gap from 2008 to 2014, to complete its goal to identify its required infrastructure capabilities. In December 2014, the Joint Science and Technology Office initiated a study of the CBDP Enterprise’s existing intellectual infrastructure to (1) determine the knowledge and skill capabilities of its personnel and (2) identify the required capabilities of its personnel to implement its mission. According to Joint Science and Technology Office officials, they are using the 18 warfighter core capabilities—the framework for meeting the program’s mission—to assist in identifying the CBDP Enterprise’s required knowledge and skill capabilities for personnel. (See app. IV for additional information about the 18 core capabilities.) 
These officials told us that they are working with CBDP’s Senior Scientist Board and the leadership of the three primary CBDP research and development facilities to identify the required knowledge and skill capabilities for the CBDP Enterprise’s personnel. According to the official overseeing this study, the proposed methodology will help them identify expertise and leadership that currently exists within the primary research and development facilities. The methodology also will help them identify the required knowledge and skill capabilities of its personnel to (1) ensure that research and development products are making progress towards project goals and (2) address the 18 warfighter core capabilities. In addition, Joint Science and Technology Office officials stated that their study to identify required knowledge and skill capabilities of the CBDP Enterprise’s personnel will also help them determine any existing capabilities gaps. As of January 2015, the Joint Science and Technology Office’s infrastructure study had produced a presentation on definitions for infrastructure-related issues and a proposed methodology to determine how required knowledge and skill capabilities of the CBDP Enterprise’s personnel will be maintained. However, the office does not have an end date for this study or a timeline and milestones to assess its progress. In addition, PAIO developed a physical infrastructure implementation plan in July 2014 to study the CBDP Enterprise’s existing physical infrastructure capabilities. The study includes a timeline and milestones for various actions, including that, from July 2015 through February 2016, PAIO establish an inventory of all the physical infrastructure capabilities within the CBDP Enterprise and conduct an analysis of these capabilities to determine their specific functions and the CBDP Enterprise’s level of reliance on these capabilities. According to PAIO officials, this analysis will help the CBDP Enterprise achieve its goal by determining its required physical infrastructure. ODASD (CBD) officials acknowledged the need to identify required knowledge and skill capabilities of the CBDP Enterprise’s personnel and physical infrastructure capabilities to ensure alignment of the Army-owned infrastructure to address current and emerging chemical and biological threats. PAIO officials stated that the information gained from their study and from the Joint Science and Technology Office study will need to be combined to gain a comprehensive understanding of the status of CBDP’s infrastructure. Specifically, they stated that the studies will provide additional information to CBDP Enterprise leadership on the existing infrastructure capabilities to help determine required infrastructure and identify any potential gaps to address threats. The CBDP Enterprise has made limited progress in fully achieving its goal to identify required infrastructure capabilities, and in transforming the way infrastructure is managed, because OASD (NCB) has not identified and designated an entity that has the responsibility and authority needed to lead the effort to ensure the achievement of this and other CBDP Enterprise goals (e.g., the other three 2008 PAIO recommendations, as identified in the Background section of this report, and the goal established in the 2012 CBDP Business Plan—an assessment of the CBDP Enterprise’s required infrastructure capabilities), and because no timelines or milestones have been established for their completion. 
Key practices for federal agencies to address challenges in achieving successful transformation of their organizations, particularly in the implementation phase, call for (1) establishing a dedicated authority responsible for the transformation’s day-to-day management to ensure it receives the full-time attention needed to be sustained and successful and (2) establishing timelines and milestones for achieving goals. The CBDP Enterprise does not have a dedicated entity with the responsibility and authority needed to lead the effort to ensure the achievement of its infrastructure goals. The Strategic Portfolio Review assesses, among other things, how efficiently the CBDP Enterprise is maintaining its infrastructure. ODASD (CBD) officials confirmed that, initially, the Army’s PAIO was designated as the Infrastructure Manager for the CBDP Enterprise. However, according to PAIO and ODASD (CBD) officials, PAIO does not have the authority to manage the CBDP Enterprise’s infrastructure. A decision subsequently was made by ODASD (CBD) that PAIO would no longer serve in this capacity, but would continue in its role to provide infrastructure analysis and integration for the CBDP Enterprise. In July 2014, ODASD (CBD) officials told us the U.S. Army and individual installation leadership were designated as Infrastructure Managers over intellectual and physical infrastructure capabilities for the CBDP Enterprise’s primary research and development and test and evaluation facilities under their purview. However, individual installation leadership does not have the responsibility and authority to maintain CBDP Enterprise-wide visibility and oversight to ensure that CBDP Enterprise-wide infrastructure goals are achieved. A dedicated authority, such as an entity responsible for the day-to-day management of the transformation, could lead the effort to help ensure the CBDP Enterprise receives the full-time attention needed to achieve and sustain its goals and to help ensure that progress is made as intended. By identifying and designating an entity with the responsibility and authority to lead the effort to set priorities, make timely decisions, and move quickly to implement leadership decisions for ensuring the timely achievement of the CBDP Enterprise’s goals, such as identifying required infrastructure capabilities, the CBDP Enterprise would be better positioned to support resource decisions regarding the infrastructure capabilities needed to address threats. Additionally, no timelines and milestones were established to complete the recommendations identified in the 2008 PAIO study or the goals established in the 2012 CBDP Business Plan or the 2014 Joint Science and Technology Office study to identify required knowledge and skill capabilities in its personnel because no entity has the responsibility and authority needed to lead the effort to implement this and other CBDP Enterprise goals. Moreover, CBDP Enterprise officials told us that they were focused on higher priorities during this time, such as funding for medical countermeasures capabilities. As a result, the recommendation made nearly 7 years ago and subsequent goals to address the recommendation have not been implemented and there is no timeline for their completion. According to key practices for transforming organizations, it is essential to set and track timelines to build momentum and to demonstrate progress from the beginning. 
Establishing timelines and milestones for achieving these goals (e.g., the 2008 PAIO recommendations and the goal established in the 2012 CBDP Business Plan), would better position the CBDP Enterprise to track its progress towards meeting its infrastructure goals, pinpoint performance shortfalls and gaps, and suggest midcourse corrections to ensure progress is being made to address current and emerging threats and meet its mission. Further, identifying and designating an entity and establishing timelines and milestones would better position the CBDP Enterprise to address any existing challenges in transforming the way the CBDP Enterprise manages its infrastructure and completing its goal to identify the infrastructure capabilities needed to meet its mission. The CBDP Enterprise has taken some actions to identify, address, and manage potential fragmentation, overlap, and duplication. Further, during the course of our review, in January 2015, PAIO began a study of CBDP Enterprise infrastructure to identify potential duplication. However, PAIO does not plan to identify, request, or consider information from existing infrastructure studies from other federal agencies. By identifying, requesting, and considering information from existing infrastructure studies from other federal agencies working in this area, PAIO will be better positioned to meet DOD’s goal to avoid duplication by having more information about existing infrastructure across the federal government for use by the CBDP Enterprise to support its work. Based on our analysis of information from each of the four primary research and development and test and evaluation facilities and ODASD (CBD) on infrastructure capabilities, the CBDP Enterprise’s primary research and development and test and evaluation facilities have taken some actions to identify, address, and manage fragmentation, overlap, and duplication. For example, the CBDP Enterprise has a research and development project-selection process in place, managed by the Joint Science and Technology Office, to help reduce the potential for fragmentation and overlap of CBDP Enterprise infrastructure and duplication of efforts within the research and development component. The Joint Science and Technology Office reviews and selects the projects that support the CBDP Enterprise mission at the CBDP Enterprise’s primary research and development facilities. By having one entity (the Joint Science and Technology Office) make decisions regarding the selection of research and development projects to meet its mission, the CBDP Enterprise is able to help reduce the potential for fragmentation and overlap of its infrastructure and duplication of efforts within the research and development component of the CBDP Enterprise. In addition, the U.S. Army Medical Research and Materiel Command is piloting a Competency Management Initiative, among other things, to identify any potential duplication and gaps across the knowledge and skills of command personnel. The initiative, which includes the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) and the U.S. Army Medical Research Institute of Chemical Defense (USAMRICD), examines intellectual capabilities and competencies needed to meet the mission based on chemical and biological threats. U.S. Army Medical Research and Materiel Command officials expect results from this initiative in 2015. 
Furthermore, the potential for duplication is reduced because the missions of the CBDP Enterprise’s four primary research and development and test and evaluation facilities are different. For example, USAMRICD focuses on medical chemical defense, USAMRIID focuses on medical biological defense, Edgewood focuses on nonmedical materiel solutions to chemical and biological threats, and West Desert conducts developmental and operational testing and evaluation. The difference in missions reduces the potential for fragmentation, overlap, and duplication within the CBDP Enterprise. In addition, in responding to our questionnaire, officials at CBDP’s four primary facilities told us they consider potential infrastructure fragmentation, overlap, and duplication when determining whether additional infrastructure capabilities are needed to support their work. For example, officials found the potential for duplication during the planning phase for a new facility, which would house animals for future research for USAMRIID on the National Interagency Biodefense Campus at Fort Detrick, Maryland. A set of studies on medical countermeasure test and evaluation facility requirements, conducted for the U.S. Army Assistant Chief of Staff for Facilities, Planning and Programming Division, determined, among other things, that there was sufficient capacity for holding animals in existing facilities that conduct research with animals. The study resulted in the cancellation of USAMRIID’s plans to construct a new medical countermeasure test and evaluation facility, including a holding facility for animals (vivarium), with an overall estimated cost savings of about $600 million, according to USAMRIID officials. During the course of our review, PAIO began a study in January 2015 of the CBDP Enterprise’s infrastructure, among other things, to inventory CBDP Enterprise infrastructure to support identification of (1) required infrastructure capabilities and (2) any potential duplication and unnecessary redundancy across the CBDP Enterprise’s primary research and development and test and evaluation facilities’ physical infrastructure. This study by PAIO will be the first CBDP Enterprise-wide review of infrastructure since its 2008 review of nonmedical physical infrastructure investments. PAIO developed an infrastructure implementation plan in July 2014 to guide its study, among other things, to determine whether there are any potentially duplicative or unnecessary redundant infrastructure capabilities. PAIO plans to inventory CBDP Enterprise infrastructure from July 2015 to October 2015. In addition, PAIO plans to analyze the infrastructure information for potential duplication from October 2015 to February 2016. Its infrastructure implementation plan states that there can be value in some redundancy of infrastructure across the facilities and that the definition of duplication and unnecessary redundancy, which will be established during the study, will take this into account. For example, West Desert at Dugway Proving Ground and Aberdeen Test Center each has aircraft decontamination pads to support their testing and evaluation mission. However, if an aircraft became contaminated with a chemical or biological agent, the facilities have the infrastructure capability to decontaminate a civilian or military aircraft during a contingency or national emergency. 
According to West Desert officials, having the infrastructure at both facilities allows aircraft coming from the Pacific or Europe to be handled and decontaminated without the additional risk of continental travel and refueling. However, during the course of our review, we found potential duplication or redundant swatch testing infrastructure capabilities that may not add value to CBDP’s test and evaluation infrastructure capabilities. Specifically, West Desert and Edgewood both have the infrastructure to conduct testing of swatch material for chemical agents. In addition, the Quality Evaluation Facility at Pine Bluff Arsenal, Arkansas, a non-CBDP Enterprise DOD facility, also has swatch testing infrastructure capabilities. For example, officials from the Joint Program Executive Office for Chemical and Biological Defense, one of the swatch testing customers for all three facilities, told us that its current workload would not completely fill the capacity of either of the CBDP facilities, which could indicate potential duplication if other DOD or private sector customers did not require services to ensure each facility is at full capacity. According to Edgewood and West Desert officials, having swatch testing infrastructure capabilities in both locations enables efficient transition of technology and continuity of data from early research and development at Edgewood to advanced development and operational testing by West Desert. Officials from PAIO stated that their study will review similar infrastructure examples, but within the CBDP Enterprise only, to determine what infrastructure, if any, is duplicative or redundant and what infrastructure, if any, is necessary redundancy. As part of the study methodology, PAIO plans to obtain input from the Joint Science and Technology Office, the Joint Program Executive Office for Chemical and Biological Defense, and the Deputy Under Secretary of the Army for Test and Evaluation and provide the results of its infrastructure inventory and any potential duplication found to the primary research and development and test and evaluation facilities. Once the results are known later in 2015, facility leadership is then expected to provide a rationale for sustaining any potentially duplicative or redundant infrastructure capabilities. Finally, in October 2015, the study’s methodology provides that PAIO will analyze any additional information from facility leadership to determine which infrastructure capabilities are potentially duplicative or redundant. According to PAIO and ODASD (CBD) officials, the study will provide information to CBDP Enterprise leadership—the Office of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs and the Executive Agent—to support their decisions on any potential infrastructure efficiencies and to support oversight of investment. PAIO plans to identify potential duplication within the CBDP Enterprise; however, PAIO does not plan to identify, request, or consider information from existing studies about infrastructure capabilities of other federal agencies with research and development or test and evaluation infrastructure to study chemical and biological threats. Additional information about other federal agencies’ infrastructure capabilities may enhance PAIO’s review of CBDP Enterprise infrastructure and potential duplication by providing more information on what infrastructure other federal agencies in this field have to support their work. 
For example, the Department of Health and Human Services’ Centers for Disease Control and Prevention, the National Institutes of Health’s National Institute of Allergy and Infectious Diseases, Integrated Research Facility; and the Department of Homeland Security’s National Biodefense Analysis and Countermeasures Center have infrastructure and study chemical or biological threats. Information about existing infrastructure inventory, such as their capability to conduct specialized research of biological agents with a known potential for aerosol transmission or that may cause serious and potentially lethal infections, and whether that infrastructure is available for use to help avoid duplication within the CBDP Enterprise, would help bolster PAIO’s study. In addition, examples of our prior work on fragmentation, overlap, and duplication found that multiple agencies were involved in federal efforts to combat chemical or biological threats. We also found that it may be appropriate for multiple agencies or programs to be involved in the same area of work due to the nature or magnitude of the federal effort; however, multiple programs and capabilities may also create inefficiencies, such as the examples found in our prior reports. For example, in 1999, prior to the anthrax attacks in the United States, we found ineffective coordination among DOD and other federal agencies with chemical and biological programs that could result in potential gaps or overlap in research and development programs. Further, we found in September 2009 that there was no federal entity responsible for oversight of the expansion of high-containment laboratories—those designed for handling dangerous pathogens and emerging infectious diseases— across the federal government. We also found in June 2010 that the mission responsibilities and resources needed to develop a biosurveillance capability—the ability to provide early detection and situational awareness of potentially catastrophic biological events—were dispersed across a number of federal agencies, creating potential inefficiencies and overlap and duplication of effort. Finally, in May 2014, we found that the Department of Health and Human Services coordinates and leads federal efforts to determine CBRN medical countermeasure priorities and the development and acquisition of CBRN medical countermeasures for the civilian sector, primarily through the Public Health Emergency Medical Countermeasures Enterprise—an interagency body that includes other federal agencies with related responsibilities. We made a number of recommendations to address these issues and, as of January 2015, about one-third have been partially or fully implemented. (See app. V for additional information about the findings, recommendations, and agency actions taken and see the Related GAO Products section at the end of this report for other reports on high- containment laboratories and biodefense.) PAIO officials told us that they identified and requested some information from other federal agencies to support the development of PAIO’s infrastructure implementation plan. However, according to PAIO and ODASD (CBD) officials, PAIO does not have the authority and resources to require other federal agencies to provide information about their infrastructure capabilities. 
DOD Directives 5134.08 and 3200.11 outline policy goals, among other things, for avoiding duplication, such as using existing DOD and other federal agency facilities and conducting certain oversight activities aimed at avoiding unnecessary duplication within the CBDP Enterprise. According to CBDP Enterprise officials, these types of deliberate data sharing arrangements can be enhanced by interagency agreements that are directed and supported at more senior levels within each department. Identifying, requesting, and considering information from existing infrastructure studies from other federal agencies about their chemical and biological infrastructure capabilities would not necessarily require new authority. PAIO would be better positioned to support the CBDP Enterprise’s effort to meet DOD’s goal to avoid duplication by determining what infrastructure is used by other federal agencies and whether that infrastructure could be available for use by the CBDP Enterprise to support its work in this area. Until PAIO determines what infrastructure capabilities exist outside of the CBDP Enterprise, there is potential for unnecessary duplication and inefficient and ineffective use of its government resources. The CBDP Enterprise used data on chemical and biological threats from the intelligence community and plans to use threat data and the results from risk assessments first conducted in 2014 by the Joint Requirements Office and ODASD (CBD) to support planning for its future portfolio planning process for research and development. However, the CBDP Enterprise has not updated its guidance and planning process to include specific responsibilities and timeframes for risk assessments. ODASD (CBD) tasked the Joint Requirements Office to conduct an operational risk assessment of warfighter chemical and biological defense requirements to support the CBDP Enterprise’s future years’ portfolio planning process, according to ODASD (CBD) officials. The assessment was based on threat information from the Defense Intelligence Agency’s Chemical, Biological, Radiological, and Nuclear Warfare Capstone Threat Assessment, a survey, and DOD guidance to determine the level of risk DOD is willing to accept in protecting its forces against chemical and biological threats under various operational conditions. CBDP Enterprise officials stated that they plan to use results from the piloted risk assessment during Phase I of their annual portfolio planning cycle, as stated in the 2012 CBDP Business Plan. Phase I includes a review of threats and risk analyses to support the development of strategic investment guidance and focus areas by the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs. For example, the guidance may include specific chemical or biological threats or defense capabilities that the program leadership wants the CBDP Enterprise to address, which then guides the types of scientific and technology proposals the research and development facilities will submit to support CBDP Enterprise goals. This investment program guidance is then to be used by the CBDP Enterprise organizations to focus the development of capabilities to counter threats. When they conducted the pilot risk assessments, the Joint Requirements Office and ODASD (CBD) used a modified version of DOD’s 2001 Quadrennial Defense Review Report risk framework—force management, future challenges, operational risk, and institutional risks—and guidance from the 2012 CBDP Strategic Plan. 
For its assessment of current and future operational risk, the Joint Requirements Office defined operational risk as the ability of the current force to execute strategy successfully within acceptable human, materiel, financial, and strategic costs. To conduct the operational risk assessment, the Joint Requirements Office developed an operationally driven methodology that consisted of six interrelated elements. The Joint Requirements Office used information from five of the elements—a joint assessment, survey, analysis, intelligence, and subject-matter expertise—to identify the topics of the tabletop exercise. Information from the sixth element—other exercises and operational evaluations, specific threats, potential gaps, potential risks, or the construct of potential threats on the battlefield—was used to develop scenarios for the tabletop exercise. According to Joint Requirements Office officials, the purpose of the tabletop exercise was to gain an understanding of the chemical and biological operational defense capabilities against the most demanding and dangerous threats. The tabletop exercise was conducted through a series of action-reaction-counteraction sequences for each scenario. Officials facilitated discussions on military defense and key observations on defensive capabilities among CBDP Enterprise members, operational planners, and other subject-matter experts during the tabletop exercise using the framework of the CBDP Enterprise’s 18 warfighter core capabilities categorized into four areas—Sense, Shape, Shield, and Sustain. (See app. IV for additional information about the core capabilities.) The Joint Requirements Office provided ODASD (CBD) with information about lessons learned from the tabletop exercise and other analyses, and identified other operational scenarios to support future operational risk assessments. According to ODASD (CBD) officials, the operational risk assessment provided recommendations and new information on the use of defense capabilities in an operational setting to CBDP Enterprise leadership to support future planning about the strategic direction of the CBDP Enterprise in addressing chemical and biological threats. Also, in 2014, ODASD (CBD) conducted its own assessment of force management and institutional risk to the CBDP Enterprise. A separate risk assessment of future challenges—the fourth risk area in the 2001 Quadrennial Defense Review Report’s framework—was not conducted. According to ODASD (CBD) officials, future challenges were incorporated into the operational and institutional risk assessments by including planned future capabilities against future threats as well as the development of those future capabilities, respectively. To assess force management risk, the office assessed “equipping the force.” Specifically, officials assessed 23 systems used by the military forces that were employed in the Joint Requirements Office’s operational risk assessment. The focus of the force management risk assessment was to identify current or planned capabilities that did not meet the force planning construct levels. According to the results of the assessment, there were no unacceptably high risks identified in equipping the force that needed to be addressed in fiscal year 2016–2020 program guidance. The assessment indicated that the programs associated with the 23 systems appear not to pose an unacceptable risk. To assess the second area of risk—institutional risk—officials collected data on the CBDP Enterprise’s infrastructure and processes. 
The intent of the pilot infrastructure risk assessment was to identify unacceptably high-risk areas or concerns that would need additional guidance and be addressed during the fiscal year 2016–2020 planning cycle. ODASD (CBD) officials did not find any critical shortfalls in research and development or test and evaluation infrastructure or identify unacceptable risk. However, the assessment found some challenges in the process of moving capabilities from development to production. In addition, the results confirmed the difficulty of identifying shortfall risks in the CBDP Enterprise infrastructure because the primary research and development facilities are funded by proposal rather than by facility, thus requiring future risk assessments to look beyond the infrastructure that exists to determine whether unacceptable risk exists. ODASD (CBD) officials stated that they expect the results of the risk assessments to support the CBDP Enterprise’s future investment for research and development of chemical and biological defense capabilities. The CBDP Enterprise’s guidance and planning process does not include who will conduct and participate in risk assessments and when those assessments will be conducted. Federal standards for internal control state that, over time, management should continually assess and evaluate its internal control to assure activities being used are effective and updated when necessary. In addition, decision makers should identify risks associated with achieving program objectives, analyze them to determine their potential effect, and decide how to manage the risk and identify what actions should be taken. The standards also call for written procedures to better ensure that leadership directives are implemented. However, which organizations within the CBDP Enterprise are responsible for conducting and participating in risk assessments and when the assessments will be conducted to support the portfolio planning process for research and development investment is not outlined in the CBDP Enterprise’s guidance on roles and responsibilities or included in its planning process. Specifically, according to DOD Directive 5160.05E, the Joint Requirements Office is “responsible for collaborating with appropriate Joint Staff elements” on, among other things, chemical and biological risk assessment. However, the guidance does not explicitly identify which organizations within the CBDP Enterprise are responsible for conducting and participating in risk assessments. The 2012 CBDP Business Plan identifies the Joint Requirements Office as the primary organization responsible for planning chemical and biological risk assessments for the CBDP Enterprise. Further, the plan includes steps in its planning process to review threats and risk analyses, but does not specify when risk assessments will be conducted. Without written procedures on who will conduct or participate in risk assessments and the use of DOD’s risk framework, there is no guarantee that risk assessments will be conducted or when they will be conducted. ODASD (CBD) and Joint Requirements Office officials stated that they plan to conduct additional risk assessments in the future, as reported to Congress, because of the increasing chemical and biological threats and the challenges of the austere fiscal environment. 
However, the use of risk assessments by the CBDP Enterprise has not been fully institutionalized because the CBDP Enterprise has not updated its guidance on roles and responsibilities or its planning process, in part because this is the first year that risk assessments were conducted. According to ODASD (CBD) officials, updating the roles and responsibilities guidance and related planning process would be beneficial, but they have not done so because the CBDP Enterprise is evaluating the results and lessons learned from the pilot. As of March 2015, ODASD (CBD) and Joint Requirements Office officials had not formally committed to updating such guidance or established a time frame for doing so to fully institutionalize the use of risk assessments. Without updated guidance, the CBDP Enterprise will continue to rely on the Deputy Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs to request risk assessments, rather than having the assessments occur at established times during the investment planning process. Written guidance, as called for by federal standards for internal control, would better ensure that leadership directives are implemented as intended. Written guidance that identifies which CBDP Enterprise entities are responsible for conducting and participating in risk assessments and when such assessments are to be conducted would help ensure that risk assessments are conducted as intended. In this way, new information from the risk assessment’s tabletop exercise about how defense capabilities, such as one of the 18 warfighter core capabilities, are used in an operational setting would better position the CBDP Enterprise to prioritize future research and development investments. Going forward, addressing internal control standards by updating its guidance and the planning process to fully institutionalize the use of risk assessments would support planning, help ensure that the CBDP Enterprise leadership directives are implemented, and end dependence upon any particular agency official to request risk assessments to support future investment planning. The CBDP Enterprise has taken a number of actions in recent years to address chemical and biological defense research and development and test and evaluation infrastructure, but initially did not develop a plan to address the 2008 PAIO recommendation or make infrastructure a priority. While the CBDP Enterprise should continue to address its priorities, it remains important that it also ensures that its infrastructure is aligned to meet its mission given ever-changing threats. Additional actions would help the CBDP Enterprise to more effectively and efficiently identify, align, and manage DOD’s chemical and biological defense infrastructure. By identifying and designating an entity with the responsibility and authority to lead the effort for ensuring the timely achievement of the CBDP Enterprise’s infrastructure goals to identify required infrastructure capabilities and by establishing timelines and milestones to implement the 2008 PAIO recommendations and the goal established in the 2012 CBDP Business Plan, the CBDP Enterprise would be better positioned to align its infrastructure to meet its mission to address threats. Thus, the CBDP Enterprise would be able to determine whether its infrastructure is properly aligned to meet its mission to address current and emerging chemical and biological threats. 
Implementing the 2008 PAIO recommendation that the CBDP Enterprise identify its required infrastructure capabilities is an important first step in identifying potential infrastructure duplication that may exist across the CBDP Enterprise. By identifying, requesting, and considering information from existing infrastructure studies of other federal agencies about their chemical and biological infrastructure capabilities, PAIO may be better positioned to enhance its study by providing additional information, for example, about infrastructure capability and the availability of facilities, to help the CBDP Enterprise avoid potential infrastructure duplication and gain potential efficiencies by using those agencies’ existing infrastructure. Finally, the CBDP Enterprise can capitalize on its progress made in 2014, when the Joint Requirements Office and ODASD (CBD) conducted risk assessments, by updating the roles and responsibilities guidance in DOD Directive 5160.05E and the CBDP Enterprise’s planning process to identify which organizations are responsible for conducting and participating in risk assessments and when they would occur. By updating guidance and the planning process, the CBDP Enterprise can fully institutionalize the use of risk assessments and not depend on an individual official to request risk assessments. Fully institutionalizing the use of risk assessments would support CBDP Enterprise planning and may provide new information about chemical and biological defense capabilities to further prioritize the CBDP Enterprise’s future research and development investments. We are making five recommendations to improve the identification, alignment, and management of DOD’s chemical and biological defense infrastructure. To help ensure that the CBDP Enterprise’s infrastructure is properly aligned to address current and emerging chemical and biological threats, we recommend that the Secretary of Defense direct the appropriate DOD officials to take the following two actions: identify and designate an entity within the CBDP Enterprise with the responsibility and authority to lead the effort to ensure achievement of the infrastructure goals (e.g., the four 2008 PAIO recommendations, including the recommendation that the CBDP Enterprise identify its required infrastructure capabilities, and the goal established in the 2012 CBDP Business Plan), and establish timelines and milestones for achieving identified chemical and biological infrastructure goals, including implementation of the 2008 PAIO recommendation that the CBDP Enterprise identify its required infrastructure capabilities. To enhance PAIO’s ongoing analysis of potential infrastructure duplication in the CBDP Enterprise and gain potential efficiencies, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to identify, request, and consider any information from existing infrastructure studies from other federal agencies with chemical and biological research and development and test and evaluation infrastructure. 
To fully institutionalize the use of risk assessments to support future investment decisions, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to take the following two actions: (1) update the roles and responsibilities guidance in DOD Directive 5160.05E to identify which organizations are responsible for conducting and participating in CBDP Enterprise risk assessments, and (2) update the CBDP Enterprise's portfolio planning process to include when risk assessments will be conducted. In commenting on a draft of this report, DOD concurred with all five of our recommendations and discussed actions it is taking and plans to take to implement them. DOD concurred with our first recommendation to identify and designate an entity within the CBDP Enterprise with the responsibility and authority to lead the effort to ensure achievement of the infrastructure goals (e.g., the four 2008 PAIO recommendations, including the recommendation that the CBDP Enterprise identify its required infrastructure capabilities, and the goal established in the 2012 CBDP Business Plan). The department agreed that an entity needs to lead the effort to ensure achievement of the infrastructure goals. Further, OASD (NCB) officials believe that these responsibilities and authorities are currently in place under existing laws and regulations. The 2012 Chemical and Biological Defense Program (CBDP) Strategic Plan identified one of the four strategic goals of CBDP as "to maintain infrastructure to meet and adapt current and future needs for personnel, equipment, and facilities within funding constraints." To achieve this goal, OASD (NCB) and the U.S. Army, as the Executive Agent for Chemical and Biological Defense, share responsibility to ensure achievement of CBDP's strategic infrastructure goals in close collaboration and coordination with the infrastructure managers (i.e., the individual installation commanders and directors of the facilities). According to OASD (NCB) officials, the department is in the process of revising DOD Directive 5160.05E and will ensure that the directive appropriately captures the roles and responsibilities related to CBDP infrastructure capabilities. We believe these actions, if fully implemented, would address our recommendation. DOD also concurred with our second recommendation to establish timelines and milestones for achieving identified chemical and biological infrastructure goals, including implementation of the 2008 PAIO recommendation that the CBDP Enterprise identify its required infrastructure capabilities. DOD officials agreed that the most effective means of ensuring CBDP infrastructure goals are achieved is to set realistic timelines and milestones. According to OASD (NCB) officials, the CBDP Enterprise is undertaking a thoughtful effort to identify the infrastructure capabilities necessary to successfully complete its mission. The CBDP Enterprise solicited support from the National Research Council of the National Academies of Sciences to identify what science and technology core capabilities need to be in place within DOD laboratories to support CBRN research, development, test, and evaluation. The CBDP Enterprise is also in the midst of internal reviews of both its current infrastructure capabilities and those needed to fulfill mission requirements.
The combined results of these studies will enable the CBDP Enterprise to align its core capabilities with the necessary supporting infrastructure and to develop implementation and sustainment plans with timelines and milestones for required CBDP infrastructure capabilities, and the studies will consider GAO's recommendation on this issue. We believe that if these studies are completed and implementation and sustainment plans are developed with established timelines and milestones, then these actions would address our recommendation. DOD concurred with our third recommendation to identify, request, and consider any information from existing infrastructure studies from other federal agencies with chemical and biological research and development and test and evaluation infrastructure. OASD (NCB) officials said the department agrees that information from existing federal chemical and biological infrastructure studies should be considered as inputs to the CBDP Enterprise infrastructure analysis efforts. They added that DOD maintains strong partnerships with the Departments of Homeland Security and Health and Human Services, which will facilitate DOD's accomplishment of this recommendation. We agree. DOD concurred with our fourth recommendation to update the roles and responsibilities guidance in DOD Directive 5160.05E to identify which organizations are responsible for conducting and participating in CBDP Enterprise risk assessments. According to OASD (NCB) officials, the department is in the process of revising DOD Directive 5160.05E and will include the risk assessment process in the roles and responsibilities section. If fully implemented, this action would address our recommendation. Finally, DOD concurred with our fifth recommendation to update the CBDP Enterprise's portfolio planning process to include when risk assessments will be conducted. OASD (NCB) officials noted that the risk assessment process was initially piloted in 2014 to determine its utility for informing CBDP Enterprise portfolio planning and guidance. They said that, moving forward, the CBDP Enterprise plans to conduct risk assessments annually to support portfolio planning and guidance. We believe this action, if fully implemented, would address our recommendation. The full text of DOD's comments is reprinted in appendix VI. DOD also provided us with technical comments, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Acquisition, Technology and Logistics; the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs; the Deputy Assistant Secretary of Defense for Chemical and Biological Defense; the Chairman of the Joint Chiefs of Staff; the Secretary of the Army; and the Director, Office of Management and Budget. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-9971 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.
The Chemical and Biological Defense Program (CBDP) Enterprise comprises 26 organizations from across the Department of Defense (DOD) that determine warfighter requirements, provide science and technology expertise, conduct research and development and test and evaluation on capabilities needed to protect the warfighter, conduct program integration, and provide oversight. These key organizations include the following:

Secretary of the Army
Deputy Under Secretary of the Army
Assistant Secretary of the Army for Acquisition, Logistics and Technology
Joint Program Executive Office for Chemical and Biological Defense
Deputy Under Secretary of the Army for Test and Evaluation
U.S. Army Chief of Staff
Vice Chief of Staff of the Army
U.S. Army Test and Evaluation Command
West Desert Test Center
Office of the U.S. Army Deputy Chief of Staff, G-8
Program Analysis and Integration Office
U.S. Army Materiel Command
U.S. Army Research, Development, and Engineering Command
Edgewood Chemical Biological Center
U.S. Army Medical Command
U.S. Army Medical Research and Materiel Command
U.S. Army Medical Research Institute of Chemical Defense
U.S. Army Medical Research Institute of Infectious Diseases
Chairman, Joint Chiefs of Staff
Director, Force Structure, Resources, and Assessment Directorate
Joint Requirements Office for Chemical, Biological, Radiological, and Nuclear Defense
Office of the Under Secretary of Defense for Acquisition, Technology and Logistics
Office of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs
Office of the Deputy Assistant Secretary of Defense for Chemical and Biological Defense
Defense Threat Reduction Agency
Joint Science and Technology Office for Chemical and Biological Defense

In addition, according to officials from the Office of the Deputy Assistant Secretary of Defense for Chemical and Biological Defense, the Department of the Navy, the Department of the Air Force, the National Guard Bureau, and combatant commands also have key roles in the Chemical and Biological Defense Program. To determine the extent to which the Chemical and Biological Defense Program (CBDP) Enterprise has achieved its goal to identify required infrastructure capabilities to address current and emerging chemical and biological threats, we reviewed the Program Analysis and Integration Office's (PAIO) 2008 study, Chemical and Biological Defense Program's Non-Medical Physical Infrastructure Capabilities Assessment, which assessed the physical infrastructure capabilities of the CBDP Enterprise to support the CBDP mission. The study was requested by the Special Assistant, Chemical and Biological Defense and Chemical Demilitarization Programs, and it resulted in four recommendations for actions the CBDP Enterprise should take to address its infrastructure. Specifically, we analyzed PAIO's 2008 recommendation that the CBDP Enterprise identify its required infrastructure capabilities, part of its core capabilities, and compared it with the actions taken by the CBDP Enterprise from then through January 2015. We reviewed the recommendations with officials from the Office of the Deputy Assistant Secretary of Defense for Chemical and Biological Defense (ODASD (CBD)) and determined that the office recognized the 2008 recommendations to be valid and confirmed that the CBDP Enterprise recognizes the importance and necessity of addressing them. The CBDP Enterprise is using the recommendations as criteria in its efforts to address its research and development and test and evaluation intellectual and physical infrastructure.
We conducted site visits to the CBDP Enterprise's four primary research and development and test and evaluation facilities: Edgewood Chemical Biological Center (Edgewood) at Aberdeen Proving Ground, Maryland; U.S. Army Medical Research Institute of Chemical Defense (USAMRICD) at Aberdeen Proving Ground; U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) on the National Interagency Biodefense Campus at Fort Detrick, Maryland; and West Desert Test Center (West Desert) at Dugway Proving Ground, Utah. We included the four primary facilities in our review because they conduct the majority of the research and development and test and evaluation activities for the program. By including all of the primary facilities, we gained information across the CBDP Enterprise. However, this information is not generalizable to all facilities that may be used by the program to implement its mission. We developed and administered a questionnaire to these facilities, based on the 2012 Chemical and Biological Defense Program (CBDP) Strategic Plan and our objectives, to collect information about the knowledge and skill capabilities of their personnel and the physical infrastructure capabilities of each of the facilities, including any changes and challenges to the CBDP Enterprise's infrastructure, and any actions they have taken to identify required infrastructure capabilities. (See app. III for additional information on these facilities.) We pretested our questionnaire with officials from ODASD (CBD) and the following CBDP Enterprise organizations: Edgewood, PAIO, Joint Science and Technology Office, and the Office of the Deputy Under Secretary of the Army for Test and Evaluation. The pretest was intended to solicit feedback on whether our questionnaire (1) would provide answers to the engagement's objectives, (2) was written in a way that would be familiar to leadership officials of the primary research and development and test and evaluation facilities receiving it, and (3) should include additional questions to gain information about the CBDP Enterprise's infrastructure. We incorporated the feedback, as appropriate, into our final questionnaire sent to the primary research and development and test and evaluation facilities. We interviewed leadership officials of these facilities about their written responses to our questionnaire. During our site visits to the four primary research and development and test and evaluation facilities, we toured the facilities and new buildings under construction to gain an understanding of how the infrastructure supports their missions. We also obtained information from officials from other CBDP Enterprise organizations that have responsibilities to the program, such as ODASD (CBD), the Joint Science and Technology Office, and PAIO, on their actions to identify required infrastructure capabilities and the CBDP Enterprise's progress. We reviewed their plans and presentation to identify required infrastructure capabilities and interviewed them to discuss the plans.
Finally, we compared key practices on the implementation of organizational transformation, such as the importance of establishing a dedicated authority responsible for day-to-day management of an organization's change initiatives with the necessary authority and resources to set priorities, make timely decisions, and move quickly to implement top leadership's decisions regarding organizational transformation, and a timeline and milestones to successfully implement organizational change, with actions the CBDP Enterprise has taken to implement its goal to identify required infrastructure capabilities needed to address current and emerging chemical and biological threats. We used these criteria from our work to analyze whether the CBDP Enterprise followed key implementation steps to successfully transform the way it addresses its infrastructure goals. To determine the extent to which the Department of Defense's (DOD) CBDP Enterprise has identified, addressed, and managed potential fragmentation, overlap, and duplication in its chemical and biological defense infrastructure, we reviewed CBDP guidance and policies on the program and related testing facility guidance; a study in 2011 on infrastructure needs to support medical countermeasures; and a 2014 PAIO infrastructure implementation plan to support the CBDP Enterprise's efforts to avoid duplication. We reviewed the information to determine how the CBDP Enterprise identifies, addresses, and manages potential fragmentation, overlap, and duplication. We reviewed DOD Directive 5134.08 on the responsibilities of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs and DOD Directive 3200.11 on the responsibilities of the Major Range and Test Facility Bases. The directives outline policy goals, such as using existing DOD and other federal agencies' facilities and certain oversight activities aimed at avoiding unnecessary duplication. We did not conduct an independent assessment of potential fragmentation, overlap, and duplication within the CBDP Enterprise. We developed and administered a questionnaire to CBDP's four primary research and development and test and evaluation facilities discussed above—Edgewood, USAMRICD, USAMRIID, and West Desert—based on our annual report to Congress on fragmentation, overlap, and duplication to identify any additional policies on duplication and understand their processes or actions to identify, address, and manage fragmentation, overlap, and duplication. Based on the facilities' responses to the questionnaire, we compared their processes and actions to DOD guidance to determine the extent to which the CBDP Enterprise reported that it avoided duplication and identified, addressed, and managed potential infrastructure duplication. In addition, we analyzed information about the facilities' missions and infrastructure. We interviewed research and development facility officials about their infrastructure studies and the steps that they had taken to identify, address, or manage fragmentation, overlap, and duplication. We analyzed the studies, conducted for the U.S. Army Assistant Chief of Staff for Facilities, Planning and Programming Division, that identified potential infrastructure duplication and that were used to make infrastructure decisions about USAMRIID's new facility. In addition, we reviewed the Competency Management Initiative program developed by the U.S.
Army Medical Research and Materiel Command to identify knowledge and skill capabilities and potential duplication, among other factors, within the command, to include USAMRIID and USAMRICD. See GAO, 2014 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits, GAO-14-343SP (Washington, D.C.: Apr. 8, 2014). We reviewed the plan and studies of the Joint Science and Technology Office and the Army's PAIO to identify required knowledge and skill capabilities and physical infrastructure capabilities, to include identifying potential duplication. We analyzed information about the missions and infrastructure of each CBDP primary research and development and test and evaluation facility to understand their role within the CBDP Enterprise. Based on the information from our questionnaire, we collected information from West Desert and Edgewood on their swatch testing infrastructure capabilities, infrastructure utilization, competitors, and customers. In addition, we interviewed research and development facility officials about the steps they have taken to identify, address, or manage fragmentation, overlap, and duplication. We did not collect information about the research and development and test and evaluation projects conducted at the facilities; therefore, we were unable to determine whether similar infrastructure capabilities at the facilities were overlapping or duplicative or used for different purposes. To determine the extent to which the CBDP Enterprise has used threat data and plans to use threat data and the results of risk assessments to support future investment planning in research and development for chemical and biological threats, we received a threat briefing from the Defense Intelligence Agency and the U.S. Army's National Ground Intelligence Center, similar to the annual threat data received by the CBDP Enterprise, to understand the type of threat data on chemical and biological threats. We analyzed DOD Directive 5160.05E to determine which offices are responsible for conducting and participating in the CBDP Enterprise's risk assessments. We reviewed the standards for internal control in the federal government for use of risk assessment and written procedures and compared them to any actions taken by the Joint Requirements Office and ODASD (CBD) to ensure the guidance and process are being followed. We interviewed officials from the Joint Requirements Office and ODASD (CBD) about who is responsible for conducting risk assessments and about how they used the risk assessment framework, which was introduced in the 2001 Quadrennial Defense Review Report, to conduct their risk assessment. We also reviewed the program's annual portfolio planning process described in its 2012 CBDP Business Plan to understand the role of risk assessment in the CBDP Enterprise's planning process. We compared internal control standards on written procedures to those used by the CBDP Enterprise to conduct its risk assessments. We obtained information on the operational, force management, and institutional risk assessments conducted by the Joint Requirements Office and ODASD (CBD) to understand the process used to conduct the CBDP Enterprise's risk assessments. We interviewed officials from ODASD (CBD), which develops CBDP Enterprise-wide guidance to ensure strategic goals are achieved, to determine how threat data and the results of risk assessments are used—or will be used in the future—to support investment planning in research and development.
We obtained relevant documentation and interviewed officials from the following organizations:

Office of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs
Office of the Deputy Assistant Secretary of Defense for Chemical and Biological Defense (ODASD (CBD))
Office of the Assistant Secretary of Defense for Health Affairs
Joint Chiefs of Staff
Force Structure, Resources, and Assessment Directorate (J-8)
Joint Requirements Office for Chemical, Biological, Radiological, and Nuclear Defense
Defense Threat Reduction Agency
Joint Science and Technology Office for Chemical and Biological Defense
Defense Intelligence Agency
U.S. Army
Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology
Joint Program Executive Office for Chemical and Biological Defense
Office of the U.S. Army Deputy Chief of Staff, G-8
Program Analysis and Integration Office (PAIO)
Office of the Deputy Under Secretary of the Army for Test and Evaluation
U.S. Army Medical Research and Materiel Command
U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID), National Interagency Biodefense Campus, Fort Detrick, Maryland
U.S. Army Medical Research Institute of Chemical Defense (USAMRICD), Aberdeen Proving Ground, Maryland
U.S. Army Materiel Command
U.S. Army Research, Development and Engineering Command
Edgewood Chemical Biological Center (Edgewood), Aberdeen Proving Ground, Maryland
U.S. Army Test and Evaluation Command
West Desert Test Center (West Desert), Dugway Proving Ground, Utah
U.S. Army Intelligence and Security Command
National Ground Intelligence Center
National Interagency Confederation for Biological Research, National Interagency Biodefense Campus, Fort Detrick, Maryland

We conducted this performance audit from January 2014 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Chemical and Biological Defense Program (CBDP) Enterprise's research and development and test and evaluation infrastructure is a key component in defending the nation against chemical and biological threats. For example, prior to deploying the MV Cape Ray to the Mediterranean Sea to demilitarize chemical weapons from Syria, the U.S. Army Medical Research Institute of Chemical Defense (USAMRICD) provided training to its medical staff, inspected the ship, and evaluated the medical preparedness of the mission. In July 2014, the United States began using equipment and personnel expertise from the U.S. Army's Edgewood Chemical Biological Center (Edgewood), according to Edgewood officials, to neutralize chemical weapons materials from Syria. In another example, according to a U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) official, USAMRIID is supporting the development of multiple products against Ebola, including the experimental therapeutic drug ZMapp, which was provided to the American health care workers infected with the Ebola virus during the outbreak in West Africa in 2014.
The CBDP Enterprise's primary research and development and test and evaluation facilities have different missions, but serve the same military population and engage in similar activities to protect the warfighter from chemical and biological threats. While these facilities support the CBDP Enterprise in carrying out its mission, they are owned and operated by the U.S. Army. Edgewood's mission is to be the nation's provider of innovative solutions to countering weapons of mass destruction. Edgewood is located on Aberdeen Proving Ground, Maryland. Edgewood aligns with the CBDP Enterprise by "enabling the warfighter to deter, prevent, protect against, mitigate, respond to, and recover from chemical, biological, radiological, and nuclear threats and effects as part of a layered, integrated defense." To do this, Edgewood's core areas of work include chemistry and biological sciences; science and technology for emerging threats; chemical, biological, radiological, nuclear, and high-yield explosives analysis and testing; chemical and biological agent handling and surety; and chemical and biological munitions and field operations. Edgewood also conducts training of civilians and military personnel to respond to chemical and biological threats, cosponsoring some training with USAMRICD. In fiscal year 2013, 40.8 percent of Edgewood's funding came from the CBDP Enterprise, and the remainder came from the Army (15.5 percent), non-CBDP Department of Defense (DOD) organizations (35.7 percent), federal agencies (4.1 percent), and nonfederal agencies (3.9 percent). As of October 2014, Edgewood had a staff of 1,421 whose work is focused on nonmedical materiel solutions to chemical and biological threats. Since 2008, Edgewood has completed projects intended to more safely perform the research and development required to address current and emerging chemical and biological threats. Changes at the facility's Advanced Chemical Laboratory include the addition of 10,000 square feet of state-of-the-art laboratories for safely handling emerging agents, including materials with no known medical countermeasures. According to Edgewood officials, planned changes at its Advanced Threat Defense Facility are expected to facilitate expansion of emerging threat bench-scale experiments to large-scale evaluations to enable enhanced research capabilities and to include unique infrastructure capabilities to address the challenges of emerging chemical threats from vapors, solids, liquids, and aerosols. According to Edgewood officials, the biggest challenge for the future is sustaining core intellectual and physical infrastructure in a time of budget austerity. Second, these officials stated that the lack of a funding mechanism for sustainment of the facility is a challenge. The Program Analysis and Integration Office (PAIO) determined that the cost of sustainment is about $26.4 million for fiscal year 2015. There is a plan to fund sustainment of the chemical and biological infrastructure to support the CBDP mission in the Fiscal Year 2015–2019 Program Objective Memorandum; however, as of January 2015, there was no agreement within the CBDP Enterprise to support the primary research and development facilities in this way. Third, officials told us that Edgewood is maintaining 28 abandoned buildings on its campus. Figure 4 shows an example of an abandoned facility at Edgewood. Building 3222, now over 70 years old, was a medical research laboratory with about 33,000 square feet and was built in 1944.
Figure 5 shows another example of an abandoned facility at Edgewood. Building 3300 was a chemistry laboratory used to develop and evaluate decontamination technology to mitigate chemical and biological threats. This facility has about 44,350 square feet and was built in 1966. According to Edgewood officials, it will cost over $74 million to demolish all 28 buildings, which is equivalent to about 1 year's facilities sustainment and support costs for the CBDP Enterprise's three primary research and development facilities combined. In addition, according to Edgewood officials, maintaining one of its most expensive buildings until it is demolished is estimated to cost about $600,000 a year. Officials said that the ability to maintain and expand their intellectual infrastructure also is strained in the current fiscal environment. Currently, Edgewood plans to maintain these facilities until funding becomes available to demolish the buildings.

USAMRICD's mission is to discover and develop medical products and knowledge solutions against chemical and biochemical threats by means of research, education and training, and consultation. USAMRICD is located on Aberdeen Proving Ground, Maryland. Its core areas of work include analytics, which includes diagnostics, forensics, and the Absorption, Distribution, Metabolism, Excretion, Toxicology Center of Excellence to support drug development; agent mitigation, which includes personnel decontamination and bioscavenger enzymes to neutralize chemical warfare agents; toxicant countermeasures, which includes countermeasures against vesicants, metabolic poisons, and pulmonary toxicants; nerve agent countermeasures; and toxin countermeasures. USAMRICD develops educational tools and conducts training courses for military and civilian personnel, with emphasis on medical care of chemical casualties. USAMRICD's campus consists of 15 buildings and about 173,000 square feet of laboratories and support areas. In fiscal year 2013, about 61 percent of USAMRICD's funding came from the CBDP Enterprise, with about 15 percent coming from non-CBDP DOD organizations, and about 24 percent coming from non-DOD federal organizations. As of July 2014, USAMRICD had a staff of 362 personnel supporting its work to develop medical chemical defenses for the warfighter. According to USAMRICD officials, there have been no major upgrades or additions to the current infrastructure since 2008 due to the construction of a new building. USAMRICD officials said they expect to begin moving into the facility in 2015. According to USAMRICD officials, the laboratory and research support areas of the facility will consist of about 250,000 square feet across four buildings when the new facility is complete. The entire new facility is about 526,000 square feet and is on track to be designated a Leadership in Energy and Environmental Design facility. Figure 6 shows USAMRICD's new headquarters and laboratory facility. A Leadership in Energy and Environmental Design program promotes "green" building design, green construction practices, and evaluation of the whole building's lifetime environmental performance.

USAMRIID's mission is to provide leading-edge medical capabilities to deter and defend against current and emerging biological threats. USAMRIID is located on the National Interagency Biodefense Campus at Fort Detrick, Maryland.
After the terrorist attacks of September 11, 2001, additional funding allowed USAMRIID to increase its workforce to enhance its existing mission to address biological threats, to include biological threat characterization, enhanced studies of disease, and the development of medical countermeasures. Its core areas of work include preparing for uncertainty; research, development, test, and evaluation of medical countermeasures; rapidly identifying biological agents; training and educating the force; and providing expertise in medical biological defense. For example, USAMRIID also conducts field training for operational forces in areas such as threat identification and diagnostic methods. Figure 7 shows an example of a USAMRIID field training exercise. USAMRIID's campus consists of 20 buildings and 582,369 square feet of laboratory and support space, with 134,469 square feet of Biosafety Level-2 (BSL-2), BSL-3, and BSL-4 laboratory space. According to USAMRIID officials, USAMRIID is the only DOD facility with BSL-4 containment laboratories. In addition, USAMRIID officials stated that about 80 percent of USAMRIID's work is medical countermeasures research and development. Figure 8 shows USAMRIID staff in a BSL-4 containment laboratory. In 2012, the Office of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs (OASD (NCB)) assigned USAMRIID the responsibility of performing BSL-3 and BSL-4 developmental testing and evaluation of medical countermeasures. As a result, USAMRIID made adjustments to the facility's laboratory infrastructure and retained key subject-matter experts required to perform studies under the Good Laboratory Practices system promulgated by the Food and Drug Administration. In fiscal year 2013, about 50 percent of USAMRIID's funding came from the CBDP Enterprise, with about 38 percent coming from non-CBDP DOD organizations, and the remaining 12 percent from non-DOD federal agencies and nonfederal agencies. As of July 2014, USAMRIID maintained a staff of 841 personnel to support its work in biological defense research. USAMRIID is constructing a new headquarters and laboratory building, and officials said they expect to begin moving into the building in 2017. According to USAMRIID officials, the new facility will provide additional laboratory space and a new laboratory design to improve workflow and productivity, particularly when performing animal studies. The new facility will include several new capabilities, which may enhance understanding of the pathophysiology of animals and the effectiveness of medical countermeasures to address biological threats. In response to our questionnaire, USAMRIID officials told us that they are concerned about a potential intellectual infrastructure gap in supporting medical countermeasures test and evaluation, a new responsibility as of 2013. The new mission will require USAMRIID personnel to meet additional standards for conducting research and testing. In addition, USAMRIID officials stated that sustaining their new facility will be a challenge. USAMRIID officials said that it would be helpful if the CBDP Enterprise provided stable sustainment funding in a way similar to the funding received for the test and evaluation facilities. PAIO estimated that the cost of sustainment and other support activities at USAMRIID is about $32.7 million in fiscal year 2015.
Currently, the research and development facilities receive funds to sustain the facilities through individual research and development projects awarded by ODASD (CBD) through the Joint Science and Technology Office. West Desert Test Center (West Desert) at Dugway Proving Ground, Utah, has a mission to safely test warfighters’ equipment to high standards within cost and schedule. West Desert enables the delivery of reliable defense products to the warfighter through rigorous developmental and operational testing from the test tube to the battlefield. Its core areas of work include chemical and biological laboratory, chamber, and field testing; dissemination and explosives; dispersion modeling; meteorology; data science; and test engineering and integration. Dugway Proving Ground is one of DOD’s major range and test facility bases. In fiscal year 2013, about 77 percent of West Desert’s work was conducted for the CBDP Enterprise, with the rest from other DOD organizations (15 percent), non-DOD federal government agencies (2 percent), and industry, academia, and international organizations (6 percent). In response to section 232 of the Bob Stump National Defense Authorization Act for Fiscal Year 2003, West Desert charges DOD customers for the costs that are directly related to testing. Therefore, West Desert receives annual funding through the Army and OASD (NCB) for facility sustainment. As of July 2015, West Desert had a staff of 518 personnel on a facility of about 1,252 square miles, including mountain terrain, mixed desert terrain, and salt flats. According to West Desert officials, some of the infrastructure capabilities added since 2008 include upgrades and improvements to their test grid and dynamic test chamber. Additionally, West Desert has two major ongoing efforts to align infrastructure with emerging chemical and biological threats. The Whole System Live Agent Testing Chamber allows full-system testing of biological detection equipment in a BSL-3 environment with controlled humidity and wind speed—a capability that does not exist elsewhere. The second capability, the Modular Chemical Chamber Test Capabilities, tests warfighter capabilities against emerging chemical threats. This testing capability will include the installation of Secondary Containment Modules to roll into and out of a large multipurpose chemical-warfare-agent-testing chamber in the West Desert’s Bushnell Materiel Testing Facility. The use of modular chambers allows for reconfiguration of the facility for upcoming tests while other testing is being conducted within the Bushnell Materiel Testing Facility. According to West Desert officials, this modular concept is expected to reduce test costs and timelines, while increasing test throughput and adding flexibility in meeting customer test requirements. As part of its future plans to ensure its infrastructure is aligned to address emerging threats, West Desert officials stated that upcoming test requirements for conventional agents are in place and that priorities for future capabilities will focus on the ability to rigorously test military systems against threats from nontraditional chemical agents and toxic industrial chemicals and materials. In addition, West Desert is constructing an addition to its Life Sciences Test Facility. This annex, to support testing of field and chamber samples and analysis of test data, among other uses, will include about 41,200 square feet, with about 16,200 square feet of BSL-2 and BSL-3 laboratories, including an aerosol chamber. 
According to West Desert officials, this facility will address a current shortfall in BSL-3 laboratory and chamber testing capacity. Figure 9 shows West Desert's new Life Sciences Test Facility annex. West Desert officials identified potential gaps in West Desert's physical infrastructure and knowledge and skill capabilities. West Desert plans to establish a nontraditional (chemical) agent staging facility to support the modular test chambers being installed in the Bushnell Materiel Testing Facility. According to West Desert officials, as of January 2015, the project had not been approved for funding through the Military Construction–Defense budget account. In addition, officials have identified gaps in subject-matter expertise in molecular biology, virology, chemical engineering, analytical chemistry, aerosol-dissemination technology, information technology, catalysis, and automation technology. According to West Desert officials, government compensation restrictions will likely preclude the hiring of full-time personnel in the areas of information technology and chemical engineering.

The Joint Requirements Office developed a list of capabilities needed by military forces to defend against chemical and biological threats in an operational environment. As shown in figure 10, the 18 core capabilities are categorized into four areas: Sense, Shape, Shield, and Sustain. These four areas are described as follows:

The "Sense" area is the capability to continually provide information about the chemical, biological, radiological, and nuclear (CBRN) situation at a time and place by detecting, identifying, and quantifying CBRN hazards in air or water, and on land, personnel, equipment, or facilities. This capability includes detecting, identifying, and quantifying those CBRN hazards in all physical states (solid, liquid, and gas).

The "Shape" area provides the ability to characterize the CBRN hazard to the force commander and to develop a clear understanding of the current and predicted CBRN situation; to collect, query, and assimilate information from sensors, intelligence, and medical personnel in near-real time to inform personnel, among other actions and responsibilities.

The "Shield" area capabilities provide protection to the force from chemical and biological threats by preventing or reducing individual and collective exposures, applying prophylaxis to prevent or mitigate negative physiological effects, and protecting critical equipment.

The "Sustain" area capabilities allow forces to conduct decontamination and medical actions that enable the quick restoration of combat power, maintain or recover essential functions that are free from the effects of CBRN hazards, and facilitate the return to preincident operational capability as soon as possible.

Since 1999, we have found potential fragmentation, overlap, and duplication of the federal government's chemical and biological research and development laboratory facilities, but we have also found improved coordination among federal agencies developing biological countermeasures. In 1999 and 2000, prior to the anthrax attacks in the United States, we found ineffective coordination among the Department of Defense (DOD) and other federal agencies with chemical and biological programs that could result in potential gaps or overlap in research and development programs.
In August 1999, we found that the formal and informal program coordination mechanisms that existed between four military and civilian nonmedical chemical and biological programs may not ensure that potential overlap, gaps, and opportunities for collaboration would be addressed. Specifically, we found that coordinating mechanisms between DOD’s Chemical and Biological Defense Program (CBDP), DOD’s Defense Advanced Research Projects Agency’s Biological Warfare Program, the Department of Energy’s Chemical and Biological Nonproliferation Program, and the Counterterror Technical Support Program lacked information on prioritized user needs, lacked validated chemical and biological defense equipment requirements, and lacked information on how these programs relate their research and development projects to needs. We concluded that information on user needs and defined requirements may allow coordination mechanisms to compare the specific goals and objectives of research and development projects to better assess whether overlaps, gaps, and opportunities for collaboration exist. We did not make recommendations in this report. In July 2014, we testified before the House Committee on Energy and Commerce Subcommittee on Oversight and Investigations on recent incidents at government high-containment laboratories and the need for strategic planning and oversight of high-containment laboratories. In September 2009, we found that there was no federal entity responsible for strategic planning and oversight of high-containment laboratories— those designed for handling dangerous pathogens and emerging infectious diseases—across the federal government. We concluded in September 2009 that without an entity responsible for oversight and visibility across the high-containment laboratories and a strategy for requirements for the laboratories, there was little assurance of having facilities with the right capacity to meet the nation’s needs. We made several recommendations to address these issues, including identifying a single entity charged with periodic government-wide strategic evaluation of high-containment laboratories, developing a mechanism for sharing lessons learned from reported laboratory accidents, and implementing a personnel reliability program for high-containment laboratories, among other recommendations. In our February 2013 report on high-containment laboratories, we made two recommendations—first, that periodic assessment of national biodefense research and development needs be conducted and, second, that the Executive Office of the President, Office of Science and Technology Policy, examine the need to establish national standards for high-containment laboratories. The Executive Office of the President, Office of Science and Technology Policy, concurred with our two recommendations. Regarding biosurveillance, in June 2010, we found that the federal government could benefit from a focal point that provides leadership to the interagency community developing this capability. Biosurveillance is the ability to provide early detection and situational awareness of potentially catastrophic biological events. Specifically, we found that the mission responsibilities and resources needed to develop a biosurveillance capability were dispersed across a number of federal agencies, creating the potential for overlap and duplication of effort. 
In addition, we found that there was no broad, integrated national strategy that encompassed all stakeholders with biodefense responsibilities to guide the prioritization and allocation of investment across the entire biodefense enterprise, among other responsibilities. We made two recommendations to the Homeland Security Council within the Executive Office of the President to (1) identify a focal point, which was implemented when an Interagency Policy Group was convened to complete a National Biosurveillance Strategy in 2012 and (2) develop a national biosurveillance strategy, which remains open until a mechanism to identify resource and investment needs, including investment priorities, is included in an implementation plan. See GAO, Public Health Preparedness: Developing and Acquiring Medical Countermeasures Against Chemical, Biological, Radiological, and Nuclear Agents, GAO-11-567T (Washington, D.C.: Apr. 13, 2011). Security. This organization is a decision-making body responsible for providing recommendations to the Secretary of Health and Human Services on coordination of medical countermeasures development against chemical and biological threats, among other responsibilities. Similarly, in May 2014, we found coordination of effort among federal agencies located on the National Interagency Biodefense Campus. The following textbox provides our observations on the program’s efforts at the National Interagency Biodefense Campus to collaborate with other federal agencies to reduce potential infrastructure fragmentation, overlap, and duplication. GAO Observations on the National Interagency Biodefense Campus The National Interagency Biodefense Campus at Fort Detrick, Maryland, was established in 2004. An official with the U.S. Army Medical Research and Materiel Command testified before the House Select Committee on Homeland Security in 2004 that the campus would share common infrastructure and supporting requirements, such as roadways, libraries, and regulatory and quality assurance responsibility. In addition, the official stated that the campus would minimize duplication of effort, technology, and infrastructure. During the course of our review, we found some examples of actions taken by the CBDP Enterprise’s primary research and development facility at the National Interagency Biodefense Campus to reduce the potential for duplication of physical and intellectual infrastructure. A set of studies on medical countermeasure test and evaluation facility requirements, conducted for the U.S. Army Assistant Chief of Staff for Facilities, Planning and Programming Division, determined, among other things, that there was sufficient capacity for holding animals in existing facilities that conduct research with animals. During planning for a new medical countermeasure test and evaluation facility, a decision was made that the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) on the National Interagency Biodefense Campus would cancel its own plans to construct this building, including an animal holding facility (vivarium). According to USAMRIID officials, the cancellation had an overall estimated cost savings of about $600 million. USAMRIID and the National Institute of Allergy and Infectious Diseases Integrated Research Facility plan to share Biosafety Level-3 (BSL-3) and BSL-4 imaging laboratories capabilities. USAMRIID officials said that this reduces the need for each facility to have a BSL-3 and BSL-4 imaging laboratory. 
The National Interagency Confederation for Biological Research, a governance structure for the National Interagency Biodefense Campus, encourages intellectual collaboration in efforts related to research of biological pathogens across agency boundaries, such as collaborative award programs and annual scientific forums. Since 1999, we have conducted a number of reviews of federal agencies' efforts to reduce potential fragmentation, overlap, and duplication through coordination of their chemical and biological programs. We found improved coordination that may reduce potential fragmentation, overlap, and duplication of research and development of medical countermeasures.

In addition to the contact named above, GAO staff who made significant contributions to this report include Mark A. Pross, Assistant Director; Richard Burkard; Russ Burnett; Jennifer Cheung; Rajiv D'Cruz; Karen Doran; Edward George; Mary Catherine Hult; Mae Jones; Amie Lesser; Elizabeth Morris; Steven Putansu; Sushil Sharma; Sarah Veale; and Michael Willems.

High-Containment Laboratories: Recent Incidents of Biosafety Lapses. GAO-14-785T. Washington, D.C.: July 16, 2014.
Biological Defense: DOD Has Strengthened Coordination on Medical Countermeasures but Can Improve Its Process for Threat Prioritization. GAO-14-442. Washington, D.C.: May 15, 2014.
High-Containment Laboratories: Assessment of the Nation's Need Is Missing. GAO-13-466R. Washington, D.C.: February 25, 2013.
Public Health Preparedness: Developing and Acquiring Medical Countermeasures Against Chemical, Biological, Radiological, and Nuclear Agents. GAO-11-567T. Washington, D.C.: April 13, 2011.
Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue. GAO-11-318SP. Washington, D.C.: March 1, 2011.
Biosurveillance: Efforts to Develop a National Biosurveillance Capability Need a National Strategy and a Designated Leader. GAO-10-645. Washington, D.C.: June 30, 2010.
High-Containment Laboratories: National Strategy for Oversight Is Needed. GAO-09-1045T. Washington, D.C.: September 22, 2009.
High-Containment Laboratories: National Strategy for Oversight Is Needed. GAO-09-1036T. Washington, D.C.: September 22, 2009.
High-Containment Laboratories: National Strategy for Oversight Is Needed. GAO-09-574. Washington, D.C.: September 21, 2009.
Chemical and Biological Defense: Observations on DOD's Risk Assessment of Defense Capabilities. GAO-03-137T. Washington, D.C.: October 1, 2002.
Chemical and Biological Defense: Coordination of Nonmedical Chemical and Biological R&D Programs. GAO/NSIAD-99-160. Washington, D.C.: August 16, 1999.
The United States faces current and emerging chemical and biological threats, and defenses against these threats enable DOD to protect the force, preclude strategic gains by adversaries, and reduce risk to U.S. interests. GAO was asked to review DOD efforts to manage its chemical and biological defense infrastructure capabilities. This report examines the extent to which the CBDP Enterprise has: (1) achieved its goal to identify required infrastructure capabilities to address current and emerging chemical and biological threats; (2) identified, addressed, and managed potential fragmentation, overlap, and duplication in its chemical and biological defense infrastructure; and (3) used and plans to use threat data and the results of risk assessments to support its investment planning for chemical and biological defense. GAO analyzed CBDP infrastructure policies, plans, and studies from organizations across the CBDP Enterprise from fiscal years 2008 through 2014. A key component of the 26 Department of Defense (DOD) organizations that constitute the Chemical and Biological Defense Program (CBDP) Enterprise is the chemical and biological defense research and development and test and evaluation infrastructure. After nearly 7 years, the CBDP Enterprise has not fully achieved its goal to identify required infrastructure capabilities. The Joint Chemical, Biological, Radiological, and Nuclear Defense Program Analysis and Integration Office (PAIO), CBDP's analytical arm, recommended in 2008 that the CBDP Enterprise identify required infrastructure capabilities, such as laboratories to research chemical and biological agents, to ensure alignment of the infrastructure to its mission. CBDP Enterprise officials recognize the importance, validity, and necessity of addressing the 2008 recommendation. The CBDP Enterprise has made limited progress in achieving this infrastructure goal because CBDP Enterprise officials told GAO that they were focused on higher priorities and had no CBDP Enterprise-wide impetus to address the infrastructure recommendations. The Office of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs previously identified the need for an entity that has the responsibility and authority needed to ensure achievement of this goal, but DOD has not designated such an entity. By identifying and designating an entity with the responsibility and authority to lead infrastructure transformation, the CBDP Enterprise would be better positioned to achieve this goal. The CBDP Enterprise has taken some actions at its laboratories to identify duplication in its chemical and biological defense infrastructure. DOD directives outline goals, such as to avoid duplication by using existing DOD and other federal agencies' facilities. As part of an ongoing study to identify required infrastructure, in July 2015 PAIO plans to inventory and analyze CBDP Enterprise infrastructure for potential duplication. However, study officials stated that they do not plan to identify, request, or consider information about infrastructure capabilities from existing studies of other federal agencies, such as the Department of Homeland Security, because their office does not have the authority or resources to require such information. By considering existing information, which would not necessarily require new authority, PAIO will have more information about existing infrastructure inventory across the federal government, such as its capability and potential availability for use. 
The CBDP Enterprise used threat data and plans to use threat data and the results from risk assessments piloted in 2014 to support its future portfolio planning process to prioritize research and development investment. However, the CBDP Enterprise has not updated its guidance and planning process to fully institutionalize the use of risk assessments. Federal standards for internal control state that agencies should have written procedures to better ensure leadership directives are implemented. According to CBDP Enterprise officials, while updating the guidance would be beneficial, they had not committed to updating such guidance or established a time frame for doing so. By updating its guidance to fully institutionalize the use of risk assessments, the CBDP Enterprise would be better positioned to prioritize future research and development investments. GAO recommends, among other things, that DOD (1) designate an entity to lead the effort to identify required infrastructure; (2) identify, request, and consider any information from chemical and biological infrastructure studies of other federal agencies to avoid potential duplication; and (3) update the CBDP Enterprise's guidance and planning process to fully institutionalize the use of risk assessments. DOD concurred with all five of GAO's recommendations and discussed actions it plans to take.
The FHLBank System, established in 1932 by the Federal Home Loan Bank Act, is a group of government-sponsored enterprises comprising 12 regional, federally chartered banks. Each FHLBank is cooperatively owned by its members, such as commercial and community banks, thrifts, credit unions, and insurance companies. The FHLBanks represent 12 districts located in Atlanta, Boston, Chicago, Cincinnati, Dallas, Des Moines, Indianapolis, New York, Pittsburgh, San Francisco, Seattle, and Topeka. As of year-end 2014, over 7,300 financial institutions were members of the FHLBank System. The number of members in individual FHLBank districts ranged from 303 to 1,155, with the FHLBank of Pittsburgh having the fewest members and Des Moines the most. In 2014, the 12 FHLBanks had approximately $914 billion in assets, with asset size ranging from $35 billion (Seattle) to $138 billion (Atlanta). FHLBank members included commercial banks (66 percent of all members), credit unions (17 percent), thrifts (12 percent), insurance companies (4 percent), and certified community development financial institutions (CDFI) (less than 1 percent).

Eligible financial institutions may become members of the FHLBank, generally in the district where they have their principal place of business. For example, if a financial institution's home office is in Hartford, Connecticut, the institution would join the FHLBank of Boston. However, subsidiaries of financial institutions' holding companies may be members of the same or of a different FHLBank, depending on where the subsidiary's principal place of business is located. Eligible financial institutions become members through an application process and, once approved, purchase stock in their regional FHLBank.

The FHLBank System issues debt securities in capital markets, generally at relatively favorable rates made possible by the system's status as a government-sponsored enterprise. Buyers of FHLBank debt securities represent the entire spectrum of domestic and international investors, including commercial banks, central banks, investment managers, major corporations, pension funds, government agencies, and individuals. The FHLBanks do not engage in lending to the public but instead provide loans to lenders to support housing finance and community lending. These loans, or advances, to member institutions are primarily collateralized by residential mortgage loans. Some member institutions also may pledge small business, small farm, and small agribusiness loans as collateral. In addition to the advances, FHLBank members also have the ability to earn dividends on purchased capital stock and access to various products and services, such as letters of credit and payment services.

Beginning in 1989, Congress expanded the FHLBanks' role and responsibilities. FIRREA created two principal housing and community lending programs—the Affordable Housing Program (AHP) and the Community Investment Program (CIP)—in addition to community support programs. Other community lending programs may be voluntarily offered by the FHLBanks under the Community Investment Cash Advance (CICA) program regulations, which were issued by the Federal Housing Finance Board (now FHFA) under an authority in FIRREA for other community lending programs. AHP and CIP are required programs under FIRREA, while other CICA programs are voluntary. Under AHP, the subsidy may be in the form of a grant or a subsidized interest rate on a secured loan from an FHLBank to a member lender.
According to statute, low- or moderate-income households are defined as households with incomes of 80 percent or less of area median income, and very low-income households are those with incomes of 50 percent or less of the area median. AHP subsidies are often combined with other programs and funding sources, such as the Low-Income Housing Tax Credit and investments from private developers. The competitive application program generally constitutes at least 65 percent of an FHLBank's required annual contribution to supporting affordable housing. Under the homeownership set-aside program, member institutions apply to their FHLBanks for grants and then disburse those grants to homeowners. At least one-third of the annual set-aside contribution must be allocated to first-time homebuyers.

CIP provides member institutions with advances and letters of credit for housing, economic development, and mixed-use projects. CIP offers flexible advance terms for members to undertake community-oriented mortgage lending. The price of advances made under CIP shall not exceed the FHLBank's cost of issuing consolidated obligations of comparable maturity, taking into account reasonable administrative costs. Letters of credit, which support obligations to third-party beneficiaries on behalf of members, help facilitate residential housing financing and community lending and may provide liquidity or other funding, among other purposes. Finally, under the voluntary CICA programs, FHLBanks offer advances, letters of credit, and grants for economic development that are targeted to small business concerns, geographies, or households at specified income levels.

FIRREA also created advisory councils for the FHLBanks, made up of 7 to 15 persons from each district and drawn from community and nonprofit organizations that are actively involved in providing or promoting low- and moderate-income housing or community lending. The advisory councils meet with representatives of the FHLBank's board of directors at least quarterly to provide advice on ways in which the FHLBank can carry out its housing finance and community lending mission. They also submit an annual analysis to FHFA on the low- and moderate-income housing and community lending activities of their FHLBank. See 12 C.F.R. § 1292.5(d)(2).

Later reforms also allowed FHLBanks to accept alternative forms of collateral, such as agricultural and small business loans, from small members known as community financial institutions. The goal of these reforms was to help smaller banks or thrifts, which may have limited single-family mortgages and other traditional assets to pledge as collateral, gain greater access to the liquidity offered by FHLBank advances. In so doing, the reforms were targeted to help improve economic development credit opportunities in rural areas and other underserved communities.

Section 20 of the Federal Home Loan Bank Act (12 U.S.C. § 1440) requires each FHLBank to be examined at least annually. FHFA is also responsible for reporting annually on the FHLBanks' activities in affordable housing and community development. FHFA's Office of Inspector General has also reported on certain aspects of the FHLBanks, including the AHP and advances made to the largest institutions. See FHFA, Office of Inspector General, Recent Trends in Federal Home Loan Bank Advances to JPMorgan Chase and Other Large Banks (Washington, D.C.: April 2014); FHFA's Oversight of the Federal Home Loan Banks' Affordable Housing Program (Washington, D.C.: April 2013); and An Overview of FHLBank System's Structure, Operations, and Challenges, accessed on June 18, 2014 at http://www.fhfaoig.gov/Content/Files/FHLBankSystemOverview.pdf. In addition, each FHLBank must register a class of its common stock and comply with SEC disclosure requirements. As a result, FHLBanks must file periodic and annual financial reports and other information (such as annual and quarterly reports, forms 10-K and 10-Q) with SEC.
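To make the AHP allocation rules described above concrete, the following sketch applies them to a hypothetical annual contribution. It is a minimal illustration, not FHLBank or FHFA code: the dollar amounts, function names, and the assumed 35 percent set-aside share are assumptions made for the example.

```python
# Illustrative sketch of the statutory AHP allocation rules described above.
# All figures and names here are hypothetical; they are not drawn from FHLBank data.

def split_ahp_contribution(annual_contribution, set_aside_share=0.35):
    """Split a hypothetical annual AHP contribution between the competitive
    application program and the homeownership set-aside program.

    Because the competitive program generally must receive at least 65 percent
    of the contribution, the set-aside share cannot exceed 35 percent."""
    if not 0 <= set_aside_share <= 0.35:
        raise ValueError("set-aside share may not exceed 35 percent of the contribution")
    set_aside = annual_contribution * set_aside_share
    competitive = annual_contribution - set_aside
    # At least one-third of the set-aside must be allocated to first-time homebuyers.
    first_time_homebuyer_minimum = set_aside / 3
    return competitive, set_aside, first_time_homebuyer_minimum

def household_income_category(household_income, area_median_income):
    """Classify a household against the statutory income thresholds cited above."""
    ratio = household_income / area_median_income
    if ratio <= 0.50:
        return "very low-income"
    if ratio <= 0.80:
        return "low- or moderate-income"
    return "above moderate-income"

if __name__ == "__main__":
    competitive, set_aside, ftb_minimum = split_ahp_contribution(10_000_000)
    print(f"Competitive program: ${competitive:,.0f}")
    print(f"Set-aside program: ${set_aside:,.0f} (at least ${ftb_minimum:,.0f} for first-time homebuyers)")
    print(household_income_category(40_000, 60_000))  # 67 percent of area median -> low- or moderate-income
```

Under these assumptions, a $10 million contribution would leave at least $6.5 million for the competitive application program, and at least one-third of whatever is set aside would be reserved for first-time homebuyers.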
In 2011, FHFA established an OMWI that is responsible for all matters related to diversity in FHFA's management, employment, and business activities. The OMWI is responsible for developing and implementing standards and procedures to promote diversity in all activities and submitting an annual report detailing the actions taken to promote diversity and inclusion. FHFA's OMWI is tasked with helping to ensure that minorities, women, and persons with disabilities are fully included in all activities at the FHLBanks, and with helping to promote diversity on FHLBank boards of directors.

Each bank has its own board of directors made up of member and independent directors. Member directors represent member institutions and by law must be officers or directors of those institutions. Independent directors are individuals from outside the FHLBank membership base. Additionally, at least two of the independent directors must be designated public interest directors with more than 4 years of experience in representing consumer or community interests on banking services, credit needs, housing, or financial consumer protections. In 2014, the size of the FHLBank boards ranged from 14 to 19 directors, for a total of 189 board directors across all 12 FHLBanks. Of the 189 board directors, 108 were member directors and 81 were independent directors, including 26 public interest directors (see fig. 1). Each board elects a chair and vice chair who serve 2-year terms. Of the 12 board chairs, 7 were member directors and 5 were independent directors, including 1 public interest director (see table 1). Unlike the boards of publicly traded companies, which typically include company managers among their board directors, no representatives from FHLBank management may serve on FHLBank boards. Each FHLBank has a president who reports to the bank's board of directors.

Two rules applicable to the FHLBanks govern the roles and responsibilities of FHLBank board directors. The first describes the directors' general authority and powers, which include, among other things, establishing and maintaining effective risk management policies, internal control systems, and strategic business plans; establishing audit committees; adopting annual operating expense budgets and a capital expenditures budget; and paying dividends. The second outlines the general responsibilities of the FHLBank board directors with respect to the management of internal controls, audit systems, asset growth, investments, and credit and counterparty risk. FHFA officials told us that board responsibilities included establishing the risk appetite of the bank (including reviewing risk exposures), overseeing bank products policies (e.g., collateral, underwriting standards, and advance policies), and overseeing executive performance. Additionally, according to FHFA's examination manual, which includes board director responsibilities, the boards are responsible for hiring and retaining senior management. FHLBank boards typically organize themselves into committees.
The boards are required to have an audit committee and typically also have committees on finance, governance, and affordable housing and economic development. Because the boards are not required to have specific committees (other than the audit committee) or a minimum number of committees, the type and number of committees and the tasks they perform vary across FHLBank boards.

Our previous work on diversity in financial institutions has primarily focused on workforce diversity and diversity management, and we have identified leading practices that should be considered when developing and implementing diversity management. In our 2005 report, we identified a set of nine leading diversity management practices. They are (1) top leadership commitment; (2) diversity as part of an organization's strategic plan; (3) diversity linked to performance; (4) measurement; (5) accountability; (6) succession planning; (7) recruitment; (8) employee involvement; and (9) diversity training. While we developed these leading practices for workforce diversity, most are also relevant to boards of directors. See GAO-13-238, GAO-10-736T, and GAO-06-617.

In a 2011 report, we also examined the diversity of the boards of directors of the Federal Reserve Banks between 2006 and 2010 and found that it was limited. In this report, we found that the Federal Reserve Banks generally reviewed the demographics of their boards and used a combination of personal networking and community outreach efforts to identify potential candidates for directors. While we found that some Federal Reserve Banks recruited more broadly, we recommended that the Board of Governors of the Federal Reserve System encourage all Reserve Banks to consider ways to help enhance the economic and demographic diversity of perspectives on the boards, including by broadening their potential candidate pool; this recommendation has since been implemented.

HERA made several key changes affecting FHLBank board directors. First, it changed the process for selecting independent directors. Previously, the regulator had appointed some directors. Now, for the first time since the FHLBank System's inception, FHLBank boards nominate candidates for certain director positions and FHLBank members vote for the candidates. Second, HERA added skill requirements for independent directors, removed compensation caps, and changed directors' terms from 3 to 4 years. FHFA implemented HERA's governance changes through two final rules—one covering nominations and elections, including skill requirements and terms, and one addressing directors' compensation. Board directors and FHLBank representatives generally view HERA's governance changes as positive, because FHLBank boards now have the ability to nominate qualified candidates who help meet the overall boards' needs.

The governance changes HERA made to FHLBank boards changed the way certain directors are selected (see table 2). According to HERA, the majority of each FHLBank's directors must be member directors, who by previously existing statute are nominated and chosen by member institutions. Independent directors must make up not fewer than two-fifths (40 percent) of each board. Most significantly, HERA required that independent directors be elected rather than appointed by the regulator. From the inception of the FHLBank System in 1932, the Federal Home Loan Bank Act had required the regulator to appoint some directors. Under HERA, independent directors are nominated by FHLBank boards and elected by the FHLBank members at large.
HERA also specified that each independent director (other than public interest directors) have demonstrated knowledge of, or experience in, at least one of the following areas: financial management; auditing and accounting; risk management; derivatives; project development; or organizational management. Public interest directors must now have more than 4 years of experience representing consumer or community interests on banking services, credit needs, housing, or financial consumer protections. These requirements significantly expanded the previous requirements, as table 2 shows. HERA also removed compensation caps for board directors and required FHFA to include information on directors' compensation in its annual report to Congress. Finally, HERA expanded the length of directors' terms from 3 to 4 years, but maintained the 3 consecutive full-term limit, so that overall term limits are now 12 years instead of 9 years.

FHFA implemented HERA's governance changes through two final rules—one covering nominations and elections, including skill requirements and terms, and one addressing directors' compensation. According to FHFA officials, these two rules fully implemented HERA's governance changes. In October 2009, FHFA issued the first final rule, which addressed several requirements.

Nominations and elections: In establishing the rule, FHFA added new provisions to govern the process for nominating individuals for independent directorships and for conducting elections in conjunction with the member director elections. Nominations and elections of member directors remain largely unchanged. According to FHFA officials, as of November 2014, 21 new independent directors have been elected across the FHLBank System under the new nomination and election format.

Composition and term length: The rule established that FHFA would determine the board size for each FHLBank annually and designate at least a majority, but no more than 60 percent, of the directorships as member directors, as required by HERA, and the remainder as independent directors. Each FHLBank board determines how many of the independent directors will be public interest directors but must ensure that at all times the board has at least two public interest directors. The rule also stated that all of the directors would serve 4-year terms, but FHFA may set a shorter term for some directors so that all directors' terms do not end at the same time.

Skill requirements and application forms: The rule requires that each FHLBank conduct elections to fill expiring independent directorships, as HERA requires. The rule also notes that independent directors must have the qualifications listed in HERA and must complete application forms. Before nominating any individual for an independent directorship, other than a public interest directorship, the board of directors of an FHLBank must decide whether the nominee's knowledge or experience is commensurate with that needed to oversee a financial institution of the FHLBank's size and complexity. The rule also requires that any independent director nominee who runs unopposed in an election receive at least 20 percent of the votes eligible to be cast.

FHFA's oversight responsibilities: The rule requires that each FHLBank board submit a list of independent director nominees, along with the completed application forms, to FHFA for review. FHFA has some oversight responsibilities pertaining to the eligibility and qualifications of nominees.
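To illustrate how the composition and election rules just described fit together, the following is a minimal sketch, not FHFA's actual review process: the function names, the hypothetical 17-seat board, and the vote counts are assumptions made for the example, and the checks paraphrase the requirements summarized above.

```python
# Illustrative sketch of the HERA board composition and unopposed-election rules
# described above. Board counts and vote totals are hypothetical.

def check_board_composition(member, independent, public_interest):
    """Return a list of problems with a hypothetical board composition.

    Rules paraphrased from the discussion above: member directors must be a
    majority but no more than 60 percent of the board; independent directors
    must be at least 40 percent; and at least two of the independent directors
    must be public interest directors."""
    problems = []
    total = member + independent
    if not (total / 2 < member <= 0.60 * total):
        problems.append("member directors must be a majority but no more than 60 percent")
    if independent < 0.40 * total:
        problems.append("independent directors must be at least 40 percent of the board")
    if public_interest < 2 or public_interest > independent:
        problems.append("at least two independent directors must be public interest directors")
    return problems

def unopposed_nominee_elected(votes_received, votes_eligible):
    """An unopposed independent director nominee must still receive at least
    20 percent of the votes eligible to be cast."""
    return votes_received >= 0.20 * votes_eligible

if __name__ == "__main__":
    # Hypothetical 17-seat board: 10 member directors and 7 independent directors,
    # 2 of whom are public interest directors.
    print(check_board_composition(member=10, independent=7, public_interest=2))  # -> []
    print(unopposed_nominee_elected(votes_received=30, votes_eligible=100))      # -> True
```

In this hypothetical example, 10 member directors (about 59 percent of a 17-seat board) and 7 independent directors, 2 of them public interest directors, would satisfy each of the composition tests, and an unopposed nominee receiving 30 of 100 eligible votes would clear the 20 percent threshold.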
With regard to these oversight responsibilities, FHFA officials told us, for example, that the agency must review the list of nominees and the application forms before the FHLBanks announce who is running. According to FHFA, the agency reviews the forms to ensure that candidates meet all eligibility and qualification requirements. FHFA told us that the agency performs additional research on the candidates, verifies information provided on the forms, ensures that they have not engaged in criminal activity, and checks employment history. According to FHFA officials, the agency conducts additional searches to help verify other roles or positions a candidate lists on an application. FHFA officials said that the candidates proposed by FHLBanks generally met eligibility and qualification requirements. FHFA requests supplemental information when the information on a candidate's qualifications is not sufficient. Since the HERA rules were implemented, FHFA officials cited only one instance of a candidate who did not meet the requirements and was not nominated.

In April 2010, FHFA issued the second final rule implementing HERA's governance changes related to director compensation and expenses. The rule removed the previous statutory compensation caps, as mandated in HERA, and allowed each FHLBank to pay its directors "reasonable compensation and expenses," which can be reviewed by the FHFA Director. Additionally, the rule stated that each FHLBank board must annually adopt a written compensation policy. The FHLBanks must submit a copy of the compensation policy to FHFA, along with any other studies or supporting materials used to determine the compensation and expense amounts paid to directors. By statute, FHFA provides information on compensation and expenses in its annual report to Congress.

Our interviews with bank management and staff, board directors, advisory council representatives, and trade associations representing member institutions generally reflected the view that HERA's changes had been positive. The primary reason they cited was that HERA gave FHLBank boards the ability to nominate qualified candidates who would help meet the boards' needs. Representatives from bank management at five of the six FHLBanks we interviewed told us that the election of independent directors gave boards greater control and helped ensure that candidates who possessed specific skills and experience could be nominated. The majority (81 percent) of those board directors who provided comments for our survey also supported the election of independent directors. For example, in response to an open-ended question asking about the change, 18 board directors noted that the new nomination and election process allowed boards to select individuals whose skills and experience would augment those of the board as a whole. Additionally, advisory council representatives from the six FHLBanks we interviewed told us that they now had an increased role in working with the current board directors to help identify and vet candidates who would meet board needs. Representatives from two trade associations we interviewed told us that they supported the change as well, because, for example, member institutions preferred to have a vote and to be able to elect the independent directors.

Representatives from bank management and staff at the six FHLBanks we interviewed were positive about HERA's inclusion of specific skills and experience criteria for independent directors, noting that the criteria provided guidance for the process of identifying and nominating candidates.
In addition, the majority of board directors who provided comments on our survey (87 percent) believed HERA's skills and experience requirements for independent directors were appropriate or positive for several reasons. For example, the criteria set minimum skill requirements and helped boards intentionally find individuals with certain skills. One board director commented that the required skills help the nominating committees to search more broadly for good individuals. Another commented that HERA's skill requirements are broad enough to enable the boards to build varied skill sets to match emerging challenges, for instance, to nominate attorneys who understand SEC regulations—a skill that many member directors may not have.

Representatives from all six FHLBanks we interviewed said that removing compensation caps was a positive change, mainly because the FHLBanks could now make compensation commensurate with the time commitment required of board directors. Before HERA, compensation varied by position on the board (chair, vice chair, all other directors) but was capped across all 12 FHLBanks (see table 3). According to our analysis of FHFA data, the annual compensation for board directors across the 12 FHLBanks has increased significantly since HERA. For example, in 2013 the average chair, vice chair, and director compensation amounts were roughly three times the pre-HERA amounts. FHLBank representatives noted that compensation was now more competitive with compensation on private sector boards and would help attract and retain qualified individuals. Further, the results of our survey of 2014 board directors showed that 69 percent of respondents believed the skill levels of directors had increased as a result of the removal of compensation caps.

Board directors and representatives from bank management at the six FHLBanks we interviewed offered positive comments about increasing the term length from 3 to 4 years because of the steep learning curve involved with understanding FHLBank operations and financing. They said that this learning curve was especially steep during the directors' first 1 or 2 years and that the additional time allowed directors to gain more in-depth knowledge about FHLBank operations. Our board director survey results showed that the majority of board directors who provided comments (87 percent) supported the longer term length. For example, in response to our request for their thoughts on the change, several board directors noted that because of the level of complexity associated with FHLBanks, longer term lengths helped with continuity of experience on the board. Additionally, in their survey responses about the expanded term length, a few directors pointed out that many individuals were familiar with publicly traded companies but that the FHLBanks' cooperative structure was a unique business model. Managers from three of the six FHLBanks we interviewed told us that the increased term length had improved the continuity and stability of their directorships, in particular helping balance turnover when directors must leave because they reached term limits or because of mergers and acquisitions of member institutions.

In 2014, women represented about 16 percent of board directors and racial or ethnic minorities around 10 percent of board directors, and the majority of FHLBank board directors were non-Hispanic white males. Our survey showed that board directors had a variety of skill sets and education levels.
Among member directors (who by statute represent member financial institutions), directors from commercial banks had the most representation relative to overall FHLBank membership, thrifts and credit unions had some representation, and insurance companies and community development financial institutions had the least or no representation on any FHLBank board. According to FHFA officials and representatives from the six FHLBanks we spoke with, several challenges hindered efforts to increase representation of women and racial or ethnic minorities. These challenges included low director turnover, statutory requirements related to board composition, and limited diversity in the financial services sector.

Women represented roughly 16 percent of all FHLBank board directors in 2014. However, some of the FHLBanks had higher representation of women than others. While each of the 12 FHLBank boards had at least one female board director in 2014, two banks—Dallas and Pittsburgh—had five female board directors each, representing 31 percent of their boards (see fig. 2). One of the 12 FHLBank board chairs was a woman (Atlanta). Further, our analysis found that the majority of women directors were independent directors, rather than member directors (71 percent).

Additionally, the majority of board directors were white (non-Hispanic), including all board chairs. Racial or ethnic minorities represented roughly 10 percent of board directors in 2014. Like the representation of women on boards, the representation of racial or ethnic minorities varied by FHLBank. In 2014, three FHLBank boards had no racial or ethnic minority representation (Des Moines, Indianapolis, and Topeka), and racial or ethnic minorities represented 21 percent of three FHLBank boards—New York (four board directors), San Francisco, and Seattle (three board directors each) (see fig. 3). We found that, as with women directors, the majority of racial or ethnic minorities were independent directors (81 percent).

We received information on the number of women and racial or ethnic minority board directors over the past 5 years from 11 of the 12 FHLBanks. Based on this information, we found that the representation of women on FHLBank boards over the past 5 years had increased for 8 boards, stayed the same for 1 board, and decreased for the 2 remaining boards. Additionally, the number of ethnic or racial minority board directors increased for 3 boards, stayed the same for 4 boards, and decreased for the remaining 4 boards.

Board directors reported a variety of skills and education levels and, to a lesser extent, had varied industry experience. For example, in our survey of 2014 board directors, both member and independent directors reported having skills in corporate governance and board operations (93 percent), strategic planning (89 percent), organizational management (88 percent), financial management (86 percent), and asset or liability management (80 percent). Of all the directors who responded to our survey, roughly half reported having affordable housing expertise (slightly more independent than member directors, 50 and 39 directors, respectively). Additionally, 4 of the 12 board chairs indicated that they had expertise in affordable housing. Member directors reported high skill rates in areas such as accounting (80 percent), commercial and community banking (96 percent), and credit administration (83 percent). HERA did not specify skill requirements for member directors.
Independent directors were more likely to report skills such as project development (76 percent), community and economic development (67 percent), and affordable housing (66 percent). As previously discussed, HERA specified that independent directors have certain skills and experience. These independent directors reported having skills and experience in the specified areas: financial management (74 percent), auditing (37 percent) and accounting (62 percent), risk management (61 percent), derivatives (37 percent), project development (76 percent), and organizational management (86 percent). According to our survey results, of the HERA skill requirements, independent board directors reported having the most expertise in organizational management and the least expertise in derivatives. Figure 4 below summarizes board director skills, as self-reported in our survey, for the 12 FHLBank boards.

Board directors represented a broad spectrum of education levels. According to our survey of 2014 board directors, most reported their highest degree or level of education as a bachelor's degree (42 percent), followed by a master's degree (33 percent), professional degree or doctorate (19 percent), and associate's degree or high school diploma (5 percent). Additionally, a third of respondents reported having certifications in addition to traditional degrees. For example, 37 percent of board directors reported having additional certifications that qualified them as certified public accountants, chartered financial analysts, or licensed real estate brokers.

Based on our survey results, there was some variation with respect to board directors' careers. As previously noted, member directors represent member institutions and by law must be officers or directors of those institutions. Accordingly, we found that almost all member directors reported working in the banking and finance industry, and that they primarily held executive management positions. Our survey found independent directors reported greater industry sector representation and different types of occupations. For example, roughly a quarter of independent directors reported working in industries other than banking and finance, including industries such as community and economic development, affordable housing, state and local government, and real estate. Similarly, independent directors were more likely than member directors to have worked as consultants, real estate developers, affordable housing specialists, and attorneys. Overall, smaller shares of board directors reported having worked in the federal government (15 percent) or the insurance industry (28 percent).

In 2014, for member directors who represented member financial institutions, we found that commercial banks had the most representation, thrifts and credit unions some representation, and insurance companies and community development financial institutions the least or no representation (see table 4). Most member directors were from commercial banks, while commercial banks made up 66 percent of overall FHLBank membership. Credit unions had some representation on FHLBank boards (5 percent) compared with their general membership base (17 percent).

According to FHFA officials and the FHLBanks, the boards face some challenges that may limit their ability to increase diversity, including low director turnover, certain statutory requirements, and lack of diversity in the financial services sector. For example, only a few directorships are open for election each year because director turnover is limited.
FHFA determines the size of each FHLBank's board (in consultation with the bank), and FHFA officials told us that they had been seeking to limit the number of directors on each board to prevent the boards from becoming too large. Consequently, board size has decreased for some FHLBanks, resulting in fewer open seats for new independent directors who may bring diverse perspectives. Further, HERA mandates that directors serve 4-year terms. Directors remain permitted by statute to serve up to three consecutive full terms, which, with HERA's longer terms, results in total term lengths of 12 years for some directors. Given the 12-year term lengths, opportunities for new directors to join the boards are limited. Additionally, because directors can serve three consecutive full terms, many boards nominate incumbent independent directors for reelection, and these nominees may run unopposed. FHFA officials cited these challenges and acknowledged that the low turnover and term lengths, in particular, meant that increasing diversity would take time.

Other statutory requirements may also hinder efforts to change the composition of FHLBank boards. As previously discussed, HERA defines the composition of the board. At least a majority of the board must be member directors, with not fewer than two-fifths (40 percent) independent directors. While the FHLBank boards nominate independent directors, member directors are nominated by the member institutions within the FHLBank district, limiting the boards' role in selecting these directors. FHFA told us that the agency was aware of the potential difficulty of identifying diverse candidates for member directors and noted that increased board diversity would likely come from independent directors. Further, geographic requirements also affect the selection of board directors. For example, statute requires at least one member director per state. Additionally, statute requires that member directors be officers or directors of a member institution located in the district in which the FHLBank is located and that independent directors be residents of the district.

Additionally, boards must balance efforts to increase racial or ethnic and gender diversity with other qualities, such as HERA skill requirements, SEC expertise, and other needs of the boards. For example, board directors from the Atlanta and Des Moines FHLBank boards stated that their boards had been seeking individuals with risk management expertise in an effort to satisfy HERA requirements. Further, FHLBank management and SEC officials noted the importance of finding candidates with SEC expertise, because the FHLBanks are SEC registrants. SEC officials explained that the process of registering with SEC caused the FHLBanks to improve their accounting and organizational systems and added that board directors assumed personal liability risk for the FHLBanks. In terms of other board needs, three board directors from the Pittsburgh FHLBank board told us that their board was specifically looking for candidates with insurance expertise. This need to balance traditional gender and racial or ethnic diversity with required skills and other board needs has limited efforts to increase women and minority representation on boards.

FHFA officials noted the overall lack of diversity in the financial services sector, including at FHLBank member institutions, as increasing the challenges facing efforts to improve board diversity.
The representation of women and racial or ethnic minorities on financial services companies' boards was limited as well. For example, according to The Conference Board's 2013 survey of financial services companies' boards, roughly 14 percent of board directors were women and fewer than 10 percent were racial or ethnic minorities. We have previously reported that the main challenge to improving diversity was identifying candidates, noting that minorities and women are often underrepresented in both internal and external candidate pools for financial services management positions.

In response to the board governance changes under HERA, FHLBanks have established processes to identify and nominate independent directors. We found that these processes generally followed several commonly cited practices for improving board diversity, such as making diversity a priority and disclosing diversity practices. For example, our review of FHLBank bylaws and related written policies and procedures found that 9 of the 12 FHLBanks noted the importance of emphasizing diversity in the process of nominating and electing board directors. Additionally, FHFA has taken some steps to demonstrate the importance of diversity on FHLBank boards. In the preamble to its 2009 rule, FHFA urged FHLBanks to consider the importance of board diversity when identifying and nominating candidates. Further, in May 2015, FHFA issued a rule that allows for the collection and reporting of FHLBank board diversity information to FHFA. FHFA plans to evaluate this information, as well as the boards' outreach activities for identifying diverse candidates.

Since HERA's enactment, FHLBanks and their boards have developed and refined new processes to help them identify and nominate independent directors and respond to new requirements for their board directors. As previously discussed, in October 2009 FHFA issued a rule implementing several of HERA's governance changes, including eligibility requirements for independent directors and election processes. In the preamble of the rule, FHFA encouraged FHLBanks to consider the diversity of their board membership but did not mandate them to do so. Our review of the FHFA rule, the board minutes of all 12 FHLBanks, and selected interviews with FHLBank management and board directors found that FHLBanks and their boards currently follow generally similar processes for identifying new board directors. The steps the FHLBanks take are:
assessing the boards' needs as well as the makeup of the current board;
identifying any skill or experience gaps among the current directors;
discussing the number of seats that will be on the board (determined by FHFA) and determining how many board seats will be open to new candidates during the election cycle;
discussing upcoming elections and potential nominees for independent directorships with their advisory councils and seeking advisory councils' help with candidate outreach;
opening the election cycle with a call for nominees for member directors and applications for independent directors;
interviewing candidates for independent directorships and discussing their attributes to determine which candidate(s) to select for the ballot;
verifying the eligibility of candidates for independent directorships and submitting candidates' materials to FHFA for review;
providing a ballot to each member institution listing the candidates who will run in the upcoming election cycle; and
declaring the results of the election to members, nominees, and FHFA.
During the nomination and election process for independent directors, as well as to some extent when boards request nominees for member directors, the FHLBanks take steps that correspond to commonly cited practices to increase diversity on boards, particularly the representation of women and minority directors. As we have noted, our prior work has identified practices organizations should consider in developing and implementing diversity management programs in the overall workforce, in the financial services sector, and at federal financial regulators. We selected practices from our prior work and from those cited by representatives of corporate governance organizations and academic researchers that we determined were most applicable to FHLBank boards. As described later, these practices include making diversity a priority, conducting a skills and needs assessment, targeting diverse candidates, seeking new ways to find candidates, and disclosing diversity practices. Our review of all 12 FHLBanks' bylaws, policies and procedures, SEC filings, and board meeting minutes, as well as interviews with FHLBank representatives at 6 FHLBanks, showed that boards consider diversity in a variety of ways that correspond to these practices. We did not assess whether these practices had an effect on the diversity and composition of the FHLBank boards.

Making diversity a priority: Many of the FHLBank boards had taken actions that demonstrated a commitment to diversity. Our review of FHLBank bylaws, policies and written procedures, and written responses found that 9 of the 12 FHLBanks noted the importance of emphasizing diversity in the process of nominating and electing board directors. For example, the FHLBank of Boston's bylaws state that when making nominations, the board may consider factors including any experience on the board, qualifications, skills, and experience that could strengthen the board, and diversity. The FHLBank of Dallas reported that the board considers diversity important to a well-rounded board of directors able to meet its fiduciary responsibilities to the bank's members and staff. Demonstrating a commitment to diversity—such as discussing diversity in leadership policy statements or newsletters, or including diversity as part of the strategic plan—has been cited by GAO and others as a first step towards addressing diversity in an organization.

Conducting a skills and needs assessment: FHLBanks regularly assessed the skills, expertise, and experience of current board directors, allowing them to evaluate the board and identify additional skills and experience that would help improve the board's effectiveness and address current challenges. Our review found all 12 FHLBank boards used a skills matrix either annually or on an as-needed basis—for example, to help identify the types of independent candidates to consider. Additionally, several board directors noted that their board's use of a skills matrix or needs assessment helped in identifying desired skills and experiences for potential nominees, including those required for independent directors under HERA, and additional experiences such as in the insurance or information technology industries. Board directors from the FHLBanks of Atlanta and Pittsburgh told us they included diversity as part of their matrix (in addition to skills). Several corporate governance and diversity organizations have cited a skills matrix as a tool that boards can use to facilitate assessments of board needs in terms of skills and diversity.
Targeting diverse candidates: In our review of FHLBank board minutes and interviews with board directors, we found that some FHLBank boards explicitly discussed targeted efforts to add women or minority candidates to their boards. For example, the chair of the board of the FHLBank of Des Moines discussed finding a Native American director candidate because the FHLBank district had a significant Native American population. In our review of FHLBank board meeting minutes, we found that several boards had discussed the need to consider women and minority representation in the nomination process and had referenced presentations by FHFA OMWI staff or FHFA’s rule as it was proposed in June 2014 as among their reasons for doing so. Representatives from corporate governance organizations and academic researchers cited the practice of setting broad targets or goals for increasing the number of women and minority directors on boards as one way boards could increase diversity and ensure that they were taking steps to consider a diverse array of candidates. Seeking new ways to find candidates: The FHLBanks have taken several steps to diversify their applicant pools for directors. Nine of the 12 FHLBanks post information about their board director nomination and election processes on their websites. The FHLBank of Boston specifically encourages women and minority candidates to apply. Additionally, 7 FHLBanks reported that they sometimes consulted with outside groups to help solicit nominees, including organizations representing affordable housing, economic development, and consumer or community interests. The FHLBank of Topeka reported using a search firm, and the FHLBank of Seattle reported using director databases to identify independent director candidates for nomination. The FHLBank of Pittsburgh conducted outreach with organizations such as Executive Women International and Hispanic and African American Chambers of Commerce to seek out diverse applicants. Further, directors from the FHLBank boards of Dallas and Seattle reported using social media platforms to help attract candidates. For example, representatives from one FHLBank told us that LinkedIn had been useful in recruiting employees for the FHLBank and that they had decided to use it to identify potential independent director nominees. We found that all 12 FHLBank boards had separate governance committees and that 11 of the charters for these committees had responsibility for overseeing director nominations and elections. Also, as noted earlier, the FHLBanks consult with their advisory councils about potential nominees for independent directors. Board directors from 4 of the 6 FHLBanks we met with told us that advisory council members themselves had been nominated and subsequently elected to their boards. Board directors from three FHLBanks we interviewed told us that their advisory councils also provided them with information about potential candidates. For example, staff from the FHLBank of Atlanta told us that their board relied heavily on the advisory council to identify qualified, diverse candidates for independent director positions. These activities are in line with practices cited by representatives from corporate governance organizations and academic researchers, which reported that increasing board diversity could require current directors to reach out beyond the typical pool of applicants or their own personal networks to find qualified women and minority candidates. 
They cited several steps boards could take to find these candidates, such as partnering with outside organizations, using search firms to identify a broader pool of candidates, and creating nominating committees to focus on the selection of candidates.

Disclosing diversity practices: Some FHLBanks disclose the board's diversity policies and the way they consider diversity in nominating and selecting directors. We found that the boards of three FHLBanks (Atlanta, New York, and Seattle) had adopted public statements on board diversity and inclusion that they published on their websites. The statements noted that the FHLBanks encouraged the consideration of diversity (including gender, minority status, and disability classifications) in soliciting and nominating director candidates. In its 2014 public election announcement, the FHLBank of San Francisco's board stated it would consider, among other things, the overall composition of the board, the diverse communities of the FHLBank's district, and other factors it believes would contribute to a diverse, balanced, and effective board. In addition, 3 of the 12 FHLBanks stated in their annual SEC 10-K filings that their boards considered diversity in nominating independent directors. One of these FHLBanks, the FHLBank of Pittsburgh, noted that board directors should also consider diversity when nominating candidates for member director positions. These factors could be seen as additional attributes that enhanced traditional qualifications for board directors, according to the financial statements. Some representatives from corporate governance organizations and academic researchers have said that the disclosure of companies' diversity policies and the way they consider diversity in nominating and selecting directors could increase the representation of women and minorities on boards, particularly if they were required to do so by a regulator or an exchange. In 2009, SEC issued a rule requiring public companies to disclose whether, and if so how, they consider diversity when nominating candidates for director (Proxy Disclosure Enhancements, 74 Fed. Reg. 68334 (Dec. 23, 2009); 17 C.F.R. § 229.407(c)(2)(vi)). However, FHLBanks are not subject to this rule because they are exempt from the requirements to file proxy statements (12 U.S.C. § 1426a). SEC did not define diversity in this rule and, in adopting the rule, clarified that companies should be allowed to define the term in ways that they consider appropriate.

FHFA has taken some steps to demonstrate the importance of diversity on FHLBank boards. In rules issued in 2009 and 2010, FHFA urged FHLBanks to consider the importance of board diversity. In the 2009 rule, FHFA encouraged the boards to consider the diversity of their boards both when requesting nominees for member directors and when nominating candidates as independent directors. In the 2010 rule, FHFA stated that FHLBanks' policies and procedures should encourage the consideration of diversity. FHFA created an OMWI in January 2011 to promote diversity and the inclusion of women and minorities in all FHLBank activities, including the promotion of diversity on FHLBank boards. OMWI staff conducted a listening tour in 2013 that included visits to each of the 12 FHLBanks to discuss the OMWI's role and learn more about FHLBanks' diversity practices. OMWI staff also asked for suggestions on how to evaluate diversity metrics.
Figure 5 outlines the timeline of events from the enactment of HERA through the creation of OMWI and subsequent activities related to FHLBank board diversity. In June 2014, FHFA proposed a rule that would require FHLBanks to report the demographic profiles of board directors and a description of their related diversity outreach activities. FHFA finalized the rule in May 2015. The rule requires FHLBanks to report to FHFA their board directors' voluntarily self-reported demographic information, something FHLBank management and employees are already required to do. The purpose of the proposal was to allow FHFA and the FHLBanks to obtain access to data about board diversity, particularly numbers of women and minority directors, in order to better assess current levels of diversity and the impact of their outreach efforts (including those practices discussed earlier). The final rule states that current FHFA regulations require FHLBanks to implement policies that encourage the consideration of diversity in the nomination or selection of nominees for board directors. The rule states the FHLBanks are required to report on board diversity data and outreach efforts in their annual reports to OMWI, beginning September 30, 2015. FHFA officials also told us that although FHLBanks could interpret diversity broadly and might seek other types of diversity on their boards, the OMWI focuses on the representation of women and minorities, and OMWI expected FHLBanks to have diversity outreach programs that targeted these two groups.

Our prior work and that of a global research institute have cited the practice of collecting diversity data as a way to better measure and assess the effectiveness of diversity programs in increasing diversity. For example, our 2011 report on governance practices at the Federal Reserve Banks found that each bank submitted demographic information that directors provided voluntarily. Moreover, in our April 2013 report on OMWIs, we found that not all OMWIs were reporting on the outcomes of their diversity practices. As a result, we recommended that the federal financial regulators collect data and report on measurement outcomes as part of their reports to Congress in order to enhance their own diversity initiatives. The agencies and Federal Reserve Banks generally agreed with our recommendations and, as of April 2015, the recommendations remained open.

Nine FHLBanks and the FHLBanks' Office of Finance jointly commented on FHFA's proposed rule, citing, among other things, the System's aim to continue to ensure greater access to affordable housing for all sectors of society (Letter from FHLBanks of Atlanta, Boston, Chicago, Cincinnati, Des Moines, New York, Pittsburgh, Seattle, Topeka and the FHLBanks' Office of Finance to FHFA regarding Notice of Proposed Rulemaking Regarding Minority and Women Inclusion Amendments, RIN: 2590-AA67, August 25, 2014).

Under its new director, FHFA's OMWI reported starting several new initiatives related to board diversity. OMWI officials are working with FHFA policy and supervision staff to develop new supervisory guidance for reviewing diversity efforts. They plan to analyze diversity data collected under the May 2015 rule to assess trends and outreach efforts. They also noted several upcoming collaborations among the FHLBanks to promote diversity throughout the FHLBank System, including on boards of directors. For example, during an upcoming summit of human resources directors, OMWI staff are scheduled to present training on how FHLBanks can improve diversity in the workforce and implement diversity requirements.
The full extent of community lending supported by FHLBanks is unknown because data are not available on member institutions' use of advances. However, the FHLBanks support community lending through two types of programs: those designed by individual FHLBanks and targeted to the needs of their districts (unique programs) and system-wide programs that are authorized by statute and generally available in every district. Based on our analysis, the level of activity under these programs varies across the FHLBanks. Our survey results and interviews with FHLBank board directors indicated that directors were responsible for overseeing community lending and that those who served on their bank's affordable housing or community development committees had the most oversight responsibilities. Board directors responding to our survey and FHLBank representatives noted that several factors affected FHLBanks' ability to increase members' community lending.

Community lending is a key component of the FHLBank System's mission. However, it is difficult to know the full extent of FHLBanks' support for community lending because data on how member institutions use regular advances are not available. Specifically, the FHLBanks annually report their total advances to member institutions. These advances may be used for a variety of purposes, including housing finance and community lending, and member institutions do not have to specify how they use them. Advances can be structured in any number of ways, allowing each FHLBank member institution to find a funding strategy that is tailored to its specific needs. While the FHLBanks report advances for community lending through the CIP and CICA program, ultimately other advances may support community lending. For example, FHLBanks may offer advances that have various terms and rates to meet community needs such as long-term financing for small businesses or agricultural development. Additionally, the FHLBanks offer letters of credit to facilitate member institutions' transactions with third parties but report only the balances of those letters of credit to FHFA. Letters of credit can affect community lending—for example, by helping developers obtain funding for economic development.

Our review of FHLBank reports and statements from FHLBank representatives showed that 6 of the 12 FHLBanks offered unique community lending programs with different funding types in addition to the system-wide programs. These unique programs include Chicago's Community First Fund®, Cincinnati's Zero Interest Fund, Dallas's Partnership Grant Program, Pittsburgh's Banking on Business and Blueprint Communities®, San Francisco's Access to Housing and Economic Assistance for Development, and Topeka's Joint Opportunities for Building Success (see table 5). These unique programs often complement the FHLBank's system-wide programs. In 2015, $2 million to $4 million in funding was made available for two of the three loan funds, and the third loan fund was in the amount of $50 million. In addition, each of the three grant programs was approved for less than $1 million. Most of these unique programs were implemented before HERA.

Representatives of four of the six FHLBanks we spoke with (Chicago, Dallas, Pittsburgh, and San Francisco) said that conducting a needs assessment had been helpful in the planning, development, and execution of unique community lending programs.
Specifically, some of these FHLBanks hired third parties to conduct a housing needs assessment or surveyed their members and used the findings to help board directors and advisory councils better understand the regional characteristics and needs of the states within their district. FHLBank representatives we spoke to from Chicago, Dallas, and Pittsburgh said that there had been an increased focus on community lending in recent years. According to these FHLBank representatives, this shift occurred as a result of aligning the activity of the advisory council with the strategic goals of the FHLBank, encouraging directors to attend advisory council meetings, and strengthening the FHLBank's community investment staff. For example, the FHLBank of Chicago expanded its community investment team to better engage with and support the FHLBank's members and their communities. Use of system-wide programs varied, with some FHLBank member institutions using the community lending advances under the voluntary CICA program more than others. Total CICA advances across the FHLBanks increased less than 2 percent between 2013 and 2014, but two FHLBanks more than doubled their community lending under these programs. Specifically, the FHLBank of Indianapolis increased its lending to member institutions by 71 percent, and the FHLBank of Chicago increased lending to member institutions by 54 percent (see fig. 6). Another system-wide program the FHLBanks use to support community lending is CIP. Under CIP, the FHLBanks provide advances to member institutions for housing projects, economic development projects, or both. Member institutions' use of CIP varies. For example, in 2014 four FHLBanks funded member institutions for both housing projects and economic development projects (Boston, Chicago, Cincinnati, and Topeka). The remaining eight FHLBanks provided member institutions with CIP advances for housing projects only. In 2014, total CIP advances were approximately $44.6 million. Results of our survey of 2014 board directors showed that 71 percent of respondents had some level of responsibility for overseeing community and economic development programs at their FHLBanks. Of these respondents, 53 percent were member directors, 31 percent were independent directors (not public interest), and 16 percent were public interest directors. In addition, about 36 percent of these respondents stated that they had an oversight role because they were serving on their FHLBanks' affordable housing and community and economic development committee. According to our survey results, about 70 percent of public interest and independent directors had a career in community and economic development or expertise in community and economic development, and the majority of these respondents were appointed prior to HERA. Additionally, six FHLBank board chairs (two member directors, three independent directors, and one public interest director) reported having community and economic development experience. FHLBank board directors and representatives we interviewed stated that directors who serve on affordable housing and economic development committees have greater responsibility for overseeing their FHLBank's community lending programs and the metrics that monitor and evaluate these programs. Among other things, these board directors review district needs assessments, provide quarterly reports on the status of programs, determine funding allocations, and recommend new products for development.
These directors also review AHP implementation and community lending plans by suggesting changes and providing comments during meetings. For instance, they ensure that affordable housing and community lending programs comply with applicable legal and regulatory requirements and are effective in meeting the goals for these programs. Board directors who serve on these committees make suggestions to the full board for approval, and committee chairs provide reports on the committee’s activity. Appendix II provides a description of the various affordable housing and community and economic development committees of the 12 FHLBanks. FHLBank board directors serving on affordable housing and economic development committees generally serve as liaisons to the advisory council and attend advisory council meetings. Six FHLBank board directors stated that the advisory council members provide the boards with an “on-the-ground” assessment of the FHLBank’s programs. In addition, the board encourages the advisory council to suggest potential community and economic development products and programs for the board to consider. The role of the board is to determine how possible products and programs would align with the FHLBank’s risk management profile. Our board director survey results showed and FHLBank representatives told us that several factors affected their ability to increase community development lending to member institutions, including FHFA program requirements and statutory limitations, funding for FHLBanks and the availability of developers, collateral requirements, and the financial position of the FHLBanks. FHFA program requirements and statutory limitations: Fifty percent of those responding to our survey said that FHFA program requirements affected community development lending. FHLBank representatives and five survey respondents commented that both the CIP and CICA program had outdated eligibility criteria and regulatory requirements—for example, they said that statutory limits on eligible incomes impacted the volume of applicants. Thirty-eight percent of respondents noted that statutory limitations affected community lending. However, 33 percent of respondents did not view statutory limitations as a limiting factor, and 29 percent of respondents were not sure if statutory limitations were a limiting factor. Funding and the availability of developers: Two frequently mentioned factors affecting community lending were the availability of funds to meet the number of requests from member institutions and the availability of developers and other lenders. Our survey results showed that 47 percent of respondents thought the availability of funds to meet the number of requests from members was a limitation. In addition, 43 percent of respondents stated that the availability of developers and other lenders to fund community and economic development activities limited this type of lending. Seven survey respondents commented that there were a limited number of developers as a result of the economic downturn. FHLBanks’ collateral requirements: According to 41 percent of respondents to our survey of board directors, collateral requirements were a limitation on community lending activities. In addition, nine survey respondents generally commented that collateral requirements of the FHLBanks may have limited CDFIs’ borrowing capabilities. 
We recently reported on issues affecting nondepository CDFIs’ ability to become members of the FHLBank System and found that collateral requirements could prevent many of these institutions from becoming members. CDFIs’ primary mission is to provide capital and development services to economically distressed communities underserved by conventional financial institutions. FHLBank representatives and board directors we spoke with from all six FHLBanks generally said that they had discussed expanding the pool of eligible collateral in response to industry needs but that it was important to closely monitor any associated risks. They also told us that the FHLBanks were focused on valuing collateral appropriately and taking the needs of their members into consideration. Financial position of the FHLBanks: According to our survey of 2014 board directors, 23 percent of respondents indicated that the financial position of their FHLBank affected community lending. Representatives from two of the six FHLBanks (Des Moines and Dallas) we spoke with told us that demand for advances had dropped in recent years because their members had seen an increase in deposits, further limiting their ability to expand their support of community lending. Total outstanding advances across the FHLBank System in 2014 represented an 11 percent decline from their 2009 levels. However, in 2013 and 2014 total outstanding advances across the FHLBank System rose 15 percent and 13 percent, respectively (see fig. 7). Other factors: We also heard from FHLBank representatives that HERA created a temporary exception to the general restriction against federal guarantees of tax-exempt bonds that enabled bond issuers to use FHLBank letters of credit to enhance tax-exempt bonds for nonresidential community development projects, lowering the cost of those bonds. This exception expired at the end of 2010. While this exception was not widely used across the FHLBank System, some FHLBanks found it to be successful in increasing community lending. For example, the FHLBank of Atlanta used the exception to restore a hotel and revitalize a hospital in its district. Representatives of the FHLBank of Atlanta told us that they relied on letters of credit to reduce financing costs for their member institutions in order to support community lending, and were satisfied with the results produced by the letters of credit in their district. We provided a draft of this report to FHFA and each of the 12 FHLBanks for review and comment. Additionally, we provided relevant excerpts of the draft report to SEC for review and comment. We received technical comments from FHFA, SEC, and the FHLBanks of Atlanta, Boston, Chicago, and San Francisco which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to FHFA, each of the FHLBanks, the Council of FHLBanks, and SEC. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
This report examines (1) the Housing and Economic Recovery Act (HERA) of 2008's governance changes and their implementation; (2) diversity on Federal Home Loan Bank (FHLBank) boards and challenges the Federal Housing Finance Agency (FHFA) and the FHLBanks have faced in trying to increase it; (3) efforts that FHLBanks and FHFA have taken to improve diversity; and (4) FHLBanks' community lending programs and the boards' oversight of them. For the purposes of this report, the concept of diversity includes representation by gender, race, and ethnicity and can encompass differences in backgrounds, skill sets, and experience. For each of our objectives, we reviewed relevant laws, particularly HERA and FHFA's implementing regulations. We obtained information, for descriptive purposes, from FHFA on the number of member financial institutions for each of the 12 FHLBanks, as well as the asset size of each FHLBank, for 2013 and 2014. We conducted 30 semistructured interviews with 6 of the 12 FHLBanks (Atlanta, Chicago, Dallas, Des Moines, Pittsburgh, and San Francisco). To help ensure that we collected a range of perspectives, we selected these locations based on several criteria, including asset size, number of member institutions, female and minority representation on boards, community and economic development lending activity, and location. We conducted these interviews to learn more about board governance and diversity practices and how the board and bank management oversee FHLBank policies and programs. We conducted in-person interviews with officials at four of the six FHLBanks; where we could not schedule in-person interviews, we held teleconferences with the remaining two FHLBanks. For each of the six locations, we held group interviews with FHLBank management and community investment staff and a separate group interview with representatives from the FHLBanks' advisory councils. The interviews with FHLBank management and staff were conducted jointly. Generally, the number of representatives from FHLBank management and staff differed by FHLBank because we allowed the FHLBanks to determine who should be present based on the questions we sent in advance, but in all cases the FHLBank's corporate secretary participated. For our interviews with advisory councils, we generally interviewed the chair and vice-chair of the advisory council in addition to other advisory council representatives. Additionally, we conducted individual interviews with three board directors (a member, independent, and public interest director) from each of the six FHLBanks. We also interviewed officials from FHFA, including officials within FHFA's Division of Bank Regulation and Office of Minority and Women Inclusion (OMWI). We examined three sets of board meeting minutes for each of the 12 FHLBanks from 2013 to 2014, including (1) full board meeting minutes, (2) committee meeting minutes for each FHLBank's governance committee, and (3) committee meeting minutes for each FHLBank's affordable housing and/or community and economic development committee. We reviewed a total of 560 documents on-site at FHFA and used a data collection instrument to independently code relevant information, as discussed later in this appendix. We also interviewed representatives of the Council of the FHLBanks—the trade association representing the 12 FHLBanks.
Additionally, we interviewed five trade associations—the American Bankers Association, Independent Community Bankers Association, Community Development Bankers Association, Credit Union National Association, and the National Federation of Community Development Credit Unions. These interviews included representatives from FHLBank member institutions that were also members of these trade associations. Finally, we interviewed representatives from several corporate governance organizations and academic institutions, as discussed later in this appendix. As part of our work, we conducted a web-based survey of all FHLBank directors who served in 2014. The purpose of this survey was to gather information from FHLBank directors on their roles and responsibilities and obtain their opinions on HERA’s changes, as well as to collect data on diversity on the FHLBank boards. We sent a survey to all 189 directors who served during 2014. We received information from FHFA on the number of board directors designated for 2014 across all 12 FHLBanks and we confirmed the names and director types for the 189 directors with contacts at each of the FHLBanks. We received completed surveys from 178 board directors, for a 94 percent response rate. Six of the 12 FHLBanks had a 100 percent response rate (Atlanta, Boston, Dallas, Pittsburgh, San Francisco, and Topeka). The remaining six FHLBanks (Chicago, Cincinnati, Des Moines, Indianapolis, New York, and Seattle) had response rates ranging from 80 percent to 94 percent. The web-based survey was administered from November 19, 2014, to January 31, 2015. Board directors were sent an e-mail invitation to complete the survey on a GAO web server using a unique user name and password. Nonrespondents received three reminder e-mails from GAO to complete the survey. We presented at the Council of FHLBanks’ September 2014 meeting to encourage board directors to participate in the survey. As a final step, we contacted the corporate secretaries at each of the FHLBanks and asked for their preference on whether we should send a final reminder e-mail or conduct telephone calls to all nonrespondents. Based on their preferences, we made phone calls to some of the nonrespondents and sent the remaining nonrespondents a fourth and final e-mail reminder about the survey. Because this survey was not a sample survey, it has no sampling errors. The practical difficulties of conducting any survey may introduce nonsampling errors, such as difficulties interpreting a particular question that can introduce unwanted variability into the survey results. We took steps to minimize nonsampling errors by pretesting the survey questions in person and by teleconference with five board directors (including three board chairs) from five different FHLBanks. We also received comments from the Council of the FHLBanks––the trade association representing all 12 FHLBanks—based on their review of our draft survey. We conducted pretests to make sure that (1) the questions were clear and unbiased, (2) the data and information were readily obtainable, and (3) the survey did not place an undue burden on respondents. We made appropriate revisions to the content and format of the survey after the pretests and independent review. All data analysis programs used to generate survey results were independently verified for accuracy. Additionally, in reviewing the answers from board directors, we confirmed that respondents had correctly bypassed inapplicable questions (skip patterns). 
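The skip-pattern verification described above lends itself to a simple automated check. The following Python sketch is a minimal illustration of that kind of test; the file name, question identifiers, and gating value are hypothetical and are not drawn from the actual GAO survey instrument.

import csv

def check_skip_pattern(rows, gate_question, dependent_question, gate_value="yes"):
    """Return indexes of respondents who answered a question they should have skipped."""
    violations = []
    for i, row in enumerate(rows):
        gated_off = row.get(gate_question, "").strip().lower() != gate_value
        answered_anyway = row.get(dependent_question, "").strip() != ""
        if gated_off and answered_anyway:
            violations.append(i)
    return violations

# Hypothetical export of survey responses; in this illustration, question q10
# determines whether question q11 should have been answered.
with open("survey_responses.csv", newline="") as f:
    responses = list(csv.DictReader(f))

bad_rows = check_skip_pattern(responses, gate_question="q10", dependent_question="q11")
print(f"{len(bad_rows)} of {len(responses)} responses violate the q10-to-q11 skip pattern")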
To determine HERA's governance changes and the implementation status, we reviewed relevant laws and FHFA's regulations, including final rules on director eligibility, elections, and compensation, and interviewed FHFA officials to determine the extent to which HERA's changes to governance had been implemented. Specifically, we reviewed information about FHLBank governance in the Federal Home Loan Bank Act; the Financial Institutions Reform, Recovery, and Enforcement Act of 1989; and the Gramm-Leach-Bliley Act. To determine board directors' roles and responsibilities and committee information, we reviewed relevant regulations, FHFA's examination manual (which includes a section on board director responsibilities), and FHLBank bylaws and committee charters. We also reviewed FHFA's proposed rule on board director responsibilities, corporate practices, and corporate governance matters, and as of April 2015, this rule had not been finalized. We also analyzed FHFA data on board director compensation for 2013 and compared it to the pre-HERA compensation caps established by the Gramm-Leach-Bliley Act. As part of our semistructured interviews, we asked FHLBank management and staff, FHLBank advisory council representatives, and board directors about their views on HERA's changes and summarized their responses. Additionally, we analyzed responses we received from our survey of board directors for questions related to the impact of HERA's governance changes. Finally, we asked trade associations and representatives from FHLBank member institutions for their opinions on HERA's governance changes. To summarize the composition and diversity of the 2014 board directors, we summarized information received from each of the 12 FHLBanks and analyzed results from our survey. Because demographic information was not publicly available for FHLBank board directors, we requested information from the FHLBanks. Specifically, we sent a request to each FHLBank to collect data on the number of men, women, racial or ethnic minorities, non-racial or nonethnic minorities, and the total number of board members who served during 2014. We also collected information about board directors who served in 2014 from the U.S. Securities and Exchange Commission's (SEC) 10-K filings, SNL Financial, and FHLBank websites. Additionally, in our survey of 2014 board directors, we asked respondents to self-identify gender, origin, and race. The responses we received from the FHLBanks were generally based on observation, while the survey data were self-reported. For data on the number of women and racial or ethnic minorities, we relied on information provided by each of the FHLBanks. For women and minority representation on FHLBank boards, we compared the data received from the FHLBanks to the survey data we received for data reliability purposes. To determine FHLBank and FHFA practices related to board diversity, we first reviewed relevant regulations and all 12 FHLBanks' policies and procedures for identifying, nominating, and selecting board directors. We reviewed and analyzed publicly available information including FHLBanks' bylaws, 2013 SEC 10-K filings, and FHLBank websites for information about their nomination and election procedures and whether these documents discussed diversity of board directors.
Specifically, we reviewed FHLBanks' bylaws to determine whether they generally contained the same information, such as whether the board consulted with outside groups or the advisory council for nominees, whether the board considered diversity in the nomination process, or whether the board had a committee in charge of elections. In addition, we reviewed each FHLBank's SEC 10-K filing, Part III, Item 10—where each FHLBank includes information about FHFA regulations and eligibility requirements for independent board director nominees. We reviewed the SEC 10-K filings for information about board directors and whether the FHLBanks stated they considered diversity in the nomination of board directors. We also reviewed each FHLBank's website during the 2014 election cycle to determine if the FHLBank posted an announcement about its upcoming director election process and included a timeline of its election process, and checked whether the posted announcement encouraged women and racial or ethnic minorities to apply. We also obtained and reviewed any written nomination policies and procedures, in addition to committee charters, annual reports, and other information provided by each of the FHLBanks, to determine how their board sought nominations for directors, the extent to which they solicited nominees from outside organizations, and whether they had any statements related to board diversity. Additionally, we reviewed GAO survey results on the nomination and election processes and summarized information received from our interviews with FHLBank management and staff and FHLBank board directors. We also reviewed full board meeting minutes (as available) for each of the 12 FHLBanks from 2013 to 2014, including committee meeting minutes from each FHLBank's governance committees. Specifically, using our data collection instrument we independently coded examples of discussion of the bank's nomination and election processes. Lastly, we reviewed HERA requirements for FHLBank elections and interviewed FHFA's Division of Bank Regulation about its process for reviewing member and independent nominees for directorships and FHFA's member and independent director application forms. To identify commonly cited practices used to improve board diversity, we reviewed our previous work on diversity management in the financial industry, among federal financial regulators, and on Federal Reserve Bank boards. We also conducted a literature search to identify practices cited in studies and research papers related to diversity on the boards of directors and in the selection and election of board directors, using key words such as board diversity and directors and board directors and elections or nominations. Out of over 100 articles the literature search returned, we reviewed 31 reports and articles that seemed most relevant. We did not find any articles that directly addressed FHLBank governance, so we relied mostly on literature focused on diversity on corporate boards. We reviewed all of the practices cited in our prior work and the selected literature to develop a list of practices cited by multiple organizations as ones that would promote diversity on boards. We selected practices that we determined were most applicable to FHLBank boards, as their election processes are different from those of public corporations with shareholders. We excluded practices from the literature that pertained only to corporate boards of publicly traded companies, such as reforming shareholder elections, because they were not applicable to FHLBanks.
We also interviewed representatives from organizations including Catalyst, The Conference Board, and the National Association of Corporate Directors, as well as researchers from Stanford University's Rock Center for Corporate Governance and Harvard Business School. We selected these organizations based on their research on topics related to board composition and the effect of diversity on boards. We also reviewed SEC's rule related to corporate disclosure of boards' director and nomination selection processes. We then analyzed the FHLBank practices and compared them to the commonly cited practices. As previously discussed, FHLBanks are private, member-owned cooperatives, and the board is more limited in its ability to influence nominees for member director positions. We also interviewed FHFA officials about the status of the proposed rule and the activities they had undertaken to review FHLBank board diversity. In addition, we reviewed public comment letters for FHFA's proposed rule on board diversity to obtain information about others' views on the proposed rule, for example, the FHLBanks' joint comment letter to FHFA. We also reviewed FHFA's final rule on board diversity when it was issued in May 2015. To summarize the extent of community lending by FHLBanks, we analyzed FHFA data on community and economic development programs—the Community Investment Program (CIP) and the Community Investment Cash Advance (CICA) program—including data on advance commitments from 2008 through 2014. We also reviewed SNL Financial data for information on FHLBank total advances from 2009 through 2014 to assess conditions post-HERA. To assess the reliability of FHFA data, we reviewed information about the data and the systems that produced them, interviewed FHFA officials on how they assess the reliability of data on affordable housing and community lending programs, and reviewed the data ourselves to assess completeness and look for inconsistencies. To verify the accuracy of SNL Financial data, we compared a sample of advance-level data to SEC 10-Ks to look for inconsistencies and interviewed SNL Financial representatives about any changes to their data systems. We consider the information to be reliable for our purposes of determining the level of FHLBank community lending advances and reporting on total advances across the FHLBanks. We also reviewed publicly available documentation from FHLBanks, including community lending plans, advisory council reports, and annual reports, as well as information on each FHLBank's website, to determine if the FHLBank offered a unique community lending program in addition to its system-wide community lending programs. For the 6 FHLBanks we spoke with, we asked them to verify that our list of their unique and system-wide programs was accurate. We also reviewed relevant FHFA reports to identify FHLBank policies and programs, including FHFA's annual reports on the housing and community development activities of the FHLBanks. Finally, as discussed above, we reviewed full board meeting minutes, including committee meeting minutes from each FHLBank's relevant affordable housing and/or community and economic development committees. Specifically, using our data collection instrument we independently coded examples of discussion of the FHLBank's affordable housing and community investment policies and programs.
As part of our semistructured interviews, we asked FHLBank management and staff, FHLBank advisory council representatives, and board directors about their policies and programs that support community lending, limitations to expanding support of community lending, and interactions between board directors and advisory council representatives. We incorporated these limitations into our survey of board directors and also surveyed board directors on their involvement in overseeing community lending. We summarized the responses received to these two questions, including open-ended responses on board directors' opinions on any limitations the FHLBanks may face in expanding such lending. We also interviewed the trade associations cited earlier about their members' use of FHLBank community lending programs. We conducted this performance audit from June 2014 to May 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Each of the 12 Federal Home Loan Bank (FHLBank) boards has a committee that addresses affordable housing and community and economic development programs and policies for its respective FHLBank (see table 6).
Daniel Garcia-Diaz, (202) 512-8678, [email protected]. In addition to the individual named above, Kay Kuhlman (Assistant Director), Beth Ann Faraguna (Analyst-in-Charge), Susan Baker, Emily Chalmers, Pamela Davidson, Katherine Bittinger Eikel, Jill Lacey, Marc Molino, Lauren Nunnally, Kelsey Sagawa, and Jessica Sandler made key contributions to this report.
In 2014, the FHLBank System had over 7,300 member institutions and approximately $570 billion of loans (advances) outstanding. The system consists of 12 regionally based banks that are cooperatively owned by member institutions. Its mission is to serve as a reliable source of liquidity for members to support housing finance and community lending. In 2014, each FHLBank had a board of 14 to 19 directors that includes elected directors from member institutions and independent directors from outside the system, including at least 2 with consumer or community interests. GAO was asked to review legislative changes to FHLBank governance and the diversity of the FHLBanks' boards of directors. This report discusses (1) the governance changes and their implementation; (2) FHLBank boards' diversity; (3) FHLBank and FHFA efforts to improve diversity; and (4) community lending programs and boards' oversight of them. GAO analyzed FHLBank data and reviewed bylaws, policies, board meeting minutes, and regulations. GAO also reviewed previous work on diversity in the financial services industry, as well as literature on governance and diversity, and surveyed all 189 FHLBank directors serving in 2014 (with a 94 percent response rate). GAO interviewed FHLBank management and staff, board directors, and advisory councils at six FHLBanks selected by number of member institutions, asset size, volume and type of community lending activities, and location. The Housing and Economic Recovery Act (HERA) of 2008 changed several aspects of the Federal Home Loan Bank System's (FHLBank System) governance. Among other things, HERA required that independent directors on the FHLBank boards be elected by member institutions (for example, commercial banks, thrifts, credit unions, and insurance companies) rather than appointed by the regulator. HERA also added certain skill requirements, removed compensation caps, and created a new regulator for the system, the Federal Housing Finance Agency (FHFA). FHFA fully implemented the governance changes through two rules in 2009 and 2010. Board directors, FHLBank representatives, and others generally view HERA's governance changes as positive because the changes give FHLBank boards greater control over nominees and help ensure that candidates have specific skills and experience. Women and minority representation on FHLBank boards is limited (see figure). A woman chaired 1 of 12 FHLBank boards in 2014, but no racial or ethnic minorities did. Most women and racial or ethnic minorities were independent directors rather than member directors. Directors' skill sets were more diverse. For example, member directors responding to GAO's survey were more likely than independent directors to report having skills in accounting and banking. Independent directors were more likely to report having skills in project development, community and economic development, and affordable housing.
Figure: Women and Minority Representation on Federal Home Loan Bank Boards, 2014
FHLBanks and FHFA have taken steps to increase board diversity. Since HERA's enactment, FHLBanks and their boards have developed processes to identify and nominate independent directors. GAO found that these processes generally followed several commonly cited practices for improving diversity, such as diversifying the applicant pools for directors. A 2009 FHFA rule encourages FHLBanks to consider diversity when selecting candidates, and a 2015 rule requires the FHLBanks to report information on board diversity in their annual reports.
FHFA plans to begin evaluating board data and other information on outreach activities related to board diversity. Community lending varies across the FHLBanks. For example, 6 of the 12 FHLBanks offer unique community lending programs in addition to the system-wide programs. Under the Community Investment Program, which provides funds for housing and economic development, 4 of the 12 FHLBanks used the funds for economic development in 2014. FHLBanks have committees that are responsible for overseeing these activities, and, according to GAO survey results, directors serving on these committees have greater responsibility for overseeing community lending programs.
Administrative records are a growing source of information about individuals and households. Administrative records include records from government agencies, such as tax data and Medicare records, as well as commercial sources from major national data vendors. National administrative records refer to data compiled and maintained nationwide, including files compiled for the purpose of administering federal programs. In comparison, data compiled and maintained by municipalities are referred to as local administrative records, such as building permits and local tax records. According to Bureau officials, for the 2020 Census, the Bureau is researching how to determine the quality and usefulness of administrative records for obtaining addresses or information about individuals. Administrative records could reduce the cost of the census if they can help the Bureau reduce the workload for several operations, including address list building; quality assurance; and nonresponse follow-up, which, at $1.6 billion in 2010 and lasting several weeks, was the largest and most costly census field operation. The Bureau's 2020 research and testing program has nine research and testing projects that are exploring the expanded use of administrative records for these purposes. The Bureau already uses administrative records to produce annual population estimates for the nation, states, counties, cities, towns, and townships as part of its program to estimate changes in population size and distribution since the previous census. The Bureau produces estimates at the state and county level based on births, deaths, migration, and changes in the number of people who live in group quarters, such as college dormitories and nursing homes. Estimates of the population of subcounty communities—which consist of both incorporated places, such as cities, boroughs, and villages, and minor civil divisions, such as towns and townships—are primarily based on data on housing units, occupancy rates, and persons per household plus an estimate of the population in group quarters. Local governments may challenge these population estimates through a process established by the Bureau's Population Estimates Challenge Program. Local data sources for challenges have included building permits, non-permitted construction, demolition permits, non-permitted demolitions, certificates of occupancy, utility connection data, and real and personal property tax information on residential units. The Bureau permitted localities to submit challenges for population estimates from 2001 to 2008. Counties and subcounties could challenge the Bureau's estimate based on evidence that the number of housing units in their locality differed from the Bureau's estimate of housing units. From 2001 to 2008, the Bureau reports revising population estimates from 287 challenges. These challenges were from 211 governments in 36 states and the District of Columbia. There were as few as 3 challenges in 2001 and as many as 61 each in 2006 and 2007. In 2010, the Bureau temporarily halted the challenge program, beginning with estimates for 2009, to accommodate the 2010 decennial census. The Bureau is resuming the challenge program in 2013. According to the Bureau, communities that challenged their population estimate saw their estimate revised upward by an average of about 9.4 percent over the 8-year period.
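The housing-unit approach described above for subcounty communities reduces to a simple calculation. The following Python sketch illustrates the idea with made-up inputs; the figures are for illustration only and are not Bureau data, and the Bureau's production methodology involves additional adjustments.

def housing_unit_estimate(housing_units, occupancy_rate, persons_per_household, group_quarters_pop):
    # Household population is housing units times the occupancy rate times average household size;
    # the group quarters population is added on top.
    household_population = housing_units * occupancy_rate * persons_per_household
    return household_population + group_quarters_pop

# Illustrative community: 4,000 housing units, 92 percent occupied,
# 2.5 persons per household, and 300 people in group quarters.
estimate = housing_unit_estimate(4_000, 0.92, 2.5, 300)
print(f"Estimated population: {estimate:,.0f}")  # 9,500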
These challenges ranged from an estimated increase of over 186 percent for Bluffton, South Carolina, in 2008, to a decrease of almost 18 percent for Winthrop, Massachusetts, during the same year. In total, four communities submitted challenges containing evidence that resulted in a decrease in population. The effect of the program on smaller communities that participated was much larger, on average in percentage terms, than it was for larger communities that participated. According to Bureau data summarizing the program, communities with populations of less than 100,000 averaged an almost 13 percent upward revision in their population estimate—including the results of the challenge—compared to an average revision of less than 2 percent for communities with a population of more than 1 million (see table 1). Previously, the Bureau provided communities that chose to challenge their population estimate with examples of the types of records they could use to support their calculations, including permits for new residential construction, public utility connection data, real and personal property tax information on residential units, and records of annexations and other types of legal boundary changes. In practice, according to agency officials, the Bureau generally accepted all challenges, largely without regard to the data sources provided so long as they supported calculations of population change and covered the reporting periods required by the challenge program. The rule changes are intended to improve the quality of data that communities use to challenge the Bureau's population estimates and will affect the scope of what county and state governments can challenge. According to Bureau officials, these changes are based on research that shows that estimates based on some methods and records (e.g., births, deaths, and migration) are substantially more accurate than estimates based on others. The rules vary for different levels of government—subcounty, county or equivalent, and state. Subcounty. Under the new rules, subcounty governments should use building, demolition, and mobile home permits, and group quarter counts to challenge population estimates. The Bureau plans to reject challenges that rely solely on other types of data, except in cases where the data provide overwhelming evidence that the Bureau's estimate of population growth is in error. For example, according to a senior Bureau official, research shows that public utility records can vary widely in their reliability to indicate population growth, as well as in their availability to the public. Therefore, utility records will be treated as corroborating, or secondary, evidence to support other preferred data expected to be provided in support of challenges. Utility records were ordinarily accepted as a basis for successful challenges in the past. County and equivalent. Under the changes, county governments and their equivalents should use birth and death records, immigration data, and group quarters counts to challenge population estimates—a significant reduction in the types of records from what the Bureau has historically accepted. According to Bureau officials, research conducted over the past several years has demonstrated that county-level estimation methods based on other data sources do not produce, on average, as accurate estimates of population. Consequently, the Bureau plans to reject county-level challenges relying on other data sources.
Under the changes, the Bureau will only accept county-level challenges to either the accuracy of the data the Bureau itself used when producing estimates or to whether the Bureau carried out its estimation procedures properly, such as in handling data files properly or carrying out calculations. Bureau officials acknowledged that local governments might see this reduction in the scope of permissible county-level challenges as being too restrictive, as evidenced by some public comments submitted in response to the proposed changes. Bureau officials stated that they will continue their dialogue with representatives of the Federal-State Cooperative for Population Estimates—a partnership between the Bureau and state governments—and other researchers about improving methods of estimating population, and may revisit the structure of the challenge program if future research demonstrates that an alternative method of estimating county population consistently outperforms that used by the Bureau. Figure 1 shows changes in the types of administrative records accepted by the challenge program. State. The Bureau will generally no longer permit state governments to submit their own challenges. Bureau officials stated that counties and cities are the most appropriate entities to submit a population challenge for their community because they have greater knowledge of their population. Bureau officials also said that they want to avoid situations where a state may challenge estimates for communities where respective local governments disagree with such challenges. The Bureau will allow states to submit challenges for counties or equivalents where there is no seat of government, such as in certain New England states and in parts of Alaska, but will otherwise require all communities affected by a challenge to have their government communicate directly with the Bureau. The changes to procedures are intended to improve the accuracy of revisions to population estimates stemming from challenges. The Bureau modified procedures so that challenges by subcounty governments to the Bureau’s estimates of people living in traditional housing—not living in group quarters arrangements—will no longer affect county-level population estimates. Previously, successful subcounty challenges were added to respective county-level populations. The Bureau justifies the change with its research demonstrating that its method for estimating the county population that lives in housing is generally better than the method typically used to challenge subcounty estimates. According to Bureau officials, the method used to challenge subcounty estimates introduced an upward bias for communities that were experiencing population decline or a slowing of population growth. According to the change, any challenge that results in an increase to the estimate of a subcounty community’s population living in traditional housing will be offset by a downward revision to the estimate for all other communities in the same county, in effect reallocating the estimated population within the county. Bureau officials emphasized that under the new procedures, as before, if a government successfully challenges the Bureau’s estimate of the population living in group quarters arrangements within its community, the revision will be added to the county population as well, and will not be offset by changes to populations in other communities in the county. 
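The offsetting procedure described above can be illustrated with a short Python sketch. Distributing the offset among the other communities in proportion to their size is an assumption made for illustration; the Bureau's actual allocation method may differ, and the community names and figures below are made up.

def apply_subcounty_challenge(estimates, challenger, household_increase, group_quarters_increase=0.0):
    """estimates: dict of community name -> household population estimate within one county."""
    revised = dict(estimates)
    revised[challenger] += household_increase

    # Offset the household increase across the other communities so the
    # county household total is unchanged (here, in proportion to their size).
    others = [c for c in revised if c != challenger]
    other_total = sum(estimates[c] for c in others)
    for c in others:
        revised[c] -= household_increase * estimates[c] / other_total

    # A successful group quarters challenge is added to the county total and is not offset.
    county_total = sum(revised.values()) + group_quarters_increase
    return revised, county_total

county = {"Town A": 9_500.0, "Town B": 12_000.0, "Town C": 3_500.0}
revised, county_total = apply_subcounty_challenge(county, "Town A",
                                                  household_increase=500.0,
                                                  group_quarters_increase=100.0)
print(revised)       # Town A up by 500; Towns B and C down by 500 in total
print(county_total)  # original county household total of 25,000 plus 100 in group quarters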
The Bureau also changed procedures so that it routinely reviews subcounty population challenges in light of each community's population growth trend, requiring corroborating information when a government claims population growth that is inconsistent with the trend. For example, one local government in the Midwest successfully challenged the Bureau's 2006 population estimate and received an increase of about 5.5 percent in its population estimate, even though evidence it submitted showed the number of housing units in the community was about 3.5 percent lower than the Bureau's original calculation for that year. Another local government successfully challenged the Bureau's 2007 population estimate and received an increase of about 7.4 percent, even though its evidence showed an increase in housing units of only about 0.8 percent. According to 2010 Census counts, in both of these cases the average actual annual population growth over the decade was far below the challenge result, and the trend in population was declining. A senior Bureau official with whom we spoke said that, in addition, this and other changes in procedures will reduce the incentive that some communities have had to file a challenge that provides only a temporary reprieve from an otherwise declining population trend. Challenge program officials told us that the program focuses its quality assurance on (1) reviewing the calculations presented in the documentation submitted by local governments as part of challenge submissions and (2) checking documents and calculations for internal consistency. Specifically, different worksheets local governments could choose to complete under the program have different data and calculations required to support a challenge, which Bureau staff review for accuracy and consistency. For example, Bureau staff would identify when local governments provided data on local building permits with a significant lag time, and then revise the submitted calculations accordingly since some of the buildings would thus have been constructed outside the time frame for which the population change was being estimated. Bureau officials told us that the Bureau assures quality of locally submitted records by requiring a community's highest elected official—such as a mayor or county commission chair—to certify the validity of data used in any challenge to a population estimate that its government submits. The Bureau takes such certification at "face value" and does not examine the quality of these records, which Bureau officials said would be prohibitively time consuming to investigate or verify. Bureau officials said that no change to that approach is planned for the future of the program. However, the Bureau is considering how to describe a quality threshold that local records should satisfy, or steps that local governments can take to check the quality of their records. Additionally, the Bureau distributes a review guide containing standardized procedures to local governments that are interested in participating in the challenge program. The guide includes instructions on filling out standardized worksheets and descriptions of the types of administrative records that can be relied on as sources of data for the challenge. According to Bureau officials, this helps to ensure consistency across challenges and potential revisions to population estimates.
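The lag-time adjustment described above amounts to excluding permitted units whose construction would fall outside the estimate period. The Python sketch below shows one way such a check could be expressed; the dates, the 180-day construction lag, and the data layout are illustrative assumptions, not the Bureau's actual review procedure.

from datetime import date, timedelta

def units_within_period(permits, period_start, period_end, construction_lag_days=180):
    """Count permitted units whose assumed completion date falls within the estimate period."""
    total = 0
    for issue_date, units in permits:
        completion = issue_date + timedelta(days=construction_lag_days)
        if period_start <= completion <= period_end:
            total += units
    return total

# Illustrative permits: (issue date, number of units).
permits = [(date(2006, 3, 1), 12), (date(2006, 11, 15), 8), (date(2007, 5, 20), 20)]
units = units_within_period(permits, date(2006, 7, 1), date(2007, 6, 30))
print(units)  # 20: only units assumed to be completed within the July 2006 to June 2007 period count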
Moving forward, according to officials, the Bureau is preparing a quality assurance plan for Bureau staff who review challenges to better ensure proper handling and processing of challenges, as well as the review of calculations. The Bureau intends to develop the plan over the coming months, after the program has resumed. The Bureau is also undertaking an agency-wide effort to update its record keeping policies, which would include revising rules for retaining documentation submitted as part of the challenge program. Changes to record keeping policies would help the Bureau maintain challenge program documentation—including calculations of revised estimates—in the event the Bureau needs to perform any follow-up reviews. In our review of the 287 challenges submitted to the Bureau resulting in revised population estimates from 2001 to 2008, we identified a number of cases where documentation of challenges appeared either incomplete or inconsistent. For example, in some cases, documentation of corrections to local government’s calculations was missing, as was certification from a community’s highest elected official that data submitted with a challenge were valid. Also missing were notices to local governments from the Bureau on whether challenges were accepted. A lack of documentation could make it difficult for an independent verification of the integrity of the program. We discussed these documentation issues with Bureau officials. In response, they explained the steps they were taking to address them, such as documenting a record keeping policy for the program that includes descriptions of what documentation to create for each challenge and checklists of specific items to be retained in files. Bureau officials agreed to provide us with copies of the revised record keeping policy when the challenge program is resumed later in 2013. Because of these planned actions, we are not making a recommendation to the Bureau at this time. The Bureau is exploring the use of local records for the 2020 Census, but this effort will be of a lower priority than research on the use of national administrative records, in part because national records show greater promise than local records for controlling costs. The Bureau’s research on the use of national administrative records for the 2020 Census is focused on their possible use for such purposes as (1) building the address list, (2) counting people, and (3) quality assurance and evaluation processes. In contrast, Bureau officials believe that the use of local administrative records is most likely to support the development of the 2020 address list. The Bureau has used local census records in this regard in prior decennials. In previous enumerations, the Bureau developed its address list in part by going door-to-door and canvassing every block in the country to verify street addresses. However, the process of going door-to-door is labor intensive. As a result, for 2020, the Bureau is exploring how to reduce much of the costly, national canvassing done in the past by combining both local and national administrative data with the United States Postal Service data it already uses, allowing the Bureau to continuously update its master address file throughout the decade. In particular, the Bureau is considering how it can more seamlessly integrate regular input from local government address lists and related geographic information systems into its existing address list and map development processes. 
The Bureau would like to update this information continuously rather than wait to receive the input as part of a one-time decennial update. In addition, the Bureau will research the quality of these updates to determine the extent to which it can rely on them without necessarily having to verify them door-to-door. Bureau officials stated that if time and resources permit, the Bureau will consider other ways that local records can be used to supplement the use of national records. The officials stated that while it is important to continue research on local records, because they may be helpful in targeting decennial operations to hard-to-count groups or those in certain geographic areas, the results of 2010 Census research and testing on national records have led Bureau officials to conclude that continuing research on national records, such as those listed in figure 2, should be a higher priority. Bureau officials believe that research on the more broadly available national records could yield cost savings more quickly than research on locally available records, and that given resource constraints, they should attempt to "lock in" at least some of the more likely cost savings before pursuing more uncertain ones. Moreover, legislation may be needed to allow the Bureau to expand its use of national records for decennial purposes. Figure 2 shows the national and local records the Bureau is considering and the operations these records could support. Beyond helping to develop the address list, local administrative records that have been used in the challenge program, such as school enrollment data, could supplement national records to either improve or evaluate the counting of targeted populations. Some Bureau stakeholders have suggested to the Bureau that these records, in some locations, may be more comprehensive and accurate than those records on which the Bureau already relies. For example, the Bureau is researching how statewide data, such as Supplemental Nutrition Assistance Program data from Illinois, Maryland, New York, and Texas, could enhance person and housing unit coverage—access to similar records is under negotiation with several other states. By matching records in these files with the 2010 Census and other administrative records, the Bureau is determining their suitability for use to identify addresses or people and their characteristics, as well as their accuracy. Additionally, the Bureau is considering research on other state-level files, such as those maintained by Bureau partners in Florida and Montana, which maintain records on utility use and residential construction, respectively. However, the Bureau would face some challenges using these records for decennial purposes. Bureau officials said local records are generally more difficult to systematically access or to apply broadly to census operations. In some cases, laws restrict the use of these records to certain purposes, so the Bureau may need to negotiate their use with various parties or work towards legislative change. For example, use of student and school lunch data—which go beyond school system enrollment data and might help target hard-to-reach populations—is restricted to educational purposes by the Family Educational Rights and Privacy Act. Additionally, some of these records are not aggregated nationally, which could make it difficult for the Bureau to obtain them.
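One rough way to gauge how well such files line up with census records is an address-level match rate. The Python sketch below illustrates that idea; the normalization rule and exact-match key are simplifying assumptions made for illustration, and the Bureau's actual record-linkage methods are far more sophisticated.

def normalize(address):
    # Collapse whitespace, uppercase, and drop periods so trivially different spellings match.
    return " ".join(address.upper().replace(".", "").split())

def match_rate(admin_addresses, census_addresses):
    census_keys = {normalize(a) for a in census_addresses}
    matched = sum(1 for a in admin_addresses if normalize(a) in census_keys)
    return matched / len(admin_addresses) if admin_addresses else 0.0

# Illustrative address lists, not actual records.
admin = ["101 Main St.", "45 Oak Ave", "7 Elm   Rd"]
census = ["101 MAIN ST", "7 ELM RD", "300 PINE LN"]
print(f"{match_rate(admin, census):.0%} of administrative records matched")  # 67%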
According to the Bureau, in other cases such as the Supplemental Nutrition Assistance Program, not all states maintain records equally well. If the Bureau ultimately has the time and resources to supplement national data with local data, officials stated that they will work to address these issues. We provided a draft of this report to the Department of Commerce. In response, we received written comments from the department, which are reprinted in appendix II. In its comments, the department stated that it appreciates the time and effort that we put into the draft report and thanked us for responding to technical comments provided earlier by Bureau staff. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to the Acting Secretary of Commerce, the Under Secretary of Economic Affairs, the Acting Director of the U.S. Census Bureau, and interested congressional committees. The report also is available at no charge on GAO’s website at http://www.gao.gov. If you have any questions about this report please contact me at (202) 512-2757 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The GAO staff that made major contributions to this report are listed in appendix III. To describe the U.S. Census Bureau’s (Bureau) changes to how local administrative records will be used in the challenge program, we reviewed the August 10, 2012, Federal Register outlining proposed changes; the final rule, which was issued in the January 3, 2013, Federal Register; Bureau reports and presentations, which served as the basis for the changes; and comments received from state and local data experts solicited by the Federal Register on the proposed changes prior to the Bureau’s issuance of the final rule. We also interviewed current and retired officials from the Bureau’s Population Division responsible for implementing the challenge program. To describe the changes to how the Bureau will assure the quality of population estimates updated by the challenge program, we reviewed the Federal Register notice outlined earlier, and we interviewed Bureau officials to identify additional quality assurance steps the Bureau intends to implement. To better understand the current procedures and implications of these changes, we reviewed documentation the Bureau provided to local governments on submitting accurate documentation to challenge population estimates, including worksheets used to calculate revised population estimates. We conducted a case file review to identify specific quality assurance steps the Bureau has previously taken to review the quality of submissions from local governments. During our case file review, we observed a number of instances where case files and other documentation of challenges appeared to be incomplete or inconsistent. We reviewed each of the documentation weaknesses, and we shared our observations with ranking Bureau executive and program managers. To examine the Bureau’s plans to use the types of local administrative records currently used by the challenge program to improve the cost or quality of the 2020 Decennial Census, we reviewed Bureau documentation on research and testing of administrative records for the 2020 Census, as well as our prior reports on the Bureau’s research and testing efforts. 
Additionally, we interviewed the Bureau officials responsible for the research and testing efforts related to administrative records to understand which local records and processes from the challenge program the Bureau is considering for the 2020 Census. Other key contributors to this report include Ty Mitchell, Assistant Director; David Bobruff; Benjamin Crawford; Sara Daleski; Robert Gebhart; Will Holloway; Andrea Levine; Mark Ryan; and Timothy Wexler.
The Bureau's Population Estimates Challenge Program gives local governments the opportunity to challenge the Bureau's annual estimates of their population counts during the years between decennial censuses. Challenges rely on local administrative records, such as building and demolition permits. In addition to their role in the challenge program, these and national administrative records, such as tax data and Medicare records, could save the Bureau money if they are used to help build the Bureau's master address list and reduce the need for certain costly and labor-intensive door-to-door visits, among other things. GAO was asked to review changes to the challenge program and the Bureau's use of administrative records. This report describes the changes to (1) how local administrative records will be used in the challenge program and (2) how the Bureau will assure the quality of population estimates updated by the challenge program, and it describes (3) what plans, if any, the Bureau has to use the types of local administrative records used for the challenge program to improve the cost or quality of the 2020 Decennial Census. GAO reviewed documentary and testimonial evidence from Bureau officials and state and local data experts. Additionally, GAO interviewed Bureau officials to identify changes to the challenge program and reviewed documentation on the challenge program's quality assurance processes. GAO provided a draft of this report to the Department of Commerce. In response, the Bureau provided technical comments, which were incorporated as appropriate. The Census Bureau (Bureau) issued significant changes to rules governing the records that communities use to challenge the Bureau's population estimates. Previously, the Bureau routinely accepted all challenges, largely without regard to the data sources cited or provided, so long as they supported the calculations and covered the appropriate reporting periods. According to Bureau officials, these changes are based on research that shows that estimates based on some methods and records (e.g., births, deaths, and migration) are substantially more accurate than estimates based on others. Among other changes, the Bureau modified procedures so that challenges by subcounty governments to the Bureau's estimates of people living in housing units will no longer affect county-level population estimates. Moving forward, any such challenge resulting in an increase in the estimate of a subcounty population will be offset by a downward revision to the population estimate of all other communities in the same county. Also, the Bureau plans to routinely review population challenges in light of each community's population growth trend. Corroborating data will be required for challenges inconsistent with the trend. Challenge program officials told GAO that in the past the program focused quality assurance on (1) reviewing the calculations in the documentation submitted by local governments as part of challenge submissions and (2) checking documents and calculations for internal consistency. Moving forward, the Bureau is preparing a quality assurance plan for Bureau staff who review challenges to better ensure proper handling and processing of challenges, as well as the review of calculations. 
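The offsetting revision described above can be illustrated with a small numeric sketch. Allocating the downward revision across the other communities in proportion to their estimates is an assumption made here for illustration, not a procedure described by the Bureau.

```python
# Sketch of the offsetting rule: a successful subcounty challenge raises that
# community's estimate, and the county total is held fixed by revising the other
# communities downward. The proportional allocation below is an assumption for
# illustration only.

def apply_challenge(estimates: dict[str, float], community: str, increase: float) -> dict[str, float]:
    revised = dict(estimates)
    revised[community] += increase
    others = [c for c in estimates if c != community]
    other_total = sum(estimates[c] for c in others)
    for c in others:
        revised[c] -= increase * estimates[c] / other_total  # assumed proportional offset
    return revised

if __name__ == "__main__":
    county = {"Town A": 10_000, "Town B": 30_000, "Town C": 60_000}
    revised = apply_challenge(county, "Town A", 1_000)
    print(revised)
    print(sum(county.values()), sum(revised.values()))  # county total is unchanged
```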
The Bureau's 2020 research and testing program is exploring the use of local administrative records for the 2020 Census, such as those used in the challenge program, but this effort is a lower priority than research on the use of national records, in part because national administrative records show greater promise than local records for controlling costs. Bureau officials said local records show the most promise for supporting the development of the 2020 address list. Specifically, the Bureau is exploring how it can use local records to more seamlessly and continually update address lists and maps, rather than waiting to receive such information as part of a one-time decennial update. Bureau officials stated that it is important to continue research on local records because they may be helpful in targeting decennial operations to hard-to-count groups or those in certain geographic areas. However, the results of 2010 Census research and testing on national records have led Bureau officials to conclude that continuing research on national records should be a higher priority.
Surveillance of foodborne diseases allows public health officials to recognize trends, detect outbreaks, pinpoint the causes of these outbreaks, and develop effective prevention and control measures. Such surveillance presents a complex challenge. Many foods today are imported, prepared and/or eaten outside the home, and widely distributed after processing. As a result, an outbreak of foodborne disease can involve people in different localities, states, and even countries. The number and diversity of foodborne diseases further complicate surveillance. Although many of the more well-known foodborne pathogens are bacteria, such as E. coli O157:H7 and Salmonella, foodborne diseases are caused by a variety of other pathogens, including viruses, parasites, and toxins. Some of these diseases also can be transmitted by nonfood sources, such as through water or through person-to-person contact. Appendix II describes the major foodborne diseases currently under national surveillance. The surveillance process usually begins when a person with a foodborne disease seeks medical care. To help determine the cause of the patient’s illness, a physician may rely on a laboratory test, which could be performed in the physician’s own office, a hospital, an independent clinical laboratory, or a public health laboratory. If the test shows that the patient is ill with a disease (including a foodborne disease) that must be reported under state law, or if the physician diagnoses the disease without the use of a test, the cases are usually reported to the local health department. Health department staff collect these reports, check them for completeness, contact health-care professionals to obtain missing information or clarify unclear responses, and forward them to state health agencies. Staff resources devoted to disease reporting vary with the overall size and mission of the health department. Because nearly half of local health agencies have jurisdiction over a population of fewer than 25,000, many cannot support a large, specialized staff to work on disease reporting. The states have principal responsibility for protecting the public’s health and therefore take the lead in conducting surveillance. In state health departments, epidemiologists analyze the data reported and decide when and how to supplement passive reporting with active surveillance methods, conduct outbreak and other disease investigations, and design and evaluate disease prevention and control efforts. They also transmit state data to CDC, providing routine reporting on selected diseases. Surveillance data are transmitted to CDC both electronically and using paper-based systems. Information about individual cases of disease is reported through two electronic systems. The National Electronic Telecommunications System for Surveillance collects data submitted by epidemiologists about patient demographics and residences, suspected or confirmed diagnoses, and the dates of disease onset. In contrast, the second system, the Public Health Laboratory Information System, collects more definitive data from public health laboratory officials on pathogens identified by laboratory tests. Both systems also offer disease-specific reporting options that states may use to report additional data to CDC. For some surveillance systems, such as the Viral Hepatitis Surveillance Program, data are submitted to CDC both electronically and using paper forms. 
For other surveillance systems, such as the Foodborne Disease Outbreak Surveillance System, the data are submitted primarily through paper reporting. CDC officials told us they have an ongoing effort to integrate public health information collected through these and other systems. They estimate this effort will take several years to complete. Federal participation in the foodborne disease surveillance network focuses on CDC activities—particularly those of the National Center for Infectious Diseases. CDC analyzes the data furnished by states to (1) monitor national health trends, (2) formulate and implement prevention strategies, (3) evaluate state and federal disease prevention efforts, and (4) identify outbreaks that affect multiple jurisdictions, such as more than one state. CDC routinely provides public health officials, medical personnel, and others information on disease trends and analyses of outbreaks. In fiscal year 2000, CDC's budget for foodborne disease surveillance through the Food Safety Initiative was $29 million. In order to maximize the effectiveness of its surveillance efforts, CDC works with the Council of State and Territorial Epidemiologists, a professional association of public health epidemiologists from each U.S. state and territory. These epidemiologists are responsible for monitoring trends in health and health problems and devising prevention programs that promote the entire community's health. The council is currently in its eighth year of a cooperative agreement with CDC and has approximately 15 separate activities on which it works collaboratively with CDC. CDC also works with the Association of Public Health Laboratories, which links local, state, national, and global health leaders in order to promote the highest quality laboratory practices worldwide. However, regardless of the completeness and comprehensiveness of a surveillance system, it can generally detect only a fraction of disease cases—the tip of the iceberg, at best, as shown in figure 1. Very few people who contract foodborne diseases actually seek treatment, are properly diagnosed, have their diagnoses confirmed through laboratory analysis, and then have their cases reported through the surveillance systems. For example, a recent CDC-sponsored study estimated that 340 million annual episodes of acute diarrheal illness occurred in the United States, but only 7 percent of people who were ill sought treatment. The study further estimated that physicians requested laboratory testing of a stool culture for only 22 percent of those patients who sought treatment, which produced about 6 million test results that could be reported. Although federal participation in foodborne disease surveillance focuses on CDC activities, two other federal agencies have a key role in the wider arena of food safety and use surveillance information in their programs. USDA's Food Safety and Inspection Service is responsible for ensuring that meat, poultry, and processed egg products moving in interstate and foreign commerce are safe. This agency primarily carries out its responsibilities through inspections at meat, poultry, and egg processing plants to ensure that these products are safe, wholesome, and accurately labeled. In addition, the Food and Drug Administration in the Department of Health and Human Services is responsible for ensuring that all other domestic and imported food products are safe. 
Unlike the USDA, the Food and Drug Administration, by and large, conducts post-market surveillance through domestic inspections and testing of products already in commerce to assure that foods are safe and comply with appropriate standards. This is especially true for imported foods, where the surveillance program is primarily post-market testing, because the Federal Food, Drug and Cosmetic Act does not provide explicit inspection authority outside the United States. In addition to their other duties, these two agencies work to remove from the market foods that are implicated in foodborne disease outbreaks. CDC conducts surveillance of foodborne diseases through 20 systems. Four of these—the Foodborne Disease Outbreak Surveillance System, FoodNet, PulseNet, and the Surveillance Outbreak Detection Algorithm—focus on foodborne diseases and cover multiple pathogens. The other 16 either collect data about a variety of diseases, only some of which are foodborne, or focus exclusively on a single foodborne disease. Collectively, these systems provide information to detect and control the spread of foodborne disease. The Foodborne Disease Outbreak Surveillance System collects nationwide information about the occurrence and causes of foodborne outbreaks. This system relies on local health officials to correctly identify, investigate, and report outbreaks to CDC through state public health officials. CDC uses the system to, among other things, compile and periodically report national outbreak data. In 1997, the latest year for which published data are available, states and U.S. territories reported 806 outbreaks to CDC through this system. Furthermore, information from this system can serve as a basis for regulatory and other changes to improve food safety. For example, data from the Foodborne Disease Outbreak Surveillance System have played an important role in documenting the importance of shell eggs as a source of human infection with Salmonella Enteritidis. In response to these data and other reports pointing out the dangers posed by improperly handled eggs, government agencies and the egg industry have taken steps to reduce Salmonella contamination of eggs. These steps include refrigerating eggs during transport from the producer to the consumer, identifying and removing infected laying flocks, diverting eggs from infected flocks to pasteurization facilities, and increasing on-farm quality assurance and sanitation measures. CDC has advised state health departments, hospitals, and nursing homes of specific measures to reduce Salmonella Enteritidis infection, and the USDA tests the breeder flocks that produce egg-laying chickens to ensure that they are free of Salmonella Enteritidis. The Food and Drug Administration has amended its regulations, which now require that all shell eggs in retail establishments be held at a temperature of 45 degrees Fahrenheit or lower and that all egg cartons carry safe-handling instructions to inform consumers about proper storage and cooking of eggs. FoodNet is a surveillance system operating in nine sites selected by CDC on the basis of their capability to conduct active surveillance and because of their geographic location. FoodNet produces national estimates of the frequency and sources of nine foodborne pathogens, hemolytic uremic syndrome (a serious complication of E. coli O157:H7 infection), Guillain-Barre syndrome (a serious complication of Campylobacter infection), and toxoplasmosis that are more stable and accurate than the estimates otherwise available. 
These improved estimates result from the use of active surveillance and additional studies that are not characteristic of CDC's other foodborne surveillance systems. Public health departments that participate in FoodNet receive funds from CDC to systematically contact laboratories in their geographical areas and solicit incidence data. In 1999, state officials participating in FoodNet contacted each of the more than 300 clinical labs within the FoodNet areas on a regular basis. FoodNet studies include various “case control” studies, which are used to determine factors, such as food preparation or handling practices, that affect the risk of infection by pathogens covered by the system. The studies also examine the association between infections and specific foods. In addition, public health officials who participate in FoodNet conduct surveys to identify physician and lab practices that may limit the identification of foodborne diseases. PulseNet is a nationwide network of public health laboratories that perform DNA “fingerprinting” on four types of foodborne bacteria in order to identify and investigate potential outbreaks. The four bacteria fingerprinted by PulseNet—Salmonella, E. coli O157:H7, Listeria, and Shigella—were selected because of their public health importance and the availability of specific “fingerprinting” methods for the pathogens. These four bacteria are either common or have severe symptoms, or both. Public health officials in 46 state and 2 local public health laboratories as well as the food safety laboratories of the USDA and the Food and Drug Administration submit “fingerprint” patterns of bacteria isolated from patients and/or contaminated food to the PulseNet database. The PulseNet network permits rapid comparison of the patterns in the database. Matches may indicate an outbreak. Similar patterns in samples taken from different patients suggest that the bacteria come from a common source, for example, a widely distributed contaminated food product. In addition, strains isolated from food products can be compared with those isolated from ill persons to provide evidence that a specific food caused the disease. By identifying these connections, PulseNet provides critical data for identifying and controlling the source of an outbreak, thus reducing the burden of foodborne disease for the pathogens within the scope of this network. Thirty survey respondents told us that, in the last 3 years, PulseNet had identified a cluster of cases in their state that turned out to be a previously unknown outbreak. In addition, 42 respondents reported that PulseNet helped their state detect and investigate outbreaks of E. coli O157:H7, Salmonella, Listeria, and/or Shigella. Twenty-five of these said that PulseNet greatly helped in this area. In 2000, over 17,000 patterns were submitted to the PulseNet database, and 105 potential outbreaks were identified and investigated. Another system that CDC uses to detect potential foodborne outbreaks is the Surveillance Outbreak Detection Algorithm. In contrast to PulseNet, which uses advanced technology to compare bacterial DNA, the Surveillance Outbreak Detection Algorithm uses statistical analysis to compare currently reported incidence of two common pathogens, Salmonella and Shigella, to a historical baseline in order to detect unusual increases in a specific serotype, such as Salmonella Enteritidis. Such increases may indicate an outbreak. 
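A minimal sketch of this kind of baseline comparison follows. The specific flagging rule used here (a current weekly count exceeding the historical mean for that week by a multiple of its standard deviation) is an assumption for illustration, since the algorithm's statistical details are not described in this report.

```python
# Minimal sketch, assuming a simple rule: flag a serotype when the current week's
# count exceeds the mean of the same week in prior years by more than a chosen
# number of standard deviations. The actual algorithm's statistics may differ.
from statistics import mean, stdev

def flag_serotypes(current: dict[str, int], baseline: dict[str, list[int]], threshold: float = 2.0) -> list[str]:
    """Return serotypes whose current weekly count is unusually high."""
    flagged = []
    for serotype, count in current.items():
        history = baseline.get(serotype, [])
        if len(history) < 2:
            continue  # not enough history to judge
        mu, sigma = mean(history), stdev(history)
        if count > mu + threshold * max(sigma, 1.0):  # floor sigma to avoid flagging tiny jitter
            flagged.append(serotype)
    return flagged

if __name__ == "__main__":
    baseline = {"Salmonella Enteritidis": [12, 15, 11, 14, 13],  # same week in prior years
                "Salmonella Newport": [4, 6, 5, 3, 5]}
    current = {"Salmonella Enteritidis": 31, "Salmonella Newport": 6}
    print(flag_serotypes(current, baseline))  # ['Salmonella Enteritidis']
```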
CDC selected Salmonella and Shigella because there are many different serotypes of these organisms, and tracking and comparing the frequency of each serotype was a task well suited for computer analysis. In addition, baseline data for these two pathogens were already available through the National Salmonella Surveillance System and the National Shigella Surveillance System, described below and in appendix III. Beginning in 2002, CDC plans to expand the system to include E. coli O157:H7. Twenty-five of the states that we surveyed told us that, at least once in the last 3 years, the Surveillance Outbreak Detection Algorithm had identified a cluster of cases in their state that turned out to be a previously unknown outbreak. In addition to these 4 systems, CDC has the following 16 systems that either collect information about a number of diseases, only some of which are foodborne, or focus solely on one disease: The Botulism Surveillance System is a national system designed to collect information about all types of botulism, including foodborne. Because every case of foodborne botulism is considered a public health emergency, CDC maintains intensive surveillance for botulism in the United States. CaliciNet is a network of public health laboratories that perform genetic “fingerprinting” for foodborne viruses, allowing rapid identification and comparison of strains. The Creutzfeldt-Jakob Disease Surveillance Program monitors the occurrence of this disease through periodic review of national cause-of-death data. Surveillance for this disease was enhanced in 1996 to monitor for the possible occurrence of new variant Creutzfeldt-Jakob Disease after this new form of the disease was reported to have possibly resulted from consumption of cattle products contaminated with bovine spongiform encephalopathy (also known as “mad cow” disease). The Epidemic Information Exchange (Epi-X) is a secure Web-based communications network that allows local, state, and federal public health officials to share and discuss outbreak data on a real-time basis. This system can immediately notify health officials of urgent public health events so that they can take appropriate actions. The Escherichia coli O157:H7 Outbreak Surveillance System is a national system established to collect detailed information about risk factors and vehicles of transmission for E. coli infection and is used to inform the public about new vehicles of transmission. The National Antimicrobial Resistance Monitoring System is used to monitor the antimicrobial resistance of certain bacteria that are under surveillance through other systems. The system currently operates in 17 sites throughout the United States. The National Giardiasis Surveillance System includes data from participating states about reported cases of giardiasis—a condition caused by a parasite found in contaminated water or food such as fruits and vegetables. This system began in 1992, when the Council of State and Territorial Epidemiologists assigned giardiasis a code that enabled states to begin voluntarily reporting surveillance data on this disease to CDC electronically. The National Notifiable Diseases Surveillance System is a national system that collects information about 58 diseases, most of which are not considered foodborne, about which regular, frequent, and timely information is considered necessary for their prevention and control. Data from the system are used to analyze disease trends and determine relative disease burdens on a national basis. 
The National Salmonella Surveillance System is a national system that collects information on the isolates of Salmonella that are serotyped in state public health laboratories, as well as the isolates from food and animals. This system tracks the frequency of more than 500 specific serotypes to determine trends, detect outbreaks, and focus interventions. The system can detect outbreaks either locally or spread out over several jurisdictions. The National Shigella Surveillance System is a national system that collects information on the isolates of Shigella that are serotyped in state public health laboratories. This system tracks the frequency of more than 40 specific serotypes to determine trends, detect outbreaks, and focus interventions. The system can detect outbreaks either locally or spread out over several jurisdictions. The Salmonella Enteritidis Outbreak Surveillance System is a national system designed to track these outbreaks and to collect information on implicated food items and the results of traceback investigations conducted by local agencies and the Food and Drug Administration. The Sentinel Counties Study of Viral Hepatitis is carried out in six U.S. counties to elicit more detailed information on individual hepatitis cases and collect samples for further analyses. The Trichinellosis Surveillance System is a national surveillance system used to monitor long-term trends for this disease. The Typhoid Fever Surveillance System is a national surveillance system for monitoring long-term trends in the epidemiology of typhoid fever in the United States. The system provides information about risk factors that is used in making vaccine recommendations. The Vibrio Surveillance System is composed of two parts: a national system used for reporting cases of Vibrio cholerae (cholera), and another system, which is more geographically limited, that is used for reporting all Vibrio infections. All cases reported to this system are confirmed through laboratory tests by the relevant state or CDC. Surveillance data for this system are used to identify environmental risk factors, retail food outlets where high-risk exposures occur, and target groups that may benefit from consumer education. The Viral Hepatitis Surveillance Program is a national system designed to collect information about acute cases of viral hepatitis: hepatitis A; hepatitis B; and non-A, non-B hepatitis (including hepatitis C). States report basic demographic information for each case, as well as other factors, such as risk-factor information. These data are essential for monitoring trends in the characteristics of the various types of viral hepatitis. Collectively, these surveillance systems provide crucial national data needed to detect and control the spread of foodborne disease. More detailed information about these systems is contained in appendix III, in alphabetical order by system. Public health officials that we contacted said that both untimely release of surveillance data by CDC and the gaps in some of CDC’s data limit the surveillance systems’ usefulness. Some of these problems have resulted from staff shortages at CDC, while others have been caused by shortages of trained epidemiologists and laboratory personnel at state and local health departments. Another contributing factor is that each state decides which diseases it will track and which ones it will not. Therefore, the diseases that are reported to CDC vary from one state to another. 
In response to these problems, CDC has taken action to address its staff deficiencies and to assist state and local health officials in improving their data collection and reporting abilities. CDC's actions represent a good first step toward providing public health officials with more timely and complete surveillance data. Delayed dissemination of information from CDC's foodborne disease surveillance systems has impaired the usefulness of the data. For example, for the Foodborne Disease Outbreak Surveillance System, CDC did not publish outbreak data for the years 1993–1997 until March 2000. CDC officials told us that the late publication of the March 2000 outbreak report was due in part to staff shortages. As of June 2001, data from 1997 were the most recent available from this system. Officials from both the Food and Drug Administration and USDA's Food Safety and Inspection Service told us that this delay limited the data's usefulness. In addition, of the 52 respondents to our survey, 26 said that the 3-year lag between the end of the reporting period and the publication of CDC's March 2000 report diminished the usefulness of the report to their state. Of the 43 survey respondents that used this report, nearly all said that the outbreak data were used as a source of information about foodborne disease trends or to determine associations between pathogens and food. Many survey respondents also told us that more rapid reporting or release of data from FoodNet, PulseNet, and the Surveillance Outbreak Detection Algorithm would improve the systems' usefulness. For FoodNet, CDC publishes surveillance results annually. However, as of June 2001, CDC had not published any detailed results from its case control studies about the proportion of foodborne disease caused by specific foods or food preparation and handling practices, even though FoodNet has been operational since 1995. CDC officials told us that they had submitted the results of these surveys and studies to professional journals, but the results were never published. For PulseNet, nearly half of the survey respondents said that more rapid analysis of data and more rapid reporting of identified clusters would make the system more useful. In addition, 33 of the respondents said that direct access to the PulseNet database would make the system more useful. For the Surveillance Outbreak Detection Algorithm, 25 of the respondents said that more rapid analysis of state, regional, and national data by CDC would make that system more useful. In addition, 20 respondents said more rapid reporting of clusters by CDC would make the system more useful. As noted above, CDC officials attributed the late publication of the March 2000 outbreak report in part to staff shortages. CDC took action to address this problem when the agency hired four new staff between June 2000 and September 2000 to take on the responsibilities of collecting, verifying, coding, processing, and summarizing the outbreak data in addition to other duties. In the future, CDC plans to release outbreak data annually beginning with 1998 data, instead of aggregating these data over several years. CDC is currently compiling 2001 outbreak data and intends to publish it by the end of 2002. In addition, CDC is developing a system, called the Electronic Foodborne Outbreak Reporting System, which will allow states to electronically transmit reports of foodborne disease outbreaks. 
Thirty-six survey respondents indicated that this system would increase the timeliness of their initial outbreak reports to CDC. Finally, in November 2000, CDC introduced an electronic bulletin board, known as Epi-X, which allows local, state, and federal public health officials to share outbreak data on a real-time basis. This system can automatically notify health officials of urgent public health events so that they can take appropriate actions. CDC also has plans to provide more rapid reporting or release of data from FoodNet and PulseNet. For FoodNet, CDC officials said they plan to publish by the end of 2001 a number of case control study results that were previously unavailable. For PulseNet, CDC told us it has developed new software that, effective June 30, 2001, gives all participating certified laboratories direct access to the PulseNet database. This allows state officials to query the PulseNet database directly instead of waiting for CDC to send them notice of a new pattern. However, CDC's ability to disseminate surveillance data in a timely fashion also depends in part on the timeliness of state and local officials' submittal of the data. For example, for the Foodborne Disease Outbreak Surveillance System, 24 of the survey respondents said they did not report any outbreak data for 2000 until the end of the year or even later. Thus, data could be over a year old before being reported to CDC. Similarly, CDC officials also told us that for the Surveillance Outbreak Detection Algorithm, some states report information only quarterly, which is too late to allow CDC to provide early detection of an ongoing outbreak. Because responsibility for surveillance of foodborne diseases rests primarily with the states, states' reporting of data to CDC is voluntary. To assist in overcoming this problem, CDC is developing a new program known as the National Electronic Disease Surveillance System. This system is intended to facilitate the ready exchange of data between local and state health departments, among states, and among states and CDC. While this may not overcome delayed reporting by the states, it should make information more readily available. In addition, through its Epidemic Intelligence Service program, CDC is training medical doctors, researchers, and scientists, who serve in 2-year assignments, about the needs of both state health departments and CDC. Agency officials said that they hope graduates from the program will understand the value of sharing information in a timely manner and help speed the flow of information into CDC. The completeness of CDC's data is dependent in large part on the submissions from state and local health officials, who often do not report all cases or all information requested about individual cases. For example, 17 survey respondents told us that not all of the outbreaks in their states were reported to the Foodborne Disease Outbreak Surveillance System. Moreover, for those outbreaks that were reported, 25 survey respondents said the responsible pathogen was identified in only half or fewer of their reports submitted to CDC. Further, with regard to the contaminated food items that caused the outbreaks, 28 survey respondents said they identified and reported the responsible food item in half or fewer of their reports. According to officials from the Food and Drug Administration and USDA's Food Safety and Inspection Service, identifying the responsible pathogen and the contaminated food item is critical for understanding and controlling foodborne disease, and for tracing the cause of the contaminant to its original source. 
Survey respondents cited several reasons for the gaps in outbreak information sent to CDC. Table 1 summarizes some of the major reasons. As the table shows, the majority of the respondents said shortages of personnel and capacity in state and local health departments, among other things, hinder their ability to detect and investigate foodborne disease outbreaks. A complete listing of conditions that could hinder state and local public health officials is included in our questionnaire results, contained in appendix I. Another cause of incomplete data submissions to the Foodborne Disease Outbreak Surveillance System, as well as to other systems, is the lack of standard disease reporting requirements among states. Each state has a separate list of “reportable” diseases that must be reported to the state health department. The lists vary greatly from state to state because of differences in the extent to which the diseases occur. For example, while 32 survey respondents indicated that health providers in their state are required to notify state or local health departments about cases of cyclosporiasis, 19 said notification was not required. (See app. I for more information on state reporting requirements for a number of foodborne pathogens.) Although states can forward data to CDC about diseases that are not reportable, overall data about such diseases are often incomplete because of deficiencies in reporting by physicians and labs. To improve local and state health officials’ ability to respond to a broad range of public health issues relating to infectious diseases, which include foodborne outbreaks, CDC provides funding to state and local health departments through its Emerging Infections Programs and its Epidemiology and Laboratory Capacity program. Funding for these two programs has increased from $900,000 in 1994 to approximately $50 million in 2001. These programs are designed to address staffing or technology shortages, or both, and will help the states provide CDC with more complete information. For example, states have received grants to significantly increase the capacity of their laboratories. According to CDC officials, now nearly every state has properly trained staff able to use PulseNet technology. To encourage more standardized reporting among the states, CDC consults annually with the Council of State and Territorial Epidemiologists to determine which infectious diseases, including foodborne diseases, are important enough to merit routine reporting to CDC. Officials from CDC told us they have also entered into cooperative agreements with the council and with the Association of Public Health Laboratories to assess the states’ capability and capacity to address public health issues, including foodborne diseases. In commenting on a draft of this report, CDC officials generally agreed with the overall message of the report and provided technical comments to ensure completeness and accuracy. We incorporated these comments into our report as appropriate. CDC comments are presented in appendix IV. To describe CDC’s foodborne disease surveillance systems, we obtained information from CDC on the systems used most often in conducting foodborne disease surveillance activities. We examined each of these systems to identify their use and how they operate. 
We also discussed the systems' use and operation with officials from the Food and Drug Administration, USDA's Food Safety and Inspection Service, the Council of State and Territorial Epidemiologists, the Association of Public Health Laboratories, the National Pork Producers Council, the American Meat Institute, the National Broilers Council, and the Center for Science in the Public Interest. As a result of our initial work, we then directed the remainder of our review effort to four surveillance systems that focus on foodborne disease and that cover more than one pathogen. These four systems were the Foodborne Disease Outbreak Surveillance System, FoodNet, PulseNet, and the Surveillance Outbreak Detection Algorithm. We reviewed extensive literature about each of these four systems and examined the systems' input and reporting documentation. To identify limitations of these surveillance systems, we sent mail-back questionnaires to officials in the 50 state health departments, as well as in the District of Columbia and New York City. We pretested this survey in three states to ensure that our questions were clear, unbiased, and precise, and that responding to the survey did not place an undue burden on the health departments. We received completed questionnaires from 100 percent of those surveyed. We discussed limitations identified in the survey with CDC and other federal and state public health officials and with other groups that use foodborne disease surveillance systems. To identify initiatives designed to address these limitations, we met with CDC officials responsible for the surveillance systems and discussed actions they have taken or plan to take to address the limitations. We conducted our review from August 2000 through July 2001 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies to the congressional committees with jurisdiction over food safety issues; the Secretary of Health and Human Services; the Director, Office of Management and Budget; and other interested parties. We will also provide copies to others on request. If you or your staff have any questions about this report, please call me on (202) 512-3841. Key contributors to this report are listed in appendix V.

Appendix II: Major Foodborne Pathogens Under Surveillance by the Centers for Disease Control and Prevention
[The appendix table, not reproduced here, lists each major foodborne pathogen with its typical symptoms (for example, fever, abdominal cramps, and often bloody diarrhea, or profuse watery diarrhea, vomiting, circulatory collapse, and shock) and its routes of transmission (for example, contaminated food or water, person-to-person contact, or contact with a contaminated item, all fecal-oral).]

The Botulism Surveillance System was established in 1973 to collect detailed information about all types of botulism—foodborne, wound, infant, and child or adult. Because every case of foodborne botulism is considered a public health emergency, CDC maintains intensive surveillance for botulism in the United States. All states except California and Alaska must contact CDC when a case of botulism is suspected, because CDC is the main source of the antitoxin used to treat botulism. As a result, most cases of botulism are reported to CDC immediately. 
CDC officials follow up on these cases to collect demographic information about the affected individuals, as well as additional information about which foods were involved and their handling and preparation. This information is especially important because the hazardous food may still be available. Geographic Scope: National. Pathogen: Clostridium botulinum. Cases Reported: In 1999, a total of 174 cases were reported to this system, of which 26 were foodborne. CaliciNet, an initiative currently under development, is a network of public health laboratories that uses DNA sequence analysis for “fingerprinting” of foodborne viruses. The network permits rapid comparison of the genetic patterns of foodborne caliciviruses through an electronic sequence database at CDC. Laboratories participating in CaliciNet detect “Norwalk- like” viruses in samples from patients involved in outbreaks of gastroenteritis. Depending on the capabilities in the laboratory, amplification products from positive samples are sequenced locally, sent to a contract laboratory for sequencing, or sent to CDC for confirmatory testing and sequencing. Comparison of newly identified sequences with those in the database may help public health laboratories to identify cases with a common source. Geographic Scope: Thirteen state health departments (California, Florida, Idaho, Iowa, Maryland, Michigan, Minnesota, Missouri, New York, Oregon, Virginia, Washington, and Wisconsin) and the Los Angeles County health department are currently submitting samples for confirmatory testing and genetic analysis. Ten other state health departments (Colorado, Connecticut, Illinois, Nevada, New Hampshire, New Mexico, Ohio, Rhode Island, South Carolina, and Tennessee) are currently undergoing proficiency testing. Pathogens: “Norwalk-like” viruses and “Sapporo-like” viruses. Cases Reported: In 1999, 94 specimens from 9 states were submitted for confirmatory testing and genetic analysis at CDC. CDC monitors the occurrence of Creutzfeldt-Jakob disease through periodic review of national multiple-cause-of-death data. Surveillance for this disease was enhanced in 1996 to monitor for the possible occurrence of new variant Creutzfeldt-Jakob disease after this new form of the disease was reported to have possibly resulted from consumption of cattle products contaminated with bovine spongiform encephalopathy (also known as “mad cow” disease). One enhancement focused on striking differences in the age distribution of new variant Creutzfeldt-Jakob disease cases, for which the median age at death is 28 years, from that of sporadic cases of Creutzfeldt-Jakob disease in the United States, for which the median age at death is 68 years. This enhancement included an ongoing review of the clinical and pathologic records of U.S. victims of Creutzfeldt-Jakob disease under 55 years of age. In addition, in collaboration with the American Association of Neuropathologists, CDC established a National Prion Disease Pathology Surveillance Center to facilitate neuropathologic evaluation of patients suspected of having Creutzfeldt-Jakob disease or other diseases caused by prions. Geographic Scope: National. Pathogens: The agents of Creutzfeldt-Jakob disease and the new variant form of Creutzfeldt-Jakob are believed to be prions. Cases Reported: Between January 1979 and June 2001, over 5,000 U.S. cases of Creutzfeldt-Jakob disease were reported; no evidence of the occurrence of new variant Creutzfeldt-Jakob disease in the United States was detected. 
The Epidemic Information Exchange, known as Epi-X, is a secure, Web-based communications network for public health officials that simplifies and expedites the exchange of routine and emergency public health information among state and local health departments, CDC, and the U.S. military. CDC recognized that the public health profession had a need for rapid communication, research, and response to widespread food and food-product contamination. After consulting with more than 300 health officials, CDC developed this new system, which enables federal, state, and local epidemiologists, laboratory staff, and other health professionals to quickly notify colleagues of disease outbreaks as they are identified and investigated. The system allows users to compare information on current and past outbreaks through an easily searchable database, discuss a response to the outbreak with colleagues through e-mail, Internet, and telecommunications capabilities, and request epidemiological assistance from CDC on-line. Epi-X is endorsed by the Council of State and Territorial Epidemiologists. Geographic Scope: National. Pathogens: Any pathogen, including bacteria, chemicals, parasites, and viruses (also products or devices). Cases Reported: From November 2000 through August 2001, 153 outbreaks were reported, including 37 foodborne outbreaks. Two health alerts related to outbreaks of food contamination were issued; over 85 percent of Epi-X users were notified within 30 minutes. The Escherichia coli (E. coli) O157:H7 Outbreak Surveillance System began in 1982, after the first recognized outbreak of this pathogen, and was established to collect detailed information about risk factors and vehicles of transmission for E. coli infection. State health departments are encouraged to report any outbreak of E. coli O157:H7 infection in their state to CDC. Data are collected on outbreaks caused by all sources, including food, recreational water, drinking water, animal contact, and person-to-person transmission. E. coli O157:H7 infections can be quite serious and may result in death. Therefore, public health officials at CDC follow up with state health departments on reported outbreaks of E. coli infection to determine their cause and prevent additional spread. Data from this surveillance system are used to inform the public about new vehicles of transmission. Geographic Scope: National. Pathogen: E. coli O157:H7. Cases Reported: In 1999, 38 confirmed outbreaks (causing 1,897 illnesses) were reported to CDC. CDC created the Foodborne Disease Outbreak Surveillance System in 1973 to collect data about cases of foodborne disease that are contracted by two or more patients as a result of ingesting a common food. In the event of such an outbreak, state and local public health department officials provide data to the system about the pathogen that caused the outbreak, the contaminated food that was involved, and contributing factors associated with foodborne disease outbreaks. The data help focus public health actions intended to reduce illnesses and deaths caused by foodborne disease outbreaks. Trend analysis of the data shows whether outbreaks occur seasonally and whether certain foods are more likely to contain pathogens. It also helps public health officials identify critical control points in the path from farm to table that can be monitored to reduce food contamination. 
However, the data from this system do not always identify the pathogen responsible for a given outbreak; such identification may be hampered by delayed or incomplete laboratory investigation, inadequate laboratory capacity, or inability to recognize a particular pathogen as a cause of foodborne disease. Geographic Scope: All 50 states, the District of Columbia, Guam, Puerto Rico, and the U.S. Virgin Islands. Pathogens: Any pathogen, including bacteria, chemicals, parasites, and viruses. Cases Reported: In 1997, 806 outbreaks were reported to CDC through this system. The Foodborne Diseases Active Surveillance Network, also known as FoodNet, is a collaborative project of the CDC, the USDA, the Food and Drug Administration, and nine sites that gathers information about nine foodborne pathogens, two syndromes, and toxoplasmosis. A significant distinction between FoodNet and other foodborne surveillance systems is that FoodNet participants actively and routinely contact the clinical laboratories in their areas to collect information about the number of cases of each disease covered by this system. For other systems, state and local reporting practices to CDC may not be consistent from state to state. In addition to the active surveillance efforts, FoodNet participants conduct studies and surveys of the physicians, laboratories, and populations within the nine sites. Case control studies are used to determine risk factors, such as food preparation or handling practices, for acquiring infections from the pathogens covered by the system, as well as the association between these infections and specific foods. These studies have been conducted for E. coli O157:H7, Salmonella, Campylobacter, and others. CDC also collects information through population surveys, in which individuals who live in a FoodNet catchment area and were not part of a case control study are surveyed about their consumption of certain foods and how often they see a physician. To determine which tests are typically performed at laboratories in FoodNet areas, CDC administers laboratory surveys. Finally, state officials in the FoodNet areas have administered two physician surveys. The first survey asked physicians to describe actions they take when seeing a patient with a possible foodborne illness, while the second asked how they educate patients about foodborne diseases. FoodNet data can also test the efficacy of interventions designed to reduce the incidence of foodborne pathogens. Geographic Scope: Nine sites consisting of parts or all of the states of California, Colorado, Connecticut, Georgia, Maryland, Minnesota, New York, Oregon, and Tennessee. Pathogens: Nine pathogens—Campylobacter, Cryptosporidium, Cyclospora, E. coli O157:H7, Listeria monocytogenes, Salmonella, Shigella, Vibrio, Yersinia enterocolitica—and hemolytic uremic syndrome (a serious complication of E. coli O157:H7 infection), Guillain-Barre syndrome (a serious complication of Campylobacter infection), and toxoplasmosis. Cases Reported: The number of cases varies by pathogen. The National Antimicrobial Resistance Monitoring System for Enteric Bacteria began in 1996 as a collaborative effort among CDC, the Food and Drug Administration, and USDA. Its purpose is to monitor the resistance of human enteric (intestinal) bacteria. Participating health departments forward some portion of their isolates for six types of bacteria to CDC for susceptibility testing. 
Susceptibility testing involves determining the sensitivity of the bacteria toward 17 antimicrobial agents that inhibit their growth. Campylobacter isolates are submitted only by the FoodNet sites and are tested against 8 antimicrobial agents instead of 17. Because these data have been collected continually since 1996, trend analyses are possible. This can provide useful information about patterns of emerging resistance, which in turn can guide mitigation efforts. Geographic Scope: Seventeen state and local public health laboratories in California, Colorado, Connecticut, Florida, Georgia, Kansas, Los Angeles County, Maryland, Minnesota, Massachusetts, New Jersey, New York City, New York, Oregon, Tennessee, Washington, and West Virginia participate in this system. Pathogens: Campylobacter, Enterococcus, E. coli O157:H7, Salmonella non-typhoidal, Salmonella typhi, and Shigella. Cases Reported: The number of cases varies by pathogen. The National Giardiasis Surveillance System began in 1992 when the Council of State and Territorial Epidemiologists assigned giardiasis a code that enabled states to voluntarily report giardiasis cases to CDC electronically. For each case, basic information is collected, such as the age, sex, and race of the patient, as well as the place and time of infection. This surveillance system provides data used to educate public health practitioners and health-care providers about the scope and magnitude of giardiasis in the United States. The data can also be used to establish research priorities and to plan future prevention efforts. In June 2001, the Council of State and Territorial Epidemiologists voted to add giardiasis to the list of Nationally Notifiable Diseases. Geographic Scope: Forty-three states, the District of Columbia, New York City, Guam, and Puerto Rico. Pathogen: Giardia intestinalis (also known as Giardia lamblia). Cases Reported: In 1999, over 23,000 cases of giardiasis were reported to CDC through this system. The National Notifiable Diseases Surveillance System collects information about 58 diseases designated as nationally notifiable—that is, diseases about which regular, frequent, and timely information regarding individual cases is considered necessary for their prevention and control. The first annual report on notifiable diseases was published in 1912 for 10 diseases. CDC assumed responsibility for the collection and publication of this data in 1961. The list of nationally notifiable diseases is revised periodically to include emerging pathogens and to delete those whose incidence has declined significantly. CDC also publishes provisional figures for some of these diseases weekly. Policies for reporting notifiable disease cases can vary by disease or reporting jurisdiction, depending on case status classification (i.e., confirmed, probable, or suspect). Reporting of diseases is mandated by legislation or regulation only at the state and local level. Thus, the list of diseases considered notifiable varies slightly by state. Public health officials report basic information for each case, such as age, sex, and race of the patient, as well as the place and time of infection. The data reported in the annual summaries for this system are useful for analyzing disease trends and determining relative disease burdens. Geographic Scope: National. Pathogens/Diseases: Botulism, cholera, cryptosporidiosis, cyclosporiasis, E. 
coli, hepatitis A, listeriosis, salmonellosis, shigellosis, trichinosis, and typhoid fever (also 47 other pathogens or diseases, which are not considered to be foodborne). Number of Cases Reported: The number of cases varies by disease. The National Salmonella Surveillance System began in 1962 when the Council of State and Territorial Epidemiologists and the Association of Public Health Laboratories agreed that state public health laboratories would routinely test samples of Salmonella to determine their serotype and report the results to CDC. For many years these reports were submitted as paper forms, but for the last 10 years, reporting has been electronic. In addition to the specific serotype, the reports include the age, sex, and county of residence of the person from whom the sample was isolated, the clinical source (such as stool, blood, or abscess), and the date the sample was received in the state laboratory. CDC maintains the national reference laboratory for Salmonella and provides the laboratory reagents and training needed to determine the serotypes. These data are used to identify long-term trends and specific populations at risk for infection, detect and investigate outbreaks, and monitor the effectiveness of prevention efforts. Geographic Scope: All 50 states, New York City, and Guam. Pathogens: Salmonella enterica. Cases Reported: In 1999, approximately 32,750 cases were reported to CDC through this system. The National Shigella Surveillance System began in 1963 when the Council of State and Territorial Epidemiologists and the Association of Public Health Laboratories agreed that state public health laboratories would routinely test samples of Shigella to determine their serotype and report the results to CDC. For many years these reports were submitted as paper forms, but for the last 10 years, reporting has been electronic. In addition to the specific serotype, the reports include the age, sex, and county of residence of the person from whom the sample was isolated, the clinical source (such as stool, blood, or abscess), and the date the sample was received in the state laboratory. CDC maintains the national reference laboratory for Shigella and provides the laboratory reagents and training needed to determine the serotypes. These data are used to identify long- term trends and specific populations at risk for infection, detect and investigate outbreaks, and monitor the effectiveness of prevention efforts. Geographic Scope: All 50 states, New York City, and Guam. Pathogen: Shigella species. Cases Reported: In 1999, approximately 12,000 cases were reported to CDC through this system. PulseNet is a national network of public health laboratories that, since 1996, has been using standardized methods to perform genetic “fingerprinting” of four types of foodborne bacteria. The network permits rapid comparison of the bacteria’s genetic patterns through an electronic database at CDC. Laboratories participating in PulseNet use a method called pulsed-field gel electrophoresis to identify the genetic patterns in bacterial pathogens isolated from patients and from suspected food items. Once the patterns are generated, they are entered into an electronic database of patterns at the state or local health department and transmitted to CDC where they are filed in the PulseNet database. If patterns submitted by laboratories during a defined time period are found to match, CDC will alert the laboratory officials of the match so that a timely investigation can be performed. 
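The match detection described above can be sketched in simplified form. Representing each pulsed-field gel electrophoresis pattern as an exact label and using a fixed 60-day window are assumptions made here for illustration; actual PulseNet pattern comparison and alerting are considerably more involved.

```python
# Simplified sketch: group submissions that share a fingerprint pattern within a
# time window. The exact-label comparison and 60-day window are assumptions for
# illustration; actual pattern comparison and alerting are more involved.
from collections import defaultdict
from datetime import date, timedelta

def find_clusters(submissions: list[tuple[str, str, date]], window_days: int = 60) -> list[list[tuple[str, str, date]]]:
    """Return groups of (lab, pattern, date) submissions sharing a pattern within the window."""
    by_pattern = defaultdict(list)
    for lab, pattern, day in submissions:
        by_pattern[pattern].append((lab, pattern, day))
    clusters = []
    for entries in by_pattern.values():
        entries.sort(key=lambda e: e[2])
        if len(entries) > 1 and (entries[-1][2] - entries[0][2]) <= timedelta(days=window_days):
            clusters.append(entries)
    return clusters

if __name__ == "__main__":
    submissions = [("State lab 1", "pattern-17", date(2000, 6, 1)),
                   ("State lab 2", "pattern-17", date(2000, 6, 20)),
                   ("State lab 3", "pattern-03", date(2000, 7, 3))]
    for cluster in find_clusters(submissions):
        print("Possible common-source cluster:", cluster)
```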
PulseNet can help public health authorities recognize when cases of foodborne illness occurring at the same time in geographically separate locales are caused by the same strain of bacteria and may be due to a common exposure, such as a food item. An epidemiologic investigation of those cases can then determine what they have in common. If a bacterial pathogen is isolated from a suspected food, the pathogen’s genetic pattern can be quickly compared with the patterns of pathogens isolated from patients. Matching patterns can indicate possible nationwide outbreaks and lead to public health actions such as epidemiologic investigations, product recalls, and long-term prevention measures. Geographic Scope: 46 state and 2 local public health laboratories—New York City and Los Angeles County—and the food safety laboratories of the Food and Drug Administration and USDA. Pathogens: E. coli O157:H7, Salmonella, Listeria, and Shigella. Cases Reported: In 2000, over 17,000 patterns were submitted to the CDC PulseNet database, and 105 potential outbreaks were investigated by state and local officials. The Salmonella Enteritidis Outbreak Surveillance System began in 1985. This passive system collects reports of outbreaks as they occur throughout the calendar year. States are encouraged to report any outbreak of Salmonella Enteritidis infection in their state to CDC. The surveillance system tracks morbidity and mortality associated with outbreaks and collects information on implicated food items and on the results of traceback investigations conducted by local agencies and the Food and Drug Administration. Surveillance data have been used to identify risk factors for Salmonella Enteritidis infection, contaminated food items, and groups that may benefit from education. Geographic Scope: National. Pathogen: Salmonella Enteritidis. Outbreaks Reported: In 1999, 44 confirmed outbreaks of Salmonella Enteritidis were reported, affecting U.S. residents in 17 states. The Sentinel Counties Study of Viral Hepatitis began in 1979 to collect more detailed information on risk factors for cases of acute viral hepatitis and to detect newly emerging viruses. Under contracts with CDC, county health departments collect data for each reported case and a serum sample for each reported case and report the information to CDC. In recent years, data from this system have been used to better characterize hepatitis A epidemiology and to develop molecular subtyping techniques. Geographic Scope: Six counties–Pinellas, Florida; Jefferson, Alabama; Denver, Colorado; Pierce, Washington; Multnomah, Oregon; and San Francisco, California. Pathogens: Hepatitis A; hepatitis B; and non-A, non-B hepatitis (including hepatitis C). Cases Reported: In 1999, 240 cases of hepatitis A, 134 cases of hepatitis B, and 32 cases of non-A, non-B hepatitis (including hepatitis C) were reported to CDC through this system. The Surveillance Outbreak Detection Algorithm was designed to detect unusual clusters of cases of a foodborne disease that indicate a potential outbreak. The algorithm was first used in 1996 for Salmonella cases. The algorithm compares, by serotype, the number of cases reported through the Public Health Laboratory Information System during a given week with a 5-year historical baseline for that serotype and week to detect unusual increases from the baseline. The weekly comparisons are done on a national, regional, and state basis. If they detect any unusual clusters, CDC notifies the affected state(s) by fax. 
The Surveillance Outbreak Detection Algorithm is useful for identifying multistate outbreaks, especially where individual cases may be quite diffuse. The software also has an interface with which any user can easily generate basic statistical information. The interface also produces graphs and maps to facilitate identification of trends or anomalies. State health departments have access to a limited version of the algorithm via the Public Health Laboratory Information System. Geographic Scope: National. Pathogens: Salmonella and Shigella. Cases Reported: Using the algorithm, CDC officials identified 133 potential Salmonella outbreaks in 1999 and 273 in 2000.

The Trichinellosis (Trichinosis) Surveillance System was created in 1947, when the U.S. Public Health Service began collecting statistics on cases of infection at the national level. In 1965, trichinellosis was included among the notifiable diseases that physicians report weekly to state health departments and to CDC through the National Morbidity Reporting System. A standardized surveillance form was developed to collect detailed information for each case. Geographic Scope: National. Pathogen: Trichinella spp. Cases Reported: In 1999, 12 cases were reported to CDC through this system.

The Typhoid Fever Surveillance System was established in 1962 to collect detailed information about all cases of Salmonella typhi. State health department officials are asked to complete a typhoid fever surveillance report form when a laboratory confirms a case of typhoid fever. The form collects demographic information about each case, as well as information about patients’ international travel and vaccination history, and the antibiotic susceptibility of isolates. This information is especially important for developing travel advisories, vaccination recommendations, and treatment guidelines. Geographic Scope: National. Pathogen: Salmonella typhi. Cases Reported: In 1999, 115 cases were reported to this system.

The Vibrio Surveillance System began in 1988 and is composed of two parts. One is a national passive system for reporting cases of toxigenic Vibrio cholerae infection (cholera), and the other is a more active system that covers all types of Vibrio infections in a more limited geographic area. For the active system, investigators use a standardized form to collect clinical data, information about patients’ underlying illnesses, and epidemiologic data about patients’ seafood consumption and exposure to seawater for the week preceding illness. Surveillance data have been used to identify environmental risk factors, retail food outlets where high-risk exposures occur, and groups that may benefit from consumer education. Geographic Scope: National for the cholera portion of the system; the non-cholera portion of the system initially included only the Gulf Coast states of Alabama, Florida, Louisiana, and Texas but is open to all states and has expanded to include, among others, the FoodNet sites and states along both the East and West coasts. Pathogen: Toxigenic Vibrio cholerae; Vibrio spp. Cases Reported: In 2000, four cases of Vibrio cholerae and 295 laboratory-confirmed cases of other types of Vibrio infections were reported to CDC through this system. To enhance the accuracy and completeness of reporting, CDC requests that participating states verify the information reported twice a year.
The Viral Hepatitis Surveillance Program was created in 1961 to collect demographic, clinical, serologic, and risk-factor information on cases of acute viral hepatitis. The data collected through the program are essential for monitoring trends in the epidemiologic characteristics of the various types of viral hepatitis. These data are also valuable for monitoring the effectiveness of prevention programs. Pathogens: Hepatitis A; hepatitis B; non-A, non-B hepatitis (including hepatitis C). Geographic Scope: National. Number of Cases Reported: In 1999, 17,047 cases of hepatitis A, 7,694 cases of hepatitis B, and 3,111 cases of non-A, non-B hepatitis were reported through the National Electronic Telecommunication Surveillance System. Information about risk factors was reported through the Viral Hepatitis Surveillance Program for approximately 33 percent of these cases. Source of Data: States report this information to CDC through the extended-record capability of the National Electronic Telecommunication Surveillance System or by submitting a paper form with this information.

In addition to those named above, Carolyn Boyce, Cathy Helm, Natalie Herzog, Cynthia Norris, Paul Pansini, and Stuart Ryba made key contributions to this report.
Foodborne diseases in the United States cause an estimated 76 million illnesses, 325,000 hospitalizations, and 5,000 deaths annually, according to the Centers for Disease Control and Prevention (CDC). Surveillance is the most important tool for detecting and monitoring both existing and emerging foodborne diseases. In the United States, surveillance for foodborne disease is also used to identify outbreaks--two or more cases of a similar illness that result from ingestion of a common food--and their causes. CDC has 18 surveillance systems used to detect cases or outbreaks of foodborne disease, pinpoint their cause, recognize trends, and develop effective prevention and control measures. Four principal systems--the Foodborne Disease Outbreak Surveillance System, PulseNet, FoodNet, and the Surveillance Outbreak Detection Algorithm--focus on foodborne diseases and cover more than one pathogen. Although CDC's systems have contributed to food safety, the usefulness of several of these surveillance systems is impaired both by CDC's untimely release of surveillance data and by gaps in the data collection. CDC is providing funds to state and local health departments to address their staffing and technology needs to help the states provide CDC with more complete information. CDC officials have entered into a cooperative agreement with the Association of Public Health Laboratories to assess the states' capability and capacity to address public health issues, including foodborne disease. CDC consults annually with the Council of State and Territorial Epidemiologists to encourage more standardized reporting among states.
The Prometheus 1 project is part of NASA’s Prometheus Nuclear Systems and Technology program to develop nuclear power technologies capable of providing power and propulsion for a new generation of missions. The Prometheus 1 spacecraft is being designed to use nuclear power and electric propulsion technologies to explore the outer reaches of the solar system. The Jupiter Icy Moons Orbiter (JIMO) mission—a 4- to 6-year study of three of Jupiter’s moons: Callisto, Europa, and Ganymede—was the original destination identified by NASA. The JIMO mission’s overarching science objectives were to (1) investigate the origin and evolution of the three moons; (2) scout their potential for sustaining life; and (3) determine the current rate of movement of surface ice and the rates at which the moons are weathered. With an unprecedented level of power, Prometheus 1, the first in a potential series of spacecraft, is expected to support the use of high capability science instruments and high power communications systems to provide scientists with an unprecedented amount of scientific information. Figure 1 depicts the notional Prometheus 1 spacecraft.

NASA contracted with the Jet Propulsion Laboratory (JPL) to manage the Prometheus 1 project and to manage development of the science mission payload. In turn, JPL awarded a $400-million contract for the initial development of the Prometheus 1 spacecraft to Northrop Grumman Space Technology in September 2004. NASA is collaborating with the Department of Energy’s Office of Naval Reactors to develop and handle all issues related to the spacecraft’s nuclear reactor.

The Prometheus 1 project will have to compete for funding with other NASA programs. In January 2004, the President charged NASA with implementing a new strategy for space exploration—which includes the Prometheus 1 project—while simultaneously returning the shuttle to flight status and completing the International Space Station. NASA laid out its plan for implementing the strategy in its fiscal year 2005 budget request. In essence, NASA’s implementation plan holds aeronautics, science, and other activities at near constant levels and transitions funding levels currently dedicated to the Space Station and shuttle programs to the space exploration strategy as the Space Station and shuttle programs phase out. This plan was predicated upon NASA’s annual funding level receiving increases to about $18 billion a year by fiscal year 2008 and then remaining near that level, except for inflation, through at least 2020.

In the last several years, we have undertaken a best practices body of work on how leading developers in industry and government use a knowledge-based approach to develop products that reduces risks and increases the likelihood of successful outcomes. Development of a sound business case based on this best practices model enables decision makers to be reasonably certain about their products at critical junctures during development and helps them make informed investment decisions. Our best practice work has shown that developing a sound business case based on matching requirements to resources is essential to implementing a knowledge-based approach. A sound business case includes the following elements: well-defined requirements, a preliminary design, realistic cost estimates, and mature technology. A knowledge-based business case also involves the use of controls or exit criteria to ensure that the required knowledge has been attained at each critical juncture.
It ensures that managers will (1) conduct activities to capture relevant product development knowledge, (2) provide evidence that knowledge was captured, and (3) hold decision reviews to determine that appropriate knowledge was captured to allow a move to the next phase. If the knowledge attained at each juncture does not confirm the business case on which the effort was originally justified, the program does not go forward. Use of this approach has enabled leading organizations to deliver high quality products on time and within budget. Product development efforts that have not followed a knowledge-based business case approach can be frequently characterized by poor cost, schedule, and performance outcomes. Although NASA does not require projects to develop a formal business case based on matching requirements to resources, JPL project implementation policy, which establishes JPL’s institutional structure for implementation and management of JPL flight projects in accordance with NASA policies, does require projects to develop documentation that includes elements essential to a sound business case. For example, before entering the preliminary design phase, JPL projects are required to develop preliminary requirements, a conceptual design, realistic cost estimates, and technology development plans. JPL projects are required to update and improve the fidelity of information in these documents by PDR. The information in these documents could provide NASA decision makers with the information necessary to support sound business case decisions based on matching requirements to resources at preliminary mission and systems review (PMSR) and PDR. In September 2004, the Congressional Budget Office reported that if NASA’s costs for implementing the strategy were similar to prior analogous NASA programs—such as Apollo, Viking, and Mars Exploration Rover—NASA’s funding needs could increase by 15 to 23 percent—or $40 billion to $61 billion—over the 16-year estimate. The Congressional Budget Office concluded that if funding were held constant, NASA would likely have to either eliminate mission content or delay schedules. NASA is still in the process of preparing initial justification for the Prometheus 1 project to enter the preliminary design phase. Consequently, at this time the level of funding NASA needs to execute the project is not fully defined. According to project officials, however, funding levels would need to be increased to support the planned launch of Prometheus 1 to Jupiter’s Icy Moons. While NASA plans to have defined preliminary system requirements and an initial estimate of the life-cycle cost for Prometheus 1 by summer 2005—when the project enters the preliminary design phase— the agency faces significant challenges in doing so. According to Prometheus 1 project management, current funding is inadequate to support a 2015 launch of Prometheus 1 as initially planned. Following small funding increases from fiscal years 2005 through 2007, the budget profile becomes relatively flat through fiscal year 2009 (see fig. 2). Project officials believe that the current profile would need to be increased beginning in fiscal year 2007 to reflect project needs of a Jupiter Icy Moons mission. Decision makers will not get their first comprehensive picture of the project’s requirements and the resources needed to meet those requirements—the first basis for funding decisions—until PMSR scheduled for summer 2005. 
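A minimal sketch of the decision-review controls described above follows: knowledge must be captured, backed by evidence, and confirmed at a review before an effort moves to the next phase. The criteria, evidence descriptions, and names used here are hypothetical examples, not NASA or GAO policy.

```python
# Illustrative sketch only: how a knowledge-based decision review might be expressed
# in code. The criteria and names are hypothetical examples, not NASA or GAO policy.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    evidence: str          # where the supporting evidence is documented
    satisfied: bool        # whether the review found the knowledge was captured

def decision_review(criteria):
    """Return True only if every exit criterion is satisfied with documented
    evidence; otherwise the effort does not move to the next phase."""
    unmet = [c.name for c in criteria if not (c.satisfied and c.evidence)]
    if unmet:
        print("Hold at current phase; unmet criteria:", ", ".join(unmet))
        return False
    print("All exit criteria met; proceed to next phase.")
    return True

if __name__ == "__main__":
    review_criteria = [
        Criterion("Preliminary requirements defined", "requirements document", True),
        Criterion("Conceptual design complete", "design review package", True),
        Criterion("Life-cycle cost estimate prepared", "", False),  # evidence missing
    ]
    decision_review(review_criteria)
```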
While the fiscal year 2006 request includes an updated Prometheus 1 funding profile, a funding profile based on life-cycle cost estimates—which NASA plans to have when it enters the preliminary design phase—will not be included until NASA’s fiscal year 2007 request. The Prometheus 1 project office is required to develop preliminary requirements by PMSR. Defining the project’s requirements and developing life-cycle cost estimates by then could be challenging, given the short time frames and NASA’s past difficulties developing requirements and estimates. While it is not unusual for a project at this stage in acquisition to still be defining requirements, several factors could make it difficult for NASA to develop preliminary requirements by PMSR. The contractor, Northrop Grumman, was only recently selected, and according to project officials, input from both the contractor and Office of Naval Reactors is needed to finalize the preliminary ground, space, and launch systems requirements mandatory for PMSR. In addition, NASA continues to refine its requirements. For example, Prometheus 1 project management increased requirements for reactor lifetime, reactor power, and propellant tank capacity to ensure that the Prometheus 1 spacecraft and reactor designs could be used to support follow-on missions. Currently, project managers are working with broad NASA requirements for deep space exploration and more refined project requirements specific to the Prometheus 1 ground, space, and launch systems. NASA is also required to have an initial life-cycle cost estimate for Prometheus 1 at PMSR. However, because the estimate is based on a conceptual design, preliminary system requirements, and detailed technology development plans that are not yet complete, it will be difficult for NASA to develop an estimate in the short time available by PMSR. The project office is working with Northrop Grumman to merge and finalize the conceptual design. Once the conceptual design is finalized, the project office will update the work breakdown structure and develop a “grass roots” estimate of the spacecraft cost. However, project officials do not expect to receive cost estimates from the Office of Naval Reactors and Northrop Grumman, which are also needed to develop the estimate, until the end of February 2005. The JPL Costing Office will prepare a separate cost estimate based on its experiences with prior programs, and both JPL and NASA will contract for additional independent cost estimates. Adding to these complexities, NASA has historically had difficulty establishing life-cycle cost estimates. In May 2004, we reported that NASA’s basic cost-estimating processes—an important tool for managing programs—lack the discipline needed to ensure that program estimates are reasonable. Specifically, we found that 10 NASA programs that we reviewed in detail did not meet all of our cost-estimating criteria—based on criteria developed by Carnegie Mellon University’s Software Engineering Institute. Moreover, none of the 10 programs fully met certain key criteria—including clearly defining the program’s life cycle to establish program commitment and manage program costs, as required by NASA. In addition, only three programs provided a breakdown of the work to be performed. Without this knowledge, we reported that the programs’ estimated costs may be understated and thereby subject to underfunding and cost overruns, putting programs at risk of being reduced in scope or requiring additional funding to meet their objectives. 
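To make concrete the “grass roots” estimating step described above, the sketch below rolls up element-level estimates from a work breakdown structure and applies a reserve. The WBS elements, dollar figures, and reserve factor are hypothetical placeholders, not actual Prometheus 1 data or NASA practice.

```python
# Illustrative sketch only: rolling up a "grass roots" estimate from work breakdown
# structure (WBS) elements. All element names, dollar figures, and the reserve factor
# are hypothetical placeholders, not actual project data.
def rollup(wbs_estimates, reserve_fraction=0.30):
    """Sum element-level estimates and apply a management reserve."""
    subtotal = sum(wbs_estimates.values())
    return subtotal, subtotal * (1 + reserve_fraction)

if __name__ == "__main__":
    estimates_millions = {
        "Spacecraft bus": 400.0,
        "Reactor module": 350.0,
        "Science payload": 120.0,
        "Ground systems and operations": 90.0,
    }
    subtotal, with_reserve = rollup(estimates_millions)
    print(f"Subtotal: ${subtotal:,.0f}M; with reserve: ${with_reserve:,.0f}M")
```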
In this report we recommended that NASA take a number of actions to improve its cost-estimating practices. NASA concurred, noting that our recommendations validated and reinforced the importance of activities underway at NASA.

By PDR—which occurs at the end of the preliminary design phase and is scheduled for 2008—the fidelity of the information is expected to improve and could allow NASA to develop a business case that would match requirements with resources and provide decision makers with the information needed to determine whether continued investment in the project is warranted. However, in the past NASA has had difficulties developing the realistic requirements and cost estimates needed to develop a sound business case.

To help ensure program requirements do not outstrip resources, leading commercial firms obtain the right knowledge about a new product’s technology, design, and production at the right time. We have issued a series of reports on the success these firms have had in estimating the time and money to develop new and more sophisticated products—the kinds of results that NASA seeks. Our best practice work has shown that developing business cases based on matching requirements to resources before program start leads to more predictable program outcomes—that is, programs are more likely to be successfully completed within cost and schedule estimates and deliver anticipated system performance. A sound business case includes the following elements—well-defined requirements, a preliminary design, realistic cost estimates, and mature technology.

While NASA does not require projects to develop a formal business case based on matching requirements to resources, JPL policy, which implements NASA policy, does require projects to develop documentation that could support formulation of a sound business case. Before a JPL project enters the preliminary design phase, JPL project implementation policy requires that the project develop preliminary requirements, a conceptual design, realistic cost estimates, and technology development plans. This policy also requires that the fidelity of information in these documents improve by PDR. The requirements and resource estimates NASA is developing for PMSR could form the basis for an initial business case based on matching Prometheus 1 requirements to available resources. However, Prometheus 1 project management plans to continue directing requirements changes to accommodate follow-on missions. While our work shows that the preliminary design phase is the appropriate place to conduct systems engineering to support requirement/cost trade-off decisions, NASA needs to remain cognizant that adding requirements could increase cost and risk.

In addition, NASA has had past difficulty developing the realistic requirements and cost estimates needed to develop a sound business case. These difficulties have resulted in the termination of several major efforts after significant investment of resources. For example, in 2002 NASA terminated the Space Launch Initiative (SLI) program—a $4.8 billion, 5-year program to build a new generation of space vehicles to replace its aging space shuttle. SLI was a complex and challenging endeavor for NASA, both technically and from a business standpoint. The SLI program faced some of the same challenges that Prometheus 1 is struggling with today, such as the need to develop and advance new airframe and propulsion technologies.
SLI did not achieve its goals, in part, because NASA did not develop realistic requirements and cost estimates.

Leading firms make an important distinction between technology development and product development. Technologies that are not mature continue to be developed in the technology base—they are not included in a product development. Our best practices work has also shown that there is a direct relationship between the maturity of technologies and the accuracy of cost and schedule estimates. NASA’s Prometheus 1 technologies are currently immature. The Prometheus 1 project office is preparing technology development plans to guide the development of each key technology during the preliminary design phase. Maturing technologies during the preliminary design phase is a key element of matching needs to resources before entering the product development phase. Our best practices work has shown that technology readiness levels (TRL)—a concept developed by NASA—can be used to gauge the maturity of individual technologies (see fig. 4). (See app. I for a detailed definition of TRLs.) Specifically, TRL 6—demonstrating a technology as a fully integrated prototype in a realistic environment—is the level of maturity needed to minimize risks for space systems entering product development.

While development of Prometheus 1 critical technologies is under way, the technologies will require extensive advancement before they are mature enough to provide the revolutionary capabilities of the Prometheus 1 spacecraft. The overall technology objective for Prometheus 1 is to safely develop and operate a spacecraft with a nuclear-reactor-powered electric propulsion system. To achieve this objective, the spacecraft will require advancement in several technology areas, including nuclear electric power, power conversion and heat rejection systems, nuclear electric propulsion, high power communications, radiation-hardened electronics, and AR&D. (See app. II for a more detailed explanation of these technologies.) NASA’s fiscal year 2005 budget request indicates that these technologies are either at TRL 3 (individual technologies have been demonstrated in a laboratory environment) or TRL 4 (system components have been demonstrated in a laboratory environment). Before NASA conducts the PDR in 2008, it will need to mature the technologies—each of which comes with a unique set of engineering challenges.

To gauge the maturation of the Prometheus 1 technologies, the Prometheus 1 project office is preparing technology development plans, which rely on the use of maturity criteria tables (MCT), a concept similar to TRLs. The specific maturation criteria for each technology vary greatly, but all technologies are to be matured by PDR to the point that developmental models are complete, all major risks to each technology are retired, all major manufacturing issues are resolved, and plans for obtaining life data that will provide confidence that the hardware will meet the mission lifetime requirements are in hand. Prometheus 1 project officials believe these criteria roughly correspond to a TRL 5 (component and/or breadboard validation in a relevant environment) or a TRL 6 (system/subsystem model or prototype demonstration in a relevant environment).
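The sketch below expresses the readiness-tracking idea in code: each critical technology's current TRL is compared against the TRL 6 benchmark that GAO's best practices work identifies for entering product development. The specific TRL values assigned to each technology are placeholders drawn from the TRL 3 to 4 range cited above, not official project assessments.

```python
# Illustrative sketch only: tracking each critical technology's current readiness
# against the TRL 6 benchmark described in the text. The per-technology TRL values
# are placeholders, not official project assessments.
REQUIRED_TRL_AT_PDR = 6

current_trl = {
    "Nuclear electric power": 3,
    "Power conversion and heat rejection": 4,
    "Nuclear electric propulsion": 4,
    "High power communications": 3,
    "Radiation-hardened electronics": 3,
    "AR&D": 4,
}

def readiness_gaps(assessments, required=REQUIRED_TRL_AT_PDR):
    """Return the technologies that fall short of the benchmark and by how much."""
    return {tech: required - trl for tech, trl in assessments.items() if trl < required}

if __name__ == "__main__":
    for tech, gap in sorted(readiness_gaps(current_trl).items()):
        print(f"{tech}: {gap} TRL level(s) short of TRL {REQUIRED_TRL_AT_PDR}")
```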
The program office’s position is that using MCTs that are equivalent to TRL 5 and TRL 6 at PDR is appropriate because the program office is both the technology developer and product developer and, as such, has a thorough understanding of how mature the technologies need to be at certain points in time as the program progresses. Nevertheless, the dual role of the project office as both technology and product developer is not unique, and our best practices body of work shows that a TRL 6 is the level of maturity needed to minimize risks for space systems entering product development.

NASA is quickly approaching one of the most critical phases in its acquisition of Prometheus 1—the preliminary design phase. While the changes made to the program—subsequent to our providing a draft of this report to NASA for comment—recognize the technical, schedule, and operational risks of this program, there is still much work to be done. Based on the information presented at PMSR, now scheduled for summer 2005, NASA will need to decide at what level to fund the project. However, NASA will be challenged to develop the information required at PMSR, given the compressed time frames. Although PDR is still several years out, NASA will face significant challenges in meeting this milestone, given the immaturity of the revolutionary technologies that NASA anticipates will be needed to successfully launch Prometheus 1. While NASA is developing well-defined criteria tables for maturing Prometheus 1 technologies, the many inherent unknowns in developing technologies frequently result in unanticipated difficulties and delays.

NASA’s current policy does not require projects to develop knowledge-based business cases that match requirements to available resources and include controls to ensure that sufficient knowledge has been attained, and therefore the agency had not planned to develop such a business case for Prometheus 1. We have found, however, that establishing a formal business case based on a knowledge-based approach that includes matching requirements and available resources—which include technical and engineering knowledge, time, and funding—and controls to ensure that sufficient knowledge has been attained at critical junctures within the product development process is an essential part of any product development justification. The risk associated with failing to meet these challenges is considerable. If NASA decides to move forward without adequate information at PMSR—that matches requirements and available resources and provides NASA decision makers with a clear understanding of Prometheus 1’s potential return on investment—Prometheus 1 may be unable to compete for funding within NASA. Ultimately, NASA could find, as it has in the past, that the program must be cancelled after having invested millions of dollars.
We recommend that the NASA Administrator take the following two actions: first, identify at PMSR the level of resources the agency is committing to the project and direct project officials to develop project requirements based on this resource constraint; and second, ensure that, prior to proceeding beyond PDR (currently planned for 2008), a sound business case is established which includes confirmation that (1) critical technologies have been successfully demonstrated as mature, (2) systems engineering has been conducted to support requirements/cost trade-off decisions, (3) requirements and resource estimates have been updated based on the results of the preliminary design phase, (4) knowledge-based criteria are established at each critical juncture to ensure that relevant product development knowledge is captured, and (5) decision reviews are held to determine that appropriate knowledge was captured to allow a move to the next phase.

In written comments on a draft of this report, NASA’s Deputy Administrator stated that the agency concurs with the recommendations, adding that the recommendations validate and reinforce the importance of activities underway at NASA to improve NASA’s management of complex technical programs. Subsequent to our draft report being provided to NASA for comment, significant changes were made to the Prometheus 1 project. NASA’s fiscal year 2006 budget request includes changes to the Prometheus 1 project that directly address the recommendations in this report. According to NASA’s budget justification, the agency is planning a less complex mission than the original JIMO mission. According to program officials whom we consulted following the release of the budget, eliminating the long reactor lifetime, stringent radiation hardening, multiple launches, and AR&D required for the JIMO mission will allow NASA “to walk before it runs” and significantly reduce cost and technical risks. As a result, NASA has delayed PMSR until summer 2005 and is conducting an analysis of alternatives to identify a relevant mission with reduced technical, schedule, and operational risk. The fiscal year 2006 budget request also reshapes the Prometheus 1 funding profile to provide an orderly increase in developmental activities.

Notwithstanding agreement with our recommendations, the Deputy Administrator stated that NPR 7120.5B requires projects to develop a business case. As we noted in this draft report, we recognize that NASA policy requires the development of elements that could support the formulation of a knowledge-based business case. However, we found no explicit requirement within NPR 7120.5B for NASA projects to develop a business case of any kind. More importantly, while NPR 7120.5B does require that projects establish controls to monitor performance against cost, schedule, and performance baselines and to conduct reviews throughout the project’s lifecycle, it does not establish specific knowledge-based controls to ensure that the knowledge necessary to match resources to requirements is in hand before moving forward. For example, whereas NPR 7120.5B requires projects to conduct a preliminary design review before entering NASA’s implementation phase, i.e., product development, it does not establish knowledge-based criteria to ensure that technologies needed to meet essential product requirements have been demonstrated to work in a realistic environment.
Likewise, NASA policy requires a critical design review during a project’s implementation phase but does not include knowledge-based criteria to ensure the design is stable. We have found that such knowledge-based criteria, when tied to major events on a program’s schedule, can disclose whether gaps or shortfalls exist in demonstrated knowledge, which can presage future cost, schedule, and performance problems.

In his comments, the Deputy Administrator also noted that the Exploration Systems Mission Directorate is in the process of initiating a number of reforms to its project management policies and specified formulation dates in the coming months. He outlined these reforms and explained how they will allow NASA to address the recommendations in our report. We are encouraged by these planned changes. If properly implemented, they could be positive steps toward implementing a knowledge-based approach to project management. The Deputy Administrator also requested that the relationship between JPL and NASA project management requirements be explicitly stated in the report. We moved the information from a footnote into the body of the report to clarify that relationship. We also addressed NASA’s technical comments as appropriate throughout the report.

To determine whether NASA is establishing justification for the project and ensuring critical technologies are mature, we conducted interviews with NASA Exploration Systems Mission Directorate and Prometheus 1 project officials at NASA Headquarters, Washington, D.C.; Marshall Space Flight Center, Huntsville, Ala.; and the Jet Propulsion Laboratory, Pasadena, Calif. We obtained and reviewed pertinent documents from the agency. We conducted quantitative and qualitative analyses of project schedules, risk assessments, budget documentation, technology maturity assessments, and technology development plans. We compared these documents to criteria established in JPL and NASA policies governing developmental programs and to criteria for a knowledge-based approach to acquisition described in GAO’s best practices body of work. We discussed key project challenges with Prometheus 1 project officials, and conducted GAO team meetings to discuss analyses and developing issues. Our audit work was conducted between April 2004 and January 2005.

As agreed with your office, unless you announce its contents earlier, we will not distribute this report further until 30 days from its issuance date. At that time, we will send copies to the NASA Administrator and interested congressional committees. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or [email protected]. Key contributors to this report are acknowledged in appendix IV.

The technology readiness levels referred to in this report are summarized below; for each level, the description is followed, where available, by the associated hardware.

TRL 1. Hardware: None (paper studies and analysis).

TRL 2. Description: Invention begins. Once basic principles are observed, practical applications can be invented. The application is speculative and there is no proof or detailed analysis to support the assumption. Examples are still limited to paper studies. Hardware: None (paper studies and analysis).
TRL 3. Description: Active research and development is initiated. This includes analytical studies and laboratory studies to physically validate analytical predictions of separate elements of the technology. Examples include components that are not yet integrated or representative. Hardware: Analytical studies and demonstration of nonscale individual components (pieces of subsystem).

TRL 4. Description: Basic technological components are integrated to establish that the pieces will work together. This is relatively “low fidelity” compared to the eventual system. Examples include integration of “ad hoc” hardware in a laboratory. Hardware: Low fidelity breadboard. Integration of nonscale components to show pieces will work together. Not fully functional or form or fit but representative of technically feasible approach suitable for flight articles.

TRL 5. Description: Fidelity of breadboard technology increases significantly. The basic technological components are integrated with reasonably realistic supporting elements so that the technology can be tested in a simulated environment. Examples include “high fidelity” laboratory integration of components. Hardware: High fidelity breadboard. Functionally equivalent but not necessarily form and/or fit (size, weight, materials, etc.). Should be approaching appropriate scale. May include integration of several components with reasonably realistic support elements/subsystems to demonstrate functionality. Lab demonstrating functionality but not form and fit. May include flight demonstrating breadboard in surrogate aircraft. Technology ready for detailed design studies.

TRL 6. Description: Representative model or prototype system, which is well beyond the breadboard tested for TRL 5, is tested in a relevant environment. Represents a major step up in a technology’s demonstrated readiness. Examples include testing a prototype in a high fidelity laboratory environment or in simulated operational environment. Hardware: Prototype. Should be very close to form, fit and function. Probably includes the integration of many new components and realistic supporting elements/subsystems if needed to demonstrate full functionality of the subsystem. High-fidelity lab demonstration or limited/restricted flight demonstration for a relevant environment. Integration of technology is well defined.

TRL 7. Description: Prototype near or at planned operational system. Represents a major step up from TRL 6, requiring the demonstration of an actual system prototype in an operational environment, such as in an aircraft, vehicle or space. Examples include testing the prototype in a test bed aircraft. Hardware: Prototype. Should be form, fit and function integrated with other key supporting elements/subsystems to demonstrate full functionality of subsystem. Flight demonstration in representative operational environment such as flying test bed or demonstrator aircraft. Technology is well substantiated with test data.

TRL 8. Description: Technology has been proven to work in its final form and under expected conditions. In almost all cases, this TRL represents the end of true system development. Examples include developmental test and evaluation of the system in its intended weapon system to determine if it meets design specifications.

TRL 9. Description: Actual application of the technology in its final form and under mission conditions, such as those encountered in operational test and evaluation. In almost all cases, this is the end of the last “bug fixing” aspects of true system development. Examples include using the system under operational mission conditions.

The nuclear reactor is the key element of the Prometheus 1 spacecraft. Without the power levels supplied by the reactor, the proposed propulsion, science, and communication systems are not feasible. Designing, constructing, and utilizing highly reliable, safe, portable nuclear reactors is not new—nuclear reactors have been used in submarines for almost 50 years.
However, the United States has very little experience operating nuclear reactors in a space environment and tackling space-unique nuclear application issues. The Office of Naval Reactors, the organizational unit in the Department of Energy responsible for developing nuclear reactors for the Navy, will be responsible for all portions of the Prometheus 1 reactor development effort. The space environment places significant weight constraints on the reactor design and requires semi-autonomous control. Unlike submarines and aircraft carriers, all spacecraft have serious weight constraints driven by the cost of launching payloads into orbit. Consequently, spacecraft designers put great effort into eliminating weight. Further, where conventional reactors have hands-on operators, the Prometheus 1 reactor must be remotely controlled. NASA estimates that control communications will take about 40 minutes to travel one way between Earth and the Jovian system.

A power conversion system accepts the thermal energy from the reactor and converts it to useful electrical power for the spacecraft. Power conversion is an integral part of any power generation system, taking the form of steam turbine generators in terrestrial utility plants and nuclear submarines. NASA is considering two types of power conversion systems—dynamic and static. According to NASA, the dynamic systems under consideration offer the benefits of increased efficiency, reduced weight and mass, and decreased nuclear fuel requirements. The static systems, however, have a technology heritage in prior spacecraft and could offer increased reliability because they have no moving parts. Since the conversion process in a fission reactor is never 100 percent efficient, heat rejection is required to dissipate waste energy. This is usually accomplished with large pumped-water cooling systems on Earth. Space-based power conversion would require a large radiator system to dissipate the waste heat in the vacuum of space. The requirement to fold the large radiator system into the launch vehicle fairing and deploy it after launch complicates the radiator system design. (See fig. 1.)

Operating electric propulsion systems in space applications, including deep space, is not new. There is extensive experience with electric propulsion systems on satellites. In addition, NASA’s Deep Space 1 spacecraft was propelled using an electric propulsion ion thruster, similar in nature to the concept being developed for Prometheus 1. The thruster power levels required by Prometheus 1 have been demonstrated in a laboratory environment. The lifetime required by Prometheus 1, however, has not been demonstrated. Furthermore, lifetime testing of existing ion thrusters has demonstrated that these thrusters were approaching “wear out failure” after 30,352 hours. The Prometheus 1 thrusters will need to be qualified for operational durations approaching 120,000 hours. NASA recognizes that it will have to develop models and accelerated aging techniques to demonstrate the lifetime requirement.

The nuclear reactor will provide increased electrical power for communications. This translates to increased bandwidth and data rates. The high power communications system onboard the Prometheus 1 spacecraft will provide tens of compact disks full of data back to Earth. Analogous missions such as Cassini provide only a couple of floppy disks full of data. (A floppy disk typically holds about 1.44 MB of data. A compact disk typically holds about 700 MB of data.)
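The short worked example below checks two of the figures quoted above: the distance implied by a 40-minute one-way signal delay and the relative capacity of a compact disk versus a floppy disk. It is illustrative arithmetic only, using rounded constants (speed of light of about 3.0e8 m/s and 1 AU of about 1.496e11 m).

```python
# Illustrative arithmetic only, using rounded constants.
LIGHT_SPEED_M_S = 3.0e8
AU_IN_M = 1.496e11

# A ~40-minute one-way signal delay implies an Earth-Jupiter distance of roughly:
one_way_seconds = 40 * 60
distance_au = one_way_seconds * LIGHT_SPEED_M_S / AU_IN_M
# Prints about 4.8 AU, which is within the roughly 4.2 to 6.2 AU range over which
# the Earth-Jupiter separation varies.
print(f"~{distance_au:.1f} AU")

# Comparing the data-return figures quoted in the text: one CD versus one floppy disk.
print(f"One CD holds roughly {700 / 1.44:.0f} times as much data as one floppy disk.")
```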
According to project officials, the higher power communications system on the Prometheus 1 spacecraft will require upgrades to the Deep Space Network, which are out of the purview of the Prometheus 1 project. There is no launch vehicle in the present or proposed U.S. inventory capable of launching the Prometheus 1 spacecraft, conceived for a mission to Jupiter’s Icy Moons, into orbit in one piece. The conceptual design currently shows the Prometheus 1 spacecraft to weigh between 29 and 36 metric tons and be about 58 meters in length. The current concept is to use multiple launches, 2 to 5, to place the spacecraft components in orbit and to use AR&D technology to assemble the spacecraft in orbit. Prometheus 1 is relying on NASA’s Demonstration Autonomous Rendezvous Technology and Hubble Robotic Servicing Mission, and the Defense Advanced Research Projects Agency’s Orbital Express programs for AR&D technology. These programs use different sensors and approaches to AR&D thereby providing Prometheus 1 with various options for consideration. In addition to the contact named above, James Morrison, Jerry Herley, John Warren, Tom Gordon, Ruthie Williamson, Karen Sloan and Sylvia Schatz made key contributions to this report. Best Practices: Using a Knowledge-Based Approach to Improve Weapon Acquisition, GAO-04-386SP. Washington, D.C.: January 2004. Best Practices: Setting Requirements Differently Could Reduce Weapon Systems’ Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003. Defense Acquisitions: DOD’s Revised Policy Emphasizes Best Practices, but More Controls Are Needed. GAO-04-53. Washington, D.C.: November 10, 2003. Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002. Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002. Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001. Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes. GAO/NSIAD-00-199. Washington, D.C.: July 31, 2000. Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000. Best Practices: DOD Training Can Do More to Help Weapon System Program Implement Best Practices. GAO/NSIAD-99-206. Washington, D.C.: August 16, 1999. Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999. Defense Acquisitions: Best Commercial Practices Can Improve Program Outcomes. GAO/T-NSIAD-99-116. Washington, D.C.: March 17, 1999. Defense Acquisition: Improved Program Outcomes Are Possible. GAO/T-NSIAD-98-123. Washington, D.C.: March 18, 1998. Best Practices: DOD Can Help Suppliers Contribute More to Weapon System Programs. GAO/NSIAD-98-87. Washington, D.C.: March 17, 1998. Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD’s Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998. Major Acquisitions: Significant Changes Underway in DOD’s Earned Value Management Process. GAO/NSIAD-97-108. Washington, D.C.: May 5, 1997. Best Practices: Commercial Quality Assurance Practices Offer Improvements for DOD. GAO/NSIAD-96-162. Washington, D.C.: August 26, 1996.
In 2003, the National Aeronautics and Space Administration (NASA) initiated the Prometheus 1 project to explore the outer reaches of the Solar System. The Prometheus 1 spacecraft is being designed to harness nuclear energy that will increase available electrical power from about 1,000 watts to over 100,000 watts and enable the use of electric propulsion thrusters. Historically, NASA has had difficulty implementing some initiatives. NASA's failure to adequately define requirements and quantify the resources needed to meet those requirements has resulted in some projects costing more, taking longer, and achieving less than originally planned. Prometheus 1 will need to compete for NASA resources with other space missions--including efforts to return the shuttle safely to flight and complete the International Space Station. GAO was asked to determine (1) whether NASA is establishing initial justification for its investment in the Prometheus 1 project and (2) how the agency plans to ensure that critical technologies will be sufficiently mature at key milestones. NASA is in the process of establishing initial justification for its investment in the Prometheus 1 project but faces challenges establishing preliminary requirements and developing accurate cost estimates. Decision makers will not get their first comprehensive picture of the project's requirements and the resources needed to meet those requirements until the preliminary mission and systems review, scheduled for summer 2005. Defining the project's requirements and developing life-cycle cost estimates by then could be challenging, given the short time frames. The fidelity of this information should improve by the preliminary design review scheduled for 2008. At that time, NASA has the opportunity to use these more refined requirements and cost estimates to establish a sound business case for its investment in the Prometheus 1 project. According to Prometheus 1 project management, a flat funding profile is inadequate to ramp up for the planned 2015 launch of Prometheus 1, the project's first spacecraft to its original destination of Jupiter's Icy Moons. By matching requirements to resources a sound business case would allow NASA to determine whether trade-offs in the design of the spacecraft or the agency's expectations are needed to avoid outstripping available resources. Significant program cost and schedule increases in past programs can be traced to not matching requirements with resources at preliminary design review. While development of the Prometheus 1 technologies is under way, each will require extensive advancement before they are mature enough to support reliable cost estimates. NASA is preparing technology development plans that include measurable criteria to ensure the Prometheus 1 technologies are on track for meeting NASA's maturity requirements through the end of the preliminary design phase. GAO's best practices work has shown, however, that establishing a formal business case based on a knowledge-based approach that includes matching requirements and available resources--which include technical and engineering knowledge, time, and funding--and controls to ensure that sufficient knowledge has been attained at critical junctures within the product development process is an essential part of any product development justification. 
NASA's current policy does not require projects to develop knowledge-based business cases that match requirements to available resources and include controls to ensure that sufficient knowledge has been attained. Therefore, the agency had not planned to develop such a business case for Prometheus 1. Since GAO provided our draft report to NASA for comment, the agency released its fiscal year 2006 budget request that includes changes to Prometheus 1. If properly implemented, these changes could be positive steps in addressing the findings and recommendations in this report.
For decades, the federal government has relied on firefighting aircraft to assist in wildland fire suppression activities. These aircraft perform various firefighting activities, including gathering intelligence by detecting fires and conducting assessments of ongoing fires; delivering supplies such as water, food, and ground-based firefighting equipment; transporting firefighters; providing coordination and direction to aerial and ground-based firefighters; and delivering retardant or water to extinguish or slow the growth of fires. The federal government uses different types of firefighting aircraft, including large airtankers, very large airtankers, single-engine airtankers, amphibious fixed-wing water scoopers, helicopters, and fixed-wing surveillance and smokejumper aircraft to perform these aerial fire suppression activities. Table 1 describes these firefighting aircraft and their functions. In general, multiple types of aircraft operate simultaneously to suppress fires. For example, airtankers that drop retardant or water often work in tandem with surveillance aircraft— lead planes—that coordinate the firefighting operation and guide the airtankers in dropping the retardant or water in the correct location. The 2013 Interagency Standards for Fire and Fire Aviation Operations defines several types of federal firefighting aircraft—including large and very large airtankers, large and medium helicopters, and surveillance and smokejumper aircraft—as national resources that can be deployed anywhere in the country and support fire suppression operations in any jurisdiction, including federal lands and nonfederal lands in accordance with relevant intergovernmental agreements. In most instances, firefighting aircraft that drop retardant or water do not extinguish wildland fires but instead slow the spread of fires or reduce their intensity as firefighters on the ground work to contain or suppress fires. Firefighting aircraft that deliver retardant or water support ground- based firefighters by performing two main functions: (1) dropping retardant around wildland fires to slow fire growth to provide ground- based firefighters additional time to build or reinforce fireline and (2) reducing the intensity of fires by dropping water directly on them. In general, airtankers deliver retardant around fires to slow their spread, water scoopers drop water directly on fires to reduce their intensity, and helicopters can perform either function. Currently, all large and very large airtankers in the federal fleet are aircraft initially designed for other purposes—such as maritime patrol or civilian passenger transport—that have been retrofitted for the aerial fire suppression mission through the incorporation of retardant delivery systems—tanks affixed to aircraft that hold and release retardant. Conversely, single-engine airtankers and water scoopers are built to drop retardant and water, respectively, to fight wildland fires. Traditionally, airtankers have used retardant delivery systems that rely on gravity to evacuate retardant via doors that open in the bottom of the aircraft. However, some systems have been developed that use compressed air to force retardant out of the aircraft through nozzles rather than doors. Fire suppression activities can generally be categorized as initial attack, extended attack, or large fire support. Initial attack activities include those conducted during the first “operational period” after the fire is reported, generally within 24 hours. 
When fires are not controlled through initial attack, extended attack activities occur that generally involve the use of additional firefighting resources; when such fires grow large and complex, these activities may be referred to as large fire support. Federal and state wildland fire responders rely on a tiered interagency dispatch process for requesting and coordinating the use of firefighting resources, including aircraft, to respond to wildland fires. For example, when a wildland fire is reported, a local dispatch center identifies and dispatches, if available, fire response resources such as firefighters, aircraft, and equipment to perform initial attack activities. If sufficient resources are not available, local dispatch centers can request additional resources from the appropriate geographic area coordination center. In the event that sufficient resources are not available within a geographic area, its geographic area coordination center can request additional resources from the National Interagency Coordination Center, which serves as the focal point for coordinating the mobilization of resources for wildland fire and other incidents throughout the United States. A number of interagency organizations develop interagency firefighting standards, including those pertaining to the development and use of firefighting aircraft, and coordinate federal firefighting efforts. To coordinate the overall firefighting efforts of the Forest Service and other federal land management agencies, the interagency National Wildfire Coordinating Group was established in 1974. This interagency group develops and maintains standards, guidelines, and training and certification requirements for interagency wildland fire operations. Within this group, the National Interagency Aviation Committee is an interagency body of federal and state aviation operations managers responsible for providing common policy and direction for aviation resources involved in wildland firefighting. This committee was established to serve as a body of aviation experts, assisting the National Wildfire Coordinating Group with recognizing opportunities to enhance safety, effectiveness, and efficiency in aviation-related operations, procedures, programs, and coordination. In turn, the National Interagency Aviation Committee chartered the Interagency Airtanker Board to review and approve retardant and water delivery systems based on established performance criteria. The approval process—which includes an assessment of system design, testing of the systems’ performance, and a physical inspection of the aircraft with system installed—ensures that the systems meet basic standards for delivery of retardant or water. Interagency Airtanker Board approval serves as a guide to participating federal and state agencies for identifying acceptable aircraft and retardant or water delivery systems that may compete for agency contracts. The federal firefighting aircraft fleet includes some aircraft that are government owned, but most are obtained through contracts with private industry vendors. For example, the federal government owns some surveillance and smokejumper aircraft and contracts for the remainder, along with helicopters and aircraft that deliver retardant or water, from private industry vendors that own, operate, and maintain them. Currently, the Forest Service issues contracts for large and very large airtankers, as well as large and medium helicopters, and Interior issues contracts for single-engine airtankers and water scoopers. 
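The tiered dispatch process described above can be thought of as an escalation lookup: a request is filled at the lowest tier that has the resource available and escalates otherwise. The sketch below is illustrative only; the availability data and function names are hypothetical, and the real process involves formal ordering systems and intergovernmental agreements not modeled here.

```python
# Illustrative sketch only: the tiered escalation described above, expressed as a
# simple lookup. Tier names follow the text; availability data and function names
# are hypothetical.
TIERS = [
    "local dispatch center",
    "geographic area coordination center",
    "National Interagency Coordination Center",
]

def request_resource(resource, available_by_tier):
    """Walk up the dispatch tiers until the requested resource can be filled."""
    for tier in TIERS:
        if available_by_tier.get(tier, {}).get(resource, 0) > 0:
            available_by_tier[tier][resource] -= 1
            return f"{resource} filled by the {tier}"
    return f"{resource} request unable to be filled at any tier"

if __name__ == "__main__":
    availability = {
        "local dispatch center": {"large airtanker": 0, "helicopter": 1},
        "geographic area coordination center": {"large airtanker": 1},
        "National Interagency Coordination Center": {"large airtanker": 2},
    }
    print(request_resource("helicopter", availability))
    print(request_resource("large airtanker", availability))
```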
The agencies use two types of contracts for obtaining firefighting aircraft from vendors: exclusive-use and call-when-needed. Exclusive-use contracts require a vendor to provide an aircraft for service on any day covered by the “mandatory availability period” stipulated in the contract. The agencies pay vendors a daily rate regardless of whether the aircraft is used and also pay a fee for each hour flown if the aircraft is used. Conversely, call- when-needed contracts do not guarantee vendors any fee unless the aircraft is called upon to provide aerial fire support. This type of contract allows the government the flexibility to pay for firefighting aircraft only when they are used. However, the daily availability and flight hour rates for call-when-needed contracts are generally higher than those for exclusive-use contracts. In contrast to large airtankers, other types of firefighting aircraft are generally more available for federal contracting. For example, the agencies plan to have over 100 helicopters available in 2013 for fire suppression activities through exclusive-use contracts with hundreds more available through call-when-needed contracts. See appendix II for the number and types of aircraft in the federal firefighting aircraft fleet in 2013 and their associated cost rates. The Forest Service and Interior have also established agreements with other governments (i.e., cooperator governments), as well as the military, to augment the national firefighting aircraft fleet during periods of heavy fire activity. The United States and Canada have established a mutual aid agreement whereby the National Interagency Coordination Center and the Canadian Interagency Forest Fire Centre can request firefighting resources, including aircraft, from each other during periods of heavy fire activity. Similarly, some U.S. states and Canadian provinces have established regional intergovernmental agreements to facilitate the sharing of firefighting resources: the Northwest Fire Protection Agreement, the Great Lakes Forest Fire Compact, and the Northeastern Forest Fire Protection Compact. Through these agreements, firefighting resources, including aircraft, can be dispatched from their contracted agency, state, or province to assist on fires on other lands covered by the agreement. The Forest Service can also obtain aerial firefighting support through the Modular Airborne Firefighting System (MAFFS) program under an agreement with DOD. Under this program, DOD provides Lockheed Martin C-130 Hercules aircraft as additional capacity for aerial firefighting when requested by the Forest Service. Each of the aircraft is equipped with a MAFFS unit—a portable, pressurized retardant delivery system that can be inserted into military C- 130 aircraft to convert them into large airtankers when needed. The Forest Service owns the MAFFS units (eight in total) and provides the retardant, and DOD provides the C-130 aircraft, pilots, and maintenance and support personnel to fly the missions. A new generation of MAFFS units became operational in February 2009, and the fleet has since transitioned to use this system exclusively. Since 1995, the Forest Service and Interior have cumulatively produced nine major studies and strategy documents related to their firefighting aviation needs, but the agencies’ efforts to identify the number and type of firefighting aircraft needed have been hampered by limited information and collaboration. 
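The exclusive-use and call-when-needed rate structures described earlier in this section can be compared with simple arithmetic, as the sketch below illustrates. All dollar rates and usage figures are hypothetical placeholders, not actual contract terms.

```python
# Illustrative sketch only: comparing the two contract types described in the text.
# All dollar rates and usage figures are hypothetical placeholders.
def exclusive_use_cost(daily_rate, availability_days, hourly_rate, flight_hours):
    """Daily rate is paid for every day of the mandatory availability period,
    plus an hourly rate for hours actually flown."""
    return daily_rate * availability_days + hourly_rate * flight_hours

def call_when_needed_cost(daily_rate, days_used, hourly_rate, flight_hours):
    """Rates apply only on days the aircraft is actually called up, but are
    generally higher than exclusive-use rates."""
    return daily_rate * days_used + hourly_rate * flight_hours

if __name__ == "__main__":
    hours = 120
    eu = exclusive_use_cost(daily_rate=10_000, availability_days=100,
                            hourly_rate=5_000, flight_hours=hours)
    cwn = call_when_needed_cost(daily_rate=18_000, days_used=30,
                                hourly_rate=8_000, flight_hours=hours)
    print(f"Exclusive-use: ${eu:,.0f}; call-when-needed: ${cwn:,.0f}")
```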
In particular, these efforts did not include information on the performance and effectiveness of firefighting aircraft and involved limited collaboration between agencies and with stakeholders in the fire aviation community. Forest Service and Interior efforts to identify the number and type of firefighting aircraft they need have largely consisted of developing major studies and strategy documents—nine since 1995. Based on reviews of academic and government studies and interviews with officials and representatives from across the fire aviation community, we identified the following key elements as important for understanding firefighting aircraft needs:

Aircraft types – aircraft manufacturer, model, and size classification;
Basing options – potential locations for aircraft bases;
Acquisition models – options for obtaining aircraft, including purchasing aircraft or using vendor-owned aircraft;
Aircraft capabilities – required capabilities of aircraft, such as retardant capacity and speed;
Suppression methods – how to use aircraft to suppress fire, including initial attack and extended attack; and
Aircraft performance and effectiveness – the results of using aircraft to support fire suppression activities.

While the Forest Service and Interior studies and strategy documents contained various key elements, none included information on performance and effectiveness of aircraft in helping to suppress wildland fires because agencies have not collected such information. Figure 2 identifies which key elements were included in each of the major studies and strategy documents we analyzed. (See app. III for additional information on each of these efforts.)

The agencies generally used cost- and efficiency-based metrics in these efforts, such as the potential cost of damage from wildland fires or the frequency with which requests for firefighting aircraft are unmet, to identify their firefighting aircraft needs. For example, the three-part National Study of Airtankers to Support Initial Attack and Large Fire Suppression, conducted from 1995 to 2005, estimated the number of large airtankers needed by comparing the cost of using large airtankers to help suppress wildland fires with the projected cost of the damage that could result from not suppressing the fires. In addition, the Forest Service's 2013 Firefighting Aircraft Study focused on efficiency and identified the number of large airtankers needed by analyzing the annual number of requests for these aircraft that the Forest Service was unable to meet.

However, agency efforts to identify their firefighting aircraft needs have not included information on the performance and effectiveness of using aircraft to suppress wildfires primarily because neither the Forest Service nor Interior has collected data on these aspects of firefighting aircraft. Specifically, the agencies have not established data collection mechanisms to track the specific tactical uses of firefighting aircraft—for example, where retardant or water is dropped in relation to a fire as well as the objective of a drop, such as protecting a structure or preventing a fire from moving in a specific direction—or measure their performance and effectiveness in those uses. Moreover, a 2012 study by the Forest Service's Rocky Mountain Research Station found that the Forest Service did not collect information about the locations where airtankers drop retardant or the actual performance and effectiveness of these aircraft.
In May 2012, we reported on the importance of performance information in another context and found that such information can inform key management decisions, such as allocating resources, or it can help determine progress in meeting the goals of programs or operations. General agreement exists among wildland firefighters that, based on their experience, using aircraft can be beneficial to suppressing fires, but little empirical data exist to measure the performance and effectiveness of such aircraft use. For example, a 2007 study cited anecdotal evidence that firefighting aircraft saved homes, and a 2012 study that surveyed fire management officials found that these officials believed aircraft were effective in reducing the amount of time required to contain wildfires, particularly in the most difficult fire suppression conditions. (See M. Plucinski, J. Gould, G. McCarthy, and J. Hollis, "The Effectiveness and Efficiency of Aerial Firefighting in Australia, Part 1," Bushfire Cooperative Research Centre, Technical Report A0701 (June 2007), and M. Plucinski, J. McCarthy, J. Hollis, and J. Gould, "The Effect of Aerial Suppression on the Containment Time of Australian Wildfires Estimated by Fire Management Personnel," International Journal of Wildland Fire 21 (December 2011): 219-229.) However, such views are not based on empirical data on aircraft performance and effectiveness, and other studies—including the Forest Service's 2013 Firefighting Aircraft Study—found that no accurate information on the effectiveness of aerial fire suppression exists and noted that the factors contributing to the success of wildfire suppression efforts are poorly understood. These studies also found it difficult to assess the relative value of delivering retardant or water through helicopters, large airtankers, and single-engine airtankers and called for analytic tools focusing on this area to be developed. In addition, the 1998 National Study of Tactical Aerial Resource Management identified the need for better information on the intended use of surveillance aircraft—such as support for initial attack or large fire suppression activities—to determine the specific types of aircraft that will meet federal needs for aerial surveillance during firefighting.

This limited availability of information on the performance and effectiveness of firefighting aircraft is an area of long-standing concern; since the 1960s, multiple reviews of federal fire aviation programs have called for the Forest Service and Interior to collect information on the performance of firefighting aircraft, but neither agency took action until recently. Specifically, in May 2012, the Forest Service recognized the need for an approach to evaluate the effective and efficient use of firefighting aircraft and began a project on aerial firefighting use and effectiveness to develop technology, evaluation criteria, and performance measures to quantify and assess the effective use of large airtankers, helicopters, and water scoopers in delivering retardant, water, and fire-suppressing chemicals. According to Forest Service documents, the agency plans to collect information including whether an aircraft was used for initial attack or extended attack; the aircraft's objective, such as building a line of retardant, directly suppressing fire, or protecting a specific structure; whether the fire is in grass, shrub, or timber; general weather conditions; and characteristics of the actual drop of retardant, such as the time, aircraft speed, retardant amount, and outcome.
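To illustrate the kind of record such a data collection effort implies, the sketch below shows one possible way a single drop-level report could be structured. This is a minimal sketch only: the field names and categories are assumptions drawn from the list above, not the Forest Service's actual data collection schema.

# Illustrative only: a possible structure for a drop-level aerial
# firefighting report, based on the categories described in the text.
# Field names and example values are assumptions, not the Forest
# Service's actual schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DropReport:
    fire_id: str              # incident identifier (hypothetical)
    aircraft_type: str        # e.g., "large airtanker", "helicopter", "water scooper"
    suppression_phase: str    # "initial attack" or "extended attack"
    drop_objective: str       # e.g., "build retardant line", "protect structure"
    fuel_type: str            # e.g., "grass", "shrub", "timber"
    weather_summary: str      # general conditions at the time of the drop
    drop_time: datetime       # when the drop occurred
    airspeed_knots: float     # aircraft speed during the drop
    retardant_gallons: float  # amount of retardant or water delivered
    outcome: str              # observed result, e.g., "line held", "fire crossed line"

# Example record with made-up values for illustration.
example = DropReport(
    fire_id="2012-XX-0001",
    aircraft_type="large airtanker",
    suppression_phase="initial attack",
    drop_objective="build retardant line",
    fuel_type="timber",
    weather_summary="light wind, low humidity",
    drop_time=datetime(2012, 7, 15, 14, 30),
    airspeed_knots=140.0,
    retardant_gallons=2100.0,
    outcome="line held",
)

Aggregated across many fires, records along these lines are what would allow drop characteristics to be related to outcomes by aircraft type.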
The agency collected some of this information during 2012, but it has not developed incremental goals for assessing progress or timelines for completing the project. The Forest Service faces several challenges in carrying out its project on aerial firefighting use and effectiveness. For example, during 2012, the agency collected information on the performance and effectiveness of one type of aircraft—large airtankers—from about 25 fires but needs information on several hundred fires to perform useful analysis on large airtanker performance, according to Forest Service officials managing the data collection effort. These officials said that it will likely take several years for the agency to collect the information needed to analyze and understand the effectiveness of the three types of firefighting aircraft—large airtankers, helicopters, and water scoopers—included in the project. Forest Service officials also told us that aerial firefighters have been reluctant to collect information on the results of using firefighting aircraft for several reasons, including safety concerns regarding adding to the workload of aerial firefighters while they are flying over fires, firefighters' concerns that the Forest Service will use the information to criticize their performance, a firefighting culture that values experience and history over data and scientific analysis, and the challenges in finding time to complete data collection while fighting wildfires. Interior officials said that the department is assisting the Forest Service in this data collection project but does not currently have plans to collect performance information on the firefighting aircraft it manages.

Large airtankers have been the focus of the Forest Service's current data collection effort as well as the agencies' prior studies and strategy documents, but few efforts have focused on other types of firefighting aircraft. Specifically, eight of the agencies' nine studies and strategy documents attempted to identify the appropriate number of large airtankers for the federal fleet. However, only three of the efforts—the 1998 National Study of Tactical Aerial Resource Management, the 2009 Interagency Aviation Strategy, and the 2012 Air Attack Against Wildfires: Understanding U.S. Forest Service Requirements for Large Aircraft—identified the number of other types of aircraft needed, despite the fact that each type of firefighting aircraft provides unique capabilities to support fire suppression operations. For example, water scoopers can deliver large quantities of water when a fire ignites near a water source, smokejumper aircraft can quickly transport firefighters and supplies to fires in remote areas, and helicopters have the versatility to transport firefighters, supplies, or small quantities of water or retardant. Performance and effectiveness information on all types of firefighting aircraft could therefore help the agencies identify the number and type of firefighting aircraft they need, including assessing any potential new firefighting aircraft platforms or technologies that vendors may propose; understand the strengths and limitations of each type of aircraft in different situations; and understand how firefighting aircraft could help achieve their wildfire suppression goals.
Obtaining information about aircraft performance and effectiveness could better inform agency estimates of firefighting aircraft needs to include in their strategies for obtaining aircraft, thus helping agencies better ensure the adequacy of the federal firefighting aircraft fleet. In contrast to U.S. federal agencies, some foreign and U.S. state governments that operate aerial firefighting programs have employed various methods to collect and use performance and effectiveness information on their firefighting aircraft. For example, in Canada, the British Columbia Forest Service requires aerial firefighters to complete an airtanker data report immediately after each airtanker flight. Officials then compile information gathered through these reports with information from their dispatch system to evaluate airtanker performance using a set of key performance indicators, such as the amount of time from the initial report of a fire to the time that an airtanker request is entered into the dispatch system, the distance between available airtankers and the actual fire, and the change in the size of the fire from the time an aircraft arrives at the fire to the time the fire is contained. According to British Columbia Forest Service officials, the performance information and indicators have been integral to improving British Columbia’s aerial firefighting program. For example, officials found that available aircraft were often over 100 miles from the wildfires where they dropped retardant. Based on this analysis, the province made significant changes to its methods for pre-positioning firefighting aircraft and as a result, available aircraft are generally within 60 miles of a wildfire. In addition, the Minnesota Department of Natural Resources requires officials to complete debriefing reports after each use of firefighting aircraft. The report includes information on the specific aircraft that were sent to the fire and gathers the firefighters’ views on whether areas such as dispatch information, aircraft briefings, target descriptions, and communications were adequate or need improvement. According to Minnesota Department of Natural Resources officials, information from these reports may help determine the best methods for suppressing fires when a specific set of aircraft is available. In efforts to identify the number and type of firefighting aircraft they need, agencies have engaged in limited collaboration with one another or with other stakeholders in the fire aviation community. For example, the Forest Service developed its 2012 Large Airtanker Modernization Strategy without obtaining input from representatives of state fire aviation programs or the large airtanker industry and did not coordinate with Interior until after the development of an initial draft. According to several agency officials we spoke with, the Forest Service did not invite Interior officials to provide their input on the strategy until after the agency sent the draft version to the Office of Management and Budget (OMB) for review and approval. Similarly, regarding Interior, senior Interior officials told us that Interior generally does not involve other agencies or stakeholders in developing annual estimates of the number of each type of aircraft to obtain through contracts. Rather, Interior develops these estimates by asking relevant Interior bureaus to provide the number of each type of aircraft it needs, compiling these estimates, and adjusting them based on available funding. 
The importance of collaboration with stakeholders and agencies has been noted in several government reports. For example, the interagency 2009 Quadrennial Fire Review identified the need to engage agency leaders, partners, and industry in a strategic dialogue about the demands for firefighting resources, such as aircraft, and noted the importance of innovative and efficient ways to meet those demands. Additionally, a 2009 Department of Agriculture Inspector General's report recommended that the Forest Service collaborate with stakeholders in the fire aviation community to develop goals and performance measures for the agency's aviation strategic plan. Regarding collaboration with stakeholders, in April 2013, we reported that when agencies carry out activities in a fragmented and uncoordinated way, the resulting patchwork of programs can waste scarce funds, confuse and frustrate program customers, and limit the overall effectiveness of the federal effort. In addition, we reported in October 2011 that successful organizations involve stakeholders in developing their mission, goals, and strategies to help ensure that they target the highest priorities. In that report, we also stated that stakeholders can influence the success or failure of agencies' programs.

Many Forest Service and Interior officials, as well as other stakeholders we spoke with, expressed concerns about limited collaboration, and many cited shortcomings with the formal mechanism for interagency collaboration—the National Interagency Aviation Committee, which includes representatives from the Forest Service, Interior and its bureaus, and the National Association of State Foresters. Some stakeholders told us the committee has not always considered the needs of all agencies involved in firefighting efforts. For example, in 2008, committee members collaboratively developed a national firefighting aviation strategy, the Interagency Aviation Strategy. A year later, however, the Forest Service developed an appendix to the strategy that outlined the Forest Service's plans for replacing its large airtanker fleet, and the committee published an amended strategy—including that appendix—without providing member agencies the opportunity to review or contribute to it, according to agency officials. As a result, the large airtanker appendix does not reflect the opinions of all committee members and, consequently, does not reflect the needs of the fire aviation community stakeholders that will require the use of large airtankers. In addition, Forest Service and Interior officials told us that agency staff who serve on the committee are generally firefighting operations staff and do not represent senior agency management. As a result, the collaboration that occurs through the committee is often limited to day-to-day operations activities rather than broader strategic efforts.

The committee has implemented some leading practices that we previously reported can help enhance and sustain collaboration. Specifically, the committee's members have defined and articulated a common purpose and have agreed on agency roles and responsibilities. For example, the committee's charter identifies its purpose as serving as a body of aviation experts focused on identifying opportunities to enhance safety, effectiveness, and efficiency in aviation-related operations, procedures, programs, and coordination.
In addition, the committee’s 2009 Interagency Aviation Strategy defines the general aerial firefighting roles and responsibilities of federal and state agencies as well as aircraft contracting responsibilities of the Forest Service and Interior. However, we previously found that agencies often face a range of barriers, including concerns about controlling jurisdiction over missions and resources, when they attempt to collaborate with other agencies. Interior officials told us that the division of firefighting aircraft contracting responsibilities among the Forest Service and Interior—under which Forest Service issues contracts for large and very large airtankers and large and medium helicopters, while Interior issues contracts for single-engine airtankers and water scoopers—may not foster a culture of collaboration since each agency is focused on its own aircraft of responsibility. Although the committee has implemented some leading practices for collaboration, it has not taken additional steps to reinforce agency accountability for collaboration, such as developing mechanisms to monitor, evaluate, and report the results of collaborative efforts. We have reported that by creating the means to monitor, evaluate, and report the results of their collaborative efforts, federal agencies can better identify areas for improvement, although the specific ways in which this practice is implemented may differ based on the specific collaboration challenges agencies face. For example, mechanisms for monitoring the results of collaborative efforts may range from occasional meetings among agency officials to more formal periodic reviews where officials from each agency report on progress toward achieving the goals of interagency collaborative efforts. As we reported in August 2012, absent effective collaboration, interagency efforts could result in limited information being communicated and opportunities for incorporating stakeholder input being missed. Senior management in both the Forest Service and Interior told us they have begun discussions regarding how to improve their interagency collaboration. However, they said that these discussions have focused on obtaining firefighting aircraft for the 2013 fire season and have not yet addressed collaboration on strategic planning issues. Further, both Forest Service and Interior officials told us the Interagency Aviation Strategy is outdated and should be updated to more accurately reflect current firefighting aircraft needs. Engaging in effective collaboration to incorporate input from all fire aviation community stakeholders could better position the agencies in developing strategic planning documents— including any updates to the Interagency Aviation Strategy—that represent the national need for firefighting aircraft. The Forest Service plans to modernize the large airtanker fleet by obtaining large airtankers from various sources over the near, medium, and long terms, but each component of this approach faces challenges that make the continued availability of such aircraft to meet national fire suppression needs uncertain. 
The components of the agency's approach include: (1) in the near term, continuing to contract with private vendors for "legacy" large airtankers—generally aging aircraft with limited future service life spans—on exclusive-use contracts and very large airtankers on call-when-needed contracts, as well as relying on agreements with cooperator governments and the military; (2) in the medium term, contracting with vendors for airtankers that are more modern and capable than those generally in use currently; and (3) in the long term, acquiring new federally owned aircraft with expected service life spans of up to 30 years. Additionally, some federal and state agencies are considering alternative approaches to obtaining aerial fire suppression support to reduce reliance on large airtankers.

For the near term, the Forest Service plans to rely primarily on exclusive-use "legacy" contracts to obtain large airtankers. During periods of heavy fire activity, however, the agency plans to obtain supplemental airtankers through call-when-needed contracts for very large airtankers, agreements with cooperator governments, and military aircraft equipped with MAFFS. Agency officials and vendor representatives told us about limitations and challenges—including availability, performance, and cost—regarding these resources.

Over the next 5 years—including the 2013 fire season—the Forest Service plans to rely on aircraft obtained through its "legacy" exclusive-use contracts, the agency's traditional acquisition model for obtaining large airtankers. The agency in 2013 announced contract awards for nine aircraft: seven P-2V Neptunes—Korean War-era piston-engine maritime patrol aircraft—and two British Aerospace BAe-146s—converted versions of modern commercial jets. However, the availability of the P-2V Neptunes in the short term is uncertain, and the Interagency Airtanker Board has documented concerns regarding the performance of the retardant delivery systems on these BAe-146s.

Lockheed P-2V Neptune. The age of the seven P-2V Neptunes—they average more than 50 years old—makes their availability throughout the entire 5-year contract period uncertain. Specifically, vendors told us they might need to retire some aircraft prior to the end of the current contract period because of the cost of maintaining the aging aircraft. In particular, they told us that the limited availability of replacement parts—and the difficulty of manufacturing new ones if no others exist—coupled with the requirements of increased maintenance and inspection standards make the P-2V Neptune difficult to operate in a cost-effective manner. Further, physical stresses on the aircraft could cause cracking of critical components during fire missions. For example, representatives from Neptune Aviation Services told us that the vendor retired one of its P-2V Neptunes after the 2012 fire season due to structural problems discovered during routine maintenance. They also said that the vendor probably could continue to operate approximately five P-2V Neptunes for the next 10 years but that the current heavy use of their fleet could shorten this timeframe. Ultimately, Neptune Aviation Services plans to retire its P-2V Neptune fleet and transition to operating modern aircraft exclusively.

Neptune Aviation Services' British Aerospace BAe-146s.
Concerns regarding the performance of the retardant delivery system on Neptune Aviation Services' BAe-146s have been documented during agency evaluations of the aircraft and were voiced by several agency officials we interviewed. During initial assessment of the system in 2011, the Interagency Airtanker Board determined that the retardant delivery system did not meet established performance criteria and identified problems regarding the system's design and performance. However, in September 2012, the board approved, on an interim basis, the use of the retardant delivery system through the 2012 fire season so that information on its operational effectiveness could be collected and design deficiencies addressed. During the 2012 fire season, the BAe-146s collectively made approximately 300 retardant drops, which the board considered sufficient to collect data needed to assess their operational effectiveness. In December 2012, the Interagency Airtanker Board declined to extend the interim approval of Neptune Aviation Services' BAe-146 system, citing the problematic retardant delivery system design and deficient performance during the 2012 fire season. In February 2013, however, the National Interagency Aviation Committee determined that the need for aircraft to deliver retardant for the 2013 fire season was sufficiently important to override the board's decision. As a result, the board, at the direction of the committee, granted an extension of its interim approval of the retardant delivery system through December 15, 2013. Representatives of Neptune Aviation Services acknowledged that the system has limitations, but they stated that the company is developing a revised retardant delivery system and plans to retrofit all of its BAe-146 aircraft with the updated design by the beginning of the 2014 fire season. However, the Interagency Airtanker Board has noted that the deficiencies may persist due to the inherent design of the system, and fire management officials from the Forest Service, Interior, and several states that are familiar with this aircraft told us they have reservations about the retardant delivery system's performance.

The Forest Service announced call-when-needed contracts for two very large airtankers—converted versions of Boeing 747 and McDonnell Douglas DC-10 commercial jets—to provide extended attack and large fire support beginning in 2013, with durations of up to 3 years. However, some agency officials cited concerns about the aircraft's role, suitability for operating over rugged terrain, limited compatibility with current airtanker base infrastructure, and high costs (see fig. 3 for an example of a very large airtanker). The Forest Service previously contracted for very large airtankers, but according to Forest Service and Interior officials, firefighters were initially reluctant to request the very large airtankers for several reasons. For example, because of the size of these aircraft, some federal officials were uncertain whether they could safely operate in rugged terrain. Some officials also told us that firefighters did not request very large airtankers because they were uncertain how best to use this new tool. For example, the Forest Service identifies the primary mission of large airtankers as initial attack, whereas the solicitation for the very large airtanker call-when-needed contract stated that they will be used to provide support for extended attack on large fires—leading to uncertainty about the best tactics for employing them.
Despite early reluctance to use very large airtankers, officials noted increased reliance on these aircraft; nevertheless, some agency officials continue to disagree about the most effective role—initial attack or large fire support—for these aircraft as well as whether they are suited to operating over rugged terrain. Additionally, very large airtankers can operate out of a limited number of established airtanker bases because their weight and size are too great for some existing base infrastructure, such as runways or aircraft parking areas. Specifically, about half of the large airtanker bases nationwide—35 of 67—are currently or potentially capable of supporting DC-10 operations, according to a Forest Service official; the 747's compatibility with bases is even more limited in that it can operate from approximately 12 locations, not all of which are airtanker bases. However, some agency officials told us that the speed of these aircraft can compensate for their limited compatibility with existing airtanker bases and the associated increased distances that the aircraft might need to fly to respond to fires. Some officials also noted concerns about the high costs of using the aircraft. (See app. II for the current contract rates of firefighting aircraft.)

The Forest Service plans to request large airtankers from two cooperator governments—Canada and the State of Alaska—during periods of high fire activity, but these aircraft may not always be available. Under an agreement originally established in 1982, the Forest Service plans to rely on five Convair CV-580 large airtankers—converted commercial aircraft with retardant capacities of 2,100 gallons—provided by Canadian provinces when additional large airtankers are needed. Additionally, Forest Service officials told us that, under a separate agreement, the agency can also request use of three CV-580s contracted by the State of Alaska. However, the use of these airtankers to supplement the federal large airtanker fleet is contingent upon the cooperator governments making them available. For example, such airtankers might already be committed to suppressing fires, which could prevent them from being released to assist other governments.

Modular Airborne Firefighting System (MAFFS). As it has periodically done since the program's inception in the early 1970s, the Forest Service plans to rely on the military to provide surge aerial firefighting capacity through the deployment of up to eight MAFFS-equipped C-130 aircraft (see fig. 4 for an example of a MAFFS-equipped C-130). However, a number of officials from the Forest Service, Interior, and state fire agencies stated that MAFFS performance can be inadequate in some circumstances. For example, while a Forest Service official noted that the MAFFS system has been approved by the Interagency Airtanker Board, some federal and state fire aviation officials told us that the retardant line dispersed by the MAFFS system is generally narrower than firefighters prefer, which can either allow a fire to jump across the retardant line or necessitate an additional drop to widen the line, if another aircraft is available. Additionally, some officials said the system is unable to penetrate dense forest canopies, thereby preventing the retardant from being effective when used in heavy timber. However, some federal and state officials told us that MAFFS can be used effectively on rangeland where grasses are the predominant fuel type.
Further, some fire officials expressed concern regarding the limited experience that MAFFS crews may have in the fire aviation mission because they are not full-time aerial firefighters. A DOD accident investigation report conducted in response to a 2012 fatal crash of a MAFFS-equipped C-130H found that the limited total firefighting experience of the crew—in particular, the number of drops accomplished prior to the accident—was a contributing factor to the accident. The report also stated that the crew's training did not include essential components—including training on local terrain conditions and congested airtanker base operations—necessary to conduct MAFFS operations in the region where the crash occurred. (See United States Air Force Accident Investigation Board Report C-130H3, T/N 93-1458, Oct. 27, 2012.) A Forest Service official involved in managing MAFFS training told us that the agency has updated the training to better incorporate such components.

For nearly 2 years, the Forest Service has attempted to award "next-generation" contracts with durations of 5 to 10 years to modernize the fleet with faster and more up-to-date large airtankers. However, these efforts have been delayed by bid protests, and it is uncertain when some vendors will complete the federal approval and certification processes for their aircraft, which are necessary prior to use as airtankers on federal contracts. As a result, it is uncertain when the "next-generation" large airtankers will be available to support fire suppression activities. Additionally, private vendors that are developing the "next-generation" large airtankers told us that concerns regarding the consistency of the Forest Service's approach to fleet modernization have increased the difficulty of making business decisions and could affect the number of aircraft they will be able to provide to the government.

Recognizing the importance of aircraft to help fight wildland fires, the Forest Service and Interior have undertaken efforts over the years to identify the number and type of firefighting aircraft they need but have met with limited success. None of the agencies' studies and strategy documents contained information on aircraft performance and effectiveness in supporting firefighting operations, which limits the agencies' understanding of the strengths and limitations of each type of firefighting aircraft and their abilities to identify the number and type of aircraft they need. The Forest Service has started to collect some aircraft performance information, but it is limited and focused on large airtankers. Interior has no current plans to collect performance information on the aircraft it manages. Agencies have also engaged in limited collaboration with each other and with other stakeholders in the fire aviation community—including the private aircraft vendors on whom the Forest Service has traditionally relied to provide large airtankers. Incorporating input from all fire aviation community stakeholders in their strategic planning documents could better position the Forest Service and Interior in developing estimates of aircraft needs, for inclusion in their strategies, that represent the national need for firefighting aircraft. This concern is illustrated by the variety of federal and state agencies taking steps to compensate for the decline in large airtankers, which highlights the number of parties affected by firefighting aircraft decisions and reinforces the need for collaboration.
Overall, better knowledge about aircraft effectiveness—and more complete input from all involved parties—could inform Forest Service and Interior decisions and help them ensure the adequacy of the nation's firefighting aircraft fleet. The challenges faced by the Forest Service in each phase of its large airtanker approach, which includes the potential acquisition of aircraft the federal government would own and operate for decades, underscore the need for a complete and collective understanding of the nation's firefighting aircraft needs.

To help the agencies enhance their abilities to identify their firefighting aircraft needs and better ensure they obtain aircraft that meet those needs, we recommend that the Secretaries of Agriculture and the Interior direct the Chief of the Forest Service and the Deputy Assistant Secretary for Public Safety, Resource Protection, and Emergency Services, respectively, to take the following three actions:

Expand efforts to collect information on aircraft performance and effectiveness to include all types of firefighting aircraft in the federal fleet;

Enhance collaboration between the agencies and with stakeholders in the fire aviation community to help ensure that agency efforts to identify the number and type of firefighting aircraft they need reflect the input of all stakeholders in the fire aviation community; and

Subsequent to the completion of the first two recommendations, update the agencies' strategy documents for providing a national firefighting aircraft fleet to include analysis based on information on aircraft performance and effectiveness and to reflect input from stakeholders throughout the fire aviation community.

We provided the Departments of Agriculture, Defense, and the Interior with a draft of this report for their review and comment. The Forest Service (responding on behalf of the Department of Agriculture) and Interior generally agreed with our findings and recommendations, and their written comments are reproduced in appendixes IV and V, respectively. The Forest Service and Interior also provided technical comments, which we incorporated as appropriate. The Department of Defense did not provide comments.

While the Forest Service generally agreed with our findings and recommendations and stated that it is committed to improving its collaboration efforts, it also reiterated its interest in obtaining C-27Js to augment its aerial firefighting capabilities, citing the benefit of low initial investment for aircraft that could potentially function in multiple roles. As stated in our report, we acknowledge the Forest Service's incentive to obtain the C-27Js free of acquisition cost and their potential use in multiple roles. We also note, however, that the agency may face challenges regarding the retardant capacity and operating costs associated with the airtankers.

We are sending copies of this report to the Secretaries of Agriculture, Defense, and the Interior; the Chief of the Forest Service; the Directors of the Bureau of Indian Affairs, Bureau of Land Management, Fish and Wildlife Service, and National Park Service; appropriate congressional committees; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report.
Key contributors to this report are listed in appendix VI. This report examines (1) Forest Service and Department of the Interior efforts undertaken to identify the number and type of firefighting aircraft they need and (2) the Forest Service’s approach to modernizing the large airtanker fleet and the challenges it faces in doing so. To examine Forest Service and Interior efforts to identify their firefighting aircraft needs, we reviewed major agency studies and strategy documents and interviewed agency officials responsible for managing fire aviation programs. We focused on those efforts conducted since 1995, when the Forest Service and Interior jointly conducted the first major study of their large airtanker needs. We reviewed the purpose, methodology, and results of each of these studies and strategy documents. We also reviewed seven academic and government studies on aerial firefighting and conducted interviews with agency officials, as well as officials representing stakeholders in the fire aviation community, including military, state, and international firefighting organizations, and companies that own and operate firefighting aircraft, to identify key elements that are important for understanding firefighting aircraft needs. (Information on the stakeholders included in our review is discussed in more detail later in this appendix.) Through these document reviews and interviews, and in consultation with internal GAO stakeholders including methodological specialists and staff knowledgeable about aviation contracting, we identified the following key elements: aircraft types, basing options, acquisition models, aircraft capabilities, suppression methods, and aircraft performance and effectiveness. We then reviewed the agency efforts to determine the extent to which each effort included analysis of these key elements. We also interviewed agency officials about the extent of collaboration involved in agency efforts to identify the number and type of firefighting aircraft they need. In light of the information collected, we reviewed our prior work on interagency collaboration and key practices that can help enhance and sustain collaborative efforts, and compared the practices of the formal body for coordination among aerial firefighting agencies—the National Interagency Aviation Committee—with key collaboration practices to determine the extent to which the committee’s practices were consistent with key practices we previously identified. The key practices we evaluated were: defining and articulating a common outcome; establishing mutually reinforcing or joint strategies to achieve the outcome; identifying and addressing needs by leveraging resources; agreeing upon agency roles and responsibilities; establishing compatible policies, procedures, and other means to operate across agency boundaries; developing mechanisms to monitor, evaluate, and report the results of collaborative efforts; and reinforcing agency accountability for collaborative efforts through agency plans and reports. GAO has also identified reinforcing individual accountability for collaborative efforts through agency performance management systems as a best practice for coordination, but we did not consider this practice in our assessment because performance management systems fell outside the scope of this review. 
To examine the Forest Service's approach to modernizing the large airtanker fleet and the challenges it faces in doing so, we reviewed agency documents related to large airtanker acquisition, management, and operations and interviewed agency officials to identify the agency's approach to obtaining these aircraft. We reviewed agency planning and acquisition documents, such as the National Interagency Aviation Committee's 2009 Interagency Aviation Strategy, the Forest Service's 2012 Large Airtanker Modernization Strategy, and Forest Service airtanker contract solicitations, which lay out the Forest Service's approach to obtaining large airtankers in the short, medium, and long terms. We also interviewed representatives from two organizations—one that represents firefighting aircraft vendors and one that represents pilots—which we identified based on conversations with agency officials and vendor representatives. In addition, we conducted site visits to the National Interagency Fire Center in Boise, Idaho; the facilities of the only two private vendors with current Forest Service "legacy" large airtanker contracts, located in Minden, Nevada, and Missoula, Montana; the manufacturing facility of a company that produces single-engine airtankers in Olney, Texas; and the headquarters of California's fire aviation program—part of the California Department of Forestry and Fire Protection (CAL FIRE) in Sacramento—which manages more airtankers than the Forest Service. The results of our interviews and site visits are not generalizable.

We conducted this performance audit from August 2012 to August 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Forest Service and Interior contract for, and to a lesser extent own, a variety of aircraft used to help suppress wildland fires. Table 2 provides information, as reported by Forest Service and Interior contracting officials, on the federal firefighting aircraft fleet for the 2013 fire season, including aircraft type, number available, and cost rates.

Since 1995, the Forest Service and Interior have conducted or contracted for nine major studies and strategy documents that identify firefighting aircraft needs. Table 3 provides information on major efforts conducted by, or on behalf of, the Forest Service and Interior to identify the number and type of firefighting aircraft they need.

In addition to the individual named above, Steve Gaty, Assistant Director; Kristin Hughes; Richard P. Johnson; and Matthew Tabbert made significant contributions to this report. Cheryl Arvidson, Steven Putansu, and Kiki Theodoropoulos provided technical assistance.
The Forest Service and Interior contract for aircraft to perform various firefighting functions, including airtankers that drop retardant. The Forest Service contracts for large airtankers and certain other aircraft, while Interior contracts for smaller airtankers and water scoopers. However, a decrease in the number of large airtankers, from 44 in 2002 to 8 in early 2013—due to aging planes and several fatal crashes—has led to concerns about the agencies' ability to provide aerial firefighting support. GAO was asked to review agency efforts to ensure the adequacy of the firefighting aircraft fleet. This report examines (1) Forest Service and Interior efforts to identify the number and type of firefighting aircraft they need and (2) the Forest Service's approach to modernizing the large airtanker fleet and the challenges it faces in doing so. GAO reviewed agency studies and strategies, assessing the extent to which they included key elements important for understanding fire aviation needs; reviewed large airtanker planning and acquisition documents; and interviewed agency officials and representatives of the fire aviation community selected to represent state agencies, aircraft vendors, and others.

The Department of Agriculture's Forest Service and the Department of the Interior have undertaken nine major efforts since 1995 to identify the number and type of firefighting aircraft they need, but those efforts—consisting of major studies and strategy documents—have been hampered by limited information and collaboration. Specifically, the studies and strategy documents did not incorporate information on the performance and effectiveness of firefighting aircraft, primarily because neither agency collected such data. While government reports have long called for the Forest Service and Interior to collect aircraft performance information, neither agency did so until 2012, when the Forest Service began a data collection effort. However, the Forest Service has collected limited data on large airtankers and none on other aircraft, and Interior has not initiated a data collection effort. In addition, although firefighting aircraft are often shared by federal agencies and can be deployed to support firefighting operations on federal and nonfederal lands, the agencies have not consistently collaborated with one another and other stakeholders to identify the firefighting aircraft they need. Many agency officials and stakeholders GAO contacted noted concerns about limited collaboration, and many cited shortcomings with the formal mechanism for collaboration—the National Interagency Aviation Committee. The committee has implemented some leading practices for collaboration, such as defining and articulating a common purpose, but it has not taken additional steps to monitor and evaluate its collaborative activities, another leading practice. Collectively, additional information on aircraft performance and effectiveness and collaboration across agencies and with stakeholders could enhance agency estimates of their firefighting aircraft needs to more accurately represent national needs for such aircraft and, as a result, better position the agencies to develop strategic planning documents that represent those needs.
The Forest Service plans to modernize the large airtanker fleet by obtaining large airtankers from various sources over the near, medium, and long term, but each component of this approach faces challenges that make the continued availability of such aircraft to meet national fire suppression needs uncertain. In the near term, the agency plans to rely on a mix of contracted "legacy" airtankers as well as supplemental aircraft available through additional contracts and agreements with other governments and the military. However, agency concerns exist regarding the availability, capability, and costs of these resources. In the medium term, the Forest Service has awarded contracts for "next-generation" large airtankers that are faster and more up-to-date than most "legacy" aircraft, but it is uncertain when all of these aircraft will begin supporting fire suppression activities. Specifically, bid protests delayed contract issuance, and most of the aircraft receiving awards have not been fully tested and approved. In the long term, the Forest Service's plan includes purchasing certain large airtankers and obtaining others through intergovernmental transfer at no initial cost if they are declared surplus by the military--a shift from its long-standing practice of contracting for rather than owning aircraft. However, the Forest Service was unable to justify its previous plans for purchasing large airtankers to the Office of Management and Budget, and concerns exist regarding the retardant capacity and operating cost of the other airtankers it would obtain through intergovernmental transfer. GAO recommends, among other things, that the Forest Service and Interior expand efforts to collect information on the performance and effectiveness of firefighting aircraft and enhance collaboration across agencies and the fire aviation community. The agencies generally agreed with GAO's findings and recommendations.
The Missile Defense Agency has been charged with developing and deploying ballistic missile defenses against threats posed by adversaries from all geographic regions, at all ranges, and in all phases of flight. At least 25 countries have acquired ballistic missiles, including many countries that are also seeking or have acquired weapons of mass destruction that could be used on these missiles. In response, the Missile Defense Agency has been developing defenses against short-, medium-, intermediate-, and intercontinental-range ballistic missiles that could be targeted against U.S. forces abroad, U.S. friends and allies, and the U.S. homeland. For example, the Terminal High Altitude Area Defense system, Patriot Advanced Capability-3, and Aegis Ballistic Missile Defense system are being developed primarily to provide an integrated capability to defend deployed U.S. forces, friends, and allies against short- and medium-range ballistic missiles. The Missile Defense Agency is also developing sea-based defenses to destroy short-range missiles in the terminal phase of flight in order to defend deployed forces. In addition, the Missile Defense Agency is developing a Ground-based Midcourse Defense system designed to destroy intercontinental-range ballistic missiles targeted against the U.S. homeland, deployed U.S. forces, friends, and allies. Some ballistic missile defense systems are being designed to defend against more than one type of threat. For example, the Aegis Ballistic Missile Defense system is being designed not only to defend deployed U.S. forces, allies, and friends from short- and medium-range missiles, but also to help defend the U.S. homeland from longer range missiles. Table 1 summarizes the threat categories to be addressed by U.S. ballistic missile defenses.

While the Missile Defense Agency is responsible for developing missile defenses, the unified combatant commands are the military organizations primarily responsible for deterring attacks and for employing forces should deterrence fail. The Unified Command Plan, which is signed by the President, establishes the combatant commanders' missions and responsibilities and establishes their geographic areas of responsibility. The most recent version of the Unified Command Plan, which was published in 2006, identified five combatant commands—U.S. Central Command, U.S. European Command, U.S. Northern Command, U.S. Pacific Command, and U.S. Southern Command—with responsibilities covering specific geographic regions. For example, U.S. Northern Command's area of responsibility includes all of North America and surrounding waters; for missile defenses, U.S. Northern Command would have primary responsibility for defending the continental United States from an intercontinental-range missile attack.

U.S. Strategic Command is a unified combatant command with responsibilities to integrate global missions and capabilities that cross the boundaries of the geographic commands. The command was initially assigned responsibility for nuclear deterrence, space, and computer network operations; in January 2003, the President expanded its responsibilities to include several missions not previously assigned to a combatant command. These missions were: global strike planning and execution; planning, integrating, and coordinating global missile defense (including missile defense advocacy); oversight of intelligence, surveillance, reconnaissance, and global command and control; and DOD information operations.
In January 2005, the Secretary of Defense also assigned the command responsibility for integrating and synchronizing DOD’s efforts in combating weapons of mass destruction. DOD envisioned that U.S. Strategic Command’s global operations would potentially add value to the geographic combatant commands as they carried out their responsibilities, and provide the President and Secretary of Defense with an expanded range of military options for responding to future threats. U.S. Strategic Command and the Missile Defense Agency created the Warfighter Involvement Process in 2005 to accomplish U.S. Strategic Command’s responsibility to advocate for desired global missile defense characteristics and capabilities on behalf of all combatant commanders. Additionally, U.S. Strategic Command envisions using the process as a way for the military services and the Joint Staff to provide the Missile Defense Agency with guidance and advice on desired ballistic missile defense capabilities, operational approaches, and suitability and supportability features. The Warfighter Involvement Process is intended to provide a collaborative forum for the combatant commands, U.S. Strategic Command, Joint Staff, and military services to identify, assess, and articulate capability needs to the Missile Defense Agency, analyze the risks associated with capability gaps and redundancies, and examine possible solutions and implementation timelines. Although the Warfighter Involvement Process involves a variety of organizations, U.S. Strategic Command is responsible for administering and managing the various analytical activities, software tools, focus groups, and review boards that make up the process. GAO has previously reviewed DOD’s plans to operate ballistic missile defense systems as certain systems have transitioned from a research and development emphasis to operational military capabilities. For example, in 2006 we assessed DOD’s preparations to operate and support ballistic missile defenses that are under continuous development. In 2007, we reported that DOD’s long-term plans to develop boost and ascent phase missile defense systems did not consider operational perspectives on how many of these systems would be required for various deployment periods, or the challenges of establishing bases at potential deployment locations. Additionally, in response to a congressional mandate, we have annually reported since 2003 on the cost, schedule, testing, and performance progress that the Missile Defense Agency is making in developing ballistic missile defenses. U.S. Strategic Command and the Missile Defense Agency created the Warfighter Involvement Process in 2005 to identify and address the combatant commands’ ballistic missile defense capability needs, but the process has yet to overcome key limitations to its effectiveness. Although the Warfighter Involvement Process is still evolving, it has helped the Missile Defense Agency address some of the combatant commands’ needs. However, even as U.S. Strategic Command and the Missile Defense Agency move forward with the process, they have not finalized the implementation guidance needed to clarify their respective roles and responsibilities; have not yet established effective methodologies for identifying, prioritizing, and addressing combatant command needs; and have not involved senior civilian DOD leadership to adjudicate potential differences among the combatant commands’ priorities and provide a departmentwide perspective about how to best allocate resources. 
As a result, DOD is at risk of not addressing the combatant commands' missile defense needs if improvements are not made that establish an effective and well-documented process and provide a DOD-wide perspective when prioritizing these needs.

Although the Warfighter Involvement Process was created in 2005 and is still evolving, the process has helped the Missile Defense Agency to address some combatant command ballistic missile defense capability needs. Since 2001, DOD has emphasized a capabilities-based development strategy to provide the combatant commands with the capabilities they require to deter and defeat a broad range of adversaries. By establishing the Missile Defense Agency in 2002, DOD intended to follow a more streamlined capabilities-based development strategy to rapidly develop and field ballistic missile defenses. Through the Warfighter Involvement Process, the agency has addressed some of the combatant commands' capability needs in developing ballistic missile defenses. However, because the Warfighter Involvement Process is still evolving, the combatant commands have not yet formally determined the extent to which the agency's plans are in line with the commands' needs. The Warfighter Involvement Process has not fully evolved to effectively convey either the commands' priorities to the Missile Defense Agency or the Missile Defense Agency's planned adjustments back to the commands.

When the Secretary of Defense created the Missile Defense Agency in 2002, DOD lacked a process for the agency to consider the combatant commands' priorities as it developed ballistic missile defenses. Instead, the Missile Defense Agency focused on developing and deploying capabilities based on its own technology-driven assessment of what could be fielded quickly in order to meet the President's direction to field a limited ballistic missile defense system by 2004. As a result, the Missile Defense Agency expedited its initial designs and development plans without formally considering the combatant commands' needs, according to the DOD Inspector General. Additionally, the agency identified long-term ballistic missile defense system capability goals before having a process in place to identify the commands' capability needs.

In emphasizing the rapid initial development of ballistic missile defense systems, the Missile Defense Agency anticipated that further investments could be needed to better meet the combatant commands' requirements. Under the Secretary of Defense's 2002 direction, the Missile Defense Agency's approach has been to deploy capabilities early, which may only partially meet warfighter needs, and then incrementally improve the deployed capabilities' effectiveness by inserting new technologies as they become available and as the threat warrants. To initiate this approach, the agency focused on further developing ballistic missile defenses that had been previously under development by the military services and subjected to DOD's traditional joint requirements determination process. Officials from U.S. Strategic Command, U.S. Northern Command, the Missile Defense Agency, and the Office of the Secretary of Defense told us that the agency's approach resulted in the rapid deployment of operational missile defenses. A senior Missile Defense Agency official added that the Secretary of Defense reviewed and approved the agency's plans for developing this initial defensive capability. However, absent the combatant commands' inputs, U.S.
Strategic Command concluded in January 2005 that taking this approach made it difficult not only for the Missile Defense Agency to associate its actions with the commands’ requirements, but also for the combatant commands to evaluate the agency’s progress. According to U.S. Strategic Command, the lack of a process also created the potential for inefficiencies and unnecessary redundancies in the Missile Defense Agency’s investments, resulting in increased risk to the baseline costs and operational effectiveness of the ballistic missile defense systems under development. U.S. Strategic Command recognized the need to formalize a process to carry out its missile defense advocacy responsibilities, even as the Missile Defense Agency was focused on developing and deploying capabilities quickly. Following U.S. Strategic Command’s creation in 2002 and assignment of several new missions in January 2003, the command took a wide range of actions to implement and integrate these missions, such as developing various plans, concepts, and guidance; establishing procedures and processes; identifying personnel and funding resources; developing new relationships; building communication networks; and providing training, education, and exercises. Among these activities, U.S. Strategic Command took steps to establish its role as the combatant commands’ advocate for missile defenses. For example, in its November 2003 Strategic Concept for Global Ballistic Missile Defense, U.S. Strategic Command outlined its initial concept for developing and advocating for desired ballistic missile defense capabilities. Subsequently, in late 2004 and early 2005, U.S. Strategic Command recognized the need for creating a more formalized process for identifying and addressing the warfighter’s ballistic missile defense needs. Additionally, the command undertook several reorganizations, the latest occurring in late 2004 and early 2005, where it established a new functional component for integrated missile defense to bring focus and attention to the command’s operational responsibilities. The Missile Defense Agency has addressed some combatant command needs since it and U.S. Strategic Command created the Warfighter Involvement Process in 2005. A key output of this newly established process is the Prioritized Capabilities List, which is intended to specify how the combatant commands collectively prioritize the full range of capabilities needed to perform ballistic missile defense missions. U.S. Strategic Command first provided the Prioritized Capabilities List to the Missile Defense Agency in 2006; a revised list was also provided in 2007. Combatant commands that provided inputs to the Prioritized Capabilities List include: U.S. Central Command, U.S. European Command, U.S. Joint Forces Command, U.S. Northern Command, U.S. Pacific Command, and U.S. Strategic Command. Appendix II identifies short descriptions of the 27 capabilities listed in the 2007 Prioritized Capabilities List. Following the Warfighter Involvement Process’s creation and preparation of the first Prioritized Capabilities List, the Missile Defense Agency adjusted some investment programs in response to the combatant commands’ prioritized requirements. In particular: The Missile Defense Agency created new investment programs to develop sea-based defenses against short-range missiles in their terminal phase of flight. 
The first Prioritized Capabilities List identified the combatant commands’ need for a sea-based terminal defense capability, but at that time the Missile Defense Agency was not investing resources to develop sea-based terminal defenses. After receiving the first Prioritized Capabilities List, the Missile Defense Agency included a program in its fiscal year 2008 budget proposal to modify and deploy up to 100 Navy Standard Missile-2 interceptors as a near-term option. Additionally, the Missile Defense Agency created a second program to develop more capable systems that would be available in the long term. The Missile Defense Agency’s current plans for these programs include spending a total of $124 million through fiscal year 2011 on the near-term option, and $487 million through fiscal year 2013 to develop more advanced, long-term options. The Missile Defense Agency shifted funding to place greater emphasis on some existing investments because of requirements identified on the Prioritized Capabilities List. In particular, the Missile Defense Agency has been developing capabilities to sustain ballistic missile defense operations while simultaneously making the system available for testing, training, upgrades, and maintenance. Although the combatant commands had identified this capability need and the Missile Defense Agency funded this capability, it took on new urgency when the ballistic missile defense system was taken out of test mode and put in an operational status for the first time in 2006. While the system was operational, it was not available to either the Missile Defense Agency for developmental activities and maintenance or to the combatant commands for training. To address this shortfall, the Missile Defense Agency increased resources to more quickly develop concurrent testing, training, and operations capabilities. According to the Missile Defense Agency, the agency increased funding for this effort from about $0.5 million in fiscal year 2006 to $6.9 million in fiscal year 2007. The Missile Defense Agency has responded to numerous combatant command requests to change systems that have already been fielded. Working closely with U.S. Strategic Command’s functional component for integrated missile defense, the Missile Defense Agency has modified some systems’ hardware and software to meet the combatant commands’ capability needs. U.S. Strategic Command officials told us that the combatant commands typically identify the need for such changes as the result of exercises, training, or operational experience. Although officials we spoke with viewed the agency’s responsiveness to these requests as positive, some observed that a more effective process for involving the warfighter earlier in developing systems could reduce the need to change these systems once they had been developed and fielded. Although the Warfighter Involvement Process has not yet fully evolved, Missile Defense Agency and U.S. Strategic Command officials believe the agency has generally been responsive to the combatant commands’ capability needs. For example, a 2007 joint study by the Missile Defense Agency and U.S. Strategic Command concluded that the agency was at least partially addressing all of the combatant commands’ capability needs. 
Additionally, Missile Defense Agency officials told us that, based on the study’s results and the agency’s assessment of the 2007 Prioritized Capabilities List, the agency was making adjustments to its investment plans to help mitigate potential gaps between the commands’ needs and the agency’s programs. However, for approximately 3 years after it began making investments to develop and deploy systems, the Missile Defense Agency lacked the ability to ascertain the extent to which its efforts were aligned with the commands’ needs. Moreover, as of May 2008, the combatant commands had not yet formally assessed and responded to the Missile Defense Agency’s recently revised plans. As a result, the commands have not formally determined the extent to which the agency’s plans are in line with the commands’ needs. Although the Warfighter Involvement Process has helped address some of the commands’ needs, U.S. Strategic Command and the Missile Defense Agency have yet to overcome key limitations as they move forward with the process. These interrelated limitations include a lack of clear and well documented roles and responsibilities; ineffective methodologies for identifying, prioritizing, and addressing combatant command priorities; and the lack of senior civilian DOD participation in the process to adjudicate among the commands’ priorities and assess departmentwide risk about how to best allocate resources. U.S. Strategic Command and the Missile Defense Agency have not yet clarified their respective roles and responsibilities by putting into place the approved and complete guidance needed to implement the process and to hold them accountable for achieving results. The Office of Management and Budget’s guidance on establishing internal controls emphasizes that agencies should design management structures for programs to help ensure accountability for results. According to GAO’s Standards for Internal Control in the Federal Government, such management structures include clearly documented guidance, including policies, procedures, directives, instructions, and other documentation that establish roles and responsibilities needed to achieve an organization’s mission and objectives. Additionally, our prior work on internal controls and management accountability also has emphasized that complete guidance should be approved, current, and binding on all appropriate stakeholders. Lacking approved and complete guidance, the combatant commands have not had a clear understanding of U.S. Strategic Command’s and the Missile Defense Agency’s roles and responsibilities, and have lacked a mechanism to hold either organization accountable for effectively identifying, prioritizing, and addressing their needs. U.S. Strategic Command has not yet put into place approved guidance formally establishing its roles and responsibilities under the Warfighter Involvement Process, although it has been developing a commandwide instruction to do so since 2005. In preparing the instruction, U.S. Strategic Command solicited comments from stakeholder organizations, including other combatant commands and the Joint Staff, in order to build consensus around key relationships that support the Warfighter Involvement Process. Some stakeholders raised key issues about U.S. Strategic Command’s roles in the Warfighter Involvement Process. For example, U.S. Central Command officials commented that a draft version of U.S. Strategic Command’s instruction conveyed too much responsibility to U.S. 
Strategic Command for speaking on behalf of the other commands when advocating for their capability needs. In response, U.S. Strategic Command modified its instruction to more clearly limit its responsibilities for prioritizing the different commands’ needs. In addition to addressing stakeholder comments, U.S. Strategic Command changed the draft instruction to incorporate recommendations from a 2007 joint study by the Missile Defense Agency and U.S. Strategic Command on how to improve the Warfighter Involvement Process. In February 2008, the command also updated the draft instruction to account for its newly assigned responsibility relating to DOD’s efforts to integrate air and missile defenses across the department. U.S. Strategic Command officials told us that the command plans to approve and issue the instruction by mid-2008. However, the command’s draft instruction recognizes that further clarifications and details for implementing the Warfighter Involvement Process are still needed, which may require additional revisions after the current draft is approved. Until U.S. Strategic Command has approved guidance in place, the combatant commands continue to lack a mechanism that holds U.S. Strategic Command accountable for its roles and responsibilities under the Warfighter Involvement Process. The Missile Defense Agency also does not have finalized guidance in place detailing its responsibilities in the Warfighter Involvement Process. Lacking such guidance, officials from several combatant commands told us that the Missile Defense Agency has not provided them with enough insight into how it takes their needs into account. Although some of the Missile Defense Agency’s Warfighter Involvement Process responsibilities are identified in U.S. Strategic Command’s draft instruction, this instruction does not provide specific details about how the agency will carry them out. Additionally, the U.S. Strategic Command draft instruction will not be binding on the Missile Defense Agency once it is completed. In commenting on U.S. Strategic Command’s draft instruction, Joint Staff officials asked U.S. Strategic Command how the Missile Defense Agency would be held accountable for its Warfighter Involvement Process responsibilities. U.S. Strategic Command responded that its goal was for the Missile Defense Agency to either approve the U.S. Strategic Command instruction, or publish a complementary document stipulating its responsibilities. Missile Defense Agency officials told us in May 2008 that the agency had not yet taken either of these actions because U.S. Strategic Command’s instruction was still incomplete. Until recently, the Missile Defense Agency did not plan to prepare its own guidance for establishing its roles and responsibilities in the Warfighter Involvement Process. In March 2006, a senior Missile Defense Agency official stated to the DOD Inspector General that the agency did not plan to issue a new directive that complemented U.S. Strategic Command’s instruction. Instead, the official stated that the agency’s Integrated Program Policy and Systems Engineering Plan would be used to document the agency’s Warfighter Involvement Process responsibilities. However, these documents provide top-level direction and descriptions of the agency’s decision-making processes and lack specific details about how the agency would fulfill its Warfighter Involvement Process responsibilities. 
Moreover, the agency has not yet updated these documents to identify specific Warfighter Involvement Process roles and responsibilities. Additionally, a Missile Defense Agency official told us that, based on its experience during 2006 and 2007, the agency needed to prepare internal guidance to ensure that all of its project offices understood and could be held accountable for their responsibilities under the process. In May 2008, agency officials told us that the agency not only was planning to update some of this internal guidance, but also was beginning to prepare its own Warfighter Involvement Process guidance to complement U.S. Strategic Command's instruction. Until the Missile Defense Agency completes this effort, the combatant commands will continue to lack both transparency into the Missile Defense Agency's process for addressing their needs and the means to hold the agency accountable. The Warfighter Involvement Process has not yet resulted in effective methodologies for the combatant commands to identify and prioritize their capability needs and for the Missile Defense Agency to address the combatant commands' capability needs. According to U.S. Strategic Command's draft instruction, the goals of the Warfighter Involvement Process include providing a unified means for the combatant commands to communicate desired capabilities to the Missile Defense Agency, and for the Missile Defense Agency to communicate its resultant acquisition plans back to the commands. The Prioritized Capabilities List is intended to achieve these goals through methodologies that clearly, completely, and accurately identify the commands' needed capabilities, and distinguish one priority from the next. Additionally, U.S. Strategic Command's draft Warfighter Involvement Process instruction indicates that an effective methodology for addressing the commands' needs would clearly associate the agency's investments with those needs. Lacking effective methodologies, the combatant commands have not communicated their capability needs in an understandable and useful way to the Missile Defense Agency, and the agency has not clearly communicated how the combatant commands' capability needs are being addressed in its development and investment decisions.
Some Combatant Commands' Needs Not Clearly Identified
Our work revealed several examples where the methodology used to develop the Prioritized Capabilities List did not effectively identify the specific capability needs of some of the combatant commands. In identifying the capability needs on the Prioritized Capabilities List, U.S. Strategic Command used a capabilities-based approach to prepare broad, generalized statements describing the full range of capabilities needed to operate a global ballistic missile defense system. As a result of this approach, several of the capabilities on the list encompass multiple functional areas, such as interceptors, sensors, and communications, which has made it difficult for the Missile Defense Agency to identify the specific capabilities that the commands require. Additionally, U.S. Strategic Command officials told us that, because the list focuses on the capabilities that the combatant commands would need in the future, the Prioritized Capabilities List has not provided an adequate format for the combatant commands to identify their needs for forces to meet ongoing operational requirements. Although U.S.
Joint Forces Command officials told us that the 2007 list clearly identified the capabilities that were important to their command, officials from the three geographic combatant commands with whom we spoke told us that the list did not effectively represent their needs. For example: U.S. Northern Command officials told us that the capabilities did not adequately or clearly identify some of their more specific needs because the capabilities on the list encompass the specific needs of multiple commands, which could obscure the meaning and intent of the underlying needs of the individual commands. U.S. Pacific Command officials told us that the 2007 Prioritized Capabilities List did not fully meet their command's needs because the list was not designed to identify the quantities of interceptors that the command needs to meet specific requirements for missile defense operations in the Pacific region, given the potential ballistic missile threats posed to U.S. forces and allies in the region. U.S. Central Command officials told us that the 2007 Prioritized Capabilities List provided the appropriate detail for systems that have yet to be developed. However, the officials also told us that U.S. Central Command's primary need is to be sure that the command has access to sufficient short-range missile defense systems for operations in its region. They added that the Prioritized Capabilities List has not been an effective tool for advocating for these needs because it is focused, instead, on future capability requirements. U.S. Strategic Command officials stated that they used a capabilities-based approach to identify and prioritize capability needs because this approach is consistent with DOD's traditional joint requirements determination process used by the combatant commands in non-missile-defense areas, which initially identifies requirements in broad terms. U.S. Strategic Command stated that this approach allowed it to identify and condense over 100 tasks required to plan and execute ballistic missile defense missions into the 27 capabilities on the 2007 Prioritized Capabilities List. U.S. Strategic Command officials added that this approach resulted in a list of manageable length and level of detail needed to provide the Missile Defense Agency with insight into the commands' needs. The officials further stated that the list was not designed to identify the commands' short-term operational requirements, adding that U.S. Strategic Command planned to put into place a Warfighter Involvement Process function to identify and advocate for the commands' operational force requirements. However, U.S. Strategic Command and Missile Defense Agency officials agreed that the lists prepared to date have not provided enough specific detail to inform the Missile Defense Agency about how to best address the commands' needs when developing new capabilities. Until U.S. Strategic Command develops a methodology to more clearly identify the commands' capability needs, the Prioritized Capabilities List's effectiveness as a guide for how the Missile Defense Agency invests resources will continue to be limited.
Combatant Commands' Needs Not Consistently Prioritized
In addition to not effectively identifying some of the combatant commands' capability needs, the Warfighter Involvement Process also has not resulted in a consistent methodology for prioritizing these needs.
In preparing the 2006 Prioritized Capabilities List, the combatant commands grouped the capabilities by the time frames in which they would be needed—either near-, mid-, or far-term. In contrast, in preparing the 2007 Prioritized Capabilities List, U.S. Strategic Command asked the combatant commands to evaluate each capability's relative importance to (1) the command's ballistic missile defense mission, weighted at 60 percent; (2) the command's other missions, weighted at 30 percent; and (3) other joint capability areas, weighted at 10 percent. For each capability, the combatant commands were told to assign a rating of 1 (lowest importance) to 5 (highest importance) for each factor, multiply each rating by the appropriate weight, and add up the three weighted ratings to develop the capability's score. However, the individual combatant commands did not consistently apply this methodology: Some combatant commands took additional factors into account when prioritizing their individual capability needs. U.S. Strategic Command officials told us that each of the combatant commands was best positioned to determine for itself how to use the criteria for prioritizing the capabilities on the list. However, in the analysis accompanying the 2007 Prioritized Capabilities List, U.S. Strategic Command recognized as a limitation that the commands may have considered the current performance of a system or other criteria when prioritizing their needs. Missile Defense Agency officials told us that some combatant commands appeared to follow U.S. Strategic Command's direction and prioritize the capabilities based on their overall importance to their current responsibilities, while other commands appeared to prioritize their needs based on what capabilities they were lacking. As a result, the officials told us, the Missile Defense Agency lacked clarity on what the commands were trying to communicate in the Prioritized Capabilities List. The combatant commands also did not consistently rank their capability needs. For example, U.S. Northern Command officials told us that they believed it was important to clearly distinguish among priorities by not assigning the same score to more than one capability, whereas U.S. Joint Forces Command officials told us that duplicate scores indicated that some capabilities were equally important to the command. Additionally, U.S. Joint Forces Command officials told us that U.S. Strategic Command did not initially provide guidance on whether duplicate scores were acceptable; however, they stated that U.S. Strategic Command officials later told them that such results were valid. In addition to U.S. Joint Forces Command, which assigned the second-highest score to four capabilities, U.S. Central Command and U.S. Pacific Command both assigned the highest score to four capabilities, and U.S. European Command assigned the second-highest score to three capabilities. However, Missile Defense Agency officials told us that it would be more useful to the agency if the combatant commands more clearly distinguished among their prioritized needs by not assigning duplicate scores.
Missile Defense Agency's Response to the Prioritized Capabilities List Not Formally Assessed
U.S. Strategic Command has not formally assessed the Missile Defense Agency's responses to the 2006 and 2007 Prioritized Capabilities Lists to determine whether the agency has developed an effective methodology for addressing the combatant commands' needs.
Such an analysis of the Missile Defense Agency's response is envisioned in U.S. Strategic Command's draft Warfighter Involvement Process instruction. However, U.S. Strategic Command did not prepare a formal response to the agency's first Achievable Capabilities List, which the Missile Defense Agency provided to the combatant commands in 2006. U.S. Strategic Command and Missile Defense Agency officials stated that the 2006 Achievable Capabilities List was ineffective because the agency did not analyze its detailed investment programs to determine the extent to which its programs were well aligned with the commands' priorities. U.S. Strategic Command officials told us that clear, direct linkages between the Prioritized Capabilities List and the Missile Defense Agency's programs were difficult to establish because the capabilities on the Prioritized Capabilities List are at a broad, generalized level and the Missile Defense Agency's program of record is at a system-specific level. As a result, the Missile Defense Agency's response to the first Prioritized Capabilities List did not provide U.S. Strategic Command with funding or budget information needed to prepare a formal response to the 2006 Achievable Capabilities List. The Missile Defense Agency has prepared a more complete and detailed response to the 2007 Prioritized Capabilities List, but U.S. Strategic Command has not yet formally analyzed the agency's response. Missile Defense Agency officials told us that, compared to the 2006 Achievable Capabilities List, the 2007 Achievable Capabilities List provides better information about how the agency has addressed the commands' needs. Unlike the previous list, the 2007 Achievable Capabilities List provides more information, including a capability gap analysis and a detailed budget analysis that links each of the commands' 27 capability needs to the agency's investment programs. According to the Missile Defense Agency, at least four combatant commands have provided favorable feedback to the Missile Defense Agency about its 2007 response. However, the combatant commands have not yet formally assessed whether the agency's methodology for addressing their needs is effective. As envisioned by U.S. Strategic Command's Warfighter Involvement Process draft instruction, U.S. Strategic Command would analyze and reply to the agency's Achievable Capabilities List by preparing a Capability Assessment Report. U.S. Strategic Command stated that this report is to appraise the Missile Defense Agency's funding plans, assess whether the agency's development trends are expected to provide effective capabilities, and facilitate further interaction with the agency about potential changes to the Missile Defense Agency's investments. Having received the agency's most recent Achievable Capabilities List in April 2008, U.S. Strategic Command officials told us that they plan to complete this assessment and provide the Capability Assessment Report to the Missile Defense Agency by mid-August 2008. However, the officials told us that they did not expect that the Missile Defense Agency would have time to make significant adjustments to its fiscal year 2010 budget proposal after receiving the Capability Assessment Report. Until U.S. Strategic Command prepares this assessment, the agency will lack the commands' formal feedback on how well it is addressing their needs and may miss opportunities to make adjustments to its plans and future budgets.
U.S. Strategic Command and Missile Defense Agency Are Taking Steps to Improve Warfighter Involvement Process
U.S. Strategic Command and Missile Defense Agency officials told us that the Warfighter Involvement Process has provided the Missile Defense Agency with important information about the combatant commands' needed capabilities, and that they are taking steps to improve their respective inputs to the process. U.S. Strategic Command officials told us that the 2007 Prioritized Capabilities List highlighted an overall preference among the commands for the Missile Defense Agency to further improve existing capabilities, rather than develop new types of ballistic missile defenses. Missile Defense Agency officials added that the Warfighter Involvement Process has increased the agency's interactions with the combatant commands, which has provided the agency with a broader perspective on the combatant commands' operational responsibilities, including insight into their operational needs for integrated planning, communications, and consequence management. Further, U.S. Strategic Command has sought new methodologies to enhance the ability to identify and prioritize the commands' capability needs as the process has evolved. Moving forward, U.S. Strategic Command plans to improve the Prioritized Capabilities List by distinguishing between overall, long-term capability needs and shorter-term development goals. Command officials also told us that they intend to improve the list by clarifying the capability statements to provide better guidance to the Missile Defense Agency. According to the officials, this improved list would be prepared in time for the Missile Defense Agency to consider when it prepares its 2012 budget proposal. However, as of May 2008, U.S. Strategic Command had only begun the process of determining the methodologies for identifying and prioritizing the commands' capability needs. Until U.S. Strategic Command prepares effective and consistent methodologies for identifying and prioritizing these capabilities, the Prioritized Capabilities List will continue to be of limited use to the Missile Defense Agency. Moreover, Missile Defense Agency officials indicated that they may need to make further improvements to the agency's approach for addressing the commands' needs. Unless the Missile Defense Agency has developed an effective methodology for addressing their needs, the commands' ability to provide a detailed, formal assessment of the agency's plans will be limited. Unlike DOD's traditional process for prioritizing combatant command capability needs when DOD prepares its funding plans, the Warfighter Involvement Process has lacked the involvement of senior civilian DOD officials with a departmentwide perspective to adjudicate potential differences among the combatant commands' priorities. Under DOD's traditional process, the Chairman, Joint Chiefs of Staff, evaluates the combatant commands' individual and collective requirements, and advises the Secretary of Defense on the extent to which DOD investment plans are addressing these requirements. In contrast, the Warfighter Involvement Process is not structured to involve senior civilian DOD leadership to provide their perspective on how to assess risk and allocate resources for missile defenses and other DOD needs. Instead, U.S. Strategic Command consolidated each command's capability needs into an overall prioritized list, and then provided the list directly to the Missile Defense Agency.
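The arithmetic behind this consolidation can be illustrated with a brief sketch. The sketch below applies the 2007 weighting factors described earlier (60 percent for the ballistic missile defense mission, 30 percent for other missions, and 10 percent for other joint capability areas) to hypothetical 1-to-5 ratings from two notional commands and two notional capabilities, and then adds each capability's weighted scores across commands to produce an overall ranking. The command names, capability names, ratings, and Python rendering are illustrative assumptions only; the actual inputs and scores in the Prioritized Capabilities List are classified and are not reflected here.

```python
# Illustrative sketch only: notional commands, capabilities, and ratings.
# Weighting factors follow the 2007 Prioritized Capabilities List methodology:
# ballistic missile defense mission (60 percent), other missions (30 percent),
# and other joint capability areas (10 percent).

WEIGHTS = {"bmd_mission": 0.6, "other_missions": 0.3, "other_joint_areas": 0.1}

# Hypothetical 1-to-5 importance ratings assigned by two notional commands
# to two notional capabilities (actual inputs are classified).
ratings = {
    "Command X": {
        "Capability A": {"bmd_mission": 5, "other_missions": 5, "other_joint_areas": 5},
        "Capability B": {"bmd_mission": 4, "other_missions": 3, "other_joint_areas": 3},
    },
    "Command Y": {
        "Capability A": {"bmd_mission": 1, "other_missions": 1, "other_joint_areas": 1},
        "Capability B": {"bmd_mission": 4, "other_missions": 3, "other_joint_areas": 3},
    },
}

def weighted_score(factor_ratings):
    """Multiply each rating by its weight and add up the weighted ratings."""
    return sum(WEIGHTS[factor] * rating for factor, rating in factor_ratings.items())

# Each command's score for each capability; for example, Command X rates
# Capability A at 5*0.6 + 5*0.3 + 5*0.1 = 5.0.
per_command = {
    command: {cap: round(weighted_score(r), 2) for cap, r in caps.items()}
    for command, caps in ratings.items()
}

# Consolidate by adding together the commands' scores for each capability,
# then list the capabilities from highest to lowest aggregate score.
aggregate = {}
for caps in per_command.values():
    for cap, score in caps.items():
        aggregate[cap] = round(aggregate.get(cap, 0.0) + score, 2)

consolidated = sorted(aggregate.items(), key=lambda item: item[1], reverse=True)

print(per_command)
print(consolidated)  # [('Capability B', 7.2), ('Capability A', 6.0)]
```

In this illustration, Capability A is the single most important need for Command X yet ranks below Capability B, which both commands rate only moderately; this is the kind of outcome that underlies the concern, discussed below, that a consolidated list could obscure a need ranked highly by only one command.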
Lacking the involvement of senior civilian DOD officials in reviewing the commands’ priorities, the Missile Defense Agency has not benefited from receiving a broader, departmentwide perspective on which of the commands’ needs were the most significant. Under traditional DOD requirements processes, each combatant command is responsible for identifying and seeking the specific military capabilities that it needs to implement its own mission. Moreover, the commands’ capability needs differ and depend on their individual mission responsibilities. For example, U.S. Pacific Command and U.S. Central Command’s missions and geographic responsibilities primarily call for ballistic missile defenses that can address short- and medium-range missile threats to deployed forces and to U.S. friends and allies. U.S. Central Command officials added that they also require sea-based missile defense capabilities to provide greater operational flexibility in a politically volatile region. U.S. Northern Command’s mission is to defend the U.S. homeland, and its primary operational focus for ballistic missile defense is on intercontinental threats. As the combatant command responsible for developing robust, joint command and control capabilities and interoperable systems, U.S. Joint Forces Command has emphasized the need to integrate ballistic missile defense capabilities with air and cruise missile defenses. U.S. Strategic Command, which is responsible for planning, integrating, and coordinating global missile defense operations, has worldwide responsibilities that include working with all of the geographic commands on an equal basis to defend their respective regions. Given these varied mission needs, some combatant command officials told us that they were not satisfied with U.S. Strategic Command’s approach for preparing the 2007 Prioritized Capabilities List. To prepare the list, U.S. Strategic Command determined an overall score for each of the 27 capabilities on the list by adding together the scores that the commands had assigned to each individual capability. U.S. Strategic Command then listed the capabilities from highest to lowest aggregate score to consolidate the commands’ needs into a single, overall prioritized list. U.S. Strategic Command and U.S. Joint Forces Command officials told us that this was a reasonable approach to follow for consolidating the commands’ priorities because it equitably represented each command’s needs. However, other combatant command officials told us that they were dissatisfied with this approach. For example, U.S. Central Command officials told us this approach had limited utility because it did not consider or distinguish among the different commands’ mission responsibilities. U.S. Pacific Command officials similarly told us that compiling a single list should not be based only on the sum of each capability’s score, but should also consider each command’s specific military responsibilities relative to each other. U.S. Northern Command officials told us that the combatant commands’ varied mission requirements made it difficult to consolidate the commands’ capability needs in a meaningful way without judging which missile defense missions were the most pressing. Although they prefer to have their commands’ individual mission responsibilities taken into account when preparing the Prioritized Capabilities List, some combatant command officials told us that the Warfighter Involvement Process was not well structured to adjudicate potential differences among their needs. 
For example, in comments on a draft of U.S. Strategic Command's Warfighter Involvement Process instruction, U.S. Central Command stated that the Unified Command Plan did not implicitly or explicitly convey to U.S. Strategic Command the responsibility to assess the relative importance of the other commands' capability needs. U.S. Northern Command officials told us that although U.S. Strategic Command is best positioned among the combatant commands to advocate for warfighter-desired ballistic missile defense capabilities, they were unsure whether the Unified Command Plan gave U.S. Strategic Command the responsibility, as the Warfighter Involvement Process administrator, to determine which of the other commands' needs were the most important. U.S. Pacific Command officials also told us that U.S. Strategic Command may lack the proper perspective to assess and evaluate the other commands' mission areas when determining overall priorities. The U.S. Pacific Command officials added that senior civilian DOD officials could apply a broader perspective to help specify whether the prioritized list should emphasize one command's mission needs over another's. Although U.S. Joint Forces Command officials told us that U.S. Strategic Command has the appropriate authorities to develop a Prioritized Capabilities List on behalf of the other commands, they stated that U.S. Strategic Command would have difficulty reaching consensus among the combatant commands about which of their mission needs were the most important, which could make the process of preparing a final list unnecessarily complicated and difficult. U.S. Strategic Command officials also stated that adjudicating the priorities of the other commands is not within the scope of the Warfighter Involvement Process; rather, the command officials told us that they intended to use the Prioritized Capabilities List to identify the combatant commands' collective priorities for developing a globally integrated ballistic missile defense system. U.S. Strategic Command further stated that senior DOD leadership should be responsible for instructing the Missile Defense Agency about how to best address these priorities. U.S. Strategic Command officials stated that, just as they did not adjudicate the other commands' mission needs in preparing the 2007 Prioritized Capabilities List, they also did not involve senior civilian DOD authorities to do so. Rather, U.S. Strategic Command sought the other combatant commands' approval of the list, and then provided the list to the Missile Defense Agency without first seeking a review outside the Warfighter Involvement Process by DOD officials with responsibilities for assessing risk and allocating resources. In particular, U.S. Strategic Command convened a meeting of 1-star and 2-star general and flag officers from the combatant commands to review the list and resolve any disagreements before it was finalized. U.S. Strategic Command also circulated drafts of the list for the commands' senior leadership to review, and made changes to the list in response to critical comments from one of the commands. As a result, while the commanders of the combatant commands approved the list before U.S. Strategic Command sent it to the Missile Defense Agency, the list did not receive a higher-level review to determine which of their priorities was most important. U.S.
Strategic Command officials told us that they recognized that consolidating the individual commands’ needs into an overall set of priorities would result in some commands having their priorities ranked higher than those of other commands. However, U.S. Strategic Command officials added that they were responsive to the need to make the individual commands’ priorities transparent. For example, the analysis accompanying the 2007 Prioritized Capabilities List documented how each command individually ranked the 27 capabilities on the list, so that the Missile Defense Agency could gain insight into what the individual commands needed. Additionally, the analysis accompanying the 2007 list highlighted that U.S. Central Command, U.S. European Command, and U.S. Pacific Command gave higher scores for capabilities needed to defend deployed forces, U.S. allies, and friends, while U.S. Strategic Command and U.S. Northern Command prioritized higher those capabilities needed to defend the U.S. homeland. Further, U.S. Strategic Command and U.S. Joint Forces Command officials told us that the overall list provided a fair perspective on the commands’ overall priorities because the capabilities ranked highest on the consolidated list were highly ranked by multiple commands. However, without involving senior DOD officials to provide a departmentwide review of these overall priorities, assess the commands’ varied mission responsibilities, and provide their perspective on which priorities were the most significant, the consolidated list could obscure the importance of a key national defense priority if that need was ranked highly by only one command. In contrast to preparing the Prioritized Capabilities List, other aspects of U.S. Strategic Command’s ballistic missile defense responsibilities involve senior DOD officials for reviewing and adjudicating decisions that affect the other combatant commands. For example, under U.S. Strategic Command’s 2003 concept for planning, integrating, and coordinating global ballistic missile defense forces during a crisis, the Chairman, Joint Chiefs of Staff, would be responsible for considering a U.S. Strategic Command recommendation to reallocate ballistic missile defense forces from one combatant command’s region to another’s. Although U.S. Strategic Command’s concept states that “in most cases, U.S. Strategic Command’s recommendations will be understood and accepted by the other combatant commands,” the affected commands could present alternative recommendations to the Secretary of Defense if they disagreed with U.S. Strategic Command’s proposal. By providing for senior-level involvement during planning, U.S. Strategic Command ensures that the decision to reallocate forces from one region to another is made based on a full, DOD-wide perspective on how to best meet national security needs. DOD is taking steps to improve the oversight of ballistic missile defense developments, but so far these steps have not included plans to involve senior civilian DOD officials to adjudicate the combatant commands’ priorities. The Missile Defense Executive Board was chartered in March 2007 to review and make recommendations on the Missile Defense Agency’s comprehensive acquisition strategy to the Deputy Secretary of Defense. U.S. Northern Command officials stated to us that the Missile Defense Executive Board could play a valuable role by reviewing the Prioritized Capabilities List before it was provided to the Missile Defense Agency. Similarly, U.S. 
Strategic Command officials told us that the Missile Defense Executive Board could provide the combatant commands with a venue outside the Warfighter Involvement Process for reviewing and adjudicating their differing mission needs after the Prioritized Capabilities List is completed, but before the list is provided to the Missile Defense Agency. The U.S. Strategic Command officials added that the board could provide a perspective that U.S. Strategic Command lacked on the cost, risk, and benefits of allocating resources to develop specific priorities. Since late 2007, the board has been considering new processes to improve the management of DOD resources to develop and operate ballistic missile defenses. The board is chaired by the Under Secretary of Defense for Acquisition, Technology, and Logistics, and its membership includes senior-level representatives from the Office of the Secretary of Defense, Joint Chiefs of Staff, U.S. Strategic Command, and other organizations. As a result, the board is expected to provide DOD with a means to exercise broader oversight of the Missile Defense Agency than its predecessor organizations did. However, U.S. Strategic Command and Office of the Secretary of Defense officials told us that the board's current focus is to align the services' and Missile Defense Agency's resource plans to support ballistic missile defense operations, rather than assess the relative importance of the combatant commands' ballistic missile defense mission responsibilities and corresponding capability needs. Unless senior civilian DOD officials get involved in adjudicating the commands' overall priorities before DOD makes resource decisions, the Missile Defense Agency will lack a departmentwide perspective on how to best allocate resources to meet the broad array of missile threats that confront U.S. national security. The Warfighter Involvement Process continues to evolve and mature as U.S. Strategic Command works with the other combatant commands to identify priorities and communicate them to the Missile Defense Agency. Because the process is distinct from DOD's traditional process, U.S. Strategic Command has had to build consensus around new roles, responsibilities, and authorities needed to make the combatant commands' capability needs known to the Missile Defense Agency. Even without a mature and effective Warfighter Involvement Process in place, the Missile Defense Agency has adjusted some of its investments to better meet the combatant commands' capability needs. However, U.S. Strategic Command and the Missile Defense Agency have yet to overcome key limitations that complicate both U.S. Strategic Command's efforts to advocate on behalf of the other commands and the Missile Defense Agency's ability to address their needs. Although U.S. Strategic Command has been drafting implementation guidance since 2005, neither the command nor the Missile Defense Agency has finalized such guidance, which is needed to clarify their respective roles and responsibilities. Additionally, the Prioritized Capabilities List has not been a clear and effective guide for the Missile Defense Agency to follow when making investment decisions. Moreover, the Missile Defense Agency has only recently analyzed the combatant commands' needs and linked them to its investment programs; until the combatant commands formally assess and respond to the agency's analysis, the extent to which the agency has effectively addressed the commands' needs will remain unclear.
Finally, the Warfighter Involvement Process faces challenges in determining the relative importance of the combatant commands’ varied ballistic missile defense responsibilities. Unless these priorities are vetted by senior civilian DOD officials with departmentwide responsibilities for assessing risk and allocating resources, the Missile Defense Agency will be left to act on the commands’ priorities without the benefit of a DOD-wide perspective on the best approach to counter the short-, medium-, intermediate-, and intercontinental-range missile threats facing the United States. To improve DOD’s process for identifying and addressing combatant command priorities for ballistic missile defense capabilities, we recommend the Secretary of Defense direct the Commander, U.S. Strategic Command, in conjunction with the Director, Missile Defense Agency, to take the following two actions: 1. complete and publish the implementation guidance needed to clearly define each organization’s roles and responsibilities for identifying, prioritizing, and addressing combatant command capability needs for ballistic missile defenses, and review and update such guidance, as needed, as DOD’s process continues to evolve; and 2. establish effective methodologies for identifying, prioritizing, and addressing combatant command capability needs for ballistic missile defenses. Further, to provide the Missile Defense Agency with feedback as to how well it has addressed the combatant commands’ priorities in preparing future funding plans, we recommend the Secretary of Defense direct the Commander, U.S. Strategic Command, in conjunction with the other combatant commands, to prepare an assessment of the Missile Defense Agency’s funding plans compared to the commands’ priorities, and provide the assessment to the Director, Missile Defense Agency. To provide a DOD-wide perspective on the combatant commands’ priorities, given their views on the range of ballistic missile threats facing the United States, we recommend the Secretary of Defense direct the Missile Defense Executive Board to review each Prioritized Capabilities List upon its release, including the individual commands’ priorities, and recommend to the Deputy Secretary of Defense an overall DOD-wide list of prioritized capabilities. We further recommend the Secretary of Defense direct the Deputy Secretary of Defense to provide guidance to the Director, Missile Defense Agency, on program priorities taking into account the Missile Defense Executive Board’s recommendation. In written comments on a draft of this report, DOD agreed with three recommendations and partially agreed with two recommendations. DOD also provided technical comments that we incorporated as appropriate. DOD’s comments are reprinted in appendix III. DOD agreed with our recommendation that U.S. Strategic Command and the Missile Defense Agency complete and publish implementation guidance needed to clearly define each organization’s roles and responsibilities for identifying, prioritizing, and addressing combatant command capability needs for ballistic missile defenses. In its comments, DOD stated that the department has initiated the implementing guidance to define organizational roles and responsibilities. Specifically, DOD commented that on June 25, 2008, U.S. 
Strategic Command approved an instruction, titled Missile Defense Warfighter Involvement Process, that defines and establishes the process and outlines the command’s roles and responsibilities to influence the development, coordination, administration, and advocacy of global missile defense capabilities. We believe this is a positive step. However, the issued instruction indicates that the command anticipates the need for future revisions to the instruction as the process continues to evolve and as DOD undertakes efforts to integrate air and missile defenses across the department. Since U.S. Strategic Command issued the instruction when our draft report was with DOD for comment, we modified the recommendation to direct U.S. Strategic Command and the Missile Defense Agency to regularly review and update their guidance as the process evolves. DOD also commented that the Missile Defense Agency is defining its own guidance for its organizational roles and responsibilities to complement U.S. Strategic Command’s guidance; however, DOD’s comments did not provide us with a schedule or time frame for the completion of this effort. Until the Missile Defense Agency’s guidance is completed, the combatant commands will continue to lack transparency into the Missile Defense Agency’s process for addressing their needs and the means to hold the agency accountable. DOD also agreed with our recommendation that U.S. Strategic Command and the Missile Defense Agency establish effective methodologies for identifying, prioritizing, and addressing the combatant commands’ capability needs for ballistic missile defenses. In its comments, DOD stated that U.S. Strategic Command and the Missile Defense Agency are implementing effective methodologies for identifying, prioritizing, and addressing combatant command capability needs. Yet DOD also acknowledged that these methodologies continue to be refined. Our report recognizes that U.S. Strategic Command and the Missile Defense Agency are taking steps to improve the methodologies used in the Warfighter Involvement Process; however, we identified limitations with the current methodologies used to identify and prioritize the combatant commands’ capability needs. For example, we found that the Prioritized Capabilities List did not fully identify some of the combatant commands’ specific needs. We also determined that the combatant commands did not consistently apply criteria for prioritizing their capability needs, and also did not clearly distinguish among their priorities. As U.S. Strategic Command works to refine the methodologies for identifying and prioritizing capabilities, it will need to overcome these challenges. DOD agreed with our recommendation that U.S. Strategic Command, in conjunction with the other combatant commands, prepare an assessment of the Missile Defense Agency’s funding plans compared to the commands’ priorities and provide feedback to the Missile Defense Agency. In its comments, DOD stated that U.S. Strategic Command is preparing a Capabilities Assessment Report that examines the effectiveness and programmatic aspects of the ballistic missile defense system compared to the commands’ priorities, which it will present to the Missile Defense Agency in the fall of 2008. DOD also commented that U.S. Strategic Command has prepared a “Quick Look” of this report, which it provided to the Missile Defense Agency in June 2008. We encourage U.S. 
Strategic Command to provide the final assessment to the Missile Defense Agency as soon as possible so that the agency can consider the results of the assessment in developing its future funding plans. DOD partially agreed with both of our recommendations intended to provide the Missile Defense Agency with a DOD-wide perspective on the combatant commands’ priorities. First, DOD partially agreed with our recommendation to direct the Missile Defense Executive Board to review each Prioritized Capabilities List upon its release, including the individual commands’ priorities, and recommend to the Deputy Secretary of Defense an overall DOD-wide list of prioritized capabilities. Second, DOD partially agreed with our recommendation to direct the Deputy Secretary of Defense to provide guidance to the Missile Defense Agency on program priorities based on the Missile Defense Executive Board’s recommendation. However, it is not clear how DOD intends to implement these recommendations. In its comments, DOD stated that the Missile Defense Executive Board reviews the Prioritized Capability List prepared by U.S. Strategic Command, but added that a DOD-wide list of prioritized capabilities is not needed because the U.S. Strategic Command-prepared list provides the agency with a single list of prioritized needs. DOD also commented that it disagreed with the need for the Deputy Secretary of Defense to provide additional guidance to the Missile Defense Agency. We believe that additional actions to implement both recommendations are needed. First, officials from U.S. Strategic Command and other combatant commands told us during our review that the Warfighter Involvement Process was not well structured to consider the combatant commands’ individual mission responsibilities when preparing a consolidated list of the commands’ priorities. As a result, U.S. Strategic Command’s list could obscure the importance of a key ballistic missile defense capability if that capability was ranked high by only one of the combatant commands. Comprised of senior-level representatives from the Office of the Secretary of Defense, Joint Chiefs of Staff, U.S. Strategic Command, the military departments, and other organizations, the Missile Defense Executive Board could provide a broader, defensewide perspective factoring in the cost, risk, and benefits of supporting one command’s priorities over another’s. Absent a DOD-wide list of prioritized capabilities, the Missile Defense Agency will continue to lack the benefit of a departmentwide perspective on which of the combatant commands’ priorities are the most significant. Additionally, we continue to believe that the Deputy Secretary of Defense should provide the Missile Defense Agency with guidance on program priorities based on a DOD-wide list of prioritized capabilities. In its comments, DOD stated that the Under Secretary of Defense for Acquisition, Technology, and Logistics, as chairman of the Missile Defense Executive Board, has established a process for issuing Acquisition Decision Memorandums to the Director, Missile Defense Agency. Although the Under Secretary of Defense for Acquisition, Technology, and Logistics is responsible for overseeing the Missile Defense Agency, the Deputy Secretary of Defense has been responsible for providing policy, planning, and programming guidance to the Missile Defense Agency since the agency’s establishment in 2002. 
Further, as discussed in our report, the Missile Defense Executive Board is responsible for making recommendations to the Deputy Secretary of Defense on the Missile Defense Agency’s comprehensive acquisition strategy. We are sending electronic copies of this report to interested congressional committees; the Secretary of Defense; the Chairman, Joint Chiefs of Staff; the Director, Missile Defense Agency; and the Commander, U.S. Strategic Command. We will also make electronic copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 404-679-1816 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. During this review, we focused on assessing the Department of Defense’s (DOD) process for identifying, prioritizing, and addressing overall combatant command priorities in developing ballistic missile defense capabilities. To do so, we obtained and reviewed key documentation from U.S. Strategic Command relevant to how the combatant commands identify and prioritize their ballistic missile defense capability needs. The U.S. Strategic Command documentation that we obtained included the July 4, 2007, October 31, 2007, and February 29, 2008, versions of a draft U.S. Strategic Command instruction establishing the Warfighter Involvement Process, U.S. Strategic Command’s November 2003 Strategic Concept for Global Ballistic Missile Defense, and the command’s November 2007 Report to Congress on USSTRATCOM Warfighter Involvement Process. We also obtained and reviewed U.S. Strategic Command briefings on the evolution of the Warfighter Involvement Process, current features of the process, and efforts to improve the process. Additionally, we obtained and reviewed the 2006 and 2007 Prioritized Capabilities Lists to understand the commands’ prioritized capability needs and U.S. Strategic Command’s approach for preparing these lists. To further our knowledge, we obtained and reviewed minutes of Warfighter Involvement Process management and focus group meetings, including the minutes of the meeting where the 2007 Prioritized Capabilities List was approved before it was sent to the Missile Defense Agency. In addition to U.S. Strategic Command documentation, we also obtained written comments provided by U.S. Central Command, U.S. European Command, U.S. Northern Command, and U.S. Pacific Command to U.S. Strategic Command on the draft Warfighter Involvement Process instruction. We also obtained combatant command comments provided to U.S. Strategic Command to help develop the Prioritized Capabilities Lists. We also reviewed testimonies from the commanders of U.S. Central Command, U.S. European Command, U.S. Northern Command, U.S. Pacific Command, and U.S. Strategic Command to help us better understand each command’s specific ballistic missile defense capability needs. In order to gain the Missile Defense Agency’s perspective on how it is addressing combatant command priorities, we reviewed Missile Defense Agency guidance, plans, directives, briefings, and other documentation that identifies key steps, stakeholders, and factors that the Missile Defense Agency considers during its process for planning, designing, developing, and fielding ballistic missile defense capabilities. 
For example, we reviewed the Missile Defense Agency’s Integrated Program Policy, dated July 2005, Ballistic Missile Defense Integrated Program Policy Implementation Guide, dated June 2005, and System Engineering Plan, dated July 2006, in order to understand the extent to which the agency has documented how it addresses combatant command priorities in its decision making. We also reviewed the Missile Defense Agency’s 2006 Achievable Capabilities List, which was its response to the 2006 Prioritized Capabilities List, and examined Missile Defense Agency briefings, budget documents, and testimonies by the Director, Missile Defense Agency. We also obtained and reviewed briefings describing a 2007 Missile Defense Agency and U.S. Strategic Command study of how to make the Warfighter Involvement Process more effective, and reviewed the 2007 Achievable Capabilities List to identify changes in the Missile Defense Agency’s approach for addressing combatant command priorities. Additionally, we obtained and reviewed drafts of the agency’s directive and instruction for implementing the Warfighter Involvement Process. We also reviewed public law, presidential guidance, and DOD directives, memorandums, briefings, and other documentation that establishes DOD’s overall approach to developing missile defense capabilities. Such documentation included chapters 5 and 6 of Title 10 of the United States Code; National Security Presidential Directive 23 dated December 16, 2002; the Unified Command Plan dated May 2006; DOD Directive 5134.9, Subject: Missile Defense Agency, dated October 9, 2004; and other Secretary of Defense guidance outlining the Missile Defense Agency’s roles and responsibilities. We also obtained and reviewed the Missile Defense Executive Board’s charter, as well as agendas and minutes from board meetings held in 2007. In conducting our work, we contacted officials at the Office of the Secretary of Defense, Joint Staff, Missile Defense Agency, U.S. Strategic Command, U.S. Central Command, U.S. Joint Forces Command, U.S. Northern Command, U.S. Pacific Command, the military services, and other organizations. Table 2 provides information on the organizations and offices contacted during our review. We conducted this performance audit from August 2007 to May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Prioritized Capabilities List provided to the Missile Defense Agency in 2007 includes four categories of desired capabilities: weapons, sensors, battle management, and cross-functional capabilities. Each of the desired capabilities on the list is identified by a title and short description, and includes the following information: a listing of the overall priority ranking of the capability, and whether the capability was ranked as one of the five highest priorities by one or more of the combatant commands; the rationale for the capability; the mission effect if the capability is not satisfied; a summary of the applicable phases of flight, threats, and regions; and the key attributes, measures, and criteria for satisfying the capability. Additionally, the classified U.S. 
Strategic Command report that conveys the 2007 Prioritized Capabilities List to the Missile Defense Agency included a table that lists the combatant commands’ consolidated capability needs in order from highest to lowest priority. This table also identifies the scores that each of the participating combatant commands assigned to these capabilities. In preparing the 2007 Prioritized Capabilities List, U.S. Strategic Command updated the 26 capabilities that had been identified and provided to the Missile Defense Agency in the first list in 2006. These updates and revisions were intended to eliminate redundancies and more clearly communicate the commands’ intent. For example, the 2007 Prioritized Capabilities List included a 27th capability capturing the need for effective communication standards, which previously had been embedded into multiple capabilities on the 2006 list. Short descriptions of the capabilities on the 2007 Prioritized Capabilities List are provided in table 3. In addition to the contact named above, Gwendolyn R. Jaffe, Assistant Director; Grace A. Coleman; Nicolaas C. Cornelisse; Ronald La Due Lake; Jennifer E. Neer; Kevin L. O’Neill, Analyst in Charge; and Karen D. Thornton made significant contributions to this report.
In 2002, the Department of Defense (DOD) established the Missile Defense Agency to develop and deploy globally integrated ballistic missile defenses to protect the U.S. homeland, deployed forces, friends, and allies. To deliver an operational capability as quickly as possible, the agency was not subject to traditional DOD requirements and oversight processes. While directed to work closely with the combatant commands, the agency was not required to build missile defenses to meet specific operational requirements. GAO was asked to assess the extent to which DOD has developed a process that identifies, prioritizes, and addresses overall combatant command priorities as the Missile Defense Agency develops ballistic missile defense capabilities. To conduct its work, GAO reviewed relevant documents and visited several combatant commands, the Missile Defense Agency, Joint Staff, and other DOD organizations. DOD has taken some steps to address the combatant commands' ballistic missile defense needs, but it has not yet established an effective process to identify, prioritize, and address these needs, or to provide a DOD-wide perspective on which priorities are the most important. U.S. Strategic Command and the Missile Defense Agency created the Warfighter Involvement Process in 2005. Although the process is still evolving, the Missile Defense Agency has addressed some combatant command capability needs. However, even as they move forward with the process, U.S. Strategic Command and the Missile Defense Agency have not yet overcome three interrelated limitations to the process's effectiveness. First, U.S. Strategic Command and the Missile Defense Agency have not put into place the approved and complete guidance needed to implement the Warfighter Involvement Process, which would clearly define each organization's respective roles and responsibilities for identifying, prioritizing, and addressing the combatant commands' capability needs. This has left the combatant commands without an agreed-upon mechanism for influencing agency investments. Second, the Missile Defense Agency has lacked clear information about how to best address the commands' needs, and until recently has not clearly communicated how it has adjusted its investments in response to these needs. Without such information, the commands have not been able to provide feedback to the Missile Defense Agency about how well the agency has addressed their priorities in its funding plans. Third, senior civilian DOD leadership has not been involved in adjudicating potential differences among the commands' priorities. Instead, U.S. Strategic Command has consolidated and submitted the commands' prioritized capability needs to the Missile Defense Agency without first vetting these priorities through senior civilian DOD officials with departmentwide responsibilities for assessing risk and allocating resources. As a result, the Missile Defense Agency has not benefited from receiving a broader, departmentwide perspective on which of the commands' needs were the most significant. DOD has established a new board to advise senior Office of the Secretary of Defense officials on ballistic missile defense priorities; however, whether this board will be involved in reviewing or adjudicating differences among the commands' priorities is unclear. Missile Defense Agency and U.S. Strategic Command officials stated that the Warfighter Involvement Process is evolving. 
However, unless and until they overcome these interrelated limitations, DOD remains at risk of not effectively providing the combatant commands with the missile defense capabilities they need.
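The consolidation limitation discussed above can be illustrated with a short sketch. The capability names, command labels, and the simple averaging rule below are hypothetical assumptions for illustration only; the report does not specify how U.S. Strategic Command actually weights or combines the commands' scores when it prepares the Prioritized Capabilities List. The sketch simply shows how a consolidated ranking can push a capability that is a single command's top priority toward the bottom of the combined list.

```python
# Illustrative sketch only: hypothetical capabilities, commands, and scores.
# It shows how averaging per-command scores into one consolidated ranking can
# push a capability that is a single command's top priority toward the bottom
# of the combined list.
from statistics import mean

# Hypothetical scores (higher = more important) assigned by four commands.
scores = {
    "Capability A": {"CMD-1": 9, "CMD-2": 8, "CMD-3": 7, "CMD-4": 8},
    "Capability B": {"CMD-1": 7, "CMD-2": 7, "CMD-3": 6, "CMD-4": 6},
    "Capability C": {"CMD-1": 2, "CMD-2": 3, "CMD-3": 10, "CMD-4": 2},  # CMD-3's top need
}

# One possible consolidation rule: rank capabilities by average score across commands.
consolidated = sorted(scores, key=lambda cap: mean(scores[cap].values()), reverse=True)

# Identify each command's single highest-scored capability.
commands = sorted({cmd for per_cmd in scores.values() for cmd in per_cmd})
top_pick = {cmd: max(scores, key=lambda cap: scores[cap][cmd]) for cmd in commands}

for rank, cap in enumerate(consolidated, start=1):
    champions = [cmd for cmd, pick in top_pick.items() if pick == cap]
    note = f"  <- top priority of {', '.join(champions)}" if champions else ""
    print(f"{rank}. {cap} (average score {mean(scores[cap].values()):.1f}){note}")
```

In this hypothetical output, Capability C ranks last overall even though it is CMD-3's highest-scored need, which is the kind of effect a departmentwide review would be positioned to catch.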
In 2004, the Coast Guard and the Minerals Management Service—a component of Interior that was subsequently reorganized into the Bureau of Ocean Energy Management, Regulation, and Enforcement (BOEMRE), and, most recently, the Bureau of Ocean Energy Management (BOEM) and the Bureau of Safety and Environmental Enforcement (BSEE)—signed a memorandum of understanding (MOU) to delineate inspection responsibilities between the agencies. Per the MOU, the Coast Guard is responsible for ensuring (1) the safety of life and property on offshore energy facilities and vessels engaged in OCS activities; (2) workplace safety and health, including enforcement of requirements related to personnel, workplace activities, and conditions and equipment on the OCS; and (3) security of offshore energy facilities. The MOU assigns Interior responsibility for, among other things, managing the nation’s oil, natural gas, and other mineral resources on the OCS in a safe and environmentally sound manner. In addition to delineating inspection responsibilities between the Coast Guard and Interior, the MOU is further divided into five memorandums of agreement, one of which addresses the agencies’ responsibilities where jurisdiction overlaps. In accordance with this memorandum of agreement, the Coast Guard is the lead agency with responsibility for the inspection and testing of all marine and lifesaving equipment onboard fixed and floating offshore energy facilities and MODUs, and Interior is the lead agency with responsibility for the inspection and testing of all production and drilling equipment on these facilities. The Coast Guard, however, has authorized Interior (specifically, what was then the Minerals Management Service) to perform inspections of fixed offshore energy facilities and to enforce Coast Guard regulations applicable to such facilities. For example, the Coast Guard is to conduct an initial inspection of each new fixed offshore energy facility to determine whether it is compliant with Coast Guard safety regulations. However, after the initial inspection, the Coast Guard has authorized Interior’s inspectors to conduct such safety inspections on behalf of the Coast Guard and enforce Coast Guard regulations applicable to those facilities as a means to avoid duplicating functions, reduce federal costs, and increase oversight of Coast Guard compliance without increasing the frequency of inspections. Therefore, with respect to fixed offshore energy facilities, the only inspections for which the Coast Guard is exclusively responsible beyond the initial safety inspection are the annual security inspections, to the extent that these facilities meet the applicable criteria, as described below. The Coast Guard continues to have responsibility for conducting inspections and enforcing its regulations on floating offshore energy facilities and MODUs. In accordance with federal laws, agreements between the Coast Guard and Interior described above, and Coast Guard guidance, the Coast Guard is responsible for conducting annual security inspections of offshore energy facilities that meet or exceed any one of three thresholds for production or personnel—(1) producing greater than 100,000 barrels of oil a day, (2) producing more than 200 million cubic feet of natural gas per day, or (3) hosting more than 150 persons for 12 hours or more in each 24-hour period continuously for 30 days or more. 
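The three applicability thresholds listed above amount to a simple rule: a facility is subject to annual security inspections if it meets or exceeds any one of them. The following is a minimal sketch of that check; the function name, parameter names, and example figures are illustrative assumptions rather than Coast Guard terminology or data.

```python
# Minimal sketch of the three applicability thresholds described above.
# Parameter names and example facility figures are hypothetical.

def requires_annual_security_inspection(barrels_oil_per_day: float,
                                        million_cubic_feet_gas_per_day: float,
                                        persons_hosted: int,
                                        hosted_12hrs_daily_for_30_days: bool) -> bool:
    """Return True if the facility meets or exceeds any one of the three thresholds."""
    return (
        barrels_oil_per_day > 100_000                  # more than 100,000 barrels of oil a day
        or million_cubic_feet_gas_per_day > 200        # more than 200 million cubic feet of gas a day
        or (persons_hosted > 150                       # more than 150 persons for 12 or more hours
            and hosted_12hrs_daily_for_30_days)        # in each 24-hour period for 30 days or more
    )

# Hypothetical examples
print(requires_annual_security_inspection(120_000, 50, 40, False))   # True: oil threshold met
print(requires_annual_security_inspection(20_000, 80, 200, True))    # True: personnel threshold met
print(requires_annual_security_inspection(20_000, 80, 200, False))   # False: personnel not continuous
```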
We refer to the 57 offshore energy facilities that met or exceeded these thresholds at some point from 2008 through 2010—and were therefore subject to security inspections during those years—as “OCS facilities.” Of these 57 OCS facilities, all of which are located in the Gulf of Mexico, 41 are fixed OCS facilities and 16 are floating OCS facilities. Staff at Coast Guard headquarters oversee and develop policies and procedures for field staff to follow when conducting security inspections of OCS facilities and to assist affected owners and operators so that they can comply with maritime security regulations. Among other things, Coast Guard marine inspectors in the field units conduct security inspections of OCS facilities by taking helicopter rides to facilities that can range up to 200 miles offshore. Upon arriving, inspectors are to conduct on-site interviews with facility security officers and observe operations to verify whether required security measures are in place. As of August 2011, the Coast Guard had about 12 active marine inspectors who were qualified to conduct security inspections of OCS facilities. These inspectors work out of six field units near the Gulf of Mexico—Mobile, Alabama; Morgan City, Louisiana; New Orleans, Louisiana; Corpus Christi, Texas; Galveston, Texas; and Port Arthur, Texas. In line with the responsibilities set forth in the MOU discussed above and to ensure compliance with applicable laws and regulations, Interior has an offshore oil and natural gas inspection program intended to verify that the operator complies with Interior regulations and requirements at a well site. Interior’s offshore oil and natural gas oversight includes inspections of production activities including drilling, regular production activities, meters, abandoned platforms, and pipelines, among other things. Also in accordance with the MOU between the two agencies, Interior conducts both “full” and “limited” inspections of fixed offshore energy facilities on behalf of the Coast Guard. During the full inspections of staffed, fixed offshore energy facilities, Interior’s inspectors are to review all applicable Coast Guard requirements, which include 27 safety items. During limited inspections, which are to be conducted on all fixed offshore energy facilities in the course of conducting inspections at those facilities for Interior’s purposes, Interior’s inspectors are to review less than half of the safety items. During these inspections, Interior’s inspectors are to, among other things, check for safety items such as the presence of equipment designed to prevent tripping, slipping, or drowning. Coast Guard OCS facility guidance provides that Coast Guard personnel are to conduct security inspections of OCS facilities annually, but our analysis of inspections data shows that the Coast Guard has not conducted such inspections for most of these OCS facilities. For example, the Coast Guard conducted about one-third of the required annual inspections of OCS facilities from 2008 through 2010 (see table 1). Specifically, our analysis of Coast Guard inspections data shows that in 2008 the Coast Guard inspected 7 of 56 OCS facilities, which was 13 percent of the required annual inspections. More recently, in 2010, the Coast Guard inspected 23 of 51 (45 percent) OCS facilities that the Coast Guard should have inspected. Our analysis of Coast Guard inspections data shows that the Coast Guard generally inspected a greater percentage of floating OCS facilities than fixed OCS facilities (see table 2). 
For example, from 2008 through 2010, the Coast Guard conducted annual security inspections of 54 percent of floating OCS facilities compared to 24 percent of fixed OCS facilities. During our interviews with Coast Guard marine inspectors and their supervisors, we learned that some field units did not know that they were responsible for conducting security inspections of these fixed facilities, approximately one-third of which are not staffed because operations are automated. For example, marine inspectors in the Coast Guard field unit that oversees more than half of the OCS facilities stated that they had only recently learned that they were responsible for conducting security inspections of fixed OCS facilities. These marine inspectors stated that they thought that security inspections of the fixed OCS facilities within their area of responsibility were carried out by another field unit and that they had only been conducting annual security inspections of the floating OCS facilities. Further, other Coast Guard officials stated that it is easier to arrange for security inspections of floating OCS facilities because marine inspectors visit those facilities more frequently for other types of inspections, such as hull or safety inspections, whereas for fixed OCS facilities, the Coast Guard is required to conduct an initial safety inspection of each new facility and then visits the facility only once a year for the annual security inspection. The Coast Guard does not have procedures in place to help ensure that its field units conduct annual security inspections of OCS facilities in accordance with its guidance. Standards for Internal Control in the Federal Government state that internal controls should include control activities, such as policies, procedures, and mechanisms that help ensure management directives are carried out. However, the Coast Guard does not have such control activities in place. For example, the Coast Guard’s OCS facility guidance does not describe specific procedures for the way in which Coast Guard staff should track whether annual security inspections have been conducted. Further, Coast Guard district officials and most local field unit supervisors and marine inspectors we spoke with do not maintain any kind of tool, such as a spreadsheet or calendar, to remind them when annual security inspections of OCS facilities are due. Coast Guard officials from five of the six Coast Guard field units that conduct annual security inspections of OCS facilities told us that they do not maintain a spreadsheet or other management tool to track whether annual security inspections had been conducted. For example, at three of these locations, Coast Guard officials told us they rely on owners and operators to inform them when inspections are due rather than independently tracking when annual inspections are due. Because it has no procedures or control activities in place to manage the offshore security inspection program, the Coast Guard is not complying with its established maritime security requirements for most of the OCS facilities. Without conducting annual inspections of OCS facilities, the Coast Guard may not be meeting one of its stated goals of reducing the risk and mitigating the potential results of an act that could threaten the security of personnel, the OCS facility, the environment, and the public. 
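The gap described above is essentially the absence of a simple due-date tracker. The sketch below shows one way such a tool could work, flagging facilities whose annual security inspection is overdue and reporting a completion rate; the facility names, dates, and fixed one-year cycle are hypothetical assumptions for illustration, not Coast Guard data or policy.

```python
# Minimal sketch of a tracking tool of the kind the report says field units lack:
# it flags OCS facilities whose annual security inspection is overdue and reports
# a simple completion rate. Facility names and dates are hypothetical.
from datetime import date, timedelta

last_inspected = {
    "Facility Alpha (fixed)":    date(2010, 3, 15),
    "Facility Bravo (floating)": date(2011, 6, 2),
    "Facility Charlie (fixed)":  None,  # no inspection on record
}

today = date(2011, 8, 1)
due_within = timedelta(days=365)  # annual inspection cycle

overdue = [name for name, last in last_inspected.items()
           if last is None or today - last > due_within]

current = len(last_inspected) - len(overdue)
print(f"Inspections current: {current} of {len(last_inspected)} "
      f"({100 * current / len(last_inspected):.0f} percent)")
for name in overdue:
    print("OVERDUE:", name)
```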
In our October 2011 report, we made a recommendation, among others, that the Coast Guard develop policies and procedures to monitor and track annual security inspections for OCS facilities to better ensure that such inspections are consistently conducted. The Coast Guard concurred with this recommendation and stated that it is planning to update its OCS facility policy guidance for field units to monitor and track annual security inspections for OCS facilities to better ensure that such inspections are consistently conducted. Interior’s inspection program has not consistently met its internal targets for production inspections, as we have reported in recent years. In 2008, we reported that Interior had not met its targets for conducting production inspections—examining metering equipment used to measure oil and natural gas production. Interior officials responsible for conducting production inspections in the Gulf of Mexico told us they completed about half of the required inspections in 2007, raising uncertainty about the accuracy of oil and natural gas measurement. In March 2010, we found that Interior had not routinely met its oil and natural gas production inspection goals. Specifically, we reported that Interior met its inspection goals only once—in 2008—during fiscal years 2004 through 2008, for four district offices we reviewed in the Gulf of Mexico and the Pacific. Interior inspection staff told us that, during these years, there was a shortage of inspectors and that inspections were delayed because of cleanup related to Hurricanes Katrina and Rita in 2005. We are unable to present data for these years because, according to Interior officials, district offices often did not correctly record production inspections on their inspection forms; since then, Interior has instituted a policy to ensure that inspections are recorded correctly. Also in March 2010, we reported that Interior had encountered persistent human capital challenges in its inspection programs designed to ensure accurate measurement of oil and natural gas from federal lands and waters. In particular, we reported that Interior was hindered by difficulties in hiring, training, and retaining key inspections staff. We reported that this difficulty in attracting and retaining key staff contributed to challenges in meeting Interior’s inspection responsibilities, thereby reducing its oversight of oil and gas development on federal leases and potentially placing the environment at risk. In our report, we made a number of recommendations to Interior to address these issues, some of which Interior is already in the process of implementing. Although Interior has not consistently met its internal targets for production inspections, it has exceeded its target for Coast Guard compliance inspections. For fiscal year 2010, the most recent year reported, Interior’s goal was to conduct full inspections covering all applicable Coast Guard regulations on 10 percent of the estimated 1,000 staffed, fixed offshore energy facilities. For fiscal year 2010, Interior reported that it more than met this goal by conducting such inspections on 169 of the 1,021 staffed, fixed offshore energy facilities—about 17 percent. Further, Interior reported that it has met internal targets for these inspections for the previous 5 years. 
In addition, Interior reported that in fiscal year 2010 its inspectors also conducted limited inspections for compliance with Coast Guard regulations on all other fixed offshore energy facilities in the course of inspecting these facilities for Interior’s own purposes. Interior has recently been reorganizing its offshore inspection program, which has resulted in some uncertainty regarding its inspection capabilities. After the Deepwater Horizon incident in April 2010, Interior initiated a reorganization of its bureau responsible for overseeing offshore oil and natural gas activities. Specifically, in May 2010, Interior reorganized its Minerals Management Service (MMS)—the bureau previously tasked with overseeing offshore oil and natural gas activities—and created the Bureau of Ocean Energy Management, Regulation, and Enforcement (BOEMRE). On October 1, 2011, Interior was further reorganized by dividing BOEMRE into two separate bureaus: the Bureau of Ocean Energy Management (BOEM), which oversees leasing and resource management, and the Bureau of Safety and Environmental Enforcement (BSEE), which is responsible for issuing oil and natural gas drilling permits and conducting inspections. We have reported that Interior could face challenges during its reorganization. In June 2011, we testified that Interior’s reorganization of activities previously overseen by MMS will require time and resources and may pose new challenges. We stated that while this reorganization may eventually lead to more effective operations, organizational transformations are not simple endeavors. We also expressed concern with Interior’s ability to undertake this reorganization while meeting its oil and natural gas oversight responsibilities. We believe that these concerns are still valid today. While Interior was reorganizing its oversight responsibilities, it was also reforming its inspection program and, according to Interior, these reforms have created uncertainty regarding future oversight inspections. As part of the inspections program reform, Interior plans to hire additional staff with expertise in oil and natural gas inspections and engineering and develop new training programs for inspectors and engineers involved in its safety compliance and enforcement programs. Specifically, Interior reported in February 2011 that it was seeking to hire additional inspectors for its offshore inspection program to meet its needs during fiscal years 2011 and 2012. Interior reported that it had 62 inspectors, which it stated was not sufficient to provide the level of oversight needed for offshore oil and natural gas production. Interior has also requested additional funding to implement these changes. Further, Interior has stated that its new inspection program may involve inspectors witnessing more high-risk activities and conducting more in-depth examinations of some aspects of Gulf oil and natural gas production, so inspections may take more time in the future and be more difficult to fold into existing inspection schedules. As a result, Interior reported that it was difficult to determine how many inspections would be conducted in fiscal year 2012. While the Deepwater Horizon incident was not the result of a breakdown in security procedures or the result of a terrorist attack, the loss of the Deepwater Horizon, a foreign-flagged MODU, and the resulting oil spill have raised concerns about U.S. oversight over MODUs that are registered to foreign countries. 
In this regard, various circumstances govern the extent to which the Coast Guard oversees the security of MODUs. In general, MODUs operating on the OCS implement security measures consistent with applicable security requirements—specifically, they implement requirements in accordance with U.S. security regulations and the International Maritime Organization’s International Ship and Port Facility Security (ISPS) Code. Depending on the particular characteristics and operations of the MODU—for example, its method of propulsion or its personnel levels—it may be subject to Coast Guard security regulations governing vessels (33 C.F.R. part 104) or OCS facilities (33 C.F.R. part 106). MODUs will fall under applicable Coast Guard regulations if (1) they are self-propelled—that is, they are capable of relocating themselves, as opposed to other types that require another vessel to tow them—in which case they are subject to the ISPS Code and 33 C.F.R. part 104, or (2) they meet production or personnel levels specified in 33 C.F.R. part 106. Whereas the Coast Guard may physically inspect a U.S.-flagged MODU to ensure compliance with applicable security requirements, the Coast Guard’s oversight of foreign-flagged, self-propelled MODUs, such as the Deepwater Horizon, is more limited. In the case of self-propelled, foreign-flagged MODUs, the Coast Guard will assess compliance with part 104 by reviewing a MODU’s International Ship Security Certificate, which certifies compliance with the ISPS Code. While Coast Guard inspectors may also observe security measures and ask security-related questions of personnel, absent consent from the flag state, the inspectors generally do not have authority to review a self-propelled, foreign-flagged MODU’s vessel security plan. In all other cases where MODUs are subject to Coast Guard security requirements, the Coast Guard assesses compliance with part 104 or part 106 through annual security inspections. Figure 1 illustrates the types of MODUs, the applicable security requirements, and the means by which the Coast Guard assesses compliance. The Coast Guard may not be fully aware of the security measures implemented by self-propelled, foreign-flagged MODUs because of its limited oversight of such MODUs. The Coast Guard and BOEMRE, BSEE’s predecessor, conducted a joint investigation into the Deepwater Horizon incident, and the Coast Guard’s report from the investigation emphasized the need to strengthen the system of Coast Guard oversight of foreign-flagged MODUs. The Coast Guard’s report from the joint investigation stated that the Coast Guard’s regulatory scheme for overseeing the safety of foreign-flagged MODUs is insufficient because it defers heavily to the flag state to ensure safety. While the investigation focused on issues that were not related to security, such as safety, these findings may have implications for security oversight because the Coast Guard also relies on the flag state to carry out responsibilities for assessing compliance with security requirements. The joint investigation team recommended, among other things, that the Commandant of the Coast Guard develop more comprehensive inspection standards for foreign-flagged MODUs operating on the OCS. The Commandant concurred with this recommendation and has chartered an Outer Continental Shelf Activities Matrix Team, which has been tasked with providing recommendations on the establishment and implementation of an enhanced oversight regime for foreign-flagged MODUs on the U.S. OCS. 
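The oversight rules just described reduce to two small decisions: which security regime applies to a MODU, and how the Coast Guard assesses compliance. A minimal sketch follows; the function names, parameters, and simplified return strings are illustrative assumptions based only on the description above, not Coast Guard terminology.

```python
# Minimal sketch of the MODU security oversight rules described above.
# Field names and return strings are illustrative only.

def applicable_security_regime(self_propelled: bool, meets_part_106_levels: bool) -> str:
    """Which security requirements apply to a MODU operating on the OCS."""
    if self_propelled:
        return "ISPS Code and 33 C.F.R. part 104"
    if meets_part_106_levels:
        return "33 C.F.R. part 106"
    return "not subject to these Coast Guard security regulations"

def oversight_mechanism(self_propelled: bool, foreign_flagged: bool) -> str:
    """How the Coast Guard assesses a MODU's compliance with those requirements."""
    if self_propelled and foreign_flagged:
        # e.g., the Deepwater Horizon: certificate review rather than a full inspection
        return "review of the International Ship Security Certificate"
    return "annual Coast Guard security inspection"

# Hypothetical example resembling the Deepwater Horizon (foreign-flagged, self-propelled)
print(applicable_security_regime(self_propelled=True, meets_part_106_levels=False))
print(oversight_mechanism(self_propelled=True, foreign_flagged=True))
```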
According to Coast Guard officials, it is likely that MODUs operating in deepwater would be subject to security requirements because the industry is increasingly using dynamically positioned MODUs that are able to maintain position without being anchored to the seabed, and because such MODUs are self-propelled, they would be subject to the ISPS Code and 33 C.F.R. part 104. Additionally, the Coast Guard is conducting a study designed to help determine whether additional actions could better ensure the security of offshore energy infrastructure in the Gulf of Mexico, including MODUs. This study is expected to be completed in the fall of 2011. Gaining a fuller understanding of the security risks associated with MODUs could better inform Coast Guard decisions and potentially improve the security of these facilities. Further, the Coast Guard has implemented a new risk-based oversight policy for MODUs, including foreign-flagged MODUs, to address safety and environmental protection issues. This policy includes a targeting matrix to assist inspectors in determining whether a foreign-flagged MODU may require increased oversight, based on inspection history or other related factors, through more frequent examinations by the Coast Guard. Additionally, the policy calls on Coast Guard field units to conduct random, unannounced examinations of a portion of all MODUs in their areas of responsibility. Although this policy does not directly address security, increased oversight resulting from this new policy could help mitigate some of the ways in which a MODU might be at risk of a terrorist attack. Chairman LoBiondo, Ranking Member Larsen, and Members of the Subcommittee, this completes our prepared statement. We would be happy to respond to any questions you may have at this time. For questions about this statement, please contact Stephen L. Caldwell at (202) 512-9610 or [email protected], or Frank Rusco at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the contacts named above, key contributors to this testimony were Christopher Conrad, Assistant Director; Jon Ludwigson, Assistant Director; Lee Carroll and Erin O’Brien, analysts-in-charge; and Alana Finley. Thomas Lombardi provided legal support and Lara Miklozek provided assistance in testimony preparation. Maritime Security: Coast Guard Should Conduct Required Inspections of Offshore Energy Infrastructure. GAO-12-37. Washington, D.C.: October 28, 2011. Deepwater Horizon Oil Spill: Actions Needed to Reduce Evolving but Uncertain Federal Financial Risks. GAO-12-86. Washington, D.C.: October 24, 2011. Maritime Security: Progress Made, but Further Actions Needed to Secure the Maritime Energy Supply. GAO-11-883T. Washington, D.C.: August 24, 2011. Oil and Gas: Interior’s Restructuring Challenges in the Aftermath of the Gulf Oil Spill. GAO-11-734T. Washington, D.C.: June 2, 2011. Maritime Security: DHS Progress and Challenges in Key Areas of Port Security. GAO-10-940T. Washington, D.C.: July 21, 2010. Oil and Gas Management: Interior’s Oil and Gas Production Verification Efforts Do Not Provide Reasonable Assurance of Accurate Measurement of Production Volumes. GAO-10-313. Washington, D.C.: March 15, 2010. Mineral Revenues: Data Management Problems and Reliance on Self-Reported Data for Compliance Efforts Put MMS Royalty Collections at Risk. GAO-08-893R. Washington, D.C.: September 12, 2008. 
Maritime Security: Coast Guard Inspections Identify and Correct Facility Deficiencies, but More Analysis Needed of Program’s Staffing, Practices, and Data. GAO-08-12. Washington, D.C.: February 14, 2008. Maritime Security: Federal Efforts Needed to Address Challenges in Preventing and Responding to Terrorist Attacks on Energy Commodity Tankers. GAO-08-141. Washington, D.C.: December 10, 2007. Maritime Security: The SAFE Port Act: Status and Implementation One Year Later. GAO-08-126T. Washington, D.C.: October 30, 2007. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The April 2010 explosion of the Deepwater Horizon, a mobile offshore drilling unit (MODU), showed that the consequences of an incident on an offshore energy facility can be significant. A key way to ensure that offshore energy facilities are meeting applicable security, safety, and production standards is through conducting periodic inspections of the facilities. The Coast Guard and the Department of the Interior (Interior) share oversight responsibility for offshore energy facilities. The Coast Guard is to conduct security inspections of such facilities, whereas based on an agreement between the two agencies, Interior is to conduct safety compliance inspections on some offshore facilities on behalf of the Coast Guard as well as its own inspections to verify production. This testimony addresses: (1) the extent to which the Coast Guard has conducted security inspections of offshore energy facilities, and what additional actions are needed; (2) the extent to which Interior has conducted inspections of offshore energy facilities, including those on behalf of the Coast Guard, and challenges it faces in conducting such inspections; and (3) the Coast Guard's oversight authority of MODUs. This testimony is based on GAO products issued from September 2008 through October 2011. The Coast Guard conducted about one-third of its required annual security inspections of offshore energy facilities from 2008 through 2010 and does not have procedures in place to help ensure that its field units conduct such inspections in accordance with its guidance. The Coast Guard's guidance does not describe specific procedures for the way in which Coast Guard staff should track whether annual inspections have been conducted. For example, Coast Guard field unit supervisors and marine inspectors GAO interviewed from five of the six Coast Guard field units that are to conduct annual security inspections said that they do not maintain any tool to track whether such inspections had been conducted. GAO recommended in October 2011 that, among other things, the Coast Guard develop policies and procedures to monitor and track annual security inspections. The Coast Guard concurred and stated that it is planning to update its guidance for field units to address these issues. Interior's inspection program has not consistently met its internal targets for production inspections, and faces human capital and reorganization challenges, but has met its limited target for compliance inspections conducted for the Coast Guard. In March 2010, GAO found that for four district offices it reviewed, Interior met its production inspection goals only once during fiscal years 2004 through 2008. Further, GAO reported that difficulties in hiring, training, and retaining key staff had contributed to challenges in meeting its inspections goals. However, in recent years, Interior reported that it met its 10 percent target to conduct compliance inspections of staffed, fixed offshore energy facilities on behalf of the Coast Guard. In fiscal year 2010, Interior reported that it exceeded its target, conducting such inspections on 169 of the 1,021 staffed, fixed offshore energy facilities, and that it has met this target for such inspections for the previous 5 years. In May 2010, Interior reorganized its bureau responsible for overseeing offshore energy activities. 
In June 2011, GAO reported that while this reorganization may eventually lead to more effective operations, GAO is concerned with Interior's ability to undertake this reorganization while meeting its oversight responsibilities. Among other things, Interior plans to hire additional staff with expertise in inspections and engineering. Amidst these changes, Interior reported that it was difficult to determine how many inspections it would conduct in fiscal year 2012. The Coast Guard has limited authority regarding the security of MODUs registered to foreign countries, such as the Deepwater Horizon. MODUs are subject to Coast Guard security regulations if (1) they are self-propelled or (2) they meet specific production or personnel levels. Whereas the Coast Guard may physically inspect a U.S.-flagged MODU to ensure compliance with applicable security requirements, the Coast Guard's oversight of foreign-flagged, self-propelled MODUs, such as the Deepwater Horizon, is more limited. The Coast Guard is conducting a study designed to help determine whether additional actions could better ensure the security of offshore energy facilities, including MODUs. Further, the Coast Guard has implemented a risk-based oversight policy for all MODUs to address safety and environmental protection issues. Although this policy does not directly address security, increased oversight resulting from this policy could help mitigate the risk of a terrorist attack on a MODU. GAO has previously recommended that the Coast Guard develop policies and procedures to monitor and track annual security inspections for offshore energy facilities and that Interior address its human capital challenges. The Coast Guard and Interior agreed.
In the life sciences, biosafety is a combination of the containment principles, technologies, practices, and procedures that are implemented to prevent unintentional exposure to pathogens and toxins or their accidental release. In most countries, infectious agents are classified by risk group. Agent risk group classification emphasizes the potential risk and consequences of (1) exposure and infection for the laboratory worker or (2) the release of the agent into the environment with subsequent exposure of the general population. Risk group classification considers aspects of a given pathogen, in particular its infectivity; mode and ease of transmission; pathogenicity and virulence (including induced morbidity and case-fatality rate); susceptibility to physical or chemical agents; and the availability or absence of countermeasures, including vaccines, therapeutic remedies, and cures. Depending on the risk group classification, research on infectious agents is to be performed in facilities offering varying levels of containment, applying different types of primary containment protection (for example, biological safety cabinets), and ensuring that appropriate practices and procedures are in place. In the United States, laboratories working with human pathogens are classified by the type of agents used; activities being conducted; and the risks those agents pose to laboratory personnel, the environment, and the community. The Department of Health and Human Services (HHS) has developed and provided biosafety guidelines outlined in the manual titled Biosafety in Microbiological and Biomedical Laboratories (BMBL). This manual provides guidelines for work at four biosafety levels, with BSL-4 being the highest. The NIH Guidelines for Research Involving Recombinant DNA Molecules (NIH rDNA Guidelines) similarly describe four levels of biocontainment that closely parallel those described in the BMBL. The NIH rDNA Guidelines apply to all research involving recombinant DNA at institutions that receive any NIH funding for such research. Biosafety level designations, as defined in the BMBL, refer to levels of containment rather than categories of facilities. The level of containment required could change from day to day depending on the risk of the work being conducted with particular agents. For example, BSL-2 practices are recommended for diagnostic work with B. anthracis, but BSL-3 practices are recommended for higher-risk work with B. anthracis, such as aerosol challenges. Table 1 shows the different biosafety levels specified in the guidelines for laboratories working with human pathogens. The levels refer to a combination of laboratory practices and procedures, safety equipment, and facilities that are recommended for laboratories that conduct research on these pathogenic agents and toxins. These laboratories are to be designed, constructed, and operated to (1) prevent accidental release of infectious or hazardous agents within the laboratory and (2) protect laboratory workers and the environment external to the laboratory, including the community, from exposure to the agents. Work in BSL-3 laboratories involves agents that may cause serious and potentially lethal infection. In some cases, vaccines or effective treatments are available. Types of agents that are typically handled in BSL-3 laboratories include B. 
anthracis (which causes anthrax), West Nile virus, Coxiella burnetii (which causes Q fever), Francisella tularensis (which causes tularemia), and highly pathogenic avian influenza virus. Work in BSL-4 laboratories involves exotic agents that pose a high individual risk of life-threatening disease or aerosol transmission or related agents with unknown risks of transmission. Agents typically handled in BSL-4 laboratories include the Ebola virus, Marburg virus, and Variola major virus. Just as laboratories working with human pathogens are classified by BSLs 1-4, laboratories working with naturally infected vertebrate animals are classified by animal biosafety levels (ABSL) 1-4. The four ABSLs describe facilities and practices applicable to work with animals infected with agents assigned to biosafety levels 1-4, respectively. The recommendations describe four combinations of practices, procedures, safety equipment, and facilities for experiments with animals involved in infectious disease research and other studies that may require containment. Table 2 shows the different ABSLs specified in the guidelines for laboratories working with vertebrate animals. According to the BMBL, risk assessment and management guidelines for agriculture differ from human public health standards. Risk management for agricultural research is based on the potential economic impact of animal and plant morbidity and mortality, and the trade implications of disease. Worker protection is important, but greater emphasis is placed on reducing the risk of the agent escaping into the environment. Biosafety level-3 Agriculture (BSL-3Ag) is unique to agriculture because of the necessity to protect the environment from a high-consequence pathogen in a situation where studies are conducted employing large agricultural animals or other similar situations in which the facility barriers serve as primary, rather than secondary, containment. BSL-3Ag facilities are specially designed, constructed, and operated at a unique containment level for research involving certain biological agents in large animal species. BSL-3Ag facilities are specifically designed to protect the environment by including almost all of the features ordinarily used for BSL-4 facilities as enhancements. All BSL-3Ag containment spaces must be designed, constructed, and certified as primary containment barriers. The Department of Agriculture’s Animal and Plant Health Inspection Service (APHIS) may require enhancements beyond BSL-3/ABSL-3 when working in the laboratory or vivarium with certain veterinary agents of concern. The NIH rDNA Guidelines provide containment standards for research involving rDNA and animals that are of sizes or have growth requirements that preclude the use of laboratory containment. Currently, the BMBL does not provide any comparable classification levels for laboratories working with plant pathogens. Many different federal agencies are involved with BSL-3 and BSL-4 laboratories in the United States in various capacities: they may be users, owners, regulators, or funding sources. Examples include the following: The Centers for Disease Control and Prevention (CDC) has its own high-containment laboratories. The Division of Select Agents and Toxins (DSAT), located within the Coordinating Office for Terrorism Preparedness and Emergency Response at CDC, regulates federal, state, academic, commercial, and private laboratories throughout the United States that possess, use, or transfer select agents. 
CDC also funds some laboratory activities carried out in state public health laboratories, commonly referred to as the Laboratory Response Network (LRN). The Department of Agriculture (USDA) has its own laboratories, and APHIS regulates laboratories working with select agents and toxins posing a risk to animal and plant health or animal and plant products. The National Institutes of Health (NIH), working through its various institutes, funds biomedical research, some of which requires high-containment laboratories. NIH has containment and biosafety requirements that apply to this and other research that it funds when the research uses recombinant deoxyribonucleic acid (rDNA) molecules. The NIH rDNA Guidelines provide greenhouse containment standards for rDNA-containing plants, as well as plant-associated microorganisms and small animals. NIH has its own high-containment laboratories and has funded the construction of high-containment laboratories at academic institutions. The Food and Drug Administration (FDA) has its own laboratories and regulates manufacturing of biological products, some of which require high-containment laboratories. The Department of Commerce regulates the export of agents and equipment that have both military and civilian uses and that are often found in high-containment laboratories. The Department of Defense (DOD) has its own laboratories and funds research requiring high-containment laboratories. The Department of Labor’s Occupational Safety and Health Administration (OSHA) regulates and inspects private-sector employee safety and health within high-containment biological laboratories and regulates federal employee safety and health in these laboratories. However, OSHA does not have statutory responsibility for the occupational safety and health of (1) contractor employees performing work at government-owned, contractor-operated sites owned by the Department of Energy (DOE) or (2) state and local government employees. The Department of State (DOS) regulates the export of agents and equipment from defense-related high-containment laboratories. DOS also maintains a listing of some high-containment laboratories as part of U.S. commitments under the Biological and Toxin Weapons Convention. The Department of Justice’s (DOJ) Federal Bureau of Investigation (FBI) utilizes high-containment laboratories when its forensic work involves dangerous biological agents and conducts security risk assessments for the DSAT and APHIS select agent programs. The Department of Homeland Security (DHS) has its own high-containment laboratories and funds a variety of research requiring high-containment laboratories. The Department of Energy has several BSL-3 laboratories doing research to develop detection and response systems to improve preparedness for a biological attack. The Department of the Interior has its own BSL-3 laboratories for work with infectious animal diseases. The Department of Veterans Affairs has BSL-3 laboratories for diagnostic and research purposes. The Environmental Protection Agency (EPA) has its own BSL-3 laboratories and also coordinates the use of various academic, state, and commercial high-containment laboratories nationwide as part of its emergency response mission (eLRN, the Environmental Laboratory Response Network). Currently, no U.S. laws provide for federal government oversight of all high-containment laboratories. 
However, laws regulating the use, possession, and transfer of select agents and toxins impose requirements on entities with high-containment laboratories that work with these agents. The following is a short summary of pertinent laws, regulations, and guidance. Following the Oklahoma City bombing in 1995, Congress passed the Antiterrorism and Effective Death Penalty Act of 1996 to deter terrorism, among other reasons. Section 511 of title V of this act gave authority to the HHS Secretary to regulate the transfer, between laboratories, of certain biological agents and toxins. It directed the Secretary to promulgate regulations identifying a list of biological agents and toxins—called select agents—that have the potential to pose a severe threat to public health and safety, providing procedures governing the transfer of those agents, and establishing safeguards to prevent unauthorized access to those agents for purposes of terrorism or other criminal activities. In response to this act, the HHS Secretary established the select agent program within the CDC. In reaction to the September 11, 2001, terrorist attacks and the subsequent anthrax incidents, Congress passed several laws to combat terrorism (to prevent theft, unauthorized access, or illegal use) and, in doing so, significantly strengthened the oversight and use of select agents. The USA PATRIOT Act made it a criminal offense for certain restricted persons—including some foreign aliens, persons with criminal records, and those with mental defects—to transport or receive select agents. The act also made it a criminal offense for any individual to knowingly possess any biological agent, toxin, or delivery system in type or quantity not justified by a peaceful purpose. Subsequently, Congress passed the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 (Bioterrorism Act), which (1) expanded the select agent program to include not only the regulation of the transfer but also the use and possession of select agents and (2) increased safeguards and security requirements. 
The Bioterrorism Act expanded the select agent program by granting comparable regulatory authorities to USDA for biological agents and toxins that present a severe threat to plant or animal health or plant or animal products; requiring coordination/concurrence between USDA and HHS on select agents and toxins regulated by both agencies (“overlap” agents and toxins); requiring the Secretaries of USDA and HHS to establish and maintain a list of each biological agent and toxin (select agent and toxin) that has the potential to pose a severe threat to public health and safety, animal or plant health, or animal or plant products and directing the Secretaries of HHS and Agriculture to biennially review and republish the select agent list, making revisions as appropriate to protect the public; requiring the Secretaries by regulation to provide for registration of facilities for the possession, use, and transfer of select agents and toxins, not just for those facilities sending or receiving select agents; requiring the Attorney General (delegated to the FBI’s Criminal Justice Information Services Division) to check criminal, immigration, national security, and other electronic databases with information submitted in the registration process for all individuals and nongovernmental entities to determine if the registrant is a restricted person as defined in the USA PATRIOT Act or has been reasonably suspected by federal law enforcement or intelligence agencies of committing a federal crime of terrorism or having known involvement in an organization that engages in terrorism or is an agent of a foreign power (this is called a security risk assessment); requiring the Secretaries to establish a national database that includes the names and locations of registered entities; the lists of agents and toxins such entities possess, use, or transfer; and information regarding the characterizations of such agents and toxins; requiring the Secretaries to promulgate regulations that include safeguard and security requirements for persons possessing, using, or transferring a select agent or toxin commensurate with the risk such an agent or toxin poses to public, animal, and plant health and safety, including required notification to the Secretaries and law enforcement agencies of theft, loss, or release of a listed agent or toxin; and establishing civil money penalties for persons violating the regulations and additional criminal penalties for knowingly possessing a select agent or toxin without registering it or knowingly transferring a select agent or toxin to an unregistered person. (See appendix III for the list of select agents and toxins as of November 11, 2008.) HHS originally established the select agent program within CDC in response to the Antiterrorism and Effective Death Penalty Act of 1996. Before the select agent program was created, CDC regulated only the importation of etiologic agents. CDC published regulations governing the select agent program that became effective on April 15, 1997. These regulations provided additional requirements for facilities transferring or receiving select agents and specifically (1) established a list of select agents that have the potential to pose a severe threat to public health and safety, (2) required registration of facilities before the domestic transfer of select agents can occur, and (3) developed procedures to document the transfer of agents. 
Subsequently, the Bioterrorism Act strengthened HHS’s authority to regulate facilities and individuals that possessed biological agents and toxins that pose a severe threat to public health and safety, and the Agricultural Bioterrorism Act granted comparable authority to the USDA to establish a parallel set of requirements for facilities and individuals that handle agents and toxins that pose a severe threat to animal or plant health or animal or plant products. USDA delegated its authority to the Animal and Plant Health Inspection Service (APHIS). Both CDC and APHIS issued similar regulations governing the select agent program; these regulations became effective on April 18, 2005. CDC issued regulations for select agents posing a threat to public health and safety. APHIS issued separate but largely identical regulations for select agents posing a threat to plants and animals. CDC and APHIS share oversight/registration responsibilities for overlap select agents that pose threats to both public health and animal health and animal products. In developing a list of select agents and toxins that have the potential to pose a severe threat to public health and safety, the HHS Secretary was required by the Bioterrorism Act to consider the criteria listed below. The Secretary directed the CDC to convene an interagency working group to determine which biological agents and toxins required regulation based on the following criteria: the effect on human health of exposure to the agent or toxin; the degree of contagiousness of the agent or toxin and the methods by which the agent or toxin is transferred to humans; the availability and effectiveness of pharmacotherapies and immunizations to treat and prevent any illness resulting from infection by the agent or toxin; and any other criteria, including the needs of children or other vulnerable populations, that the Secretary considers appropriate. Similarly, the Agricultural Bioterrorism Act required the USDA Secretary (delegated to APHIS) to consider the following criteria when selecting biological agents to be included in the list of select agents that pose a severe threat to animal or plant health or animal or plant products: the effect of exposure to the agent or toxin on animal or plant health and on the production and marketability of animal or plant products; the pathogenicity of the agent or the toxicity of the toxin and the methods by which the agent or toxin is transferred to animals and plants; the availability and effectiveness of pharmacotherapies and prophylaxis to treat and prevent any illness caused by an agent or toxin; and any other criteria that the Secretary considers appropriate to protect animal or plant health, or animal or plant products. Individuals and entities are required to register with CDC or APHIS prior to possessing, using, or transferring any select agents or toxins. Prior to registering, entities must designate a responsible official who has the authority and responsibility to act on behalf of the entity. 
Receiving a certificate of registration from the HHS Secretary or the Administrator of APHIS is contingent on CDC’s or APHIS’s review of the application package (APHIS/CDC Form 1) and the security risk assessment conducted by the FBI (composed of database checks and consisting of a report of criminal convictions and involuntary commitments greater than 30 days only) on the individual or nongovernmental entity (federal, state, or local governmental entities are exempt), the responsible official, and any individual who owns or controls the nongovernmental entity. Registration may also be contingent upon inspection of the facility. Submission of additional information—such as a biosecurity, biosafety, or incident response plan—is required prior to receiving a certificate of registration. Registration is valid for one physical location and for a maximum of 3 years. For facilities registered with CDC or APHIS that possess, use, or transfer select agents, the regulations require the following:

1. All individuals in the facility needing access to select agents and toxins must be approved by the Administrator of APHIS or the HHS Secretary following a security risk assessment by the FBI prior to having access (access approval is valid for 5 years).

2. The facility must develop and implement a written security plan sufficient to safeguard the select agent or toxin against unauthorized access, theft, loss, or release.

3. The facility must develop and implement a written biosafety plan commensurate with the risk of the agent or toxin; the plan must contain sufficient information on biosafety and containment procedures.

4. The facility must develop and implement a written incident response plan that fully describes the facility’s response procedures for the theft, loss, or release of a select agent or toxin; inventory discrepancies; security breaches; severe weather; workplace violence; bomb threats; suspicious packages; and other possible emergencies at the facility.

5. The facility must provide training on biosafety and security to individuals with access to select agents and to individuals not approved for access who will work in or visit areas where select agents or toxins are handled and stored.

6. The facility must maintain records relating to the activities covered by the select agent regulations.

7. The facility must immediately notify CDC or APHIS and appropriate federal, state, or local law enforcement agencies upon discovering a theft or loss of a select agent or toxin, and notify CDC or APHIS upon discovering the release of a select agent or toxin.

As a matter of policy, CDC or APHIS inspects the premises and records of applicants, including a review of all required plans, before issuing the initial certificate of registration to ensure that the entity is compliant with the select agent regulations. Also, CDC and APHIS must be allowed to inspect, without prior notification, any facility where select agents or toxins are possessed, used, or transferred. CDC and APHIS perform site visits in cases where an entity may be adding a select agent or toxin, new laboratory facility, or new procedure that requires verification of the entity’s biosafety plans and procedures. Other inspections performed by CDC and APHIS include follow-up inspections based on observations from audits performed by federal partners, compliance inspections, and investigations of reported incidents that may have involved biosafety or security concerns that could affect public, animal, and plant health and safety.
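The registration and access-approval time limits described above can be expressed as a simple calculation. The sketch below is illustrative only and is not an agency tool; the dates are hypothetical, and the only rules it encodes are the two stated above (a certificate of registration is valid for a maximum of 3 years, and an individual's access approval is valid for 5 years).

    # Minimal sketch, not an official CDC/APHIS tool; the dates below are hypothetical.
    # Encodes two timing rules described above: registration is valid for at most
    # 3 years, and an individual's access approval (following the FBI security risk
    # assessment) is valid for 5 years.
    from datetime import date

    def add_years(d: date, years: int) -> date:
        """Return the same month/day in a later year (Feb 29 falls back to Feb 28)."""
        try:
            return d.replace(year=d.year + years)
        except ValueError:
            return d.replace(year=d.year + years, day=28)

    registration_issued = date(2005, 4, 18)   # hypothetical issuance date
    access_approved = date(2005, 6, 1)        # hypothetical approval date

    print("registration expires:", add_years(registration_issued, 3))  # 2008-04-18
    print("access approval expires:", add_years(access_approved, 5))   # 2010-06-01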
CDC and APHIS use specific checklists to guide their inspections. CDC and APHIS developed these checklists from the select agent regulations and the BMBL, and they are available at www.selectagents.gov. The BMBL has become the code of practice for laboratory principles, practices, and procedures. If CDC or APHIS discovers possible violations of the select agent regulations, several types of enforcement actions may occur: Administrative actions: CDC and APHIS may deny an application or suspend or revoke a registered entity’s certificate of registration if the individual or entity, responsible official, or owner of the entity is reasonably suspected of criminal violations or does not comply with the select agent regulations or if denial, suspension, or revocation is necessary to protect public, animal, or plant health and safety. A suspension can be for all select agent work at a registered entity or be specific to particular agents. Civil Money Penalties or Criminal Enforcement: CDC refers possible violations of the select agent regulations to the HHS Office of Inspector General (OIG). The HHS-OIG can levy civil money penalties (for an individual, up to $250,000 for each violation and, for an entity, up to $500,000 for each violation) or recommend criminal enforcement (imprisonment for up to 5 years, a fine, or both). As of April 29, 2009, CDC’s DSAT had referred 48 entities to the HHS-OIG for violating select agent regulations. HHS-OIG had levied $1,997,000 in civil money penalties against 13 of these entities. Information regarding these entities can be found on the following Web sites: http://oig.hhs.gov/fraud/enforcement/cmp/agents_toxins.asp and http://oig.hhs.gov/fraud/enforcement/cmp/agents_toxins_archive.asp. Also, the agricultural select agent program relies on APHIS’ own investigative unit, USDA Marketing and Regulatory Programs—Investigative and Enforcement Services (IES), for initial investigations of potential select agent violations. Like the HHS-OIG, IES can levy civil money penalties or recommend criminal enforcement. IES refers potential criminal violations to USDA’s OIG. From 2002—when APHIS first became involved with select agents—until May 7, 2009, the agricultural select agent program referred 39 entities or unregistered persons to IES for potential violations of the select agent regulations. USDA has levied $547,500 in civil money penalties against nine of these entities or unregistered persons. USDA does not publish information on select agent investigations or the results of these investigations. Referral to DOJ: DSAT or APHIS can refer possible criminal violations involving select agents to DOJ for further investigation or prosecution. The laws and regulations discussed above provide requirements for individuals and entities possessing, using, or transferring select agents and toxins but do not apply universally to high-containment laboratories. However, guidance for operating high-containment laboratories that is not legally mandatory is available. Pertinent guidance includes HHS’s BMBL manual and the NIH Guidelines for Research Involving Recombinant DNA Molecules. HHS’s BMBL Manual: The BMBL, prepared by NIH and CDC, categorizes laboratories on four biosafety levels (BSL) based on risk criteria, with BSL-4 laboratories being utilized for the study of agents that pose the highest threat risk to human health and safety. The BMBL describes a code of practice for biosafety and biocontainment in microbiological, biomedical, and clinical laboratories. 
The BMBL serves as the primary recognized source of guidance on the safe practices, safety equipment, and facility containment needed to work with infectious agents. The first publication was in 1984, and the most recent (5th edition) was published electronically in 2007. The select agent regulations reference the BMBL as a document to consider when entities are developing their written biosafety plans. Even though the BMBL is issued as a guidance document, DSAT and APHIS have incorporated certain elements of it into their inspection checklists as a requirement of the select agent program. The BMBL states that (1) biosafety procedures must be incorporated into the laboratory’s standard operating procedures or biosafety manual, (2) personnel must be advised of special hazards and are required to read and follow instructions on practices and procedures, and (3) personnel must receive training on the potential hazards associated with the work and the necessary precautions to prevent exposure. Further, the BMBL (5th edition) provides guidance on biosecurity, such as methods of controlling access to areas where agents are used or stored. The BMBL also states that a plan must be in place for informing police, fire, and other emergency responders concerning the type of biological materials in use in the laboratory areas. NIH Guidelines for Research Involving Recombinant DNA Molecules: Some of the work in BSL-3 and BSL-4 laboratories in the United States involves rDNA, and the standards and procedures for research involving rDNA are set by the NIH Guidelines for Research Involving Recombinant DNA Molecules (NIH rDNA Guidelines). Institutions must follow these guidelines when they receive NIH funding for work with rDNA. The guidelines include the requirement to establish an institutional biosafety committee (IBC), which is responsible for (1) reviewing rDNA research conducted at or sponsored by the institution for compliance with the NIH rDNA Guidelines and (2) reviewing categories of research as delineated in the NIH rDNA Guidelines. IBCs also periodically review ongoing rDNA research to ensure continued compliance with the guidelines. While the guidelines are only mandatory for those institutions receiving NIH funding, they have become generally accepted standards for safe working practice in this area of research and are followed voluntarily by many companies and other institutions not otherwise subject to their requirements. Since 2001, the number of BSL-4 and BSL-3 laboratories in the United States has increased, and this expansion has taken place across federal, state, academic, and private sectors and throughout the United States. Federal officials and experts believe that while the number of BSL-4 laboratories in the United States is known, the number of BSL-3 laboratories is unknown. Information about the number, location, activities, and ownership is available for high-containment laboratories that are registered with the DSAT or APHIS select agent programs but not for those outside the program. A number of issues are associated with determining the overall number of BSL-3 and BSL-4 laboratories. In our discussions with federal agency officials and experts and in our review of the literature, we found that the total number depended upon how the question was phrased. 
While data were generally available on the number of facilities or sites that contained a BSL-3 or BSL-4 laboratory, the precise number of independent rooms within those facilities qualifying as BSL-3 or BSL-4 laboratories was not generally specified. Some facilities contain more than one actual laboratory. For example, while CDC has two facilities with BSL-4 capacity, one of the facilities actually contains two separate BSL-4 laboratories, while the other has four separate BSL-4 laboratories. These officials and experts also told us that counting the number of laboratories is problematic because the definition of the term “laboratory” varies. A more meaningful measure is determining the net square footage of working BSL-4 space. However, this information is often not available. In addition, there also are methodological issues associated with determining whether a laboratory is operational or not. The expansion of high-containment laboratories in the United States began in response to the emergency situation resulting from the anthrax attacks in 2001. Understandably, the expansion initially lacked a clear, governmentwide coordinated strategy. In that emergency situation, the expansion was based on the perceptions of individual agencies about the capacity required for their high-containment laboratory activities as well as the availability of congressionally approved funding. Decisions to fund the construction of high-containment laboratories were made by multiple federal agencies in multiple budget cycles. Federal and state agencies, academia, and the private sector considered their individual requirements, but a robust assessment of national needs was lacking. Since each agency has a different mission, an assessment of needs, by definition, is at the discretion of the agency. We have not found any national research agenda linking all these agencies that would have allowed for such a national needs assessment. Even now, after more than 7 years, we have not been able to find any detailed projections based on a governmentwide strategic evaluation of future capacity requirements in light of existing capacity; the numbers, location, and mission of the laboratories needed to effectively counter biothreats; and national public health goals. Without this information, there is little assurance of having facilities in the right places with the right specifications to meet a governmentwide strategy. For most of the past 50 years, there were only two entities with BSL-4 laboratories in the United States: federal laboratories at USAMRIID at Fort Detrick, Maryland, and at the CDC in Atlanta, Georgia. Between 1990 and 2000, three new BSL-4 laboratories were built: (1) the first BSL-4 university laboratory (a glovebox, rather than a conventional laboratory) at Georgia State University in Atlanta; (2) the University of Texas Medical Branch (UTMB) Robert E. Shope BSL-4 laboratory in Galveston, Texas; and (3) the Southwest Foundation for Biomedical Research, a privately funded laboratory in San Antonio, Texas. These entities were registered with CDC prior to 2004. In 2004, these entities registered their facilities with DSAT under the select agent regulations. As of June 2009, two new BSL-4 laboratories became operational: CDC Emerging Infectious Diseases laboratory in Atlanta, Georgia, and NIAID Rocky Mountain laboratory in Hamilton, Montana. To date, there are seven operational BSL-4 laboratories in the United States.
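The counting problem described above (facilities versus the individual laboratories inside them) can be illustrated with a short sketch. The CDC figures, two facilities containing two and four separate BSL-4 laboratories, come from the text; the data layout and facility labels are assumptions made only for illustration.

    # Illustrative sketch: the answer to "how many BSL-4 laboratories?" depends on
    # whether one counts facilities or the separate laboratory rooms inside them.
    # The CDC figures (2 facilities, with 2 and 4 BSL-4 laboratories) are from the
    # text; the facility labels are placeholders.
    cdc_facilities = {
        "CDC facility A": 2,   # separate BSL-4 laboratories
        "CDC facility B": 4,
    }

    print("facilities:", len(cdc_facilities))                         # -> 2
    print("individual laboratories:", sum(cdc_facilities.values()))   # -> 6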
Table 3 shows the number of entities with BSL-4 laboratories by calendar year and sector. Since the anthrax attacks in 2001, seven new BSL-4 facilities are in the planning, construction, or commissioning stage. Four of these facilities are in the federal sector, two are in the academic sector, and one is in the state/local government sector. The following are the BSL-4 facilities in the planning, construction, or commissioning stage in the federal sector:

(1) NIAID Integrated Research Facility, Fort Detrick, Maryland;

(2) DHS National Biodefense Analysis and Countermeasures Center, Fort Detrick, Maryland;

(3) DHS National Bio- and Agro-Defense Facility (NBAF), Manhattan, Kansas; and

(4) DOD USAMRIID Recapitalization, Fort Detrick, Maryland. This new BSL-4 laboratory will replace the existing USAMRIID laboratory.

The following BSL-4 facilities are in the planning or construction stage in the academic sector and are funded by NIAID:

(5) National Biocontainment Laboratory (NBL) at Boston University, Boston, Massachusetts, and

(6) NBL at the University of Texas Medical Branch, Galveston, Texas.

One BSL-4 facility is being built in the state/local government sector to identify and characterize highly infectious emerging diseases that pose a threat to public health:

(7) Virginia Division of Consolidated Laboratory Services, Richmond, Virginia.

The total number of BSL-4 laboratories will increase from 7 to 13 when these laboratories become operational. The locations of the BSL-4 laboratories that are currently registered, under construction, or in the planning stage are shown in figure 1. CDC officials told us that the enormous cost of construction would preclude operators from building a BSL-4 laboratory unless they were going to work with one or more of the select agents that require BSL-4 level containment. Based on this reasoning, these officials believe that they know all existing operational BSL-4 laboratories in the United States because these laboratories are required to be registered under the select agent regulations. However, registration with DSAT is a requirement based on possession of select agents and not ownership of a BSL-4 laboratory. Therefore, if a BSL-4 laboratory, like the laboratory in Richmond, Virginia, is commissioned using simulants, and all diagnostic work is done effectively by using biochemical reagents, gene probes, and possibly inactivated agents as controls, there would be no legal requirement for registration. Thus, CDC may not know of all BSL-4 laboratories. CDC officials stated that unlike the case with BSL-4 laboratories, operators might build BSL-3 laboratories and not work with select agents. For example, when building new laboratories or upgrading existing ones, many laboratory owners may build to meet BSL-3 level containment, often in anticipation of future work, even though they intend for some time to operate at the BSL-2 level with BSL-2 recommended agents. Consequently, CDC officials acknowledged that they do not know the total number of BSL-3 laboratories in the United States that are not registered to possess, use, or transfer select agents. In April 2007, we conducted a Web-based survey—based on a search of publicly available sources—of contacts knowledgeable about high-containment laboratories (for example, biosafety officers). A number of respondents who stated that their institutions had high-containment laboratories said that their laboratories were not working with select agents and were therefore not registered with the DSAT or APHIS select agent program.
Although the respondents were not randomly selected, the results suggest that there may be many BSL-3 laboratories that do not work with select agents. These laboratories could potentially be tapped for use if national strategy required additional capacity. In 2004, there were far more entities registered with CDC that maintained BSL-3 laboratories than BSL-4 laboratories (150 versus 5), and this number grew to 242 in 2008. As shown in figure 2, these entities accounted for a total of 415 registered BSL-3 laboratories in 2004; this number grew to 1,362 by 2008 (a more than three-fold increase). Between 2004 and 2008, the largest increase occurred in the academic sector (from 120 to 474, an increase of 354 laboratories) followed by the federal government (from 130 to 395, an increase of 265 laboratories). Table 4 details these increases. APHIS experienced only a slight increase in the entities with BSL-3 laboratories that registered between 2004 and 2007 (from 41 to 45); however, in 2008, APHIS transferred 8 BSL-3 facilities to DSAT as the result of a change in the select agent list rules. Overall, the number of entities registered with APHIS was much lower than DSAT’s total. (See table 5.) As shown in table 6, the size of the state public health laboratories network increased following the 2001 anthrax attacks. According to a survey conducted by the Association of Public Health Laboratories (APHL) in August 2004, state public health laboratories have used public health preparedness funding since 2001 to build, expand, and enhance BSL-3 laboratories. In 1998, APHL found that 12 of 38 responding states reported having a state public health laboratory at the BSL-3 level. As of March 2009, all 50 states had at least one state public health BSL-3 laboratory. Since the anthrax attacks of 2001, BSL-3 laboratories have started to expand geographically as well as by sector. As mentioned above, because individual states need to respond to bioterrorist threats, all 50 states now have some BSL-3 level capacity—at least for diagnostic and analytical services—to support emergency response. Additionally, NIAID recently funded the construction of 13 BSL-3 Regional Biocontainment Laboratories (RBL) within the academic research community at the following universities:

(1) Colorado State University, Fort Collins, Colorado;
(2) Duke University Medical Center, Durham, North Carolina;
(3) George Mason University, Fairfax, Virginia;
(4) University of Hawaii, Manoa, Hawaii;
(5) University of Louisville, Louisville, Kentucky;
(6) University of Medicine and Dentistry of New Jersey, Newark, New Jersey;
(7) Tufts University, Grafton, Massachusetts;
(8) Tulane National Primate Research Center, Covington, Louisiana;
(9) University of Alabama, Birmingham, Alabama;
(10) University of Chicago, Argonne, Illinois;
(11) University of Missouri, Columbia, Missouri;
(12) University of Pittsburgh, Pittsburgh, Pennsylvania; and
(13) University of Tennessee Health Science Center, Memphis, Tennessee.

NIAID is constructing RBLs to provide regional BSL-3 laboratory capacity to support NIAID’s Regional Centers of Excellence for Biodefense and Emerging Infectious Diseases Research. The RBLs are distributed regionally around the country. Figure 3 shows the sites of NIAID-funded RBLs in the United States. As expected, with an increase in the number of entities and laboratories that work with select agents, the number of individuals DSAT approved for access to work in the laboratories increased between 2004 and 2008.
Table 7 shows the total number of individuals with active access approvals from DSAT and APHIS. In 2004, 8,335 individuals had access approvals. This number increased to 10,365 by 2008. The largest growth was in the academic sector. In 2004, 2,309 individuals in the academic sector had access approvals; this number increased to 3,110 by 2008 (an increase of 801 workers). In addition to those workers approved by DSAT, 4,149 individuals had access approvals through APHIS as of February 2009. It is important to note that as the number of new entities and high-containment laboratories increases, many new workers are being hired to work in these laboratories. However, not much is currently known about the characteristics of this workforce because there are no requirements in the select agent regulations to report on qualifications. In addition, there are no national standards for training of workers or standardized certification programs to test the proficiency of these workers. The increase in the number of entities and high-containment laboratories that work with select agents has implications for federal oversight. As part of regulatory requirements, DSAT and APHIS staff inspect each entity prior to issuing a certificate of registration to ensure that the entity is in compliance with the select agent regulations. In addition, as part of the entity’s renewal process, which occurs every 3 years, DSAT and APHIS inspectors are required to reinspect the entity. APHIS performs additional annual compliance inspections between the 3-year renewal cycles even if there is no change. DSAT performs additional inspections when an entity adds a select agent or toxin, a new laboratory facility, or a new procedure that requires verification of the entity’s biosafety plans and procedures. As mentioned previously, the number of entities and the number of BSL-3 laboratories working with select agents increased between 2004 and 2008. As a result of this increase, DSAT now has to inspect more entities. As shown in table 8, DSAT had a budget of $14 million and had 25 full-time equivalent inspectors (5 federal and 20 contract) in fiscal year 2004, when the interim regulations first provided for certificates of registration. However, its budget decreased between 2004 and 2008. In 2004, DSAT was responsible for providing oversight to 150 entities with 415 BSL-3 laboratories. In 2008, DSAT provided oversight to 242 entities with 1,362 BSL-3 laboratories with a decreased budget and only 3 more inspectors (11 federal and 17 contract). No evaluations are available to determine how this increased mission and decreased budget affected the quality of oversight. Before 2005, when APHIS had no select agent line item, it funded select agent program activities using a variety of existing funding sources (e.g., homeland security). As shown in table 9, APHIS received a budget of $2.5 million in fiscal year 2005. APHIS officials estimate that the service has devoted about 5 staff years to select agent inspections for each year since 2006. No evaluations are available to determine whether APHIS has sufficient resources to carry out its mission. Currently, no executive or legislative mandate directs any federal agency to track the expansion of all high-containment laboratories. Because no federal agency has the mission to track the expansion of BSL-3 and BSL-4 laboratories in the United States, no federal agency knows how many such laboratories exist in the United States. 
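The figures cited above imply a sharply rising inspection workload. The back-of-the-envelope calculation below is ours, not an agency metric; it simply divides the reported numbers of registered entities and BSL-3 laboratories by the reported number of DSAT inspectors for 2004 and 2008.

    # Rough illustration using figures cited above; not an official DSAT workload measure.
    dsat = {
        2004: {"entities": 150, "bsl3_labs": 415,  "inspectors": 25},  # 5 federal + 20 contract
        2008: {"entities": 242, "bsl3_labs": 1362, "inspectors": 28},  # 11 federal + 17 contract
    }

    for year, d in sorted(dsat.items()):
        print(f"{year}: {d['bsl3_labs'] / d['inspectors']:.1f} BSL-3 labs and "
              f"{d['entities'] / d['inspectors']:.1f} entities per inspector")
    # 2004: 16.6 BSL-3 labs and 6.0 entities per inspector
    # 2008: 48.6 BSL-3 labs and 8.6 entities per inspector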
While there is a consensus among federal agency officials and experts that some degree of risk is always associated with high-containment laboratories, no one agency is responsible for determining, or able to determine, the aggregate or cumulative risks associated with the expansion of these high-containment laboratories. As shown in table 10, none of the 12 federal agencies that responded to our survey indicated that they have the mission to track and know the number of all BSL-3 and BSL-4 laboratories within the United States. While some federal agencies do have a mission to track a subset of BSL-3 and -4 laboratories that work with select agents and know the number of those laboratories, no single regulatory agency has specific responsibility for biosafety in all high-containment laboratories in the United States. According to some experts and federal agency officials, the oversight of these laboratories is fragmented and relies on self-policing. For example, if an entity is registered under the select agent regulations, DSAT or APHIS provides oversight. On the other hand, if an entity receives federal funding from NIH for rDNA research, the NIH Office of Biotechnology Activities provides oversight. These agencies assume that all risks would be dealt with by the entities’ self-regulation, consistent with the laboratory practice guidelines developed by NIH and CDC. Several federal agencies told us that they should know the number and location of all BSL-3 and -4 laboratories to carry out their agency missions. Some intelligence agencies, for example, indicated that—if there is another incident similar to the 2001 anthrax attacks—they would need to know the number and location of high-containment laboratories that do not work with select agents within the United States to identify all potential sources that could have been used to prepare the material. These officials told us that a determined scientist could easily take a small quantity of a select agent from his or her laboratory to a non-select-agent laboratory to grow the material. According to these intelligence agencies, these high-containment laboratories represent a capability that can be targeted by terrorists or misused by insiders with malicious intent. While some agencies have the specific responsibility for determining threats from rogue nations and foreign and domestic terrorists, we found that no agency has the mission to proactively determine the threat from insiders. According to most experts, there is a baseline risk associated with any high-containment laboratory. With expansion, the aggregate risks increase. However, no agency has the mission to determine whether the risks associated with expansion increase in proportion to the number of laboratories or at some different rate or whether factors such as location and resource limitations may affect the risk ratio. Because CDC and USDA regulations require that entities registering with the select agent program assess only the risks associated with their individual laboratories, CDC and USDA do not have the mission to determine the aggregate risks associated with the expansion of high-containment laboratories that work with select agents. High-containment laboratories can pose health risks for individual laboratory workers as well as the surrounding community. However, the relative risk profile of new versus more established laboratories is not known. 
According to CDC officials, the risks from accidental exposure or release can never be completely eliminated, and even laboratories within sophisticated biological research programs—including those most extensively regulated—have had and will continue to have safety failures. In addition, while some of the most dangerous agents are regulated under the CDC-APHIS select agent program, high-containment laboratories also work with agents not covered under this program. Laboratories outside the select agent program, especially those working with emerging infectious diseases, can also pose biosafety risks from accidental exposure or release. Several of these biological agents are listed in the BMBL as requiring BSL-3 practices, including West Nile Virus and Hantavirus. (See appendix IV for a list of biological agents recommended to be handled in BSL-3 laboratories that are not select agents.) Consequently, laboratories that have the capability to work with biological agents, even though they do not possess select agents, are not currently subject to oversight. These laboratories also have associated biosecurity risks because of their potential as targets for terrorism or theft by either internal or external perpetrators. Laboratories outside the select agent program also represent a capability that can be paired with dangerous pathogens and skilled but ill-intentioned scientists to become a threat. Currently, no laws in the United States specifically focus on all high-containment laboratories. In the United Kingdom (U.K.), by contrast, new high-containment laboratories that work with human, animal, or genetically modified (GM) pathogens need to notify the U.K. regulator (the Health and Safety Executive (HSE)) and receive either consent (for GM human pathogens) or license (for animal pathogens) before they commence their activities. Prior to construction of the facility, there is no requirement to inform HSE (except for planning authorities, who look at land use and building quality); however, in practice, HSE staff are involved at the design stage and at various points during the construction process. According to HSE staff, this early involvement has been extremely helpful in ensuring that new facilities meet the standards set out in the legislation and supporting guidance (related to the management, design, and operation of high-containment laboratories). This involvement has also enabled HSE to address the application of new technologies in high-containment laboratories (e.g., alkaline hydrolysis for waste destruction as an alternative to incineration). While the legislation in the U.K. states that a BSL-4 laboratory must have an incinerator on site for disposal of animal carcasses, HSE staff told us that they have been involved in discussions relating to new facilities where the entities wanted to replace the incinerator with an alkaline hydrolysis system. Similarly, all BSL-4 laboratories use cabinet lines (for human pathogens). HSE staff have been in discussion with entities about proposals to move to a suited system rather than rely entirely on primary containment. HSE staff told us that they recognize that technologies change and there may be good reasons to move away from established procedures, assuming that the alternatives being proposed provide a high degree of assurance that biosafety and biosecurity will not be compromised by the changes. In April 2010, the U.K.
plans to implement a single regulatory framework for human, animal, and genetically modified pathogens that will include a legal requirement for duty holders to consult the regulatory authority prior to construction and for HSE to be a statutory consultee as part of the planning authorization. We reviewed four incidents that highlight the risks inherent in the expansion of high-containment laboratories: alleged insider misuse of a select agent and laboratory; Texas A&M University’s (TAMU) failure to report to CDC exposures to select agents in 2006; power outages at CDC’s high-containment laboratories in 2007 and 2008; and the release of foot-and-mouth disease virus in 2007 at the Pirbright facility in the U.K. We reviewed these incidents in detail because they represented different types of risk associated with high-containment laboratories and because a significant amount of information was available concerning them. According to the experts we talked with, many other incidents and accidents have occurred, mainly as a result of human error or equipment failure. Fortunately, most incidents/accidents do not have serious consequences for the health of laboratory workers, the general population, or the environment. The experts we spoke with also stated that it is highly probable that many incidents go unreported and unrecorded because of the lack of such serious consequences. Such underreporting represents lost opportunities to analyze and learn lessons that can provide a basis for continuing improvement and maintenance of laboratory safety. We are not making any generalizations about the magnitude of the problem involving other laboratories. However, the lessons we have identified highlight ways to improve biosafety and biosecurity. These lessons also have implications for institutional and federal oversight. In September and October 2001, letters containing powdered B. anthracis spores were distributed through the U.S. postal system to two senators, Thomas Daschle and Patrick Leahy, and members of the media. The letters led to the first U.S. cases of anthrax disease related to bioterrorism, and the subsequent investigation by the FBI has been called “Amerithrax.” On August 6, 2008, the FBI alleged that the “sole culprit” in the 2001 anthrax attacks was Dr. Bruce Ivins, a U.S. Army scientist with a Ph.D. in microbiology who had worked for 28 years at the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID) at Ft. Detrick, Maryland. USAMRIID is the only DOD laboratory with the capability to study highly dangerous pathogens requiring maximum containment at BSL-4. Dr. Ivins had helped develop an anthrax vaccine for U.S. troops and was in charge of producing large quantities of wet anthrax spores for research. Immediately following the anthrax mailings in 2001, the FBI took contaminated evidence to USAMRIID for analysis. Dr. Ivins was tasked by USAMRIID management to analyze the samples of spores sent through the mail and was also a technical consultant to the FBI in the early months of the investigation. In March 2003, Dr. Ivins and two of his colleagues at USAMRIID received the Decoration for Exceptional Civilian Service—the highest award given to DOD civilian employees—for helping solve technical problems in the manufacturing of licensed anthrax vaccine. In December 2001, one of Dr. Ivins’ coworkers told Dr. Ivins that she observed on several occasions unsafe handling procedures by Diagnostic System Division personnel.
She also told him that she might have been exposed to anthrax spores when handling an anthrax-contaminated letter. Dr. Ivins began sampling areas in the laboratory space that might have been contaminated with anthrax. He took samples from the shared office areas and later decontaminated her desk, computer, keypad, and monitor. However, he neither documented this incident in the Army record log book nor notified his superiors. He later acknowledged to Army officials that this was a violation of protocol. Dr. Ivins’ behavior was detailed in an Army investigation conducted in response to a second round of sampling he conducted in April, but his name did not surface at that time as a suspect in the anthrax attacks. After a spill incident inside of suite B-3 in building 1425 in April 2002, Dr. Ivins conducted a second round of unauthorized sampling of his shared office space and cold side areas outside of suite B-3. These findings were reported and sparked a buildingwide sampling inspection. An inspection conducted by the Army 7 months after the anthrax mailing found that suite B-3 in building 1425 at USAMRIID was contaminated with anthrax in four rooms of suite B-3 (306, 304, cold room, and 313 (Dr. Ivins’s laboratory)) and that the bacteria had escaped from secure to unprotected areas in the building. All the areas outside of suite B-3 that tested positive were associated with Dr. Ivins and members of the Bacteriology Division. The inspection report stated that “safety procedures at the facility and in individual laboratories were lax and inadequately documented; that safety supervision sometimes was carried out by junior personnel with inadequate training; and that exposures of dangerous bacteria at the laboratory, including anthrax, had not been adequately reported.” (See appendix V for additional information on the U.S. Army’s requirements for high-containment laboratories at the time of the 2001 anthrax incidents.) In 2005, the FBI investigation began to shift to a particular laboratory at USAMRIID, and it began to focus on Dr. Ivins as a suspect in 2007. According to the FBI, Dr. Ivins had the necessary expertise and equipment to make the anthrax powder in his laboratory. Specifically, at the time of the anthrax mailings, Dr. Ivins possessed extensive knowledge of various anthrax production protocols. He was adept at manipulating anthrax production and purification variables to maximize sporulation and improve the quality of anthrax spore preparations. He also understood anthrax aerosolization dosage rates and the importance of purity, consistency, and spore particle size due to his responsibility for providing liquid anthrax spore preparations for animal aerosol challenges. He also had used lyophilizers, biological safety cabinets, incubators, and centrifuges in vaccine research. Such devices are considered essential for the production of the highly purified, powdered anthrax spores used in the fall 2001 mailings. According to the FBI’s application for a search warrant, at the time of the attack, Dr. 
Ivins “(1) was the custodian of a large flask of highly purified anthrax spores that possess certain genetic mutations identical to the anthrax used in the attacks; (2) Ivins has been unable to give investigators an adequate explanation for his late night laboratory work hours around the time of both anthrax mailings; (3) Ivins has claimed that he was suffering serious mental health issues in the months preceding the attacks, and told a coworker that he had ‘incredible paranoid, delusional thoughts at times’ and feared that he might not be able to control his behavior; (4) Ivins is believed to have submitted false samples of anthrax from his laboratory to the FBI for forensic analysis in order to mislead investigators; (5) at the time of the attacks, Ivins was under pressure at work to assist a private company that had lost its FDA approval to produce an anthrax vaccine the Army needed for U.S. troops, and which Ivins believed was essential for the anthrax program at USAMRIID; and (6) Ivins sent an e-mail to a friend a few days before the anthrax attacks warning her that ‘Bin Laden terrorists for sure have anthrax and sarin gas’ and have ‘just decreed death to all Jews and all Americans,’ language similar to the anthrax letters warning ‘WE HAVE THIS ANTHRAX ... DEATH TO AMERICA ... DEATH TO ISRAEL.’” The FBI stated that in late 2005, forensic science (genetic analysis) was used to trace the anthrax used in the 2001 attacks to a flask kept in the refrigerator in Dr. Ivins’s laboratory at Ft. Detrick, Maryland; the spores in the letters had genetic markers consistent with the spores in that flask. During this time, Dr. Ivins kept his security clearance and passed a polygraph-assisted interrogation (also known as a “lie detector test”) in which he was questioned about his possible participation in the anthrax attacks. In November 2007, he was denied access to all high-containment laboratories and, in March 2008, to all laboratories at USAMRIID. It should be noted that while Dr. Ivins was denied access to the high-containment suites in November 2007, he was certified at that time into the personnel reliability program. On July 10, 2008, Dr. Ivins attended a briefing on a new pneumonic plague vaccine under development at the Army’s laboratory. After this briefing, he was escorted to a psychiatric evaluation off the installation by local authorities, and his access rights to the entirety of USAMRIID were withdrawn by the laboratory commander. An order was subsequently issued to installation security to prevent Dr. Ivins from entering the installation unescorted. A written bar order was signed with a plan to serve the document to Dr. Ivins. Before service of the order occurred, he died of a drug overdose on July 29, 2008. This incident highlights two lessons: (1) an ill-intentioned insider can pose a risk not only by passing on confidential information but also by removing dangerous material from a high-containment laboratory, and (2) it is impossible to have completely effective inventory control of biological material with currently available technologies. It is impossible to know the exact number of bacteria or viruses in a laboratory’s inventory or working stocks at any specific time. At Ft. Detrick, ineffective procedures for the control of inventories and the unlimited use of laboratory facilities allegedly allowed Dr. Ivins the opportunity to pursue his own ends.
As the number of high-containment laboratories increases, there will be an increase in the pool of scientists with expertise and, thus, the corresponding risk from insiders may also increase.

Insiders Can Misuse Material and Facilities

There are arguably two aspects to insider risk: the motive of the insider and the ability to misuse material and laboratory facilities. These two elements need to be understood if effective countermeasures are to be instituted in a proportionate manner. In this case, assuming Dr. Ivins was the culprit, no one can conclusively determine what motivated his actions since he committed suicide before his motive could be determined. With regard to the ability to misuse the facility, FBI records show that Dr. Ivins had unlimited access to material and laboratory facilities. However, it is still unclear whether the spores in the letters came directly from the flask under Dr. Ivins’s control or involved some further illicit culturing. In either case, material was illegally removed and laboratory facilities were misused—at a minimum, to dry and process the spores. It follows that research laboratories clearly represent a significant capability that can be potentially misused, and this capability is growing with the increasing number of high-containment laboratories. While efforts to strengthen inventory controls, assess and monitor personnel, and prevent facility misuse (for example, by video monitoring) have been undertaken to address insider threats, we are not aware of any evaluation of the effectiveness of these measures. While there are clearly major difficulties in imposing such controls in research laboratories, insider risk needs to be recognized and evaluated. Assuming that Dr. Ivins was the perpetrator in the anthrax attacks, he represents one rogue insider in a period of some 60 years, during which several thousand scientists and technicians had the opportunity to commit similar crimes. Thus, the probability of repeating that one event is, historically, very small. Devising any program to reliably reduce that figure for biological laboratory personnel is challenging. Furthermore, some DOD biological laboratory scientists and academicians we spoke with have pointed out that highly intrusive personnel reliability programs, which rely on profiling to identify insider threats, can have a negative effect on staff morale and performance by institutionalizing the concept that no one can be trusted. The National Science Advisory Board for Biosecurity reported that there is little evidence that personnel reliability measures are effective or have predictive value in identifying individuals who may pose an insider threat. In its report, the board recommended that “it is appropriate to enhance personnel reliability measures for individuals with access to select agents, but promulgation of a formal, national personnel reliability program is unnecessary at this time.” On February 11, 2004, DOD issued a directive (5210.88), “Safeguarding Biological Select Agents and Toxins” (BSAT). This directive established security policies and assigned responsibilities for safeguarding select agents and toxins. Specifically, this directive established, among other things, the following DOD policy: “Individuals who have a legitimate need to handle or use biological select agents and toxins, or whose duties afford access to storage and work areas, storage containers and equipment containing biological select agents or toxins shall be screened initially for suitability and reliability.
This means that they shall be emotionally and mentally stable, trustworthy, and adequately trained to perform the assigned duties and shall be the subject of a current and favorably adjudicated National Agency Check with Local Agency Checks and Credit Checks for military and contractor employees and an Access National Agency Check with credit checks and written inquiries for civilian employees with a reinvestigation every 5 years and they shall be evaluated on a continuing basis using the criteria issued by the [Under Secretary of Defense for Intelligence.]” On April 18, 2006, DOD issued Instruction 5210.89, “Minimum Security Standards for Safeguarding Select Agents and Toxins.” This instruction established, among other things, the criteria and requirements for personnel regarding a biological personnel reliability program (BPRP). The purpose of a BPRP is to (1) ensure that each individual, who has authorized access to BSAT and/or supervises personnel with access to biological restricted areas and BSAT, including responsible and certifying officials, meets the highest standards of integrity, trust, and personal reliability and (2) identify any potential risk to public health, safety, and national security. Following the announcement of the FBI anthrax investigation at USAMRIID, the Secretary of the Army organized a task force on August 7, 2008, to evaluate the U.S. Army biological surety program, including safety, security, and personnel reliability. In response, the Inter-Service Council for Biosecurity and Biosafety, General Officer Steering Committee, issued a report on December 12, 2008. This report focused on seven areas: transportation of select agents and toxins; biological safety; biological security/physical security; inspection; personnel reliability program/foreign personnel; inventory/accountability of select agents and toxins; and training of personnel. Review of all seven areas indicated that armed service policies, regulations, standards, and procedures in effect before 2008 met or exceeded all federal and DOD requirements. The services, however, agreed on the need to establish common standards in each area. In addition, on March 10, 2008, the Interagency Security Committee Standard defined the criteria and process to be used in determining the facility security level of a federal facility as the basis for implementing governmentwide facility security standards. In October 2008, the office of the Under Secretary of Defense for Acquisition, Technology, and Logistics asked the Defense Science Board Task Force on DOD Biological Safety and Security to address the following questions: Are current and proposed policies in DOD and military department biological safety, security, and biological personnel reliability programs adequate to safeguard against accidental or intentional loss/misuse of biological select agents and toxins (BSAT) by external or internal actors? Are current DOD-related laboratories and operations that use or store BSAT meeting stringent standards for safety, security, and personnel reliability? How do DOD and military department programs compare with other government agency, academic, and industry programs? How can DOD usefully employ experience in other areas requiring the utmost safety and reliability when handling dangerous material (for example, the nuclear personnel reliability programs) for biosecurity policy development and implementation? In May 2009, the Defense Science Board published its report. 
With regard to insider risk, the report concluded that “a determined adversary cannot be prevented from obtaining very dangerous biological materials intended for nefarious purposes, if not from DOD laboratories, then from other sources. The best we can do is to make it more difficult. We need to recognize this reality and be prepared to mitigate the effects of a biological attack.” In October 2008, the White House Office of Science and Technology Policy asked the National Science Advisory Board for Biosecurity (NSABB) to recommend strategies for enhancing personnel reliability among individuals with access to biological select agents and toxins. Specifically, the NSABB was asked to identify the optimal framework for ensuring personnel reliability so that the need for biosecurity was balanced with rapid progress in the life sciences. The NSABB concluded in its report that “there is currently insufficient evidence of the effectiveness of personnel reliability program measures towards mitigating the risk of an insider threat to warrant the additional significant burden on research institutions.” However, the NSABB did recommend a number of ways to enhance the culture of research responsibility and accountability at institutions that conduct select agent research, noting that the recommended actions could be accomplished without significant expenditures, resources, or disruptions of research. On January 9, 2009, an executive order established a governmentwide working group to strengthen laboratory biosecurity in the United States. The executive order asked the working group to submit to the President, no later than 180 days after the date of the order, an unclassified report, with a classified annex as required, that sets forth the following: “a summary of existing laws, regulations, guidance, and practices with respect to security and personnel assurance reviewed under subsection (a) of this section and their efficiency and effectiveness; recommendations for any new legislation, regulations, guidance, or practices for security and personnel assurance for all federal and nonfederal facilities; options for establishing oversight mechanisms to ensure a baseline standard is consistently applied for all physical, facility, and personnel security and assurance laws, regulations, and guidance at all federal and nonfederal facilities; and a comparison of the range of existing personnel security and assurance programs for access to biological select agents and toxins to personnel security and assurance programs in other fields and industries.” The working group submitted its draft report and recommendations to the White House on July 9, 2009. According to HHS, the draft report is to be formally reviewed and accepted by the co-chairs—the Secretaries of Defense and Health and Human Services—before it is made public. While it may be possible to quantify the financial costs required to initiate and maintain enhanced oversight procedures—such as controls of inventories and laboratory usage—the impact of such procedures on work output is unquantifiable but nevertheless very real. According to some experts and high-containment laboratory scientists, intrusive personnel reliability programs can also have an adverse impact on staff work effectiveness. Accordingly, the security benefits achieved by such procedures must be evaluated to obtain some understanding of the cost/benefit ratio. Such an evaluation could incorporate various stress tests and assessments of procedures against a range of risk scenarios. 
Effective evaluation could improve the cost/benefit ratio by concentrating on procedures with higher returns on investment and could be more acceptable to laboratory personnel by demonstrating objective benefits. Regular reevaluation is critical to avoid adding oversight procedures on a subjective rather than objective basis.

Inventory Procedures Did Not Impede Insider Misuse of Agents

Prior to the fall of 2001, there were no effective inventory control procedures at USAMRIID—or indeed other institutions that worked with select agents—that would have impeded insider misuse of such agents. Anthrax spores were held in a liquid solution in a flask (RMR-1029) that originally (October 22, 1997) contained 1000 ml of spore suspension with a concentration of 3x10^10 spores/ml. While the flask had been under the control of Dr. Ivins since 1997, other laboratory staff may also have had access to it. However, no one in USAMRIID was specifically responsible for monitoring the use of materials by scientists. According to USAMRIID officials, Dr. Ivins’s laboratory notebook contained a record of the amounts of material removed at various times between 1997 and 2004, when the FBI finally removed the flask from USAMRIID. Additional undocumented removals from the flask could have been disguised simply by adding water to restore the volume. This would have reduced the spore concentration, but this concentration was apparently never checked. Even if it had been, experts told us that the normal biological experimental error involved in counting spores could have disguised the loss of up to 5 percent of the material. It is unclear whether the anthrax spores put in the letters came directly from the flask after being dried or whether a very small and undetectable quantity from the flask was cultured to produce enough new spores for the letters. In either scenario, the self-replicating nature of microorganisms and the inherent error associated with determining the absolute number of microorganisms in solution make inventory control a formidable if not impossible task with currently available technologies. According to DSAT officials, even though Dr. Ivins’ alleged crime occurred prior to the expansion of the select agent regulations in 2002, DSAT performed an extensive 2-week inspection of the entire USAMRIID facility in September 2008. DSAT believes that its findings regarding USAMRIID’s inventory records contributed to the decision of DOD to stand down USAMRIID operations pending a thorough review of its inventories. In addition, DSAT referred USAMRIID to the HHS-OIG for further investigation regarding the entity’s apparent noncompliance with the select agent regulations. According to HHS-OIG, this referral is still an ongoing investigation. In 2006, a series of incidents at the high-containment laboratories at Texas A&M University (TAMU), and their aftermath, raised issues related to barriers to reporting laboratory accidents, inadequate and ineffective training for laboratory personnel, the failure to inform medical personnel about the agents the laboratory staff work with, and uncertainty about what constitutes a potential exposure. TAMU is registered with DSAT and approved for work on several select agents. TAMU has several BSL-3 laboratories and works extensively on animal diseases, including those caused by the select agents Brucella melitensis, Brucella abortus, and Brucella suis. Brucella can cause brucellosis in humans, a disease causing flu-like symptoms, such as fever and fatigue.
In severe cases, it can cause infections of the central nervous system. TAMU is also registered for use of Coxiella burnetii, an animal agent that can cause Q fever in humans. In February 2006, a laboratory worker from a non-select-agent laboratory was helping out with an experiment to aerosolize Brucella. The laboratory worker had no familiarity with the specifics of working with Brucella but did have experience working with the aerosol chamber. It was later determined that the laboratory worker had been exposed to the agent while cleaning the chamber after the experiment was run. At the time of the exposure, neither the exposed worker nor anyone else had any indication that an exposure had taken place. In fact, DSAT inspectors were on campus days after the Brucella exposure for a routine inspection but uncovered nothing that alerted them to what had happened. Symptoms did not start to appear in the exposed worker until more than a month after the exposure, and then the symptoms were flu-like. Confirmation of brucellosis was not made until another month had passed and the symptoms had worsened. However, once the brucellosis was identified, the worker notified appropriate authorities at TAMU. But no report was subsequently made to DSAT (as required by federal regulation), and a year passed before—by chance—an independent watchdog group reviewing unrelated documentation acquired through Texas’s freedom of information law uncovered the lapse in reporting. This prompted TAMU to notify DSAT. The laboratory worker at TAMU who was exposed to Brucella was not authorized to work with that agent. The laboratory worker was, we were told, being allowed in the laboratory only to help out with operating the aerosolization chamber. According to DSAT, TAMU failed to report to DSAT that it was conducting aerosolization work with Brucella. Therefore, DSAT had no reason to verify training, experimental plans, and risk assessments during its inspections. According to select agent regulations, all staff—not only staff that have access to select agents or toxins, but also staff that will work in or visit areas where select agents are handled or stored—are required to be trained in the specifics of any agent before they work with it. The training must address the particular needs of the individual, the work they will do, and the risks posed by the select agents and toxins. However, the worker at TAMU did not receive training in the specifics of Brucella, including its characteristics, safe handling procedures, and potential health effects. While the worker was experienced in general BSL-3 procedures, her normal work regimen involved working with Mycobacterium tuberculosis, and her supervisor surmised that the difference in the potential for infection from Brucella was partially to blame for the exposure. However, experts have told us that if procedures that are effective in avoiding exposure to live, virulent M. tuberculosis were being followed correctly, these should have been effective for Brucella despite the differences in the infectious dose (ID50). The exposed laboratory worker was highly experienced in handling M. tuberculosis, an infectious agent. The worker had been a laboratory director of a BSL-2 laboratory for the past 5 years, had a Ph.D. in microbiology, and was by many accounts highly competent and reliable. The worker applied the procedures governing safe work with M. tuberculosis to the Brucella experiment, but her experience with M. tuberculosis might have provided a false sense of security.
At the time of the exposure to Brucella at TAMU on February 9, 2006, the laboratory worker and others in the laboratory did not realize she had been infected. In fact, DSAT conducted a routine inspection of TAMU on February 22, 2006—13 days after the exposure—but had no way of knowing that it had happened. According to the exposed worker, she first fell ill more than 6 weeks after the exposure. At that time, the first consultation with her physician indicated that she had the flu. Institutions generally do not give medical providers information about the specific agents that laboratory staff work with. Therefore, the physician was not alerted to the possibility that the worker’s symptoms could be the result of exposure to an infectious agent. After the symptoms persisted, a consultation with an infectious disease specialist confirmed that the laboratory worker’s blood contained an unknown microorganism. At that point, the worker recalled her work with Brucella weeks earlier. The Texas State Public Health Laboratory confirmed the infection with Brucella on April 16, 2006—62 days after the exposure. During the interim, the worker had resumed her normal activities. By the time the diagnosis was made, the exposed laboratory worker had become seriously ill. The delay in recognizing the infection resulted in delay of appropriate treatment, thus aggravating her condition. Such a misdiagnosis is not uncommon with infectious diseases, as the initial symptoms often appear flu-like, and brucellosis is not generally endemic in the population. According to DSAT, the worker might have developed an even more severe infection, possibly affecting her central nervous system or the lining of her heart, if the worker had not recalled the experiment with Brucella and alerted her physician to this fact. The physician might have been able to correctly diagnose the infection more quickly if the physician had been informed of the agent the individual worked with. In this incident, it was fortunate that transmission of brucellosis beyond the initial exposed individual was difficult and that there was no risk of the infection spreading to the surrounding community. Many other agents— including those that are not select agents (such as SARS coronavirus and M. tuberculosis)—cause diseases that are transmitted from human to human through coughing or fluid transfer. In addition to the incident of exposure to Brucella, DSAT noted that TAMU failed to report several incidents of potential exposure to Coxiella burnetii—a select agent and the causative agent for Q fever in humans. While the Brucella exposure eventually became apparent because of clinical symptoms in the laboratory worker, the C. burnetii incidents raised questions about what constitutes sufficient evidence of an exposure that the entity is required to report to DSAT. For C. burnetii and other agents, periodically measuring the titer or antibody levels within the blood serum of laboratory workers working with those agents provides one indication of exposure. If a person’s titer level is higher than his or her baseline level, then it may be concluded that the person has been exposed to the agent. In response to the draft report, HHS stated that the titer should be at least four times higher than baseline to be considered an exposure. However, HHS did not provide any support for its assertion, and we could not find any scientific support for picking this level. 
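The disagreement over thresholds can be made concrete with a small sketch of the screening logic involved. The sketch below, in Python, is illustrative only: the function name, the example reciprocal titer values, and the handling of the fold-change cutoff are our assumptions for discussion and are not drawn from the select agent regulations, from HHS guidance, or from TAMU's actual serum-screening program.

```python
# Illustrative sketch of flagging an elevated antibody titer against a worker's baseline.
# A fold_cutoff of 1.0 corresponds to flagging "any elevation" over baseline; a value of
# 4.0 corresponds to the four-fold rule HHS described. Neither cutoff is validated here.

def titer_flag(baseline_titer: float, current_titer: float, fold_cutoff: float = 1.0) -> bool:
    """Return True if the current titer is elevated over baseline by at least fold_cutoff."""
    if baseline_titer <= 0:
        # No usable baseline on file; treat any detectable titer as warranting follow-up.
        return current_titer > 0
    elevated = current_titer > baseline_titer
    return elevated and current_titer >= baseline_titer * fold_cutoff

# Example with reciprocal titers (dilution factors) of 16 at baseline and 64 at the latest draw.
print(titer_flag(16, 64, fold_cutoff=1.0))  # True under an "any elevation" rule
print(titer_flag(16, 64, fold_cutoff=4.0))  # True under a four-fold rule (64 = 4 x 16)
print(titer_flag(16, 32, fold_cutoff=4.0))  # False under a four-fold rule (only a two-fold rise)
```

Whichever cutoff is chosen, a flag of this kind is only an indirect indicator: as discussed below, it would trigger investigation, reporting, and confirmatory testing rather than serve as a diagnosis.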
We consider that any titer elevation where that agent is being worked with in the laboratory requires further detailed investigation. In addition, the degree of titer elevation that can be considered as definitively diagnostic needs to be scientifically validated on an agent-by-agent basis. However, there are issues with using titer levels as an indication of exposure. For example, determining when the exposure took place is not straightforward, and methods for determining titers are not standardized across laboratories. TAMU has a program to monitor blood serum for those staff working with C. burnetii. While humans are very susceptible to Q fever, only about one- half of all people infected with C. burnetii show signs of clinical illness. During the DSAT inspection that was triggered by the uncovering of the Brucella incident, DSAT came across clinical records showing that several laboratory workers had elevated titers for C. burnetii. No reports of this possible exposure had been sent to DSAT. DSAT noted this issue and, on April 24, 2007, TAMU submitted the required Form 3 to DSAT. However, as a result of subsequent discussion with the individuals who had the elevated titers, TAMU officials began to doubt whether the elevated titers resulted from exposures that had occurred at TAMU. In one case, TAMU said, one of the infected laboratory workers had only recently been hired by TAMU but had worked in a clinical laboratory in China where C. burnetii was known to have been present. It is not clear how the elevated titer related to the employee’s baseline titer taken at the time of employment. In another case, the worker claimed to have been exposed many years earlier and to have always registered high, although the actual levels varied. DSAT officials disagreed with this interpretation and believed the high titers resulted from exposures at TAMU. TAMU officials told us that they initially responded to the uncovering of the elevated titer incidents by reporting to DSAT any subsequent elevated titer level identified in its laboratory workers. TAMU also told us that it is now unsure how to proceed; it has notified DSAT that, in its opinion, an exposure suggested by an elevated titer should be defined as having occurred only after clinical symptoms appear in the individual. TAMU has, therefore, ceased reporting incidents where there are only elevated titers. In the absence of clarity over the definition of exposure, TAMU officials have chosen to define it as they see fit. DSAT officials told us that they disagreed with TAMU’s interpretation. Reporting exposures only after clinical symptoms develop could have dangerous consequences for laboratory workers and even the public. DSAT conducted multiple follow-up inspections to assist TAMU in becoming compliant with the select agent regulations. In addition, on January 18, 2008, DSAT and APHIS posted a guidance document on the analysis of possible exposure incidents. According to DSAT, scenario 20 of this document specifically addresses the recommended response to an elevated antibody titer in a select agent worker. DSAT officials noted that reporting exposures only after clinical symptoms develop—given the requirements of the select agent regulations and the guidance provided in the theft, loss, and release guidance document—would be considered a violation of the select agent regulations. 
The common theme in the TAMU incidents was a lack of rigor in applying fundamental safety and training procedures coupled with a culture that embodied a reluctance to be open about problems both within the organization and with the regulator. According to our experts, such cultural reticence has historically been a factor in many previous incidents and can be remedied only by appropriate leadership at the highest level of the organization coupled with robust and continued action by the regulator.

Barriers to Reporting Need to Be Identified and Overcome

According to the literature and discussions with federal officials and experts, accidents in laboratories do occur, mostly as a result of human error due to carelessness, inadequate training, poor judgment, fatigue, or a combination thereof. In the case of theft, loss, occupational exposure, or release of a select agent, the laboratory must immediately report certain information to DSAT or APHIS. It has been suggested that there is a disincentive to report laboratory-acquired infections and other mishaps at research institutions because it could result in (1) negative publicity for the institution and the worker or (2) scrutiny from a granting agency that might lead to a suspension of research or an adverse effect on future funding. In order to enhance compliance with reporting requirements, barriers need to be identified, and targeted strategies need to be applied to remove those barriers. The literature identifies a number of barriers, including the lack of explicit standardized protocols; the lack of effective training on protocols; the lack of awareness that infection may have been laboratory-acquired; reporting systems that may have required individuals to pass through layers to reach the biosafety office (e.g., the supervisor, laboratory manager, or principal investigator); fear of punitive measures at the laboratory or institutional level; individual or institutional embarrassment; a poor relationship with medical support services (such as occupational safety and health services); and the lack of useful investigation/follow-up/feedback. In addition, these incidents need to be analyzed so that (1) biosafety can be enhanced by shared learning from mistakes and (2) the public can be reassured that accidents are thoroughly examined and the consequences of an accident are contained. One possible mechanism for analysis discussed in the literature is the reporting system used for aviation incidents that is administered by the National Transportation Safety Board and the Federal Aviation Administration. When mistakes are made, they are analyzed and learned from without being attributed to any one individual. Although experts have agreed that some form of personal anonymity would encourage reporting, it is not clear how this mechanism would be applied to high-containment laboratories where, for example, one may not know about the exposure or whether the event is significant enough to be reported.

Compliance with Regulations Regarding Agent- and Experimental Task-Specific Training Is Needed to Ensure Maximum Protection

The select agent regulations require safety risk assessments whenever work with select agents is proposed. Risk assessments are of paramount importance because the investigator, management, and biosafety representatives must establish guidelines for safe, secure, and efficient research. Personnel working with select agents need training to ensure their own safety and that of coworkers and the surrounding community.
Training is specifically designed to address select agent characteristics that include infectivity and pathogenicity. Training must also address hazardous operations such as intentional aerosolization, centrifugation, and homogenization. Some laboratories require inexperienced workers to be mentored by personnel experienced in containment procedures, a process that can take up to a year to complete. The mentor maintains a checklist of important operations that must be performed in a responsible manner before the worker will be allowed to work independently. Non-laboratory personnel who require access to high-containment laboratories (inspection, maintenance, and calibration staff) must also receive training that covers emergency response and agent-specific information. If TAMU had provided effective, measurable staff training—including protocol-specific training on agent characteristics for Brucella (infectivity and pathogenicity), common routes of infection, and medical signs and symptoms information—the worker might have been more aware of the dangers involved when cleaning the aerosol chamber and could have been protected from this exposure. Typical routes of infection differ for M. tuberculosis and Brucella, and normal procedures, including gowning and respiratory equipment, vary for the two agents. For example, the laboratory worker wore protective glasses, but they were not tight fitting. Experts told us that if procedures that are effective to avoid exposure to live virulent M. tuberculosis were being followed correctly, these should have been effective for Brucella despite the difference in the infectious dose. According to an expert who has managed high-containment laboratories, there are risks involved in working alternately in BSL-2 and BSL-3 laboratories with their different levels of procedures and practices. Laboratory workers may develop a routine with BSL-2 procedures that may be difficult to consciously break when working with the more dangerous agents and activities requiring BSL-3 containment. Adequate training can help to minimize the risks involved.

Standardized Mechanisms for Informing Medical Providers about the Agents Laboratory Staff Work with Must Be Developed

Severe consequences for the worker can result from delays in (1) recognizing when an exposure has occurred or (2) medical providers accurately diagnosing any resulting infection. Further, if the worker acquires a disease that is easily spread through contact (direct physical and/or respiratory), there can also be severe consequences for the surrounding community. According to the BMBL, the incidents causing most laboratory-acquired infections are often accidental and unknown. Those involved can conclude that an exposure took place only after a worker reports illness—with symptoms suggestive of a disease caused by the relevant agent—some time later. An infected person may be contagious for weeks until clinical symptoms become apparent. It is important that exposure be identified as soon as possible so that proper diagnosis and prompt medical treatment can be provided. To do so, medical providers need to be informed, in a standardized way, of all the agents that laboratory staff work with. The issue of recognizing exposure and infection is not new, and organizations have put in place systems and procedures that, while not infallible, greatly facilitate such recognition. As part of the oversight process, a review and evaluation of such procedures and their effectiveness are likely to be beneficial.
Current Confusion over the Definition of Exposure Needs to Be Addressed

According to our experts, a system that requires documentation of all accidental releases of select agents by whatever means and ensures that this information is available to the inspecting/oversight authority would provide both a valuable database and the foundation for any further investigation. Any accidental release in an area where unprotected personnel are present should then be considered a de facto exposure and be immediately reported to the oversight authority whether or not there is any resulting infection. Laboratory personnel who contract any infection, even if there is no evidence of exposure, should inform their physician about their work, including details of the specific agent(s) that they work with. When we asked DSAT officials about the confusion over the definition of an exposure, they agreed that the terms need to be clearly defined and stated that they were drafting new guidance. DSAT officials noted, however, that it is unwise to wait until clinical symptoms appear before determining that an exposure has taken place, as this could potentially endanger a worker's life and, in the case of a communicable disease, the lives of others. A DOD and NIH expert on this issue told us that correctly interpreting the meaning of elevated titers—whose characteristics can vary by agent, host, and testing laboratory—is challenging since many serological testing methods have not been validated. To help clarify any confusion about what is considered a reportable theft, loss, or release, CDC released a new guidance document. Scenario 20 in this document is an attempt to provide a simple approach by identifying three possible explanations for an elevated titer. However, it fails to go far enough and should state that an elevated titer of an agent that is being worked with in the laboratory should be regarded as prima facie evidence of exposure unless and until proved otherwise. Although clinical samples should then be taken at once to look for evidence of active infection, treatment of the person, as appropriate, should begin without delay to protect the health of the individual and, in some cases, safeguard the wider community. Serological testing is an indirect diagnostic tool suggesting, but not proving, exposure to an agent and is typically used to direct follow-up with more conclusive tests. Because elevated titers can be due to reasons other than active infection with a particular agent, the results need to be treated with caution. Nevertheless, an elevated antibody titer in cases where that agent is being worked with in the laboratory must always be a matter of concern and action. Serological testing is not definitive, and scenario 20 does not provide clear guidance with regard to follow-up actions. Accordingly, standard operating procedures need to be developed by the institutions working together with biosafety officers/responsible officials and occupational health physicians to describe the appropriate course of action when elevated titers are observed. The use of serological testing as a method to identify potential exposures to select agents must be approached with a high degree of caution. First, guidelines must be very clear regarding the intended use of any serology-based screening program. If routine screening indicates elevated antibody titers against a specific pathogen over baseline levels, it may suggest a laboratory exposure to a pathogen; however, alternative explanations are also feasible.
The increase in titers may indicate natural exposure to the agent (depending on the agent and location of the laboratory). The increase could also result from inconsistencies associated with laboratory testing. Most serological assays for select agents are not commonly conducted in clinical laboratories and are mostly performed in research laboratories. As such, these assays may not be properly controlled and validated. Assay-to-assay variation may be high, especially if experience is limited. Additionally, such assays are not particularly robust unless baseline specimens are available for comparison testing and serum samples are collected at relatively short intervals (for example, 3 to 6 months). Similarly, a serological screening program used as a method to diagnose infection or prevent the spread of contagious pathogens to the community is unlikely to be successful unless samples are taken at short intervals, as elevated antibody titers are usually detected after the period of maximum contagiousness of most pathogens. Therefore, the most appropriate use for a serological screening program would be to identify past exposures and to facilitate remedial training or conduct retrospective risk analyses that might lead to improved risk mitigation procedures and policies that might prevent future exposures. It is critical that guidance on the use of blood screening programs clearly identify the purpose of these programs and also provide guidance on how information from these programs should be used. Any suspicion of exposure should be reported and investigated, and the result of that investigation should be reported, thus providing a complete picture for DSAT and reducing subjective bias in reporting. The development of scientifically sound and standardized methods of identifying exposure is critical so that individual laboratory owners are not left to determine for themselves what is and what is not reportable. DSAT and APHIS could provide specific guidance on exposure benchmarks for each of the different select agents and toxins. On April 20, 2007, DSAT issued a cease-and-desist order suspending work with Brucella species at TAMU. On June 30, 2007, DSAT suspended all work with select agents at TAMU. The DSAT concerns included whether TAMU had a plan to prevent unauthorized access to select agents and toxins and a program that provided effective medical surveillance of occupational exposures to select agents and toxins. DSAT conducted a comprehensive site review and released a report in August 2007 that detailed a long list of safety violations, including instances in which the school did not immediately report or neglected to report laboratory worker infections or exposure to Brucella or C. burnetii. It also extended the suspension of research with select agents until the university addressed the issues in the August report. HHS's Office of Inspector General (OIG) imposed a fine on TAMU for the select agent violations. The HHS OIG was delegated authority to impose civil monetary penalties of up to $250,000 against an individual and up to $500,000 against any other person, including any entity. The HHS OIG and TAMU disagreed on the number of violations. In February 2008, TAMU agreed to pay a $1 million fine, which was an unprecedented amount for a fine paid by any institution under the select agent program.
Continuity of electrical power is vital for the safe functioning of high-containment laboratories, in particular since maintenance of essential pressure differentials using electrically driven fans provides an important barrier for preventing the uncontrolled release of agents. Lapses in electrical power that occurred at a CDC laboratory raise concerns about standards in high-containment laboratory facility design, management of construction, and operations. On June 8, 2007, the CDC campus in Atlanta experienced lightning strikes in and around its new BSL-4 facility, and both the Georgia Power-supplied primary power and CDC-supplied backup power from its centrally located generator plant were unavailable. The high-containment laboratory facility, not operational at the time, was left with only emergency battery power, which can provide limited electrical power for functions such as emergency lighting to aid in evacuation. Among other things, the outage shut down the high-containment laboratory's negative air pressure system. While investigating the power outage, CDC later determined that, some time earlier, a critical grounding cable buried in the ground outside the building had been cut by construction workers digging at an adjacent site. The cutting of the grounding cable, which had hitherto gone unnoticed by CDC facility managers, compromised the electrical system of the facility that housed the BSL-4 laboratory. With the grounding cable cut, the lightning strikes caused the circuit breakers in the building's switchgear to disengage or open, resulting in a loss of primary power to the building. In addition, when the circuit breakers disengaged, the CDC's backup generators were electrically isolated from the building and could not supply the building with power. It took approximately an hour for the CDC facility staff to reset the circuit breakers in the building to reengage the primary power. Because of the June 2007 power outage incident, questions about the design of the backup power system for the new facility resurfaced. When the CDC designed the backup power system for the new BSL-4 facility, it decided to use diesel generators centralized at CDC's utility plant that also serve other facilities, as well as functions such as chillers, on the campus. According to internal documents provided to us, during the design phase for the facility, some CDC engineers had questioned the choice of this remotely placed, integrated design rather than a simpler design using local backup generators near the BSL-4 facility. According to CDC facility officials, the full backup power capabilities for the new BSL-4 facility were not in place at the time of the power outage but were awaiting completion of other construction projects on campus. Once these projects are completed, these officials said, the new BSL-4 facility will have multiple levels of backup power, including the ability to get power from a second central utility plant on campus, if needed. But some CDC engineers that we talked with questioned the degree of complexity in the design. They worried that an overly integrated backup power system might be more susceptible to failure. As a result of the power outage, CDC officials conducted a reliability assessment for the entire campus power system, which included the backup power design for the new BSL-4 facility.
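To illustrate the kind of arithmetic that typically underlies such a reliability assessment, the sketch below compares a single local generator with a pair of centralized generators that must feed the building through shared switchgear. All availability figures, names, and the structure of the comparison are invented for illustration; they are not taken from CDC's assessment or from the contractor's reliability study.

```python
# Illustrative series/parallel availability arithmetic; all numbers are assumptions.
# Components that must all work combine in series (availabilities multiply); redundant
# components where any one suffices combine in parallel (unavailabilities multiply).

def series(*availabilities: float) -> float:
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(*availabilities: float) -> float:
    unavailability = 1.0
    for a in availabilities:
        unavailability *= (1.0 - a)
    return 1.0 - unavailability

GENERATOR = 0.98    # assumed probability a given generator starts and carries the load on demand
SWITCHGEAR = 0.995  # assumed availability of the shared switchgear/distribution path

local_generator = GENERATOR                                          # one generator at the building
central_plant = series(parallel(GENERATOR, GENERATOR), SWITCHGEAR)   # two generators behind a shared path

print(f"Local generator:        {local_generator:.4f}")  # 0.9800
print(f"Centralized, redundant: {central_plant:.4f}")    # 0.9946 under these assumed numbers
```

As the assumed numbers show, adding redundant generators helps only up to the availability of any element shared by all of them, which is the substance of the concern some CDC engineers raised about an overly integrated design.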
CDC concluded that its existing centrally located generators and planned power-related construction projects with equipment upgrades were more reliable and cost-effective than scenarios that locate generators at individual buildings. CDC officials reported that its backup power system is tested monthly, as required by building code. In commenting on our draft report, CDC provided studies and data that showed the theoretical reliability of the power system. However, CDC could not provide us documentation of actual non-testing instances where the backup generator system operated as designed. This incident highlighted the risks inherent in relying on standard building codes to ensure the safety of high-containment laboratories—as there are no building codes and testing procedures specifically for high-containment laboratories. In a second incident, on Friday January 4, 2008, CDC officials told us that nearby construction again damaged the grounding system of the building containing the new BSL-4 facility. The damage was observed when it occurred, but the cable was not repaired until the following week. While there was no loss of power to the BSL-4 facility, the potential for repeating a grounding-related power failure existed until repairs were made. According to CDC officials, at the time of both incidents, the new BSL-4 facility in building 18 was in preparation to become fully operational. No laboratory work of any kind had been conducted inside the BSL-4 laboratories, and no live agents were inside the facility as the commissioning process was still ongoing and the laboratories were not activated. However, given that the grounding cables were cut, it is apparent that the building’s integrity as it related to adjacent construction was not adequately supervised. Further, according to CDC officials, standard procedures under building codes do not require monitoring of the integrity of the electrical grounding of the new BSL-4 facility. CDC has now instituted annual testing of the electrical grounding system as the result of its review of these incidents. According to CDC officials, a third incident occurred on July 11, 2008, when a bird flew into the high voltage side of one of the Georgia Power transformers on the CDC campus, causing a failure in the primary electrical power supplied to buildings containing BSL-3 facilities. The CDC’s backup generators did not provide power because of the cascading effects of a failure by one of the generators. As in the June 2007 incident, the facilities were left with only temporary battery power, shutting down the fans powering the facility’s negative air pressure system. The generator problems were corrected by CDC in approximately an hour, at about the same time that Georgia Power completed its repairs and primary electrical power was restored. In any workplace building—regardless of the nature of its activities—there are safety features to protect the physical safety of workers. Various building codes cover many aspects of building design and construction required to achieve this safety objective, but the codes are subject to local interpretation. In general, the building codes enable (1) personnel to safely evacuate and (2) rescue personnel or firefighters to perform their jobs. By definition, additional hazards beyond those anticipated by standard building codes potentially exist in high-containment laboratories (BSL-3 and BSL-4), and they are addressed in BMBL. However, according to CDC and NIH, BMBL is only advisory. 
BMBL contains principles and guidelines, but the document does not provide specific detail on how functional requirements are to be translated into design solutions. According to our experts, there have been instances where modifications to laboratories were required after construction to achieve the necessary compliance. A more active, early, and continuing dialogue between builders, operators, and regulators may be beneficial in avoiding such waste and is especially relevant where tax dollars are committed to the creation or upgrading of high-containment laboratories. Because BMBL addresses issues relating to maintaining the containment of biological agents to protect both workers and the wider public, its guidelines are potentially more restrictive than the building codes. According to our expert panel, a clear and unambiguous set of standards stating the various capabilities that are required to maintain the integrity of all high-containment laboratories is necessary. Such a set of standards will need to integrate building codes with the BMBL provisions or amendments thereto. These standards should be national—not subject to local interpretation—and address the possibility that one or more emergency or backup systems may fail. Most importantly, any set of scenarios aimed at maintaining containment integrity must be empirically evaluated to demonstrate its effectiveness. Adequate oversight of any nearby activities—such as adjacent construction with its potential to compromise buried utilities—must also be taken into consideration when evaluating the safety measures required to manage the risks of high-containment laboratories. The CDC's BSL-4 laboratory was designed with multiple layers of electrical power so that if primary power failed, a secondary source of power would be in place for continuity of operations. Failure to monitor the system's integrity, however, compromised the ability of either power source to support critical operations. The power outages at CDC demonstrate a need to create understanding throughout the organization that effective biosafety involves layers of containment and, furthermore, that the loss of any one layer is serious even though the remaining layers, as intended, do maintain containment. Thus, procedures are required to regularly assess the functional integrity of every layer of containment and to initiate immediate corrective actions as required. The fact that containment, taken as a whole, is being maintained is not a sufficient measure of system integrity: each component must be individually assessed and its operational effectiveness validated on a regular schedule. According to DSAT, since the CDC laboratory was not registered under the select agent regulations at the time of the incident, no DSAT action was required. High-containment laboratories are highly sophisticated facilities that require specialized expertise to design, construct, operate, and maintain. Because these facilities are intended to contain dangerous microorganisms, usually in liquid or aerosol form, even minor structural defects—such as cracks in the wall, leaky pipes, or improper sealing around doors—could have severe consequences. Supporting infrastructure, such as drainage and waste treatment systems, must also be secure. In August 2007, foot-and-mouth disease contamination was discovered at several local farms near Pirbright in the U.K., the site of several high-containment laboratories that work with live foot-and-mouth disease virus.
Foot-and-mouth disease is one of the most highly infectious livestock diseases and can have devastating economic consequences. For example, a 2001 epidemic in the U.K. cost taxpayers over £3 billion, including some £1.4 billion paid in compensation for culled animals. Therefore, U.K. government officials worked quickly to contain and investigate this recent incident. The investigation of the physical infrastructure at the Pirbright site found evidence of long-term damage and leakage of the drainage system servicing the site, including cracked and leaky pipes, displaced joints, debris buildup, and tree root ingress. While the definitive cause of the release has not been determined, it is suspected that contaminated waste water from Pirbright's laboratories leaked into the surrounding soil from the deteriorated drainage pipes and that live virus was then carried off-site by vehicles splashed with contaminated mud. The cracked and leaky pipes found at Pirbright are indicative of poor maintenance practice at the site. The investigation found that (1) monitoring and testing for the preventive maintenance of pipe work for the drainage system was not a regular practice on-site and (2) a contributing factor might have been a difference of opinion over responsibilities for maintenance of a key pipe within the drainage system. High-containment laboratories are expensive to build and expensive to maintain. Adequate funding for each stage needs to be addressed. Typically, in large-scale construction projects, funding for initial construction comes from one source, but funding for ongoing operations and maintenance comes from another. For example, NIAID recently funded 13 BSL-3 laboratories as regional biocontainment laboratories (RBL) and 2 BSL-4 laboratories as national biocontainment laboratories (NBL). According to NIAID, it contributed to the initial costs for planning, design, construction, and commissioning and provided funding to support the operation of these facilities. For these laboratories, the universities are partially responsible for funding maintenance costs. The Pirbright incident shows that beyond initial design and construction, ongoing maintenance plays a critical role in ensuring that high-containment laboratories operate safely and securely over time. Because even the smallest of defects can affect safety, ensuring the continuing structural integrity of high-containment laboratories is an essential recurring activity. The failure of part of the physical infrastructure at the U.K.'s Pirbright facility and the outbreak of foot-and-mouth disease highlight the importance of ongoing maintenance of such facilities, together with clear lines of responsibility regarding shared infrastructure facilities. In addition, this incident and other incidents emphasize the importance of regulators and laboratories working in partnership to either ensure that funding to maintain the infrastructure is available or alter work programs and eliminate activities that cannot be performed safely. Since the outbreak of foot-and-mouth disease originating from Pirbright, a number of regulatory decisions have been made: 1. The U.K. government undertook a review of the regulatory framework governing work with animal pathogens that resulted in a November 2007 report.
The government accepted all the report's recommendations, which included (1) moving regulation of work with animal pathogens from Defra to HSE and (2) developing a single regulatory framework covering work with human and animal pathogens based on the model provided by the Genetically Modified Organisms (Contained Use) Regulations 2000. This framework adopts a risk-based approach to regulation. 2. The Specified Animal Pathogens Order (SAPO) was amended in April 2008 to give inspectors increased powers, including the power to serve improvement and prohibition notices on entities (called duty holders in the U.K.) to remedy poor standards in such areas as containment and management. At the same time, HSE entered into an agency agreement with Defra to inspect premises where work with SAPO agents is carried out before Defra issues licenses; the license conditions are based on recommendations from HSE. Furthermore, HSE inspectors investigate any accidents and also proactively inspect facilities to ensure compliance with the license conditions. 3. Both organizations at Pirbright (Institute for Animal Health (IAH) and Merial) had their licenses amended or withdrawn following the outbreak. The IAH license was amended to allow diagnostic work (in the epidemiology building) and a limited amount of research in the arbovirology building. No animal work has been licensed to date, although new animal house facilities are nearing completion, and work may be licensed later this year. 4. All the drainage systems on-site have been tested and relined, and a new dual containment system has been laid to connect laboratories to a refurbished heat treatment plant. This new system is not yet operational, although it is in the final stages of commissioning. In the meantime, no laboratory or manufacturing effluent is discharged to the relined drainage system unless it has been heat treated by autoclaving (IAH) or been through a validated heat treatment cycle (Merial). The only effluent going to the drain and to the final chemical treatment plant is shower water, which should not contain virus as all activities are carried out in cabinets or in enclosed systems. 5. A newly refurbished building on the IAH site has recently been licensed to allow small-scale research on a number of SAPO 4 viruses. 6. Merial was fully relicensed following amendments to its procedures and joint Defra and HSE inspections. The new licenses are more detailed than the original versions and impose many more license conditions on the company. 7. No enforcement action has been taken against either organization following the outbreak of foot-and-mouth disease. The enforcing body (part of the local council) decided that there was insufficient evidence to prosecute either IAH or Merial. High-containment laboratories provide facilities that are needed for basic research, development of detection technologies, and diagnostic and medical countermeasures for biothreats. Accordingly, facilities are specialized and cannot easily be converted from one function to another. Medium- to long-term advance planning for the appropriate capacity levels is therefore essential, as is knowledge of existing capacity.
Such advance planning needs to take into account (1) the projected future balance between biodefense and more traditional public health work, (2) the specific infectious disease problems and targets that the expansion is meant to address, and (3) targets for the laboratory expansion's timetable or benchmarks as to when specific capacities need to be available. We were unable to identify any governmentwide strategic evaluation of these issues for high-containment laboratories. Furthermore, since no single agency is in charge of the current expansion, no one is determining the aggregate risks posed by the expansion. As a consequence, no federal agency can determine whether high-containment laboratory capacity now falls short of, meets, or exceeds the national need, or whether it is at a level that can be operated safely. If an agency were tasked or a mechanism were established with the purpose of overseeing the expansion of high-containment laboratories, it could develop a strategic plan to (1) ensure that the number and capabilities of potentially dangerous high-containment laboratories are no greater or less than necessary, (2) balance the risks and benefits of expanding such laboratories, and (3) determine the type of oversight needed. Such an agency or mechanism could analyze the biothreat problems that need to be addressed by additional BSL-3 and -4 laboratories, the scientific and technical capabilities and containment features that such laboratories need to have, how the laboratories should be distributed geographically, and how the activities of the laboratories would be coordinated to achieve intended goals. Standards for several key issues have not been developed. The agency or mechanism responsible for overseeing the expansion of high-containment laboratories could also be responsible for coordinating with the scientific community to develop guidelines for high-containment laboratory design, construction, and commissioning, as well as training standards for laboratory workers; providing definitions for exposure; developing appropriate inventory control measures; and providing guidance on the most efficient approach to personnel reliability programs. The oversight agency or mechanism could also address issues related to the ongoing funding needs of high-containment laboratories. While NIAID has provided funding to build RBLs and NBLs, these laboratories are expected to compete for funding from NIH to sustain their research. It is unclear what will happen to these facilities, their trained personnel, and their technology if no such funding is available. Further, as these facilities and other high-containment laboratories age, adequate funding sources must be identified for upgrades and maintenance, or the risks that they pose may outweigh their benefits. Once laboratories have been commissioned and begin operating, continuing maintenance and testing/validation programs are needed to ensure that operating standards and regulatory compliance are maintained. As facilities age, the costs of such programs will rise and are likely to consume an increasing proportion of budgets. Although this affects federal, industrial, and academic laboratories, the impact is likely to be greatest on academic laboratories. Although federal laboratories are subject to annual funding, they tend to have programs that have long-term commitments and are not usually subject to major changes even if principal investigators (scientists) relocate.
Industrial laboratories exhibit similar stability of operations once they are committed to projects and programs. In all these cases, maintenance budgets are less tied to funding for research than are those of academic laboratories, which are highly dependent on research grant funding to support both infrastructure maintenance and research programs. Indeed, the two activities may compete for available money. Relocation of a principal investigator who is the recipient of research grant funding can create problems for the institution in maintaining the laboratory facilities. Given the high costs of creating high-containment laboratories, consideration also needs to be given to the issue of their maintenance and support as distinct from funding for research activity. The four incidents at USAMRIID, TAMU, CDC, and Pirbright exemplify a number of failures of systems and procedures that are meant, in combination, to maintain the biosafety of high-containment laboratories to protect laboratory workers and the public. DSAT and APHIS could examine these incidents and apply the lessons learned across the program. These incidents have been described and analyzed in detail both because they are recent and because detailed information was available about the various factors involved. Unfortunately, the incidents and their causal factors are not unique, and the scientific literature contains information about many incidents occurring over decades that often involved similar factors and the failure to maintain adequate biosafety. Overall, the safety record of high-containment laboratories has been good, although a number of weaknesses have become apparent over time. Consequently, along with expansion there needs to be a commensurate development of both operational and oversight procedures to address known deficiencies and, as far as practicable, proactively evaluate future risks. Laboratory operators, in collaboration with regulators, need to develop and work through potential failure scenarios and use that information to develop and put in place mechanisms to challenge procedures, systems, and equipment to ensure continuing effectiveness. We recommend that the National Security Advisor, in consultation with the Secretaries of Health and Human Services (HHS), Agriculture (USDA), Defense (DOD), and Homeland Security (DHS); the National Intelligence Council; and other executive departments as deemed appropriate, identify a single entity charged with periodic governmentwide strategic evaluation of high-containment laboratories that will (1) determine the number, location, and mission of the laboratories needed to effectively meet national goals to counter biothreats; the existing capacity within the United States; the aggregate risks associated with the laboratories' expansion; and the type of oversight needed, and (2) develop, in consultation with the scientific community, national standards for the design, construction, commissioning, and operation of high-containment laboratories, specifically including provisions for long-term maintenance. We recommend that the Secretaries of HHS and USDA develop (1) a clear definition of exposure to select agents and (2) a mechanism for sharing lessons learned from reported laboratory accidents so that best practices—for other operators of high-containment laboratories—can be identified.
Should the Secretaries consider implementing a personnel reliability program for high-containment laboratories to deal with insider risk, we recommend that they evaluate and document the cost and impact of such a program. Recognizing that biological agent inventories cannot be completely controlled at present, we also recommend that the Secretaries of HHS and USDA review existing inventory control systems and invest in and develop appropriate technologies to minimize the potential for insider misuse of biological agents. We obtained written comments on a draft of our report from the Secretaries of HHS and USDA. The Executive Office of the President: National Security Council did not provide comments. HHS and USDA concurred with our recommendations that were directed to them (see appendixes VII and VIII). HHS officials also provided general comments, including some concerns that are discussed in appendix VII. In addition, DOD, HHS, and USDA officials provided technical comments, which have been addressed in the body of our report, as appropriate. We are sending copies of this report to the Executive Office of the President; the Attorney General; and the Secretaries of Agriculture, Defense, Health and Human Services, and Homeland Security. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2700 or [email protected] or Sushil K. Sharma, Ph.D., Dr.PH, at (202) 512-3460 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX. To determine the extent of expansion in the number of high-containment laboratories and the areas experiencing growth, we interviewed agency officials and experts and reviewed documents provided by agencies and scientific literature. To determine which federal agency has the mission to track and determine the aggregate risks associated with the proliferation of BSL-3 and BSL-4 laboratories in the United States, we surveyed 12 federal agencies that are involved with these laboratories in some capacity—for example, research, oversight, or monitoring. The survey requested information on whether the agency (1) has a mission to track the number of high-containment laboratories, (2) has a need to know the number of operating BSL-3 and BSL-4 laboratories, and (3) knows that number. The agencies that received our survey included the Department of Agriculture; the Department of Commerce; the Department of Defense; the Department of Energy; the Environmental Protection Agency; the Department of Health and Human Services, including the Centers for Disease Control and Prevention (CDC); the Department of Homeland Security; the Department of the Interior; the Department of Justice, including the Federal Bureau of Investigation; the Department of Labor, including the Occupational Safety and Health Administration; the Department of State; and the Department of Veterans Affairs. In addition, we sent our survey to intelligence agencies, including the Central Intelligence Agency, the National Counter-Terrorism Center, the Defense Intelligence Agency, and the Office of Intelligence Analysis within DHS. 
To supplement existing information on the current number of BSL-3 and BSL-4 laboratories in the United States, we surveyed 724 individuals identified through various open sources as knowledgeable contacts on biosafety laboratories, using a self-administered electronic questionnaire posted on the World Wide Web between April 2007 and May 2007. We obtained responses from 295 respondents, for an overall response rate of 41 percent. Several important limitations should be noted about our survey. First, the universe of BSL-3 and -4 laboratories is unknown. While we used multiple sources to develop our list of potential respondents, there are likely other laboratories that we were unable to identify. Second, there may be duplicate responses in cases where multiple persons responded to the survey for a single institution. The data from our questionnaire are sufficiently reliable to demonstrate that there are BSL-3 or -4 laboratories that do not work with select agents. We also met with officials of the Division of Select Agents and Toxins and the Animal and Plant Health Inspection Service to gain additional information about the expansion of high-containment laboratories. Finally, we reviewed documents these agencies provided, including pertinent legislation, regulations, and guidance, and reviewed scientific literature on risks associated with high-containment laboratories. To develop lessons learned from recent incidents at four high-containment laboratories, we interviewed academic experts in microbiological research involving human, animal, and plant pathogens and conducted site visits at selected federal, civilian, military, academic, and commercial BSL-3 and BSL-4 laboratories, including the sites involved in the recent incidents. Specifically, we conducted site visits at CDC and Texas A&M University (TAMU); talked to United Kingdom officials at the Health and Safety Executive and the Department for Environment, Food, and Rural Affairs; and reviewed documents and inspection reports. To discuss the incidents at TAMU and CDC, we conducted site visits and interviewed the relevant officials. During our site visit to CDC, we interviewed relevant officials, including officials of CUH2A, Inc.—the contractor that designed the backup power system for the new BSL-4 laboratory in Atlanta—as well as the expert hired by this firm to conduct the reliability study for the backup power system. We conducted our work from September 2005 through June 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The expert panel that reviewed this report comprised scientists with substantive expertise in microbiological and select agent research and the operations of high-containment laboratories. The following were the panel members:
Peter Emanuel, Ph.D., Office of Science and Technology Policy, Executive Office of the President
Gigi Kwik Gronvall, Ph.D., Center for Biosecurity of the University of Pittsburgh Medical Center, University of Pittsburgh
George V. Ludwig, Ph.D., U.S. Army Medical Research and Materiel Command, Ft. Detrick, Maryland
Jack Melling, Ph.D. (Retired), U.K. Microbiological Research Authority, Porton Down, United Kingdom
Alan Jeff Mohr, Ph.D. (Retired), Life Sciences Division, U.S. Army, Dugway Proving Ground, Tooele, Utah
Suresh D. Pillai, Ph.D., Texas A&M University, College Station, Texas
Janet Shoemaker, American Society for Microbiology, Washington, D.C.

HHS Select Agents and Toxins
Abrin
Botulinum neurotoxins
Botulinum neurotoxin producing species of Clostridium
Cercopithecine herpesvirus 1 (Herpes B virus)
Clostridium perfringens epsilon toxin
Coccidioides posadasii/Coccidioides immitis
Conotoxins
Coxiella burnetii
Crimean-Congo haemorrhagic fever virus
Diacetoxyscirpenol
Eastern Equine Encephalitis virus
Ebola virus
Francisella tularensis
Lassa fever virus
Marburg virus
Monkeypox virus
Reconstructed 1918 Influenza virus
Ricin
Rickettsia prowazekii
Rickettsia rickettsii
Saxitoxin
Shiga-like ribosome inactivating proteins
Shigatoxin
South American Haemorrhagic Fever viruses (Flexal, Guanarito, Junin, Machupo, Sabia)
Tetrodotoxin
Tick-borne encephalitis complex (flavi) viruses (Central European Tick-borne encephalitis, Far Eastern Tick-borne encephalitis, Kyasanur Forest disease, Omsk Hemorrhagic Fever, Russian Spring and Summer encephalitis)
Variola major virus (Smallpox virus) and Variola minor virus (Alastrim)
Yersinia pestis

USDA Select Agents and Toxins
African horse sickness virus
African swine fever virus
Akabane virus
Avian influenza virus (highly pathogenic)
Bluetongue virus (exotic)
Bovine spongiform encephalopathy
Camel pox virus
Classical swine fever virus
Ehrlichia ruminantium (Heartwater)
Foot-and-mouth disease virus
Goat pox virus
Japanese encephalitis virus
Lumpy skin disease virus
Malignant catarrhal fever virus (Alcelaphine herpesvirus type 1)
Menangle virus
Mycoplasma capricolum subspecies capripneumoniae (contagious caprine pleuropneumonia)
Mycoplasma mycoides subspecies mycoides small colony (MmmSC) (contagious bovine pleuropneumonia)
Peste des petits ruminants virus
Rinderpest virus
Sheep pox virus
Swine vesicular disease virus
Vesicular stomatitis virus (exotic): Indiana subtypes VSV-IN2, VSV-IN3
Virulent Newcastle disease virus

Overlap Select Agents and Toxins
Bacillus anthracis
Brucella abortus
Brucella melitensis
Brucella suis
Burkholderia mallei (formerly Pseudomonas mallei)
Burkholderia pseudomallei (formerly Pseudomonas pseudomallei)
Hendra virus
Nipah virus
Rift Valley fever virus
Venezuelan Equine Encephalitis virus

USDA Plant Protection and Quarantine (PPQ) Select Agents and Toxins
Peronosclerospora philippinensis (Peronosclerospora sacchari)
Phoma glycinicola (formerly Pyrenochaeta glycines)
Ralstonia solanacearum race 3, biovar 2
Schlerophthora rayssiae var zeae
Synchytrium endobioticum
Xanthomonas oryzae pv. Oryzicola
Xylella fastidiosa (citrus variegated chlorosis strain)

There are a number of biological agents causing severe illness or death that are not select agents. Some non-select agents are recommended for work, research, and production safely under BSL-2 containment (BMBL, 5th Edition). These agents are listed in table 11. Several of these non-select agents may require BSL-3 containment for specific reasons, including production of aerosols or large-scale production of these organisms (BMBL, 5th Edition). These agents are listed in table 12. According to DOD officials, DOD did not have a policy document specific to biological select agents and toxins (BSAT) or high-containment laboratories in 2001. In 2001, all U.S.
Army high-containment laboratories working with select agents were registered with CDC (under 42 C.F.R. § 72.6). Army safety regulations in place at that time required the following: 1. A hazard analysis must be conducted to determine safety precautions, necessary personnel protection, engineering features, and procedures to prevent exposure for all agents. The Army utilized the risk analysis technique of maximum credible events, which examines the consequences of realistic worst-case scenarios. 2. Facilities must have standard operating procedures, training and proficiency requirements, medical surveillance, emergency preparedness procedures (including advance notification to local, state, regional, and federal emergency response personnel), hazard labeling, disposal and maintenance controls, and protective equipment for all work with agents. 3. Quarterly inspections for biosafety level (BSL)-1 and BSL-2 laboratories and monthly inspections for BSL-3 and BSL-4 laboratories must be conducted. 4. All mishaps must be reported and investigated. Medical surveillance of all workers present must begin immediately after a mishap. 5. Access control procedures were required to keep people not needed to operate biological laboratories from entering. 6. Federal, state, and local laws must be obeyed when transporting agents. 7. Components that contract out biological defense work must prepare written procedures that set guidelines for facilities, safety, inspections, and risk analysis. They were also required to monitor contractor performance in meeting safety requirements, which includes pre-award inspections, annual inspections of BSL-3 facilities and semiannual inspections of BSL-4 facilities, documentation of safety training programs, designation of an individual responsible for safety, and storage and disposal procedures. Contractors working at BSL-3 and BSL-4 facilities must prepare a plan for controlling laboratory mishaps. 8. Facilities must have published safety plans and monitoring procedures that they coordinated with federal, state, and local emergency services and practiced with emergency groups. An occupational health program, including medical surveillance examinations, was also required. 9. The regulations also set out operational requirements, including laboratory techniques, based on biosafety level, and emergency procedures, such as establishing evacuation procedures and an emergency alarm system. 10. Facilities must abide by personal protective equipment requirements (based on biosafety level), decontamination and disposal requirements and shipping restrictions, and facility specifications based on biosafety level and engineering controls. These regulations are located at 32 C.F.R., parts 626 and 627. Army pamphlet 385-69 also prescribes the minimum safety criteria and technical requirements and is used in conjunction with these regulations. Additionally, since USAMRIID was designated a “restricted area” in 1995, a National Agency Check was also required for general unescorted access for all staff. The USAMRIID Special Immunizations Clinic provided baseline medical and occupational health evaluations of fitness to work in the laboratories and provided vaccines. Annual medical interviews, physical exams, and laboratory reassessments were conducted for changes in health, medication, and duties. 
According to information provided to us by USAMRIID, security clearance was not and is not required to work in high-containment laboratories, and having a security clearance did not by itself allow access to high-containment laboratories. In 2001, there was no centralized requirement for inventory control and accountability. Individual scientists maintained their own stocks and accountability. CDC’s regulations in 2001 (42 C.F.R. § 72.6) focused on the transfer of select agents and thus did not address personnel security, insider risk, or inventory control of select agents. While Army regulations required that the consequences of realistic worst-case scenarios be examined, insider risk was not considered in such examinations. In commenting on the draft report, HHS officials stressed the importance of the Centers for Disease Control and Prevention’s (CDC) integrated “three-legged approach” to biocontainment at high-containment laboratories. They provided the following technical details of their biocontainment experiences. “According to CDC officials, monitoring one-pass directional airflow through negatively pressurized containment zones, enclosed and separated by airtight doors and structure, with HEPA filtration on both the supply side (one HEPA filter) and the exhaust side (two HEPA filters), along with robust Operations and Maintenance protocols (O&M) provides a sound facility design and construction component for CDC’s ‘three-legged’ approach to biocontainment. This approach, which is described in Section II of the BMBL, stresses that laboratory practice and technique is the most important element of a comprehensive containment strategy, in conjunction with appropriate safety equipment (as a primary barrier) and facilities design/construction and engineering (as a secondary containment barrier). CDC maintains that while directional airflow and negative pressure in BSL-4 laboratories is a critical engineering component of a normal ‘safe’ operating environment, engineering systems do fail from time to time, for various reasons. “In the event of a loss of power to the supply and exhaust fans and controls that maintain negative pressure conditions in CDC’s BSL-4 laboratories, the laboratories go to a ‘static pressure’ status, whereby secondary containment is maintained by the airtight door gaskets, airtight construction of interior walls, floors, and ceiling within the BSL-4 laboratory block, and because the HEPA filters on the supply side and exhaust ducts are functionally impermeable to air for certain periods of time under static pressure conditions. In effect, proper design, construction, and O&M render the CDC BSL-4 laboratories into airtight boxes during a complete loss of normal and standby power during these events. Containment was also preserved because CDC’s laboratorians are properly trained in safe laboratory practices and procedures, and BSL equipment and safety protocols (primary barriers) functioned as intended. Equipment within the BSL-4 laboratories include biological safety cabinets, centrifuges, and heavy-duty personal protective suits (i.e., ‘space suits’). “In the lightning and bird strike incidents outlined above, secondary engineering controls failed due to temporary construction-related impacts, rather than typical operations conditions, and all but UPS-generated life safety required power was lost in B 18.
However, because CDC had appropriate and effective laboratory practices and safety equipment in place, and because a static pressure condition had been maintained (as a secondary barrier), the chance of an accidental release of dangerous pathogens into the environment so as to cause a significant risk to CDC workers or the surrounding community did not exist. “According to CDC officials, the lightning and bird strike incidents are not typical of O&M-related incidents that CDC has experienced over the years since they are directly related to the intense construction activities at the Roybal Campus that have been ongoing since approximately 2000, and are expected to largely conclude in approximately 2011. The construction activities are the execution of the Agency’s 10-Year Master Plan to replace the many 50-year-old buildings, including laboratories and infrastructure at the Roybal and Chamblee Campuses. CDC data indicate that even with the lightning and bird strike incidents, the Roybal Campus electrical distribution system has had a 99.9997 percent reliability rate, or approximately 10 hours of documented down-time due to power outages during 78,840 hours of total run time (2000-2008). CDC expects to reduce electrical system downtime once construction activities have ceased.”
The following are GAO’s responses to the Department of Health and Human Services’ (HHS) comments in a letter dated July 20, 2009.
1. We agree with HHS. Our report acknowledges that no executive or legislative mandate currently requires any agency to gather this information, and we are making a recommendation in this regard.
2. We agree that instituting new regulatory reporting requirements about the location of all BSL-3 laboratories could create a burden on private sector laboratories and would require new federal resources.
3. Our report did acknowledge information from CDC officials stating that at the time of both incidents, the new BSL-4 facility was not fully operational and that no agents were inside the facility. However, we believe that CDC is missing the point. Given that grounding cables were cut, it is apparent that the building’s integrity as it related to adjacent construction was not adequately supervised. CDC officials stated that standard procedures under building codes did not require monitoring of the integrity of the new BSL-4 facility’s electrical grounding. This incident highlighted the risks inherent in relying on standard building codes to ensure the safety of high-containment laboratories, as there are no building codes and testing procedures specifically for those laboratories. We agree with CDC that high-containment laboratories include a three-legged and multi-tiered approach to containment. However, to have a fully safe system of containment, any failure of one tier or one of the legs needs to be rapidly identified and corrected. Our focus in this incident was on CDC’s power system and lessons that can be learned for other high-containment labs.
4. We modified the language in our report to note that a loss of power could have serious consequences under certain circumstances.
5. While we agree that the critical differences in purposes and functions that distinguish code-required emergency power from legally required standby power are important when planning and designing electrical distribution systems for biological laboratories and other science buildings, this does not materially affect our findings.
6. We disagree with CDC that the titer should be at least four times higher than the baseline level to be considered an exposure. Most importantly, any increase in titers involving an agent that is being worked on at a laboratory should be taken seriously and investigated. The laboratory safety aspect of antibody titers is clearly different from that which applies in a general clinical situation. The increase in titers may indicate natural exposure to the agent (depending on the agent and the location of the lab) or may result from inconsistencies associated with laboratory testing. Most serological assays for select agents are not commonly conducted in clinical laboratories and are primarily performed in research laboratories. As such, these assays may not be properly controlled and validated. Assay-to-assay variation may be high, especially if experience is limited. Additionally, such assays are not particularly robust unless baseline specimens are available for comparison testing and serum samples are collected within relatively short time frames (for example, 3 to 6 months).
7. We agree with HHS that national goals may change over time. Therefore, it is important that the strategic evaluation of high-containment laboratories be undertaken periodically. We have modified our recommendation to include periodic evaluation.
8. Our report recommends that a single entity be charged with governmentwide strategic evaluation of high-containment laboratories. While we agree that there are several challenges, having a single agency would facilitate a coordinated response.
9. We agree that future evaluations of laboratory capacity and supply should examine the needs of the clinical laboratories related to their high-containment capacity. However, knowing the number of laboratories is a key requirement for making such an evaluation effective.
10. We disagree. We believe that national standards contribute to ensuring that all high-containment laboratories meet minimum standards. National standards are valuable not only in relation to new laboratory construction but also in ensuring compliance for periodic upgrades. We agree that BMBL provides guidance on design and construction; however, the guidance does not provide standards that must be adhered to. While sharing lessons learned can be beneficial to meeting standards, it is not an adequate substitute for the standards themselves. If existing laboratories do not meet national standards, we believe that these laboratories need to be brought into compliance.
In addition to the contact named above, Sushil Sharma, Ph.D., DrPH (Assistant Director), Amy Bowser, George Depaoli, Terrell Dorn, Jeff McDermott, Jean McSween, Jack Melling, Ph.D., Corey Scherrer, Linda Sellevaag, and Elaine Vaurio made key contributions to this report.
Biological Research: Observations on DHS’s Analyses Concerning Whether FMD Research Can Be Done as Safely on the Mainland as on Plum Island. GAO-09-747. Washington, D.C.: July 30, 2009.
Biosafety Laboratories: BSL-4 Laboratories Improved Perimeter Security Despite Limited Action by CDC. GAO-09-851. Washington, D.C.: July 7, 2009.
Biosafety Laboratories: Perimeter Security Assessment of the Nation’s Five BSL-4 Laboratories. GAO-08-1092. Washington, D.C.: September 17, 2008.
High-Containment Biosafety Laboratories: DHS Lacks Evidence to Conclude that Foot-and-Mouth Disease Research Can Be Done Safely on the U.S. Mainland. GAO-08-821T. Washington, D.C.: May 22, 2008.
High-Containment Biosafety Laboratories: Preliminary Observations on the Oversight of the Proliferation of BSL-3 and BSL-4 Laboratories in the United States. GAO-08-108T. Washington, D.C.: October 4, 2007.
Biological Research Laboratories: Issues Associated with the Expansion of Laboratories Funded by the National Institute of Allergy and Infectious Diseases. GAO-07-333R. Washington, D.C.: February 22, 2007.
Homeland Security: Management and Coordination Problems Increase the Vulnerability of U.S. Agriculture to Foreign Pests and Disease. GAO-06-644. Washington, D.C.: May 19, 2006.
Plum Island Animal Disease Center: DHS and USDA Are Successfully Coordinating Current Work, but Long-Term Plans Are Being Assessed. GAO-06-132. Washington, D.C.: December 19, 2005.
Homeland Security: Much Is Being Done to Protect Agriculture from a Terrorist Attack, but Important Challenges Remain. GAO-05-214. Washington, D.C.: March 8, 2005.
Combating Bioterrorism: Actions Needed to Improve Security at Plum Island Animal Disease Center. GAO-03-847. Washington, D.C.: September 19, 2003.
Homeland Security: CDC’s Oversight of the Select Agent Program. GAO-03-315R. Washington, D.C.: November 22, 2002.
U.S. laboratories working with dangerous biological pathogens (commonly referred to as high-containment laboratories) have proliferated in recent years. As a result, the public is concerned about the oversight of these laboratories. The deliberate or accidental release of biological pathogens can have disastrous consequences. GAO was asked to determine (1) to what extent, and in what areas, the number of high-containment laboratories has increased in the United States, (2) which federal agency is responsible for tracking this expansion and determining the associated aggregate risks, and (3) lessons learned from highly publicized incidents at these laboratories and actions taken by the regulatory agencies. To carry out its work, GAO surveyed and interviewed federal agency officials (including relevant intelligence community officials), consulted with experts in microbiology, reviewed literature, conducted site visits, and analyzed incidents at high-containment laboratories.
The recent expansion of high-containment laboratories in the United States began in response to the need to develop medical countermeasures after the anthrax attacks in 2001. Understandably, the expansion initially lacked a clear, coordinated, governmentwide strategy. In that emergency situation, the expansion was based on individual agency perceptions of the capacity their high-containment labs required as well as the availability of congressionally approved funding. Decisions to fund the construction of high-containment labs were made by multiple federal agencies in multiple budget cycles. Federal and state agencies, academia, and the private sector considered their individual requirements, but an assessment of national needs was lacking. Even now, after more than 7 years, GAO was unable to find any projections based on a governmentwide strategic evaluation of future capacity requirements set in light of existing capacity; the numbers, location, and mission of the laboratories needed to effectively counter biothreats; and national public health goals. Such information is needed to ensure that the United States will have facilities in the right place with the right specifications. Furthermore, since no single agency is in charge of the expansion, no one is determining the aggregate risks associated with this expansion. As a consequence, no federal agency can determine whether high-containment laboratory capacity may now meet or exceed the national need or is at a level that can be operated safely. If an agency were tasked, or a mechanism were established, with the purpose of overseeing the expansion of high-containment laboratories, it could develop a strategic plan to (1) ensure that the numbers and capabilities of potentially dangerous high-containment laboratories are no greater than necessary, (2) balance the risks and benefits of expanding such laboratories, and (3) determine the type of oversight needed. Four highly publicized incidents in high-containment laboratories, as well as evidence in scientific literature, demonstrate that (1) while laboratory accidents are rare, they do occur, primarily due to human error or systems (management and technical operations) failure, including the failure of safety equipment and procedures, (2) insiders can pose a risk, and (3) it is difficult to control inventories of biological agents with currently available technologies.
Taken as a whole, these incidents demonstrate failures of systems and procedures meant to maintain biosafety and biosecurity in high-containment laboratories. For example, they revealed the failure to comply with regulatory requirements, safety measures that were not commensurate with the level of risk to public health posed by laboratory workers and pathogens in the laboratories, and the failure to fund ongoing facility maintenance and monitor the operational effectiveness of laboratory physical infrastructure. Oversight plays a critical role in improving biosafety and ensuring that high-containment laboratories comply with regulations. However, some aspects of the current oversight programs provided by the Departments of Health and Human Services and Agriculture are dependent upon entities monitoring themselves and reporting incidents to federal regulators. Since 2001, personnel reliability programs have been established to counter insider risks, but their cost, effectiveness, and impact have not been evaluated.
VA’s mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring they receive medical care, benefits, social support, and lasting memorials. It is the second largest federal department and, in addition to its central office located in Washington, D.C., has field offices throughout the United States, as well as the U.S. territories and the Philippines. The department’s three major components—the Veterans Benefits Administration (VBA), the Veterans Health Administration (VHA), and the National Cemetery Administration (NCA)—are primarily responsible for carrying out its mission. More specifically, VBA provides a variety of benefits to veterans and their families, including disability compensation, educational opportunities, assistance with home ownership, and life insurance. VHA provides health care services, including primary care and specialized care, and it performs research and development to improve veterans’ needs. Lastly, NCA provides burial and memorial benefits to veterans and their families. Collectively, the three components rely on approximately 340,000 employees to provide services and benefits. These employees work in 167 VA medical centers, approximately 1,200 community-based outpatient clinics, 300 veterans centers, 56 regional offices, and 131 national and 90 state or tribal cemeteries situated throughout the nation. VA operates and maintains an IT infrastructure that is intended to provide the backbone necessary to meet the day-to-day operational needs of its medical centers, veteran-facing systems, benefits delivery systems, memorial services, and all other IT systems supporting the department’s mission. The infrastructure is to provide for data storage, transmission, and communications requirements necessary to ensure delivery of reliable, available, and responsive support to all VA staff offices and administration customers, as well as veterans. In this regard, the department operates approximately 240 information systems, manages 314,000 desktop computers and 30,000 laptops, and administers nearly 460,000 network user accounts for employees and contractors to facilitate providing benefits and health care to veterans. These systems are used for the determination of benefits, benefits claims processing, patient admission to hospitals and clinics, patient care through telehealth, and access to health records, among other services. For example, VBA relies on its disability benefits claims processing system—the Veterans Benefits Management System—to collect and store information such as military service records, medical examinations, and treatment records from VA, the Department of Defense, and private medical service providers. Information technology is widely used and critically important to supporting the department in delivering health care to veterans. VHA’s systems provide capabilities to establish and maintain electronic health records that health care providers and other clinical staff use to view patient information in inpatient, outpatient, and long-term care settings. Specifically, the Veterans Health Information Systems and Technology Architecture, known as VistA, consists of many computer applications and modules that collect, among other things, information about a veteran’s demographics, allergies, procedures, immunizations, and medical diagnoses. 
In 2014, VA issued its 6-year strategic plan, which emphasizes the agency’s goal of increasing veterans’ access to benefits and services, eliminating the disability claims backlog, and ending veteran homelessness. According to the plan, the department intends to improve access to benefits and services through the use of improved technology to provide veterans with access to more effective care management. The plan also calls for VA to eliminate the disability claims backlog by fully implementing an electronic claims process that is intended to reduce processing time and increase accuracy. Further, the department has an initiative under way that provides services, such as health care, housing assistance, and job training, to end veteran homelessness. To this end, VA is working with other agencies, such as the Department of Health and Human Services, to implement more coordinated data entry systems to streamline and facilitate access to appropriate housing and services.
VA reported spending approximately $3.9 billion to improve and maintain its IT resources in fiscal year 2015. Specifically, the department reported spending approximately $548 million on new systems development efforts, approximately $2.3 billion on maintaining existing systems, and approximately $1 billion on payroll and administration. For fiscal year 2016, the department received appropriations of approximately $4.1 billion for IT. The department had requested approximately $505 million for new systems development efforts, approximately $2.5 billion for maintaining existing systems, and approximately $1.1 billion for payroll and administration. In addition, in its 2016 budget submission the department requested appropriations for the following purposes, among others:
improving veteran access to benefits and services ($378.1 million);
reducing the time it takes to process disability claims and increasing accuracy ($295.1 million);
expanding information security ($180.3 million);
implementing the Veterans Access, Choice, and Accountability Act of 2014 (the Veterans Choice Act) ($440.6 million);
maintaining the IT infrastructure ($1.828 billion); and
improving VistA ($182.6 million).
Further, for fiscal year 2017, the department’s budget request included nearly $4.3 billion for IT. The department requested approximately $471 million for new systems development efforts, approximately $2.5 billion for maintaining existing systems, and approximately $1.3 billion for payroll and administration. In addition, in its 2017 budget submission, the department requested appropriations to make improvements in a number of areas, including the following:
veterans’ access to health care, to include enhancing health care-related systems, standardizing immunization data, and expanding telehealth services ($186.7 million);
veterans’ access to benefits by modernizing systems supporting benefits delivery, such as the Veterans Benefits Management System and the Veterans Services Network ($236.3 million);
veterans’ experiences with VA by focusing on integrated service delivery and streamlined identification processes ($171.3 million);
VA employees’ experiences by enhancing internal IT systems ($13 million); and
information security, including implementing strong authentication, ensuring repeatable processes and procedures, adopting modern technology, and enhancing the detection of cyber vulnerabilities and protection from cyber threats ($370.1 million).
Even as the department has engaged in various attempts to improve its IT management capabilities, we have issued numerous reports that highlighted challenges facing its efforts. In February 2014, we reported that about 2 years after taking actions toward developing a single interoperable electronic health record system with the Department of Defense (DOD), VA and DOD announced that they would instead work toward modernizing their separate existing health information systems and work to ensure those systems could share information. However, VA proceeded with this plan without developing a cost and schedule analysis to support the assertion that pursuing a separate modernized system while enabling interoperability with DOD’s system would be less expensive and could be achieved faster than developing a single system, which is contrary to best practice. We recommended that VA work with DOD to develop a cost and schedule estimate for their modernization approach, as well as their joint effort, and compare the estimates to determine which approach would cost less and take less time. VA concurred with our recommendation, but had not completed actions to address it. In addition, we reported in May 2010 that after spending an estimated $127 million over 9 years on its outpatient scheduling system project, VA had not implemented any of the planned system’s capabilities and was essentially starting over by beginning a new initiative to build or purchase another scheduling system. We also noted that VA had not developed a project plan or schedule for the new initiative, stating that it intended to do so after determining whether to build or purchase the new application. We recommended that the department take six actions to improve key systems development and acquisition processes essential to the second outpatient scheduling system effort. The department generally concurred with our recommendations, but as of May 2016, had not completed actions to implement four of the six recommendations. Effectively managing IT needs depends on federal departments and agencies, including VA, having key functions in place. Toward this end, we have identified and reported on a set of essential and complementary functions that serve as a sound foundation for IT management. These include the following: Leadership: Effective leadership, such as that of a CIO, can drive change, provide oversight, and ensure accountability for results. Congress has also recognized the importance of having a strong agency CIO. For example, as part of the Clinger-Cohen Act, Congress required executive branch agencies to establish the position of agency CIO. The act also gave these officials responsibility and accountability for IT investments, including IT acquisitions, monitoring the performance of IT programs, and advising the agency head on whether to continue, modify, or terminate such programs. More recently, in December 2014, Congress passed federal information technology acquisition reform legislation (commonly referred to as FITARA), which strengthened the role that agency CIOs are to play in managing IT. For instance, the law requires the head of covered agencies to ensure that the CIO has a significant role in the decision process for IT budgeting, as well as the management, governance, and oversight processes related to IT. Strategic planning: Strategic planning defines what an organization seeks to accomplish and identifies the strategies it will use to achieve desired results. 
A defined strategic planning process allows an agency to clearly articulate its strategic direction and to establish linkages among planning elements such as goals, objectives, and strategies. A well-defined IT strategic planning process helps ensure that an agency’s IT goals are aligned with its strategic goals. Also, as part of its strategic planning efforts, an organization should develop an enterprise architecture, which is an important tool to help guide the organization toward achieving the goals and objectives in its IT strategic plan. In addition, the organization should implement human capital management practices to sustain a workforce with the skills necessary to execute its strategic plan. Systems development and acquisition: Agencies should follow disciplined processes for developing or acquiring IT systems. These include defining the requirements that address the needs of the system users, managing project risk to identify potential problems before they occur, reliably estimating cost to help managers evaluate affordability and performance against a project’s plans, and developing an integrated and reliable master schedule that defines when and how long work will occur and how each activity is related to the others, among others. Best practices in these areas have been identified by organizations such as SEI and GAO. Quality assurance: Effective quality assurance supports the delivery of high-quality products by providing staff and management with appropriate visibility into, and feedback on, processes and associated work products throughout the life of a systems development or acquisition project. Quality assurance can also help ensure that planned processes are implemented as intended. Systems operations and maintenance: Given the size and magnitude of the investments that executive branch agencies make in this area (nearly 60 percent of VA’s reported IT spending in 2015), it is important that operations and maintenance be managed effectively to ensure that existing investments (1) continue to meet agency needs, (2) deliver value, and (3) do not duplicate or overlap with other investments. To accomplish this, agencies are required by the Office of Management and Budget to perform annual operational analyses of these investments, which are intended to serve as periodic examinations of the investments’ performance against, among other things, established costs, schedules, and performance goals. Service management: Agencies should develop and implement a process for ensuring that IT services, such as server management and desktop support, are aligned with and actively support the business needs of the organization. The Information Technology Infrastructure Library identifies key practices for successful service management. These include developing a service catalog that identifies all IT services delivered by the service provider, as well as establishing service-level agreements between the IT service provider and its customer on the expected service-level targets. IT investment management: IT projects can significantly improve an organization’s performance, but they can also become costly, risky, and unproductive. Agencies can maximize the value of these investments and minimize the risks of acquisitions by having an effective and efficient IT investment management and governance process, as described in GAO’s guide to effective IT investment management. 
Emphasizing the importance of investment management, the Clinger-Cohen Act requires executive branch agencies to establish a process for selecting, managing, and evaluating IT investments in order to maximize the value and assess and manage the risks of the acquisitions. Information security and privacy: Effective security for federal IT systems and data is essential to prevent data tampering, disruptions in critical operations, fraud, and inappropriate disclosure of sensitive information, including personal information entrusted to the government by members of the American public. Recognizing the importance of information security and privacy, Congress enacted the Federal Information Security Modernization Act of 2014 (FISMA 2014), which requires executive branch agencies to develop, document, and implement an agency-wide information security program. Additionally, in order to help agencies develop such a program, the National Institute of Standards and Technology has developed guidance for information security and privacy. Since 2007, VA has been operating a centralized IT organization in which most key functions intended for effective management of IT are performed by the department’s Office of Information & Technology (OI&T) and led by the department-level CIO. Two department-level IT governance boards also have responsibility to assist in performing certain functions. However, the department has faced challenges to effectively managing IT using a centralized approach. These include ensuring that (1) all projects are managed and controlled by OI&T, (2) effective communication occurs between OI&T and the VA business units, and (3) new IT capabilities are delivered efficiently and cost-effectively. To address these and other challenges, the CIO recently announced a transformation initiative aimed at improving OI&T’s accountability for, and focus on, achieving a more transparent and veteran-centric organization. OI&T has responsibility for managing the majority of VA’s IT-related functions, under a largely centralized structure that the department attempted for nearly two decades to establish. The department’s efforts toward achieving a centralized management structure were strengthened when the Military Quality of Life and Veterans Affairs Appropriations Act, 2006 established a new appropriation account for the department’s IT systems. In addition, in order to obligate or expend amounts for IT systems development, modernization, and enhancement, the Consolidated Appropriations Act of 2016 required that the Secretary or CIO submit to the Committees on Appropriations of both Houses of Congress a certification of the amounts to be obligated and expended for each development project. Subsequent to the 2006 appropriations act, the Secretary of Veterans Affairs assigned control over the IT appropriation to a department-level CIO. In addition, the Secretary approved a centralized organization structure for IT-related functions in February 2007. By April 2007, all of the department’s personnel that had been involved in IT operations, maintenance, and development, with the exception of those in the Office of Inspector General, were permanently assigned to OI&T. (See appendix II for a summary of key events in the history of VA’s IT centralization efforts.) The CIO serves as the head of OI&T and, accordingly, is responsible for providing effective leadership over the department’s IT activities. 
The CIO reports directly to the Office of the Secretary of Veterans Affairs through the Deputy Secretary and advises the Secretary regarding the execution of the IT appropriation. In addition, the CIO is expected to serve as the principal advisor to top management officials, such as the Under Secretaries of each of the three administrations, on matters relating to information and technology management in the department. This official also is tasked with reviewing and approving IT investments, as well as overseeing the performance of IT programs and evaluating them to determine whether to continue, modify, or terminate a program or project. Under the CIO’s leadership, seven organizational units within OI&T had responsibility for performing and managing the other specific IT-related functions through March 2016.
Architecture, Strategy, and Design: This office was responsible for the department’s IT strategic planning. Specifically, it was tasked with providing a framework of strategies and processes to ensure that IT programs and projects are designed and executed to satisfy current and future department needs. Further, the office was tasked with performing other functions, such as the development and maintenance of an integrated technical, business, systems, and data architecture; the development of systems design, engineering, and integration standards; and the examination of existing IT requirements and solutions for efficiency and potential redundancy.
Product Development: Systems development was managed by this office. Among other things, the office was responsible for developing and testing software products, including new products and upgrades to existing legacy systems, based on defined requirements from its customers (the three administrations and other staff offices). In addition to systems development, the office had responsibility for monitoring the progress of all IT projects and reporting that progress to the department’s CIO and other senior leadership. This office was also tasked with developing and maintaining policies and guidance for implementing, and ensuring that responsible staff are trained in using, the department’s project management and accountability system, referred to as PMAS. (This management tool is discussed in greater detail later in the report.)
IT Resource Management: Acquisition processes and strategies were facilitated by the IT Resource Management office. Toward this end, the office was tasked with providing acquisition program management oversight and disseminating acquisition policy and procedures, among other things. In addition, the IT Budget and Finance office within IT Resource Management was tasked with planning, executing, and overseeing the department’s technology budget, as well as ensuring that appropriated funds are used appropriately. According to a director within the IT Budget and Finance office, this office was responsible for performing these functions for the fiscal year 2015 and 2016 IT budgets. The director also stated that the CIO, through this office, oversaw 100 percent of the IT appropriation. Further, in fiscal year 2016, the IT Budget and Finance office was charged with creating certification letters signed by the CIO and submitted to the Committees on Appropriations detailing the systems development, modernization, and enhancement projects VA had planned for the IT appropriation. To date, the CIO has certified projects for fiscal year 2016 in October 2015, February 2016, and April 2016.
Office of Quality, Performance, and Oversight: This office was tasked with managing quality assurance activities. It was intended to act as an independent party to determine if OI&T’s governance processes are adequate and functioning as anticipated. The office was also responsible for facilitating the reporting and understanding of performance measures and metrics related to IT program activities and strategic objectives. Further, it was charged with providing independent systems integration testing and other services to help ensure the integrity of the department’s systems and compliance with applicable guidance, such as that provided through PMAS. In addition, the office was responsible for providing organizational development and training for IT, to include coverage of information security, privacy, and rules of behavior. It also was responsible for establishing IT human capital policies and conducting workforce planning.
Office of Service Delivery and Engineering: All operations and maintenance activities associated with the department’s IT environment, including those in medical centers and regional offices, were directed by the Office of Service Delivery and Engineering. This office also was tasked with overseeing and managing VA’s data centers, network, and telecommunications; monitoring production for all information systems; delivering operations services (including deployment, maintenance, monitoring, and support) to all of the department’s geographic locations; and engineering and designing all of the IT system platforms and infrastructure.
Office of Information Security: The department’s information security and privacy infrastructure was managed within this office. In this regard, the office was responsible for, among other things, ensuring the confidentiality, integrity, and availability of the department’s information and information systems, as well as cybersecurity, risk management, records management, incident response, critical infrastructure protection, and business continuity. The office also was charged with developing, implementing, and overseeing the policies, procedures, training, communication, and operations related to improving how the department and its partners safeguard the personally identifiable information of veterans and VA employees.
Office of Customer Advocacy: This office was in charge of service management. In this regard, it was responsible for working with VHA, VBA, NCA, and the department’s staff offices to ensure that OI&T provides the services they need. The office also had responsibility for collecting and analyzing customer satisfaction metrics, and for responding to IT support requests at the department, business office, and employee levels.
Figure 1 depicts the seven organizational units within OI&T that had responsibility for performing and managing IT-related functions through March 2016. OI&T reported that its organizational units performed key IT-related functions with the support of nearly 7,300 federal employees and approximately 7,800 contractor staff. Nearly 600 of the federal employees were located in Washington, D.C., at the VA Headquarters, while the vast majority of the employees were assigned to other facilities, such as medical centers and regional offices. Table 1 identifies each OI&T organizational unit and the number of employees and approximate number of contractors that supported VA’s IT program.
In addition to these organizational resources, OI&T developed PMAS to serve as a mechanism for overseeing the performance of IT programs and ensuring accountability for results. PMAS was intended to provide information to facilitate oversight through a dashboard that captures project data, such as descriptions of the projects, baseline milestone dates, actual milestone dates, contract information, and project status information. The CIO’s responsibilities with regard to PMAS include authorizing new projects and project increments in the system; approving the funding needed for the projects; monitoring a project’s status via reporting, review, and assessment; and addressing any significant issues with the projects. Further, according to the CIO and OI&T documented procedures, TechStat reviews are required for every project that fails to deliver on its committed delivery date. TechStat reviews are the mechanism that the CIO, along with other senior leaders, used to determine whether a project should continue, be modified, or be terminated.
In VA’s organization structure, the IT investment governance boards are intended to play a role in strategic planning and IT investment management, as they are responsible for providing the CIO with recommendations on the IT investments they feel would best meet the strategic and business objectives of the department. These two boards are intended to provide executive oversight for IT initiative planning and management.
IT Planning, Programming, Budgeting, and Execution Board: The board is tasked with recommending (1) the prioritization of all IT funding requests, including what should and should not be funded during the year; (2) execution of the IT appropriation; and (3) decisions as to whether to fund IT-related projects outside of the IT appropriation. The IT Chief Financial Officer, who also serves as the Deputy Assistant Secretary for the IT Resource Management office within OI&T, serves as the board’s chairperson. The board includes voting members from VA’s three administrations, as well as non-voting members from the OI&T organizational units, among others. In March 2016, the CIO stated in congressional testimony that the roles of this board will be assumed by a newly established Enterprise Program Management Office (EPMO).
IT Leadership Board: This board is tasked with reviewing and validating portfolio recommendations from the Planning, Programming, Budgeting, and Execution Board, as well as making decisions on any issues that cannot be resolved by that board. According to the department’s FITARA implementation plan, the IT Leadership Board has been tasked with approving the IT budget and then submitting it to the VA Executive Board (the department’s highest-level decision-making board, chaired by the Secretary) for approval. The CIO serves as the IT Leadership Board’s chairperson, and its members include the Under Secretaries for VA’s three administrations and all VA staff offices.
Although VA centralized its key functions in order to maintain better control over resources, OI&T has faced challenges to fully implementing and managing IT under a centralized organizational structure. In particular, independent assessments of the department’s efforts in 2013 and 2015 showed that the office has had difficulty in preventing IT activities from occurring outside the control of OI&T. It has also been challenged in effectively collaborating with the department’s various business units and in efficiently and cost-effectively delivering new IT capabilities.
IT activities have occurred outside the control of OI&T. In September 2013, the VA Inspector General reported that the department’s Office of Acquisition, Logistics, and Construction had used $13 million in supply funds, rather than approved IT funds, to develop a suite of electronic contracting applications to support the office’s procurement process. Further, the report noted that the Office of Acquisition, Logistics, and Construction had not used the department’s IT project management system, PMAS, to develop the contracting system, although doing so was mandated by the Secretary in June 2009 for all IT development projects department-wide. According to the report, the office followed its own project management process that was similar to PMAS. However, in doing so, the system development effort was not subjected to OI&T’s risk management, monitoring, oversight and control, and reporting processes within PMAS. According to the report, officials in the Office of Acquisition, Logistics, and Construction incorrectly believed that VA allowed for a PMAS exception because they had used supply funds, and not IT funds, for the development. The Inspector General recommended that the Office of Acquisition, Logistics, and Construction implement controls to ensure the systems development project and all future IT development efforts fall within the control and oversight of the PMAS process. The office concurred with the recommendation.
More recently, in March 2015, the Inspector General reported that VHA had improperly obligated $92.5 million and spent $73.8 million in medical support and compliance funds, rather than IT funds, to develop its health care claims processing system through the VHA Financial Services Center. According to the report, the former VHA Deputy Chief Business Officer stated that funds and development services provided through the Financial Services Center were used in hopes of achieving a faster delivery of the claims processing system. The Inspector General reported that, by doing so, VHA had avoided competing with other VA projects for IT systems appropriations. Thus, the Inspector General recommended that the Under Secretary for Health obtain the appropriate funding to support the development of the claims processing system. The interim Under Secretary for Health concurred with the recommendation and agreed to establish oversight mechanisms and issue guidance to ensure that VHA uses the appropriate funds.
Collaboration between OI&T and business units was not always effective. In September 2015, challenges with collaboration between OI&T and VA business units were highlighted by an independent assessment of VA’s health care delivery systems and management processes, including health IT processes. Several clinicians interviewed for that assessment stated that collaboration between systems developers and clinicians seemed to have disappeared after the centralization, resulting in uncoordinated execution of health IT strategies and limited development of new and improved capabilities for existing IT systems. The assessment noted that although the goals of OI&T and VHA did not conflict at the strategic level, the organizations often did not agree on priorities for executing the strategic plans.
The assessment team recommended that VA and VHA transform IT strategy, planning, and execution by, among other things, establishing service-level agreements and refining the planning and budgeting process to ensure that business needs are effectively identified, prioritized, funded, and used to drive health IT investments. The team recommended that VA also develop a governance policy to ensure the strategic plans are executed well and in a timely manner. Similarly, staff within the Office of Finance informed us of their dissatisfaction because OI&T controls the budget and the priorities for systems development efforts and IT change management. These staff stated that OI&T’s priorities do not always align with the needs of the administrations or other staff offices, thus potentially delaying the delivery of needed services.
PMAS limited VA’s efforts to efficiently and cost-effectively deliver new IT capabilities. The September 2015 independent assessment also stated that overly demanding processes for system development, as defined by OI&T’s PMAS, impeded cost-effective delivery of new health IT capabilities and limited VA’s ability to measure the value of its investments. The assessment report further stated that the PMAS process was schedule-driven and risk-averse, leading many project managers to limit the amount of functionality in each release, thereby increasing the total time for any useful capability to be released. In addition, assessment interviewees indicated that documentation required by PMAS consumed a significant percentage of development time. Accordingly, the assessment team recommended that VA establish product-focused teams to ensure delivery of needed capabilities to users and that the department’s system development process be modified from focusing on documentation and schedule to focusing more on functionality delivered.
Recognizing challenges in IT management, the CIO initiated an effort in January 2016 to transform the focus and functions of OI&T. In doing so, the CIO stressed the Secretary’s goal of achieving a more veteran-focused organization that places greater emphasis on transparency, accountability, innovation, and teamwork. The CIO’s transformation strategy calls for OI&T to focus on stabilizing and streamlining core processes and platforms, mitigating weaknesses highlighted in information security and GAO assessments, and improving outcomes by institutionalizing a new set of IT management capabilities. According to the CIO, one of the most significant challenges the department faces is securing its assets, networks, systems, and data. To address this challenge, in September 2015, as part of the CIO’s effort to transform OI&T, VA documented a long-term cybersecurity strategy.
This strategy identified five goals for successfully building a comprehensive cybersecurity capability: (1) protecting veterans’ information and VA data by ensuring privacy concerns are addressed and ensuring secure technology and data systems; (2) providing secure and resilient information systems technology, business applications, publicly accessible platforms, and shared data networks; (3) protecting VA infrastructure and assets, including the boundary environments that provide potential access and entry into VA; (4) enabling effective operations by improving governance and organizational alignment at enterprise, operational, and tactical levels (points of service interactions); and (5) recruiting and retaining a talented cybersecurity workforce covering multiple disciplines within cybersecurity. As another part of this transformation, the CIO began transitioning the oversight and accountability of IT projects from PMAS to a new project management process called the Veteran-focused Integration Process (VIP) in January 2016, in an effort to better streamline systems development and speed up the delivery of new IT capabilities. According to the VIP guide, all systems development projects that touch the VA network or use money from the IT appropriation will be required to follow VIP. The CIO intends for VIP to facilitate an expanded use of the Agile system development methodology resulting in, for example, the delivery of system functionality on a more frequent, 3-month delivery cycle, whereas PMAS delivered functionality in 6-month cycles. In addition, VIP is intended to streamline IT project management by reducing the number of artifacts (i.e., documentation) that must be developed (from 58 to 8) and the number of critical decision points (from 5 to 2). According to the CIO’s transformation strategy, under the new VIP process, OI&T will also be required to consider security and architecture standards early in project planning phases, whereas PMAS required this later in the development process. According to the CIO, the department intends to have all IT projects managed under VIP by September 30, 2016. In addition to implementing VIP, the CIO also intends to establish five new functions within OI&T as part of the transformation: Enterprise Program Management Office. This office began initial operations in April 2016 and is intended to serve as OI&T’s portfolio management and project tracking organization. According to the strategy, the goals of the new organization are to align IT portfolios with agency strategic objectives; enhance visibility and governance; analyze and report on portfolio performance metrics; ensure the overall health of the IT portfolio; and optimize resources for projects, people, and timelines. 
EPMO includes six functional areas: (1) Intake and Analysis of Alternatives, responsible for working with the VA administrations and other staff offices to develop requirements to meet the needs of veterans, provide analysis of alternative approaches to meeting those requirements, and integrate information security; (2) IT Portfolios, responsible for consolidating programs and projects under five IT portfolios (Health, Benefits, Cemeteries, Corporate, and Enterprise services); (3) Project Special Forces, responsible for mitigating issues that put projects at risk of failure; (4) Lean Systems Engineering, responsible for metrics gathering and analysis, development of process tools, human resources, and training; (5) Transition Release and Support, responsible for managing OI&T’s integrated calendar supporting VIP; and (6) Application Management, responsible for IT implementation efforts, including testing, design, and data management. The EPMO is expected to replace the Product Development office and the Planning, Programming, Budgeting, and Execution board. As of July 2016, VA had not provided information about whether and how the transformation will affect the IT Leadership Board. Account Management. The account management function, led by three account managers, is to be responsible for managing the IT needs of OI&T’s business partners—the VA’s major components. Account managers are to interface directly with the VA administrations and staff offices to understand their needs, help identify and define the solutions to meet those needs, and represent their customers’ interests by reporting directly to the CIO. In this regard, account managers are to submit their customers’ IT requirements to the EPMO, ensure that their business needs are understood by OI&T, and ensure that business solutions are designed to meet their customers’ specifications. This function is also tasked with advocating for their customers in the budget process. OI&T intends this new function to address the challenge of effectively collaborating with business units. As of May 2016, the three account managers were in place. Quality and Compliance. This function is to be responsible for establishing effective policy governance and standards and ensuring adherence to the policies and standards. In addition, the quality and compliance function is expected to be charged with identifying, monitoring, and measuring risks across the OI&T organization. Data Management Organization. The organization is being established to improve both service delivery and the veteran experience by engaging with data stewards to ensure the accuracy and security of the information collected by VA. The organization is intended to institute a data governance strategy; engage with business owners to ensure accuracy and security of collected data; analyze data sources to form an enterprise data architecture; and establish metrics for data efficiency, access, and value. OI&T also intends for the organization to identify trends in the data collected on each veteran that could improve their health care by providing predictive care and anticipating needs. Strategic Sourcing. This function is expected to be responsible for establishing an approach to fulfilling the agency’s requirements with vendors that provide solutions to those requirements, managing vendor selection, tracking vendor performance and contract deliverables, and sharing insights on new technologies and capabilities to improve the workforce knowledge base. 
Figure 2 depicts the OI&T organization as of April 2016, which includes previously existing components and functions, as well as the new functions described above. OI&T was still in the process of fully defining the roles and responsibilities of the organizational units as of July 2016. In addition to these new functions, the strategy calls for a transformation of the services that OI&T provides in field offices, to include requiring service-level agreements for all VA organizations and using those agreements to define the support that is needed, migrating data and applications to the cloud, and developing a strategy for consolidating data centers. According to the CIO, the transformation strategy is expected to be completed by the first quarter of fiscal year 2017, although the vast majority of the plan, including establishing the five new functions, is to be executed by the end of fiscal year 2016.
Key to an agency’s success in effectively managing its IT systems is sustaining a workforce with the necessary knowledge, skills, and abilities to execute a range of management functions that support the agency’s mission and goals. Achieving such a workforce depends on effective human capital management, which includes developing a human capital strategic plan; analyzing the workforce; analyzing the gaps between current skills and future needs and developing strategies for filling the gaps; and training and developing the workforce. Taking such steps is consistent with activities outlined in human capital management practices that we and the Office of Personnel Management (OPM) have developed.
OI&T took steps toward implementing effective IT human capital practices by documenting an IT human capital strategic plan and initiating an update based on changed priorities, regularly analyzing workforce data, identifying skill gaps for the current fiscal year, and implementing an IT training program. However, OI&T did not implement other human capital practices that are essential for analyzing the workforce and identifying IT skill gaps. Specifically, the office had not (1) tracked and reviewed historical and projected leadership retirements, and (2) identified the gaps in future skill areas.
Strategic human capital planning that is linked to an agency’s strategic goals can be used as a tool to identify the workforce needed for the future and to develop strategies for shaping the workforce. Since 2001, we have identified strategic human capital management as a government-wide high-risk area. OPM regulations require agencies to develop a human capital strategic plan that identifies goals and objectives that are consistent with the agency’s strategic plans and annual performance goals. These goals and objectives are to address each of the five systems for comprehensive human capital management that are outlined in OPM’s Human Capital Assessment and Accountability Framework (HCAAF). In addition, OPM’s regulations require agencies to include in their human capital strategic plans performance measures and milestones that assess the agency’s progress in meeting the identified goals and objectives. In October 2012, the VA Inspector General reported that OI&T had not developed a human capital strategic plan and recommended that the office do so. In response to the recommendation, OI&T developed a 6-year human capital strategic plan in September 2013.
This plan outlined four human capital strategic goals: maximize employee talent through recruitment, outreach, hiring, and retention; sustain a productive, diverse workforce and achieve results by valuing and recognizing performance in an environment in which all employees are encouraged to contribute; ensure OI&T supports a culture of leadership and continuous learning; and ensure the human capital strategic plan is aligned with other VA strategic plans and integrated into workforce planning. In addition to outlining the human capital strategic goals, this plan also identified objectives and strategies for achieving them. For example, the objectives included improving the processes for hiring new employees; building a diverse OI&T workforce; eliminating competency gaps in key leadership positions; and developing an office-wide integrated workforce analysis capability. Strategies that are linked to the goals and objectives include implementing a plan to streamline and improve employee orientation, increasing the percentage of minorities and veterans in the OI&T workforce, identifying and defining the competencies that OI&T leaders must develop to ensure staff in leadership positions have the right skills needed to meet organizational goals, and implementing a workforce planning process. See appendix III for a summary of the goals, objectives, and strategies outlined in OI&T's strategic human capital plan. The human capital strategic plan also was aligned with specific strategies that were identified in VA's 6-year Strategic Plan, as well as the strategic goals within the department's Office of Human Resources and Administration Strategic Plan for fiscal years 2014 through 2020. For example, the department's strategic plan includes strategies for the development of leadership, and OI&T's plan includes similar strategies. Specifically, as stated earlier, one of the goals within OI&T's plan is to ensure the office supports a culture of leadership and continuous learning. Strategies for this IT human capital strategic goal include identifying and defining the competencies that OI&T leaders must develop and retain. The plan also was aligned with a Human Resources and Administration strategic goal—to advocate for veteran employment within the department. In addition, the plan identified strategies to recruit, hire, and promote qualified veterans. Further, OI&T's human capital strategic goals are linked to each of the systems for comprehensive human capital management outlined in OPM's framework. For example, one of the goals is to maximize employee talent through recruitment, outreach, hiring, and retention, which links to the talent management system in the framework. In addition, the plan identifies performance measures that the office is to use to monitor its progress in achieving each of the human capital goals. These performance measures include, among others, improved leadership abilities and improvements in the number of hires and percentages for minorities, employees with disabilities, and veterans. As of March 2016, OI&T was in the process of updating the human capital strategic plan as part of the CIO's transformation strategy for OI&T. According to the office's human capital management director, the revised plan is expected to be in place by the end of December 2016. 
By developing a human capital strategic plan and ensuring that the plan is updated to reflect the changing mission requirements of the office, OI&T should be better positioned to provide the necessary strategic direction for meeting its workforce needs and goals. Performing workforce analysis is a key component in strategic human capital planning because it assists agencies in developing strategies for acquiring, training, and retaining staff. According to OPM, workforce analysis allows agencies to identify trends affecting their workforce. The analysis also provides a basis for developing actions to address workforce trends that may affect an agency's future mission capabilities. OPM also stresses the importance of ensuring the continuity of leadership within an agency, and suggests that agencies perform ongoing workforce analysis to identify current and future workforce and leadership needs. Among other things, OPM requires a workforce analysis for relevant agency mission requirements that includes a forecast of future leadership requirements and changes due to retirements. Similarly, VA's workforce planning policy requires each staff office, including OI&T, to annually analyze historical workforce data and develop workforce projections that, at a minimum, include employee counts, retirements and other losses, new hires, workforce diversity, and leadership retirements. The workforce analysis also is to include a determination of gaps in the workforce and strategies to close those gaps. OI&T conducted a workforce analysis in May 2013 that identified the office's employee counts, retirements and projected eligible retirements through fiscal year 2018, employee turnover and hiring rates, and workforce diversity percentages. Further, OI&T has analyzed most of the required workforce information on a monthly basis. Specifically, the analyses we reviewed for October 2015 through January 2016 included workforce data and projections related to employee counts, losses, new hires, and workforce diversity. According to the OI&T Director of Human Capital Management, the office used these data to examine trends, increase awareness of staffing-related issues or opportunities, and identify workforce gaps, among other things. Further, the department's Integrated Human Resources Management Council was tasked with analyzing workforce data, such as retirement rates, on a quarterly basis as part of an overall VA effort to review performance metrics related to human resource strategic goals. The Integrated Human Resources Management Council's review results included workforce gaps and strategies to close them. For example, in fiscal year 2015 VA had not met its target for hiring veterans. One of the strategies the council identified for increasing the number of veterans hired was to collaborate with stakeholders to identify and implement innovative and targeted recruitment opportunities. Nevertheless, even with these actions, OI&T was not analyzing historical leadership retirement data, nor was it developing leadership retirement projections, as required by the department's workforce and succession policy. Leadership positions include senior executives as well as other staff at General Schedule levels 13 through 15. According to the OI&T Director of Human Capital Management, the office collects data on all employee losses, including retirements. Further, the OI&T human capital strategic plan included projected eligible retirements. 
However, historical data on leadership retirements were not analyzed and a forecast of future leadership retirements was not developed, as called for by VA's workforce planning policy. The OI&T Director of Human Capital Management stated that, although the office has the capability to analyze leadership retirements, there is no specific reason why it has not done so. Without tracking and forecasting leadership retirements, OI&T faces a risk of being unprepared to identify and effectively respond to vacancies in key leadership positions, which in turn can contribute to ineffective IT management. Determining core skills (e.g., problem solving), leadership skills (e.g., developing others), and technical skills (e.g., configuration management) is essential for an organization such as OI&T to successfully achieve its mission and goals. This is especially important as shifts in national security, technology, budget constraints, and other factors change the environment within which organizations operate. The identified skills should be linked to the agency's mission and long-term goals in order to identify the workforce needed for the future and develop strategies for shaping this workforce. Agencies should then identify the gaps in current and future skills needed to achieve results, and develop strategies for filling the gaps. OI&T had conducted annual gap analyses to identify its skill needs, and it had developed strategies for filling the gaps. Specifically, the IT Workforce Development group within the Office of Quality, Performance, and Oversight began conducting the analyses in June 2013, and the analyses were conducted annually thereafter. On an annual basis, the analyses identified each skill area where a gap existed, along with the percentage of staff determined to be below the targeted proficiency level for a particular role and career level (e.g., entry, intermediate, and senior). In addition, the analyses identified the top 10 technical skill gaps within OI&T for each year and compared the gaps from the previous year to those in the current year. This comparison showed either an increase, a decrease, or no change in the percentage of staff below the targeted proficiency level for a particular skill area. For example, in March 2016, IT Workforce Development reported that the gap in staff proficiency in 9 of the top 10 technical skill areas within OI&T had decreased from the previous year and that the gap in staff proficiency in 1 skill area had remained the same. Table 2 shows the comparison of the skills gaps for the 2015 and 2016 analyses. After the initial analysis was conducted in June 2013, the office identified training that was either available or being developed to address each skill gap. For example, in its March 2016 analysis report, the IT Workforce Development group noted that 48 new courses were being developed based on gaps identified in the March 2015 analysis. This training covered the areas of emerging technologies, project management, communications, interpersonal skills, analytical reasoning, and conflict management. As of July 2015, OI&T had provided approximately 470 on-the-job training opportunities. According to the IT Workforce Development group, on-the-job training is used to increase options for staff to demonstrate advanced-level proficiency, including in many of the skill gap areas identified. For example, the group developed an opportunity for senior staff to guide junior staff in the area of project management. 
The group also provided staff the opportunity to learn from subject matter experts in the area of infrastructure design. As of April 2016, employees had taken advantage of approximately 160 of these opportunities. The IT Workforce Development group also recommended in its 2016 gap analysis that several other actions be taken by OI&T to address the identified skill gaps. For example, the group recommended that OI&T continue to develop learning plans that focus on advanced levels of proficiency; develop training on VIP, including training on Agile program management; and prioritize training for leadership-related competencies, among others. Nevertheless, while OI&T conducted annual skill gap analyses, developed training courses, and recommended other actions for addressing skill gaps, the office had not identified the gaps in skill areas that could be needed in future years. Specifically, the analyses focused on gaps in the time periods when they were conducted; however, the analyses did not identify the skills, or the gaps in those skills, that may be needed in future years in order to implement the goals and objectives identified in the department's long-term IT strategic plan or for planned IT initiatives. According to a program manager in the IT Workforce Development group, the annual skill gap analyses were based on current staffing levels and on what each organization within OI&T wanted to achieve through the current fiscal year, but not on what skills would be needed in future years. However, by focusing only on the current year and not including future years in its skill gap analyses, OI&T may not have been aware of gaps in skills that would be needed to successfully accomplish other goals, such as the completion of a multi-year IT development project, or longer-term goals, such as the department's future IT operating environment as described in the office's 5-year IT strategic plan. By including the IT skills needed for future years in its gap analyses, OI&T could have increased assurance that its staff has the capabilities to deliver long-term IT support that contributes to improved services for veterans. Training programs are an integral part of a learning environment that can enhance an agency's ability to attract and retain employees with the core, leadership, and technical skills needed to achieve results. The essential aim of these programs is to assist the agency in achieving its mission and goals by improving individual and, ultimately, organizational performance. Training programs involve establishing learning priorities; identifying training initiatives; ensuring delivery of learning opportunities; and evaluating the program and demonstrating how training efforts contribute to improved performance and results. OI&T took several steps to implement a training program. These included establishing an advisory board, developing competency models and individual development plans, and providing training to its employees. Specifically, the office established a Training Advisory Board, which was tasked with identifying and validating training needs in OI&T. The board, which was established in November 2014, is headed by the OI&T Chief Learning Officer and has senior management representatives from each organizational unit. According to its charter, the advisory board discusses and determines the priority of the training needs of each of the organizational units. 
In addition, the charter states that the board is intended to ensure that available training is aligned with VA’s mission, strategic planning, and OI&T priorities. The board’s most recent meetings were held in October 2015 and April 2016. During the meetings, the board discussed the training needs for OI&T’s organizations, the development of training courses, and upcoming learning opportunities. The IT Workforce Development group used competency models and individual development plans to identify training initiatives that could address the learning needs of OI&T’s employees. According to an official in the IT Workforce Development group, employees were aligned with competency models based on their job responsibilities. These competency models identified training needs for professional development. Additionally, the IT Workforce Development official stated that OI&T used individual development plans to reinforce the required knowledge, skills, and abilities necessary for employees to progress into new positions at higher skill levels. The individual development plan is essentially a to-do list that is generated based on identified gaps in the employee’s skills. The plan identifies the employee’s competency model, training needs, and the due dates for the learning opportunities identified to meet their training needs. Further, OI&T has taken steps to ensure the delivery of learning opportunities. Specifically, in fiscal year 2015, the office provided over 700 training courses. For example, it provided training on information security awareness, which over 15,000 employees and contractors completed, and on risk assessment, which was completed by 284 employees and contractors. The courses also covered a wide range of other IT-related subjects, such as project management, enterprise architecture, and cybersecurity. The training courses were delivered through several methods, such as webinars, on-the-job training, classrooms, and virtual classrooms. The IT Workforce Development group used event announcements and the delivery of course schedules to each organizational unit to help ensure employees were made aware of the provided opportunities. Lastly, OI&T took steps to evaluate its training program. Specifically, in April 2015, the office performed an assessment that identified weaknesses in its training program, such as the need to supplement competency modeling efforts with learning opportunities that focus on IT strategic business priorities for overall organizational development. The assessment also identified the need to provide learning opportunities that address emerging trends and technologies at all leadership levels. Further, the assessment included strategies to improve employee, leadership, and organizational development. For example, one strategy to improve employee development was to survey employees to identify challenges in job responsibilities and the least understood emerging technologies, and then to use that information to provide theme-based development opportunities to address the identified needs. Additionally, OI&T gauged the satisfaction of the training attendees by providing course evaluations, which assessed the relevance of the training to job responsibilities and the effectiveness of the instructor. 
For example, in February 2016, 90 percent of course attendees indicated that they were satisfied with the courses; 83 percent indicated that the content of the courses was relevant to their jobs; and 95 percent indicated that the instructor was effective in conveying practical knowledge about the subject matter. By taking these steps to implement a training program, OI&T is better positioned to ensure that it has the ability to train and develop an IT workforce to effectively support the department's mission and goals. Instituting disciplined, repeatable practices for IT development and acquisition is key to ensuring that investments in IT cost-effectively deliver the capabilities needed to support an organization's mission. SEI and GAO have identified best practices that are essential to the development and acquisition of products and services for IT projects. These include practices related to project planning, requirements management, risk management, project monitoring and control, product validation, and process and product quality assurance, as well as developing and maintaining a reliable, high-quality project schedule. According to SEI, documenting an organization's processes allows the organization to apply these best practices more consistently. Further, documented processes are important for sustaining institutionalized best practices regardless of future leadership changes. Of 123 best practices that we selected to evaluate, VA's documented processes reflected 97 practices, partially reflected 12 practices, and did not reflect 14 practices. Project planning involves establishing and maintaining estimates of project parameters, developing a project plan, and obtaining commitment to the plan from those who are responsible for implementing and supporting it. For example, establishing project estimates involves developing a work breakdown structure that details project tasks, responsibilities, and schedules; specifying estimates of work products and task attributes for the project; and estimating the project's effort and costs for work products and tasks based on estimation rationale. In addition, developing a project plan includes identifying major milestones, schedule assumptions, constraints, and task dependencies; and creating and maintaining a project plan that ties together the budget, schedule, milestones, and stakeholder identification and interaction. Further, obtaining commitment to the plan includes identifying agreements regarding interfaces between project elements and other projects; documenting commitments with an appropriate level of signatories; and making adjustments to the project plan to reconcile variances in estimated and available resources. OI&T's project planning processes reflected 13 of 15 selected best practices. Of the 15 practices, the processes we examined included criteria that addressed 4 of 5 practices for establishing project estimates. These included developing a work breakdown structure that specifies project tasks, responsibilities, and schedules; identifying products to be acquired externally; providing estimates of work products and task attributes; and planning project lifecycle phases. One practice—to estimate the project's effort and costs for work products and tasks based on estimation rationale—was partially reflected in the documented processes. In this regard, VA's project scheduling tool contained processes that called for project managers to estimate project effort. 
However, the processes did not address how the managers were to calculate the estimated effort to be expended to complete a project. Further, the processes included criteria related to all 7 selected practices for developing a project plan. These included identifying major milestones, defining and quantifying resources needed for the project, and creating and maintaining a project plan that ties together the budget, schedule, milestones, and stakeholder identification and interaction. Finally, the processes reflected 2 of 3 selected practices for obtaining commitment to the plan. In particular, they included identifying commitments regarding interfaces between project elements and other projects, and documenting commitments with an appropriate level of signatories. However, the processes did not address making adjustments to the project plan to reconcile variances in estimated and available resources. Until OI&T incorporates all the recommended project planning best practices in its process documentation, the office faces an increased risk that its project managers will inconsistently calculate estimates of the effort needed across projects and fail to adjust project plans based on resource availability, potentially affecting the timely completion of projects. Table 3 summarizes the extent to which these best practices were documented in OI&T's project planning processes. Best practices recommend that project teams manage requirements of the project's products and components, and ensure alignment between those requirements and the project plans and work products. Managing requirements can be accomplished by understanding and obtaining commitment to requirements, and ensuring that project work and associated requirements align with each other. Examples of these best practices include analyzing requirements to ensure that established criteria for managing requirements are met; documenting all requirements and requirements changes; maintaining bidirectional requirements traceability from a requirement to its derived (i.e., lower-level) requirements and allocation to work products and back; and identifying changes to be made to plans and work products based on modifications to the requirements baseline. OI&T had documented processes that reflected 10 of 11 selected best practices for managing project requirements. For example, the processes included assessing the impact of requirements on existing commitments, negotiating and documenting changes to commitments, and maintaining a requirements change history with the rationale for the changes. However, the documentation did not address one of the practices. Specifically, it did not include identifying changes that should be made to plans and work products resulting from changes to the requirements baseline. Until the office incorporates in its documented processes the best practice for identifying changes to be made based on modifications to a project's requirements baseline, OI&T's staff will not be well positioned to implement the associated changes and maintain consistent processes across IT projects. Table 4 summarizes the extent to which these best practices were reflected in OI&T's requirements management processes. Risk management best practices call for the identification of potential problems before they occur so that risk-handling activities can be planned throughout the life of the project to mitigate adverse impacts on achieving objectives. 
These practices involve preparing for risk management, identifying and analyzing risks, and mitigating identified risks. For example, preparing for risk management involves determining risk sources and categories, and developing risk mitigation techniques. In addition, identifying and analyzing risks includes determining those that are associated with cost, schedule, and performance; and reviewing the work breakdown structure and project plan to help ensure that all aspects of the work effort have been considered. Further, mitigating risks includes determining the levels and thresholds in which a risk becomes unacceptable and triggers the execution of a risk mitigation plan or contingency plan; establishing a schedule for each risk handling activity, with a start date and anticipated completion date; determining the costs and benefits of implementing the risk mitigation plan for each risk; and collecting performance measures on risk handling activities. OI&T established processes that reflected 26 of 28 selected best practices for risk management. Of the 28 best practices, the office’s processes addressed all 9 practices related to preparing for risk management. For example, they defined criteria for evaluating and quantifying risk likelihood and severity, established thresholds for risk categories, and defined a time period for risk monitoring or reassessment. In addition, the documented processes included criteria related to all 9 practices for identifying and analyzing risks. For example, the processes reflected best practices for documenting the context, conditions, and potential consequences of each risk; categorizing and grouping risks according to defined risk categories; and prioritizing risks for mitigation. With regard to mitigating risks, the documentation reflected 8 of 10 selected best practices. For example, it included identifying the person or group responsible for addressing risks, developing contingency plans for critical risks, and monitoring risk status. However, it did not address one practice for determining the costs and benefits of implementing the risk mitigation plan for each risk. Further, one practice—to collect performance measures on risk handling activities—was partially reflected in the processes. In this regard, VA’s processes called for the creation and distribution of monthly performance reports, but they did not address risk mitigation activities. Although VA included 26 of 28 best practices related to risk management in its processes, it is not positioned to fully ensure that the costs and benefits of executing risk mitigation plans will be consistently determined and that performance measures for risk mitigation efforts are collected. Table 5 summarizes the extent to which these best practices were documented in OI&T’s risk management processes. The purpose of project monitoring and control is to provide an understanding of the project’s progress so that appropriate corrective actions can be taken when the project’s performance deviates significantly from the plan. Toward this end, a project’s documented plan is considered the basis for monitoring activities and corrective actions taken when performance deviates significantly from the plan. 
Examples of monitoring the project against the project plan include tracking progress against the schedule, monitoring project costs and the effort expended for project completion, documenting the results of data management activity reviews, and identifying and documenting significant issues and deviations from the project plan. In addition, managing corrective actions to address identified issues involves determining and documenting the appropriate actions needed to address the issues and monitoring corrective actions for completion. OI&T's documented processes reflected 18 of 28 selected best practices related to project monitoring and control. Of the 28 practices, the processes reflected 12 of 22 selected practices for monitoring the project against the project plan and all 6 selected practices related to managing corrective actions. For monitoring the project against the project plan, the processes included tracking change requests and problem reports to closure, documenting the results of milestone reviews, and reviewing the project plan, project status, and risks at selected milestones. In addition, for managing corrective actions, the processes included gathering issues that require corrective action for analysis, analyzing the results of corrective actions to determine effectiveness, and determining and documenting lessons learned as a result of taking appropriate corrective actions. However, three of the selected practices were partially reflected in the processes for monitoring the project against the project plan. Specifically, documentation on monitoring project costs and the effort expended for project completion included criteria for monitoring project costs; however, it did not call for tracking expended effort. With regard to monitoring resources provided and used, the processes called for VA to monitor projects to ensure that staff and resources are provided, but they did not clearly require monitoring the utilization of those staff and resources. Similarly, OI&T partially documented its processes for reviewing the results of collecting and analyzing project performance measures, such as customer service satisfaction metrics. In particular, the processes had been developed to ensure the agreed-upon functionality was delivered to customers. However, the customers' level of satisfaction based on the functionality requested and received for the project was not addressed. In addition, the processes did not address seven of the selected practices. These practices were associated with monitoring the knowledge and skills of project staff, periodically reviewing data management activities against the project plan, identifying and documenting significant data management issues and their impacts, documenting results of data management activity reviews, periodically reviewing stakeholder involvement in projects, documenting the results of stakeholder involvement status reviews, and tracking action items to closure. Until OI&T completely documents its processes for monitoring and controlling projects, staff responsible for executing these steps may not be fully aware of a project's progress toward achieving the milestones defined in the project plan. Table 6 summarizes the extent to which these best practices were addressed in OI&T's project monitoring and control processes. Best practices for product validation call for project teams to demonstrate that a product or component fulfills its intended use when placed in its intended environment. 
The methods employed to accomplish validation can be applied to work products as well as to the product components. Key practices, as identified by SEI, include preparing for validation by identifying and selecting products, and validating the product or product components. For example, preparing for validation includes identifying the product, product component, and/or features to be validated throughout the life of the project and identifying requirements for the validation environment and passing them to the requirements development process. Further, validating the product or its components involves ensuring that activities (i.e., those that validate IT hardware and services) are performed throughout the project lifecycle and that deviations from validation procedures are documented. Of the 17 selected best practices for product validation, OI&T’s documented processes reflected all 10 practices related to preparing for validation, and all 7 practices for carrying out the validation of the product or its components. For example, the office’s processes addressed best practices for identifying test equipment and tools, ensuring the validation method was selected early in the development process, identifying and documenting validation failure causes as needed, comparing actual validation test results to expected results, and analyzing validation data for defects. By incorporating the selected validation best practices into its processes, OI&T has increased assurance that staff responsible for performing validation of the department’s IT products or their components will be positioned to consistently apply key processes to ensure that products perform as expected when placed in the intended environment. Table 7 summarizes the extent to which these best practices were addressed in OI&T’s validation processes. Best practices for process and product quality assurance are intended to support the delivery of high-quality products by providing project staff and management with appropriate visibility and feedback on processes and associated work products throughout the life of the project. Toward this end, process and product quality assurance involves objectively evaluating processes and work products, and providing objective insight to staff and managers. Examples of objectively evaluating processes and work products include establishing and maintaining clearly stated criteria, based on business needs, for assessing processes and work products; identifying lessons learned that could improve processes; and documenting a description of the quality assurance reporting chain and defining how it will ensure objectivity. In addition, examples of providing objective insight include escalating noncompliance issues that cannot be resolved in the project to the appropriate level of management; tracking noncompliance issues to resolution; recording process and product quality assurance activities in sufficient detail so that the status and results are known; and periodically reviewing open noncompliance issues and trends with management designated to receive and act on them. OI&T had documented processes that addressed 12 of 14 selected best practices related to process and product quality assurance. Specifically, the processes reflected 4 of 5 practices for objectively evaluating processes and work products. 
These included practices related to establishing and maintaining clearly stated criteria, based on business needs, for evaluating processes and work products; identifying each noncompliance found during the evaluation; identifying lessons learned; and evaluating selected work products at selected times. One practice—to document a description of the quality assurance reporting chain and define how it will ensure objectivity—was partially addressed in the processes. In this regard, while a quality assurance standard described how each review would ensure objectivity, the description of the reporting chain was not included. In addition, the office's processes reflected 8 of 9 selected practices for providing objective insight. These included documenting noncompliance issues that cannot be resolved in the project, analyzing noncompliance issues to determine whether quality trends can be identified and addressed, and ensuring that relevant stakeholders are aware of the results of evaluations and quality trends in a timely manner. However, they did not address the periodic review of open noncompliance issues and trends with appropriate management. By not fully documenting best practices related to quality assurance in its processes, OI&T has less assurance that staff will appropriately report quality assurance issues and periodically review open issues and trends so that the relevant manager can act on them. Table 8 summarizes the extent to which these best practices were reflected in OI&T's process and product quality assurance processes. The success of a project depends, in part, on having an integrated and reliable master schedule that defines when and how long work will occur, and how each activity is related to the others. A project's schedule provides not only a road map for systematic project execution, but also the means by which to gauge progress, identify and resolve potential problems, and promote accountability at all levels of the program. VA's guide for IT project management deems project schedules essential to a project's success and notes that failure to meet project schedule dates may trigger additional project reviews by senior leaders, which could lead to a project's shutdown. Further, our Schedule Assessment Guide defines 10 best practices related to 4 characteristics that are important to developing high-quality, reliable schedule estimates—comprehensive, controlled, well-constructed, and credible. Table 9 describes the characteristics of high-quality, reliable schedule estimates and their associated best practices that guided our analysis. Almost all of OI&T's processes for developing project schedules either partially addressed or did not address the 10 best practices. Specifically, the documentation for developing comprehensive, controlled, well-constructed, and credible project schedules reflected 1 best practice, partially reflected 6 best practices, and did not reflect 3 best practices. Without documented processes for developing schedules that reflect best practices, OI&T is at increased risk that schedules created for its projects will not be reliable and, therefore, will not be useful tools for measuring program performance against approved project plans. Table 10 summarizes the results of our assessment of whether OI&T's documented processes reflected the 4 characteristics of a high-quality, reliable schedule estimate, their associated best practices, and examples of our rationale for the assessment. 
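To make the schedule logic behind these characteristics concrete, the following minimal sketch (a hypothetical example in Python, not drawn from VA's scheduling tool or any actual VA project) computes each activity's earliest finish and total float from a small set of illustrative activities and finish-to-start links. In a well-constructed schedule, this kind of network logic is complete enough that the critical path, that is, the chain of activities with zero float, and the project finish date follow from the dependencies rather than being entered by hand.

    # Minimal, hypothetical example: compute earliest finish and total float
    # for a toy activity network. Activity names, durations (working days),
    # and finish-to-start links are illustrative, not taken from VA schedules.
    from collections import defaultdict

    activities = {
        "define requirements": {"duration": 10, "predecessors": []},
        "design":              {"duration": 15, "predecessors": ["define requirements"]},
        "build":               {"duration": 30, "predecessors": ["design"]},
        "test":                {"duration": 12, "predecessors": ["build"]},
        "train users":         {"duration": 5,  "predecessors": ["design"]},
        "deploy":              {"duration": 3,  "predecessors": ["test", "train users"]},
    }

    early = {}
    def earliest_finish(name):
        # forward pass: an activity can start only after all of its predecessors finish
        if name not in early:
            start = max((earliest_finish(p) for p in activities[name]["predecessors"]), default=0)
            early[name] = start + activities[name]["duration"]
        return early[name]

    project_finish = max(earliest_finish(a) for a in activities)

    successors = defaultdict(list)
    for name, info in activities.items():
        for p in info["predecessors"]:
            successors[p].append(name)

    late = {}
    def latest_finish(name):
        # backward pass: an activity must finish before its earliest successor starts
        if name not in late:
            if not successors[name]:
                late[name] = project_finish
            else:
                late[name] = min(latest_finish(s) - activities[s]["duration"] for s in successors[name])
        return late[name]

    for name in activities:
        total_float = latest_finish(name) - earliest_finish(name)
        status = "CRITICAL" if total_float == 0 else f"float = {total_float} days"
        print(f"{name:20s} earliest finish: day {earliest_finish(name):3d}  ({status})")
    print(f"planned project finish: day {project_finish}")

In this toy network, the requirements-design-build-test-deploy chain carries zero float and therefore drives the finish date, while the user-training activity could slip by 37 working days without delaying the project.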
In discussing OI&T process documentation, officials in the Product Development and Service Delivery and Engineering offices, including an Assistant Deputy CIO within Product Development, stated that the department's documented IT processes did not include all of the selected best practices because they were considered basic, fundamental skills that a properly trained project planner should possess. The officials added that their inclusion would be overly burdensome and counterproductive when working in an Agile environment. However, the selected best practices do not apply to a particular development approach; rather, they were designed to provide value across different approaches, including Agile. In addition, documented processes are key to the consistent implementation of best practices that can withstand future leadership changes. Without incorporating these best practices into its processes, VA is at risk of missing critical steps that could help ensure that its processes are implemented as intended and are consistently applied across IT projects. While VA has largely centralized the performance of key IT-related functions in OI&T, led by the CIO, it has faced challenges in fully implementing and managing IT under this organizational structure. In particular, the office has had difficulty in preventing IT activities from occurring outside the control of OI&T and PMAS. It has also been challenged in effectively collaborating with the department's various business units and in efficiently and cost-effectively delivering new IT capabilities. The OI&T transformation effort recently initiated by the CIO, and projected to be completed by the first quarter of fiscal year 2017, is intended to address these and other challenges. OI&T has also taken steps to implement human capital management best practices by developing, documenting, and beginning to update a human capital strategic plan that aligns with VA's strategic plans; creating action plans for implementing OI&T human capital goals and objectives; reviewing its workforce quarterly; identifying skill gaps; and implementing a training program. However, without annually tracking and reviewing data related to leadership retirements or identifying skills needed in future years, VA will be hindered in ensuring that it has the leadership and staff with the skills needed to support the successful delivery of IT capabilities. Further, opportunities exist for VA to strengthen its processes for system development and acquisition by including key system development best practices. VA's documented processes for managing IT system development and acquisition generally reflected best practices in the key areas we reviewed. However, there were gaps in most of these areas. Ensuring that these processes address all key practices will assist the department in effectively managing its IT system development and acquisitions. To assist VA in sustaining an IT workforce with the necessary knowledge, skills, and abilities to execute its mission and goals, we recommend that the Secretary of Veterans Affairs direct the Chief Information Officer to track and review OI&T historical workforce data and projections related to leadership retirements, and identify IT skills needed beyond the current fiscal year to assist in identifying future skills gaps. 
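As an illustration of the kind of tracking the first recommendation contemplates, the minimal sketch below tallies historical leadership retirements and projects retirement eligibility by fiscal year from a handful of hypothetical personnel records. The grade levels, dates, and record layout are invented for the example and are not drawn from VA workforce data.

    # Minimal, hypothetical example: tally historical leadership retirements and
    # project retirement eligibility by fiscal year. Grades, dates, and the
    # record layout are invented; they are not drawn from VA workforce data.
    from collections import Counter
    from datetime import date

    LEADERSHIP_GRADES = {"GS-13", "GS-14", "GS-15", "SES"}

    workforce = [
        {"grade": "GS-14", "retired_on": date(2014, 12, 31), "eligible_on": date(2014, 6, 1)},
        {"grade": "GS-15", "retired_on": None,               "eligible_on": date(2017, 1, 15)},
        {"grade": "SES",   "retired_on": None,               "eligible_on": date(2016, 10, 3)},
        {"grade": "GS-12", "retired_on": None,               "eligible_on": date(2018, 4, 1)},
        {"grade": "GS-13", "retired_on": date(2015, 7, 2),   "eligible_on": date(2015, 1, 1)},
    ]

    def fiscal_year(d):
        # federal fiscal years begin on October 1 of the preceding calendar year
        return d.year + 1 if d.month >= 10 else d.year

    leaders = [e for e in workforce if e["grade"] in LEADERSHIP_GRADES]
    historical = Counter(fiscal_year(e["retired_on"]) for e in leaders if e["retired_on"])
    projected = Counter(fiscal_year(e["eligible_on"]) for e in leaders if e["retired_on"] is None)

    for fy in sorted(historical):
        print(f"FY{fy}: {historical[fy]} leadership retirement(s)")
    for fy in sorted(projected):
        print(f"FY{fy}: {projected[fy]} leader(s) projected to become retirement-eligible")

Run against the toy records, the sketch reports two leadership retirements in fiscal year 2015 and two leaders becoming retirement-eligible in fiscal year 2017; an actual analysis would draw the same fields from the office's personnel system.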
To assist VA in establishing comprehensive and documented processes that reflect system development and acquisition best practices, we recommend that the Secretary of Veterans Affairs direct the Chief Information Officer to revise OI&T’s documented processes related to project planning, to include (1) estimating the level of effort that will need to be expended for work products and tasks, and (2) making adjustments to the project plan to reconcile differences between estimated and available resources; requirements management, to include identifying changes to be made to plans and work products as a result of requirements baseline changes; risk management, to include (1) determining costs and benefits of implementing the risk mitigation plan for each risk and (2) collecting performance measures on risk handling activities; project monitoring and control, to include the 10 best practices that were missing from the guidance; process and product quality assurance, to include (1) documenting a description of the quality assurance reporting chain and defining how objectivity will be ensured, and (2) periodically reviewing open noncompliance issues and trends with management that is designated to receive and act on them; and project scheduling, to include the 9 best practices that were missing from the guidance and revise the documented processes where the guidance was contrary to best practices. In written comments on a draft of our report (reprinted in appendix IV), VA generally agreed with our conclusions and concurred with our eight recommendations. Further, the department provided, and requested that we include in our report, information describing OI&T’s new organizational structure and its cyber security initiative. We have incorporated this information in relevant sections of the report. VA’s comments described steps that it had taken or planned to implement our recommendations. For example, the department asserted that it had taken steps that fully addressed our recommendation that it track and review OI&T historical workforce data and projections related to leadership retirements. In our follow up on the department’s implementation of our recommendations, we will evaluate whether the actions noted are fully responsive to this recommendation. The department also discussed planned actions for addressing our recommendation related to identifying future IT skills. Specifically, VA stated that it plans to, among other actions, include in the next skills gap analysis, long-term recommendations at the organizational level that show the types of skills each organization needs to increase and which proficiency level targets need the most emphasis. Further, in response to our recommendations that it revise OI&T’s documented processes related to project planning and requirements management, VA stated that it plans to document changes to these processes as the department transitions from PMAS to VIP. In addition, with respect to the majority of the remaining deficiencies in VA’s documented processes—related to risk management, project monitoring and control, quality assurance, and project scheduling—the department stated that our recommendations are to be addressed through the implementation of VIP, Agile processes, and other systems development process tools. According to VA, its actions in response to our recommendations are expected to be completed by the end of fiscal year 2017. 
If the department ensures that these and other activities it identified are appropriately documented and effectively implemented, then VA should be better positioned to sustain an IT workforce with the necessary knowledge, skills, and abilities to execute its mission and goals, and to ensure critical systems development and acquisition processes are implemented as intended and are consistently applied across IT projects. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Veterans Affairs and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. Should you or your staffs have questions on matters discussed in this report, please contact me at (202) 512-6304. I can also be reached by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. The objectives of this study were to determine (1) how the Department of Veterans Affairs (VA) is organized to manage and perform key information technology (IT)-related functions and the extent to which it has centralized the management of IT resources, (2) the extent to which VA has implemented effective IT human capital management, and (3) the extent to which VA has established key processes to effectively manage major system development and acquisition efforts. To address the first objective, we obtained and reviewed the department's documentation identifying and describing its IT organizational structure and functions. These included the IT strategic plan, governance plan, organizational descriptions contained within the VA's Functional Organization Manual, the IT organization chart, and VA's project management framework user guides. In addition, we reviewed memoranda, testimony from VA officials, and assessment reports that described the department's efforts to centralize the management of its IT resources and the challenges it faces in doing so. Further, we obtained and reviewed the Chief Information Officer's (CIO) IT transformation strategy intended to address the challenges in maintaining a centralized program. We also interviewed responsible program officials regarding the organizational structure and performance of key IT-related functions. To address the second objective, we reviewed information describing the department's management of its IT human capital, such as human capital strategic planning documentation, skills gap analyses, and human capital performance measures. We compared the department's actions to agency-level human capital management best practices that are also applicable to major agency components such as the Office of Information & Technology (OI&T). These best practices are those that we and the Office of Personnel Management (OPM) have identified. We also compared VA's actions against those required by VA's Office of Human Resources and Administration. These included practices for effective strategic human capital planning, workforce planning, and strategic training. The practices are identified in our Key Principles for Effective Strategic Workforce Planning; OPM's final regulations to implement certain provisions of the Chief Human Capital Officers Act of 2002; and VA's Workforce and Succession Planning directive. 
We also interviewed officials responsible for the department’s IT human capital management. For the third objective, we reviewed policies, procedures, and supporting documentation describing the department’s key processes for managing major IT system development and acquisition efforts. We assessed the processes against key best practices that the Software Engineering Institute (SEI) and we have identified. The practices we selected are fundamental to effective IT system development and acquisition. These included recognized practices for project planning, requirements management, risk management, project monitoring and control, project validation, quality assurance, and project schedules. These practices are identified in SEI’s Capability Maturity Model® Integration (CMMI®) for Development, Version 1.3; SEI’s CMMI® for Acquisition, Version 1.3; and our Schedule Assessment Guide. Further, we reviewed systems development lifecycle policies, procedures, and other supporting documentation such as templates for required project artifacts within the department’s ProPath system and Project Management Accountability System (PMAS). The scope of our review did not include assessment of VA’s implementation of these processes. Our methodology to determine the extent to which VA included systems development and scheduling best practices developed by SEI and us in its procedures and guidance included three levels of compliance: (1) the program office provided evidence that satisfied the elements of the best practice, (2) the program office provided evidence that satisfied some but not all of the elements of the best practice, and (3) the program office provided no evidence that satisfied the elements of the best practice. We conducted this performance audit from February 2015 to August 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Department of Veterans Affairs (VA) has had a long history of organizing its information technology (IT) functions. The information contained in table 11 describes key events in the department’s efforts. The Department of Veterans Affairs (VA) Office of Information & Technology (OI&T) developed a 6-year human capital strategic plan in September 2013. The information contained in table 12 describes the goals, objectives, and strategies outlined in OI&T’s strategic human capital plan. In addition to the contact named above, Mark Bird (Assistant Director), Nicole Jarvis (Analyst in Charge), Christopher Businsky, Juana Collymore, Sharhonda Deloach, Jason Lee, Lee McCracken, and Christy Tyson made key contributions to this report.
VA relies extensively on IT to deliver services to millions of our nation's veterans. VA reported spending approximately $3.9 billion in 2015 and received appropriations of approximately $4.1 billion in 2016 to improve and maintain its IT resources. Even as the department has engaged in various attempts to improve its IT management capabilities, GAO has issued numerous reports that highlighted challenges in its efforts. This study was to determine (1) how VA is organized to manage and perform key IT-related functions and the extent to which it has centralized the management of IT resources, (2) the extent to which VA has implemented effective IT human capital management, and (3) the extent to which VA has established key processes to effectively manage major system development and acquisition efforts. To conduct its study, GAO reviewed VA policies, procedures, and other documentation and compared the department's processes to best practices for human capital management and IT systems development and acquisition. GAO also interviewed VA officials. The Department of Veterans Affairs (VA) performs key information technology (IT)-related functions, such as leadership, strategic planning, systems development and acquisition, and systems operations and maintenance, largely through its centralized Office of Information & Technology (OI&T), led by the Chief Information Officer (CIO). VA's two IT governance boards are intended to play a role in other key functions, such as investment management. Nevertheless, the department faced challenges in effectively managing IT, including (1) preventing IT activities from occurring outside the control of OI&T, (2) maintaining collaboration between OI&T and business units, and (3) delivering efficient and cost-effective IT capabilities. In response to these and other challenges, the CIO initiated an effort in January 2016 to transform OI&T into a more veteran-focused organization that emphasized transparency, accountability, innovation, and teamwork. The transformation strategy calls for OI&T to stabilize and streamline core processes and platforms, mitigate weaknesses from information security and GAO assessments, and improve outcomes by institutionalizing a new set of IT management capabilities. The CIO intends to complete the transformation by the first quarter of 2017. Key to an agency's success in effectively managing its IT systems is sustaining a workforce with the necessary knowledge, skills, and abilities to execute a range of management functions that support its mission and goals. VA took steps to implement effective IT human capital practices by documenting an IT human capital strategic plan and initiating an update based on changed priorities, analyzing workforce data, identifying skill gaps for the current year, and implementing an IT training program. However, OI&T had not consistently implemented all of these practices. Specifically, the office had not (1) tracked and reviewed historical and projected leadership retirements and (2) identified skills and competencies needed beyond the current year. Without annually tracking and reviewing data related to leadership retirements or identifying skills needed in future years, OI&T faces a risk of being unprepared to effectively respond to vacancies in key positions and not having the capabilities to deliver IT support that can contribute to improved services for veterans. Key to successful development and acquisition of IT services is establishing documented processes that reflect best practices. 
Although there were gaps in some areas, VA's processes generally included best practices for product validation, project planning, requirements management, risk management, project monitoring and control, and process and product quality assurance. However, VA's processes for developing and maintaining a project schedule did not fully address the majority of the associated best practices. Ensuring that these processes address all key practices will assist the department in effectively managing its IT system development and acquisitions. GAO is recommending that VA take two actions to assist the department in sustaining a workforce with the necessary knowledge, skills, and abilities to execute its mission and goals, as well as six actions to assist the department in developing comprehensive processes that reflect systems development best practices. VA generally agreed with GAO's conclusions and concurred with GAO's eight recommendations.
The enormous challenge involved in making information systems Year 2000 compliant is managerial as well as technical. Agencies’ success or failure will largely be determined by the quality of their program management and executive leadership. The outcome of these efforts will also depend on the extent to which agencies have institutionalized key systems development and program management practices, as well as on their ability to execute large-scale software development or conversion projects. To assist agencies with these tasks, our Year 2000 assessment guide discusses the scope of these challenges and offers a structured, step-by-step approach for reviewing and assessing an agency’s readiness to handle the Year 2000 problem. The assessment guide states that the Year 2000 program should be managed as a single, large information systems project. The assessment guide describes in detail the five phases of a Year 2000 conversion process (i.e., awareness, assessment, renovation, validation, and implementation). Each of these phases represents a major Year 2000 program activity or segment. To successfully address the Year 2000 problem, effective program and project management is required for all five phases. Appendix I contains a description of these phases. To make its information systems Year 2000 compliant, IRS must (1) convert existing systems by modifying application software and data and upgrading hardware and systems software, if needed; (2) replace systems if correcting them is not cost-beneficial or technically feasible; or (3) retire systems if they will not be needed by 2000. IRS’ Chief Information Officer (CIO) established several parallel efforts to help ensure that IRS achieves Year 2000 compliance by January 1999. These efforts include creating the Century Date Change Project Office, which is responsible for coordinating the conversion of most existing information systems that can be made Year 2000 compliant as well as ensuring that all systems are converted in accordance with the same standards. The Century Date Change Project Office adapted our Year 2000 conversion model phases and established a 14-step process to track the progress of its Year 2000 conversion efforts. Some of the steps involved in converting existing systems include (1) converting applications; (2) upgrading hardware and/or systems software for mainframes, minicomputers/file servers, and personal computers; (3) upgrading telecommunications networks; and (4) ensuring that external data exchanges are Year 2000 compliant. The other parallel Year 2000 efforts are 2 major replacement efforts: (1) the replacement of the Distributed Input System (DIS) and the Remittance Processing System (RPS) with the Integrated Submission and Remittance Processing (ISRP) system and (2) the consolidation of the mainframe computer processing operations at 10 service centers to 2 computing centers. IRS personnel use DIS to input taxpayer data and RPS to input remittance data. According to IRS, these systems are old, and it is not cost-beneficial to make them Year 2000 compliant. Therefore, IRS decided to replace DIS and RPS with ISRP. A two-phase pilot of ISRP is under way during 1998 at IRS’ Austin Service Center. Nationwide implementation is scheduled for January 1999. 
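For readers unfamiliar with what converting an application for the Year 2000 involves, the sketch below illustrates one common remediation technique, date windowing, in which a stored two-digit year field is interpreted against a pivot year instead of being expanded in place. The pivot value and record layout are illustrative only and are not taken from IRS systems.

    # Minimal, hypothetical example of date "windowing," one common Year 2000
    # remediation technique: interpret a stored two-digit year against a pivot
    # rather than expanding the field. The pivot value is illustrative only.
    PIVOT = 50  # two-digit years below 50 are read as 20xx, all others as 19xx

    def expand_year(two_digit_year: int) -> int:
        if not 0 <= two_digit_year <= 99:
            raise ValueError("expected a two-digit year field")
        century = 2000 if two_digit_year < PIVOT else 1900
        return century + two_digit_year

    # e.g., a legacy record that stores a tax year in two digits
    for yy in (98, 99, 0, 1):
        print(f"stored year {yy:02d} is interpreted as {expand_year(yy)}")
    # stored year 98 is interpreted as 1998
    # stored year 99 is interpreted as 1999
    # stored year 00 is interpreted as 2000
    # stored year 01 is interpreted as 2001

Windowing avoids expanding every stored date field, but it only defers the problem past the chosen pivot year, which is one reason agencies also weighed full four-digit expansion or outright system replacement.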
As a part of its mainframe consolidation effort, IRS is to (1) replace and/or upgrade service center mainframe hardware, systems software, and telecommunications infrastructure; (2) replace about 16,000 terminals that support frontline customer service and compliance operations; and (3) replace the communication replacement system that provides security functions for on-line taxpayer account databases. Replacements of the terminals and the communication replacement system are critical to IRS’ achieving Year 2000 compliance. IRS is undertaking the non-Year 2000 aspects of mainframe consolidation because it concluded that consolidation would satisfy the Office of Management and Budget’s Bulletin 96-02, which directs agencies to consolidate information processing centers; be consistent with IRS’ planned modernization architecture; and save an estimated $356 to $500 million from fiscal years 1997 through 2003. IRS’ original mainframe consolidation schedule called for moving mainframe computer processing operations and the communication replacement system from 10 service centers to 2 computing centers between December 1997 and December 1998. The mainframe consolidation project is to provide the hardware, systems software, and telecommunications infrastructure for 40 mission-critical systems whose application software is being converted under the direction of the Century Date Change Project Office. IRS is experiencing delays in completing conversion efforts for its existing systems and major systems replacement efforts. IRS has made the most progress in converting its applications for the systems it has deemed mission-critical. Conversion efforts for systems software and hardware, telecommunications networks, and external data exchanges are still in the initial steps of IRS’ 14-step conversion process. The completion schedule for mainframe consolidation, with the exception of the Year 2000 critical aspects, has been extended beyond December 1998. Table 1 shows how IRS has allocated each of the 14 steps in its conversion process to our Year 2000 assessment, renovation, validation, and implementation phases. Much of IRS’ early Year 2000 efforts in 1996 focused on the awareness and assessment phases of the applications for existing information systems controlled by the CIO. In May 1997, IRS began assessing the date dependencies of applications for information systems that were controlled by either field offices or business functional areas (hereafter referred to as field/customer systems). As a result of the CIO and field/customer system assessments that were completed as of March 31, 1998, IRS had identified 127 mission-critical systems, including 7 telecommunications systems. As of April 24, 1998, IRS reported that it had completed the first 12 steps of its 14-step conversion process on applications for about 46 percent (59 systems) of its 127 mission-critical systems. In doing so, IRS fell short of its goal of having the applications for 66 systems converted by January 31, 1998. IRS’ schedule calls for completing the first 12 steps for the remaining 54 percent (68 systems) of the mission-critical systems by January 1999. IRS officials said that they believe they are on track for meeting that goal. IRS is still in the initial steps of its 14-step conversion process for most of its systems software, hardware, and telecommunications network components. IRS is also still in the initial stages of converting its external data exchanges. 
Appendix II provides additional information on the status of the conversion process for each of these areas. Of these infrastructure areas, according to IRS, telecommunications networks present the most significant conversion challenge and may be at the highest risk for not being done by January 1999. According to IRS, the capability to exchange information, both voice and data, between various computer systems is the backbone of IRS’ ability to perform all of its tax processing and customer service functions. IRS uses a telecommunications network that is supported through the Department of the Treasury and additional networks that are unique to IRS. As of March 10, 1998, IRS had an inventory of the components that are included in Treasury’s network and was verifying a preliminary inventory of the components in the networks unique to IRS. At the time of our review, a contractor was doing a risk assessment to help develop a conversion schedule so that the most important work would be scheduled first to minimize adverse impacts if IRS is not able to complete all of its telecommunications work by January 1999. IRS’ systems replacement efforts (i.e., ISRP and mainframe consolidation) are experiencing some delays. For example, certain ISRP software development that was to be completed in April 1998 is now scheduled to be done by June 16, 1998. As a result, the time available for testing before the start of the second phase of the pilot has been reduced. ISRP project office officials still anticipate that ISRP will be implemented nationwide by December 1998. The completion schedule for consolidating the data processing portion of service center operations has been extended from December 1998 until after June 1999. According to IRS officials, the need for this extension stems from numerous factors, including field office concerns about the ambitious schedule and expanded business requirements for security, disaster recovery, and testing. At the time we were finalizing this report, IRS officials said they were assessing various technical alternatives for meeting the expanded business requirements. They said they expect a revised business case and budget estimates that reflect the impact of both the schedule and requirements changes to be completed in June 1998. IRS officials said they expect to complete the Year 2000 portions of mainframe consolidation (i.e., terminal replacement and the communication replacement system) by the original completion date of December 1998. However, the communication replacement system has been experiencing some difficulties and is somewhat behind its original schedule for system testing. In our briefing to your office, we said that IRS’ ability to meet future milestones was at increased risk because IRS lacked a master schedule showing the relationships and interdependencies among the many Year 2000 efforts that must be completed in 1998. According to our assessment guide, a master conversion and replacement schedule should be a part of an agency’s Year 2000 program plan. 
This schedule could be used to (1) establish the sequential relationships among all of the tasks associated with all Year 2000 activities; (2) identify how much a task’s milestone completion date could slip without affecting other tasks; (3) help determine whether programming and testing resources will be available when needed, given the concurrent milestone completion dates for various tasks; and (4) provide a tool for assigning programming and testing resources that are essential to the success of all efforts in the most efficient manner. Understanding the schedule and resource interdependencies of all the key activities that are needed to make its systems Year 2000 compliant is imperative if IRS is to make its mission-critical systems Year 2000 compliant on time. For example, IRS’ mainframe consolidation project is behind schedule because of start-up delays and problems with implementing the systems software that is being used to consolidate one of the mainframe platforms. As a result of the problems with the commercial off-the-shelf software, additional testing is being done that was not initially expected. This testing requires staff from IRS’ Information Systems Office of Technical Support, which is also supporting the Century Date Change Project Office in other efforts. A master schedule showing both task and resource requirements should help identify whether any of the individual Year 2000 efforts will require the same staff for the same time period. Recognizing that several major and complex projects, including application software changes that are needed to implement recent tax legislation, must be completed before the 1999 filing season, in November 1997, the Commissioner of Internal Revenue announced the establishment of an executive steering committee. This committee is to identify risks to the 1999 filing season and the entire Year 2000 effort and take actions to mitigate those risks. As a part of this effort, IRS developed a Century Date Change Project Schedule for its Year 2000 activities. While the project schedule identifies the tasks for major Year 2000 activities, their corresponding start and finish dates, and the primary organizations responsible for them, the schedule does not yet establish a link between related tasks or analyze how the timing of the various tasks may affect resource availability. Until these actions are complete, IRS cannot project whether resources will be available when needed for concurrent tasks. Thus, IRS faces the risk that resources may not be available when needed. On February 12, 1998, IRS issued a statement of work for a contractor to provide program management support to the Commissioner’s newly established executive steering committee. One of the support activities that IRS identified in the statement of work was the development of an integrated schedule identifying (1) the interfaces and dependencies among Year 2000 projects and (2) efforts to implement legislative changes for the 1999 filing season. IRS expects this schedule, along with related tasks and dependencies, to be available in June 1998. If properly developed, this schedule should meet the intent of the master conversion and replacement schedule called for in our assessment guide. The contractor is to work closely with IRS staff who are responsible for the various Year 2000 efforts to assess resource needs for new requirements or resource shortages for existing requirements. 
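The integrated schedule that the contractor is to develop would, in effect, need to capture exactly these task-to-task dependencies so that slack (how far a task can slip without delaying its successors) and competing resource demands can be computed rather than guessed at. The following is a minimal sketch of such a dependency-and-slack calculation; the task names, durations, and dependencies are hypothetical and are not taken from IRS' actual project schedule.

```python
# Minimal forward/backward pass over a task graph to compute slack.
# All task names, durations (in weeks), and dependencies are hypothetical.
tasks = {
    "convert_application":      {"duration": 8, "depends_on": []},
    "upgrade_systems_software": {"duration": 6, "depends_on": []},
    "integration_test":         {"duration": 4, "depends_on": ["convert_application",
                                                               "upgrade_systems_software"]},
    "implement_in_production":  {"duration": 2, "depends_on": ["integration_test"]},
}

# Forward pass: earliest finish for each task.
earliest_finish = {}
def finish(name):
    if name not in earliest_finish:
        task = tasks[name]
        start = max((finish(d) for d in task["depends_on"]), default=0)
        earliest_finish[name] = start + task["duration"]
    return earliest_finish[name]

project_end = max(finish(name) for name in tasks)

# Backward pass: latest finish, processing successors before their predecessors.
latest_finish = {name: project_end for name in tasks}
for name in sorted(tasks, key=finish, reverse=True):
    for dep in tasks[name]["depends_on"]:
        latest_finish[dep] = min(latest_finish[dep],
                                 latest_finish[name] - tasks[name]["duration"])

for name in tasks:
    slack = latest_finish[name] - earliest_finish[name]
    status = "critical" if slack == 0 else f"can slip {slack} weeks"
    print(f"{name}: {status}")
```

A real master schedule would layer resource assignments onto the same structure, so that two tasks drawing on the same staff in the same period surface as a conflict even when neither task is on the critical path.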
The contractor also is to identify and recommend alternatives for allocating resources to help IRS meet all of its requirements. Contingency planning was the second risk area we identified during our briefing to your office. Under IRS’ contingency planning approach, IRS may be jeopardizing the continuity of operations for core business processes in the event that Year 2000-induced system failures occur. IRS’ Century Date Change Project Office has developed a “Century Date Change Contingency Management Plan.” This plan states that “developing contingency procedures for all of IRS’ numerous systems will require a significant amount of knowledgeable resources, in most cases the same resources assigned to perform the actual century date change conversion effort.” To minimize the number of contingency plans that IRS would have to develop, the contingency management plan calls for developing contingency plans only for those business functions or processes that are supported by application software projects that are at risk of not being made Year 2000 compliant on schedule. The Century Date Change Project Office has established criteria to identify such projects. For these projects, IRS is to initiate a business function impact analysis. Once that analysis is complete, technical and business owners evaluate available alternatives, including using any existing contingency procedures, such as manual procedures, or an alternative technological solution, such as commercial off-the-shelf software. IRS plans to use a similar approach for initiating contingency plans for business functions when the conversion of infrastructure areas, such as systems software, external data exchanges, and telecommunications network components, falls behind schedule. IRS’ “Century Date Change Contingency Management Plan” does not address the likelihood that information systems that are converted on schedule may experience system failures. As a result, IRS will be ill-prepared to effectively manage Year 2000-induced system failures that could affect core business processes. IRS’ contingency management plan does not address the possibility that (1) IRS may have overlooked a date dependency during its assessment phase of applications or infrastructure areas or (2) even if system conversion and replacement efforts are completed on time and fully tested, unexpected failures may occur. Aspects of contingency planning are under way for IRS’ replacement projects (i.e., ISRP and mainframe consolidation). For example, the ISRP project office has developed a contingency plan that identifies (1) various risks to the ISRP pilot and nationwide implementation, (2) the probability of those risks, and (3) contingency options for addressing those risks. IRS is also taking steps to make its existing service center mainframe computers Year 2000 compliant in the event that the consolidation of tax processing to the computing centers is not completed according to schedule. IRS expects to make its existing service center mainframe computers Year 2000 compliant by January 1999. Also, as part of a larger effort to enhance IRS’ disaster recovery capabilities, IRS officials said they have identified expanded disaster recovery requirements for service center data processing. At the time we were finalizing this report, IRS officials said they were assessing various technical alternatives for meeting those requirements so they can be incorporated in the mainframe consolidation project. 
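The business function impact analysis at the center of this approach amounts to mapping each business process to the systems that support it and flagging the processes whose supporting systems are at risk. A minimal sketch of that mapping follows; the process names, system names, and status flags are hypothetical and are not drawn from IRS' plans. Note that the trigger sketched here is deliberately broader than a late software project: any supporting system that is behind schedule or not yet fully tested flags its processes.

```python
# Hypothetical mapping of core business processes to the systems that support them.
process_to_systems = {
    "process tax returns":   ["submission_processing", "mainframe_pipeline"],
    "issue refunds":         ["mainframe_pipeline", "payment_data_exchange"],
    "answer taxpayer calls": ["customer_service_terminals"],
}

# Hypothetical per-system status drawn from conversion tracking.
system_status = {
    "submission_processing":      {"behind_schedule": True,  "fully_tested": False},
    "mainframe_pipeline":         {"behind_schedule": False, "fully_tested": True},
    "payment_data_exchange":      {"behind_schedule": False, "fully_tested": False},
    "customer_service_terminals": {"behind_schedule": False, "fully_tested": True},
}


def needs_contingency_plan(process: str) -> bool:
    """Flag a process if any supporting system is behind schedule or not fully
    tested, rather than only when a software project misses its milestones."""
    return any(
        system_status[s]["behind_schedule"] or not system_status[s]["fully_tested"]
        for s in process_to_systems[process]
    )


for process in process_to_systems:
    if needs_contingency_plan(process):
        print(f"develop and test a contingency plan for: {process}")
```

The point of such a mapping is that contingency decisions key off business processes rather than individual software projects, which is the distinction at the heart of the exposure draft and recommendations discussed below.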
Our exposure draft on business continuity and contingency planning states that agencies must start business continuity and contingency planning now to reduce the risk of Year 2000 business failures. Among other things, the exposure draft states that agencies need to do a business impact analysis to determine the effect of mission-critical system failures on the viability of agency operations. This analysis is to include examining business priorities; dependencies; service levels; and, most important, the business process dependency on mission-critical information systems. According to our exposure draft, the business impact analysis triggers the development of contingency plans for each core business process, including any information system components that support that process. Contingency plans would also address the actions IRS may take, for example, to notify taxpayers in the event that Year 2000 failures cause significant delays in processing tax returns and issuing refunds. IRS has undertaken efforts in the past to identify its core business processes as a part of various reengineering efforts that could be the starting point for a business impact analysis. For example, in 1996, as part of an effort to redesign its work processes, IRS began an effort to identify and map core business processes. IRS is still assessing some of its infrastructure components and faces the risk of not completing all of its Year 2000 efforts by January 1999. Moreover, IRS, like other agencies, is likely to encounter Year 2000-induced failures in some systems that were fully assessed, tested, and implemented. IRS’ “Century Date Change Contingency Management Plan” focuses on developing contingency plans only for business functions that are supported by application software projects that are behind schedule. The possibility exists that existing contingency or disaster recovery plans that were developed for other than Year 2000 purposes may be applicable to Year 2000 failures. However, if these plans are not applicable, under IRS’ “Century Date Change Contingency Management Plan,” IRS has no assurance that its core business processes will be able to continue to function, albeit possibly at some reduced level of service, in the event that Year 2000-induced failures occur in systems that were converted according to schedule. We recommend that the Commissioner of Internal Revenue take the following steps to better ensure that IRS has adequately assessed the vulnerabilities of its core business processes in the event of Year 2000-induced system failures: solicit the input of business functional area officials to identify IRS’ core business processes and prioritize those processes that must continue in the event of Year 2000-induced failures; map IRS’ mission-critical systems to those core business processes; determine the impact of information system failures on each core business process; assess any existing business continuity and contingency plans that may have been developed for non-Year 2000 reasons to determine whether these plans are applicable to Year 2000-induced failures; and develop and test contingency plans for core business processes if existing plans are not appropriate. We requested comments on a draft of this report from the Commissioner of Internal Revenue or his designated representative. IRS provided us with comments during a May 4, 1998, meeting with the Acting Chief Information Officer and his staff.
Those comments were reiterated in a May 8, 1998, letter from the Commissioner of Internal Revenue, which is reproduced in appendix III. The Commissioner said that IRS agrees that it must develop contingency plans to manage any adverse impacts of a less-than-fully successful century date program, and that IRS will take the following actions to address our recommendations regarding contingency planning. He said that to leverage the limited resources on the remaining Year 2000 conversion and testing efforts, IRS will focus contingency planning on those areas that have the greatest risk and highest business impact. Specifically, the Commissioner said the Acting Chief Information Officer will be working with the other Chief Officers to document IRS’ current business processes, the systems that support them, the impact if these processes or systems fail, and the probability or potential for Year 2000 risk. The Commissioner said that contingency plans will be developed for those areas that meet all of the following criteria: (1) high business impact, (2) high risk associated with failure, and (3) high probability of systems failure/instability due to Year 2000 conversion. We believe that these actions, if implemented properly, address most of the steps we identified in our contingency planning recommendations and should put IRS in a better position to respond to unexpected failures as a result of the Year 2000 problem than was the case under its previous contingency planning approach. However, we remain concerned that IRS will be ill-prepared in the event a failure occurs in a high business impact area that is supported by a system that IRS assesses as having a low probability of failure, but subsequently fails unexpectedly. We recognize that IRS needs to leverage its resources, particularly its information systems resources, to ensure that it completes all of the required Year 2000 conversion work on schedule. However, we believe it would be prudent for IRS’ business officials who are responsible for high business impact areas, regardless of the perceived Year 2000 risks, to begin identifying alternative business procedures or processes that may need to be implemented in the event of unexpected systems failure. In addition to commenting on our recommendations, IRS provided us with updated information on the status of its Year 2000 efforts. We have incorporated that updated information in the report where appropriate. The updated information is also included in IRS’ “Status Update Summary,” which is also reproduced in appendix III. To determine IRS’ progress and identify the risks facing its Year 2000 conversion efforts, we interviewed officials from the National Office, computing centers, service centers, regions, and district offices. We analyzed and compared IRS’ planning, budget, and performance-monitoring documentation with our Year 2000 assessment guide as a part of a structured approach for reviewing IRS’ conversion efforts. We did not review existing business continuity or contingency plans that IRS may have been developing for other than Year 2000-induced failures. We conducted our work in accordance with generally accepted government auditing standards between October 1996 and May 1998. 
We are sending copies of this report to the Subcommittee’s Ranking Minority Member; the Chairmen and Ranking Minority Members of the House Committee on Ways and Means and the Senate Committee on Finance, Subcommittee on Taxation and IRS Oversight; various other congressional committees; the Secretary of the Treasury; the Commissioner of Internal Revenue; the Director of the Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. The major contributors to this report are listed in appendix IV. Please contact me at (202) 512-9110 if you have any questions about this report. Our Year 2000 assessment guide describes in detail the five phases that agencies need to complete when making their systems Year 2000 compliant. Each of the following phases represents a major Year 2000 program activity or segment: Awareness. This phase entails defining the Year 2000 problem, gaining executive level support and sponsorship, and ensuring that everyone in the organization is fully aware of the issue. It is also during this phase that the agency is to establish a Year 2000 program team and develop an overall strategy. Assessment. This phase entails assessing the Year 2000 impact on the agency, identifying core business areas, inventorying and analyzing the systems supporting the core business areas, and prioritizing the conversion or replacement of these systems. It is also during this phase that the agency is to initiate contingency planning and identify and secure the necessary resources. Renovation. This phase deals with converting, replacing, or eliminating selected systems and applications. In so doing, it is important that the agency consider the complex interdependencies among the systems and applications. Validation. This phase deals with testing, verifying, and validating all converted or replaced systems and applications and ensuring that they perform as expected. This entails the agency testing in an operational environment the performance, functionality, and integration of converted or replaced systems, applications, and databases. Implementation. This phase entails deploying and implementing Year 2000-compliant systems and components. It is also during this phase that the agency’s data exchange contingency plans are implemented, if necessary. This appendix contains additional information on the status of IRS’ infrastructure areas that were in the initial steps of IRS’ 14-step conversion process at the time of our review. These initial steps are comparable to either our assessment or renovation phase of our Year 2000 Conversion Model. According to the Office of Management and Budget’s guidelines, agencies were to have completed the assessment phase by June 1997. IRS placed a priority on assessing its mainframe computers first because these computers encompass most of IRS’ tax processing systems. IRS is still assessing its telecommunications networks, external data exchanges, and the systems software and hardware for minicomputers/file servers and personal computers. IRS has completed its assessment of its mainframe computers and has scheduled their conversion. All existing mainframe hardware and systems software are currently scheduled to be converted between January 1998 and January 1999. IRS’ mainframe computer systems constitute the core of IRS’ data processing activities, including the processing of tax return and remittance data and the storage of taxpayer account and collection activity data. 
These systems are currently located at IRS’ 10 service centers, the Martinsburg Computing Center, and the Detroit Computing Center. Most of IRS’ mainframe computers are being replaced as a part of IRS’ mainframe consolidation project. In the event that not all centers can be consolidated by 2000, IRS is proceeding with plans to make its existing mainframe hardware and systems software at the service centers Year 2000 compliant. Some mainframe computers, such as those supporting master-file processing at the Martinsburg Computing Center and others at the Detroit Computing Center, are not included in the mainframe consolidation project. These mainframe computers are to be upgraded to achieve Year 2000 compliance by January 1999. IRS is still in the assessment phase for its telecommunications networks. IRS relies extensively on telecommunications networks to accomplish its mission. According to the IRS’ Year 2000 Telecommunications Project Management Plan, the IRS’ telecommunications network is a critical component of IRS’ tax processing and customer service operations. The capability to exchange information, both voice and data, among its various computer systems is the backbone of IRS’ ability to perform all of its tax processing and customer service functions. According to the Commissioner’s Executive Steering Committee documents, the telecommunications networks conversion is significantly behind schedule for meeting the January 1999 milestone. Although IRS has established conversion schedules for the mission-critical areas of its telecommunications networks and is integrating these schedules into an overall plan, many of the individual components that make up these mission-critical areas have not been fully assessed. Generally, these components have not been fully assessed because IRS’ inventory of telecommunications resources has not been sufficiently detailed to allow IRS to (1) confirm the Year 2000-compliant status of all telecommunications components, (2) develop detailed conversion schedules, and (3) track conversion progress against those schedules. In part, the inventory has been difficult to compile because IRS’ telecommunications networks include both IRS-owned and multiple vendor-maintained networks and equipment, such as the Treasury-supported network, that cannot be easily combined to serve as a comprehensive source of information. IRS, Treasury, and contractors have formed integrated teams to address the Year 2000 telecommunications issues. IRS is currently validating the inventory of the Treasury-supported network by conducting site-specific inventories at its service centers. In addition to needing quality inventory data, IRS’ conversion solutions and plans for some areas are largely dependent on the ability of vendors to provide Year 2000-compliant products in a timely manner. After these products are received, IRS must test them to ensure that they work within IRS’ own data processing environment. According to IRS officials, a test plan for the Treasury-supported network has been developed. Given the large extent to which IRS relies on telecommunications networks to accomplish its mission and the high degree of risk associated with not making IRS’ telecommunications networks Year 2000 compliant, IRS’ telecommunications project plan calls for steps to mitigate this risk. 
The plan calls for initiating efforts to ensure that the portions of IRS’ telecommunications networks that are most critical to IRS’ operations are scheduled first and receive the necessary resources in accordance with their priority to IRS’ operations. At the time of our review, a contractor was doing a risk assessment to help develop a conversion schedule so that the most important work is scheduled first to minimize adverse impacts if IRS is not able to complete all of its telecommunications work by January 1999. According to IRS documents, this risk assessment will also trigger the development of contingency plans for mission-critical systems that are found to be at risk for not being converted on time. As an additional contingency measure, according to IRS, it is building redundancy into telecommunications networks to provide limited access if a portion of the network fails due to Year 2000 compatibility issues. IRS hopes to complete its assessment of external data exchanges by June 30, 1998. IRS, like most organizations, exchanges data in an electronic format with other organizations for a variety of purposes. These data exchanges involve both other government agencies and private sector organizations. For example, IRS (1) transmits information electronically to the Treasury’s Financial Management Service (FMS) for the purposes of reporting revenue receipts and the issuance of refund checks and (2) receives wage information (W-2) from the Social Security Administration (SSA) to verify the accuracy of individuals’ reported income. IRS also receives interest income data from banks and provides information to many states to assist them with taxpayer compliance activities. In September 1997, IRS initiated a plan to identify (1) all of its external data exchange organizations and (2) the actions needed to ensure that data exchanges are not adversely impacted by the Year 2000 problem. IRS has notified these organizations that Treasury has adopted a four-digit date field. IRS reports that as part of its application conversion efforts it has already converted more than 50 percent of the more than 300 data files that it exchanges with more than 400 organizations. A key portion of the remaining work involves contacting each of the organizations and verifying that it is aware of IRS’ plans for conversion and that it has taken steps to ensure the continued receipt and transmission of data. IRS has also identified a group of organizations whose external data exchanges are most critical to IRS’ operations and plans to commit additional attention and resources to these organizations to ensure that Year 2000 data exchange issues are thoroughly addressed. These organizations include government agencies, such as FMS, SSA, and the Federal Reserve, as well as private firms that are involved in activities such as the Electronic Federal Tax Payment System for federal employment tax deposits and banks that provide “lockbox” processing of $170 billion in remittances annually. To ensure that these most critical areas are thoroughly addressed, IRS has hired a contractor to conduct site visits to validate that the systems that receive/provide these data are on track to be Year 2000 compliant. At the time of our review, IRS was validating its inventory of external data exchanges and obtaining agreements regarding the organizations’ plans for converting their systems so that data exchanges can be made Year 2000 compliant.
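Making an external data exchange Year 2000 compliant generally comes down to agreeing on a record layout in which year fields carry four digits and converting the files that still carry two. The sketch below illustrates that kind of field expansion; the fixed-width layout, field positions, and pivot are hypothetical and are not the actual IRS or Treasury exchange formats.

```python
# Hypothetical fixed-width exchange record: a 9-character identifier followed
# by a YYMMDD date. The converted layout carries the same date as CCYYMMDD.
PIVOT = 30  # hypothetical window: 00-29 become 20xx, 30-99 become 19xx


def convert_record(old_record: str) -> str:
    ident, yy, rest = old_record[:9], old_record[9:11], old_record[11:]
    century = "20" if int(yy) < PIVOT else "19"
    return ident + century + yy + rest


old = "123456789" + "991231"   # identifier plus December 31, 1999, as YYMMDD
assert convert_record(old) == "123456789" + "19991231"
```

Because both sender and receiver must cut over to the expanded layout in step, the remaining work described above centers on contacting each exchange partner and confirming its conversion plans and timing.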
For the most part, IRS has completed its assessment of its minicomputer/file-server hardware and systems software. In the last 10 to 15 years, IRS has developed a number of information systems that use minicomputer and file-server technologies, rather than the mainframe-based technology that it has used for the past 30 years. IRS has identified 39 mission-critical systems that use minicomputer and file-server technologies. These systems support a variety of programs, including electronic filing, customer service, returns processing, fraud detection, criminal investigation, and compliance research activities. Many of these systems input data to IRS’ mainframe-based systems and, as such, are key elements of IRS’ tax processing system. Because organizational control over these systems is scattered across various project offices within IRS’ information systems organization and business or functional units, IRS has taken longer to identify and assess these systems than it has taken for its mainframe-based systems. The lack of an accurate inventory has also hindered progress in completing the assessment of these systems. IRS is relying on vendors to provide the Year 2000 solution for 12 platforms (i.e., a combination of computer hardware and systems software). These platforms currently support 39 mission-critical systems and several other important applications. According to the Commissioner’s Executive Steering Committee documents, IRS established March 15, 1998, as the date by which IRS wanted some assurance from the vendors that a Year 2000 solution existed for these platforms. Of the 12 platforms, 2 will be retired (1 of which will be replaced when ISRP is implemented nationwide). IRS has determined that two platforms cannot be made Year 2000 compliant, and, at the time of our review, IRS was still evaluating its replacement options for them. As of April 10, 1998, IRS had either received or was about to receive the eight remaining platforms. IRS had identified a relational database as its greatest risk for its minicomputers and file servers because it supports 15 mission-critical systems, and IRS is the vendor’s only customer for this product. IRS officials told us that as of May 8, 1998, the vendor had provided a version of this database to IRS for testing. IRS officials said that once testing is completed, they will take the necessary steps to procure this database and make it available to the various users. Despite having identified the Year 2000 solutions for various minicomputer/file-server platforms, as of May 8, 1998, IRS had not yet completed a plan for migrating business or functional organizations from their current minicomputer/file-server platforms to the ones that are Year 2000 compliant. Specifically, as of May 8, 1998, IRS was beginning to develop for business and functional organizations (1) a schedule of critical tasks, (2) the associated milestones for completing the tasks, and (3) guidance on how to complete the tasks. IRS is in the assessment phase for personal computer hardware and commercial software (i.e., systems software and applications). IRS uses personal computers extensively for a wide range of functions, including (1) providing customer service staff with access to taxpayer account databases, (2) allowing compliance staff to collect detailed information and do complex calculations while working in the field, (3) entering information from tax returns and remittances, and (4) doing essential administrative functions.
IRS has identified 134,000 personal computers that it must assess to determine if the hardware and/or the associated systems software is Year 2000 compliant or must be converted. IRS has estimated that approximately 60,000 of these computers support mission-critical functions. Of these 60,000 computers, IRS is currently replacing approximately 16,000 as part of IRS’ service center mainframe consolidation project. IRS’ goal is to convert all personal computers by January 1999. The Century Date Change Project Office is assigning Year 2000 conversion responsibility for personal computers to the organizations within IRS that currently share responsibility for purchasing and maintaining personal computers and their associated commercial software. According to the Century Date Change Project Office, as of March 1998, it had assigned responsibility for converting 75,000 of these personal computers.
A. Carl Harris, Assistant Director
Joanna Stamatiades, Evaluator-in-Charge
Robert Arcenia, Senior Evaluator
Linda Standau, Senior Evaluator
Ronald Heisterkamp, Evaluator
GAO reviewed the Internal Revenue Service's (IRS) efforts to have its information systems function correctly when processing dates beyond December 31, 1999, focusing on: (1) IRS' progress in converting its systems according to the guidelines in GAO's year 2000 assessment guide; (2) the risks IRS faces to completing the year 2000 effort on time; and (3) risks to the continuity of IRS operations in the event of year 2000-induced system failures. GAO noted that: (1) according to IRS, before January 1999, it needs to complete 12 steps of its 14-step process for converting: (a) the applications for its existing systems; (b) the telecommunications networks; and (c) systems software and hardware for mainframes, minicomputers/file servers, and personal computers; (2) in addition, before January 1999, IRS needs to: (a) ensure that external data exchanges will be year 2000 compliant; (b) implement the Integrated Submission and Remittance Processing System and, at a minimum, the year 2000 portions of mainframe consolidation; and (c) modify application software to implement tax law changes for the 1999 and 2000 filing seasons; (3) if these efforts are not completed, IRS' tax processing and collection systems may fail to operate or may generate millions of erroneous tax notices, refunds, interest calculations, and account adjustments; (4) for the conversion of its existing systems, IRS has made more progress on its applications than on its information systems infrastructure; (5) specifically, as of April 24, 1998, IRS reported that it had completed the first 12 steps of its 14-step conversion process for applications for about 46 percent of the 127 systems it has deemed as mission-critical; (6) IRS expects to convert the applications for the remaining 54 percent of the mission-critical systems by January 1999; (7) the two major systems replacement efforts, which are also expected to follow IRS' 14-step conversion process, are experiencing some schedule slippages; (8) IRS officials said they expect to complete the year 2000 portions of the mainframe consolidation by the original completion date of December 1998; (9) GAO identified two risk areas for IRS' year 2000 effort: (a) the lack of an integrated master conversion and replacement schedule; and (b) a limited approach to contingency planning; (10) since GAO's briefing, IRS has decided to have a contractor develop an integrated schedule of its year 2000-related efforts, including making all of the necessary tax law changes for 1999; (11) IRS officials said they hope to have a baseline, master integrated schedule in June 1998; and (12) in part, due to IRS officials' concerns that the same resources that are doing year 2000 conversion work would be needed to do contingency planning, IRS officials decided to develop a process that would minimize the number of contingency plans that would have to be developed.
The federal government plans to invest more than $89 billion on IT in fiscal year 2017. However, as we have previously reported, investments in federal IT too often result in failed projects that incur cost overruns and schedule slippages while contributing little to the mission-related outcome. For example: The Department of Defense’s Expeditionary Combat Support System was canceled in December 2012 after spending more than a billion dollars and failing to deploy within 5 years of initially obligating funds. The Department of Homeland Security’s Secure Border Initiative Network program was ended in January 2011, after the department obligated more than $1 billion to the program, because it did not meet cost-effectiveness and viability standards. The Department of Veterans Affairs’ Financial and Logistics Integrated Technology Enterprise program was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011 due to challenges in managing the program. The Office of Personnel Management’s Retirement Systems Modernization program was canceled in February 2011, after spending approximately $231 million on the agency’s third attempt to automate the processing of federal employee retirement claims. The tri-agency National Polar-orbiting Operational Environmental Satellite System was stopped in February 2010 by the White House’s Office of Science and Technology Policy after the program spent 16 years and almost $5 billion. The Department of Veterans Affairs’ Scheduling Replacement Project was terminated in September 2009 after spending an estimated $127 million over 9 years. These and other failed IT projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies had not consistently applied best practices that are critical to successfully acquiring IT investments. Federal IT projects have also failed due to a lack of oversight and governance. Executive-level governance and oversight across the government has often been ineffective, specifically from chief information officers (CIO). For example, we have reported that not all CIOs had the authority to review and approve the entire agency IT portfolio and that CIOs’ authority was limited. Recognizing the severity of issues related to government-wide management of IT, FITARA was enacted in December 2014. The law holds promise for improving agencies’ acquisition of IT and enabling Congress to monitor agencies’ progress and hold them accountable for reducing duplication and achieving cost savings. FITARA includes specific requirements related to seven areas. Federal data center consolidation initiative (FDCCI). Agencies are required to provide OMB with a data center inventory, a strategy for consolidating and optimizing the data centers (to include planned cost savings), and quarterly updates on progress made. The law also requires OMB to develop a goal for how much is to be saved through this initiative, and provide annual reports on cost savings achieved. Enhanced transparency and improved risk management. OMB and agencies are to make detailed information on federal IT investments publicly available, and agency CIOs are to categorize their IT investments by risk. 
Additionally, in the case of major IT investments rated as high risk for 4 consecutive quarters, the law requires that the agency CIO and the investment’s program manager conduct a review aimed at identifying and addressing the causes of the risk. Agency CIO authority enhancements. Agency CIOs are required to (1) approve the IT budget requests of their respective agencies, (2) certify that IT investments are adequately implementing OMB’s incremental development guidance, (3) review and approve contracts for IT, and (4) approve the appointment of other agency employees with the title of CIO. Portfolio review. Agencies are to annually review IT investment portfolios in order to, among other things, increase efficiency and effectiveness, and identify potential waste and duplication. In developing the associated process, the law requires OMB to develop standardized performance metrics, to include cost savings, and to submit quarterly reports to Congress on cost savings. Expansion of training and use of IT acquisition cadres. Agencies are to update their acquisition human capital plans to address supporting the timely and effective acquisition of IT. In doing so, the law calls for agencies to consider, among other things, establishing IT acquisition cadres or developing agreements with other agencies that have such cadres. Government-wide software purchasing program. The General Services Administration is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. In doing so, the law requires that, to the maximum extent practicable, the General Services Administration should allow for the purchase of a software license agreement that is available for use by all Executive Branch agencies as a single user. Maximizing the benefit of the federal strategic sourcing initiative. Federal agencies are required to compare their purchases of services and supplies to what is offered under the Federal Strategic Sourcing initiative. OMB is also required to issue related regulations. In June 2015, OMB released guidance describing how agencies are to implement the law. OMB’s guidance states that it is intended to, among other things: assist agencies in aligning their IT resources with statutory requirements; establish government-wide IT management controls that will meet the law’s requirements, while providing agencies with flexibility to adapt to unique agency processes and requirements; clarify the CIO’s role and strengthen the relationship between agency CIOs and bureau CIOs; and strengthen CIO accountability for IT cost, schedule, performance, and security. The guidance includes several actions agencies are to take to establish a basic set of roles and responsibilities (referred to as the “common baseline”) for CIOs and other senior agency officials that are needed to implement the authorities described in the law. For example, agencies were required to conduct a self-assessment and submit a plan describing the changes they will make to ensure that common baseline responsibilities are implemented. Agencies were to submit their plans to OMB’s Office of E-Government and Information Technology by August 15, 2015, and make portions of the plans publicly available on agency websites no later than 30 days after OMB approval. As of May 2016, 22 of the 24 Chief Financial Officers Act agencies had made their plans publicly available.
In addition, OMB recently released proposed guidance for public comment on the optimization of federal data centers and implementation of FITARA’s data center consolidation and optimization provisions. Among other things, the proposed guidance instructs agencies to maintain complete inventories of all data center facilities owned, operated, or maintained by or on behalf of the agency; develop cost savings targets due to consolidation and optimization for fiscal years 2016 through 2018 and report any actual realized cost savings; and measure progress toward defined performance metrics (including server utilization) on a quarterly basis as part of their data center inventory submissions. The proposed guidance also directs agencies to develop a data center consolidation and optimization strategic plan that defines the agency’s data center strategy for the subsequent 3 years. This strategy is to include a timeline for agency consolidation and optimization activities with an emphasis on cost savings and optimization performance benchmarks the agency can achieve between fiscal years 2016 and 2018. Finally, the proposed guidance indicates that OMB will maintain a public dashboard that will display consolidation-related cost savings and optimization performance information for the agencies. In February 2015, we introduced a new government-wide high-risk area, Improving the Management of IT Acquisitions and Operations. This area highlights several critical IT initiatives in need of additional congressional oversight, including reviews of troubled projects, an emphasis on incremental development, a key transparency website, reviews of agencies’ operational investments, data center consolidation, and efforts to streamline agencies’ portfolios of IT investments. We noted that implementation of these initiatives has been inconsistent and more work remains to demonstrate progress in achieving IT acquisition outcomes. Further, in our February 2015 high-risk report, we identified actions that OMB and the agencies need to take to make progress in this area. These include implementing FITARA, as well as implementing our previous recommendations, such as developing comprehensive inventories of federal agencies’ software licenses. As noted in that report, we have made multiple recommendations to improve agencies’ management of IT acquisitions and operations, many of which are discussed later in this statement. Between fiscal years 2010 and 2015, we made approximately 800 such recommendations to OMB and federal agencies. As of May 2016, about 33 percent of these recommendations had been implemented. Also in our high-risk report, we stated that OMB and agencies will need to demonstrate measurable government-wide progress in the following key areas: implement at least 80 percent of GAO’s recommendations related to the management of IT acquisitions and operations within 4 years, ensure that a minimum of 80 percent of the government’s major acquisitions deliver functionality every 12 months, and achieve no less than 80 percent of the planned PortfolioStat savings and 80 percent of the planned savings for data center consolidation. One of the key initiatives to implement FITARA is data center consolidation. OMB established FDCCI in February 2010 to improve the efficiency, performance, and environmental footprint of federal data center activities.
In a series of reports over the past 5 years, we determined that while data center consolidation could potentially save the federal government billions of dollars, weaknesses existed in several areas including agencies’ data center consolidation plans and OMB’s tracking and reporting on cost savings. In total, we have made 111 recommendations to OMB and agencies to improve the execution and oversight of the initiative. Most agencies agreed with our recommendations or had no comment. Most recently, in March 2016, we reported that the 24 departments and agencies participating in FDCCI collectively made progress on their data center closure efforts. Specifically, as of November 2015, agencies had identified a total of 10,584 data centers, of which they reported closing 3,125 through fiscal year 2015. Notably, the Departments of Agriculture, Defense, the Interior, and the Treasury accounted for 84 percent of these total closures. Agencies are also planning to close an additional 2,078 data centers—for a total of 5,203—by the end of fiscal year 2019. See figure 1 for a summary of agencies’ total data centers and reported and planned closures. In addition, we reported that 19 of the 24 agencies reported achieving an estimated $2.8 billion in cost savings and avoidances from their data center consolidation and optimization efforts from fiscal years 2011 to 2015. Notably, the Departments of Commerce, Defense, Homeland Security, and the Treasury accounted for about $2.4 billion (or about 86 percent) of the total. Further, 21 agencies collectively reported planning an additional $5.4 billion in cost savings and avoidances, for a total of approximately $8.2 billion, through fiscal year 2019. See figure 2 for a summary of agencies’ reported achieved and planned cost savings and avoidances from fiscal years 2011 through 2019. However, we noted that planned savings may be higher because 10 of the 21 agencies that reported planned closures from fiscal years 2016 through 2018 have not fully developed their cost savings and avoidance goals for these fiscal years. Agencies provided varied reasons for not having this information, including that they were in the process of re-evaluating their data center consolidation strategies, as well as facing other challenges in determining such information. We noted that the reporting of planned savings goals is increasingly important considering the enactment of FITARA, which requires agencies to develop yearly calculations of cost savings as part of their multi-year strategies to consolidate and optimize their data centers. We concluded that, until agencies address their challenges and complete and report such information, the $8.2 billion in total savings and avoidances may be understated and agencies will not be able to satisfy the data center consolidation strategy provisions of FITARA. Finally, we reported that agencies made limited progress against OMB’s fiscal year 2015 core data center optimization performance metrics. In total, 22 of the 24 agencies reported data center optimization information to OMB. However, of the nine metrics with targets, only one—full-time equivalent ratio (a measure of data center labor efficiency)—was met by half of the 24 agencies, while the remaining eight were each met by less than half of the agencies. See figure 3 for a summary of agencies’ progress against OMB’s data center optimization metric targets.
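Measuring progress against the optimization metrics is essentially a matter of comparing each agency's reported value with OMB's target, metric by metric, and counting how many agencies clear each bar. The sketch below illustrates that tally; the metric names echo the kinds of measures discussed above, but the target values, directions, and agency figures are hypothetical rather than OMB's actual fiscal year 2015 targets.

```python
# Hypothetical optimization targets; direction indicates whether a higher
# reported value is better. Neither the numbers nor the directions are OMB's.
targets = {
    "server_utilization_pct":     {"target": 65.0, "higher_is_better": True},
    "full_time_equivalent_ratio": {"target": 0.5,  "higher_is_better": False},
}

# Hypothetical agency-reported values.
agency_reports = {
    "Agency A": {"server_utilization_pct": 70.0, "full_time_equivalent_ratio": 0.4},
    "Agency B": {"server_utilization_pct": 40.0, "full_time_equivalent_ratio": 0.9},
    "Agency C": {"server_utilization_pct": 55.0, "full_time_equivalent_ratio": 0.3},
}


def meets_target(metric: str, value: float) -> bool:
    spec = targets[metric]
    return value >= spec["target"] if spec["higher_is_better"] else value <= spec["target"]


for metric in targets:
    met = [a for a, values in agency_reports.items() if meets_target(metric, values[metric])]
    print(f"{metric}: {len(met)} of {len(agency_reports)} agencies met the target")
```

Aggregating reported values in this way makes it straightforward to see which metrics most agencies are missing, which is the kind of comparison summarized in the figure referenced above.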
Agencies reported a variety of challenges in meeting OMB’s data center optimization targets, such as the decentralized nature of their agencies making consolidation and optimization efforts more difficult. We noted that addressing this challenge and others is increasingly important in light of the enactment of FITARA, which requires agencies to measure and report progress in meeting data center optimization performance metrics. We concluded that, until agencies take action to improve progress against OMB’s data center optimization metrics, including addressing any challenges identified, they could be hindered in the implementation of the data center consolidation provisions of FITARA and in making initiative-wide progress against OMB’s optimization targets. To better ensure that federal data center consolidation and optimization efforts improve governmental efficiency and achieve cost savings, we recommended that 10 agencies take action to complete their planned data center cost savings and avoidance targets for fiscal years 2016 through 2018. We also recommended that 22 agencies take action to improve optimization progress, including addressing any identified challenges. Fourteen agencies agreed with our recommendations, 4 did not state whether they agreed or disagreed, and 6 stated that they had no comments. To facilitate transparency across the government in acquiring and managing IT investments, OMB established a public website—the IT Dashboard—to provide detailed information on major investments at 26 agencies, including ratings of their performance against cost and schedule targets. Among other things, agencies are to submit ratings from their CIOs, which, according to OMB’s instructions, should reflect the level of risk facing an investment relative to that investment’s ability to accomplish its goals. In this regard, FITARA includes a requirement for CIOs to categorize their major IT investment risks in accordance with OMB guidance. Over the past 6 years, we have issued a series of reports about the IT Dashboard that noted both significant steps OMB has taken to enhance the oversight, transparency, and accountability of federal IT investments by creating its IT Dashboard and issues with the accuracy and reliability of data. In total, we have made 22 recommendations to OMB and federal agencies to help improve the accuracy and reliability of the information on the IT Dashboard and to increase its availability. Most agencies agreed with our recommendations or had no comment. Most recently, as part of our ongoing work, we determined that agencies had not fully considered risks when rating their major investments on the IT Dashboard. Specifically, our assessment of 95 investments at 15 agencies matched the CIO ratings posted on the Dashboard 22 times, showed more risk 60 times, and showed less risk 13 times. Figure 4 summarizes how our assessments compared to the select investments’ CIO ratings. Aside from the inherently judgmental nature of risk ratings, we identified three factors that contributed to differences between our assessments and CIO ratings: Forty-one of the 95 CIO ratings were not updated during the month we reviewed, which led to more differences between our assessments and the CIOs’ ratings. This underscores the importance of frequent rating updates, which help to ensure that the information on the Dashboard is timely and accurately reflects recent changes to investment status. Three agencies’ rating processes span longer than 1 month.
Longer processes mean that CIO ratings are based upon older data and may not reflect the current level of investment risk. Seven agencies’ rating processes did not focus on active risks. According to OMB’s guidance, CIO ratings should reflect the CIO’s assessment of the risk and the investment’s ability to accomplish its goals. CIO ratings that do not incorporate active risks increase the chance that ratings overstate the likelihood of investment success. As a result, we concluded that the associated risk rating processes used by the agencies were generally understating the level of an investment’s risk, raising the likelihood that critical federal investments in IT are not receiving the appropriate levels of oversight. To better ensure that the Dashboard ratings more accurately reflect risk, we are recommending in our draft report, which is with the applicable agencies for comment, that 15 agencies take actions to improve the quality and frequency of their CIO ratings. OMB has emphasized the need to deliver investments in smaller parts, or increments, in order to reduce risk, deliver capabilities more quickly, and facilitate the adoption of emerging technologies. In 2010, it called for agencies’ major investments to deliver functionality every 12 months and, since 2012, every 6 months. Subsequently, FITARA codified a requirement that agency CIOs certify that IT investments are adequately implementing OMB’s incremental development guidance. In May 2014, we reported that almost three-quarters of selected investments at five major agencies did not plan to deliver capabilities in 6-month cycles, and less than half planned to deliver functionality in 12-month cycles. We also reported that most of the five agencies reviewed had incomplete incremental development policies. Accordingly, we recommended that OMB develop and issue clearer guidance on incremental development and that selected agencies update and implement their associated policies. Most agencies agreed with our recommendations or had no comment. More recently, as part of our ongoing work, we determined that agencies had not fully implemented incremental development practices for their software development projects. Specifically, as of August 31, 2015, on the IT Dashboard, 22 federal agencies reported that 300 of 469 active software development projects (approximately 64 percent) were planning to deliver usable functionality every 6 months for fiscal year 2016, as required by OMB guidance. Regarding the remaining 169 projects (or 36 percent) that were reported as not planning to deliver functionality every 6 months, agencies provided a variety of explanations for not achieving that goal, including project complexity, the lack of an established project release schedule, or that the project was not a software development project. Table 1 lists the total number and percent of software development projects for which agencies reported plans to deliver functionality, from highest to lowest. In reviewing seven selected agencies’ software development projects, we determined that the percentage delivering functionality every 6 months was reported at 45 percent for fiscal year 2015 and planned for 54 percent in fiscal year 2016. However, significant differences existed between the delivery rates that the agencies reported to us and what they reported on the IT Dashboard.
For example, the percentage of software projects delivering every 6 months that the Department of Commerce reported to us was about 42 percentage points lower than what was reported on the IT Dashboard. In contrast, the Department of Defense reported to us a rate that was 55 percentage points higher than what was reported on the IT Dashboard. Figure 5 compares what the seven agencies reported on the IT Dashboard and the numbers they reported to us. We determined that the significant differences in delivery rates were due, in part, to agencies having different interpretations of OMB's guidance on reporting software development projects and to the information reported to us generally being more current than the information reported on the IT Dashboard. We concluded that, until the inconsistencies between the information reported to us and the information provided on the IT Dashboard are addressed, the seven agencies we reviewed are at risk that OMB and key stakeholders may make decisions regarding agency investments without the most current and accurate information. Finally, nearly all of the seven agencies we reviewed had not yet implemented the FITARA requirement related to certifying that major IT investments are adequately implementing OMB's incremental development guidance. Specifically, only one agency—the Department of Homeland Security—had processes and policies to ensure that the CIO will certify that major IT investments are adequately implementing incremental development, while the remaining six agencies had not established such processes and policies. Officials from most of these six agencies reported that they were in the process of updating their existing incremental development policies to address certification. To improve the use of incremental development, we are recommending in our draft report, which is with the applicable agencies for comment, that agencies take action to update their policies for incremental development and IT Dashboard project information. We are also recommending that OMB provide clarifying guidance on which IT investments are required to use incremental development and on reporting for projects that are not subject to these requirements. In summary, with the recent enactment of FITARA, the federal government has an opportunity to improve the transparency and management of IT acquisition and operations and strengthen the authority of CIOs to provide needed direction and oversight. However, improvements are needed in several critical IT initiatives, including data center consolidation, efforts to increase transparency via OMB's IT Dashboard, and incremental development—all of which are related to provisions of FITARA. Accordingly, OMB and federal agencies should expeditiously implement the requirements of the new IT reform law and continue to implement our previous recommendations. To help ensure that these improvements are achieved, continued congressional oversight of OMB's and agencies' implementation efforts is essential. Chairmen Meadows and Hurd, Ranking Members Connolly and Kelly, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staffs have any questions about this testimony, please contact me at (202) 512-9286 or at [email protected]. Individuals who made key contributions to this testimony are Dave Hinchman (Assistant Director), Justin Booth, Chris Businsky, Rebecca Eyler, Linda Kochersberger, and Jon Ticehurst. 
Data Center Consolidation: Agencies Making Progress, but Planned Savings Goals Need to Be Established. GAO-16-323. Washington, D.C.: March 3, 2016.
High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015.
Data Center Consolidation: Reporting Can Be Improved to Reflect Substantial Planned Savings. GAO-14-713. Washington, D.C.: September 25, 2014.
Information Technology: Agencies Need to Establish and Implement Incremental Development Policies. GAO-14-361. Washington, D.C.: May 1, 2014.
IT Dashboard: Agencies Are Managing Investment Risk, but Related Ratings Need to Be More Accurate and Available. GAO-14-64. Washington, D.C.: December 12, 2013.
Data Center Consolidation: Strengthened Oversight Needed to Achieve Cost Savings Goal. GAO-13-378. Washington, D.C.: April 23, 2013.
Information Technology Dashboard: Opportunities Exist to Improve Transparency and Oversight of Investment Risk at Select Agencies. GAO-13-98. Washington, D.C.: October 16, 2012.
Data Center Consolidation: Agencies Making Progress on Efforts, but Inventories and Plans Need to Be Completed. GAO-12-742. Washington, D.C.: July 19, 2012.
IT Dashboard: Accuracy Has Improved, and Additional Efforts Are Under Way to Better Inform Decision Making. GAO-12-210. Washington, D.C.: November 7, 2011.
Federal Chief Information Officers: Opportunities Exist to Improve Role in Information Technology Management. GAO-11-634. Washington, D.C.: September 15, 2011.
Data Center Consolidation: Agencies Need to Complete Inventories and Plans to Achieve Expected Savings. GAO-11-565. Washington, D.C.: July 19, 2011.
Information Technology: OMB Has Made Improvements to Its Dashboard, but Further Work Is Needed by Agencies and OMB to Ensure Data Accuracy. GAO-11-262. Washington, D.C.: March 15, 2011.
Information Technology: OMB's Dashboard Has Increased Transparency and Oversight, but Improvements Needed. GAO-10-701. Washington, D.C.: July 16, 2010.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The federal government plans to invest more than $89 billion on IT in fiscal year 2017. Historically, these investments have frequently failed, incurred cost overruns and schedule slippages, or contributed little to mission-related outcomes. Accordingly, in December 2014, IT reform legislation was enacted into law, aimed at improving agencies' acquisition of IT. Further, in February 2015, GAO added improving the management of IT acquisitions and operations to its high-risk list—a list of agencies and program areas that are high risk due to their vulnerabilities to fraud, waste, abuse, and mismanagement, or are most in need of transformation. Between fiscal years 2010 and 2015, GAO made about 800 recommendations related to this high-risk area to OMB and agencies. As of May 2016, about 33 percent of these had been implemented. This statement primarily summarizes: (1) GAO's published work on data center consolidation, and (2) GAO's draft reports on the risk of major investments as reported on the IT Dashboard and the implementation of incremental development practices. These draft reports with recommendations are currently with applicable agencies for comment. The Office of Management and Budget (OMB) and agencies have taken steps to improve federal information technology (IT) through a series of initiatives; however, additional actions are needed. Consolidating data centers. In an effort to reduce the growing number of data centers, OMB launched a consolidation initiative in 2010. GAO recently reported that agencies had closed 3,125 of the 10,584 total data centers and achieved $2.8 billion in cost savings and avoidances through fiscal year 2015. Agencies are planning a total of about $8.2 billion in savings and avoidances through fiscal year 2019. However, these planned savings may be higher because 10 agencies had not fully developed their planned savings goals. In addition, agencies made limited progress against OMB's fiscal year 2015 data center optimization performance targets, such as the utilization of data center facilities. GAO recommended that the agencies take action to complete their cost savings targets and improve optimization progress. Most agencies agreed with the recommendations or had no comment. Enhancing transparency. OMB's IT Dashboard provides detailed information on major investments at federal agencies, including ratings from Chief Information Officers (CIO) that should reflect the level of risk facing an investment. In a draft report, GAO's assessments of the risk ratings showed more risk than the associated CIO ratings. In particular, of the 95 investments reviewed, GAO's assessments matched the CIO ratings 22 times, showed more risk 60 times, and showed less risk 13 times. Several issues contributed to these differences, such as ratings not being updated frequently. In its draft report, GAO is recommending that agencies improve the quality and frequency of their CIO ratings. Implementing incremental development. An additional key reform initiated by OMB has emphasized the need to deliver investments in smaller parts, or increments, in order to reduce risk and deliver capabilities more quickly. Since 2012, OMB has required investments to deliver functionality every 6 months. In a draft report, GAO determined that 22 agencies reported that 64 percent of 469 active software development projects had plans to deliver usable functionality every 6 months for fiscal year 2016. 
Further, for seven selected agencies, GAO identified significant differences in the percentage of software projects delivering every 6 months reported to GAO compared to what was reported on the IT Dashboard. For example, the percentage of software projects reported to GAO by the Department of Commerce decreased by about 42 percentage points from what was reported on the IT Dashboard. These differences were due, in part, to agencies having different interpretations of OMB's guidance on reporting software development projects. In its draft report, GAO is recommending that OMB and agencies improve the use of incremental development. GAO has previously made numerous recommendations to OMB and federal agencies to improve the oversight and execution of the data center consolidation initiative, the accuracy and reliability of the IT Dashboard, and incremental development policies. Most agencies agreed with GAO's recommendations or had no comment.
In November 2002, Congress passed and the President signed the Improper Payments Information Act of 2002 (IPIA), which was later amended by IPERA and the Improper Payments Elimination and Recovery Improvement Act of 2012 (IPERIA). The amended legislation requires executive branch agencies to (1) review all programs and activities and identify those that may be susceptible to significant improper payments (commonly referred to as a risk assessment), (2) publish improper payment estimates for those programs and activities that the agency identified as being susceptible to significant improper payments, (3) implement corrective actions to reduce improper payments and set reduction targets, and (4) report on the results of addressing the foregoing requirements. In addition to the agencies' identifying programs and activities that are susceptible to significant improper payments, OMB also designates as high-priority the programs with the most egregious cases of improper payments. Specifically, OMB is required by IPERIA to annually identify a list of high-priority federal programs in need of greater oversight and review. In general, OMB has implemented this requirement by designating high-priority programs based on a threshold of $750 million in estimated improper payments for a given fiscal year. IPERA calls for executive agencies' IGs to annually determine and report on whether their respective agencies complied with the following six criteria: (1) publish a report in the form and content required by OMB—typically an AFR or a PAR—for the most recent fiscal year, and post that report on the agency website; (2) conduct a program-specific risk assessment for each program or activity; (3) publish improper payment estimates for all programs and activities deemed susceptible to significant improper payments under the agency's risk assessment; (4) publish corrective action plans for those programs and activities assessed to be at risk for significant improper payments; (5) publish and meet annual reduction targets for all programs and activities assessed to be at risk for significant improper payments; and (6) report a gross improper payment rate of less than 10 percent for each program and activity for which an improper payment estimate was published. IPERA states that if an IG reports that an agency is not in compliance with any of the IPERA criteria for 1 fiscal year, the agency head must submit a plan to appropriate congressional committees and OMB describing the actions that the agency will take to come into compliance. If an agency is found noncompliant with respect to the same program for 2 consecutive years, IPERA directs OMB to review the program and determine if additional funding would help bring the program into compliance and, if so, directs the agency to use any available reprogramming or transfer authority, or request further reprogramming or transfer authority from Congress, to aid in the program's remediation efforts. For programs determined to be noncompliant for 3 or more consecutive years, IPERA requires the agency to submit to Congress within 30 days of the IG's report either (1) a reauthorization proposal for the program or (2) proposed statutory changes necessary to bring the program or activity into compliance. Because the legal requirement to report to Congress is triggered by the IG reporting noncompliance, rather than the noncompliance itself, agencies are not subject to the congressional reporting requirement until their IGs report their determinations. 
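These escalating requirements amount to a simple mapping from the number of consecutive years of IG-reported noncompliance to the action IPERA calls for. The following minimal sketch (Python, for illustration only; the function name and structure are hypothetical, and the tiers paraphrase the description above rather than the statutory text) restates that mapping:

```python
# Illustrative sketch of the escalating IPERA actions described above, keyed
# to the number of consecutive years of IG-reported noncompliance. The function
# name and structure are hypothetical; the tiers paraphrase this section.

def ipera_action(consecutive_years_noncompliant: int) -> str:
    if consecutive_years_noncompliant < 1:
        return "No action triggered; the IG reported the agency as compliant."
    if consecutive_years_noncompliant == 1:
        return ("Agency head submits a plan to the appropriate congressional "
                "committees and OMB describing actions to come into compliance.")
    if consecutive_years_noncompliant == 2:
        return ("OMB reviews the program and determines whether additional funding "
                "would help; if so, the agency uses available reprogramming or "
                "transfer authority or requests more from Congress.")
    # 3 or more consecutive years
    return ("Within 30 days of the IG's report, the agency submits to Congress a "
            "reauthorization proposal or proposed statutory changes for the program.")

if __name__ == "__main__":
    for years in range(4):
        print(years, "->", ipera_action(years))
```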
In addition, as we previously reported, when agencies determine that reauthorization or statutory changes are not necessary to bring the programs into compliance, the agencies should state so in their letters to Congress. Furthermore, IPERIA requires the IGs at agencies with OMB-designated high-priority programs to perform additional procedures. OMB also provides guidance, in OMB Circular No. A-123, Appendix C (OMB M-15-02), to the IGs regarding their annual IPERA compliance reports. This guidance restates the statutory requirements and adds procedures that the IGs are encouraged to adopt at their discretion during their annual IPERA reviews (hereafter referred to as optional procedures). It is important to note that some of the optional procedures outlined in OMB guidance are similar to those required for programs designated by OMB as high-priority; however, whether to perform such procedures for non-high-priority programs is up to the IGs. Figure 1 details the IGs' required and optional procedures. Five years after the implementation of IPERA, 15 of the 24 agencies were reported as noncompliant under IPERA for fiscal year 2015. Although the number of agencies reported as noncompliant improved in fiscal years 2012 and 2013, decreasing to 12 and 11 agencies, respectively, IGs collectively reported an increase in agency noncompliance to 15 agencies in fiscal years 2014 and 2015. However, as discussed later in this report, we found that the IGs' compliance determinations for fiscal year 2015 were not based on a consistent government-wide approach. Figure 2 summarizes the number of agencies noncompliant under IPERA each year since fiscal year 2011, as reported by their IGs. We found that noncompliance frequently stretched across multiple years. For instance, 14 of the 15 agencies reported as noncompliant for fiscal year 2015 had also been reported as noncompliant for fiscal year 2014. In addition, 9 of the 15 noncompliant agencies in fiscal year 2015 have been reported as noncompliant since IPERA was implemented—for 5 consecutive years. Figure 3 details the 24 agencies' overall compliance under IPERA, as reported by their IGs, for fiscal years 2011 through 2015. The IGs associated with the 15 noncompliant agencies reported that a total of 52 programs were responsible for identified instances of noncompliance in fiscal year 2015. As shown in figure 4, the noncompliant programs accounted for a reported $132 billion in fiscal year 2015 estimated improper payments, which is approximately 96 percent of the $136.7 billion government-wide reported estimate. Seven agencies, with a total of 12 programs, were reported as noncompliant for 3 or more consecutive years as of the end of fiscal year 2015. IPERA requires agencies that have been deemed noncompliant for consecutive years to take certain actions. Specifically, according to IPERA and OMB guidance, if a program is found to be noncompliant by an agency's IG for 3 or more consecutive years, the agency must submit to Congress within 30 days of such determination a reauthorization proposal for each noncompliant program or any proposed statutory changes it deems necessary to bring the program into compliance. OMB guidance also states that agencies should share these proposals or plans with their respective IGs. For the 7 agencies that had noncompliant programs for 3 or more consecutive years as of the end of fiscal year 2015, as reported by their IGs, we found that 6 agencies submitted the required information to Congress. The remaining agency, the U.S. 
Department of Agriculture (USDA), had not submitted the required information to Congress, despite our prior recommendation, as well as USDA IG recommendations, to do so. Specifically, 5 USDA programs were reported as noncompliant for at least 3 consecutive years as of fiscal year 2015. Four of these programs were noncompliant for 5 consecutive years, and 1 was noncompliant for 3 consecutive years. Figure 5 lists the 7 agencies and the 12 programs that were noncompliant for 3 or more consecutive years, as reported by their IGs. USDA's Office of Chief Financial Officer (OCFO) informed us that some of the delays in issuing the letter were caused by the office's need to meet with OMB to determine how best to satisfy the requirements. For example, according to the USDA OCFO, the office met with OMB in June 2014, and OMB advised that proposed statutory changes were not needed and that a letter to Congress was acceptable to satisfy the requirements. However, almost 3 years later, USDA has still not issued the letter. The USDA OCFO stated that it plans to submit the required information for all 5 programs by the end of fiscal year 2017. However, IPERA and OMB guidance call for the information to be submitted within 30 days of the IG's reported noncompliance. Therefore, for the 4 programs that were noncompliant for 5 consecutive years, the required information should have been submitted within 30 days of the IG's reporting of the third consecutive year of noncompliance, which occurred in April 2014. We also recommended in June 2016 that USDA take the required actions for these 4 programs, and as of March 2017, our prior recommendation remains open. For the 1 remaining program, which was noncompliant for 3 consecutive years, the information was required to be submitted in June 2016. When agencies do not report the required information, Congress may lack the information necessary to monitor the implementation of IPERA and take action to address problematic programs in a timely manner. The IGs' IPERA compliance reports showed areas where agency compliance has remained a challenge throughout the past 5 years. Specifically, as shown in figure 6, noncompliance with the IPERA criterion to publish and meet annual improper payment reduction targets has been the primary reason that agencies were reported as noncompliant by their respective IGs in each of the past 5 years. For instance, for fiscal years 2011 through 2015, IGs reported that 11 agencies did not comply with this criterion. Of these 11 noncompliant agencies, 8 agencies have been noncompliant with this criterion every year since fiscal year 2011. While the IGs generally reported that agencies published reduction targets for applicable programs, the targets were regularly not met, as the actual rates exceeded the targets. Although reported noncompliance for several IPERA criteria increased or stayed the same in fiscal year 2015, as compared to fiscal year 2014, the total number of instances of noncompliance (i.e., the sum of all instances of IG-reported agency noncompliance for all six IPERA criteria) has declined since IPERA was first implemented in fiscal year 2011, as noted in figure 7. Specifically, in fiscal year 2011, the IGs reported 38 total instances of noncompliance for all six IPERA criteria, compared to 28 reported instances of noncompliance in fiscal year 2015. 
Although agencies' continued noncompliance is concerning, we have previously reported that an agency's failure to meet reduction targets or to report improper payment rates below 10 percent does not necessarily suggest that the agency failed to adequately monitor its programs' improper payments. For example, certain IGs reported increases in improper payment rates because of factors such as improved sampling and emphasis on training, which enhanced their agencies' ability to detect improper payments. Specifically, the Department of Veterans Affairs (VA) IG reported in its fiscal year 2015 report that 2 VA programs—Community Care and Purchased Long-Term Services and Support—reported improper payment rates of 54.77 percent and 59.14 percent, respectively. According to the VA IG, these rates were significantly higher than those in the prior year, when both programs reported improper payment rates slightly below 10 percent. The VA IG attributed the dramatic increase in improper payment rates for the 2 programs primarily to improved sample evaluation procedures in fiscal year 2015, which resulted in more improper payments being identified. Significant improvements in sampling, training, and the ability to detect improper payments could considerably reduce the likelihood that an agency will meet a predetermined reduction target or report an improper payment rate below 10 percent. As previously discussed, for the past 5 years, IGs reported that 11 agencies did not meet the IPERA criterion requiring agencies to publish and meet reduction targets. For fiscal year 2015, 4 of the 11 agencies—the Department of Defense (DOD), Department of Education (Education), Department of Homeland Security (DHS), and Social Security Administration (SSA)—that did not meet reduction targets were compliant with the remaining five IPERA criteria. Therefore, once these 4 agencies implement the necessary controls to prevent and detect improper payments, they may be able to meet their reduction targets, which, in turn, could result in these 4 agencies becoming compliant under IPERA overall in future years. Furthermore, 3 other agencies—the Department of Labor (DOL), VA, and Small Business Administration (SBA)—out of the 11 that did not meet reduction targets were reported by their IGs as being noncompliant with only one other IPERA criterion (to report improper payment rates below 10 percent). Therefore, these 3 agencies may also become compliant under IPERA once they implement the necessary controls to prevent and detect improper payments, as they may be able to meet their reduction targets and report improper payment rates below 10 percent. Appendix III details the agency programs that did not meet their reduction targets and that reported improper payment rates above the 10 percent threshold for fiscal year 2015. We reviewed the IGs' fiscal year 2015 IPERA compliance reports and found that the IGs were not consistent with one another regarding how they determined and reported on compliance when issues were identified. 
Some IGs reported compliance based on the presence or absence of the required analysis or reporting (hereafter referred to as a pass/fail determination by the IG), regardless of whether the IGs identified flaws, whereas other IGs reported agencies as noncompliant based on the IGs performing some degree of evaluative procedures to determine whether the agencies' analyses or reports were substantively adequate (hereafter referred to as a determination based on evaluative procedures). IPERA does not clearly indicate whether, for example, an IG should report an agency as compliant under the criterion regarding publication of improper payment estimates if the agency reports an improper payment estimate that is based on a flawed methodology. While we recognize that the severity of the issues may have resulted in the IGs' reporting noncompliance for some agencies, we found, as noted in some of the examples below, that the types of issues identified for both the compliant and noncompliant agencies were similar. IPERA criterion: Publish an AFR or PAR. One IG reported its respective agency as noncompliant with this criterion. While this IG reported the Department of Transportation (DOT) noncompliant as a result of finding issues with improper payment information included in the AFR (i.e., noncompliance based on evaluative procedures), six other IGs reported their respective agencies—the Department of Housing and Urban Development (HUD), Department of State (State), Environmental Protection Agency (EPA), National Aeronautics and Space Administration (NASA), General Services Administration (GSA), and Office of Personnel Management (OPM)—as compliant despite finding similar issues with the improper payment information included in the agencies' published AFRs or PARs (i.e., compliance based on pass/fail determinations). For example, the DOT IG reported noncompliance with this criterion as a result of identified issues with the outlays reported in DOT's AFR, which is the same issue that was identified by another IG that reported its agency (GSA) as compliant. As shown in table 1, if the IGs consistently determined compliance with this IPERA criterion based on pass/fail determinations or based on evaluative procedures, the number of agencies reported as noncompliant with this IPERA criterion could have decreased by 1 (from 1 agency to none) or increased by 6 (from 1 to 7 agencies), respectively. IPERA criterion: Conduct program-specific risk assessments. Four IGs reported their respective agencies as noncompliant with this criterion. One of the 4 IGs reported its agency, the Department of the Interior (DOI), as noncompliant because the agency did not prepare a required risk assessment (i.e., noncompliance based on a pass/fail determination). The remaining 3 IGs reported their respective agencies—the Department of Health and Human Services (HHS), HUD, and OPM—as noncompliant based on identified issues with the risk assessments (i.e., noncompliance based on evaluative procedures). However, 6 other IGs reported their respective agencies—EPA, GSA, NASA, the National Science Foundation (NSF), State, and the Department of the Treasury (Treasury)—as compliant despite finding similar issues with the risk assessments (i.e., compliance based on pass/fail determinations). 
For example, 2 IGs reported that their respective agencies (HUD and OPM) were noncompliant with the IPERA risk assessment criterion as a result of their agencies not considering all nine required risk factors, as outlined in OMB guidance, during program-specific risk assessments, whereas another IG reported its agency (NSF) as compliant with this IPERA criterion, despite also finding issues with the agency's consideration of the nine required risk factors. In addition, although the State IG reported that the qualitative risk assessments conducted by State included an evaluation of the nine risk factors required by OMB, the State IG also reported that the agency could have improved its risk assessment process regarding one of the nine risk factors—significant increases in funding. Specifically, the State IG reported that there was a deficiency with State's process for identifying programs with significant funding changes. The State IG recommended that State expand its process to identify programs with significant funding changes to consider additional factors that may increase the risk of significant improper payments, including, at a minimum, the percentage increase of the change. According to the State IG, if State's risk assessment process does not consider the percentage increase, then the agency may not identify all programs that had increased risks of improper payments because of increased funding. For example, the State IG stated that a program that had expenditures of $101 million in year one and experienced an increase in expenditures of $99 million (98 percent) in year two would not meet the $100 million threshold and, as a result, would not be identified as a program with significant funding changes, which would require a risk assessment. As shown in table 1, if the IGs consistently determined noncompliance with this IPERA criterion based on pass/fail determinations or based on evaluative procedures, the number of agencies reported as noncompliant with this IPERA criterion could have decreased by 3 (from 4 agencies to 1) or increased by 6 (from 4 to 10 agencies), respectively. IPERA criterion: Publish improper payment estimates. This criterion did not apply to 5 agencies. For the remaining 19 agencies, 5 agencies were reported as noncompliant with this criterion. Two of these agencies (GSA and HHS) were reported as noncompliant because the agencies did not publish required estimates (i.e., noncompliance based on pass/fail determinations), and the remaining 3 agencies (USDA, DOI, and OPM) were reported as noncompliant because of identified issues with their estimates (i.e., noncompliance based on evaluative procedures). While the IGs for these 3 agencies reported noncompliance as a result of identified issues with the estimates, 6 other IGs reported that their respective agencies (DOD, DOL, Education, HUD, SBA, and VA) were compliant despite finding similar issues with the estimates (i.e., compliance based on pass/fail determinations). For example, 1 IG (USDA) reported that its respective agency was noncompliant with the IPERA improper payment estimate criterion because the agency was not using sufficient sampling methods to report improper payment estimates, whereas another IG (Education) reported its agency as compliant with the same IPERA criterion, despite reporting that its agency used flawed estimation methodologies to calculate its estimates. 
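Returning to the State IG's funding-change example above, the arithmetic behind that finding can be made concrete with a minimal sketch (Python, for illustration only; the percentage-based check and its 50 percent trigger are hypothetical and are not drawn from the report):

```python
# Illustrative sketch of the State IG's funding-change example above. The
# percentage-based check and its 50 percent trigger are hypothetical, included
# only to show how a relative test would flag the program that the absolute
# $100 million test misses.

DOLLAR_THRESHOLD = 100_000_000   # the $100 million threshold cited by the State IG
PERCENT_THRESHOLD = 0.50         # hypothetical trigger for a percentage-based check

def flagged_by_dollar_test(increase: float) -> bool:
    # Flags a program only when the year-over-year increase reaches the dollar threshold.
    return increase >= DOLLAR_THRESHOLD

def flagged_by_percent_test(prior_year: float, increase: float) -> bool:
    # Flags a program when the increase is a large share of prior-year expenditures.
    return prior_year > 0 and (increase / prior_year) >= PERCENT_THRESHOLD

prior_year_expenditures = 101_000_000   # year-one expenditures in the example
year_two_increase = 99_000_000          # year-two increase (about 98 percent)

print(flagged_by_dollar_test(year_two_increase))                            # False
print(flagged_by_percent_test(prior_year_expenditures, year_two_increase))  # True
```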
As shown in table 1, if the IGs consistently determined noncompliance with the improper payment estimate criterion based on pass/fail determinations or based on evaluative procedures, the number of agencies reported as noncompliant with this criterion could have decreased by 3 (from 5 agencies to 2) or increased by 6 (from 5 to 11 agencies), respectively. IPERA criterion: Publish corrective action plans. This criterion did not apply to 9 agencies. Of the remaining 15 agencies, 1 (HHS) was reported as noncompliant with this criterion because it did not publish a corrective action plan for one of its eight risk-susceptible programs. Although we did not find any instances of IGs reporting noncompliance as a result of identified issues with the corrective action plans, we identified 5 IGs that reported their respective agencies (DOD, GSA, OPM, USDA, and HUD) as compliant despite finding issues with the corrective action plans (i.e., compliance based on pass/fail determinations). For example, 1 IG reported that its respective agency was compliant with this IPERA criterion, but the IG also reported that the agency's corrective action plans did not explain how the actions addressed the identified root causes and did not include planned or actual completion dates for the actions. As shown in table 1, if the IGs consistently determined noncompliance with this IPERA criterion based on evaluative procedures, the number of agencies reported as noncompliant with this IPERA criterion could have increased by 5 (from 1 to 6 agencies). There would be no change to the number of agencies reported as noncompliant if such determinations were made on a pass/fail basis, as the 1 IG would have still reported its agency as noncompliant for failing to publish a corrective action plan. For the two remaining IPERA criteria, which required agencies to publish and meet reduction targets and report improper payment rates below 10 percent, we found that the IGs consistently made pass/fail determinations. Specifically, 11 agencies were reported as noncompliant because they each had at least 1 program that did not meet its reduction target, and 6 agencies were reported as noncompliant because they had at least 1 program that did not report an improper payment rate below 10 percent. As a result, for these two IPERA criteria, table 1 shows no differences between the original IG noncompliance determinations and the potential noncompliance results if determinations were consistently based on pass/fail or evaluative procedures. We believe that the variations in the IGs' determinations reduce their usefulness for comparing compliance and progress across agencies. As shown in table 1, compliance results could change significantly if the IGs consistently determined compliance by performing either pass/fail determinations or the evaluative procedures that some IGs use, which we describe in this report. Specifically, if the IGs had all performed IPERA compliance determinations based on pass/fail determinations, the number of noncompliant agencies could decrease by 1 agency (OPM), from 15 to 14 agencies. In addition, the total instances of noncompliance for all six IPERA criteria could have been 21 as compared to 28 instances as originally reported by the IGs. 
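This recalculation, and the alternative one in the next paragraph, apply the same tallying logic to every criterion: an agency is noncompliant overall if any criterion fails under the chosen approach. The following minimal sketch (Python, with invented data rather than the report's actual determinations) illustrates that logic:

```python
# Illustrative sketch (invented data, not the report's actual determinations) of
# how overall noncompliance is retallied once every criterion is judged on a
# consistent basis: the pass/fail approach looks only at whether the required
# item exists, while the evaluative approach also counts substantive flaws.

agencies = {
    "Agency A": [{"missing": False, "flawed": True},   # item published but flawed
                 {"missing": False, "flawed": False}],
    "Agency B": [{"missing": True,  "flawed": False},  # required item never published
                 {"missing": False, "flawed": False}],
    "Agency C": [{"missing": False, "flawed": False},
                 {"missing": False, "flawed": False}],
}

def noncompliant(criteria, approach):
    if approach == "pass_fail":
        return any(c["missing"] for c in criteria)
    if approach == "evaluative":
        return any(c["missing"] or c["flawed"] for c in criteria)
    raise ValueError(f"unknown approach: {approach}")

for approach in ("pass_fail", "evaluative"):
    total = sum(noncompliant(c, approach) for c in agencies.values())
    print(approach, total)   # pass_fail -> 1 agency; evaluative -> 2 agencies
```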
Alternatively, if the IGs had consistently determined compliance based on the evaluative procedures some IGs used, the number of noncompliant agencies could have been over one-third higher—an increase of 4 noncompliant agencies (State, EPA, NASA, and NSF), for a total of 19 noncompliant agencies for fiscal year 2015. Also, the number of instances of noncompliance for all six IPERA criteria could increase from the originally reported 28 instances to 51 instances. Standards for Internal Control in the Federal Government states that management should establish measurable objectives that are stated in a form that permits reasonably consistent measurement. However, IPERA and related guidance from OMB do not specify what, if any, evaluative procedures should be conducted as part of the compliance determination, beyond simply checking for the presence or absence of the required analysis or report. In addition, there are no specific requirements for IGs to be consistent with one another when determining agency compliance. Lastly, the Council of the Inspectors General on Integrity and Efficiency (CIGIE) stated that although it provided general guidance to the IGs in fiscal year 2011, it has not provided the IGs with any guidance regarding how compliance determinations should be made. OMB issued OMB M-15-02, the latest iteration of its Circular No. A-123, Appendix C, in October 2014, which was effective starting with the IGs' fiscal year 2014 IPERA compliance reviews. The new OMB guidance attempted to make IG determinations of compliance clearer and more concise by instructing IGs to include in their reports a high-level summary of compliance, both overall and by IPERA criteria. While our audit scope did not include testing to determine if the IGs complied with all of the IPERA requirements or whether the IGs followed OMB guidance, as noted above, we identified inconsistent reporting among the IGs during the course of our audit work. We presented the inconsistencies we identified to OMB staff, and they informed us that they had noticed similar inconsistencies. Although OMB held IPERA compliance town halls prior to the IGs' fiscal years 2015 and 2016 IPERA compliance reviews, the town hall briefing slides did not state OMB's position on whether the IGs should make compliance determinations based on pass/fail determinations or evaluative procedures. OMB staff confirmed that their guidance, as well as IPERA, does not specify what, if any, evaluative procedures should be conducted as part of the compliance determination, beyond simply checking for the presence or absence of the required analysis or report. Continuing inconsistent compliance determinations may result in potentially misleading information regarding government-wide compliance under IPERA. We reviewed the IGs' fiscal year 2015 IPERA compliance reports and found that 20 of the 24 agencies' IGs reported that they performed at least one procedure beyond what is required by IPERA and IPIA, as amended by IPERIA. Specifically, as shown in figure 8, we found that the IGs reported that they performed optional procedures, which included evaluations of the (1) agency's assessment of the level of risk for non-high-priority programs, (2) quality of the agency's estimation methodology for non-high-priority programs, (3) accuracy and completeness of agency reporting, (4) agency's performance in recapturing improper payments, and (5) agency's corrective action plans. Appendix V lists the 20 IGs and the specific optional procedures performed by each IG. 
While current improper payment estimation laws and corresponding OMB guidance require the IGs to conduct evaluative procedures of the agencies’ risk assessments and estimation methodologies when reviewing programs that OMB designated as high-priority, these types of evaluative procedures are optional for non-high-priority programs. To put this in perspective, in fiscal year 2015, there were 122 programs that published improper payment estimates, and OMB designated 16 of those programs as high-priority programs. Therefore, only those 16 programs were required to undergo the IGs’ evaluations of their risk assessments and improper payment estimates (two IPERA criteria). While $127 billion of the $136.7 billion fiscal year 2015 government-wide improper payment estimate was attributable to the 16 OMB-designated high-priority programs, there was $9.7 billion in reported estimated improper payments for the remaining programs that are not considered high-priority. Although the IGs were not required to perform annual evaluative procedures to identify issues with the agencies’ risk assessments and estimation methodologies for the non-high-priority programs, as noted in figure 8, some IGs elected to perform these evaluative procedures for such programs. OMB staff stated that they believe that performing evaluative procedures is beneficial to addressing government-wide improper payments, and, for that reason, they encourage IGs to do more substantive, in-depth work during their annual IPERA compliance reviews. The following examples provide additional details regarding the IGs’ performance of these optional procedures: Evaluation of agency’s risk assessment(s) for non-high-priority programs: As noted in figure 8, we found that 11 IGs reported that they evaluated the agency’s program-specific risk assessment(s). Of these 11 agencies, we identified 3 IGs (USDA, Treasury, and NASA) that performed their own independent risk assessments and/or analysis and, as a result, identified additional agency programs that they believed should have been identified as susceptible to significant improper payments during the agencies’ own risk assessments. As a result of performing this optional procedure, certain IGs recommended that the agencies revise their program-specific risk assessment processes to reduce the risk of the agencies not identifying all programs susceptible to significant improper payments. The recommended improvements included revising the processes to include factors such as quantitative assessments in addition to (or instead of) qualitative assessments. Evaluation of agency’s estimation methodologies for non-high-priority programs: As noted in figure 8, we found that 10 IGs reported that they evaluated the quality of the agencies’ improper payment estimates and methodologies for certain non-high-priority programs. Specifically, we found that these IGs reported that the published improper payment estimate for at least 1 agency program was unreliable because the agencies used inaccurate data, incomplete data, or insufficient sampling methodologies. As a result of performing this optional procedure, certain IGs provided recommendations to help the agencies improve the precision of their improper payment estimates, such as using improved sampling methodology prepared by a trained statistician. Evaluation of the accuracy and completeness of agency reporting: As noted in figure 8, this was the most frequently performed optional procedure. 
Specifically, we found that 15 of the 24 IGs reported that they evaluated the accuracy and completeness of their agencies' reporting on improper payments. Most of these IGs reported that they identified errors in improper payment-related information in the agencies' published AFRs. As a result of performing this optional procedure, these IGs provided recommendations to help the agencies improve the accuracy of the improper payment-related information published in their reports, such as recommending that the agencies implement additional internal controls. Evaluation of agencies' performance in recapturing improper payments: As noted in figure 8, we found that 9 IGs reported that they evaluated their agencies' efforts to recapture improper payments and, in some cases, found that more improper payments were actually recaptured by the agencies than the agencies reported. Specifically, certain agencies' summaries of recaptured improper payments failed to include all recaptured improper payments identified through sources outside of the agencies' recapture audits, such as those payments identified through IG audits. As a result of performing this optional procedure, these IGs made recommendations to help the agencies improve the accuracy and completeness of their reporting of recaptured payments, such as recommending that the agencies develop written policies and procedures detailing the process for reporting overpayments identified and recaptured from sources outside of payment recapture audits. Evaluation of agency corrective action plans: As noted in figure 8, we found that 6 IGs reported that they determined whether their agencies' corrective action plans were (1) robust and focused on the appropriate root causes of improper payments, (2) effectively implemented, and (3) prioritized within the agency. Specifically, we found that certain IGs reported that their agencies' corrective action plans did not explain how the corrective actions addressed the root causes identified, lacked planned or actual completion dates of the actions, or had not been updated to include more current root causes. Identifying the root causes of improper payments enables the agencies to revise their corrective action plans to better reflect the unique processes, procedures, and risks involved with each agency program susceptible to significant improper payments. As a result of performing this optional procedure, 1 IG recommended that the agency revise its corrective action plans to address the issues identified by the IG. Although hundreds of IG recommendations were made and over two-thirds were closed (320 closed recommendations out of 425 total recommendations) during the past 5 years of the IGs' IPERA compliance reviews, agencies' overall noncompliance under IPERA continues to be at its highest point—15 noncompliant agencies for both fiscal years 2014 and 2015. As detailed in figure 9, the total annual number of IG-reported recommendations made by the 24 agencies' IGs has stayed relatively steady for the IGs' fiscal years 2011 through 2015 IPERA compliance reviews. There was a decrease in the number of IG-reported recommendations related to the agencies' compliance under IPERA during fiscal years 2012 and 2013, which coincides with the 2 years that the IGs' reporting showed slight improvements in agencies' compliance under IPERA. As detailed in appendix VI, the number of recommendations per agency varied significantly from fiscal years 2011 through 2015. 
For instance, the number of recommendations per agency during this period ranged from none for one agency—the Nuclear Regulatory Commission (NRC)—to 49 for another agency—HUD. One potential reason for the significant difference in the number of recommendations per agency could be the extent to which the agencies were compliant under IPERA over the past 5 years. For example, NRC has been reported as compliant for the past 5 years, whereas HUD has been reported as noncompliant for the past 3 years. As noted in figure 10, there were 105 recommendations open as of December 31, 2016. While the majority of the open recommendations were made in the past 2 years, and the agencies may not have had sufficient time to implement the necessary corrective actions, there was 1 recommendation from fiscal year 2011 that was still open at the time of our review. Specifically, in the HHS IG's fiscal year 2011 IPERA compliance report issued in March 2012, the HHS IG recommended that HHS develop an improper payment estimate (one of the six IPERA criteria) for the Temporary Assistance for Needy Families (TANF) program and, if necessary, seek statutory authority to require state participation in such a measurement. In the HHS IG's fiscal year 2015 IPERA compliance report issued in May 2016, the HHS IG made another recommendation for HHS to publish an estimate for TANF. Specifically, the IG recommended that HHS continue to work with OMB to implement one of the OMB-suggested potential alternative approaches to reporting on TANF improper payments in fiscal year 2016. According to HHS's fiscal year 2016 AFR, issued in November 2016, HHS plans to encourage Congress to consider statutory modifications that would affect the development of an improper payment estimate when legislation is considered to reauthorize TANF. In addition, in a June 2016 letter to Congress, HHS stated that it wanted to work with Congress to address a set of issues concerning the TANF program related to accountability and how funds are used. Until HHS develops an improper payment estimate for TANF, HHS will continue to be reported as noncompliant with the IPERA criteria that agencies publish improper payment estimates and corrective action plans for all susceptible programs. Based on the HHS IG's fiscal year 2015 compliance report, HHS was noncompliant with five of the six IPERA criteria; however, its noncompliance with the IPERA criteria to publish an improper payment estimate and publish a corrective action plan was based on the TANF program only. The open recommendations address a number of improper payment issues, including compliance under IPERA. We reviewed the open recommendations for the 15 agencies reported as noncompliant for fiscal year 2015, and as shown in figure 11, we found that agencies generally had open recommendations for each of the IPERA criteria for which they were reported as noncompliant. If the agencies take the necessary corrective actions to address the IGs' open recommendations, the number of noncompliant agencies could decrease in future fiscal years, if no new compliance issues arise. In addition, as shown in figure 11, recommendations designed to address noncompliance with certain IPERA criteria were closed by some of the IGs after their fiscal year 2015 IPERA compliance reports were issued. 
Although these agencies may have taken corrective actions that resulted in the IGs closing the recommendations, the compliance results for these agencies for fiscal year 2016 were not published at the time of our review. As noted in figure 11, the 3 agencies (DOL, SBA, and Treasury) that did not report improper payment rates below 10 percent either submitted plans to Congress to address noncompliance or submitted proposed statutory changes. For example, in June 2015, Treasury submitted proposed statutory changes to reduce the improper payment rate of the Earned Income Tax Credit program, which reported an estimated improper payment rate of 23.8 percent in fiscal year 2015. Specifically, according to the Treasury IG's fiscal year 2015 IPERA compliance report, the proposed statutory change would help prevent the improper issuance of billions of dollars in refunds, as it would provide the Internal Revenue Service (IRS) with expanded authority to systematically correct erroneous claims that are identified when tax returns are processed, which, according to Treasury, would allow the IRS to deny erroneous Earned Income Tax Credit refund claims before they are paid. However, as of March 2017, legislation had not been enacted to provide Treasury with this authority. Related to its noncompliance with the IPERA criterion to report an improper payment rate below 10 percent, DOL submitted a legislative proposal in October 2016 to make changes to the federal-state unemployment compensation system to help reduce improper payments. DOL's Unemployment Insurance program was its only program that reported an improper payment rate over 10 percent, and that program was compliant with the remaining IPERA criteria. However, as of March 2017, legislation had not been enacted to make changes to the federal-state unemployment compensation system. Related to SBA's fiscal year 2015 noncompliance with the IPERA criterion to report an improper payment rate below 10 percent, its IG noted that the improper payment rate for one program (Disbursements for Goods and Services) increased from 8.46 percent in fiscal year 2014 to 13.52 percent in fiscal year 2015. Although the IG did not issue a recommendation in its fiscal year 2015 IPERA compliance report to address such noncompliance, the IG reported that SBA submitted a letter to Congress and OMB detailing how it would become compliant under IPERA. SBA's March 2016 letter to Congress included SBA's corrective action plan for reducing improper payments in its Disbursements for Goods and Services program. Government-wide improper payments have been estimated to total over $1.2 trillion from fiscal year 2003 through fiscal year 2016. Five years after the implementation of IPERA, 15 of the 24 CFO Act agencies were reported as noncompliant under IPERA for fiscal year 2015. Although noncompliance under IPERA continues to be at its highest point, the inconsistencies in IGs' determinations of noncompliance that we found in the IGs' fiscal year 2015 IPERA compliance reports may present potentially misleading information to individuals and entities interested in comparative government-wide compliance under IPERA. OMB guidance and IPERA do not specify what, if any, evaluative procedures should be conducted as part of the IGs' compliance determinations. CIGIE, which represents the IGs, has also not issued such guidance. 
In addition, IGs reported programs at 7 agencies as noncompliant for 3 or more consecutive years as of the end of fiscal year 2015, and as a result, the agencies were required to submit certain information to Congress. However, USDA had not submitted the required information despite prior recommendations from its IG and GAO. When agencies do not submit the required information, Congress may lack the information necessary to effectively monitor the implementation of IPERA and take prompt action to address problematic programs. We are making the following two recommendations: To help ensure that government-wide compliance under IPERA is consistently determined and reported, we recommend that the Director of OMB coordinate with CIGIE to develop and issue guidance, either jointly or independently, to specify what procedures should be conducted as part of the IGs' IPERA compliance determinations. To help fulfill USDA's requirements under IPERA and OMB guidance—that agencies submit proposals to Congress when a program reaches 3 or more consecutive years of noncompliance with IPERA criteria—we recommend that the Secretary of Agriculture submit a letter to Congress detailing proposals for reauthorization or statutory changes in response to 3 consecutive years of noncompliance as of fiscal year 2015 for its Farm Security and Rural Investment Act Program. To the extent that reauthorization or statutory changes are not considered necessary to bring a program into compliance, the Secretary or designee should state so in the letter. We requested comments on a draft of this report from the 24 CFO Act agencies and their IGs, OMB, and CIGIE. We received letters from CIGIE, HUD, Treasury, and SSA, as well as from the IGs for DHS and HUD. These letters are reprinted in appendixes VIII through XIII. In addition, all of the other agencies and the IGs either notified us that they had no comments or provided their comments via e-mail. Specifically, OMB's liaison to GAO stated in an e-mail that they had no comments on the report or the recommendation to coordinate with CIGIE to develop guidance. Similarly, USDA's Acting Deputy Secretary concurred with our recommendation to USDA by e-mail. We also received technical comments from the Department of Commerce, HHS, HUD, State, Treasury, EPA, NSF, SBA, and SSA, which we incorporated in the report as appropriate. In addition, the IGs for DHS, HUD, State, Treasury, VA, EPA, NASA, NSF, and SSA also provided technical comments, which we also incorporated in the report as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Agriculture, the Director of the Office of Management and Budget, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2623 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XIV. Our objectives were to review the following: 1. 
The extent to which the 24 agencies listed in the Chief Financial Officers Act of 1990, as amended (CFO Act), complied with the criteria listed in the Improper Payments Elimination and Recovery Act of 2010 (IPERA) for fiscal years 2011 through 2015, as reported by their inspectors general (IG); the criteria and programs that the IGs concluded were responsible for instances of agency noncompliance and the number of programs at the 24 CFO Act agencies that were reported as noncompliant under IPERA criteria by their IGs for 3 or more consecutive years as of fiscal year 2015; and the extent to which the responsible agencies submitted the required information to Congress. 2. The extent to which CFO Act agency IGs reported that they performed additional optional procedures during their fiscal year 2015 reviews, as outlined in Office of Management and Budget (OMB) guidance contained in OMB Circular No. A-123, Appendix C, OMB Memorandum No. M-15-02 (OMB M-15-02). 3. The extent to which the IGs' fiscal years 2011 through 2015 IPERA compliance reports included recommendations and the status of these recommendations as of December 31, 2016, and for the open recommendations associated with noncompliant agencies, the extent to which the recommendations were designed to address agencies' noncompliance with one or more of the six IPERA criteria. Although the responsibility for complying with provisions of improper payment-related statutes rests with the head of each executive agency, we focused on those agencies designated as CFO Act agencies because these agencies represented over 99 percent of the total government-wide improper payments reported in fiscal year 2015. Our work did not include validating or retesting the data or methodologies used by the IGs to determine and report compliance. We corroborated our findings with OMB and all 24 CFO Act agencies and IGs. To address our first objective, we identified the requirements that agencies must meet by reviewing the Improper Payments Information Act of 2002 (IPIA), IPERA, the Improper Payments Elimination and Recovery Improvement Act of 2012 (IPERIA), and OMB guidance. We analyzed CFO Act agency IGs' fiscal year 2015 IPERA reports, which were the most current reports available at the time of our review; summarized information related to agency compliance under IPERA criteria; and identified common findings and related causes for improper payments, as reported by the IGs. We also relied on and reviewed prior year supporting documentation and analyses of CFO Act agencies' IG IPERA reports for fiscal years 2011 through 2014, as reported in our June 2016 report on IPERA compliance reporting, and compared agencies' compliance with each IPERA criterion over fiscal years 2011 through 2015, as reported by the IGs. In addition, we identified the programs responsible for noncompliance over this period by analyzing and summarizing the determinations made in the IG reports. Further, we reviewed the fiscal years 2015 and 2016 improper payments information, which OMB provided to GAO during the audit of the fiscal years 2016 and 2015 consolidated financial statements of the U.S. Government, and compared such information to the agency financial reports or performance and accountability reports to determine the total improper payment estimate reported for the 24 agencies (see apps. IV and VII, respectively). 
To determine the extent of inconsistencies among the IGs' fiscal year 2015 noncompliance determinations, we summarized the noncompliance determinations by "noncompliance based on pass/fail determinations" and "noncompliance based on evaluative procedures." To determine whether agencies responsible for the programs that were reported as noncompliant for 3 or more consecutive years had submitted either proposals for reauthorization or statutory changes to Congress, we interviewed and requested information from the relevant agency offices of the chief financial officer in coordination with the agency IGs. We did not draw conclusions as to the sufficiency or completeness of the information contained in proposals for reauthorization or statutory changes submitted to Congress. Lastly, we corroborated our findings with OMB and all 24 CFO Act agencies and IGs. To address our second objective, we identified procedures that agencies' IGs were required to perform during their annual IPERA compliance reviews, as required by IPIA, IPERA, IPERIA, and OMB M-15-02. We also reviewed IPIA, IPERA, IPERIA, and OMB guidance to identify a list of optional procedures. To verify that we had properly identified and categorized the optional IG procedures, we interviewed and obtained confirmation from OMB staff. We reviewed and summarized the CFO Act agencies' IGs' fiscal year 2015 IPERA compliance reports to determine if the IGs reported that they performed one or more of the optional procedures. Given that some of the procedures were optional only when the IGs were reviewing non-high-priority programs, we determined the population of OMB-designated high-priority programs for fiscal year 2015 based on information from www.paymentaccuracy.gov. To verify that this list was reported correctly on the website, we interviewed OMB staff and corroborated the information. For each agency responsible for a non-high-priority program, we reviewed the related IG's IPERA compliance report for fiscal year 2015 to determine whether the IG reported that it performed one or more of the optional procedures for non-high-priority programs. We did not evaluate the sufficiency of the optional procedures performed by the IGs, nor did we determine whether the IGs completed the required procedures, as outlined in IPIA, IPERA, IPERIA, and OMB guidance. Lastly, we corroborated our findings with the 24 CFO Act agencies' IGs. To address our third objective, we reviewed the 24 CFO Act agency IGs' fiscal years 2011 through 2015 IPERA compliance reports to determine the number of recommendations made by the IGs. To determine the status of the recommendations as of December 31, 2016, we reviewed the recommendation status information obtained from the IGs. To determine whether the IGs associated with the 15 noncompliant agencies had open recommendations that were designed to address the agencies' noncompliance with one or more of the six IPERA criteria, we reviewed and categorized the IGs' open recommendations, as of December 31, 2016. For those agencies that did not have open recommendations, as of December 31, 2016, that addressed the agencies' noncompliance with the IPERA criteria, we determined whether any of the closed recommendations addressed noncompliance with the IPERA criteria and were closed after the IGs' fiscal year 2015 IPERA compliance reports were issued in May 2016.
For the noncompliant agencies that did not have any open recommendations or recommendations that were closed after May 2016 that were designed to address the agencies' noncompliance with one or more of the six IPERA criteria, we determined whether those agencies submitted information to Congress to address IPERA noncompliance, such as reauthorization proposals, proposed statutory changes, or agency action plans. Lastly, we corroborated our findings with the respective agencies and their IGs. We conducted this performance audit from June 2016 to May 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Figures 12 and 13 detail agencies' compliance under the Improper Payments Elimination and Recovery Act of 2010 criteria, as reported by their inspectors general, for fiscal year 2015. Table 2 lists the Chief Financial Officers Act of 1990 agencies and their programs that their inspectors general reported in fiscal year 2015 were noncompliant under the Improper Payments Elimination and Recovery Act of 2010. Table 3 lists the Office of Management and Budget (OMB)-reported improper payment estimates by agency and program or activity for fiscal year 2015, which relate to fiscal year 2015 compliance determinations of the CFO Act agency inspectors general (IG) as discussed in this report. As noted in table 3, excluding the Defense Finance and Accounting Service Commercial Pay program, there were 115 Chief Financial Officers Act of 1990 agency programs that reported an improper payment estimate (either a zero estimate or higher) for fiscal year 2015. Appendix VII details the fiscal year 2016 improper payment estimates, which OMB provided to GAO in December 2016 during the audit of the fiscal years 2016 and 2015 consolidated financial statements of the U.S. government; the IGs' reports regarding such estimates are due in May 2017. Table 4 summarizes the optional procedures the CFO Act agency inspectors general reported that they performed during their fiscal year 2015 IPERA compliance reviews. Table 5 details the status of the recommendations made in the Improper Payments Elimination and Recovery Act of 2010 (IPERA) compliance reports of the CFO Act agency inspectors general (IG) for fiscal years 2011 through 2015, as of December 31, 2016. The IGs' recommendations address a number of improper payment issues, including compliance under IPERA. The number of open and closed recommendations, as well as the total number of recommendations, excludes the same (or similar) recommendations that were repeated from a prior year. Additional information regarding the status of the recommendations can be found in the IGs' annual IPERA compliance reports. Table 6 lists the Office of Management and Budget (OMB)-reported improper payment estimates by agency and program or activity for fiscal year 2016, which OMB provided to GAO in December 2016 during the audit of the fiscal years 2016 and 2015 consolidated financial statements of the U.S. government. The IGs' reports regarding these estimates are due in May 2017.
As noted in table 6, excluding the Defense Finance and Accounting Service Commercial Pay program, there were 105 Chief Financial Officers Act of 1990 agency programs that reported an improper payment estimate (either a zero estimate or higher) for fiscal year 2016. The government-wide improper payment estimate for fiscal year 2016 totaled $144.3 billion for 112 federal entity programs that reported estimates. In addition to the contact named above, Phillip McIntyre (Assistant Director), Michelle Philpott (Auditor in Charge), John Craig, Melanie Darnell, Sophie Geyer, Wilfred Holloway, Jason Kelly, Jason Kirwan, Emily Matic, Brian Paige, Dacia Stewart, and Fabiola Torres made key contributions to this report.
Fiscal year 2015 marked the fifth year of the implementation of IPERA, which requires IGs to annually assess and report on whether executive branch agencies complied with six IPERA criteria related to the estimation of improper payments. Improper payments have been estimated to total over $1.2 trillion government-wide from 2003 through 2016. This report examines (1) the extent to which the 24 CFO Act agency IGs reported that agencies complied with the six IPERA criteria for fiscal years 2011 through 2015 and the programs reported as noncompliant for 3 or more consecutive years; (2) the extent to which the IGs reported that they performed optional procedures during their fiscal year 2015 reviews; and (3) the number and status of the IGs' fiscal years 2011 through 2015 IPERA compliance review recommendations. To conduct this work, GAO analyzed the IGs' fiscal years 2011 through 2015 IPERA compliance reports and corroborated the findings with OMB and all 24 CFO Act agencies and their IGs. Five years after the implementation of the Improper Payments Elimination and Recovery Act of 2010 (IPERA), 15 of the 24 Chief Financial Officers Act of 1990 (CFO Act) agencies were reported by their inspectors general (IG) as noncompliant under IPERA for fiscal year 2015. The programs associated with these 15 agencies accounted for $132 billion (or about 96 percent) of the reported $136.7 billion government-wide improper payment estimate for fiscal year 2015. In addition, inconsistent compliance determinations in the IGs' fiscal year 2015 IPERA compliance reports may present misleading information. Specifically, certain IGs reported compliance based on the presence or absence of the required analysis or reporting, regardless of whether the IGs identified flaws, whereas certain other IGs reported agencies as noncompliant based on their performance of some degree of evaluative procedures to determine whether the analysis or reporting that the agency produced was substantively adequate. While the severity of the IGs' findings may have resulted in the IGs reporting noncompliance for some agencies, similar findings were identified for both the compliant and noncompliant agencies. Neither IPERA nor Office of Management and Budget (OMB) guidance specifies what, if any, evaluative procedures should be conducted as part of the IGs' compliance determinations. The Council of the Inspectors General on Integrity and Efficiency (CIGIE), which represents the IGs, has also not issued such guidance. IGs reported programs at 7 agencies as noncompliant for 3 or more consecutive years as of the end of fiscal year 2015 and, as a result, those agencies were required to submit certain information to Congress. However, the Department of Agriculture had not submitted the required information, despite prior recommendations from its IG and GAO. When agencies do not submit the required information, Congress may lack the information necessary to effectively monitor the implementation of IPERA and take timely action to address problematic programs. The IGs' IPERA compliance reviews serve a key function: to reasonably assure that federal dollars are not misspent and that improper payment estimates are accurate, reliable, and complete. To that end, 20 of the 24 IGs reported in their fiscal year 2015 IPERA compliance reports that they also performed one or more optional procedures, which included evaluating the accuracy and completeness of their agencies' reporting.
The IGs made 425 recommendations in their fiscal years 2011 through 2015 IPERA compliance reports, and 320 of these recommendations were closed as of December 31, 2016. GAO recommends that (1) the Director of OMB coordinate with CIGIE to develop and issue guidance, either jointly or independently, to specify what procedures should be conducted as part of the IGs' IPERA compliance determinations and (2) the Department of Agriculture submit a proposal to Congress, as required, in response to 3 consecutive years of IPERA noncompliance. In response to the draft report, OMB had no comments, and CIGIE stated that it would coordinate with OMB. Also, the Department of Agriculture concurred with the recommendation directed to it.
Under its environmental restoration program, DOD is responsible for identifying and cleaning up contamination that is a threat to human health or the environment and resulted from its past activities on active and closing installations and on formerly used defense sites. The types of contamination include petroleum products; heavy metals, such as lead and mercury; paints and solvents; and other hazardous substances. The restoration program also covers substances that may not be contaminants, such as ordnance and explosive waste and unsafe buildings and debris. The program is guided primarily by the Superfund Amendments and Reauthorization Act of 1986, which amended the Comprehensive Environmental Response, Compensation, and Liability Act of 1980. DOD’s program also must comply with applicable state laws. Under federal and state law, the EPA and state regulatory agencies oversee DOD’s restoration program. The Office of the Deputy Under Secretary of Defense, Installations and Environment, formulates policy and provides oversight for the restoration program. In fiscal year 1997, program funding was partitioned into five environmental restoration accounts: Army, Navy (including Marine Corps), Air Force, formerly used defense sites, and defensewide. The military services plan, program, and budget for individual restoration projects. The Air Force administers its program through its Environmental Restoration Branch; the Navy, through its Naval Facilities Engineering Command; and the Army, through its Environmental Center. The Army also administers the program at formerly used defense sites through the Environmental Division of the U.S. Army Corps of Engineers. The restoration program at installations designated for closure or mission realignment is funded separately, through the Base Realignment and Closure program. DOD’s environmental restoration program is one of the largest in the United States, containing over 28,000 potentially contaminated locations, and involves several stages. First, potentially contaminated locations must be identified. Next, restoration program officials assess locations to determine if they are eligible for cleanup under the program. If a location is found to be on an active installation or a formerly used defense site and is contaminated from past DOD activities, the location is evaluated for risk and, if cleanup is necessary, a cleanup approach is selected. Because DOD has many projects in its inventory, it sets priorities for sequencing the work. Eventually, the location is cleaned up or a remedy is put in place and, if necessary, is monitored to ensure protection of human health and the environment. Through fiscal year 2000, DOD had spent over $17 billion on its restoration program. Cleanup at most locations is scheduled for completion by 2074, and the total expected cost of the program is projected to be over $42 billion. DOD’s efforts for identifying locations in Guam that may have environmental contamination have been scaled back since the mid-1990s. Under the current approach, DOD generally limits its efforts to search for potentially contaminated locations, instead concentrating on cleaning up locations already identified. Of the known contaminated locations in Guam, most were identified when DOD, under an earlier approach, funded major efforts to search for them. 
For both DOD-owned property and formerly used defense sites, the Navy, the Air Force, and the Corps conducted multiple organized searches for contamination in the 1980s and early 1990s, usually through contracts with private companies. The searches included activities such as reviewing records and historical photographs, observing property conditions, and interviewing knowledgeable individuals. If contamination was discovered or suspected during a search, the location could be added to DOD’s inventory. Since the mid-1990s, however, DOD has shifted its focus to cleaning up contamination and generally has limited its efforts to search for potentially contaminated locations. Since then, potentially contaminated locations on active military installations have been discovered through normal operations and construction activities, while the Corps has relied primarily on regulators or community residents to bring potentially contaminated locations on formerly used defense sites to its attention. DOD has added far fewer locations to the Guam inventory since the change in program emphasis. However, based on DOD’s extensive past activities in Guam and the continuing discoveries of potentially contaminated locations, regulators and other stakeholders believe that additional undetected contamination may exist in Guam and that a continuing process to identify that contamination is needed to protect human health and the environment. Starting in the 1980s, DOD agencies conducted several searches to identify potentially contaminated locations in Guam. The Navy, the Air Force, and the Corps used similar approaches that generally involved hiring contractors to, among other techniques, review archived records, maps, and photographs; inspect property; and interview knowledgeable individuals. These searches occurred on different occasions over the years. For example, between 1984 and 1994, the Corps conducted three separate searches in Guam to identify contaminated locations. According to a Corps Honolulu District Office official, more than one search was conducted because Corps officials had concerns that all contaminated locations may not have been identified in the prior studies. The identification of formerly used defense sites can be difficult in Guam because land use and property transfer records are hard to locate and are often incomplete. Searches by the Navy, the Air Force, and the Corps identified a large number of potentially contaminated locations on both active DOD properties and formerly used defense sites. In addition, several potentially contaminated locations were brought to DOD’s attention through referrals from other parties, such as Guam EPA. For all of Guam, a total of 202 potentially contaminated locations were included in the DOD inventory, including 155 on active installations and 47 on formerly used defense sites. The circumstances varied under which DOD used these locations, as did the types of hazardous waste and debris they contained. For example, for years the Air Force disposed of construction debris, aircraft components, ordnance, and chemical waste, such as pesticides, on private property located on the cliff-line boundary of Andersen Air Force Base. At the same time, the Navy disposed of paints, paint thinners, battery casings, and other material on its own property, which was located near the ocean at Orote Point, Guam. Figure 1 shows the Navy’s disposal site before environmental restoration action began. 
In the mid-1990s, as a result of congressional direction and the belief that much of the environmental contamination had been found, DOD changed its focus from identifying locations with potential contamination to addressing contamination at the locations already identified. DOD officials said that most contaminated locations had been found and that the change in focus was a natural progression of the program. The Congress was also concerned that DOD had not made much progress in cleaning up identified locations and that more money was being spent on identifying and studying locations than on the actual cleanup. Consequently, in the National Defense Authorization Act for Fiscal Year 1996, the Congress set a goal for DOD to spend no more than 20 percent of its environmental program funds for program support, studies, and investigations. Despite the shift in focus from identifying locations to addressing the contamination already found, DOD continued to identify and add potentially contaminated locations to its inventory in Guam, although fewer locations were added than in the past (see table 1). While DOD continued to fund some searches, such as one to identify chemical warfare materials on the Fifth Field Marine Supply Depot in Guam, restoration program officials began to rely primarily on others to bring the locations to their attention. On active installations, contamination was discovered as a result of construction or other operational activities. For example, the Navy added two locations to its inventory in 1995 that were discovered during construction activities. On formerly used defense sites, the Corps began relying primarily on agencies, such as Guam EPA, and other sources, such as community residents, to identify potential locations. For example, Guam EPA referred the only potentially contaminated location that the Corps added to its inventory since the shift in program emphasis. Stakeholders said they believe that not all contaminated locations in Guam caused by DOD have been found. Given the extent of past DOD operational activities in Guam, the few controls over disposal practices during and after World War II, and the continuing discoveries of contamination problems, this view seems reasonable. In part to respond to congressional concerns, the Corps has budgeted $500,000 in fiscal year 2002 to conduct an islandwide archival search in Guam to identify formerly used defense sites with evidence of potential chemical warfare material. Even with this effort, however, stakeholders will continue to have an important role in alerting DOD agencies to potential environmental hazards on the island. Stakeholders raised no major concerns about DOD’s cleanup efforts on active military installations, but raised three major concerns about the Corps’ efforts to identify and address contamination on formerly used defense sites in Guam. Their first concern is that the Corps’ current process for adding potentially contaminated locations to its inventory is not clear to them. We believe that the lack of clarity can be attributed to the Corps’ failure to develop well-understood written guidelines for stakeholders to use when referring such locations to the Corps, including the information that should be included with the referrals. We also found that the Corps has not effectively communicated to stakeholders the actions it plans to take on the referrals. 
The second concern is that DOD excludes from the restoration program debris that does not pose a threat to human health or the environment, even though it was caused by DOD and could place a financial burden on owners who incur costs to remove it. However, DOD policy provides for cleaning up debris only if it is a threat to human health or the environment. The third concern is the slow pace of funding environmental cleanup on formerly used defense sites included in the restoration program. During fiscal years 1984-2000, 4 percent of the total expected cost of locations the Corps approved for cleanup had been funded in Guam while, nationally, 16 percent had been funded, even though contaminated locations in Guam posed risks to human health and the environment that were similar to risks posed by such locations nationally. The Corps explained that, consistent with DOD policy, the unfunded locations in Guam ranked lower in sequencing work than the locations that were funded nationally. Stakeholders have reported that the process for referring potentially contaminated locations to the Corps is unclear to them. Without a clearly understood process, stakeholders cannot be sure that the Corps is properly considering the referred locations for inclusion in its Guam inventory. DOD policy requires the identification of contamination from its past activities, but neither DOD nor Corps policy sets forth the process that stakeholders should use when making referrals. In fact, the Corps’ formerly used defense site program manual, which is its primary document setting forth policy guidance for executing the program, is silent on procedures stakeholders should use to make referrals. Corps Pacific Ocean Division and Honolulu District Office officials acknowledged that the division and district offices did not have written guidelines explaining the referral process, but the Corps district office program manager said the process was verbally explained to Guam EPA and other stakeholders. One area needing clarification is the information that should be included with referrals of potentially contaminated locations. Stakeholders were unclear about the information they should provide when referring such locations to the Corps because the Corps had not defined what information was required. Neither DOD nor Corps policy sets forth the information required with referrals, and the Corps district program manager said that the district office had provided no written guidelines to stakeholders regarding information requirements. Moreover, the program manager said that the referrals the Corps district office had received were sometimes incomplete. For example, the program manager told us that the information provided by Guam EPA with an October 30, 1999, letter referring several potentially contaminated locations was incomplete because there was no documentation showing contamination or indicating that the locations were likely formerly used defense sites. The program manager also said that more information would be needed before the Corps would take any action to determine whether the referred locations should be added to the inventory. Guam EPA officials told us that, in the summer of 2001, the Corps had verbally informed them that more information was needed with their referrals, but it did not describe the specific information needed. 
Rather than identifying the specific information that should be included, the program manager asked that Guam EPA and others include as much information as possible with any referrals, including information that indicates that the locations were formerly used defense sites and describes potential contamination associated with DOD activities. These uncertainties have been exacerbated by poor communication between the Corps and its stakeholders. Guam EPA officials told us that the Corps often did not respond to or share much information about the referrals it had received, so they did not know whether the Corps was properly considering their referrals. For example, concerning several referrals made between October 30, 1999, and May 18, 2000, the Guam EPA administrator wrote a letter on June 20, 2000, to the district engineer in the Corps Honolulu District Office complaining that no feedback had been provided regarding whether the referred locations were eligible for funding or what action the Corps planned to take on the referrals. The Corps program manager had no written record of a response to this letter. However, the program manager said that the referrals had been verbally acknowledged to a Guam EPA official, who was also told that no action to assess the referrals would be taken at that time because there was no money available due to higher priority work. The Guam EPA official did not recall receiving this information. Stakeholders said that they discussed concerns about the formerly used defense sites program with the Corps, but the concerns have not been resolved. For example, EPA officials organized a work group to improve the Corps Honolulu District Office’s process for dealing with formerly used defense sites. Concerns about how to add locations and other issues related to the Corps’ inventory process, such as what locations may exist that are not on the inventory, were raised in the initial work group meeting in January 2001. The meeting involved EPA, Guam EPA, Corps district and division officials, and officials from other interested federal agencies, such as the Fish and Wildlife Service, the National Park Service, and the Coast Guard. EPA officials told us that concerns about the inventory were also discussed at an August meeting of the work group and would continue to be discussed in future meetings. As of February 2002, the work group was still considering the concerns. In our view, improved communications on the part of the Corps would help stakeholders better understand the process for referring potentially contaminated locations to the Corps, including the information they should include with such referrals. Under the Superfund Amendments and Reauthorization Act of 1986, EPA regulations, and DOD policy, the Corps is required to consult with regulators and the public in the decision-making process for environmental cleanup. Nationally, since 1994, restoration advisory boards have been the primary forum for communities affected by contamination at formerly used defense sites to keep informed of and participate in decisions affecting cleanup. Corps policy is to establish a restoration advisory board for formerly used defense sites that contain an active cleanup project if, among other reasons, a board is requested by a government agency. However, there currently is no restoration advisory board for formerly used defense sites in Guam. In August 2001, Guam EPA asked the Corps Honolulu District Office to establish a restoration advisory board for the island.
While none of the pending projects in Guam have progressed far enough to be considered active and Corps district officials have expressed concern about the cost of establishing a board in Guam, the Corps district office engineer agreed in September 2001 that a board would be a good tool and committed to discussing the issue with the work group discussed previously. In addition, in August 2001, the Corps’ formerly used defense sites national program manager visited Guam, in part, to improve communications with regulators and assure them that the Corps would be more responsive to their inquiries about site eligibility. Stakeholders’ second concern is that the Corps has not accepted responsibility for some apparent military debris discovered on private property. For example, in 2001, a property owner unearthed military debris while excavating for a foundation on a residential lot east of Guam’s capital city. As figure 2 shows, the debris included jeep parts, scrap metal, and other material, such as tires. The debris apparently had been discarded and buried years before, when the lot was part of the 700-acre Fifth Field Marine Supply Depot. Upon discovering the debris, the property owner notified Guam EPA, which in turn notified the Navy and the Corps. After inspecting the site, the Corps Honolulu District Office decided that, because the debris contained no apparent toxic materials and had been buried prior to excavation by the owner, it was not a threat to human health or the environment and was therefore not eligible for funding under the restoration program. The Corps’ decision to exclude this debris is consistent with DOD policy, although it likely will result in a financial burden for the property owner. The Superfund Amendments and Reauthorization Act of 1986 authorizes using environmental restoration program funds to remove unsafe debris, and DOD has adopted a policy of cleaning up only debris that poses a threat to human health or the environment. DOD officials stated that this policy is necessary, in part, to ensure that most funding is directed toward cleaning up contamination from hazardous and toxic waste that poses a greater risk to human health or the environment. While the Corps followed DOD policy in making its decision, the property owner may incur costs to remove the debris and relocate the construction project. A stakeholder said that this type of problem was likely to increase as more of Guam’s limited land base is developed. The third concern raised by stakeholders is that the Corps has not made sufficient progress in cleaning up locations that it has accepted for inclusion in the restoration program. They said that little work has been done to date or is scheduled in the next several years. Despite the shift in focus in the mid-1990s to cleaning up contaminated locations that have been identified, between fiscal years 1984 and 2000, the Corps spent $4.9 million on its environmental restoration program in Guam, which represents 4 percent of the total expected cost in Guam. Nationally, the Corps has spent about 16 percent of the total expected cost of its restoration program. Six of the 20 projects the Corps approved for cleanup action in Guam have been completed, while 3 are scheduled for completion before 2011, 2 between 2011 and 2020, and 9 after 2021. Most of the completed cleanup projects in Guam have involved removing hazardous waste and underground storage tanks. The remaining work mostly involves removing ordnance and explosive waste.
Corps officials acknowledged the difference in funding between Guam and other locations, but they said that it was an appropriate outcome of the Corps’ approach to prioritizing the sequence of work. The Corps considers several factors in sequencing work, including the risk posed to human health or the environment, legal obligations, stakeholder concerns, and program management considerations. Contaminated locations on formerly used defense sites in Guam have a risk profile similar to that of locations nationally. Risk, therefore, does not explain the difference in funding. Corps officials said that when other factors besides risk are considered, projects in other locations emerge with higher priority. For example, the Alaska District Office sometimes combines low-priority projects with high-priority projects in remote areas of Alaska to save transportation and other costs. If new contamination is discovered, the Corps can reassess its priorities and redistribute available funds to address the problem. For example, a Guam landowner discovered World War II-era chemical testing kits with diluted mustard gas and other chemicals on his property in July 1999. Due to the potential threat, EPA conducted an emergency response action and, within 3 weeks of discovery, it had removed 16 kits from the property. One week later, the Corps inspected the property using ground-penetrating radar and removed 19 additional kits. In March 2000, the Corps expanded its efforts to a 6-acre area surrounding the property and removed at least 17 more kits. Overall, the Corps spent over $4.6 million on this project, which represented about 95 percent of all the environmental restoration funds it had spent in Guam. To fund this unexpected effort, the Corps reallocated funds from other projects within its Pacific Ocean Division and from other sources, such as Corps headquarters. Despite DOD’s efforts to identify environmentally contaminated locations in Guam, it is likely that some contamination has yet to be discovered. Because DOD agencies now limit their efforts to search for the contamination and instead rely primarily on others to identify such locations, it is important to have a clearly understood process in place for referring those locations to DOD. Although stakeholders raised no major concerns about the process for active DOD installations, the Corps’ process for adding potentially contaminated locations to its formerly used defense site inventory is unclear—both the procedures to follow and the information to include. Without a clear process, the Corps cannot ensure that it is carrying out its environmental responsibilities properly. Furthermore, stakeholders cannot be assured that they are meeting the Corps’ information needs. Stakeholders need to better understand the process for referring potentially contaminated locations to the Corps because the stakeholders are the persons and entities most likely to make referrals. Moreover, once the referrals have been made, communications between the Corps and its stakeholders about actions the Corps plans to take have been ineffective. Without knowing the actions that the Corps plans to take on referrals, stakeholders have no assurance that the Corps has properly considered the referrals to determine whether the potential locations should be added to the inventory. Because the Corps has not communicated effectively with stakeholders, its process is not transparent, and stakeholders lack the assurance they seek that the Corps’ restoration program is properly implemented in Guam.
To improve DOD’s management of the process for identifying contamination on formerly used defense sites in Guam, we recommend that the secretary of the Department of Defense direct the secretary of the Department of the Army to develop written guidelines for stakeholders in Guam to use when referring locations of suspected contamination to the Corps. The Army should also identify the information that stakeholders should include when making such referrals. To improve stakeholders’ overall understanding of DOD’s restoration program on formerly used defense sites in Guam, we recommend that the secretary of the Department of Defense direct the secretary of the Department of the Army to improve efforts to communicate with stakeholders in Guam to better inform them about policies and procedures for stakeholders to use when referring potential locations to the Corps and the actions the Corps plans to take on the referrals it receives. One way to do this would be to establish a restoration advisory board for formerly used defense sites in Guam. We provided DOD with a draft of this report for its review and comment. DOD responded that, except for one concern, the draft report represented a fair and accurate assessment of the Corps’ efforts to identify new potentially contaminated sites in Guam and coordinate cleanup of those sites with regulators and other stakeholders. DOD agreed with our recommendations to develop written guidelines on its referral process and to improve communications with stakeholders in Guam. DOD’s one concern was that some information that it had provided to us during our review, such as clarifying the types of materials found in Guam and the conditions under which the Corps would establish a restoration advisory board in Guam, was left out of the report. In finalizing our report, however, we incorporated these and other DOD suggestions as appropriate. Regarding our recommendation that the Army develop written guidelines for stakeholders in Guam to use when referring locations of suspected contamination to the Corps, DOD agreed and stated that it would publish such written guidelines and make them publicly available. DOD also stated that its process in Guam could be improved and that the Corps has undertaken a programwide improvement initiative to better coordinate cleanup of formerly used defense sites with regulators and stakeholders. One aspect of the initiative is the development of management action plans, which also provide regulators with the opportunity to communicate with the Corps on cleanup priorities and to notify the Corps about other potentially contaminated locations. DOD stated that in response to our recommendation, and as a first step in developing a management action plan in Guam, it would direct the Army to convene interagency meetings with Guam EPA to review the list of formerly used defense sites and develop an updated inventory. Regarding our recommendation that the Army improve efforts to communicate with stakeholders in Guam, DOD agreed and said it would direct the Army to develop a community relations plan for Guam that describes the information needs of the community and tools the Corps can use to reach out to the community, such as public meetings and information papers. Through these tools, DOD stated that the Corps would also be able to better communicate its procedures for referring potentially contaminated locations. 
DOD also stated that establishment of restoration advisory boards would be considered if there is sufficient, sustained community interest and cleanup projects are planned on the island. As we stated in our report, such boards are one way to improve communications with stakeholders in Guam. DOD also provided technical corrections, which we incorporated as appropriate. DOD’s written comments on the draft report are included in appendix I. To determine the process used by DOD to identify potentially contaminated locations in Guam and determine what locations were identified, we reviewed relevant federal laws and regulations and DOD policies and procedures and discussed DOD’s environmental restoration program with DOD officials. We also visited DOD officials in Hawaii and Guam to discuss the program and document their efforts to identify environmental contamination in Guam. We reviewed each military service’s inventory of potentially contaminated locations in Guam and the method by which the locations were discovered. We also discussed DOD’s current inventory of contaminated locations with Guam EPA officials and other stakeholders. To determine the nature and extent of concerns about the environmental restoration program raised by regulators and other stakeholders, we discussed the program with Guam EPA officials and other interested parties in Guam, such as restoration advisory board members and EPA officials. To evaluate the concerns raised by stakeholders, we reviewed relevant federal laws and regulations and DOD environmental restoration program policies and procedures and discussed the program with DOD headquarters and field officials. We also analyzed program funding in Guam and nationally. We did not independently verify DOD’s funding data, which forms the basis for DOD’s annual report to the Congress and is publicly available. We conducted our work from June 2001 to March 2002 in accordance with generally accepted government auditing standards. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 5 days after the date of this letter. At that time, we will send copies of the report to the secretary of defense; the administrator, Environmental Protection Agency; and the administrator, Guam Environmental Protection Agency. We will make copies available to others on request. In addition to the above, Don Cowan, Jonathan Dent, Doreen Feldman, Susan Irwin, and Stan Stenersen made key contributions to this report.
Chemical testing kits from World War II containing diluted mustard gas and other chemicals have been discovered on Guam. The Department of Defense (DOD) is responsible for identifying and cleaning up contaminated military sites throughout the United States and its territories. In the mid-1990s, DOD scaled back its efforts to identify potentially contaminated locations and focused instead on cleaning up locations already identified. In Guam, DOD now relies on referrals from the Guam Environmental Protection Agency and on incidental discovery during construction and other operational activities. Stakeholders had three concerns about the Army Corps of Engineers' efforts to identify and address contamination on formerly used defense sites. First, they were uncertain about the Corps' process for adding potentially contaminated locations to its Guam inventory. Second, some locations containing debris, such as metal and tires, were excluded even though the waste was caused by DOD and could place a financial burden on the owner to remove it. Third, stakeholders were concerned about the slow pace of funding for the program. Between fiscal years 1984 and 2000, only 4 percent of the total expected cost of cleaning up these locations had been funded in Guam, compared with 16 percent nationwide.
Government officials are concerned about attacks from individuals and groups with malicious intent, such as criminals, terrorists, and adversarial foreign nations. For example, in February 2009, the Director of National Intelligence testified that foreign nations and criminals have targeted government and private sector networks to gain a competitive advantage and potentially disrupt or destroy them, and that terrorist groups have expressed a desire to use cyber attacks as a means to target the United States. The director also discussed that in August 2008, the national government of Georgia’s Web sites were disabled during hostilities with Russia, which hindered the government’s ability to communicate its perspective about the conflict. The federal government has developed a strategy to address such cyber threats. Specifically, President Bush issued the 2003 National Strategy to Secure Cyberspace and related policy directives, such as Homeland Security Presidential Directive 7, that specify key elements of how the nation is to secure key computer-based systems, including both government systems and those that support critical infrastructures owned and operated by the private sector. The strategy and related policies also establish the Department of Homeland Security (DHS) as the focal point for cyber CIP and assign the department multiple leadership roles and responsibilities in this area. They include (1) developing a comprehensive national plan for CIP, including cybersecurity; (2) developing and enhancing national cyber analysis and warning capabilities; (3) providing and coordinating incident response and recovery planning, including conducting incident response exercises; (4) identifying, assessing, and supporting efforts to reduce cyber threats and vulnerabilities, including those associated with infrastructure control systems; and (5) strengthening international cyberspace security. In addition, the strategy and related policy direct DHS and other relevant stakeholders to use risk management principles to prioritize protection activities within and across the 18 critical infrastructure sectors in an integrated, coordinated fashion. Because the threats have persisted and grown, President Bush in January 2008 began to implement a series of initiatives—commonly referred to as the Comprehensive National Cybersecurity Initiative (CNCI)—aimed primarily at improving DHS and other federal agencies’ efforts to protect against intrusion attempts and anticipate future threats. While these initiatives have not been made public, the Director of National Intelligence stated that they include defensive, offensive, research and development, and counterintelligence efforts, as well as a project to improve public/private partnerships. Subsequently, in December 2008, the Commission on Cybersecurity for the 44th Presidency reported, among other things, that the failure to protect cyberspace was an urgent national security problem and made 25 recommendations aimed at addressing shortfalls with the strategy and its implementation. Since then, President Obama (in February 2009) initiated a review of the cybersecurity strategy and supporting activities. The review is scheduled to be completed in April 2009. Over the last several years we have reported on our nation’s efforts to fulfill essential aspects of its cybersecurity strategy. In particular, we have reported consistently since 2005 that DHS has yet to fully satisfy its cybersecurity responsibilities designated by the strategy. 
To address these shortfalls, we have made about 30 recommendations in key cybersecurity areas including the 5 listed in table 1. DHS has since developed and implemented certain capabilities to satisfy aspects of its cybersecurity responsibilities, but the department still has not fully satisfied our recommendations, and thus further action needs to be taken to address these areas. In July 2008, we reported that DHS’s United States Computer Emergency Readiness Team (US-CERT) did not fully address 15 key cyber analysis and warning attributes related to (1) monitoring network activity to detect anomalies, (2) analyzing information and investigating anomalies to determine whether they are threats, (3) warning appropriate officials with timely and actionable threat and mitigation information, and (4) responding to the threat. For example, US-CERT provided warnings by developing and distributing a wide array of notifications; however, these notifications were not consistently actionable or timely. As a result, we recommended that the department address shortfalls associated with the 15 attributes in order to fully establish a national cyber analysis and warning capability as envisioned in the national strategy. DHS agreed in large part with our recommendations. In September 2008, we reported that since conducting a major cyber attack exercise, called Cyber Storm, DHS had demonstrated progress in addressing eight lessons it had learned from these efforts. However, its actions to address the lessons had not been fully implemented. Specifically, while it had completed 42 of the 66 activities identified, the department had identified 16 activities as ongoing and 7 as planned for the future. Consequently, we recommended that DHS schedule and complete all of the corrective activities identified in order to strengthen coordination between public and private sector participants in response to significant cyber incidents. DHS concurred with our recommendation. To date, DHS has continued to make progress in completing some identified activities but has yet to do so for others. In a September 2007 report and an October 2007 testimony, we reported that consistent with the national strategy requirement to identify and reduce threats and vulnerabilities, DHS was sponsoring multiple control systems security initiatives, including an effort to improve control systems cybersecurity using vulnerability evaluation and response tools. However, DHS had not established a strategy to coordinate the various control systems activities across federal agencies and the private sector, and it did not effectively share information on control system vulnerabilities with the public and private sectors. Accordingly, we recommended that DHS develop a strategy to guide efforts for securing control systems and establish a rapid and secure process for sharing sensitive control system vulnerability information. DHS recently began developing a strategy and a process to share sensitive information. We reported and later testified in 2006 that the department had begun a variety of initiatives to fulfill its responsibility, as called for by the national strategy, for developing an integrated public/private plan for Internet recovery. However, we determined that these efforts were not comprehensive or complete. As such, we recommended that DHS implement nine actions to improve the department’s ability to facilitate public/private efforts to recover the Internet in case of a major disruption. 
In October 2007, we testified that the department had made progress in implementing our recommendations; however, seven of the nine have not been completed. To date, an integrated public/private plan for Internet recovery does not exist. In 2007, we reported that public and private entities faced a number of challenges in addressing cybercrime, including ensuring adequate analytical and technical capabilities for law enforcement and conducting investigations and prosecuting cybercrimes that cross national and state borders. In addition to our recommendations on improving key aspects of the national cybersecurity strategy and its implementation, we also obtained the views of experts (by means of panel discussions) on these and other critical aspects of the strategy, including areas for improvement. The experts, who included former federal officials, academics, and private sector executives, highlighted 12 key improvements that are, in their view, essential to improving the strategy and our national cybersecurity posture. These improvements are in large part consistent with our above mentioned reports and extensive research and experience in this area. They include: 1. Develop a national strategy that clearly articulates strategic objectives, goals, and priorities—The strategy should, among other things, (1) include well-defined strategic objectives, (2) provide understandable goals for the government and the private sector (end game), (3) articulate cyber priorities among the objectives, (4) provide a vision of what secure cyberspace should be in the future, (5) seek to integrate federal government capabilities, (6) establish metrics to gauge whether progress is being made against the strategy, and (7) provide an effective means for enforcing action and accountability when there are progress shortfalls. According to expert panel members, the CNCI provides a good set of tactical initiatives focused on improving primarily federal cybersecurity; however, it does not provide strategic objectives, goals, and priorities for the nation as a whole. 2. Establish White House responsibility and accountability for leading and overseeing national cybersecurity policy— The strategy makes DHS the focal point for cybersecurity; however, according to expert panel members, DHS has not met expectations and has not provided the high-level leadership needed to raise cybersecurity to a national focus. Accordingly, panelists stated that to be successful and to send the message to the nation and cyber critical infrastructure owners that cybersecurity is a priority, this leadership role needs to be elevated to the White House. In addition, to be effective, the office must have, among other things, commensurate authority— for example, over budgets and resources—to implement and employ appropriate incentives to encourage action. 3. Establish a governance structure for strategy implementation—The strategy establishes a public/private partnership governance structure that includes 18 critical infrastructure sectors, corresponding government and sector coordinating councils, and cross-sector councils. However, according to panelists, this structure is government-centric and largely relies on personal relationships to instill trust to share information and take action. In addition, although all sectors are not of equal importance in regard to their cyber assets and functions, the structure treats all sectors and all critical cyber assets and functions equally. 
To ensure effective strategy implementation, experts stated that the partnership structure should include a committee of senior government representatives (for example, the Departments of Defense, Homeland Security, Justice, State, and the Treasury and the White House) and private sector leaders representing the most critical cyber assets and functions. Expert panel members also suggested that this committee’s responsibilities should include measuring and periodically reporting on progress in achieving the goals, objectives, and strategic priorities established in the national strategy and building consensus to hold involved parties accountable when there are progress shortfalls. 4. Publicize and raise awareness about the seriousness of the cybersecurity problem—Although the strategy establishes cyberspace security awareness as a priority, experts stated that many national leaders in business and government, including in Congress, who can invest resources to address cybersecurity problems are generally not aware of the severity of the risks to national and economic security posed by the inadequacy of our nation’s cybersecurity posture and the associated intrusions made more likely by that posture. Expert panel members suggested that an aggressive awareness campaign is needed to raise the level of knowledge of leaders and the general populace that our nation is constantly under cyber attack. 5. Create an accountable, operational cybersecurity organization—DHS established the National Cyber Security Division (within the Office of Cybersecurity and Communications) to be responsible for leading national day-to-day cybersecurity efforts; however, according to panelists, this has not enabled DHS to become the national focal point as envisioned. Panel members stated that currently, DOD and other organizations within the intelligence community that have significant resources and capabilities have come to dominate federal efforts. They told us that there also needs to be an independent cybersecurity organization that leverages and integrates the capabilities of the private sector, civilian government, law enforcement, military, intelligence community, and the nation’s international allies to address incidents against the nation’s critical cyber systems and functions. However, there was no consensus among our expert panel members regarding where this organization should reside. 6. Focus more actions on prioritizing assets and functions, assessing vulnerabilities, and reducing vulnerabilities than on developing additional plans—The strategy recommends actions to identify critical cyber assets and functions, but panelists stated that efforts to identify which cyber assets and functions are most critical to the nation have been insufficient. According to panel members, inclusion in cyber critical infrastructure protection efforts and lists of critical assets is currently based on the willingness of the person or entity responsible for the asset or function to participate and not on substantiated technical evidence. In addition, the current strategy establishes vulnerability reduction as a key priority; however, according to panelists, efforts to identify and mitigate known vulnerabilities have been insufficient. They stated that greater efforts should be taken to identify and eliminate common vulnerabilities and that there are techniques available that should be used to assess vulnerabilities in the most critical, prioritized cyber assets and functions. 7.
Bolster public/private partnerships through an improved value proposition and use of incentives—While the strategy encourages action by owners and operators of critical cyber assets and functions, panel members stated that there are not adequate economic and other incentives (i.e., a value proposition) for greater investment and partnering in cybersecurity. Accordingly, panelists stated that the federal government should provide valued services (such as offering useful threat analysis and warning information) or incentives (such as grants or tax reductions) to encourage action by and effective partnerships with the private sector. They also suggested that public and private sector entities use means such as cost-benefit analyses to ensure the efficient use of limited cybersecurity-related resources. 8. Focus greater attention on addressing the global aspects of cyberspace—The strategy includes recommendations to address the international aspects of cyberspace but, according to panelists, the U.S. is not addressing global issues affecting how cyberspace is governed and controlled. They added that, while other nations are actively involved in developing treaties, establishing standards, and pursuing international agreements (such as on privacy), the U.S. is not aggressively working in a coordinated manner to ensure that international agreements are consistent with U.S. practice and that they address cybersecurity and cybercrime considerations. Panel members stated that the U.S. should pursue a more coordinated, aggressive approach so that there is a level playing field globally for U.S. corporations and enhanced cooperation among government agencies, including law enforcement. In addition, a panelist stated that the U.S. should work towards building consensus on a global cyber strategy. 9. Improve law enforcement efforts to address malicious activities in cyberspace—The strategy calls for improving investigative coordination domestically and internationally and promoting a common agreement among nations on addressing cybercrime. According to a panelist, some improvements in domestic law have been made (e.g., enactment of the PROTECT Our Children Act of 2008), but implementation of this act is a work in progress due to its recent passage. Panel members also stated that current domestic and international law enforcement efforts, including activities, procedures, methods, and laws, are too outdated to adequately address the speed, sophistication, and techniques of individuals and groups with malicious intent, such as criminals, terrorists, and adversarial foreign nations. Improved law enforcement capability is essential to catching and prosecuting malicious individuals and groups more effectively and, with stricter penalties, deterring malicious behavior. 10. Place greater emphasis on cybersecurity research and development, including consideration of how to better coordinate government and private sector efforts—While the strategy recommends actions to develop a research and development agenda and coordinate efforts between the government and private sectors, experts stated that the U.S. is not adequately focusing and funding research and development efforts to address cybersecurity or to develop the next generation of cyberspace to include effective security capabilities. In addition, the research and development efforts currently underway are not being well coordinated between government and the private sector. 11. 
Increase the cadre of cybersecurity professionals—The strategy includes efforts to increase the number and skills of cybersecurity professionals but, according to panelists, these efforts have not produced sufficient numbers of professionals, including information security specialists and cybercrime investigators. Expert panel members stated that actions to increase the number of professionals with adequate cybersecurity skills should include (1) enhancing existing scholarship programs (e.g., Scholarship for Service) and (2) making the cybersecurity discipline a profession through testing and licensing. 12. Make the federal government a model for cybersecurity, including using its acquisition function to enhance cybersecurity aspects of products and services—The strategy establishes securing the government’s cyberspace as a key priority and advocates using federal acquisition to accomplish this goal. Although the federal government has taken steps to improve the cybersecurity of agencies (e.g., beginning to implement the CNCI initiatives), panelists stated that it still is not a model for cybersecurity. Further, they said the federal government has not made changes in its acquisition function and the training of government officials in a manner that effectively improves the cybersecurity capabilities of products and services purchased and used by federal agencies. In summary, our nation is under cyber attack, and the present strategy and its implementation have not been fully effective in mitigating the threat. This is due in part to the fact that DHS still needs to take further actions to address key cybersecurity areas, including fully addressing our recommendations. In addition, nationally recognized experts have identified improvements aimed at strengthening the strategy and, in turn, our cybersecurity posture. Key improvements include developing a national strategy that clearly articulates strategic objectives, goals, and priorities; establishing White House leadership; improving governance; and creating a capable and respected operational lead organization. Until the recommendations are fully addressed and these improvements are considered, our nation’s most critical federal and private sector infrastructure systems remain at unnecessary risk of attack from our adversaries. Consequently, in addition to fully implementing our recommendations, it is essential that the Obama administration consider these improvements as it reviews our nation’s cybersecurity strategy and begins to make decisions on moving forward. Madam Chair, this concludes my statement. I would be happy to answer any questions that you or members of the subcommittee may have at this time. If you have any questions on matters discussed in this testimony, please contact me at (202) 512-9286, or by e-mail at [email protected]. Other key contributors to this testimony include Bradley Becker, Camille Chaires, Michael Gilmore, Nancy Glover, Kush Malhotra, Gary Mountjoy, Lee McCracken, and Andrew Stavisky. The following experts participated in our panel discussions: Steve D. Crocker, Chair, Security and Stability Advisory Committee, Internet Corporation for Assigned Names and Numbers. Robert Dix, Vice President of Government Affairs, Juniper Networks, Inc. Martha Stansell-Gamm, (Retired) Chief, Computer Crime and Intellectual Property Section, Department of Justice. Dr. Lawrence Gordon, Ernst & Young Alumni Professor of Managerial Accounting and Information Assurance, Robert H. Smith School of Business, University of Maryland. 
Tiffany Jones, Director, Public Policy and Government Relations, Symantec. Tom Kellerman, Vice President of Security Awareness, Core Security. Dr. Kathleen Kiernan, Chief Executive Officer, The Kiernan Group, and Chairman of the Board, InfraGard. Cheri McGuire, Principal Security Strategist, Microsoft Corporation, and former Acting Director, National Cyber Security Division, U.S. Department of Homeland Security. Allan Paller, Director of Research, SANS Institute. Andy Purdy, President, DRA Enterprises, Inc., and former Acting Director, National Cyber Security Division, U.S. Department of Homeland Security. Marcus Sachs, Executive Director of Government Affairs for National Security Policy, Verizon Communications; and Director, SANS Internet Storm Center. Howard Schmidt, President and Chief Executive Officer, Information Security Forum. David Sobel, Senior Counsel, Electronic Frontier Foundation. Amit Yoran, Chairman and Chief Executive Officer, NetWitness Corporation; former Director, National Cyber Security Division, U.S. Department of Homeland Security. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Pervasive and sustained computer-based (cyber) attacks against federal and private-sector infrastructures could have a potentially devastating impact on systems and operations and the critical infrastructures that they support. To address these threats, President Bush issued a 2003 national strategy and related policy directives aimed at improving cybersecurity nationwide. Congress and the Executive Branch, including the new administration, have subsequently taken actions to examine the adequacy of the strategy and identify areas for improvement. Nevertheless, GAO has identified this area as high risk and has reported on needed improvements in implementing the national cybersecurity strategy. For this testimony, GAO was asked to summarize (1) key reports and recommendations on the national cybersecurity strategy and (2) the views of experts on how to strengthen the strategy. In doing so, GAO relied on its previous reports related to the strategy and conducted panel discussions with key cybersecurity experts to solicit their views on areas for improvement. Over the last several years, GAO has consistently reported that the Department of Homeland Security (DHS) has yet to fully satisfy its responsibilities designated by the national cybersecurity strategy. To address these shortfalls, GAO has made about 30 recommendations in key cybersecurity areas. While DHS has since developed and implemented certain capabilities to satisfy aspects of its cybersecurity responsibilities, it still has not fully satisfied the recommendations, and thus further action needs to be taken to fully address these areas. In discussing the areas addressed by GAO's recommendations as well as other critical aspects of the strategy, GAO's panel of cybersecurity experts identified 12 key areas requiring improvement. GAO found these to be largely consistent with its reports and its extensive research and experience in the area. Until GAO's recommendations are fully addressed and the above improvements are considered, our nation's federal and private-sector infrastructure systems remain at risk of not being adequately protected. Consequently, in addition to fully implementing GAO's recommendations, it is essential that the improvements be considered by the new administration as it begins to make decisions on our nation's cybersecurity strategy.
DOD is increasingly relying on contractor services to accomplish its missions. In fiscal year 2006, DOD awarded more than $294 billion in contracts. Despite this huge investment in buying goods and services, our work and the work of the DOD Inspector General (IG) have found that DOD’s spending sometimes is inefficient and not managed effectively. Too often, requirements are not clearly defined, rigorous price analyses are not performed, and contractors’ performance is not sufficiently overseen. In fact, we have identified overall DOD contract management as a high-risk area for the past several years. When a requirement needs to be met quickly and there is insufficient time to use normal contracting vehicles, federal regulations permit the use of a UCA. UCAs are binding commitments used when the government needs the contractor to start work immediately and there is insufficient time to negotiate all of the terms and conditions for a contract. UCAs can be entered into via different contract vehicles, such as a letter contract (a stand-alone contract), a task or delivery order issued against a pre-established umbrella contract, or a modification to an already established contract. The FAR and the Defense Federal Acquisition Regulation Supplement (DFARS) govern how and when UCAs can be used. The regulations also establish requirements as to how quickly UCAs must be definitized. Although each regulation contains two criteria, they are not the same. The FAR states that a letter contract needs to be definitized within 180 days after the award date or before 40 percent of the work is complete, whichever occurs first. While the DFARS includes the 180-day time frame, it addresses all UCAs (including undefinitized task and delivery orders and contract modifications) and adds a requirement to definitize before more than 50 percent of funds are obligated. It does not include the 40 percent work-completion criterion. Under the FAR and the DFARS, respectively, a waiver of the 180-day requirement can be granted for extreme circumstances or when the agency is supporting a contingency or peacekeeping operation. The definitization time frame can also be extended an additional 180 days when a qualifying proposal is received from the contractor. The contractor does not receive profit or fee during the undefinitized period, but can recoup it once the contract is definitized. Under UCAs, the government risks paying unnecessary costs. For example, in a September 2006 report on contracts in support of Iraq reconstruction, we found that the timeliness of definitization can affect the government’s costs. We reported that DOD contracting officials were more likely to adhere to the Defense Contract Audit Agency’s advice regarding the disposition of questioned and unsupported costs when negotiations were timely and occurred before contractors had incurred substantial costs under UCAs. On the other hand, contracting officials were less likely to remove questioned costs from a contract proposal when the contractor had already incurred these costs during the undefinitized period. Similarly, the DOD IG found that untimely definitization of contracts transfers additional cost and performance risk from the contractors to the government. Contractors should bear an equitable share of contract cost risk and receive compensation for bearing additional risk based on the degree of risk assumed. Costs that have already been incurred on an unpriced action, such as a letter contract, have virtually no cost risk associated with them. 
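To make the difference between the two sets of definitization criteria more concrete, the following minimal sketch checks a hypothetical UCA against both. It is only an illustration of the thresholds described above; the function name, dates, and percentages are assumptions made for this example, not part of DOD's contracting systems or the regulations' text.

```python
from datetime import date

# Thresholds as described in this report:
# FAR (letter contracts): definitize within 180 days of award or before
#   40 percent of the work is complete, whichever occurs first.
# DFARS (all UCAs): definitize within 180 days of award and before more
#   than 50 percent of funds are obligated.

def definitization_flags(award_date: date, as_of: date,
                         pct_work_complete: float,
                         pct_funds_obligated: float) -> dict:
    """Report which definitization triggers a hypothetical UCA has reached."""
    days_elapsed = (as_of - award_date).days
    return {
        "days_elapsed": days_elapsed,
        "far_trigger": days_elapsed >= 180 or pct_work_complete >= 40.0,
        "dfars_trigger": days_elapsed >= 180 or pct_funds_obligated > 50.0,
    }

# A letter contract awarded October 1 that is 45 percent complete by February 1
# hits the FAR work-completion trigger even though fewer than 180 days have passed.
print(definitization_flags(date(2004, 10, 1), date(2005, 2, 1), 45.0, 50.0))
```

In this example the FAR trigger is reached through the 40 percent work-completion criterion alone, while the DFARS criteria, which omit that criterion, would not yet require definitization.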
As such, when negotiating profit with the contractor, the government may attribute a zero risk factor to the undefinitized period. DOD faces a potentially large gap in its data and thus does not know the extent to which it is using UCAs. The federal procurement data system is only able to identify UCAs that are awarded via letter contracts. Undefinitized task or delivery orders, as well as contract modifications, are not identified. DOD also lacks high-level oversight of its UCA activity since UCA monitoring has been delegated to the local commands, with upward reporting no longer required. At the local commands we visited, monitoring of UCAs varied in both detail of information and frequency of review. DOD understates its UCA usage due to a potentially significant gap in data. Because the government’s federal procurement data system—managed by the Office of Federal Procurement Policy—only identifies letter contracts as undefinitized at award and does not identify undefinitized task or delivery orders or contract modifications, DOD does not know the extent of its UCA activity. As figure 1 shows, DOD’s reported obligations for letter contracts increased from $5.98 billion in fiscal year 2001 to $6.53 billion in fiscal year 2005. These obligations for letter contracts as a percentage of DOD’s total obligations remained 4 percent or less during this time period. At the same time, DOD’s task and delivery order obligations have increased significantly, as shown in figure 2. A DOD senior acquisition official stated that if DOD wanted to know the amount obligated under undefinitized task and delivery orders, it would have to ask for the information from all of the local commands. According to information maintained at the local commands we visited, most have issued some undefinitized task or delivery orders. As table 1 illustrates, one command obligated over $500 million in UCA orders during the 2-year period we reviewed. UCA oversight takes place at local commands, without any centralized reporting at the DOD headquarters or military services levels. Although UCA oversight was centralized in the past, a senior DOD acquisition official told us that DOD does not believe that UCA usage is a significant concern, given that letter contracts have represented no more than 4 percent of DOD’s total obligations over the past several years. As such, DOD relies on its local commands to oversee the use of UCAs and inform upper management if any issues arise. The Air Force is the only military service that has a reporting requirement for UCA activity. A June 2002 policy requires commands to report to the headquarters acquisition office on UCAs that have remained undefinitized for more than 1 year. However, the acquisition office has not received any reports on delinquent UCAs, despite the fact that we found 9 UCAs that had remained undefinitized for over 1 year at the two Air Force commands we visited. An official from one of the commands told us it reported one of its delinquent UCAs, but, according to an Air Force headquarters acquisition official, the report was never received. Because delinquent UCAs are identified through a manual self-reporting process, it is possible that other delinquent UCAs have gone unreported. The local commands we visited performed oversight of their UCA usage to varying degrees. All of the military locations reported UCA activity to local acquisition management on a regular basis, ranging from monthly to quarterly. 
The local commands also varied in whether they tracked all UCAs or only those that remained undefinitized after the 180-day time frame. We found that the National Geospatial-Intelligence Agency was not tracking or monitoring its DOD UCAs, even though its acquisition regulation requires a monthly report on UCA activity. After we raised this issue, National Geospatial-Intelligence Agency officials stated that they would begin monitoring their UCA activity. DOD is using UCAs to rapidly fill needs in a variety of circumstances, many of which are directly or indirectly related to the war in Iraq. The message from management at the locations we visited is to limit the use of UCAs. However, this message seems to have resonated to different degrees with the frontline acquisition staff who requested and awarded the UCAs we reviewed. In some instances, inadequate acquisition planning drove the need for the UCA. The UCAs we reviewed were for a range of goods and services—from providing immediate support to the warfighter in theater to procuring long lead items to keep weapon system program schedules on time. The military services’ commands awarded about half of the UCAs we reviewed for support of war efforts and one-third to meet schedules on production contracts. In one instance, a UCA was awarded to immediately provide body armor on combat vehicles already in use in operations in the Middle East. In another, a UCA was awarded to obtain a jamming system that was needed to avoid grounding F-15 aircraft. The National Geospatial-Intelligence Agency awarded over half of the UCAs we reviewed for immediate intelligence needs and about half to avoid disruptions of services it was receiving under expiring contracts. Table 2 provides a summary of the reasons, presented in the contract files and discussed with the contracting officers, for the 77 UCAs we reviewed. Poor acquisition planning is not an appropriate reason to award a UCA. However, for 10 of the UCAs we reviewed, the government may have been able to prevent the use of a UCA with better planning. These included, for example, 4 UCAs issued to procure long lead items that could have been contracted for earlier. The requirement for long lead items is typically established early in a program and is normally provided advance funding in the annual budget process, which should provide sufficient time to acquire the items through normal acquisition procedures. Other inadequate planning situations included 4 UCAs—1 at the Naval Sea Systems Command and 3 at the National Geospatial-Intelligence Agency—that we believe could have been prevented by the program office. In each instance, the requirement was known a significant amount of time before the UCA was issued. These situations ranged from late issuance of the request for proposals (which had been planned earlier) to awards that were issued quickly to avoid disruptions in services (which could have been anticipated). For example, one National Geospatial-Intelligence Agency UCA for the continuation of ongoing services was awarded the day after the services from the prior contract ended. The agency should have been able to reasonably estimate the requirement and prices in advance based on the terms and work of the ongoing contract, which were already known. The remaining three inadequate planning situations were due to circumstances that were beyond the control of the program office. 
For example, a Navy UCA was issued because the senior acquisition executive, external to the program office, delayed the approval of the program’s acquisition plan. Furthermore, one UCA added requirements that expanded the work beyond what was originally planned. Specifically, the National Geospatial-Intelligence Agency awarded a UCA to quickly obtain aerial data from the regions affected by hurricanes, but subsequently augmented it to establish a permanent facility that had been planned for some time. Several contracting officers across DOD expressed concern that program office staff need training on the appropriate use of UCAs because they do not always seem to be aware of the risks that these contract actions pose to the government. The “tone at the top” provided by the local commands we visited is to not use UCAs unless absolutely necessary. However, this message is emphasized differently from one location to another and has only recently come about in some locations. For example, an April 2000 Naval Air Systems Command memorandum says that the use of UCAs is to be kept to the “absolute minimum” and that they should not be used if the requirements are not fully defined. On the other hand, the National Geospatial-Intelligence Agency allowed its contracting officers to use UCAs without the need for higher-level approval until a May 2006 memorandum elevated the approval authority to the senior procurement executive. Representatives from the four companies we spoke with use UCAs with DOD to different degrees—ranging from considering UCAs to be a “normal part of business” to rarely using UCAs in recent years. One company said that its UCAs are mostly used for short duration work needed to maintain critical schedules in the development or production processes of other contracts. Another company recently entered into several indefinite delivery/indefinite quantity contracts with the government so that UCAs could be avoided in that area of work. DOD did not meet the definitization time frame requirement of 180 days after award for over half the UCAs we reviewed. This situation places the government at risk of paying increased costs, thus potentially wasting taxpayers’ money. On average, the UCAs we reviewed were definitized more than 2 months past the required period, with 16 remaining undefinitized for a year or more. While DOD regulations allow up to half of the funding to be provided before definitization, we found that DOD tends to obligate this maximum amount of funding immediately at award—a practice that could provide a disincentive for the timely definitization of the UCA. In addition, DOD does not monitor its compliance with the FAR requirement to definitize letter contracts when 40 percent of the work is complete. Sixty percent of the UCAs we reviewed—46 of 77—were not definitized within the 180-day time frame required by FAR and DFARS. Table 3 shows the number of days elapsed before the UCAs were definitized. We found 16 UCAs that took more than a year to definitize, with the longest taking over 600 days. Each location we visited had at least 1 UCA in effect for over a year. In addition, we found no discernable relationship between the dollar value or contract type of the UCAs and the length of time it took to definitize. Approximately the same proportion of small and large dollar value UCAs were definitized in less than 180 days as were definitized in more than 180 days. Likewise, the final contract type did not appear to influence the timeliness of definitization. 
Approximately the same proportion of UCAs with final contract types of fixed-price and cost-type were definitized in less than 180 days as were definitized in more than 180 days. We also identified a number of UCAs that met provisions that allow an extension or waiver of the 180-day definitization requirement. FAR and DFARS allow an additional 180-day extension of the definitization time frame from the date a qualifying proposal (one that is complete and auditable) is received from the contractor. Our review showed that definitization occurred during this extended time frame in only 7 of the 36 cases. Two UCAs were permitted waivers of the 180-day requirement since they were in support of contingency operations, pursuant to a September 2003 Air Force memorandum waiving the time frame for actions related to Operation Iraqi Freedom. Figure 3 illustrates the average time frames and the range of days that elapsed before definitization. Contracting officials provided more than a dozen reasons for not definitizing UCAs within the original 180-day time frame. Based on our review of the contract files and discussions with contracting and program officials, the most common reasons for the delays were (1) delays in obtaining a qualifying proposal from the contractor, (2) acquisition workforce shortages that led to overly heavy workloads, and (3) complexity of requirements at award of the UCA or changing requirements after award. In many cases, multiple reasons contributed to the definitization delay. Some of the longest delayed definitizations occurred because of a combination of the three reasons stated above. Table 4 provides a summary of the number of instances each reason was provided as an explanation of the delay. Contracting officers stated that delays in obtaining a qualifying proposal were sometimes caused by the program office’s changing requirements. Many contracting officials stated the government’s requirement was inadequately described when the UCA was awarded or was subsequently changed after award once the requirement was better understood. Contractor representatives and contracting officers noted that it is difficult for a contractor to submit an adequate proposal in a timely manner when the government is unsure about the specifications of the product or service it requires. Officials at two companies noted that they attempt to submit qualifying proposals on time, but must redo them—sometimes multiple times—to reflect the government’s revised requirements. In addition to timeliness of proposals and changing requirements, shortfalls in the government’s acquisition workforce were another key reason for definitization delays. This issue manifested itself in different ways, including inadequate numbers of contracting officials, the heavy workload of the Defense Contract Audit Agency, which is frequently called upon to perform audits of the proposal’s pricing structure, and, in four cases, contracting officials who did not perform their duties to definitize the UCAs. Some contracting officers commented that UCAs require twice the work that a normal contract award does, because in essence they go through the contracting process twice—once for the undefinitized period and once for the definitized period. Problems with acquisition staff or workloads at the commands resulted, in some instances, in a UCA remaining undefinitized until someone turned attention to it. 
Some contracting officers told us that their focus is on getting the UCA awarded; after that, they often must turn to other pressing awards so that following up on definitizations becomes less of a priority. We also found one situation where a UCA simply fell through the cracks because it dropped off the local reporting system due to a computer error. In one case, the contracting officer awarded a UCA but took another job before it was definitized, and the contracting officer who inherited it was not aware for some time that it had not been definitized; thus, no one acted on it for over a year. Most of the UCAs we reviewed were awarded with the maximum obligations allowed. Specifically, 60 of the 77 UCAs—78 percent—were obligated with approximately 50 percent or more of the not-to-exceed price at award. As a result, contractors may have less incentive to hasten the submission of qualifying proposals and agencies have little incentive to demand their prompt submission, since funds are available to proceed with the work, leading to a protracted negotiation process. One contracting officer obligated a smaller percentage initially, but as time went by and various issues arose that slowed definitization, he raised the obligated amount little by little until it reached 50 percent. In hindsight, he said it would have been easier to just obligate the 50 percent at the beginning. Company officials said that the minimum amount needed to begin work under a UCA depends on the circumstances of the work. Officials from all four companies told us they usually receive 50 percent of the not-to-exceed price at award. While we found some evidence of monitoring the percentage of funds obligated, in accordance with the DFARS requirement to definitize UCAs before 50 percent of the funding is obligated, none of the commands we visited acted proactively to ensure the obligations do not exceed this maximum amount. As a result, DOD increases the risk that it is paying additional unnecessary costs during the undefinitized period. The monitoring that does occur, at three local commands we visited, is not effective in ensuring compliance with the requirement because no alerts are generated if a UCA goes beyond the maximum obligations before definitization. An official at one command that does not monitor this requirement stated that the command does not do so because it is the responsibility of the contracting officer to ensure it is met. DOD is not monitoring compliance with the FAR requirement to definitize letter contracts when 40 percent of the work is complete. None of the local commands we visited had procedures in place to track this provision. Officials at two commands were not familiar with the requirement. As such, we were unable to assess whether DOD is in compliance with this requirement. Many contracting officers stated that the amount of work completed before definitization could not readily be determined because under a UCA there is no established baseline against which to measure the percentage of work completed. Policy officials at several locations we visited also stated that the FAR requirement would be difficult to implement. Based on our findings, a DFARS case was initiated in April 2007 to clarify defense acquisition regulations. Contracting officers are not usually documenting, when applicable, whether profit or fee is adjusted for work performed by the contractor at a lower level of risk during the undefinitized period. 
All UCAs are essentially cost-reimbursement contracts until definitized, as contractors are reimbursed for all incurred costs that are reasonable, allocable, and allowable during the undefinitized period. This contract type places the greatest cost risk on the government. When the UCA is definitized, the ultimate contract type is determined. Our sample included a variety of final contract types, including firm-fixed-price, cost-plus-award-fee, cost- plus-incentive-fee, and cost-plus-fixed-fee. Each contract type includes either profit (fixed-price contracts) or fee (cost-type contracts) for the contractor. During the undefinitized period, however, profit or fee is not paid. The profit rate or fee is derived at definitization and then applied across the entire period of performance, including the undefinitized period. “When the final price of a UCA is negotiated after a substantial portion of the required performance has been completed, the head of the contracting activity shall ensure the profit allowed reflects (a) Any reduced cost risk to the contractor for costs incurred during contract performance before negotiation of the final price; and (b) The contractor’s reduced cost risk for costs incurred during performance of the remainder of the contract.” When costs have been incurred prior to definitization, contracting officers are to generally regard the contract type risk to be in the low end of the designated range. If a substantial portion of the costs have been incurred prior to definitization, the contracting officer may assign a value as low as 0 percent, regardless of contract type. Table 5 shows the range of profit and fee rates negotiated at definitization for the UCAs we reviewed. We did not assess the reasonableness of the profit or fee percentages determined by the contracting officers. We found that these adjustments to profit or fee were usually not documented in the price negotiation memorandum, a contract document that sets forth the results of the negotiations and contains the contracting officer’s determination that the negotiated price is fair and reasonable. Specifically, the memorandums for only 14 of the 77 UCAs we reviewed discussed how the negotiated profit or fee was affected by the UCA. As a result, for the majority of the UCAs we reviewed, no determination can be made whether the costs incurred during the undefinitized period were considered when the allowable profit or fee was determined. Similarly, in a 2004 report, the DOD IG found that contract records did not contain evidence that allowable profit factors, such as the reduced cost risk, were considered in the final profit rate awarded to the contractor. It was also not evident that already incurred costs were taken into account when determining profit. The majority of the contracting officers responsible for the UCAs we reviewed acknowledged that they are required to document how the shift in risk associated with the undefinitized period was accounted for in determining the profit or fee calculated for negotiations. UCAs are a necessary tool for DOD to use to meet urgent contracting needs, but DOD must ensure that their use is limited to appropriate circumstances. Even when UCAs are used appropriately, increased management attention is needed regarding definitization time frames so the government’s position during subsequent negotiations is not overly weakened. Existing regulations and guidance governing UCAs are not always understood or followed. 
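As a rough illustration of the risk adjustment described above, the sketch below blends a reduced rate for costs already incurred during the undefinitized period with a normal rate for the remaining work. The function, rates, and dollar amounts are hypothetical assumptions for this example only; the calculation is a simplification and does not reproduce the DFARS weighted guidelines that contracting officers actually apply.

```python
# Simplified illustration: costs incurred before definitization carry little or no
# cost risk, so a blended profit objective can weight them at a much lower rate
# than the costs still to be incurred. All figures below are hypothetical.

def blended_profit_objective(incurred_cost: float, remaining_cost: float,
                             normal_rate: float, incurred_rate: float = 0.0) -> float:
    """Weight the profit rate by how much of the estimated cost is already incurred."""
    total_cost = incurred_cost + remaining_cost
    blended = incurred_cost * incurred_rate + remaining_cost * normal_rate
    return blended / total_cost

# If $6 million of a $10 million effort was completed before definitization and the
# incurred portion is assigned a 0 percent rate, a 10 percent rate on the remaining
# work yields a 4 percent blended objective rather than 10 percent overall.
print(f"{blended_profit_objective(6_000_000, 4_000_000, 0.10):.1%}")
```

The point of the sketch is simply that applying the full rate only to the portion of the effort still carrying cost risk produces a materially lower overall profit objective, which is the effect the DFARS language quoted above is intended to achieve.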
Actions are needed to strengthen management controls and oversight of UCAs; otherwise the department will remain at risk of paying unnecessary costs and potentially excessive profit rates. To improve oversight of UCAs, we recommend that the Administrator of the Office of Management and Budget’s Office of Federal Procurement Policy assess whether the Federal Procurement Data System-Next Generation data fields need to be modified to require coding that will identify undefinitized task and delivery orders and undefinitized contract modifications, and the Secretary of Defense issue guidance to program and contracting officials on how to comply with the FAR requirement to definitize when 40 percent of the work is complete. To help ensure that UCAs are definitized in accordance with regulations, we recommend that the Secretary of Defense take the following two actions: put in place a reporting channel to headquarters that includes information on UCAs in place for 180 days or more and that outlines plans and time frames for definitization, and supplement acquisition personnel on an as-needed basis to quickly definitize UCAs once they are awarded. To mitigate the risks of paying increased costs under UCAs, we recommend that the Secretary of Defense set forth supplemental guidance to accomplish the following two actions: direct contracting officers, where feasible, to obligate less than the maximum allowed at UCA award to incentivize contractors to expedite the definitization process, and specify that the effect of contractor’s reduced risk during the undefinitized period on profit or fee be documented in the price negotiation memorandum or its equivalent. We provided a draft of this report to DOD and the Office of Federal Procurement Policy for comment. In written comments, DOD concurred with our findings and recommendations and noted actions underway that are directly responsive. The department’s comments are reproduced in appendix III. The Office of Federal Procurement Policy provided oral comments, stating that it had no concerns regarding our recommendation to add a data field in FPDS-NG that would identify undefinitized task and delivery orders and contract modifications at award. Such data are needed to provide DOD (and other agencies) more complete information on UCAs, which can then be used to improve oversight of their use. Although DOD concurred with our recommendation to issue guidance addressing the FAR definitization requirement, in its comments, DOD stated that our reference to the FAR requirements for UCA definitization schedules did not consider the difference in requirements for DOD that are specified in the U.S. Code. However, our report does address those differences. DOD also stated that the Defense Acquisition Regulation Council has initiated a DFARS case, based upon our discussions during this review, to clarify that DOD contracting officers should use the DOD definitization schedule criteria. DOD agreed that the need for enhanced oversight of UCAs is appropriate and said it will consider requiring the military departments to enhance oversight of UCAs and to provide periodic reports, with remediation plans, for those past the definitization time frames. The Department also published two notices in the Federal Register on May 22, 2007, seeking public comments on current DOD contract financing and funding policies, including the weighted guidelines that are used to determine appropriate profit or fee based on an assessment of contractor risk. 
We are sending copies of this report to the Secretary of Defense, the Director of the Office of Management and Budget, the Administrator of the Office of Federal Procurement Policy, and other interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-6986 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Michele Mackin, Assistant Director; R. Eli DeVan; Lily Chin; Matthew T. Drerup; Victoria Klepacz; John Krump; Jean K. Lee; and Lynn Milan. To determine the level of insight the Department of Defense (DOD) has into its use of undefinitized contract actions (UCA), we interviewed DOD senior-level acquisition officials and service-level acquisition officials to identify any additional policies specifically addressing the use of undefinitized contract actions at the locations selected for our review. We analyzed information from DOD’s procurement system (DD350) and local commands for undefinitized contract actions from fiscal year 2001 through fiscal year 2005. We also reviewed the relevant sections of the Federal Acquisition Regulation and the Defense Federal Acquisition Regulation Supplement, as well as service-level guidance pertaining to the use of undefinitized contract actions. To identify how and when DOD is using UCAs and whether DOD is definitizing these actions in a timely manner, we reviewed a random sample of undefinitized contract actions from six military commands and one nonmilitary defense agency. While undefinitized contract actions may include letter contracts, task or delivery orders, and contract modifications, only letter contracts are recorded by DD350 in a manner that allowed us to identify them as undefinitized at the time of award. Therefore, the specific locations for our review were selected based on the total dollar value and volume of letter contracts issued during fiscal years 2004 and 2005 by various DOD buying organizations as recorded in the DD350 system. On the basis of these data, we selected the two commands with the largest dollar volume of letter contracts within each of the three military services (Air Force, Army, and Navy). As such, the six military locations represented over 75 percent of the total dollars awarded for letter contracts during the period. We also selected the nonmilitary defense agency with the largest number of letter contracts. The specific locations selected for our review were: Aeronautical Systems Center, Dayton, Ohio; Warner Robins Air Logistics Center, Warner Robins, Georgia; TACOM Life Cycle Management Command, Warren, Michigan; Aviation and Missile Command, Huntsville, Alabama; Naval Sea Systems Command, Washington, D.C.; Naval Air Systems Command, Patuxent River, Maryland; and the National Geospatial-Intelligence Agency, Washington, D.C. To include other types of undefinitized contract actions in our review, we requested a listing of task and delivery orders and contract modifications issued as undefinitized contract actions during fiscal years 2004 and 2005 from each of the seven locations that we planned to visit. This request was necessary because these types of undefinitized actions are not identified in the federal procurement data system. 
We then established a population of undefinitized contract actions at each location and selected a random sample of contract actions to review. Not every location could provide us with a listing of other undefinitized contract actions prior to our site visit, and in some cases there was an insufficient number of such actions to meet our sampling needs. In such cases we reviewed additional letter contracts selected at random to achieve similar sample sizes at each location. A total of 77 undefinitized contract actions were sampled for this review. The six Army, Navy, and Air Force contracting organizations that we selected for our review initiated 70 of the undefinitized contract actions that we reviewed. The National Geospatial-Intelligence Agency initiated 7 of the undefinitized contract actions that we reviewed. Observations made from our review cannot be generalized to the entire population of undefinitized contract actions issued by DOD. We omitted undefinitized contract actions for foreign military sales, purchases that did not exceed the simplified acquisition threshold, special access programs, and congressionally mandated long lead procurement contracts since these actions are not subject to compliance with the definitization requirements we were reviewing. We also excluded all undefinitized task orders issued under basic ordering agreements. The majority of pricing and contract terms are established under basic ordering agreements, leaving few terms and conditions to be definitized after award when orders are issued under this type of contract. At each location, we reviewed contract document files and interviewed officials from the local program office as well as the cognizant contracting officers. In a few cases the contracting officer could not speak to the reasons for definitization delays because that officer was not involved with the award or definitization of the UCA selected for our review. We relied on data provided to us by DOD and the buying commands we visited, which we verified where practical. For example, in determining the length of time to definitize the sampled actions, we verified the data reported in DD350 by tracing the reported award and definitization dates to the contract file documentation. We also verified contract obligation and not-to-exceed amounts reported in DD350 by reviewing contract file documentation available in hard copy at the sites we visited and electronically from DOD’s Electronic Data Access Web-based system. To obtain insight into the issues surrounding the use of UCAs from a contractor’s point of view, we interviewed representatives from four companies that entered into undefinitized contract actions with one or more of the buying organizations that were selected for this review. We conducted our work from August 2006 through April 2007 in accordance with generally accepted government auditing standards.
To meet urgent needs, the Department of Defense (DOD) can issue undefinitized contract actions (UCA), which authorize contractors to begin work before reaching a final agreement on contract terms. The contractor has little incentive to control costs during this period, creating a potential for wasted taxpayer dollars. Pursuant to the House of Representatives report on the National Defense Authorization Act for Fiscal Year 2007, GAO assessed (1) the level of insight DOD has into its use of UCAs, (2) how and when DOD is using UCAs, (3) whether DOD is definitizing UCAs in a timely fashion, and (4) whether contracting officers are documenting the basis for negotiated profit or fee. GAO reviewed 77 randomly selected contracts at seven locations and interviewed DOD officials. DOD faces a potentially large gap in its data and thus does not know the extent to which it is using UCAs. DOD's reported obligations for letter contracts, one type of UCA, increased from $5.98 billion in fiscal year 2001 to $6.53 billion in fiscal year 2005. However, the government's procurement system does not identify undefinitized task or delivery orders or undefinitized contract modifications. In light of DOD's reported increase in its use of task and delivery orders in recent years, the data gap could be large. Because DOD decentralizes oversight of its UCAs, the department would have to manually obtain data from each of its local commands in order to obtain a complete picture. The local commands GAO visited performed oversight of their UCAs to varying degrees. DOD is generally using UCAs to rapidly fill urgent needs, as permitted, in a variety of circumstances. Local management's message to the contracting community is to not use a UCA unless absolutely necessary, but this message is emphasized differently from one location to another. GAO found 10 instances among the 77 UCAs reviewed where UCAs could have been avoided with better acquisition planning. For example, one UCA for the continuation of ongoing services was awarded the day after the previous contract expired. DOD did not meet the definitization time frame requirement of 180 days after award on 60 percent of the 77 UCAs reviewed. The most common reasons for the delays were untimely receipt of an adequate proposal from the contractor, acquisition workforce shortfalls, and changing requirements. GAO also found that DOD tends to obligate the maximum amount of funding permitted--up to 50 percent of the not-to-exceed amount--immediately at award of UCAs. As a result, contractors may have little incentive to quickly submit proposals. In addition, since DOD does not track whether it meets the Federal Acquisition Regulation requirement to definitize letter contracts (one type of UCA) before 40 percent of the work is complete, GAO was unable to assess compliance with this requirement. Contracting officers are not documenting, as required, the basis for the profit or fee prenegotiation objective and the profit or fee negotiated. As such, it is unclear whether the costs incurred prior to definitization are considered when computing the profit rates or fee amounts. For the 40 fixed-price contracts GAO reviewed, profit ranged from 3 to 17 percent, and for the 37 cost-type contracts in the sample, fees ranged from 4 to 15 percent. Generally the rate was applied equally over the entire contract term, including the undefinitized period.
DOJ awards federal financial assistance to state and local governments, for-profit and nonprofit organizations, tribal jurisdictions, and educational institutions to help prevent crime, assist victims of crime, and promote innovative law enforcement efforts. Federal financial assistance can take the form of discretionary grants, formula grants, cooperative agreements, and payment programs, which all are generally referred to as grants. Grant programs are generally created by statute and funded through annual appropriations. As such, Congress has a central role in determining the scope and nature of federal financial assistance programs. In addition, the Office of Management and Budget (OMB) establishes general guidance which governs administration of all such federal financial assistance, and DOJ has flexibility in how to administer assistance that is discretionary in nature. In fiscal year 2010, DOJ provided direct grant funding to nearly 11,000 grantees. We reviewed all 253 of the fiscal year 2010 grant solicitations that OJP, OVW, and the COPS Office published on their respective websites and found overlap across 10 justice areas—as table 2 illustrates. These solicitations announced funding available to grantees for criminal and juvenile justice activities, including direct assistance for crime victims and the hiring of police officers. These solicitations also announced funding available for grantees to collect criminal justice data, conduct research, or provide related training and technical assistance. We developed these 10 categories of justice areas after reviewing comparable justice areas identified within OJP’s CrimeSolutions.gov website, which OJP officials stated covers a variety of justice topics, including some topic areas that OVW and the COPS Office fund; OJP’s Fiscal Year 2010 Program Plan; and other materials from OVW and the COPS Office, such as justice program themes from their respective websites. Within the justice areas, a variety of activities—including research, direct service provision, or technical assistance—can be conducted. We examined the purpose areas of the 253 grant solicitations and then categorized them by justice area. In conducting this analysis, we recognize that overlapping grant programs across common programmatic areas result in part from authorizing statutes, and that overlap itself may not be problematic. However, the existence of overlapping grant programs is an indication that agencies should increase their visibility of where their funds are going and coordinate to ensure that any resulting duplication in grant award funding is purposeful rather than unnecessary. Overlap and the associated risk of unnecessary duplication occur throughout the government, as we have reported previously, and are not isolated to DOJ. However, when coupled with consistent programmatic coordination, the risk of unnecessary duplication can be diminished. As table 2 illustrates, we found overlap across the various DOJ grant programs. For example, 56 of DOJ’s 253 grant solicitations—or more than 20 percent—made grant funds available for activities related to victim assistance or to support the research and prevention of violence against women. Eighteen of these 56 programs were administered by offices other than OVW and OJP’s Office for Victims of Crime. 
In addition, more than 50 percent of all grant solicitations provided funding that could be used in support of the same three justice areas—victim assistance, technology and forensics, and juvenile justice—indicating concentrated and overlapping efforts. The justice area with the least overlap was juvenile justice, with 30 of 33 grant programs administered by the Office of Juvenile Justice and Delinquency Prevention. There are some instances in which overlap occurs because of the statutes that established the programs. Further, we recognize that overlap among DOJ’s grant programs may be desirable because such overlap can enable DOJ’s granting agencies to leverage multiple funding streams to serve a single justice purpose. However, coordination across the administering granting agencies is critical for such leveraging to occur. In the section below, we discuss the ways in which overlapping grant programs increase the risk of unnecessarily duplicative grant awards for the same or similar purposes. In subsequent sections, we discuss the steps DOJ has taken to enhance coordination and some ways in which DOJ’s efforts can be improved. We found that in some instances, DOJ’s granting agencies awarded multiple grants to the same grantees for the same or similar purposes. Applicants can apply directly to DOJ for funding through a variety of grant programs that DOJ announces annually. Recipients of such grant awards are referred to as prime grantees. Since many of DOJ’s grant programs allow prime grantees to award subgrants, applicants also can apply directly to a prime grantee for award funding. As a result, prime grantees receiving money from DOJ through one funding stream also can be subgrantees receiving money from a prime grantee through another funding stream. If an applicant, either as a prime grantee or as a subgrantee, receives multiple grant awards from overlapping programs, the risk of unnecessary duplication increases since the applicant may receive funding from more than one source for the same or similar purpose without DOJ being aware that this situation exists. Such duplication may be unnecessary if, for example, the total funding received exceeds the applicant’s need, or if neither granting agency was aware of the original funding decision. After reviewing a sample of 26 grant applications from recipients who received funds from grant programs we identified as having similar purpose areas, we found instances where applicants used the same or similar language to apply for multiple streams of funding. For example, one grant recipient applied for funding to reduce child endangerment through cyber investigations from both the COPS Office’s Child Sexual Predator Program (CSPP) and OJP’s Internet Crimes Against Children (ICAC) program. In both of these applications, the applicant stated that it planned to use the grants to increase the number of investigations in the state, provide training for cyber crime investigations, serve as a forensic resource for the state, and establish an Internet safety program. Further, included in this applicant’s proposed budgets for both funding streams were plans to purchase equipment, such as forensic computers and the same specialized software to investigate Internet crimes against children. Another grant recipient from a different jurisdiction also applied for funding from OJP and the COPS Office programs to support the same types of investigations. 
In a third instance, an applicant received fiscal year 2010 grant funding for planned sexual assault victim services from both OJP’s Office for Victims of Crime and OVW. The applicant used similar language in both applications, noting that it intended to use the funding to support child victim services through its child advocacy center. After we shared these examples with DOJ, DOJ officials followed up with the grant recipients involved and reported to us that the grantees were not using awarded funds for duplicative purposes—which DOJ defines as grantees using funds to pay for the exact same item. However, such follow-up for the purpose of assessing duplication is not a routine practice for DOJ. Further, DOJ’s narrow definition of duplication curtails it from assessing the use of funds for the same or a similar overall purpose on a grant project. In fiscal year 2010, DOJ’s three granting agencies awarded nearly 11,000 prime grant awards, but officials told us that they do not consider the flow of grant funds to subgrantees when making grant award decisions. Because DOJ does not have visibility of the flow of funds to these recipients, agency officials were not positioned to tell us what activities, or for what purposes, the subgrantees were spending their federal funds. Thus, to obtain more information, we surveyed JAG SAAs, who are responsible for managing the subgrants they make, to obtain information related to the purpose areas of their funding. In our survey, we asked the JAG SAAs if they or their subgrantees used grant funding in fiscal year 2010 for key justice areas such as funding sex offender registry notification systems, correctional officer salaries, and sexual assault services; purchasing bullet-resistant vests; and hiring police officers. DOJ supports all of these areas through JAG, as well as through targeted grant programs specifically addressing each of these topics. On the basis of survey responses, we found several instances where SAAs reported that JAG funds were used to support activities that could have been funded through other DOJ grants. For instance, 11 of 50 responding SAAs, or 22 percent, reported that they or their subgrantees used JAG funding to support correctional officer salaries. Further, 23 of the 50 SAAs, or nearly 50 percent, reported that they or their subgrantees used JAG funding to hire police officers, even though a separate DOJ program dedicates funding exclusively to hiring law enforcement personnel. In one such case, the county involved had also been awarded a COPS Office hiring grant for fiscal year 2009. COPS Office hiring grants last up to 3 years, and the county used the grant in fiscal years 2010 and 2011 as well. These grants support the hiring or the rehiring of career law enforcement officers to increase community policing and crime prevention strategies. In other instances, counties received funding for drug court–assisted substance abuse treatment and mental health counseling through both a JAG program subaward and a grant directly from OJP’s Adult Drug Court Grant Program. Officials from one of these counties informed us that they received so much DOJ funding from the two grant programs that it exceeded the county’s need and they planned to return a portion to DOJ. The IG has previously identified the risk of OJP and the COPS Office funding duplicative grant awards. For example, in 2003, the IG identified duplication between the COPS Office Hiring Program and the Local Law Enforcement Block Grant Program, the predecessor to the JAG grant program. 
The IG reported that while the COPS Hiring Grant program is required to advance community policing, the Local Law Enforcement Block Grant Program grants are sometimes used for the same or similar purposes. According to the IG, in such cases the grants are duplicative. See U.S. Department of Justice Office of the Inspector General, Audit Division, Streamlining of Administrative Activities and Federal Financial Assistance Functions in the Office of Justice Programs and the Office of Community Oriented Policing Services, Audit Report 03-27 (Washington, D.C.: August 2003). In 2010, the IG reported that it had identified potential overlap between the COPS Office Hiring Recovery Program and OJP’s JAG Formula Program and Edward Byrne Competitive Grant Program. As a result, the IG recommended that the COPS Office work with OJP to avoid duplication of future funding by coordinating closely on grantee selection decisions, as discussed later in this report. According to DOJ officials, the statutory creation of grant programs with similar purposes requires grant design coordination within and among DOJ’s granting agencies to limit the risk of unnecessary duplication from overlapping programs. The primary purpose of consolidation or coordination may not be to limit this risk, but officials reported that reducing the risk may be a secondary benefit. Officials from all three granting agencies stated that they meet with one another to coordinate the goals and objectives of their grant programs, especially joint grant programs that they believe are complementary. For example, the Bureau of Justice Assistance and the Office for Victims of Crime issued a joint solicitation for anti-human trafficking programs where each office issued separate awards based on coordinated proposals from collaborating police departments and community-based victim service organizations. Further, according to officials, DOJ recently launched the Coordinated Tribal Assistance Solicitation to provide a single application for most of DOJ’s tribal grant programs. Consolidating two programs with similar purposes into one, with unified management, is the most comprehensive way to reduce overlap, according to DOJ officials. However, they stated that the statutory creation of grant programs with similar purposes can create administrative challenges because in many cases, DOJ must seek statutory authorization to discontinue or consolidate enacted programs that DOJ believes may be overlapping. Officials told us they have sought congressional action in a few instances for these purposes and will continue to do so, but because the process is complex, they have also taken administrative steps on an ad hoc basis to mitigate overlap of purpose areas, as illustrated in table 3. Officials stated that they meet with one another when they determine it is needed to coordinate the goals and objectives of their grant programs, especially those programs that they believe are complementary. In addition, an OJP official told us that in 2010 the office prioritized coordination as 1 of its 10 management goals and cited benefits that resulted from this focus, including reduced administrative costs, fewer grant solicitations, and a reduced number of competitive grant peer reviews. However, these officials told us that these coordination and consolidation efforts, as well as those illustrated in table 3, do not occur routinely. 
Even with efforts to coordinate its programs, DOJ officials told us they have not conducted a formal assessment or study of their grant programs to determine if and to what extent they overlap and where opportunities exist to more consistently pursue consolidation or better coordinate grant programs. Further, we found that coordination among granting agencies occurred on an ad hoc basis and that without an assessment of its overlapping programs, DOJ was not well positioned to identify and describe areas of potential for unnecessary duplication across its grant programs. A senior OJP official told us that the department had not formally assessed or studied its grant programs to determine the extent of overlap because of the significant investment of time and staff resources that it would require. DOJ officials emphasized that since these programs were statutorily established as distinct programs, they are not certain that any attempt at harmonization—beyond what they have already done—would be viable. For example, they said that in some cases, statutes creating what may appear to be similar programs also create very different eligibility criteria for grant applicants. Thus, the officials stated, some programs may not be easily merged through administrative efforts such as announcing similar grant programs in a single solicitation. We agree that similar grant programs may have unique features that could render grant consolidation or coordination impractical, but DOJ has not taken the steps to catalogue all of its programs across each of the three granting agencies, and then determine which have the potential to be consolidated or coordinated and what barriers might exist to achieve such changes. The IG continues to include DOJ’s grants management among its list of top challenges affecting the department, and in previous reports, has identified fragmentation and duplication among DOJ’s granting agencies as an area of concern. Further, developing agency procedures to avoid grant duplication is one of the promising practices that the federal Domestic Working Group Grant Accountability Project suggested in its Guide to Opportunities for Improving Grant Accountability. Given the specific knowledge of these grant programs’ statutory authorities, their histories of funding certain types of activities, and the nuances related to their administration, officials within OJP, OVW, and the COPS Office are uniquely positioned to assess their programs for overlap. Doing so could yield positive dividends for the granting agencies and the department over the longer term. Specifically, such assessments could include understanding the areas in which individual granting agencies may be awarding funds for the same or similar purposes, determining whether these grant programs appropriately channel the department’s resources across the justice areas it funds, and determining whether any existing overlap is desirable. By conducting an assessment of its grant programs of this kind, DOJ would be better positioned to take action, such as through consolidation and coordination of its programs, in a more systematic way to limit overlap and mitigate the risk of unnecessary duplication. OJP, OVW, and the COPS Office do not routinely share lists of current and potential awardees to consider both the current and planned dispersion and purposes of all DOJ grant funding before finalizing new award decisions. 
Without routine coordination in the pre-award phase, each of DOJ’s granting agencies has visibility over only the funds it awards, rather than over the overall flow of department dollars. Thus, in the instances where DOJ made multiple grant awards to applicants for the same or similar purposes, officials made these awards without always being aware of the potential for unnecessary duplication or whether funding from multiple streams was warranted. DOJ officials stated that their annual process to formulate budgets for grant administration, OJP’s annual planning process to develop solicitations, and the department’s overall grant oversight functions address the risks of unnecessary duplication in grant awards. However, these activities do not specifically relate to the pre-award phase when any potential for unnecessary duplication can best be avoided. DOJ officials also stated that they meet bi-monthly to discuss grantees on DOJ’s High Risk List to avoid funding grantees who in the past have demonstrated deficiencies in properly managing their federal awards. However, the purpose of these discussions is not to prevent or reduce duplication. Developing agency policies and procedures to avoid unnecessary grant duplication in the awarding of funds is one of the promising practices that the federal Domestic Working Group Grant Accountability Project suggested in its Guide to Opportunities for Improving Grant Accountability. As a result of our work, OJP officials informed us that as of March 2012, they had begun to pilot solicitation language in two of OJP’s grant programs requiring grant applicants to disclose any pending applications submitted in the last 12 months for other federally funded assistance to support the same costs associated with the same projects outlined in applicants’ budgets. Additionally, officials stated they are currently developing a grant special condition for all fiscal year 2012 grant awards that would require grantees to report to OJP if they receive any funding for a specific project cost that is duplicative of the funding OJP provides. OJP officials told us that if grantees report duplicative funding for a specific project cost, OJP staff will work with the grantees to ensure return of the OJP funds. We believe this requirement will improve OJP’s ability to limit the risks of duplicative funding for single items; however, OJP continues to take a narrow view of the term “duplication.” OJP defines duplicative funding to include only instances where grantees are using federal money for the same exact item. In doing so, OJP excludes from its purview all federal funding that grant applicants have been awarded to carry out the same or similar activities within a proposed project. Thus, in making funding decisions without asking for information about and considering other sources of an applicant’s federal funding to carry out the same or similar activities, OJP may be awarding funds for proposed projects that are already partially or fully funded. It may also be doing so at the expense of other applicants who, in the absence of other funding sources, may demonstrate to OJP greater financial need for their proposals. Further, DOJ’s new approach—while an important step—relies solely on grantee reporting. By independently assessing its own lists of actual and prospective grantees prior to awarding funds, DOJ could have additional assurance that it is taking actions to mitigate the risk of unnecessary duplication. 
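To illustrate what such an independent pre-award check might look like, the sketch below shows one possible approach under assumptions of our own: the agencies, program names, applicants, justice-area labels, and dollar amounts are hypothetical, and this is not DOJ's actual process or data. Each granting agency contributes its list of prospective awards, and any applicant slated to receive funds from more than one agency within the same justice area is flagged for a closer look before awards are finalized.

```python
# Illustrative sketch (not DOJ's actual process or data): cross-referencing each
# granting agency's list of prospective awards to flag applicants slated to
# receive funds from more than one agency for the same justice area. All names
# and amounts below are hypothetical.
from collections import defaultdict

# Hypothetical prospective-award lists, one per granting agency:
# (applicant, program, justice_area, proposed_amount)
prospective_awards = {
    "OJP": [("County A", "ICAC", "technology and forensics", 400_000)],
    "COPS": [("County A", "CSPP", "technology and forensics", 350_000)],
    "OVW": [("City B", "STOP", "victim assistance", 200_000)],
}

# Group planned awards by applicant and justice area across agencies.
by_applicant_area = defaultdict(list)
for agency, awards in prospective_awards.items():
    for applicant, program, area, amount in awards:
        by_applicant_area[(applicant, area)].append((agency, program, amount))

# Flag applicants slated to receive funds from more than one agency for the
# same justice area, so reviewers can judge whether the overlap is warranted.
for (applicant, area), entries in by_applicant_area.items():
    if len({agency for agency, _, _ in entries}) > 1:
        total = sum(amount for _, _, amount in entries)
        print(f"Review: {applicant} / {area}: {entries} (combined ${total:,})")
```

A flag of this kind would not by itself establish unnecessary duplication; it would simply tell reviewers where an applicant's combined DOJ funding warrants closer scrutiny before the awards are finalized.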
In addition, OVW officials stated that for fiscal year 2013, they intend to require those applying to four of OVW’s grant programs to identify in their grant applications all federal funding that they recently applied for or have received. By enhancing visibility over various sources of grant funding, OVW would be better positioned to avoid unnecessary duplication in awarding grants for these four programs. It could also provide OVW with opportunities to best leverage OVW funding in a manner that complements other funding streams the applicant already has available or may soon receive. For example, if an applicant reports to OVW that it already receives money through a non-OVW grant to provide counseling services to victims, OVW can ensure that OVW funds are available for other project-related activities such as providing training to counselors who serve victims. However, beyond what OJP is piloting and what OVW has proposed for four of its programs, DOJ generally does not require grant applicants to identify other funding that they have received or any pending funding yet to be awarded, including funding received through a subgrant. Further, while the COPS Office’s grants management system automatically includes information on other COPS Office funding a COPS Office applicant may already be receiving, it does not identify other DOJ grant funding or any other federal funding sources. As a result, DOJ’s three granting agencies could take additional steps to increase their visibility over what applicants may already be receiving before awarding new funds. OJP, OVW, and the COPS Office have not established policies and procedures requiring consistent coordination and information sharing among the granting agencies. Having such policies and procedures would provide guidance to DOJ granting agencies to help ensure they take action to mitigate the risks of unnecessary duplication before finalizing award decisions. By routinely coordinating to ensure the sharing of grant applications and potential grant awards among DOJ granting agencies prior to finalizing grant award decisions and documenting its methods for doing so, DOJ could also improve its oversight and better leverage information already at its disposal. The IG recommended in August 2003 that OJP and the COPS Office establish procedures to coordinate to ensure that grantees do not receive funds for the same purpose from both agencies. In response, OJP and the COPS Office signed a memorandum of understanding to establish procedures for avoiding duplication by coordinating grants and grant programs that were identified by the IG and grant programs where the potential for duplication exists. Further, the agencies committed to reviewing any new guidance affecting grants or grant programs where the potential for duplication exists. Specifically, for grants and grant programs identified as having potential for duplication, OJP and the COPS Office agreed to minimize potential duplicative grant awards in a manner consistent with statutory provisions, and include a grant award special condition requiring that grantees not use OJP and COPS Office grant funds to pay for the same expenses. During the course of our audit work, we asked COPS Office and OJP officials for examples of the type of coordination they have been engaging in since the IG’s recommendations. Officials provided evidence from fiscal year 2009, when the COPS Office coordinated funding decisions with OJP prior to awarding grants for two similar grant programs funded under the Recovery Act. 
However, in some cases, granting agencies continue to provide funding for the same or similar purposes without each being aware of the others’ actions. Further, we examined grant special conditions for the COPS Office CSPP and OJP’s ICAC grant program and neither agency included special conditions in the grant awards requiring grantees to identify and report duplication. However, the 2010 COPS Child Sexual Predator Program Grant Owner’s Manual did include a requirement that grantees inform the COPS Office if they receive other funding for the same cost or service already funded by the COPS Office while their grant is underway. The Grant Manager’s Manual used by OJP lists grant award special conditions, and except for the duplication condition that specifically applies to CTAS, there were no other grant duplication special conditions listed. Officials from OJP and the COPS Office told us that state and local communities have expansive criminal justice needs and therefore they encourage applicants to seek out as much DOJ grant funding as possible, including from grant programs that may have similar objectives or allow for similar activities. In some instances, DOJ may deem it appropriate for distinct grant programs to serve the same goal, or for one community or grantee to benefit from multiple streams of grant funding. For example, if DOJ granting officials are coordinating their activities across overlapping grant programs and are aware that a grantee is receiving funds from more than one DOJ program, making funding decisions of this kind may be warranted. However, DOJ’s granting agencies are not routinely engaging in such coordination. Unless DOJ improves granting agencies’ coordination, and considers information available on current, past, and prospective funding, it cannot know where all of its funding goes, how it is being or will be used, and whether it is awarding grant dollars in the most efficient way possible. Further, if the granting agencies are not aware of which recipients are receiving funds from multiple grants, they may be inadvertently awarding multiple grants that exceed the demonstrated need of a recipient or community at the expense of another applicant or community with similar demonstrated needs. In addition, they may be missing opportunities to award grants to recipients who may use funding in a complementary way, whereby funding may be leveraged by a grantee or a community to accomplish a single goal. With the exception of OVW’s plan to have four grant program recipients identify all federal funding they receive and OJP’s solicitation pilot and plan to have applicants identify duplicative cost items, DOJ does not have policies and procedures that require grant applicants to identify all sources of current or pending DOJ funding in their grant applications in a manner that provides DOJ a complete picture of DOJ grant project funding. If DOJ had (1) a coordinated approach to share applicants’ funding intentions, and (2) policies and procedures to share lists of applicants that each granting agency plans to fund, DOJ could improve its understanding in the pre-award phase as to whether its funding would complement or unnecessarily duplicate other federal funding. 
DOJ officials told us that the timeline for reviewing applications, making recommendations, and processing awards each year is compressed and that it would be difficult to build in the extra time and level of coordination required to complete an intradepartmental review for potentially unnecessary duplication of funding prior to making awards. The officials added that it would take even more time if granting agencies were to attempt a pre-award duplication review at the subgrantee level. Thus, officials told us that they rely upon post-award activities through grant monitoring, Single Grant Audits, and IG audits to determine if duplicative expenses have occurred after grants are under way. However, relying upon monitoring and external audits to identify duplication after it has occurred should not substitute for the mitigation of potential unnecessary duplication in the pre-award phase. We acknowledge that the time necessary to complete annual grant awards makes such a review process more difficult; however, actions to make coordination more consistent and efficient as well as the leveraging of grant award information, including subgrants, could help overcome this challenge. Moreover, using tools such as existing grant data available on USASpending.gov, which we address later in this report, could aid DOJ in validating other grant funding that grant applicants report and allow for an expedient way to search for subgrant funding. In addition, DOJ could limit its pre-award coordination to those grant programs that DOJ identifies as overlapping with other DOJ grant programs. For certain grant programs, OJP and OVW have taken important first steps to require grant applicants to report other sources of funding, but expanding this requirement to all grant programs across all granting agencies, such that every applicant would report both past and prospective sources of DOJ grant funding, could provide broader coverage and help DOJ better mitigate the risk of unnecessary duplication. While OJP and OVW use a single grants management system called GMS, the COPS Office uses a separate grants management system—CMS—which limits the sharing of grant award information across the granting agencies. Specifically, OJP, OVW, and the COPS Office use GMS and CMS to track and manage awards throughout the grant life cycle. For example, agency grant staff in OJP, OVW, and the COPS Office use their grant management systems to review and approve applications and to plan and document grant monitoring activities. Grantees use the grant management systems to submit financial status reports that include summary information on grant expenditures and program income as well as progress or performance reports. DOJ has spent about $36 million from 2008 through 2010 to maintain and upgrade these two separate grants management systems, including about $8 million for CMS and $28 million for GMS. DOJ’s continued use of two systems to manage grant programs impedes coordination because GMS and CMS are not linked with each other, and the agencies’ access is limited to the grants management systems they utilize. OJP and OVW can access information through GMS about grants awarded by each other, but they cannot access CMS to see the grantees that have received COPS Office funds. As a result, these granting agencies cannot use these grants management systems to inform themselves of all of the funding DOJ has awarded or is preparing to award to a recipient and consider this information before making additional awards. 
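One interim way to gain that cross-system view, while GMS and CMS remain separate, would be to merge periodic award extracts from each system into a single recipient-level listing. The sketch below is illustrative only; the file names and column layout (recipient, program, amount) are assumptions made for the example, not the actual GMS or CMS schemas.

```python
# Illustrative sketch only: merging hypothetical CSV exports from two separate
# grants systems into one recipient-level view. File names and the column layout
# (recipient, program, amount) are assumptions, not the actual GMS or CMS schemas.
import csv
from collections import defaultdict

def write_sample_export(path, rows):
    """Create a small sample export so the sketch is self-contained."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["recipient", "program", "amount"])
        writer.writeheader()
        writer.writerows(rows)

def awards_by_recipient(files):
    """Combine award rows from several system exports, keyed by recipient name."""
    combined = defaultdict(list)
    for path, system in files:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                key = row["recipient"].strip().lower()
                combined[key].append({"system": system, **row})
    return combined

# Hypothetical exports from the two systems.
write_sample_export("gms_export.csv",
                    [{"recipient": "County A", "program": "ICAC", "amount": "400000"}])
write_sample_export("cms_export.csv",
                    [{"recipient": "County A", "program": "CSPP", "amount": "350000"},
                     {"recipient": "City B", "program": "CHP", "amount": "500000"}])

combined = awards_by_recipient([("gms_export.csv", "GMS"), ("cms_export.csv", "CMS")])

# Before a new award is finalized, staff could look up everything the department
# has already awarded (or plans to award) to the applicant across both systems.
for award in combined["county a"]:
    print(award["system"], award["program"], award["amount"])
```

In practice, matching on recipient names alone would be fragile; a shared identifier, if both systems capture one, would make the combined view far more reliable.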
According to an OJP official, over the long term, it would be helpful if GMS could connect to CMS. Pursuant to the statute establishing OAAM—the office overseeing programmatic grant monitoring and assessment across OJP and COPS Office programs—the Director of OAAM was required to establish and maintain a modern, automated system for managing all information relating to grants made under programs within its purview. See 42 U.S.C. § 3712h(g) (providing that the OAAM Director shall establish and maintain such a system in consultation with the chief information officer of the office). The IG has also reported that OJP lacked the access to CMS needed to carry out grant oversight and monitoring, such as through remote access to CMS. The IG concluded that oversight agencies should have direct, instant, and complete access to grant information, which is not provided using the current system, which relies on hard copies of documents. In response, COPS Office officials reported that they would provide OJP with real-time hard copy reports necessary to carry out oversight work and that CMS could be accessed only by employees using remote access to the COPS Office or through a COPS Office laptop. DOJ contractors completed a gap analysis of CMS and GMS in 2006 to outline the differences—or gaps—between the two systems and propose solutions for reconciling them. At the time, the contractor found key gaps between the two systems related to business processes—in particular, programmatic grant monitoring, financial monitoring, and progress-reporting capabilities. Thus, the contractor recommended either building a new single grants system or maintaining the status quo because in the analysis, business process differences between OJP and the COPS Office were reportedly obstacles that made using either one of the two grants systems for both agencies untenable. Since 2007, OJP has upgraded GMS, which has closed some of the system gaps that the contractor initially identified, but the two systems remain distinct and unlinked. According to COPS Office officials, CMS continues to better meet their needs than GMS could because CMS captures and stores data in such a way that it can be more easily queried than data in GMS, and because CMS uniquely aligns with COPS Office grant processes. This is helpful to the officials when the COPS Office is evaluating grant applications for its largest program, the COPS Hiring Program. Rather than evaluating qualitative grant project narratives through external peer review, the COPS Office considers quantitative data related to applicant fiscal distress, reported crime statistics, and community policing strategies when determining where to award COPS Hiring Program grants. Because COPS Hiring Program grant applicants use CMS to upload their data, COPS officials are positioned to use CMS for automated aggregation and analysis of applicant responses. In contrast, OJP officials said that because of recent upgrades, GMS could be used or modified in order to query individual searchable elements. They also stated that variation in the information required by individual grant programs would not present an insurmountable barrier to unifying systems. In addition, GMS has served multiple agencies in the past and can be modified when circumstances warrant. For example, in 2010 and 2011, COPS Office officials successfully used GMS for awarding purposes under CTAS. According to OJP officials, the initial coordination with the COPS Office for CTAS purposes required additional modifications to GMS, but these were not onerous or costly. 
The officials said that with relative ease, after the modifications, OJP, OVW, and COPS Office grant managers all accessed GMS to perform some of the phases of the CTAS grant process. For example, GMS supports the management of CTAS by storing all applications, managing peer review comments, and registering awards once decisions are final. Besides OVW’s use of GMS to award and manage its grants, the Department of Homeland Security’s (DHS) Federal Emergency Management Agency (FEMA) also uses GMS to award and manage grants, though it has future plans to use a DHS grants system. Further, OJP officials stated that GMS has a current storage capacity well in excess of what it currently uses. They also emphasized that OVW and FEMA have grant business processes that do not completely align with OJP’s, but with small investments, OJP has been able to adjust GMS to accommodate OVW and FEMA. In June 2012, DOJ officials informed us they had engaged a contractor to assess whether a single grants management system, among a range of other options, can best serve DOJ’s granting agencies. They plan for the contractor to report back within 6 months of beginning the analysis and said that they envision the assessment including an evaluation of costs, benefits, and technical requirements, such as those needed to harmonize business processes. Engaging a contractor for this purpose is an important first step, and doing so could help DOJ make better investment decisions about the most efficient way to manage its grants systems, especially when it considers the costs and benefits of having fragmented grants systems. In the interim, however, DOJ could pursue a more immediate solution to foster information sharing across GMS and CMS by providing system access to appropriate OJP, OVW, and COPS Office staff—for example, through common login names and passwords, just as department staff have done in limited instances such as the CTAS Program. DOJ’s granting agencies are not submitting grant award information to USASpending.gov in a timely way. In accordance with the Federal Funding Accountability and Transparency Act of 2006 (FFATA), USASpending.gov was created to increase the transparency and accountability for federal funding awarded through contracts, loans, grants, and other awards. OMB issued guidance on reporting the receipt and use of federal funds. OMB also launched USASpending.gov in December 2007 to allow the public to view federal spending and engaged the General Services Administration (GSA) to build and maintain USASpending.gov, among other FFATA-related websites. In August 2010, OMB established guidelines for agencies related to the requirement for prime grantees to report all subgrants over $25,000 in fiscal year 2011. The USASpending.gov website includes the subgrantees’ names, geographical locations of funded activities, specific subgrant amounts, and the funded purposes. A GSA official told us that the FFATA reporting infrastructure is the first time that comprehensive federal grant and subgrant information, including DOJ grant information, has been made widely available on a single website. Through the steps that granting agencies, as well as the grantees, take to supply this website’s content, the public and DOJ’s granting agencies can better track the flow of funds and identify communities receiving funds from multiple streams. OMB’s FFATA guidance requires agencies to submit grant award data by the 5th of each month. 
USASpending.gov contains software used to validate agency data submissions, and if data are rejected, agencies receive automated notification. OMB guidance then requires agencies to resubmit the corrected data to USASpending.gov within 5 working days of the rejection notification. OJP manages submissions to USASpending.gov for all of DOJ—that is, for OJP, OVW, and the COPS Office—and an OJP official reported that DOJ submits grant award information to USASpending.gov twice per month. However, more than a quarter of the grant award information that DOJ submitted to USASpending.gov in fiscal year 2011 was rejected, and resubmission took more than 80 days after the fiscal year ended. Figure 3 illustrates the flow of grant award information from both DOJ and grantees and some issues we identified related to DOJ’s fiscal year 2011 reporting. Agency reporting of prime grant award information in USASpending.gov is a critical step in the FFATA reporting process because prime grantees cannot upload their subgrant award information until it occurs. For fiscal year 2011, DOJ submitted 4,346 distinct prime grant award records to USASpending.gov; however, GSA rejected 1,152 of these because of incomplete or inaccurate data associated with some of these grant files, and DOJ did not resubmit the records within 5 working days as required under OMB guidance. Specifically, DOJ did not correct and resubmit the rejected records until December 22, 2011, which was after we raised this issue with DOJ and 83 days after the end of the fiscal year. Five out of 11 DOJ prime grantees we interviewed who had awarded subgrants indicated that, after searching, they could not view their prime DOJ grant awards on FSRS.gov or USASpending.gov and stated that DOJ had not uploaded the information to the websites. Further, another prime grantee among the 11 with whom we spoke indicated that it was unaware that subgrant reporting was a requirement. As a result, these prime grantees were unable to submit their subgrant award information and thus were unable to comply with OMB subgrant reporting requirements. DOJ has taken action to help ensure that prime grantees and DOJ grants staff are aware of FFATA reporting requirements. For example, OJP offered FFATA reporting training for all DOJ grants staff and grantees, and all three granting agencies required a special condition in grant awards that included FFATA reporting requirements. These steps may have informed grantees of their FFATA responsibilities, but DOJ’s untimely submission of grant award information to USASpending.gov led to prime grantees being unable to access their grant awards to submit their subgrant award information as required by OMB. OJP officials stated that since 2007 they have been coordinating with OMB, other federal agencies, and contractors on issues related to reporting guidelines and other technical requirements related to subgrant reporting, but that it was not until August 2010 that OMB established guidelines for the collection and reporting of subgrant information. Nevertheless, OJP officials stated that the current allotment of 5 days for agencies to review and resubmit data that GSA originally rejected is an unreasonable time frame given the time-intensive nature of checking and correcting errors. Officials also noted that OJP would like to see GSA allow for the posting of individual records that pass system validation rather than waiting for entire blocks to be corrected at once before GSA will post award information to USASpending.gov. 
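One way DOJ could reduce the number of records rejected after submission would be to screen award records against basic completeness rules before each monthly submission. The sketch below illustrates the idea under assumptions of our own: the required-field list and record layout are placeholders chosen for the example, not the official FFATA data elements, and the records shown are hypothetical.

```python
# Illustrative sketch only: checking prime award records for missing or malformed
# fields before a monthly USASpending.gov submission, so incomplete records can be
# corrected up front rather than after rejection. The field list here is an
# assumption for illustration, not the official FFATA schema.
REQUIRED_FIELDS = ["recipient_name", "award_id", "award_amount",
                   "cfda_number", "place_of_performance"]

def validate_record(record):
    """Return a list of problems found in one award record."""
    problems = [f"missing {f}" for f in REQUIRED_FIELDS if not record.get(f)]
    amount = record.get("award_amount")
    if amount:
        try:
            if float(amount) < 0:
                problems.append("negative award_amount")
        except (TypeError, ValueError):
            problems.append("non-numeric award_amount")
    return problems

def split_submission(records):
    """Separate records that pass the checks from those needing correction."""
    clean, needs_fix = [], []
    for record in records:
        (needs_fix if validate_record(record) else clean).append(record)
    return clean, needs_fix

# Hypothetical example: one complete record, one with a missing CFDA number.
records = [
    {"recipient_name": "County A", "award_id": "2011-DJ-0001",
     "award_amount": "250000", "cfda_number": "16.738",
     "place_of_performance": "Anytown, ST"},
    {"recipient_name": "City B", "award_id": "2011-DJ-0002",
     "award_amount": "100000", "cfda_number": "",
     "place_of_performance": "Othertown, ST"},
]
clean, needs_fix = split_submission(records)
print(len(clean), "ready to submit;", len(needs_fix), "need correction")
```

A screen of this kind would not guarantee acceptance, since GSA's validation applies its own rules, but catching obvious gaps before submission would shorten the correction cycle described above.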
In addition, OJP officials indicated that the information prime grantees ultimately posted about their subgrantees was limited and in most cases very brief. As a result, the officials said they were not considering such information before making new awards. We recognize that 5 days may not be adequate to correct errors in grant award information, but we also believe that timely submission of grant award information on USASpending.gov is key to transparency and the overall utility of the system for grant decision makers, the criminal justice community, Congress, and taxpayers. By DOJ ensuring that it submits its grant award information to USASpending.gov in as timely a manner as is possible, prime grantees’ abilities to report their subgrant activities would likely improve. As a result, DOJ could have greater visibility over which subgrantees were using its money and for what purposes before DOJ makes its grant award decisions. Further, even if DOJ does not believe that the information that prime grantees ultimately post about their subgrantees is ideally descriptive in every instance, the information could provide DOJ with important details—that it currently does not consider or otherwise have access to before finalizing award decisions—related to how subgrantees are using their funds. The statute establishing OAAM tasked the OAAM Director with selecting and carrying out program assessments of not less than 10 percent of the aggregate amount of grant funding awarded annually by OJP, the COPS Office, and any other grant programs carried out by DOJ that the Attorney General considers appropriate. OAAM officials told us that to meet the directive to conduct program assessments, and in recognition of its own resource constraints, OAAM relies on the programmatic grant monitoring that OJP’s and the COPS Office’s grant staff already conduct. To oversee and track these monitoring efforts, OAAM develops and implements standards and protocols, including a framework and methodology for systematically identifying high-risk grantees. OAAM also tracks OJP’s and the COPS Office’s monitoring progress and compares it against an established annual monitoring plan. For example, in its fiscal year 2010 annual report—the latest available—on both OJP and the COPS Office’s monitoring goals and activities, OAAM found that both offices exceeded their goals of monitoring 10 percent of total award funding. OJP monitored 1,447 grantees with awards totaling $3.05 billion, and the COPS Office monitored 185 grantees with awards totaling $234.74 million. In addition to overseeing monitoring activities, OAAM also conducts program assessments of OJP and COPS Office grant programs. OAAM considers monitoring—and its oversight of it—as responsive to its originating statute’s intent, but it also recognizes that assessments have utility and serve a separate but important function in helping the office improve grant management. Table 4 illustrates the distinction between OJP’s and the COPS Office’s monitoring and OAAM’s assessment functions. In general, both monitoring and assessment are important and complementary tools for grant oversight. Nevertheless, we found that OAAM’s program assessments yield richer information to enhance grant programs than either OJP’s or the COPS Office’s individual monitoring reports or the summary reports that OAAM’s Program Assessment Division (PAD) compiles because the program assessments are more analytical and broader in perspective. 
OAAM’s PAD Standard Operating Procedures define a program assessment as “a systematic review and evaluation of programs to gauge effectiveness, identify promising practices, document impediments, and when necessary, make recommendations for improvement.” OAAM reported to us that from 2008 through mid-February 2012, its staff had produced 28 products; however, when we reviewed the 28, we found that 7 met OAAM’s definition for a program assessment. For example, in 2012, OAAM completed an assessment report on the COPS Office Methamphetamine Initiative. In 2011, it completed one on BJA payment programs, and in 2010, it assessed the ICAC Training and Technical Assistance Program. The other 21 publications were user guides; summaries of monitoring reports, such as those described earlier; or Recovery Act risk indicator reports, which identify potentially high-risk grantees so that the program offices can work with those grantees to resolve issues and prevent potential problems. According to OJP, all of OAAM’s publications contribute to improving OJP’s grant programs and operations by strengthening internal controls, streamlining processes to be more efficient, or reporting on how well programs and policies are meeting their objectives. Of the 7 publications meeting OAAM’s definition for program assessments, 2 reviewed a single aspect of the grant cycle—program awarding—within a particular grant program rather than the grant program overall. Nevertheless, all 7 were based on a much more thorough review of the extent to which a grant program is meeting its intended purpose. Moreover, each recommended specific actions to address identified program deficiencies, and implementation of these recommendations has helped DOJ enhance its grant programs. For example, the 2010 ICAC assessment report contained 11 recommendations related to the collection and use of performance measurement data, financial management, fair and open competition for awards, and improving grant management and oversight. In our review of the six monitoring reports that the ICAC grant manager completed for the ICAC program’s sole grantee, there was no mention of the same deficiencies. In particular, none of the six monitoring reports identified the unallowable costs, conflict of interest, or inadequate oversight and documentation of grant activity that the OAAM assessment report identified. Instead, the monitoring reports showed that the grantee was progressing as expected on implementation of the program, and was on schedule with no problems noted. OAAM also assessed BJA’s payment programs, which otherwise are not subject to BJA grant monitoring. In the November 2011 report, OAAM assessed the processes that BJA used to verify the eligibility and accuracy of reimbursement requests submitted by grantees. The assessment concluded that BJA is administering its payment programs appropriately to verify the eligibility and accuracy of payments, but it also determined that additional internal controls were necessary and that procedures were not sufficient to identify duplicate payment requests from grantees. As a result of the assessment, OAAM made six recommendations to BJA, including implementing additional procedures to identify duplicate requests for payments of detention expenses. In particular, one of OAAM’s recommendations was that BJA implement a process to identify overlapping requests for reimbursement between two of the programs for expenses related to detention of criminal aliens. 
In response, BJA compared all of those programs’ applications for reimbursement for fiscal year 2011 to identify whether jurisdictions were requesting reimbursement for the detention of the same individuals over the same period of time. BJA’s review led to the removal of approximately $5.8 million in requests for reimbursement prior to generating the final reimbursement awards. Standards for Internal Control in the Federal Government calls for managers to compare actual performance with planned or expected results throughout the organization and analyze significant differences. These standards also identify that program managers need both operational and financial data to determine whether they are meeting their agencies’ strategic and annual performance plans and meeting their goals for accountability for effective and efficient use of resources. The programmatic grant monitoring reports that each of the granting agencies compile contribute to meeting these standards at the grantee level by tracking the progress and, when necessary, providing assistance to individual grant recipients. OAAM’s summaries of these reports then roll up the statistics and ensure the compliance monitoring occurs as required. However, OAAM’s program assessments are more comprehensive than both the individual grant monitoring reports and the summary reports OAAM prepares because their broader perspective allows for reporting on program successes, impediments, and potential areas for improvement. While monitoring 10 percent or more of the aggregate amount of grant funding awarded annually is important and beneficial to the grant management process, the 7 program assessment reports that OAAM has issued since 2008 have led to more than 50 recommendations for the improvement of OJP and COPS Office grant programs. According to OAAM officials, additional program assessments would be beneficial; however, they told us that OAAM does not have sufficient resources to conduct more. They said that conducting program assessments on 10 percent of the aggregate amount of grant funds awarded annually would not be possible given current resources, but they also noted that the department has not conducted a feasibility analysis that considers the costs and benefits of having OAAM conduct assessments on a larger number of grant programs. Further, OJP officials stated that since the establishment of OAAM, the administration has never requested, and the department has not received, the full amount authorized for appropriation under OAAM’s governing statute. DOJ officials did not explain the rationale for the administration’s budget proposals and officials did not report any plans to increase OAAM’s resources. As of December 2011, out of a total of 49 staff (26 federal staff authorized by DOJ and 23 contractors) spread across OAAM’s three divisions, OAAM had 8 staff (5 federal staff authorized by DOJ and 3 contractors)—or less than 20 percent—in its PAD dedicated to performing program assessments in addition to overseeing OJP and COPS Office programmatic monitoring. OAAM also has 18 staff (10 federal staff authorized by DOJ and 8 contractors)—or more than 30 percent—working in its Audit and Review Division to coordinate IG, GAO, and Single Grant Audit resolutions, and to conduct A-123 reviews—activities that are not specifically addressed in OAAM’s authorizing statute. Appendix V contains further discussion of the different activities of OAAM’s three divisions. 
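The comparison BJA performed, checking whether two detention-related programs were asked to reimburse the detention of the same individual over overlapping periods, can be illustrated with a small sketch. The program names, jurisdictions, detainee identifiers, and dates below are hypothetical, and this is not BJA's actual procedure; it simply shows the date-overlap test at the core of such a review.

```python
# Illustrative sketch only (not BJA's actual procedure): flagging reimbursement
# requests submitted under two detention-related programs that cover the same
# individual for overlapping periods. All data below are hypothetical.
from datetime import date

def periods_overlap(start_a, end_a, start_b, end_b):
    """True if two date ranges share at least one day."""
    return start_a <= end_b and start_b <= end_a

# (program, jurisdiction, detainee_id, start, end)
requests = [
    ("Program 1", "County A", "D-1001", date(2011, 1, 1), date(2011, 3, 31)),
    ("Program 2", "County A", "D-1001", date(2011, 3, 1), date(2011, 5, 31)),
    ("Program 2", "County A", "D-2002", date(2011, 6, 1), date(2011, 6, 30)),
]

# Compare every pair of requests from different programs for the same detainee
# in the same jurisdiction, and flag pairs whose billing periods overlap.
flags = []
for i, (prog_a, jur_a, det_a, s_a, e_a) in enumerate(requests):
    for prog_b, jur_b, det_b, s_b, e_b in requests[i + 1:]:
        if (prog_a != prog_b and jur_a == jur_b and det_a == det_b
                and periods_overlap(s_a, e_a, s_b, e_b)):
            flags.append((jur_a, det_a, prog_a, prog_b))

for jurisdiction, detainee, prog_a, prog_b in flags:
    print(f"Possible duplicate: {jurisdiction} billed {detainee} "
          f"under both {prog_a} and {prog_b} for overlapping dates")
```

Flagged pairs would be candidates for review rather than confirmed duplicates, since a reviewer would still need to confirm what each request actually covered.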
Because DOJ considers its resources to be limited, it is important that OAAM’s resources be used as efficiently as possible to maximize the investment in grant programs. Thus, given the different roles that grant monitoring and program assessment play in assessing the overall effectiveness of grant programs, considering whether it employs an appropriate mix of monitoring and program assessments could aid DOJ in awarding grant funds in the most efficient and effective way possible. Consistent with the 2000 reauthorization of the Violence Against Women Act, the Attorney General submits a biennial report to Congress on the effectiveness of VAWA-funded grant programs. OVW uses the VAWA Measuring Effectiveness Initiative, conducted under a noncompetitive cooperative agreement with a university to develop and implement reporting tools, as the primary way it meets statutory requirements to report on the effectiveness of VAWA-funded programs. Staff from the university also provide data collection training to grantees to ensure that they use the forms and database properly, and then use the information collected to summarize grantee performance in semiannual summary data reports. The results of this initiative are summary data reports that university staff compile from the semiannual or annual progress reports that grantees submit to OVW. OVW then uses these summaries to meet biennial reporting requirements for its discretionary grant programs under VAWA and, for example, the Services, Training, Officers, Prosecutors (STOP) Violence Against Women Formula Grant Program (STOP Program). The STOP Program promotes a coordinated, multidisciplinary approach to improving the criminal justice system’s response to violent crimes against women and increasing the availability of victim services. OVW’s biennial reports are composed of three main components: a literature review of research showing (where available) the effectiveness of grant-funded activities and (when such research is not available) information on promising or best practices in the field of victims services; a summary of performance measure data, such as the number of grant-funded staff, the number of people trained, and the number of victims/survivors seeking services that are served, partially served, and not served, as reported by the grantees to OVW through OJP’s GMS; and anecdotal evidence from grantees on the benefit of what they are able to do with grant funds. The statute establishing OAAM did not give it oversight authority for OVW programs. Provisions in the authorizing statute, however, provide the Attorney General with discretion to expand OAAM’s scope beyond OJP and COPS Office programs, discretion that the Attorney General has not exercised. As a result, while OVW uses its data collection and analysis to report on grant program effectiveness in accordance with the VAWA requirement (by providing information on activities carried out with grant funds and the number of persons served using those funds), it does not benefit from the monitoring oversight and grant program assessments that OAAM provides. Such assessments could provide OVW with more substantive information on its grant programs. Table 5 contains a comparison of the analytical approaches that OVW uses when it reports to Congress on the effectiveness of its grant programs against those that OAAM uses in its program assessments. 
On the basis of a review of seven OVW reports and seven OAAM grant program assessment reports, we found that OVW’s reports contain less analysis than the OAAM reports do. Specifically, these OVW reports summarized performance measurement data rather than analyzing it. Further, these OVW reports did not address grant program operations and management. OAAM, in contrast, used more varied approaches to analyze grant programs, which provided information on both grant program performance and operations, identified areas for improvement, and resulted in specific recommendations to OJP’s bureaus and program offices, and the COPS Office. Unlike the OAAM analysts who conduct assessments of OJP and COPS Office grant programs, the university staff responsible for the Measuring Effectiveness Initiative do not have access to grant program financial data or OVW grant monitoring reports. Additionally, university staff involved in the initiative do not conduct site visits to validate the data provided by grant recipients and the work they perform. Instead, OVW staff in each program area review the results of the biennial reports and the semiannual summary data reports compiled from grantee progress reports to identify priority areas where there is an unmet need. The OVW reports contain sections on “remaining areas of need” identified by grant recipients. However, the areas of need that OVW identified are based on comments grant recipients provided on gaps in service, rather than being based on an independent assessment that OVW conducted on the overall grant programs. Moreover, unlike the grant program assessments that OAAM conducts, beyond identifying areas of need, the biennial reports do not result in concrete recommendations for improving OVW’s grant programs. OVW officials told us that for the upcoming 2012 biennial report, they plan to focus more attention on the discussion of remaining areas of need. According to OVW’s 2010 Biennial Report to Congress on the Effectiveness of Grant Programs Under the Violence Against Women Act, demonstrating the effectiveness of services provided by agencies funded under OVW presents a challenge for those charged with meeting the reporting mandate of VAWA 2000. An OVW official told us that it is difficult to distinguish between outputs and outcomes when dealing with the grant programs in OVW and that it is difficult to measure outcomes with service grant programs. For example, OVW might consider whether an abuse victim not only gets a protective order but also receives additional services. Additionally, according to OVW officials, it would be difficult to track how many victims received different types of services in multiple areas, because OVW service provider grantees only track the first instance in which a victim receives services and do not follow up on related services. According to the federal Domestic Working Group Grant Accountability Project’s Guide to Opportunities for Improving Grant Accountability, agencies need a process for managing performance once grants are awarded, and the ability to assess grant results and use those results when awarding future grants. The Working Group identified engaging outside experts to assess program performance, inspecting projects after completion, and conducting evaluations to identify factors affecting results among its promising practices to improve program performance. These activities are not part of OVW’s current approach to program oversight. OVW conducts its grant program monitoring as well as IG audit follow-up. 
Additionally, OVW has a Grant Assessment Tool (GAT)—designed by the same company that produced the GAT for OJP. OVW also conducts Single Audit follow-ups; manages the high-risk grantee list; and oversees DOJ’s combined programmatic and financial monitoring plan, which is the combined monitoring list of all the sites that OJP, OVW, and the COPS Office plan to visit for the year. In a March 2011 audit, the IG found that OVW and the COPS Office perform certain monitoring and oversight services that are duplicative of the services available through OJP and recommended that DOJ standardize the oversight services OAAM currently provided to OVW and the COPS Office to eliminate such duplication and provide uniformity in oversight among DOJ granting agencies. OAAM provides certain administrative services that facilitate grant program management, but DOJ officials told us the reason OAAM does not have oversight over OVW is that the Attorney General has not extended OAAM’s purview. OJP officials told us that they have not been provided the scope of work that OVW oversight may encompass and, as such, OAAM has not conducted any analyses using a workforce model to determine the staffing levels, associated resources, and other possible impacts (i.e., costs and benefits) on OAAM operations of having OVW under its purview. OVW officials expressed concern that OAAM staff would not have any expertise in violence against women issues. However, OAAM currently has oversight over specialized bureaus and offices such as the Office for Victims of Crime and the Office of Sex Offender Sentencing, Monitoring, Apprehending, Registering, and Tracking. Additionally, OVW officials stated that OAAM does not perform data collection and analysis activities, which are the primary activities of the Measuring Effectiveness Initiative. However, as a part of its assessments, OAAM has collected performance measures and conducted analysis. For example, in its assessment of the ICAC Training and Technical Assistance Program, OAAM collected and analyzed national performance metrics related to training. Given the nature of OAAM assessments, along with the other oversight services it provides to OJP and the COPS Office, the information resulting from OAAM assessments of OVW grant programs could better inform OVW about its grant programs and funding to assist with future program design and award decisions while also providing Congress with a more complete picture on the effectiveness of programs funded under VAWA. Accordingly, DOJ could benefit from assessing the feasibility, costs, and benefits of OAAM providing grant program assessments for OVW. The statutory design of DOJ’s grant programs has contributed to overlap across a number of justice areas. We recognize that even when programs overlap, there may be meaningful differences in their eligibility criteria or objectives, or they may be providing similar types of services in different ways. We also recognize that a number of grant programs are formula-driven and therefore grantees’ eligibility is predetermined. However, because DOJ exercises independent judgment when making discretionary awards and therefore has full responsibility for how it conducts its pre-award reviews, it will be important for the department to maximize visibility over how grantees plan to spend the funds they receive from multiple funding streams. 
In some instances, DOJ may deem it appropriate for large numbers of distinct grant programs to serve one goal, or for the same communities to benefit from multiple streams of its grant funding. In these cases, duplication may be warranted. However, because we found routine coordination and consistent policies and procedures for sharing information across the granting agencies during DOJ’s pre-award phase to be limited, we do not believe DOJ knows with certainty if such duplication is always necessary. DOJ’s three granting agencies have taken some steps to coordinate their grant-related activities and have sought congressional approval in some instances for grant program consolidation. Further, they have initiated other, limited actions to ensure that grantees report additional streams of funding. However, DOJ limits its view of duplication to instances where grant applicants apply for and receive multiple streams of funding, including DOJ funding, to support single costs associated with a single grant project. Using this definition, DOJ believes that any unnecessary duplication can be identified through monitoring grantees post-award. We take a broader view of duplication and consider it potentially unnecessary when DOJ is unaware that grantees have applied for and are receiving funding for potentially the very same or similar purposes. Therefore, we believe it is incumbent upon DOJ to take steps in the pre-award phase to make purposeful judgments about funding necessity before finalizing the awards. Doing so would help the department better mitigate the risk of potential unnecessary duplication. Specifically, by conducting a broad examination of all DOJ grant programs to systematically identify justice areas for which funding overlaps, DOJ would have greater visibility over how its funding can be used and whether it is awarding grant dollars in the most efficient way possible. Further, developing and implementing policies and procedures to require granting agencies to routinely share and consider information each may have about past or prospective grantee funding could provide DOJ with more strategic visibility over its awarding decisions. In addition, requiring all grantees to report current or prospective federal funding sources when applying for DOJ grants could provide DOJ with more information to better target its limited financial resources before it finalizes new grant awards. Additionally, by taking interim steps to expand access to the two distinct grant management systems—CMS and GMS—DOJ could better ensure that grant managers and decision makers can leverage all existing tools while a longer-term study to consider the feasibility, costs, and benefits of potential options for DOJ grant management systems is underway. Such options could include unifying the systems, creating a DOJ-wide system, or using off-the-shelf software to bridge information gaps. Relatedly, DOJ can have greater confidence that any variation in how the granting agencies are currently managing their portfolios does not hinder any potential unification by ensuring that its planned study includes an assessment of the steps needed to harmonize DOJ grant processes. Further, with additional steps to ensure that DOJ is submitting grant award information to USASpending.gov in the most timely manner possible, the department could facilitate prime grantees’ uploading of information on subgrantees’ use of funds and therefore make the website a more useful resource to DOJ’s own grant decision makers. 
Finally, recognizing the value of OAAM’s role, assessing whether the office relies on an appropriate mix of programmatic grant monitoring and program assessment—as well as considering expansion of OAAM’s coverage to include OVW—could improve the overall operation of grant programs departmentwide. To ensure that DOJ can identify overlapping grant programs to either consolidate or coordinate similar programs, mitigate the risk of unnecessary grant award duplication in its programs, and enhance DOJ’s ability to gauge grant program effectiveness, we recommend that the Attorney General take the following eight actions:

1. Conduct an assessment to better understand the extent to which the department’s grant programs overlap with one another and determine if grant programs may be consolidated to mitigate the risk of unnecessary duplication. To the extent that DOJ identifies any statutory obstacles to consolidating its grant programs, it should work with Congress to address them, as needed.

2. Coordinate within and among granting agencies on a consistent basis to review potential or recent grant awards from grant programs that DOJ identifies as overlapping, including subgrant awards reported by prime grant awardees, to the extent possible, before awarding grants. DOJ should also take steps to establish written policies and procedures to govern this coordination and help ensure that it occurs.

3. Require its grant applicants to report all federal grant funding, including all DOJ funding, that they are currently receiving or have recently applied for in their grant applications.

4. Provide appropriate OJP and COPS Office staff with access to both GMS and CMS and appropriate OVW staff with access to CMS.

5. As part of DOJ’s evaluation of its grant management systems, DOJ should ensure that it assesses the feasibility, costs, and benefits of moving to a single grants management system, including the steps needed to harmonize DOJ grant processes, so that any variation in how the granting agencies manage their portfolios is not an encumbrance to potential system unification.

6. Ensure the most timely reporting possible of grant award information to USASpending.gov according to OMB guidelines, which would enable its grantees to comply with their reporting responsibilities according to the same guidelines.

7. Assess whether OAAM relies on an appropriate mix of programmatic grant monitoring and program assessment, and determine whether the office could support additional program assessments.

8. Assess the feasibility, costs, and benefits of OAAM providing assessments for OVW, in addition to OJP and the COPS Office. If DOJ determines that OAAM assessments of OVW grant programs would be more cost-effective and provide greater insight into the effectiveness of OVW grant programs than OVW’s current approach, then the Attorney General should extend OAAM’s oversight to include OVW.

We provided a draft of this report to DOJ for comment. DOJ provided written comments, which are reproduced in full in appendix VI, and concurred with all eight of the recommendations. DOJ also described actions it has underway or plans to take to address the recommendations. DOJ agreed with the first recommendation that it conduct an assessment to better understand the extent to which the department’s grant programs overlap with one another. DOJ stated it will explore options for carrying out such an assessment in an effort to reduce the risk associated with unnecessary or inappropriate program duplication. 
For example, DOJ stated it is considering tasking OAAM to conduct such an assessment. Since DOJ is developing options for how it will implement this recommendation, it is too soon to know what specific actions DOJ will take, when they will be completed, and whether they will fully address the intent of the recommendation. DOJ agreed with the second recommendation that it coordinate within and among granting agencies, to the extent possible, before awarding grants. DOJ stated that its grant-making agencies will continue to closely collaborate and share information prior to making grant awards. DOJ also stated it plans to use the results of the assessment referenced in the first recommendation to develop a targeted and strategic approach for reviewing grant applications during the pre-award process. Since DOJ is considering how it will implement this recommendation, it is too soon to know what specific actions DOJ will take, when they will be completed, and whether they will fully address the intent of the recommendation. DOJ agreed with the third recommendation that DOJ require its grant applicants to report all federal grant funding, including all DOJ funding, that they are currently receiving or have recently applied for in their grant applications. DOJ stated it plans to use a risk-based approach to implement this recommendation, using the results from its assessment in response to the first recommendation. This is a positive step toward ensuring that DOJ has a more complete picture of an applicant's access to other federal funding. However, since DOJ has not yet developed its approach, it is too soon to tell whether DOJ's actions will address the intent of the recommendation. DOJ agreed with the fourth recommendation that DOJ provide appropriate OJP and COPS Office staff with access to both GMS and CMS and appropriate OVW staff with access to CMS. DOJ noted that OJP will provide read-only GMS access to COPS Office staff and that the COPS Office will provide reports to OJP and OVW from CMS, given the technological barriers to providing external system access. These actions, when implemented, should address the intent of this recommendation. DOJ agreed with the fifth recommendation that as part of its evaluation of its grant management systems, DOJ should ensure it assesses the feasibility, costs, and benefits of moving to a single grants management system. DOJ stated that it had initiated such a study and plans to complete it within the next six months. When effectively completed, this study, along with any actions taken to implement its findings, should address the intent of this recommendation. DOJ agreed with the sixth recommendation that DOJ ensure the most timely reporting possible of grant award information to USASpending.gov. DOJ committed to doing its best to ensure timely reporting, but did not provide specific actions or plans to address the intent of the recommendation. DOJ agreed with the seventh recommendation that DOJ assess whether OAAM relies on an appropriate mix of programmatic grant monitoring and program assessment, and whether the office could support additional program assessments. DOJ stated that additional program assessments would be beneficial and contribute to the improvement of grant programs and operations. DOJ also stated it would explore ways to conduct more program assessments, but did not provide specific actions or plans to address the intent of the recommendation. 
DOJ agreed with the eighth recommendation that DOJ assess the feasibility, costs, and benefits of OAAM providing assessments for OVW, in addition to OJP and the COPS Office. DOJ stated that discussions have been initiated between OAAM and OVW related to this recommendation. This is a positive first step, but it is too soon to know whether the results of these discussions and any resulting potential future actions will address the intent of the recommendation. In addition, DOJ raised concerns about the methodology we used to identify overlap in DOJ's fiscal year 2010 grant program solicitations across 10 broad justice themes. DOJ stated that our analysis of potential overlap between DOJ funding solicitations substantially overstated the number of programs that might be duplicative. DOJ commented that the table we used to show the overlap was an indication that DOJ was involved in "wasteful duplication." Our analysis, as summarized in table 2 of this report, demonstrates overlap in the justice areas that DOJ's grant programs aim to support. Having several overlapping grant programs within individual justice areas requires greater visibility and pre-award coordination on the part of DOJ to diminish the risk of unnecessary duplication at the grant project level. As such, our analysis does not, on its own, indicate unnecessary duplication among DOJ grant programs, but instead identifies the potential risk of unnecessary duplication. Implementing the recommendations in this report that DOJ assess grant program overlap and coordinate grant award decisions will help DOJ identify areas of overlap and mitigate the risk of unnecessary duplication in grants. DOJ also considered the categories we developed for our analysis, such as "community crime prevention strategies," to be too broad and exclusive of specialized programs such as community policing. We developed our 10 broad justice areas based mainly on programmatic information contained on DOJ granting agency websites and other DOJ literature and believe they fairly demonstrate overlap among DOJ's various grant programs. We recognize that the more detailed analysis we recommended and DOJ agreed to undertake is necessary to determine the extent of any unnecessary duplication. DOJ also commented that our sample size of grant applications was too small and not generalizable. As discussed in this report, our sample size was not intended to be generalizable across the entire scope of DOJ grant program awards, but instead was meant to illustrate the potential for unnecessary duplication. DOJ further commented that its investigation of the examples of unnecessary duplication we provided proved that no duplication actually existed in the grant programs. DOJ conducted its review after we provided our examples and focused on how grantees were using the funds they had received. Our analysis of potential duplication focused on grant applications—how applicants proposed to spend federal grant dollars—and not on the verification of activities grantees carried out once DOJ funded them. DOJ's plans to improve pre-award coordination are positive steps, and we believe that such coordination will better position DOJ to make informed decisions about the financial needs of grantees and communities for their proposed projects. Finally, DOJ expressed concern that the report implies that DOJ is not tracking subgrantees' activities. Our analysis focused on pre-award coordination, not DOJ's efforts to track subgrantee activities. 
As such, we recommended that DOJ use the subgrant award information it does have to help inform DOJ's grant award decision making. We believe that subgrant award information could provide DOJ decision makers with a more complete financial picture of applicants and the projects they propose for DOJ funding. We are sending copies of this report to the Attorney General, selected congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. This report answers the following questions: (1) To what extent does overlap across Department of Justice (DOJ) grant programs exist and contribute to the risk of unnecessary duplication in grant awards? (2) To what extent has DOJ taken steps to reduce overlap in its grant programs and the potential for unnecessary duplication in grant awards? (3) To what extent does DOJ use programmatic grant monitoring and assessment to determine grant program effectiveness and use the results to enhance its grant programs? To examine the extent to which overlap across DOJ grant programs exists, we identified the total number of DOJ grant solicitations for fiscal year 2010. To do this, we reviewed the lists posted on the Office of Justice Programs (OJP), Office on Violence Against Women (OVW), and Community Oriented Policing Services (COPS) Office websites and confirmed the currency of the information with DOJ officials. To determine whether these solicitations were announcing grant funding available for similar or overlapping purposes, we first established 10 categories of criminal justice areas. We developed these 10 categories after reviewing comparable justice areas identified within OJP's CrimeSolutions.gov website, which OJP officials stated includes themes addressed through OVW and COPS Office programs; OJP's Fiscal Year 2010 Program Plan; and other materials from OVW and the COPS Office, such as justice program themes from their respective websites. Next, through analyst consensus, we sorted the grant solicitations according to the 10 justice categories. After identifying solicitations with similar scopes, we then reviewed 26 successful grant applications that were awarded under similar solicitations to identify and assess specific examples of how the recipients planned to use funds from multiple programs in the same or similar manner. The sample we reviewed is not generalizable to all DOJ grant programs because we did not review all of the more than 11,000 grant applications that DOJ funded in fiscal year 2011, but it illustrates the potential for unnecessary duplication. To determine if DOJ could take more action to avoid program overlap that can lead to unnecessary duplication, we applied the Domestic Working Group Grant Accountability Project's Guide to Opportunities for Improving Grant Accountability. To examine the extent to which DOJ has taken steps to reduce overlap in its grant programs and the potential for unnecessary duplication in grant awards, we reviewed agency policies, procedures, and guidance on grant program design and award, such as the COPS Office Program Development Team charter and template, and the OJP Grant Manager's Manual. 
Further, we interviewed DOJ officials from the three granting agencies to obtain additional information on grant program design and award processes, and the extent to which the three agencies coordinate and share information. We also visited or conducted phone interviews with officials from 11 states, including the five largest and five smallest state recipients of Edward Byrne Memorial Justice Assistance Grant (JAG) funding. These officials represent the state administering agencies (SAA) responsible for distributing JAG and other DOJ formula block grant funds to subrecipients in California, Florida, New York, North Dakota, Pennsylvania, South Dakota, Rhode Island, Tennessee, Texas, Vermont, and Wyoming. These officials provided their views regarding the type and timeliness of information on grant awards and subawards they provide to and receive from DOJ. We selected these 11 states based on the amount of JAG funding they receive and the existence of other recipients in their communities receiving DOJ discretionary grants for potentially similar purposes. The results of these contacts are not generalizable to all states, but provided insight into how DOJ grant funds are used locally and into the communication between states and DOJ. To determine if JAG recipients expended grant funds in fiscal year 2010 on sexual assault services, bullet- and stab-resistant vests, sex offender registry and notification systems, Internet crime against children task forces, hiring police officers, and correctional officer salaries, we conducted a web-based survey of all recipients of DOJ JAG grant funding who received an award from fiscal years 2005 through 2010. The survey response rate related to SAAs was 89 percent, with 50 out of 56 SAAs answering the questionnaire. We compared agency grant design and award practices against Standards for Internal Control in the Federal Government and promising practices identified in the Domestic Working Group Grant Accountability Project's Guide to Opportunities for Improving Grant Accountability. To analyze the extent to which DOJ uses programmatic grant monitoring and assessment to determine grant program effectiveness and uses the results to enhance its grant programs, we analyzed DOJ documentation, such as assessments DOJ conducted of its own programs and specific programmatic grant monitoring reports. We also interviewed DOJ officials from the granting agencies, including those tasked with assessment, as well as contractors responsible for assessing grant programs for OVW. This report focuses solely on the types of assessment conducted by DOJ granting agencies on their grant programs. Training and technical assistance provided by the department and its program offices and bureaus to grantees to support the evaluation of individual grant projects, such as the Bureau of Justice Assistance (BJA) Center for Program Evaluation and Performance Measurement, is not included in this report. Also excluded from this report are the outcome evaluations of the impact of grant programs such as those funded by the National Institute of Justice (NIJ).
OVW
Following the enactment of the Violence Against Women Act of 1994, the department established the Violence Against Women Office, which later became OVW under OJP. OVW now functions as a separate and distinct office within DOJ and is headed by a presidentially appointed, Senate-confirmed Director. 
COPS Office
The Attorney General established the COPS Office in October 1994 to administer community policing grants authorized under the Violent Crime Control and Law Enforcement Act of 1994. The Attorney General appoints a Director to head the COPS Office. OJP's bureaus and program offices include the Bureau of Justice Assistance, the Bureau of Justice Statistics, the National Institute of Justice, the Office of Juvenile Justice and Delinquency Prevention, the Office for Victims of Crime, and the Office of Sex Offender Sentencing, Monitoring, Apprehending, Registering and Tracking. OJP provides grants to various organizations, including state and local governments, universities, and private foundations, which are intended to develop the nation's capacity to prevent and control crime, administer justice, and assist crime victims. OVW administers financial and technical assistance to local, state, and tribal governments; courts; nonprofit organizations; community-based organizations; secondary schools; institutions of higher education; and state and tribal coalitions. OVW provides grants for developing programs, policies, and practices aimed at ending domestic violence, dating violence, sexual assault, and stalking. The COPS Office provides grants to and shares information with state, local, territorial, and tribal law enforcement agencies to advance community policing. From fiscal years 2005 to 2012, OJP received approximately $24 billion for OJP grant programs. In 2010, almost $3 billion was available to OJP to fund grants, and OJP issued 223 solicitations for grants. According to OJP, it awarded nearly 5,000 grants in 2010. From fiscal years 2005 to 2012, OVW received approximately $3.4 billion for OVW grant programs. In 2010, OVW received $418.5 million for OVW grant programs, and OVW issued 19 solicitations for grants. From fiscal years 2005 to 2012, the COPS Office received approximately $5.3 billion to fund COPS Office grant programs. In 2010, the COPS Office received $791.6 million to fund COPS Office grants, and the COPS Office issued nine solicitations for grants. Solicitations are announcements of new grant funding available and explain areas for which funding can be used. These numbers reflect solicitations provided by each individual office and do not reflect any joint solicitations, which are those offered in tandem with other program offices, either within or external to DOJ (e.g., other DOJ components or federal agencies). According to DOJ officials, there are three ways in which DOJ grant programs can be merged or better coordinated—through consolidation, braiding, and blending. Figures 4, 5, and 6 explain these mechanisms. The Office of Audit, Assessment, and Management (OAAM) coordinates audits, such as Single Audits that independent nongovernmental auditors conduct, as well as those that the Inspector General (IG) and GAO conduct; reviews internal control processes (A-123); and manages DOJ's High Risk Grantee Program, which applies criteria to identify grantees most at risk of fraud, waste, or abuse in the use of their grant funds. OAAM also oversees OJP and COPS Office programmatic monitoring, including development and implementation of standards and protocols, and assesses grant programs and initiatives of OJP and the COPS Office, as well as operational activities. In addition, OAAM serves as the primary resource for OJP grants management policies and procedures by producing authoritative guidance, develops and facilitates grants-related training for staff and grantees, manages the Grants Management System (GMS) and other tools, and facilitates OJP's business process improvement efforts. 
In addition to the contact named above, Joy Booth, Assistant Director, and Christian Montz, Analyst-in-Charge, managed this assignment. Julie E. Silvers, Marya Link, Caitlin Carlberg, and Michael Sweet made significant contributions to the work. Michele Fejfar assisted with design and methodology. Janet Temko and Tom Lombardi provided legal support. Lara Miklozek provided assistance in report preparation.
Since fiscal year 2005, approximately $33 billion has been appropriated to DOJ for the administration of more than 200 federal financial assistance solicitations, such as grants, that support criminal justice activities at the state and local levels. Pursuant to section 21 of Public Law 111-139, this report addresses the extent to which (1) overlap exists across DOJ grant programs and if it contributes to the risk of unnecessary duplication in grant awards, (2) DOJ has taken steps to reduce overlap and the potential for unnecessary duplication in its grants awards, and (3) DOJ uses monitoring and assessment to determine grant program effectiveness and uses the results to enhance its grant programs. GAO assessed DOJ’s fiscal year 2010 announcements of grant award funding; categorized them according to key justice areas to identify any overlap; and interviewed DOJ officials about their grant making practices, systems, and assessment methods. Further, GAO interviewed officials from 11 states receiving DOJ grants, selected for the levels and types of funding received. Though not generalizable, the interviews provided their perspectives on funding. The Department of Justice’s (DOJ) grant programs overlap across 10 justice areas contributing to the risk of unnecessarily duplicative grant awards for the same or similar purposes. For example, GAO reviewed all 253 grant award announcements that DOJ’s Office of Justice Programs (OJP), the Office on Violence Against Women (OVW), and the Community Oriented Policing Services (COPS) Office published on their websites for fiscal year 2010 and found overlap across the justice areas. For example, 56 of DOJ’s 253 grant solicitations—or more than 20 percent—were providing grant funds for victim assistance and related research. GAO also found instances where applicants used the same or similar language to apply for funding from these overlapping programs. In one example, a grant recipient applied for, and received, funding from both OJP’s Internet Crimes Against Children program and the COPS Office’s Child Sexual Predator Program to provide training for cyber crime investigations and establish an Internet safety program. In some instances, DOJ may deem it appropriate for distinct grant programs to serve one goal, or for one community or grantee to benefit from multiple streams of grant funding. However, DOJ generally lacks visibility over the extent to which its grant programs overlap and thus is not positioned to minimize the risk of potential, unnecessary duplication before making grant awards. DOJ has taken some actions that address overlap in its grant programs; for example, by requesting statutory authorization in some instances to consolidate programs that are similar. However, DOJ has not conducted an assessment of its grant programs to systematically identify and reduce overlap. Doing so would enable DOJ to identify program areas where overlap may be desirable and where a consolidation of programs may be more efficient. Further, OJP and OVW use a separate grants management system than the COPS Office uses, limiting their ability to share information on the funding they have awarded or are preparing to award to a recipient. According to COPS Office officials, its mission and grant management processes are unique enough to necessitate a separate system. However, OJP officials told GAO that its system has been and can be modified with minimal investment to accommodate different grant processes. 
DOJ has initiated a study to assess the feasibility, costs, and benefits of unifying the systems, among other options. By ensuring that such a study accounts for the effort necessary to harmonize departmental grant processes, DOJ could help prevent variations in such processes from encumbering system unification. DOJ's Office of Audit, Assessment, and Management (OAAM) oversees monitoring of grantees' compliance and conducts grant program assessments to gauge program effectiveness. GAO found that OAAM's program assessments yield richer information than its monitoring reports because they identify improvement areas. OAAM officials believe additional assessments could be beneficial. They also said they lacked resources to conduct more, but had not conducted a feasibility analysis to confirm this. By examining its mix of monitoring and assessment activities, including the costs and benefits of current resource allocations, OAAM could better ensure continuous improvement in grant programs. GAO recommends, among other things, that the department assess its grant programs for overlap, ensure its comprehensive study of DOJ grant management systems also includes an analysis of steps necessary to harmonize business processes, and examine its mix of grant monitoring and program assessment activities. DOJ agreed with GAO's recommendations.
In September 1997, the District of Columbia Financial Responsibility and Management Assistance Authority (Authority) awarded a contract to acquire a new FMS. The overall objective of the FMS project is to improve the District's financial systems through faster, more efficient, and more accurate processing, providing increased functionality, greater flexibility, and reduced cost of operations. According to the Chair of the Authority, the new FMS is intended to (1) eliminate the principal problems that exist with the current system and ensure that all financial management guidelines are adhered to, (2) enable managers to more effectively and efficiently monitor and control financial resources, and (3) produce timely, accurate, and reliable information, providing decisionmakers with the basic financial information needed to make more informed decisions. Along with the September 1997 contract award, the Authority committed to an aggressive implementation schedule. The schedule anticipates (1) pilots in five agencies beginning in February 1998, (2) implementation of the accounting system by October 1998, and (3) District-wide implementation by February 1999. We were asked to review the District's efforts to acquire a new financial management system. Our objective was to determine whether the District had implemented disciplined software acquisition processes for acquiring its new financial management system. To accomplish this, we applied the Software Engineering Institute's (SEI) Software Acquisition Capability Maturity Model (SA-CMM) and its Software Capability Evaluation (SCE) method. SEI's expertise in, and methods for, software process assessment are recognized and accepted throughout the industry. Our evaluators were all SEI-trained software specialists. SA-CMM ranks organizational maturity according to five levels (see figure 1). Maturity levels 2 through 5 require the verifiable existence and use of certain software acquisition processes, known as key process areas (KPA). According to SEI, an agency that has these acquisition processes in place is in a much better position to successfully acquire software than an organization that does not have these processes in place. We evaluated the District's software acquisition processes against six of the seven level 2 KPAs (the transition to support KPA was not evaluated because the District does not plan to support FMS in-house) and one level 3 KPA (see table 1). We selected level 2 because it is the minimum level at which any assurance exists that software acquisition processes are mature enough to consistently deliver promised software capabilities on time and within budget. We included one level 3 KPA—acquisition risk management—because it is considered by software experts to be a very important process area. Figure 1 describes the maturity levels: at the repeatable level (level 2), basic project management processes are established to track performance, cost, and schedule, and the necessary process discipline is in place to repeat earlier successes on projects in similar domains; at the initial level (level 1), the software acquisition process is characterized as ad hoc, and occasionally even chaotic, with few processes defined and success depending on individual effort. The purpose of software acquisition planning is to ensure that reasonable planning for the software acquisition is conducted and that all aspects of the total software acquisition effort are included in these plans at the proper level of detail. 
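To make the staged structure of such an evaluation concrete, the following sketch is one way to tabulate KPA ratings and derive the highest fully satisfied maturity level. It is an illustration only, not SEI's or the evaluators' actual method: the pass/fail ratings, the grouping of KPAs by level, and the rollup rule (a level is reached only when every KPA at that level and all lower levels is satisfied) are simplifying assumptions.

```python
# Illustrative sketch only: tabulating hypothetical KPA ratings and rolling them
# up to a maturity level. The all-KPAs-satisfied rollup rule is a simplifying
# assumption; a real SCE rates practices and KPAs in more detail.

KPAS_BY_LEVEL = {
    2: ["software acquisition planning", "solicitation",
        "requirements development and management", "project management",
        "contract tracking and oversight", "evaluation"],
    3: ["acquisition risk management"],
}

def highest_satisfied_level(ratings: dict[str, bool]) -> int:
    """Return the highest level for which all KPAs at that level and below are satisfied."""
    level = 1  # level 1 (initial) has no KPA requirements
    for lvl in sorted(KPAS_BY_LEVEL):
        if all(ratings.get(kpa, False) for kpa in KPAS_BY_LEVEL[lvl]):
            level = lvl
        else:
            break
    return level

# Hypothetical ratings loosely mirroring the findings discussed below:
# only the solicitation KPA fully satisfied.
ratings = {kpa: (kpa == "solicitation")
           for lvl in KPAS_BY_LEVEL for kpa in KPAS_BY_LEVEL[lvl]}
print(highest_satisfied_level(ratings))  # -> 1 (initial)
```

In a staged model of this kind, strengths in individual practices do not raise the overall maturity level until every KPA at a level is satisfied, which is why an organization can show many strengths yet still be assessed at the initial level.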
The software acquisition planning process, among other things, includes (1) addressing software life-cycle support in acquisition plans, (2) preparing life-cycle software cost estimates, (3) having a written software acquisition policy, (4) measuring and reporting on the status of software acquisition planning activities, and (5) having guidance on software training and experience requirements for project personnel. The FMS project had many strengths in this KPA. The District received pro bono assistance from several companies to help define the acquisition strategy and conduct the activities for software acquisition planning. An acquisition strategy was developed and the acquisition planning team was staffed with personnel with software and systems experience. The team developed a cost estimate and the District management was briefed by the team on a periodic basis. This enabled the District management to be informed on the progress of the acquisition planning and the various activities through the solicitation phase. However, the FMS project also had many weaknesses in this KPA. Weaknesses observed included a lack of policy on acquisition planning and no specific assignment of responsibility for acquisition planning. Furthermore, the FMS project did not always document significant project decisions or update the planning document to reflect these decisions. For example, when the District decided to not pursue a single contract to both acquire FMS and outsource data center operations, the capability assessment (a software acquisition planning document) was not updated to reflect this decision. Decisions should be documented and the planning documents updated to ensure that large acquisitions such as FMS can be effectively managed. Table 2 shows the strengths and weaknesses for the software acquisition planning KPA and the specific findings supporting these ratings. The purpose of solicitation is to prepare a request for proposal that delineates a project’s software-related requirements and select a contractor that can most cost-effectively satisfy these requirements while complying with relevant solicitation laws and regulations. Specific requirements for a solicitation process include, among other things (1) having and following a solicitation plan, (2) assigning responsibility and ensuring sufficient resources for coordinating and conducting solicitation activities, (3) preparing and reviewing cost and schedule estimates for the software products and services being acquired, and (4) periodically measuring solicitation work completed and effort and funds expended, comparing these measures to plans, and reporting the results to management. The FMS project exhibited many process strengths during the solicitation. The District has a policy on solicitation and the FMS project followed this policy. The project had experienced personnel on the source selection team and these personnel briefed the team members on the objectives of the solicitation. However, the District did not measure either time or funds expended to conduct the solicitation. Specifically, no evidence was provided to show that the FMS project tracked personnel hours or costs during the conduct of the solicitation. Addressing this weakness would enable the District to better estimate the resources needed to conduct similar acquisitions in the future. 
For example, if these data were collected and made available to other projects, such as the tax systems upgrade, the District would be in a better position to understand its own capability to effectively conduct solicitation, to estimate how long such a solicitation was likely to take, and to eliminate problems that may have hampered the FMS solicitation. Table 3 shows the strengths, weaknesses, and observations for the solicitation KPA and the specific findings supporting these ratings. The purpose of requirements development and management is to establish and maintain a common and unambiguous definition of software requirements among the acquisition team, system users, and software development contractor. This KPA involves two subprocesses: (1) developing a baseline set of software-related contractual requirements and (2) managing these requirements and changes to these requirements for the duration of the acquisition. A number of requirements development and management practices are necessary to satisfy this key process area. These include (1) having a written organizational policy for establishing and managing requirements allocated to software, (2) documenting plans for the development and management of requirements, (3) having documented processes for requirements development, including elicitation, analysis, and verification, (4) measuring and reporting on the status of requirements development and management activities to management, (5) appraising the impact on software of system-level requirements changes and (6) having a mechanism to ensure that contractor-delivered work products meet specified requirements. The FMS project has some process strengths in the conduct of requirements development and management. The project team is performing requirements management activities in accordance with its documented plan and software-related contractual requirements have been baselined. In addition, District management periodically reviews the status of requirements development and management activities with the project team. However, in acquiring FMS, the District did not perform many of the requirements development and management practices necessary to satisfy this KPA. For example, the District does not have an organizational policy for establishing and managing software-related requirements, there is no clear assignment of responsibility for requirements development and management and no documented evidence exists to show either resource requirements or resources expended for requirements development activities. Currently, the FMS project has begun to hold “requirements confirmation meetings” with the users to validate the requirements already specified in the FMS contract. Although requirements should be validated, this should have been done prior to releasing the request for proposal to ensure that the proposal accurately reflects the District’s requirements. Changing requirements after contract award may adversely impact project cost, schedule, and/or performance. Table 4 shows the strengths, weaknesses, and observations for the requirements development and management KPA and the specific findings supporting these ratings. The purpose of project management is to manage the activities of the project office and supporting contractors to ensure a timely, efficient, and effective software acquisition. 
Effective project management requires, among other things, that project teams (1) be organized to accomplish the project’s objective, (2) have a written policy for the management of the software project, (3) document their plans for the activities of the project team, (4) have the authority to alter either the project’s performance, cost, or schedule baseline while maintaining the other two, and (5) periodically brief management on the status of project management activities. The FMS project has many process strengths in project management. For example, a team was assigned responsibility for managing the project and staffed with experienced individuals whose roles and responsibilities were defined. The program management plan was written and a corrective action system to track issues and problems was implemented. However, the District has no written policy for the execution of the software project. As a result, the District has no assurance that FMS or any other software acquisition project it undertakes will be conducted in a disciplined manner. Table 5 shows the strengths, weaknesses, and observations for the project management KPA and the specific findings supporting these ratings. The purpose of contract tracking and oversight is to ensure that (1) the software development contractor performs according to the terms of the contract, (2) needed contract changes are identified, negotiated, and incorporated into the contract, and (3) contractor performance issues are identified early, when they are easier and less costly to address. An effective contract tracking and oversight process, among other things, includes (1) having a written organizational policy for contract tracking and oversight, (2) having a documented plan for contract tracking and oversight, (3) conducting tracking and oversight activities in accordance with the plan, and (4) ensuring that individuals performing contract tracking and oversight are suitably experienced or trained. The FMS project had many strengths in this KPA. The project has a designated project manager, a group is responsible for managing contract tracking and oversight activities, and the team is meeting periodically with the contractor and tracking issues in a corrective action system. However, at the time of our review, there was no contracting specialist supporting the team in the execution of the contract. In addition, the District has no documented policy for contract tracking and oversight activities. Table 6 shows the strengths, weaknesses, and observations for the contract tracking and oversight KPA and the specific findings supporting these ratings. The purpose of evaluation (testing) is to determine that the acquired software products and services satisfy contract requirements prior to acceptance. The evaluation process includes (1) documenting evaluation plans and conducting evaluation activities in accordance with the plan, (2) developing and managing evaluation requirements in conjunction with developing software technical requirements, (3) incorporating evaluation requirements into the solicitation and the resulting contract, (4) tracking contractor performance of evaluation activities for compliance with the contract, (5) ensuring that adequate resources are provided for evaluation activities, and (6) measuring and reporting on the status of evaluation activities to management. The FMS project has some process strengths in this KPA. 
For example, responsibility for evaluation activities has been designated to the project manager, individuals designated to perform evaluation activities have experience, and members of the evaluation team received briefings on the objectives of the evaluation. However, there is no documented evaluation policy or plan, no evidence that evaluation requirements have been developed, and neither the Authority nor the project manager reviews the status of evaluation activities. Table 7 shows the strengths, weaknesses, and observations for the evaluation KPA and the specific findings supporting these ratings. SEI defines risk as the possibility of suffering a loss. The purpose of acquisition risk management is to formally identify risks as early as possible and adjust the acquisition to mitigate those risks. An effective risk management process, among other things, includes (1) having a written policy on acquisition risk management, (2) developing a software acquisition risk management plan, (3) conducting software risk management activities in accordance with the plan (e.g., identifying risks, taking mitigation actions, and tracking risk mitigation actions to completion), and (4) measuring and reporting on the status of acquisition risk management activities to management. The FMS project had one strength for this KPA. The project has designated responsibility for risk management to the project management team. However, the District is not performing any of the other practices to satisfy this KPA. For example, there is no written policy or plan for acquisition risk management, resource requirements for risk management have not been identified, and at the time of this audit, neither the Authority nor the project manager were reviewing the activities for risk management. Table 8 shows the strengths, weaknesses, and observations for the acquisition risk management KPA and the specific findings supporting these ratings. Leading software acquisition organizations rely on defined and disciplined software acquisition processes to deliver promised software capabilities on time and within budget, first on a project-by-project basis, and later, as the organization’s processes become more mature, consistently across the institution. While the District has many strengths in its acquisition processes for FMS, it also has many weaknesses that, overall, make its processes undisciplined and immature. As a result, the District’s success or failure in acquiring FMS depends largely on specific individuals rather than on well-defined software acquisition management practices. This greatly reduces the probability that the system will consistently perform as intended and be delivered on schedule and within budget. To satisfy the intent of all the software acquisition key process areas and thereby have a reasonable assurance that acquisition efforts are effectively planned, managed, evaluated, and tracked, the District must address the many weaknesses identified in this report. This would entail the District formulating and implementing a written policy for software acquisition planning, requirements development and management, project management, contract tracking and oversight, evaluation, and acquisition risk management. In addition, it is important for the District to track the various activities for each KPA to ensure that they are being performed and that evaluation and risk management activities are being planned and effectively conducted. 
We recommend that the Chairman of the District of Columbia Financial Responsibility and Management Assistance Authority direct the District's Chief Financial Officer to (1) take the following actions for six of the KPAs we reviewed to ensure that the current FMS acquisition and implementation is satisfactorily completed and (2) apply these actions to any future software acquisitions.

Software Acquisition Planning:
Document decisions and update the planning documents to ensure that large acquisitions such as FMS can be effectively managed.
Designate responsibility for software acquisition planning activities.
Determine required resources for acquisition planning.
Ensure that measurements of software acquisition activities are taken.
Ensure that the software acquisition planning documentation is updated as program changes are made, such as decisions regarding outsourcing of the data center and upgrading the current system versus buying off-the-shelf software.
Ensure that the software acquisition planning documentation addresses life-cycle support of the software.
Develop a written policy for software acquisition planning.

Requirements Development and Management:
Develop an organizational policy for establishing and managing software-related requirements.
Clearly assign responsibility for requirements development and management.
Document either resource requirements or resources expended for requirements development activities.
Develop the capability to trace between contractual requirements and the contractor's work products.
Develop measurements to determine the status of the requirements development and management activities.

Project Management:
Develop a written policy for the execution of the software project.
Authorize the project manager to independently alter either the performance, cost, or schedule.
Require that measurements be taken to determine the status of project management activities.

Contract Tracking and Oversight:
Develop written policy for contract tracking and oversight activities for the financial management system project.
Support the project team with contracting specialists.
Require that the project team review the contractor's planning documents (for example, the project management plan, software risk management plan, software engineering plan, configuration management plan).
Assign someone responsibility for maintaining the integrity of the contract.
Take measurements to determine the status of contract tracking and oversight activities.

Evaluation:
Develop written policy for managing the evaluation of acquired software products and services.
Develop a documented evaluation plan.
Develop evaluation requirements in conjunction with system requirements.
Assess the contractor's performance for compliance with evaluation requirements.
Develop measurements to determine the status of evaluation activities.
Ensure that the Authority and the project manager review the status of evaluation activities.

Acquisition Risk Management:
Develop written policy for software acquisition risk management.
Designate a group to be responsible for coordinating software acquisition risk management activities.
Define resource requirements for acquisition risk management.
Ensure that individuals designated to perform software acquisition risk management have adequate experience and training.
Integrate software acquisition risk management activities into software acquisition planning.
Develop a software acquisition risk management plan in accordance with a defined software acquisition process. 
Develop a documented acquisition risk management plan and conduct risk management as an integral part of the solicitation, project performance management, and contract performance management processes.
Track and control software acquisition risk handling actions until the risks are mitigated.
Ensure that risk management activities are reviewed by the Authority and the project manager.

We requested comments on a draft of this report from the Chairman, District of Columbia Financial Responsibility and Management Assistance Authority, and the District's Chief Financial Officer. They provided us with written comments that are reprinted in appendixes I and II. In their comments, the District of Columbia Financial Responsibility and Management Assistance Authority's Executive Director and the District of Columbia's Chief Financial Officer acknowledged that the software acquisition project for the new financial management system was a high-risk initiative and that the District's processes were not sufficiently mature. The District Chief Financial Officer identified initiatives in each of the key process areas. Both cited ongoing corrective actions, which, if properly implemented, will address several of our recommendations. For example, the Chief Financial Officer stated that the District is developing a Risk Management Plan and is evaluating various strategies to identify and manage risks, and that the Chief Technology Officer for the District of Columbia is developing policies and procedures for information resource management, which will include software acquisition. However, the District Chief Financial Officer also added that the District's efforts to date have achieved a sound acquisition state consistent with the intent of the SA-CMM. As discussed in the report, significant improvements would be necessary to satisfy the intent of all the software acquisition key process areas and achieve the minimally acceptable level of maturity as defined by the Software Engineering Institute's Software Acquisition Capability Maturity Model. Accordingly, the District has not yet achieved a sound acquisition state consistent with the intent of the SA-CMM. If the District is to instill the needed discipline into its systems acquisition processes consistent with the intent of SA-CMM, it will need to effectively implement all of our recommendations. We are sending copies of this report to the Ranking Minority Member of your Subcommittee and to the Chairmen and Ranking Minority Members of the Subcommittee on Oversight of Government Management, Restructuring, and the District of Columbia, Senate Committee on Governmental Affairs, the Subcommittee on the District of Columbia, Senate Committee on Appropriations, and the Subcommittee on the District of Columbia, House Committee on Government Reform and Oversight. We are also sending copies to the Director of the Office of Management and Budget, the Chairman of the District of Columbia Financial Responsibility and Management Assistance Authority, and the Chief Financial Officer of the District of Columbia. Copies will be made available to others upon request. If you have questions or wish to discuss the issues in this report, please contact me at (202) 512-6412. Major contributors to this report are listed in appendix III.
Richard Cambosos, Senior Attorney
Pursuant to a congressional request, GAO reviewed whether the District of Columbia had implemented disciplined software acquisition processes for its new financial management system (FMS). GAO noted that: (1) while the District has many strengths in its acquisition processes for FMS, it also has many weaknesses; (2) when compared to standards established by the Software Engineering Institute, the District's processes for software acquisitions are not mature; (3) of the six key process areas (KPA) evaluated for the repeatable level, the District fully satisfied only one, solicitation; (4) severe weaknesses were found in other critical key processes, including requirements development and management and evaluation; (5) for example, the District does not have a policy for establishing and managing software-related requirements, does not at present have adequate resources for requirements development, and has not formally designated responsibility for requirements development and management; (6) similarly, the District does not have an effective evaluation process, and is currently unable to objectively determine if the acquired systems will satisfy the contract requirements; (7) finally, the District has not satisfied the one KPA evaluated for the defined level of maturity, acquisition risk management; and (8) the FMS project does not have a risk management plan and does not track project risk.
The countries of Latin America have a long history of political change, including dictatorships, autocratic rule, military juntas, and various forms of democracy. According to Freedom House, a U.S. research organization that tracks political developments around the world, these countries have, since the 1980s, gradually progressed toward stronger democracies, as measured by the extent to which the citizens of these countries enjoy political rights and civil liberties (see fig. 1). Of the six countries in our study (Bolivia, Colombia, El Salvador, Guatemala, Nicaragua, and Peru), all but Colombia and Nicaragua experienced a strengthening of democracy by these standards between 1992 and 2002 (see table 1 and app. VI for more information). Appendix V provides further information on the quality of life and selected indicators for the selected countries. In September 2001, the 34 democratic members of the Organization of American States (OAS) unanimously adopted the Inter-American Democratic Charter, declaring that "the peoples of the Americas have a right to democracy and their governments have an obligation to promote and defend it." This commitment goes beyond preserving elections to ensuring the defense of human rights and fundamental freedoms, popular participation in government, the rule of law, the separation of powers, and transparent and accountable government institutions. Despite this commitment, many Latin American nations have yet to fully achieve these conditions. According to the OAS charter, the hallmarks of democracy include respect for the rule of law on the part of all institutions and sectors of society; constitutional subordination of all state institutions to the legally constituted civilian authority; access to and the exercise of power in accordance with the rule of law; transparency in government activities, probity, and responsible public administration on the part of governments; participation of citizens in decisions relating to their own development; separation of powers and independence of the branches of government; a pluralistic system of political parties and organizations; freedom of expression and of the press; respect for human rights and fundamental freedoms; and periodic, free, and fair elections based on secret balloting and universal suffrage. Although the national governments of all six countries we visited have been democratically elected since the 1990s, they face serious social, economic, and political challenges that have made strengthening key democratic institutions a difficult and long-term endeavor. In South America, Colombia continues to struggle with the escalation of a nearly 40-year campaign to overthrow the government, with attendant economic and social disruptions that affect thousands of its citizens each year, while Peru is emerging from the shadow of authoritarian rule and the violent actions of insurgent guerrillas. Bolivia, which has had a relatively more stable political environment, must now deal with a host of economic challenges and an increasingly disillusioned and vocal indigenous class. In Central America, El Salvador's and Guatemala's Peace Accords were signed in 1992 and 1996, respectively, providing a framework for rebuilding those societies after decades of civil war. Nicaragua, one of the poorest nations in the hemisphere, still confronts political polarization and corruption, according to U.S. officials. The United States has provided assistance to many of the countries of Latin America and the Caribbean to aid in strengthening democracies. 
From fiscal years 1992 to 2002, the six countries in our study, Bolivia, Colombia, El Salvador, Guatemala, Nicaragua, and Peru, received about $580 million in assistance (see fig. 2 for distribution of funding among these six countries). Almost all U.S. funding for democracy assistance, authorized under the Foreign Assistance Act of 1961, is appropriated to the U.S. Agency for International Development (USAID) and the Department of State. A significant amount of assistance has been allocated to the Department of Justice through interagency fund transfers from USAID and State. From fiscal years 1992 through 2002, USAID has administered $479.3 million of program funding for democracy activities in this region, while the Justice Department has administered $101.3 million. The State Department also administered democracy-related programs during this time period. However, the department could not readily provide disaggregated data on the bulk of its democracy-related programs, such as funding provided by the Bureau for International Narcotics and Law Enforcement Affairs (INL). Figure 3 shows the distribution among the major implementing agencies of democracy assistance funding to the six countries we reviewed. Other organizations with democracy-related assistance activities funded by the U.S. government include the National Endowment for Democracy, the Inter-American Foundation, and the Department of the Treasury. These agencies provide assistance through a variety of means, primarily in the form of goods and services to governmental and nongovernmental organizations and individuals. For some projects, such as law enforcement training, U.S. government agencies provide the assistance directly, or with contract assistance, as needed. For other projects, such as institutional development projects, the agencies distribute aid to beneficiaries primarily through grants, cooperative agreements, and contracts with nongovernmental organizations, private voluntary organizations, and firms located in the United States or overseas. Cash disbursements are generally not provided directly to foreign governments. Democracy assistance efforts, if successful, can influence political stability and economic growth. Economists have long demonstrated that countries with stronger democratic institutions are more likely to experience sustained economic growth. For example, the positive relationship between the respect for property and contractual rights and the rate of economic growth has been found to be especially strong. Law-respecting, accountable governments tend to provide conditions that encourage long- term investments and innovation. As the standard of living improves, the probability of further democratization of political institutions over time increases substantially. Many other foreign donors have also provided democracy assistance to the countries covered in our review. Multilateral donors, including the Inter- American Development Bank (IDB), the World Bank, the United Nations, and OAS have been active in funding democracy-related activities. In addition, many Western European countries, the European Union, and private international donors have also financed projects similar to those funded by the United States. We did not attempt to determine the total amounts of funding and the outcomes associated with this assistance, given the difficulty in identifying many different efforts, their costs, and the paucity of studies documenting program outcomes. 
The United States has taken a broad approach to providing democracy assistance. The assistance approach generally incorporates four elements: (1) rule of law, (2) governance (3) human rights, and (4) elections. (See fig. 4 for an illustration of these elements.) Rule of Law: These projects support constitutional and criminal code reforms to make criminal justice more swift, transparent, and participatory; establish new institutions and enhance existing ones to improve management of the justice sector and to help safeguard the legal rights of citizens; provide technical assistance, training, and management information systems for judges, prosecutors, public defenders, and law enforcement agencies to improve their capabilities and increase their efficiency, effectiveness, and fairness; increase access to justice through mediation, alternative dispute resolution, and other mechanisms; and reform law school curricula to reflect modern methods and necessary skills for practicing law. Governance: These projects seek to improve the administrative, analytical, and outreach capacity of legislatures; strengthen the administrative capacity and accountability of municipalities and increase citizen participation in local government; and foster a greater public awareness about corruption and implement strategies to enable government institutions to become more transparent and accountable. Human Rights: These projects are intended to prevent human rights abuses through greater public awareness, protect citizens against abuses, and respond to past violations through legal action and public reconciliation processes. Elections: These projects are designed to improve election administration, enhance voter access, and legitimize election results by supporting domestic and international observers. USAID and the State and Justice Departments have not traditionally accounted for funding data according to the four elements previously described but have provided this information for fiscal years 2000 through 2002, as shown in figure 5. While assistance to civil society appears to be relatively small in figure 5, important civil society support is also included through the four programmatic areas we focus on in this report. While USAID funds and implements assistance projects in all areas covered by this report, the State Department provides funding to the Justice Department for law enforcement assistance. The State Department’s Bureau of Democracy, Human Rights, and Labor also provided a relatively small amount of democracy-related assistance to some of the six countries covered in our review, as did the department’s Western Hemisphere public diplomacy program. To assess the nature, impact, and sustainability of U.S. assistance programs to strengthen democratic institutions in Bolivia, Colombia, El Salvador, Guatemala, Nicaragua, and Peru, we first interviewed headquarters officials in Washington, D.C., at the departments and agencies providing rule of law, governance, human rights, and election assistance, including USAID, the State and Justice Departments, the National Endowment for Democracy, and the Inter-American Foundation. We also interviewed experts at nongovernmental organizations, including the National Democratic Institute, the International Republican Institute, the Washington Office on Latin America, and Human Rights Watch. 
For all six countries, we reviewed Mission Performance Plans, USAID country and regional strategic plans and other planning documents, funding agreements, contracts, and project evaluations. We obtained funding information for fiscal years 1992 through 2002 from USAID headquarters and country staff and the Justice Department (the Justice Department administers funding provided by the State Department). The State Department could not readily differentiate most of its democracy-related assistance funding during this period from counternarcotics-related funding, which we did not include in the scope of our review. We also reviewed our prior reports on democracy assistance to Latin America. We conducted fieldwork in each of the previously identified six countries between March and September 2002. In each of these countries, we met with the U.S. Ambassador; the USAID Chief of Mission; political and economic officers; senior U.S. officials representing agencies with rule of law, governance, human rights, or elections programs; and numerous program staff, including contractors responsible for implementing the projects. We interviewed host country officials at supreme courts; law enforcement organizations; legislatures; national ombudsmen; and ministries covering justice, police, local governments, government oversight, and elections. We visited training schools for judges, prosecutors, and police; local justice centers; local government pilot projects; and legislative outreach offices, as appropriate. We also met with numerous representatives from nongovernmental organizations and other groups representing a broad spectrum of civil society, including local citizen groups involved with rule of law, governance, human rights, and elections programs. To analyze the overarching management issues that have affected program outcomes, we analyzed project documentation, interviewed knowledgeable officials, and reviewed assistance activities on field visits to the six countries. We then analyzed and synthesized information across the six countries. To look for broader themes, we also interviewed experts in the field, including those from nongovernmental organizations and academia, and attended USAID’s annual democracy officers’ conference in 2001. We performed our work from August 2001 through December 2002 in accordance with generally accepted government auditing standards. Reforming the criminal justice sector has been a critical area of concern in Latin America. Nontransparent legal processes, corruption, and incarceration of prisoners for months or years before trials can undermine confidence that justice is being dispensed fairly. Surveys done in the region have shown that high levels of crime and citizens’ lack of trust in justice institutions are positively correlated with reduced public support of democracy. In the six countries we reviewed, USAID and the State and Justice Departments have sought to (1) reform criminal justice systems by helping establish new legal frameworks to make criminal procedures more efficient and transparent and by strengthening the capabilities of justice sector institutions, (2) increase the public’s access to the justice system by establishing public defense services for poor defendants and by supporting construction of justice centers in poor communities, and (3) help law enforcement institutions conduct criminal investigations and manage their operations more efficiently and effectively. We found that although the U.S. 
assistance had contributed to noteworthy progress in these areas in most of the countries we reviewed, concerns remain about whether gains will be sustained. Due to resource constraints and other implementation difficulties, judicial and law enforcement institutions in these countries continue to rely to a large degree on U.S. and other international assistance for implementing justice sector reforms. U.S. officials also stated that legislative restrictions on law enforcement assistance restrict their ability to plan and carry out comprehensive justice sector reform programs because they prohibit many types of police assistance. As seen in table 2, U.S. rule of law assistance has been provided to five of the six countries we visited since the mid-1980s, beginning first with El Salvador in 1984. A key component of U.S. rule of law assistance in five of the six countries we reviewed has been support for criminal justice sector reforms establishing new roles and responsibilities for judicial and law enforcement institutions and introducing oral procedures and public trials. Support for criminal justice reforms has been provided primarily by USAID and the Justice Department and has focused on facilitating constitutional and criminal code reforms, helping to create and strengthen justice sector institutions, and improving legal training for justice sector professionals and reforming law school curricula. The United States has helped five of the countries we reviewed establish new legal frameworks for their criminal justice systems, supporting the drafting of new criminal codes and developing political consensus for criminal justice reform, both within the government and among civil society. Although the reforms each country has enacted have varied, U.S. assistance has supported the necessary legal frameworks for oral, adversarial criminal procedures and training for justice sector actors to implement these procedures. The United States has assisted Latin American countries’ transitions from inquisitorial to adversarial systems to help increase the transparency and efficiency of the judicial process. Benefits of the adversarial system include shortened pretrial detentions, the presumption of innocence, and the right to a defense. Host country officials commented that U.S. support has been critical to building consensus for the development and enactment of these reforms. USAID has supported constitutional and criminal procedures code reforms that went into effect in Colombia (1991), Guatemala (1994), El Salvador (1998), Bolivia (2001), and Nicaragua (2002). In Bolivia, for example, USAID’s rule of law assistance since 1997 has focused primarily on support for the passage and implementation of a new criminal procedures code. USAID’s assistance, provided in close coordination with the German government, has supported reforms that provide the basis for oral, accusatory procedures and public trials, which significantly changed the roles and responsibilities of judges, prosecutors, defense attorneys, and the police. In addition, U.S. and German assistance has supported disseminating information on the code to the public, mainly through nongovernmental organizations. Despite achievements in passing criminal justice reforms, these countries have had varying degrees of success in implementing the reforms in practice, and each has work remaining to fully put into practice the new roles and responsibilities contained in the reforms. 
For example, Nicaragua and Bolivia have only recently begun implementing newly enacted criminal procedures codes, while reforms for criminal sentencing codes have not yet been enacted. The Nicaraguan legislature also passed an administrative litigation code in 2000, which created a mechanism for citizens to bring legal cases against the government. This code has not been implemented because, according to a USAID official, the Supreme Court has raised constitutional objections to it. Colombia and Guatemala enacted criminal justice reforms in the early 1990s but have made limited progress in implementing them. Colombia, for example, has made little progress in establishing an adversarial criminal justice system, including oral trials, despite enacting its constitutional reform in 1991. Colombia's reforms established a legal structure for oral trials and modernized criminal investigation and prosecutorial functions, and the reforms were developed through a coordinated approach that involved key justice sector institutions. Following this promising start, however, political support for these reforms waned during the 1990s, and oral, adversarial procedures are still rare in Colombia, according to USAID officials. Although Guatemala's reforms provided the basis for transitioning to an adversarial criminal justice system in 1994, and Guatemala reorganized and created the necessary justice institutions for implementing the reforms, the Guatemalan justice system is still plagued by problems, particularly in the courts, the prosecutor's office, and the police. During our visit to Guatemala, the prosecutor's office and the police were still trying to resolve profound differences in the roles that their respective institutions would have in carrying out criminal investigations. U.S. assistance provided by the State and Justice Departments and USAID has helped justice institutions introduce important enhancements to their organizations and operations. Despite these improvements, the Guatemalan criminal justice system still faces serious challenges in its efforts to fully implement these reforms. El Salvador appears to have made the most progress in reforming its justice sector; for example, the Attorney General has instituted sweeping personnel changes in the prosecutor's office to improve the quality and integrity of its workforce. However, the judiciary in El Salvador has yet to institute similar reforms, according to U.S. officials. According to the State Department's most recent human rights reports, the judiciaries in each of the six countries we reviewed are continuing to face problems, including inefficiency, corruption, and a climate of impunity. In Bolivia, for example, State reported that judicial corruption and inadequate case-tracking mechanisms are contributing to the incarceration of persons for months or years before their trials. In Colombia, State reported that a backlog of over 3 million cases has overburdened the judicial system and that prosecutors and judges are struggling to transition from traditional, written procedures to an oral, adversarial system. U.S. assistance also has supported the creation and strengthening of new institutions to implement the new codes and other reforms, such as judicial councils that participate in selecting, training, and disciplining judges and independent prosecutor's offices to manage investigations and bring criminal cases to trial. 
For example: In Bolivia, USAID assistance supported creating a judicial council in 1998 that reviews the qualifications of judicial candidates, evaluates the performance of sitting judges, and manages a training center for judges. In Nicaragua, USAID has supported establishing a prosecutor's office, independent of the executive branch, that will implement the new criminal procedures code. The United States also has provided assistance to strengthen and modernize justice sector institutions' operational capabilities. For example, USAID support helped establish a clerk of courts office in Guatemala City that centralized case intake and management in one location for the city's 11 criminal courts. A USAID study showed that after this office was established in 1999, the annual number of cases that were unaccounted for decreased from more than 1,000 to 2. USAID and the Justice Department also have assisted in the publication of operations manuals for judges, prosecutors, and other legal operators to help clarify roles and responsibilities and ensure uniform implementation of legal codes. Judicial and law enforcement institutions that the United States has assisted face resource constraints that make it difficult to sustain or expand U.S.-supported pilot projects. For example, in Bolivia, the government lacked the resources to maintain or replicate a U.S.-funded model prosecutor's office, and the project ended with little impact. Also in Bolivia, USAID supported a pilot case intake and management system for judges. This system was designed to provide information on case assignments and their progress through the judicial system. Although the system was first implemented in 1996, its use continues to be uneven due to resource constraints, and it has not been implemented on a national level. In Colombia, USAID had funded 13 oral trial courtrooms, in addition to 13 such courtrooms opened by Colombia's judicial council. However, these are the only oral trial courtrooms currently operating in the country, and a major challenge will be to build similar courtrooms for the country's more than 2,000 municipal, circuit, and special jurisdiction judges. In one regional court we visited in Colombia, USAID had built an oral hearing room and equipped it with new recording equipment to facilitate this transition. Although judges were holding regular oral hearings in this room, the equipment was not used because the court could not afford audiotapes (see fig. 6). In five of the six countries we visited, USAID and the Justice Department have provided extensive legal training to judges, prosecutors, investigators, and public defenders on new criminal procedures codes, either directly or through support to training centers in host government institutions. For example: In Bolivia, these agencies trained more than 5,000 justice operators on the country's new code through a variety of courses, seminars, and "train-the-trainer" activities. In Colombia, USAID has assisted a training academy for judges by supporting the restructuring of the school and its curriculum. The school has trained 600 judges to be trainers, allowing the training to be replicated throughout the country. The Justice Department also has provided extensive training to prosecutors and law enforcement personnel. However, training centers for judges, prosecutors, and public defenders have faced severe budgetary constraints and in most cases do not operate independently of U.S. assistance. 
For example, in Colombia, the director of the judges' training academy told us that its budget has been eliminated, and the future operation of this center is uncertain. Similarly, a USAID-supported training center within Colombia's Public Defender's Office lacks a training budget. In Bolivia, the Attorney General told us that, without international assistance, he could not afford to staff and adequately equip his academy to train prosecutors to implement the country's new criminal procedures code. USAID also has worked with some law schools in Bolivia, Colombia, El Salvador, and Guatemala to revise their curricula to reflect new reforms and provide more practical training in oral, public trials. For example, USAID helped Guatemala's National University implement a revised curriculum for new law students with a greater emphasis on ethics and courses on constitutional law and human rights. Nonetheless, U.S. and host country officials in the countries we visited also stated that legal education remains a major concern. Although law schools in these countries have proliferated, officials stated that many schools do not provide adequate legal training. In El Salvador, the validity of the degrees and academic credentials of judges and attorneys has come into question, and the Supreme Court has initiated an extensive review of justice officials' academic backgrounds. Host country officials in El Salvador commented that poor-quality legal education requires that lawyers and judges be retrained once they enter the justice sector. USAID has supported efforts to increase citizens' access to justice through programs to provide legal services to poor citizens and communities (see figs. 7 and 8). USAID's access-to-justice assistance has focused on establishing and strengthening public defender's offices and supporting decentralized justice centers and alternative dispute resolution mechanisms. USAID has assisted in establishing or strengthening professional Public Defender's Offices in five of the six countries we reviewed by helping build political consensus for the creation of these offices and by providing operational support. USAID also has provided training and operation manuals and has supported computerized information systems for Public Defender's Offices. The number of public defenders and the services they provide have also increased, due in part to USAID contributions. For example: In El Salvador, the number of public defenders increased from 25 in 1991 to over 300 in 2002, and USAID contributed to this increase by initially paying public defender salaries. El Salvador's Public Defender's Office now also has local and national coordinators, investigators, and legal aides. This office handles an average of 35,000 cases per year, which is approximately 95 percent of El Salvador's criminal cases. In Guatemala, USAID supported creating an independent public defender's institute, as called for in the 1996 Peace Accords. In 2001, the institute provided services to approximately 20,000 Guatemalans. These newly created Public Defender's Offices have faced severe budgetary constraints and in some cases are not able to provide adequate services to poor defendants nationwide. For example: In Nicaragua, the Public Defender's Office, created in 1999, had only 13 attorneys when we visited, all of whom were located in the capital, Managua. Since then, according to USAID, 23 additional offices have been established throughout Nicaragua, and the total number of public defenders has increased to 47. 
Colombia’s public defenders work on a part-time contractual basis. According to USAID’s justice contractor, these defenders have large caseloads and are paid a low, fixed salary. Furthermore, Colombia’s approximately 1,200 public defenders handle less than 10 percent of the cases involving poor defendants. Private attorneys appointed by the court to work on a pro bono basis handle the rest of the cases. In Bolivia, host country officials told us that the USAID-supported Office of Public Defense, established in 1995, has not been adequately funded. The office depends on external financing to fund the relatively low public defender salaries. Bolivian officials stated that they have not been able to adequately replace staff who left for higher salaries, and that in criminal trials, public defenders are at a substantial disadvantage to prosecutors because they lack resources and support services. The State Department’s most recent reports on human rights cited the Public Defender’s Offices in four of the six countries as being overburdened by large caseloads and not always able to provide qualified attorneys for indigent defendants. In Nicaragua, State also cited public defenders’ complaints that judges were continuing to sentence poor defendants without the presence of a public defender, despite these defendants’ right to legal counsel. In four of the six countries we reviewed, USAID has provided funds to support the creation of decentralized, community-based houses and centers to provide greater access to the justice system and mediation to resolve domestic disputes. In Colombia, “justice houses” (casas de justicia) have been built in poor, marginalized areas to provide dispute resolution and other legal services and help reinforce the presence of the Colombian government. Since 1995, 18 justice houses have been built, and USAID plans to expand this number to 40 by 2005. National institutions provide the staff, including prosecutors, public defenders, police inspectors, social workers, and mediators, while municipalities are expected to maintain the facilities. In Guatemala, 11 justice centers were built outside the capital along with 16 complementary mediation centers to serve indigenous communities near these centers. U.S. and Guatemalan officials stated that the centers have facilitated coordination of services and have improved local citizens’ experiences with the justice system. In Peru, the Ministry of Justice has established 32 conciliation centers and 31 legal aid clinics in poor communities. In El Salvador, a pilot project plans to open 6 justice houses by the end of 2003. These justice houses will focus on providing mediation services. Despite the positive impact that the justice houses and centers appear to have had, it is not clear how these projects will be supported by host governments or whether they will be able to operate without U.S. assistance. Greater host country commitment of resources will be required to make them more sustainable and to have a wider impact. For example: While Colombia’s Ministry of Justice and Interior has supported the justice houses, it has not made a commitment to build more or take over existing ones from USAID. Further, many Colombian municipalities face severe resource constraints and may not be able to maintain and support existing justice houses. Guatemala has had some success in expanding an aspect of the justice center model from secondary cities to its capitol, Guatemala City, improving officials’ ability to track criminal cases. 
However, Guatemala’s justice centers are not currently sustainable without U.S. or other donor support, according to USAID and contractor officials, and the Guatemalan government has no plans to fully expand this justice center model to the national level. In Peru, USAID funded pilot conciliation and legal aid centers by nongovernmental organizations and municipalities as well as government-operated centers in several major cities. USAID also has helped the Peruvian government build its capacity to train, license, and regulate a growing number of private conciliators. However, most pilot centers that USAID helped create are now closed for lack of funds, according to project officials. The Minister of Justice also told us that the government lacks the resources to expand the number of government-operated conciliation centers or provide meaningful oversight to privately run centers. U.S. assistance to develop and strengthen the capacities of the police in the six countries we reviewed was provided primarily by the Justice Department’s International Criminal Investigations Training and Assistance Program (ICITAP). ICITAP’s assistance in these countries has focused primarily on developing criminal investigations capabilities by providing training and supporting police management, accountability, and operations (see figs. 9 and 10). Five of the countries we reviewed have ICITAP police assistance programs. A key focus of this assistance has been to strengthen police criminal investigations capabilities by providing direct training to investigators in crime-scene management and coordinating with prosecutors, among other areas, and helping investigator schools prepare to take over these functions. ICITAP also has provided equipment for analyzing forensic evidence and has assisted in developing computerized case management systems. In Guatemala, for example, ICITAP has provided assistance to strengthen the criminal investigations unit within the National Civilian Police, including training in investigative, administrative, and case management skills, and supported an automated case-tracking system. In Colombia, ICITAP also has focused on providing training in criminal investigations, developing internal training capabilities, and strengthening forensics capabilities. Currently, ICITAP is providing assistance under Plan Colombia and the Andean Regional Initiative, which is a more than $2 billion effort to assist the Colombian government in fighting illicit crop production and improve its judicial and law enforcement capabilities. ICITAP, with the Justice Department’s Office of Overseas Prosecutorial Development, Assistance, and Training, has helped establish and strengthen specialized investigative units that focus on money laundering, human rights, anticorruption, and antinarcotics. In an effort to improve interinstitutional coordination, these units include prosecutors, judicial police, and other investigative personnel. In addition, ICITAP is strengthening Colombia’s forensics capabilities in the country’s four laboratory systems through standardized procedures, protocols, and new equipment. In El Salvador, USAID also is working in the criminal investigations area by providing courses on joint crime-scene management as requested by El Salvador’s Attorney General and Chief of Police. Prosecutors, police investigators, and forensic specialists have attended this course to improve police-prosecutor coordination in protecting and managing evidence. U.S. 
assistance to strengthen criminal investigations capabilities has provided extensive training and supported the development of internal training centers. However, the impact of this assistance has been limited, in some cases, due to the lack of political will for change and resource constraints. In Guatemala, for example, U.S. officials stated that corruption, funding problems, and the lack of political will for reform have limited the impact of U.S. assistance to strengthen criminal investigations. According to U.S. officials, key barriers to improving the police's investigative capabilities have been the lack of continuity of police leadership and coordination problems between prosecutors and police, including an inability of these institutions to agree on their roles. U.S. officials in Guatemala further stated that the police-prosecutor dispute has impeded effective crime-scene management and evidence handling, and the two institutions developed criminal laboratories with overlapping functions. In Bolivia, ICITAP has supported training and provided equipment for the criminal investigation unit, but the ICITAP program manager stated that courses had to be repeated multiple times because of a high turnover of officers within the unit. In addition, U.S. and Bolivian officials stated that the Bolivian police are facing significant resource constraints that have impeded their ability to operate and expand an ICITAP-supported case management system that would link police units and records in different cities. Although the system was originally designed as a nationwide system when it began in 1997, ICITAP is now supporting implementation in five cities, and even in these locations use of the system has varied. According to ICITAP officials, in some cases, police have not paid telephone bills, causing service to be cut off, which has been a major obstacle. Bolivian police officials told us that resource constraints also have prevented them from purchasing fingerprint powder and toner for printers, thus precluding full use of ICITAP-donated equipment. ICITAP officials stated that Bolivia's centralized administration and management of the police have not been responsive to the resource needs of departmental police units. In August 2002, the State Department's Bureau of Western Hemisphere Affairs defunded ICITAP's police assistance program in Bolivia. A State Department official said that the decision was made on the basis of dissatisfaction with ICITAP headquarters' management of the program. This official also stated that future U.S. police assistance in Bolivia would be taken over by the State Department's INL and USAID. A Justice Department official said that State's decision was a reflection of a continuing disagreement between the State and Justice Departments over the role of each agency in implementing and managing criminal justice programs. The official noted that ICITAP headquarters had provided the same management assistance throughout the region, including to the successful program in El Salvador. According to the State Department's most recent human rights reports, the police in each of the six countries continue to be involved in illegal activities and are not always investigated for these activities. In Guatemala, for example, State reported that there were credible allegations of the involvement of police officers in kidnappings, and that impunity for police who commit criminal offenses remained a problem. 
In Bolivia, State cited credible allegations that police were involved in abuses, including excessive force, extortion, and improper arrests. State also reported that investigations of these abuses were slow. In addition to supporting criminal investigations, ICITAP also has provided assistance in police management, accountability, and operations. This assistance has included training in police administration and management and training to strengthen internal oversight. In Bolivia, ICITAP has supported a new disciplinary code and an Office of Professional Responsibility. ICITAP also has supported curriculum improvements for Bolivia's 4-year, university-level police academy. In addition, ICITAP has provided technical assistance to draft a new police organizational law that would decentralize operational and administrative decision-making authority and assign resources to operational units, rather than through a centrally controlled budget. To date, this legislation has not been passed into law. Among the countries we visited, ICITAP's assistance appears to have had the greatest impact in El Salvador. ICITAP has helped El Salvador's National Civilian Police by developing a strategic plan, supporting the standardization and centralization of record keeping and reporting, and providing a management course to command-level officers. ICITAP also has supported development of the Police Academy since its inception in 1993 and has been able to scale back its assistance to the academy because Salvadorans are now managing its operations and teaching most of its courses. In addition, in an effort to address the country's serious crime problem, ICITAP has helped develop a new policing model, characterized by increased use of crime statistics and the deployment of police patrols with greater community visibility. Modeled on U.S. programs, this project seeks to establish a permanent and highly visible police presence in urban areas facing crime and involves greater community outreach. The national police have implemented such patrols in 174 of El Salvador's 262 municipalities, covering approximately 80 percent of the country's population. Police statistics show that certain crimes have been significantly reduced in areas where these patrols have been deployed. For example, these statistics show a 30 percent drop in overall crime, a 32 percent decrease in homicides, and a 25 percent drop in armed robberies. The program also is being coordinated with an ICITAP-supported "9-1-1" system that covers approximately 65 percent of the country's population. An additional factor related to implementation of police assistance is section 660 of the Foreign Assistance Act of 1961, which restricts the use of foreign assistance funds for training and financial support for police or other law enforcement forces of foreign governments. Specifically, the provision states that these funds may not be used "to provide training or advice, or provide any financial support, for police, prisons, or other law enforcement forces for any foreign government or any program of internal intelligence or surveillance on behalf of any foreign government." This prohibition was put in place in 1975 in response to human rights violations committed by nondemocratic regimes receiving USAID public safety assistance. USAID and the State Department have funded police assistance programs in Latin America, implemented by the Justice Department, under a series of exemptions that have subsequently been added to this provision. 
For example, an exemption allows for U.S. assistance to support police in the areas of investigative and forensic functions, the development of academic instruction, and programs to improve administrative and management capabilities. The Justice Department's program supporting community-oriented police patrols in El Salvador has been permitted under an additional exemption allowing assistance to strengthen civilian police authority and capability in postconflict countries. U.S. officials from the State and Justice Departments and USAID have told us that section 660 is a barrier to developing, or planning effectively for, a comprehensive, coordinated, and integrated justice sector assistance program that includes the police. Under the prohibition on law enforcement assistance, U.S. agencies may not be able to fully incorporate law enforcement organizations into their programs supporting justice sector reform. For example, a USAID official in Nicaragua stated that due to this restriction, the agency could not include the police in its human rights promotion programs or invite police officials to seminars and other forums where their participation was considered to be critical to a productive dialog on implementing justice sector reforms. These officials stated that U.S. assistance providers should be able to plan their rule of law assistance strategies on the basis of local country situations and not on whether an exemption from the law can be justified. For example, the USAID-funded assistance for community-oriented police patrols, implemented by the Justice Department, was scheduled to terminate in 2003 because USAID's General Counsel determined that the postconflict exemption permitting this program no longer applies in El Salvador. U.S. rule of law assistance to Latin America supports criminal justice reforms, increased access to justice, and police investigative and management capabilities, and U.S. assistance has had an impact in each area. Due to budgetary constraints and other implementation difficulties, judicial and law enforcement institutions in the six countries we visited continue to rely to a large degree on U.S. and other international assistance to implement judicial reforms and other projects. U.S. and Latin American officials we interviewed stated that criminal justice reform in these countries is likely to be a long-term process, and it will likely take a number of years before these reforms are fully institutionalized. It is thus unclear at this time whether the initial results of U.S. assistance will be sustained or expanded to have greater impact. However, if U.S.-supported reforms are to become sustainable and have a larger impact, it appears that a long-term U.S. commitment and presence in providing rule of law assistance in these countries will be necessary. The State Department indicated that the Executive Branch should develop and propose to the Congress new legislation on law enforcement assistance that could be used to modify section 660 of the Foreign Assistance Act, to provide a clear statement of authority for providing law enforcement assistance abroad. The Justice Department stated that it would work with the State Department and USAID to consider whether changes to section 660 would be appropriate. This could be an important step in providing the Congress with options when considering how to better provide police assistance abroad. 
Latin American governments have historically been highly centralized, and local governments have lacked authority and resources. In addition, the legislative branch of government has usually been weaker than the executive branch, and public sector corruption remains a serious problem. To address these conditions, U.S. assistance activities, implemented primarily by USAID, have focused on (1) strengthening legislatures by improving their planning, analytical, and citizen outreach capabilities; (2) improving democratic local governance by building the administrative capabilities of municipalities and promoting effective decentralization of government functions; and (3) combating corruption by raising citizen awareness of this problem and establishing laws, regulations, and internal control structures to enhance government accountability. Overall, we found that U.S. governance assistance has enabled all six countries to develop and adopt reforms to make government institutions more effective, accountable, and responsive to the needs of the people. Despite the initially positive results, the sustainability and scope of many of these programs are uncertain because of inconsistent political support and a lack of resources. USAID's legislative strengthening programs have aimed to improve legislative planning and infrastructure, enhance legislative analytical and technical capabilities, and increase citizen knowledge of and input into congressional activities. As shown in table 3, U.S. legislative strengthening assistance has been provided to five of the six countries we visited, starting in the early 1990s and continuing off and on to the present. USAID has provided support to help legislatures function more effectively and professionally by improving their planning and infrastructure in all five countries where there are or have been legislative strengthening programs. USAID has generally done this by supporting the formation of modernization committees, which have developed plans to improve legislative infrastructure and processes and to encourage reform. In Nicaragua, modernization committee projects included upgrading the voting system, strengthening the legislature's budget oversight capabilities, and creating a Web site to publicize legislative information. The Web site received 35,000 hits within the first 6 months that it was in operation. In El Salvador, the legislature developed a master plan for modernization that has helped to facilitate a consensus across political lines regarding public participation in the legislative process. In addition, USAID's efforts to upgrade legislative infrastructure helped create more professional and transparent working conditions. In El Salvador, semiprivate offices were constructed for all legislators, thereby enabling some members to work more professionally and some to increase the number of constituents they met with. In Nicaragua, according to USAID officials, a conference room for the National Assembly was equipped, and an electronic voting board was also provided to display and record individual members' votes. However, not all of these modernization committees are functioning today, and the infrastructure improvements have not always been well maintained. According to USAID and host government officials, there have been problems in three of the five countries where USAID has had legislative strengthening programs. 
In Nicaragua, former members of the modernization committee reported a decrease in the committee's influence since the 2000 elections and noted that the committee no longer has the administrative or political support of the legislature. Also, although the Nicaraguan legislature invested its own funds to upgrade the electronic voting board, the technician responsible for operating it told us that he no longer had adequate funds to maintain or improve it. Lastly, the conference room that USAID had helped to equip in Nicaragua is now being used solely by one party. In Bolivia, the modernization committee no longer functions. In Guatemala, after the 1999 elections, the new majority party cut back staffing of the modernization program, thereby causing the program's offices to decrease their operations. USAID has supported efforts to establish and strengthen analytical capabilities in three of the five countries that have legislative strengthening programs, thereby enabling them to develop laws and regulations in a more informed fashion and to improve their oversight of the executive branch. In Bolivia, USAID helped establish a congressional research center and a budget office to analyze the executive branch's proposed budget. This office identified approximately $43 million in errors in 1995. In Guatemala, assistance was provided to the Unit for Technical Support, which produced about 150 studies. According to the former manager of this unit, legislators now consider such reports necessary before presenting a proposal to the legislature. In El Salvador, a unit was created to provide analytical studies to legislators, staff, and committees. In two of these countries, USAID's efforts to provide analytical support to legislatures have faced challenges due to changes in political support. In Bolivia, despite several years of positive impact, after the 1997 elections legislative branch institutions that USAID had supported, including the congressional research center, lost credibility as neutral entities and became ineffective, according to a 2001 USAID-sponsored evaluation. In Guatemala, after the 1999 elections, the new majority party cut 18 of 24 legislative technician positions, drastically curtailing the legislature's analytical capacity. USAID also has assisted legislatures in increasing their constituent outreach in all five countries with legislative strengthening programs and has worked to provide more opportunities for citizens to have input into congressional activities. In Peru, USAID's Office of Transition Initiatives provided assistance for four congressional committees to hold public hearings. To inform people about the congress, USAID supported seminars and a play that was performed in 45 public high schools in Lima, Peru. In El Salvador, three legislative outreach offices were built outside the capital. At one center we visited, representatives from three different political parties shared these offices. They stated that the presence of these offices has helped decrease partisanship. According to a USAID official, the legislature has been actively involved in setting program priorities and has paid for the outreach offices' recurring costs. In Guatemala, three constituent outreach offices were established that implemented civic education initiatives, organized public hearings, and handled constituent casework. In two countries, these outreach activities have not been sustained, owing to a lack of consistent political support and in some cases politicization of the project. 
The head of the Nicaraguan Office of Citizen Participation, which USAID helped to create, noted that her office has received little financial or political support from the legislature. In visiting the office, we observed that its location on the 10th floor of an office building in central Managua makes it less accessible and visible to citizens outside the capital (see fig. 11). USAID ended its legislative strengthening program in Guatemala after the 1999 elections when the constituent outreach office staff came under undue political pressure. Today the majority party runs the offices, and opposition legislators are not permitted to work there, according to USAID officials. Some of USAID's programs have helped leverage funding from other major donors for legislative strengthening programs. The Salvadoran congressional modernization plan helped the legislature secure a loan from the IDB to support new information systems and infrastructure. The current USAID program in Bolivia encouraged investment from the World Bank, the IDB, and two private German foundations. In Peru, USAID expects that its project will complement a planned $10 million IDB technical assistance project. U.S. programs to strengthen local governance, primarily implemented by USAID and to a lesser extent by the Inter-American Foundation, aim to increase the effectiveness, responsiveness, and accountability of municipal governments and to enhance citizen participation in local government. USAID's local governance assistance has focused on strengthening municipal administrative, budgetary, and outreach capabilities; increasing citizen participation in local government; and supporting national-level policy reform and institutions for strengthening local governments. As shown in table 4, local governance assistance has been provided in all of the six countries we visited, with starting dates ranging from 1993 in El Salvador to 2001 in Peru and Colombia. In the late 1980s, Latin American governments began to make efforts to decentralize their countries both fiscally and politically. Countries are undertaking various decentralization activities, including developing a nationwide decentralization program, addressing issues of financial transfers and taxation, and focusing on municipal accountability and citizen oversight. With limited funding for local government, USAID has focused on a small number of municipalities in each country, with the general aim that the host country government, other donors, and other municipalities would replicate the programs' concepts. For example, in El Salvador, USAID is currently assisting 28 of 262 municipalities. In Colombia, USAID's democratic local governance program, run by several contractors, is working intensively in 62 of 1,080 municipalities and is also providing training to members of 226 city councils. USAID's local governance programs have helped many target municipalities operate more effectively and responsively (see fig. 12). In El Salvador, target municipalities increased financial resources by 72 percent between 1996 and 1999 by improving tax records and tax collection. Colombia's program aims to help increase local tax resources by improving local land records and also partially funds small-scale social infrastructure projects, such as the installation of water meters designed to generate revenue to make local water systems sustainable. 
Mayors we met with noted that these projects helped enhance local government planning, budgeting, project design, implementation, and evaluation. In Bolivia, according to yearly surveys done by a USAID contractor between 1998 and 2000, citizens in USAID-assisted municipalities rated their local governments more highly on responsiveness than citizens in other municipalities. In addition, USAID programs have helped to increase citizen participation in, and oversight of, municipal activities in target municipalities. Some of the municipal oversight activities are closely tied in with USAID's anticorruption programs. In Guatemala, support was provided for municipalities' efforts to disseminate information and organize public meetings to develop municipal plans and budgets. In Colombia, according to USAID data, more than 4,400 citizens have participated in the development, implementation, and oversight of 67 municipal-level social infrastructure projects. On a smaller scale, the work of the Inter-American Foundation also supports local governance through small-scale, grassroots-driven projects that often increase and strengthen participation by citizens and civil society organizations (see fig. 13). For example, in Peru, one Inter-American Foundation grantee organization described how it helped raise women's awareness of their rights, resulting in increased participation by women in municipal affairs. The Inter-American Foundation also funded a Bolivian foundation that helped increase the involvement of small-scale rural enterprises, cooperatives, and mayors in defining a national poverty-reduction strategy. From 1997 to 2001, the Inter-American Foundation estimates it funded $34.3 million of projects that had some effect on local governance in the six countries we visited. According to our observations and discussions with USAID and contractor staff, the impact of USAID local governance programs has extended beyond target municipalities mainly in Bolivia and El Salvador. In Bolivia, where the government has accepted USAID's approach to working with local governments to replicate programs, impact has been broad. According to USAID, 175 of 314 municipalities in Bolivia now employ some of these participatory methods. Subnational associations of municipalities and departmental municipal associations have also been trained to replicate aspects of USAID's programs. An Internet portal has also been funded that would enable municipalities to share best practices, have more transparent procurement, expand their financial base, and pursue advocacy and networking. The Salvadoran government has made participatory municipal planning a prerequisite for some government disbursements. The Salvadoran National Municipal Association estimated that in 2001, 160 of 262 municipalities used some form of citizen participation. In Bolivia and El Salvador, USAID has helped create materials that provide guidelines for municipalities and implementers on strengthening local governance and increasing citizen participation. In El Salvador, a manual on the basic criteria for participatory municipal planning was developed, in consultation with other donors. The Salvadoran government has begun to use this manual to measure progress in participation and transparency in all municipalities. In Bolivia, the IDB has funded the publication of manuals, originally developed with USAID support, that were made available to all 314 municipalities. 
While these manuals have helped increase the scope and sustainability of USAID's programs in individual countries, they have not been widely or systematically shared among USAID missions where there are local governance programs. According to USAID officials in Washington, D.C., there is no central repository for these materials, which are usually produced by contractors. USAID mission staff we spoke with agreed that materials developed by USAID and its contractors are often not shared across missions. Other donors have also helped replicate USAID's projects and expand their impact beyond target municipalities. A municipal-level integrated financial management system implemented in 4 municipalities in El Salvador will be extended, through a $2 million IDB project, to at least 20 additional municipalities. In addition, the IDB and the Salvadoran government are planning a joint $2 million project to replicate USAID's methodology of linking participatory development plans to municipal budget support. In Bolivia, USAID, a German foundation, and the Dutch Embassy have adopted a common methodology for municipal strengthening. USAID's efforts to assist target municipalities have been constrained by limited municipal resources and skills and by staff turnover. Although these conditions exist in other countries, they were most evident in our visits to Nicaragua and Guatemala. According to USAID officials, Nicaraguan municipalities do not have the authority to set local taxes, which have been lowered in some cases by the national government to attract foreign investment. Representatives from a Nicaraguan institute that works with municipalities expressed concern that local officials may not possess the appropriate skills to handle increased governance responsibilities. USAID officials in Nicaragua and contractor staff in Guatemala said municipal staff turnover has exacerbated this problem, as newly elected mayors have fired existing staff and brought in less experienced personnel. Municipal staff in Guatemala also stated that they were frustrated by their lack of resources, noting that it was difficult to put into practice USAID's method of participatory planning since there were few funds to implement projects. At the national level in all six countries, USAID has helped develop policies and institutions that support municipalities, often by working with national municipal associations. In Peru, policy advice has been given to the government for a nationwide decentralization program scheduled to begin in 2003. As part of this support, the Prime Minister's office reviewed local experiences with decentralization and a congressional committee held public hearings to obtain input into its draft decentralization law. In Guatemala, USAID supported national-level working groups on municipal indebtedness and tax codes. In Colombia, USAID is helping the Colombian Federation of Municipalities organize meetings among mayors and local leaders at the regional level to discuss areas for policy reform. In Nicaragua, the National Association of Municipalities, which advises and advocates for municipalities, was established and strengthened. However, USAID's work in this area has been affected by the level of political support for decentralization, which varies by country. In Nicaragua, municipal officials and representatives of the national municipal association noted that the past government had provided little political or financial support to municipalities. 
Subsequent to our visit, the current government passed three decentralization laws in May 2002, according to a USAID official. The lack of a municipal civil service law, for example, has posed obstacles to efforts to train local officials. Although the Bolivian government’s support for decentralization decreased after the 1997 elections, USAID continued to work and have an impact at the municipal level because the key decentralization law was already in place. In El Salvador, USAID’s program has been assisted by the government’s commitment to implement a supportive policy agenda. USAID anticorruption assistance has focused on supporting reforms in anticorruption legislation and regulations, introducing programs to make national and municipal government institutions more transparent and accountable, and fostering citizen awareness and oversight. As shown in table 5, U.S. anticorruption assistance has been provided in five of the six countries we visited, beginning with Peru in 1995. USAID’s anticorruption activities have helped countries develop anticorruption legislation and regulations. In Nicaragua, for example, USAID provided recommendations for the 2001 National Budget Law and worked with the National Assembly’s Anticorruption Commission to promote civil service reform. In both Colombia and El Salvador, USAID has supported measures to increase the accountability of public servants, including the development of a code of ethics. USAID also has helped government institutions take steps to become more transparent and accountable. In Nicaragua, USAID collaborated with other donors to help develop an integrated financial management system. This system, when fully operational, will enable the Ministry of Finance to track the spending of 13 government ministries, the National Assembly, and the courts (see fig. 14). In Colombia, the government adopted regulations that will require 3,000 national and subnational entities to follow standardized internal control processes that were recommended by USAID. USAID-supported anticorruption programs have also helped citizens become more aware and active regarding corruption issues. In Colombia, an anticorruption campaign reached 23 million people through radio and television spots. According to a study by a USAID contractor, Nicaraguans have become better informed about corruption issues as a result of a USAID- supported national anticorruption awareness campaign. Municipal-level public oversight in El Salvador and Colombia has increased as a result of local citizen watchdog groups that have been supported by USAID. Despite some initial success, the broader impact and sustainability of USAID’s anticorruption programs are still unclear. Transparency International, which is an international nongovernmental organization that focuses on combating corruption, concurs that although there have been some positive developments in the region, the results of anticorruption programs have been modest so far. According to our observations and discussions with USAID and host country officials, USAID’s projects have been hindered by politicization and a lack of consistent political support. In Nicaragua, for example, the Comptroller General’s Office, which USAID had been supporting with technical assistance and training, was reorganized. Now, a committee of political appointees runs it, impairing its objectivity. 
In addition, according to a high-ranking Nicaraguan official, in 2001 the Ministry of Finance fired experienced staff that had been trained as part of the USAID- and World Bank-supported integrated financial management system, resulting in lost institutional memory and expertise. In Peru, the Comptroller General’s Office has been unable to fully implement its oversight plans owing to a lack of political or financial support from the government, according to USAID and Peruvian officials. Finally, the systemic nature of corruption in Latin America, combined with public skepticism about anticorruption efforts, poses a major challenge for USAID’s programs. Although the political leaders of countries such as Colombia and Nicaragua have stated that combating corruption is a high priority, both USAID and the host countries are in the relatively early stages of addressing a broad and deeply rooted problem in the region. Transparency International notes that despite some progress, corruption remains widespread in the region, and the credibility of institutions is low. According to a 2002 study focusing on four Latin American countries, higher levels of corruption are significantly associated with lower levels of support for the political system. This is the case in El Salvador, according to a 1999 study, where Salvadorans who were victims of corruption demonstrated less support for the political system than those who were not. In Nicaragua, public sector corruption is endemic, according to USAID, and the public has little confidence in many government institutions, in part because of this corruption. According to a 2001 survey by a USAID anticorruption contractor, more than 70 percent of the Colombians surveyed considered corruption to be common in government institutions. A work plan prepared by the same USAID contractor cited a recent World Bank survey indicating that the same percentage of respondents considered the Colombian Congress to be corrupt or very corrupt. According to this USAID contractor, widespread public skepticism exists regarding the national government’s effort to combat corruption. USAID has noted that this lack of confidence poses challenges to its work in Colombia. U.S. governance-related assistance programs have enabled the six countries we visited to take limited steps toward more effective, responsive, and accountable government institutions. In some cases, other donors have taken steps to replicate or expand USAID’s programs. At the same time, however, USAID’s governance programs have been challenged by inconsistent political will and resource constraints. In light of this modest progress and the continued obstacles to reform, it is unlikely that U.S. governance-related assistance will be able to produce sustainable results without ongoing, long-term involvement. Many Latin American countries have suffered from decades of authoritarian rule and internal conflict. Guatemala, Peru, and Colombia in particular have endured terrorism, massacres, and forced disappearances. While the human rights situation in Peru and Guatemala has slowly improved over the last few years, the situation in Colombia has deteriorated even further. U.S. human rights assistance to Latin America has supported efforts to foster greater awareness of, and respect for, human rights. From 1992 to 2002, Guatemala, Peru, and Colombia were among the largest recipients of USAID human rights funding in Latin America. U.S. 
assistance efforts to improve the human rights situation in these countries have included technical assistance for the creation of government agencies that address human rights problems, training programs, education programs, and the provision of protection for threatened individuals. For the most part, the impact of these projects has been positive, but they are limited in scope and hindered by a lack of resources. Often, political and logistical problems must be resolved for these programs to work better. Despite some improvements in governments’ respect for human rights in these countries, serious problems persist. In some cases, longer term project results may be difficult for host governments to sustain owing to high recurring costs. As shown in table 6, the U.S. government has provided human rights assistance over the past decade to Colombia, Guatemala, and Peru. U.S. human rights assistance has had a positive impact in the three countries we reviewed that have a current human rights program. In Guatemala, Peru, and Colombia, human rights assistance has addressed past abuses, protected threatened individuals, and prevented future abuses. These efforts have fostered an increased awareness among the citizenry as to what rights they have, and they have increased government accountability. Provided primarily by USAID, human rights assistance in these countries has focused on preventing future human rights abuses by promoting greater public awareness and mechanisms to address potential incidents; protecting human rights by providing physical, economic, and legal assistance to threatened individuals and communities; and responding to past abuses by supporting reconciliation commissions as well as the investigation and prosecution of human rights violations. USAID assistance programs have served to foster greater citizen awareness of human rights and have provided mechanisms for government action in support of human rights. For example, in Colombia, USAID has supported the creation of a national information network, called the “Early Warning System,” for citizens, nongovernmental organizations, and local authorities to report signs of impending massacres or other human rights violations in their communities by any of the irregular armed groups involved in that country’s ongoing conflict. If a threat is deemed real, the military, police, a national social service organization, or all three will be alerted to take appropriate action. As of August 2002, USAID had provided $600,000 of a total planned investment of $3.1 million to support direct technical assistance and training for the network as well as to establish its central office. USAID also has helped establish 13 regional offices out of a planned 15, although the Early Warning System director said even more offices would be needed. According to its director, the Early Warning System has been publicized on the Internet and advertised on both television and radio to inform citizens about its existence. This project appears to have facilitated citizens’ ability to recognize and report potential human rights threats as well as allowed them to hold the government directly responsible for taking action. From June 2001 through August 2002, 150 alerts were issued, of which the military, the police, or both responded to 107. The Early Warning System director estimates that this response has saved 90,000 people from being victimized, although no actual results indicators have been developed.
Although the Early Warning System is a unique tool for preventing large-scale human rights violations and has great potential for replication, coordination problems could hinder its proper implementation and ultimate impact. The director admitted that smooth communication between the regional and central offices can be problematic on the weekends, particularly Sundays, when the central office is not staffed. The system does not appear to have adequate backup communications methods and at times relies on one cell phone to ensure that alerts are transmitted to the appropriate authorities. Furthermore, government authorities have not always responded consistently to alerts and have failed to avert major human rights violations. The U.S. government also has supported the creation of protection programs for threatened citizens in Colombia. The Justice Department supports both a witness and a judicial protection program. Both of these programs place special emphasis on operational security and seek to ensure safe participation in judicial proceedings for witnesses, judges, investigators, and prosecutors. USAID supports a separate protection program for human rights defenders. As of August 2002, USAID had helped protect 2,776 individuals from irregular armed groups. In response to lobbying from the human rights community, the Colombian government has expanded the target protected population to include criminal witnesses, union leaders, journalists, leftist party members, mayors (all 1,098 of Colombia’s mayors were threatened with kidnapping or death by the Revolutionary Armed Forces of Colombia if they did not resign in 2002), council members, and municipal human rights workers. In the 5-year period between 1997 and 2002, the Colombian government spent approximately $25 million on the project. Resources, however, are too limited to help all vulnerable groups of people or even to keep pace with the increasing demand for individual protection. Nevertheless, the program demonstrates that the Colombian government is taking some action to protect threatened citizens. USAID human rights programs also have fostered greater government responsiveness to allegations of past or ongoing human rights abuses. For example, the Human Rights Promoters Network operated by the Colombian government educates citizen leaders about their rights protected by law. These leaders are expected to promote greater human rights awareness by replicating the training in their own communities, particularly for those groups most vulnerable to human rights violations. USAID also has been instrumental in supporting the creation of Human Rights Ombudsman Offices in five of the six countries by providing technical assistance, office equipment, and salaried professionals. These offices address citizen complaints, investigate officials accused of human rights violations, and propose human rights legislation. The State Department has reported that, although the ombudsman offices in Colombia and Nicaragua provide a legal channel for citizen complaints, funding problems have undermined their sustainability and credibility. Furthermore, the ombudsman has at times temporarily cast the entire office in a negative light, as in the case of Guatemala, where an ombudsman was accused of corruption. Various government officials, however, stated that, according to public opinion polls in Peru and Bolivia, the ombudsman’s office is one of the most highly respected public organizations.
In Guatemala, USAID helped the Attorney General’s Office design the first Victims Assistance Office in Latin America in 1997, staffed with full-time doctors, nurses, social workers, and lawyers to provide aid to victims of crime and gather evidence for potential prosecution (see fig. 15). Since then, each of Guatemala’s 23 departments has established at least one such office. USAID human rights programs have also fostered greater justice and resolution for victims and their families. For example, the Foundation for Anthropological Forensics of Guatemala, with funding from USAID, has been carrying out exhumations of clandestine cemeteries created during Guatemala’s 34-year civil war (see fig. 16). These efforts have helped to prove that massacres occurred, put questions about loved ones to rest, and aided in national reconciliation efforts. Peru’s Truth and Reconciliation Commission is carrying out exhumation efforts with similar goals and also is investigating culpability for atrocities. One of the commissioners with whom we met stated that U.S. assistance has been critical for the functioning of the commission, keeping it in operation when the Peruvian government was delayed in providing promised funding. The commission’s work is expected to culminate in a July 2003 report that will make recommendations for government reparations. Finally, the Justice Department has also worked to achieve justice and resolution for victims of human rights violations in Colombia. The department has trained special units of prosecutors and investigators to pursue major human rights cases and high-impact crimes, such as massacres, bombings, and kidnappings, in the criminal justice system. From August 2001 to August 2002, special units operating out of eight cities prosecuted 167 cases against irregular armed groups, including high-profile cases such as the assassination attempt on then-presidential candidate Alvaro Uribe in 2002 and various massacres across the country (see fig. 17). According to the Justice Department, it plans to help the Colombian government expand the number and size of these units in fiscal years 2003 and 2004. According to the State Department’s most recent human rights reports, although government respect for human rights has improved in some cases, serious problems still remain. In Peru, State reports that in recent years the government has demonstrated greater respect for human rights advocates and has generally improved its relationship with civil society. In Guatemala, State reports that the government generally respects the human rights of its citizens, but that its willingness and ability to prosecute and convict human rights violators are seriously limited and that the police and military may be involved in illegal executions. In Colombia, the government’s human rights record remained poor, according to State; there were continued efforts to improve the legal framework and institutional mechanisms for protecting human rights, but implementation lagged, and serious problems remained in many areas. For example, members of the police and armed forces have committed serious human rights abuses and have collaborated with paramilitary insurgents in doing so, but they have rarely been brought to justice. Government security forces also often failed to take action to prevent paramilitary attacks, according to the State Department report. The long-term outlook for many U.S. human rights assistance projects differs from that of most of the other programs we reviewed.
Some human rights efforts that the United States is supporting, such as Peru’s Truth Commission, are short term and are projected to end on a specific date. Other projects, such as assistance to Colombia’s internally displaced persons, are fundamentally humanitarian in nature and may require outside support for as long as there is internal conflict. Funding for some longer term projects, however, is questionable owing to potentially high recurring costs. For example, the Colombian human rights units trained by the Justice Department still have a very limited national presence and depend on U.S. support to update and expand their training and equipment. It is not clear whether the Colombian government will expand these units on a national basis. The United States has provided Colombia, Guatemala, and Peru with some important tools to help address their human rights problems. Nonetheless, human rights remain a major concern in Colombia and Guatemala. Given the magnitude and political complexity of these problems and the limited scope of U.S. assistance, the tools that the United States has provided are likely to have only a marginal impact on them. Over the last two decades, many Latin American countries have transitioned to democracy, and most countries in the region have held elections regularly. Although U.S. election-related assistance has supported efforts that have contributed to free and fair elections in the six countries we reviewed, most of this assistance has gone to three of these countries—Nicaragua, Peru, and El Salvador—to help them improve electoral institutions and enhance voter access. U.S. officials noted that of these three countries, only Nicaragua is likely to require significant international support before its next major election. The United States has been the largest donor of election-related assistance in many of the six countries we visited, and USAID has provided the bulk of this aid, almost $66 million, during fiscal years 1990 through 2002. Most of this assistance went to Nicaragua ($27 million), Peru ($20 million), and El Salvador ($13 million). The State Department, the National Endowment for Democracy, the National Democratic Institute for International Affairs, and the International Republican Institute provided smaller amounts of additional election assistance to some of these countries. The last two organizations have also used USAID election funds in some of these countries, according to representatives from these institutions. As shown in table 7, USAID provided electoral assistance to all six of the countries visited, starting in 1990 and continuing off and on to the present. Overall, U.S. election assistance activities have focused on improving election administration by building the institutional capacity of electoral authorities; enhancing voter access by improving voter registration and education and supporting electoral reform; and legitimizing election results by supporting electoral observation by domestic and international groups. USAID also has recently helped improve election administration in Peru and Nicaragua by strengthening the capabilities of electoral authorities. In Peru, USAID supported staff training, technical assistance, election planning, logistics, information systems, and transmission of results by providing almost $3.3 million in assistance before the 2001 national elections. The agency also provided support at a lower level to help run Peru’s 2002 regional and local elections.
In Nicaragua, USAID has provided similar types of election administration support since 1990, including more than $1.8 million to the electoral authority for administrative enhancements in planning, logistics, information technology, and transmission of results before the 2001 national elections. U.S. assistance also has helped enhance voter access to the electoral system by improving voter registration and education in El Salvador, Nicaragua, Guatemala, and Peru. In El Salvador, according to USAID officials, the agency supported the establishment of civil and voter registries and helped issue 937,000 single identity documents, out of an expected total of 3.2 million documents, which will be used as official voter identification in future elections. On the basis of an electoral reform enacted with USAID support, the Salvadoran electoral authority plans to use the new voter registry to assign voters to polling stations closer to their residences for the 2004 presidential elections, thereby further improving voter access. In Nicaragua, USAID also provided support for registration efforts before the 2001 elections. This assistance helped about 150,000 citizens obtain voting credentials, according to USAID. To support Guatemala’s 2003 elections, USAID, through OAS, is providing $750,000 in assistance to fund voter registration activities to increase the population’s access to the electoral system. In Peru, USAID funded voter-training activities conducted by nongovernmental groups before the 2001 national elections and the 2002 regional elections. In Peru, El Salvador, and Guatemala, U.S. election-related assistance also has supported electoral reform efforts to improve voter access, with limited success. This assistance has focused on enhancing the rules and procedures governing the electoral system in order to improve the political participation of the population. In Peru, USAID provided support for electoral reforms that were proposed following the 2001 national elections, but these reforms have not yet been enacted. In El Salvador and Guatemala, following the signing of those countries’ Peace Accords in 1992 and 1996, respectively, the agency supported efforts to improve electoral rules and procedures and increase political participation of the population, including participation of women, indigenous groups, and rural populations. In El Salvador, USAID supported the drafting of four proposals to reform political parties, the electoral authority, electoral procedures, and proportional representation. In Guatemala, the agency supported efforts to develop an electoral and political parties law and to facilitate public discussion of various other proposals under consideration. These reforms are still being considered in the Salvadoran and Guatemalan legislatures. U.S. assistance has recently helped legitimize election results by supporting election observation in Peru, Nicaragua, and Colombia by domestic and international groups. In Peru’s 2001 elections, for instance, USAID provided more than $2.1 million to field election observers from the Peruvian Ombudsman’s Office; the Organization of American States; the National Democratic Institute; the Carter Center; and Transparencia, which is a local nongovernmental group (see fig. 18). USAID also provided a similar amount to fund international and domestic observers of Nicaragua’s 2001 elections and $325,000 to support OAS observers of Colombia’s 2002 elections.
The State Department has noted in its human rights reports, on the basis of reports by domestic and international observation groups, that elections in the six countries have been generally free and fair, with the exception of the seriously flawed and controversial 2000 Peruvian national elections. This pattern of free and fair elections is consistent with the elections held in other countries in the region since many of these countries started their transition to democracy almost two decades ago. Looking toward the future, USAID officials stated that Peru and El Salvador might require significantly less international assistance to run upcoming elections. USAID officials highlighted that these countries have enhanced their institutional capabilities to run elections, as demonstrated by the widely recognized legitimacy of their recent elections and the decreasing international support required by their electoral authorities for conducting elections. These officials noted that USAID does not plan to fund any electoral activities in Peru and, after the 2003 elections, in El Salvador (see fig. 19). On the other hand, Nicaragua, which has received the largest amount of U.S. election assistance, will likely require significant international aid to run its next major election, according to USAID officials. These officials noted that the Nicaraguan electoral authority, despite efforts to improve it, still faces major financial, planning, and organizational problems. For example, this electoral authority is still highly politicized and exhibits serious institutional and managerial weaknesses that compromise its ability to run elections. Also, Nicaragua’s civil and voter registries are outdated, and many voter documents used in the 2001 national election were temporary or will expire soon, leaving the challenge of registering a large number of voters before the next election. In their final 2001 election observation reports, the Carter Center and the International Republican Institute noted that, despite having held a free and fair election, Nicaragua still has important shortcomings in its electoral system, particularly in election administration and voter access. U.S. election assistance has helped all six countries we visited realize a fundamental component of democracy—free and fair elections. While continued improvements will be needed to achieve wider participation and greater efficiency in election administration, particularly in Nicaragua, basic capabilities are in place in these countries to enable them to continue to hold free and fair elections into the future. Many organizations and entities are involved in providing democracy assistance in the six countries we reviewed, including U.S. government agencies, other multilateral and bilateral donors, and nongovernmental organizations. Effective coordination and cooperation among these players are critical for achieving meaningful, long-term results from assistance efforts. U.S. agencies have not always managed their programs in a way that would leverage the contributions from all of these organizations, particularly other major donors, and maximize the impact and sustainability of U.S.-funded programs. Assistance efforts are not always well coordinated among the agencies, and strategic plans have not defined overarching goals and the roles that key U.S. agencies will play in these efforts or ways to link these efforts with those of other donors to help ensure that results are sustainable.
Furthermore, evaluation of program results and sharing of lessons learned have been limited among U.S. agencies and implementers across the countries where this assistance is provided. Although a wide variety of U.S. government agencies and international donors provide democracy assistance, coordination of this assistance was inconsistent in the six countries we visited. We found that those organizations supporting democratic institutions did not always cooperate in a way that would maximize the impact and sustainability of their efforts. As a result, the programs they implemented were often fragmented and not mutually supportive and failed to overcome common financial and political obstacles. U.S. government agencies have not outlined a long-term, strategic approach to this assistance that considers all of the major parties and available resources and information. The Government Performance and Results Act of 1993 (the Results Act) requires U.S. government agencies to identify their strategic goals and develop annual plans for achieving them. Further, as we have previously reported in our work relating to this act, such plans should identify how similar programs conducted by other agencies will be coordinated to ensure that goals are consistent and, as appropriate, that program efforts are mutually reinforcing. The annual performance plans prepared by the State Department and USAID in accordance with the Results Act both identify promoting democracy and human rights abroad as agency strategic goals. However, neither USAID’s nor State’s plans nor the subordinate regional or country-level planning documents we reviewed specifically address the role of other U.S. agencies and donors in ensuring that U.S.-funded democracy projects are well coordinated and leverage domestic and international resources. With few exceptions, these planning documents did not take into account the unique resources that each of the various U.S. agencies has to offer and the role each could play over what will be a long-term effort to help countries achieve and institutionalize democratic reforms. Although some documents mentioned that other agencies would be involved in the assistance effort, the nature or duration of that involvement was not discussed in detail. The relationship among USAID and the State and Justice Departments has frequently been difficult when it comes to rule of law programs, which has hindered long-term joint planning in that area. As we noted in a 1999 report, interagency coordination on rule of law assistance has been a long-standing problem. At that time, the Chairman of the House Committee on International Relations had expressed the concern that, because funds were provided through so many channels, rule of law programs had become inefficient and uncoordinated. Little progress has been made to resolve this problem. According to U.S. officials with whom we spoke, the relationship among implementing agencies is often still characterized more by competition than cooperation and has led to fragmented programs that are not always mutually supportive in achieving common goals. For example, in Bolivia, poor communication and disagreement among these agencies on their respective roles have disrupted efforts to assist the development of that country’s national police by casting the program’s staffing and funding into uncertainty.
Unresolved coordination issues among these agencies have precluded efforts to establish a joint strategy on law enforcement development at either the regional or country-specific level. As a result, in the countries we visited, the agencies are often operating on parallel tracks and not developing programs that are closely coordinated and mutually supportive. Better coordination among these agencies could leverage the critical resources and comparative advantages that each offers to overcome obstacles. For example, while USAID has significant institutional experience designing and implementing development programs, the Justice Department has significant technical expertise in law enforcement and criminal investigations, and the State Department has diplomatic relationships and influence that can be helpful in resolving political impediments to reform. Other international donors have major efforts to promote democracy in the countries we visited, and two of the largest, the World Bank and the IDB, are funded in part by contributions from the U.S. government. However, the strategic plans and other related planning documents prepared by the State and Justice Departments and USAID included very little information on plans to cooperate with other major international donors in the six countries we reviewed. Some plans mentioned a few successful cooperative efforts in the past, but donor cooperation was not consistently discussed as an integral component of the U.S. government’s approach in any of the areas of democracy assistance we reviewed. We observed that donors working in closer coordination, with a common strategy and work plan, can make significant progress. In Bolivia, the U.S. and German governments embarked on a joint program to implement the new criminal code, each providing mutually supporting activities and financing. As a result of this effort, a large number of legal operators were trained on the code’s provisions, and the Bolivian government began implementing the code on schedule. Other examples of close coordination include the following: In Bolivia, USAID, a German foundation, and the government of the Netherlands have adopted a common methodology for municipal strengthening, expanding the impact of USAID’s initial contributions to additional municipalities. In El Salvador, the IDB is funding projects to extend a USAID-supported, municipal-level financial management system to additional localities. Donors and Latin American countries have been collaborating regionally on anticorruption activities since the early 1990s. For example, the Donors Consultative Group of the USAID-supported Americas’ Accountability/Anticorruption Project has helped to increase the number of anticorruption projects in the region, according to USAID. Other multilateral initiatives, such as the Inter-American Convention Against Corruption and ongoing United Nations negotiations for a global anticorruption convention, are also mobilizing states to focus on corruption. Such donor cooperation was not always the norm in the countries we visited, however, and donors often pursued parallel but not necessarily mutually supporting activities. Donor coordination was generally characterized by organizations keeping one another informed of the nature, progress, and location of their activities. Across the six countries, the U.S. government and other donors generally worked on different agendas in the area of judicial reform.
In Bolivia, for example, USAID and the World Bank divided their justice sector reform efforts between host government agencies using different approaches. The two organizations have helped the government develop two information systems—one to track criminal cases and one for civil cases. At the time of our visit in June 2002, neither system was being fully implemented on a national scale, and USAID officials were concerned about the future compatibility of these two systems. Pooling financial resources and political influence could enable donor organizations to overcome some political and financial obstacles that limit the impact and sustainability of assistance programs. The United States, with its on-the-ground presence and long-standing diplomatic relationships, can offer significant technical expertise and influence to help achieve political support. At the same time, the multilateral development banks, in particular, can offer significant, low-cost, long-term financing for host governments. If better coordinated, these resources could be combined to (1) leverage political support from host governments for mutually agreed-upon reform programs, (2) devise appropriate program designs, and (3) provide long-term financing that could help ensure that the programs are sustainable. Donor cooperation can be difficult for a number of political and cultural reasons. Donors may have different development priorities or policies that may not allow them to work on the same types of programs in some cases. U.S. government officials have also cited bureaucratic incompatibilities that have effectively limited the agencies’ ability to work closely together on certain projects. In one country we visited, the working relationship between USAID and a multilateral development bank has been difficult, according to a USAID mission official with whom we spoke. Overcoming some of these obstacles to closer cooperation may require a high-level commitment and impetus from the senior management of these organizations. U.S. agencies and their implementing contractors and grantees have not extensively compiled and shared information on program results. Many U.S. assistance programs have not been evaluated, and important democracy project information, such as materials, final reports, and evaluations, is not systematically made available to the large body of project implementers. The U.S. agencies implementing democracy assistance programs have not consistently evaluated the results of their activities. Our review of project documentation and our discussions with senior U.S. government officials at the State and Justice Departments and USAID indicate that limited efforts have been made to review project results over time to ensure that impact and sustainability have been achieved. In particular, officials from the State and Justice Departments stated that those agencies have conducted very little formal evaluation of law enforcement assistance. Although USAID has a more extensive process for assessing its activities, its efforts to evaluate democracy assistance have not been consistent. Although governance programs in Latin America, in particular legislative strengthening, have undergone considerable evaluation, we found relatively little formal evaluation of rule of law, human rights, and elections assistance.
The level of evaluation has varied geographically as well: While USAID sponsored a comprehensive democracy evaluation for Bolivia, it has not conducted similar studies for the other countries we visited. In 2002, USAID commissioned a private contractor to complete a broad study of the agency’s achievements in its rule of law programs around the world, including in many of the countries we visited. This recently completed study provides information on the nature and history of USAID rule of law programs in individual countries but was not meant to be an evaluation of these programs, according to a USAID official. Furthermore, the agencies have not consistently used available survey data to help evaluate the impact of their activities. In several of the countries we visited, a USAID contractor had been conducting regular “democratic values surveys” to gauge public opinion about recent and ongoing political and government reforms, many of which the United States has assisted. The mission in Bolivia has used the results of this survey as a source of data for monitoring, among other things, the impact of Bolivia’s decentralization activities; however, the other missions or embassies we visited did not consistently use these data as a tool for evaluating or monitoring the impact of U.S. assistance. Without systematic evaluations identifying lessons learned and best practices, agencies will have difficulty making informed decisions about a strategy to maximize impact and sustainability and planning for future efforts. For example, USAID and the State and Justice Departments are currently debating the U.S. government’s strategy for police assistance. Each agency has participated in police development programs, and officials from each agency stated that they are uniquely qualified to manage such programs in the future. Yet, none of these agencies has conducted a comprehensive evaluation of police assistance program results to inform the debate about how best to provide this assistance. Evaluations or other efforts to systematically compile lessons learned across countries could enable a more objective comparison of agency performance to identify the advantages of one approach over another and to inform a long-term interagency strategy for achieving various democracy assistance goals. USAID has not taken steps to pool the resources produced by U.S.-funded democracy program implementers, including international development firms, private voluntary organizations, and other nongovernmental organizations to help them achieve common and related goals more effectively and efficiently. USAID-funded contractors often used similar approaches to achieve democratic strengthening and reform in many of the countries we visited. For example, support for local governments often aimed to influence the broad policy framework in a country while directly assisting a relatively small number of target municipalities. However, we found little evidence that the project implementers in these countries had shared with each other the materials they had developed. For example, in several countries, USAID financed the printing of operational guidance for municipal officials, ranging from handbooks on countrywide criteria for governance to detailed, step-by-step manuals on ways to improve local public administration. The contractors and USAID officials stated that to their knowledge, these handbooks had not been systematically shared among USAID missions or contractors. 
Although mission officials and implementers told us they frequently shared information on an informal basis, the agency’s attempts to systematically compile information about democracy program implementation and results to establish an agencywide “institutional knowledge base” are incomplete. USAID has a very decentralized organizational structure, and, according to USAID officials, the agency has no central repository of implementation reports and other program documents that can be accessed by the various democracy program implementers to determine, among other things, which activities have been more successful than others. Although USAID maintains some documentation from its democracy programs, such as scopes of work for projects, on its intranet site, the agency does not compile contractors’ technical manuals and final reports with information on implementation and results. Such information could be very instrumental in identifying approaches that are most appropriate for replication, while avoiding the development of similar materials in different countries at additional expense. As we have previously reported, use of lessons learned is a principal component of an organizational culture committed to continuous improvement. Lessons learned mechanisms serve to communicate acquired knowledge more effectively and to ensure that beneficial information is factored into planning, work processes, and activities. Lessons learned provide a powerful method of sharing good ideas for improving work processes, program design and implementation, and cost-effectiveness. USAID mission directors and other agency officials stated that future assistance efforts would be more effective if they were designed on the basis of concrete information and lessons learned from similar programs in other countries. Local resources for sustaining democracy programs are difficult to mobilize given the serious economic problems in the countries we visited, and funding shortages were often cited by program implementers and beneficiaries as major obstacles to long-term program success. Therefore, it is crucial that the U.S. government and other international donors manage available international resources as efficiently as possible. Achieving greater impact and sustainability in democracy assistance projects may be more likely with a more strategic approach, including closer coordination and greater information sharing among U.S. agencies, international donors, and other program implementers. To ensure that U.S. assistance activities designed to support and strengthen democracies in Latin America have the maximum impact and sustainability, we recommend that the Secretary of State, the Attorney General, and the Administrator of USAID develop more comprehensive interagency strategic plans for democracy assistance at the regional and country levels, addressing how U.S. agencies will cooperate with each other and with other major donors to achieve greater impact and sustainability in democracy programs; establish a strategy for periodically evaluating democracy assistance projects that is consistent across agencies, countries, and types of programs; and establish a systematic mechanism to share information on development approaches, methods, materials, and results from all democracy assistance projects among U.S. agencies and implementers. We provided a draft of this report to the Departments of State and Justice, the U.S. Agency for International Development (USAID), and the Inter-American Foundation for their comment.
The Inter-American Foundation did not comment on this report. The comments of the State and Justice Departments and USAID, along with our responses to specific points, are reprinted in appendixes II, III, and IV, respectively. In general, the State and Justice Departments and USAID acknowledged that democracy assistance is a long-term challenge that requires host country commitment and support for reforms, and that U.S.-supported institutions and programs must ultimately be sustainable, as we discuss in this report. Overall, the agencies generally agreed with the thrust of our recommendations regarding how the management of program assistance could be improved. They also noted that in some cases, activities are either planned or under way that would address our recommendations. The State Department concurred with our recommendation that it work with other agencies to develop comprehensive strategic plans for democracy assistance at the regional and country levels. State agreed with our recommendation that democracy assistance programs should be evaluated but said that our recommendation was a “broad brush” approach that is not appropriate for the diversity of activities covered in the report. State said that it is taking steps with USAID and the Justice Department to improve evaluation, including recently agreeing to undertake joint evaluations of justice programs. Such actions appear to meet the intent of our recommendation. However, our recommendation is intended to establish a basis for periodic overall assessments of democracy programs as well as regular evaluations of specific components of democracy assistance, such as rule of law, governance, and elections. While the State Department agreed that it would be desirable to have better access to project information across the board, it noted that the recommendation goes too far in suggesting the need for a centralized record system containing all project materials. State also said that much useful information is currently shared among programs on an informal basis. Our recommendation is designed to address an important problem we identified in this report, namely that much information is currently not being shared among agencies or programs with similar goals, approaches, and methods. The thinking behind this recommendation is that the State Department and other agencies that fund and implement democracy assistance programs should maintain key program documents and evaluations, along with examples of materials used for core activities (e.g., training manuals), so that groups implementing similar programs can benefit from lessons learned. Given the advances in Web-based technology as a way of sharing information, we believe this recommendation is not unreasonable. The State Department also provided technical comments, which we have incorporated in this report, where appropriate. The Justice Department endorsed our recommendation for better coordination and planning among State, USAID, and Justice; agreed that objective, regularized evaluation of assistance programs is needed to consistently obtain useful information on program outcomes; and supported the recommendation that agencies involved in democracy assistance should establish effective information-sharing mechanisms. The Justice Department also provided technical comments, which we have incorporated in this report, where appropriate. USAID also agreed with our recommendations.
Regarding our recommendation on strategic planning, USAID said that it participates in a number of planning activities but that such planning systems can always be upgraded. It also agreed that periodic evaluations of program outcomes and results are important, noting that evaluating democracy programs is a challenge made difficult by the complexities and subtleties of local political situations that influence democracy program implementation and outcomes. USAID also agreed with our recommendation that agencies need to do a better job of sharing information on development methods, approaches, and materials, noting that a new bureau within the agency should respond to these concerns. USAID also provided technical comments, which we have incorporated throughout this report, where appropriate. The State and Justice Departments both commented on our discussion of a provision of the Foreign Assistance Act of 1961 that restricts the use of foreign assistance funds for training and financial support for police and other law enforcement forces of foreign governments (section 660). In its comments, State said that the Executive Branch should develop and propose to the Congress new legislation on law enforcement assistance, stating that the Executive Branch needs a clear statement of its authority to provide law enforcement assistance abroad, coupled with whatever specific prohibitions the Congress may wish to consider. The Justice Department stated that it is concerned that section 660 may in some instances adversely impact long-range planning and the development of broad-based, practical police assistance programs. The Justice Department also indicated that it will work with the State Department and USAID to consider whether changes to section 660 would be appropriate. We believe the approach suggested by the State and Justice Departments could be an important and useful step in providing options for the Congress to consider regarding potential amendments to section 660.
Supporting democracy abroad is a major U.S. foreign policy objective. To better understand how this assistance has been implemented in Latin America, GAO was asked to review programs in six countries--Bolivia, Colombia, El Salvador, Guatemala, Nicaragua, and Peru--that have been of particular importance to U.S. interests in Central and South America. Between fiscal years 1992 and 2002, U.S. agencies have funded more than $580 million in democracy-related programs in these countries. This report discusses the impact of and factors affecting this assistance and the overarching management issues that have affected program planning and implementation. U.S. programs to strengthen democratic institutions in six Latin American countries have had a modest impact to date. These programs have primarily focused on promoting (1) the rule of law, (2) governance, (3) human rights, and (4) elections. U.S. assistance has helped bring about important reforms in criminal justice in five of the six countries, improved transparency and accountability of some government functions, increased attention to human rights, and supported elections that observation groups have considered free and fair. However, host governments have not always sustained these reforms by providing the needed political, financial, and human capital. For example, host countries often did not support training programs, computer systems, or equipment after U.S. funding ended. In other cases, U.S.-supported programs were limited and targeted, and countries have not adopted these programs on a national scale. Since host country resources for sustaining democracy programs are difficult to mobilize, it is crucial that the U.S. government and other donors manage available international resources as efficiently as possible for maximum impact and sustainability. Several management issues have affected democracy assistance programs. Poor coordination among the key U.S. agencies has been a long-standing management problem, and cooperation with other foreign donors has been limited. U.S. agencies' strategic plans do not outline how these agencies will overcome coordination problems and cooperate with other foreign donors on program planning and implementation to maximize scarce resources. Also, U.S. agencies have not consistently evaluated program results or shared lessons learned from completed projects, thus missing opportunities to enhance the outcomes of their programs.
By “home care,” we mean not only health care services delivered in the home, but also assistance with basic and instrumental activities of daily living, such as eating, dressing, bathing, toileting, transferring from bed to chair, shopping, cooking, and laundry. These services are provided by a variety of organizations, including temporary employment firms, nurse registries, and home health agencies. The types of individual workers providing home care services are equally varied, including home health aides, homemakers, and choreworkers, as well as professional social workers, occupational therapists, and physical therapists. However, it is believed that 70 to 80 percent of all paid long-term home care is provided by workers variously known as home health aides, personal care aides, personal care attendants, or homemakers. The federal government finances home care services through several means, including Medicare, Medicaid, Veterans’ Administration programs, Older Americans Act funds, Social Services Block Grants, and various demonstration and waiver programs. However, the broadest federal programs for supporting these services are generally recognized to be Medicare and Medicaid. The federal government indirectly regulates some home care workers through the requirement that agencies or individual providers that participate in the Medicare and Medicaid programs meet various conditions of participation. In addition to these conditions of participation, the Medicare and Medicaid Patient and Program Protection Act of 1987 requires that the Secretary of Health and Human Services (HHS) exclude from participation in title XVIII (Medicare), and direct states to exclude from participation in programs authorized by title XIX (Medicaid) and title XX (Social Services Block Grants), any individual or entity that has been convicted of program-related crimes or convicted under federal or state law of a criminal offense related to neglect or abuse of patients in connection with the delivery of a health care item or service. Also, the Secretary may direct a state to disqualify those convicted of other crimes, such as fraud in the delivery of health care services, obstruction of justice, or crimes related to the manufacture, distribution, prescription, or dispensing of a controlled substance. HHS has issued related regulations (see 42 C.F.R. sections 1001.101 to 1001.102 and 1002.203), but these regulations do not require home care worker background investigations and apply only to offenses occurring in connection with the delivery of health care services. State or local governments or professional boards also impose requirements on home care organizations and independent workers in connection with agency or individual licensure or registration processes. The coverage of these nonfederal requirements varies from large segments of the home care industry to particular types of workers. Some states also assist consumers in distinguishing licensed or registered workers from others by limiting the advertisement of particular services to those who are licensed or registered. (See appendixes I and II.) Several factors contribute to recent interest in background screening requirements for home care workers.
These include the increasing size and obvious vulnerability of the population in need of home care services, the challenges the home care setting poses for worker supervision, rapid expansion of the home care industry, unrelenting demand for home care services in the face of worker shortages, the low wages typically available to paraprofessional workers serving a largely low- or fixed-income population, diffuse responsibility for worker selection, and anecdotal reports of abuses by home care workers with criminal history. Home care is provided to persons living at home who, because of a chronic condition or illness, often cannot care for themselves. Studies of home health care utilization have found that the typical recipient is a woman with functional limitations who is very elderly, has a low income, and lives alone. The number of persons needing home care is expected to increase as the very elderly population grows from the 3 million persons over age 85 in 1990 to more than 15 million in 2050. Among those in need of home care, reliance on paid home care workers is also expected to rise, partly because adults in the baby boom generation have had smaller numbers of children and will therefore have fewer children available to provide or supervise their care in old age. In addition, projections indicate that labor force participation will continue to increase among women, who have traditionally provided much of the informal care for the elderly. Although future cohorts of elderly persons may be better off financially, long-term care costs are expected to increase faster than personal income. Although the home care work force is entrusted with significant responsibilities and sometimes demanding work with little supervision or assistance, home care work is often characterized by part-time employment, lower wages, and an absence of fringe benefits. Moreover, workers are in high demand, and managers of proprietary home care agencies indicate that the speed with which they can provide workers to fill newly identified needs is often critical to their firms’ success. In this context, the expanding demand for home care services and the rapid growth of the home care industry, combined with an absence of appropriate safeguards, could create pressures to hire potentially unqualified job applicants. In addition, in an effort to stretch home care funding and empower consumers, some states are adopting policies that encourage or require aged and disabled beneficiaries to take direct responsibility for hiring and supervising their home care workers who are paid with state or federal funds. Thus, vulnerable home care consumers increasingly need access either to the tools for evaluating potential workers or to a list of prequalified candidates. In the absence of the ability to offer attractive salaries, however, individual consumers may feel hard-pressed to impose extensive requirements on workers, for fear they will lose potentially good workers to less demanding employers. Finally, anecdotal reports of abuses by a small number of home care workers with criminal history have raised the issue of whether and how such workers should be screened for criminal background. Limited experience in screening for criminal background among home care workers suggests that a segment of workers has criminal history, but that such history does not always portend criminal behavior.
However, advocates of criminal background screening argue that the use of this process may deter individuals with criminal intent from entering the field. While some localities have experimented with other methods for screening applicants for home care work, this report is concerned primarily with two potential forms of background screening: use of worker registries and use of mandatory criminal background checks. Under the Omnibus Budget Reconciliation Act of 1987, states must keep registries of individuals qualified to work as nurses’ aides in nursing homes that participate in Medicaid and Medicare, and these registries must note any instances in which states find that such aides have been involved in resident abuse, neglect, or misappropriation of property from a nursing home patient. Before employing someone as an aide, a nursing home that participates in Medicare or Medicaid must check the registry for prior incidents involving the prospective worker and verify that the worker meets appropriate training standards. This registry is important for two reasons: (1) there is some overlap between the nursing home and home care work force; and (2) it is a model for worker screening that has already been incorporated in the Medicare and Medicaid programs that some states have extended to cover other types of workers, including home care workers. With respect to criminal background checks, several variations are possible. Federal, state, or local law enforcement data may be used by states for criminal background checks, and each has advantages and limitations. Although checks using state and local data are more readily implemented, they are not as comprehensive as those employing national records because they miss convictions that have occurred even in neighboring states or communities. Also, if they are based primarily on the name, birthdate, and social security number provided by the job applicant, they can result in false positives or false negatives. The FBI may share interstate criminal history data with state officials if authorized by a state statute approved by the U.S. Attorney General for the purpose of determining the fitness of persons to work with children, the elderly, or individuals with disabilities. Under the law, an entity qualified under such a state law (for example, a licensed home care agency) may request that an FBI-authorized state agency conduct a national background check of an applicant provider. Such checks employ the national criminal history background check system, which contains state and federal criminal history records, and may be requested only when the person to be checked provides fingerprints and a signed statement regarding the presence and nature of any previous criminal convictions. The FBI routinely charges states at least $22 for processing each fingerprint check for a nonvolunteer care provider. The law requires that the FBI-authorized state agency make “reasonable efforts” to respond to such inquiries within 15 business days. We report results on the extent to which states have taken advantage of this law in connection with home care workers, who primarily serve the elderly. To describe federal and state requirements applying to home care organizations and workers, we reviewed federal conditions for participation in Medicare and Medicaid and surveyed state governments regarding their methods for regulating organizations and individuals who provide home care.
The terminology used to refer to various types of home care workers is highly variable, and thus, our survey adopted the standard definitions we present in the glossary. Although licensure or registration does not necessarily ensure a better standard of care, we asked about these practices because they indicate that a state has identified certain organizations or individuals and thus has the capacity to apply additional requirements, such as criminal background checks or special training. We also asked about states’ experience in operating the federally mandated registry for nursing home aides that records documented findings of abuse, neglect, or misappropriation of a resident’s property and the states’ use of this or similar registries for home care workers. Finally, we inquired about the presence of state requirements for criminal background checks of workers and the records covered by these checks. We completed our survey between August 1994 and April 1996. We received responses from 49 states and the District of Columbia. We did not independently verify the accuracy of state officials’ responses to our survey questions regarding state laws and policies. In addition to surveying state officials, we interviewed home care providers and state and local officials in California, New York, and Oregon regarding their approaches to screening home care workers. We also spoke with home care consumers in Oregon and with officials from the FBI and the Health Care Financing Administration (HCFA) and reviewed relevant literature on the utilization, regulation, and characteristics of home care. Our work was conducted in accordance with generally accepted government auditing standards. We did not assess the extent of the problems linked to home care providers, nor did we determine to what extent the use of a criminal background check is linked to a lower incidence of abuse, theft, or misappropriation of property. Although in our interviews with consumers we received some anecdotal reports of rough handling, verbal abuse, and theft, we were unable to assess the extent of such problems. Doing so is generally complicated because investigating and documenting such incidents is difficult; they tend to be underreported, and the reports that do exist are scattered among various administrative and law enforcement record systems. We also did not assess the extent to which the operators of home care organizations voluntarily purchase fidelity bonds, which protect covered employers against losses due to employee dishonesty and may be conditioned on certain employee screening practices. Of those persons who received paid home care in 1989, half paid for their care without any assistance from public sources or insurance. Some of this privately financed home care is provided by agencies that participate in Medicare and Medicaid and are therefore subject to limited federal requirements; however, the existence of such a large private market suggests that a substantial portion of assistance may be provided by organizations and individuals that function outside these programs. Care provided outside federal programs is governed by state and local requirements, which may also apply to federally financed care in the state in which it is delivered. States take a wide variety of approaches to categorizing and licensing home care organizations. 
Among the 50 jurisdictions responding to our survey, 41 required that at least certain organizations obtain a specific license before providing home care services, with 35 licensing at least some of the home care organizations that do not participate in the Medicare program. In fact, 11 states indicated that all home care organizations were publicly regulated through state licensure. (See table 1.) However, even states that regulate some home care services may not prohibit advertisement of similar-sounding services by organizations that are not licensed, registered, or certified. In 45 states, companion, housekeeping and homemaker, or shopping services may be legally advertised by organizations that are not licensed, registered, or certified. (See appendix I.) More states regulated the advertisement of personal care, home health, and home nursing. However, 30 states told us that unlicensed and uncertified organizations might legally advertise personal care; 14 did not specifically prohibit such organizations from advertising home health care, and 22 did not prohibit their advertisement of home nursing or related home care services. Although most states require that at least certain professionals who provide paid home care obtain a state license, they are not likely to require licensure or registration of some common types of home care workers. While individual licensure is a common practice for the regulation of practical nurses, physical therapists, occupational therapists, social workers, and registered nurses, it is not common for home health aides, homemakers, or choreworkers. As shown in appendix II, we found that individuals who were neither licensed nor state-registered could advertise personal care, companion, housekeeping and homemaker, or shopping services in most states. Even home health and home nursing services could legally be advertised by unlicensed and unregistered individuals in about a third of the states. Some state officials also identified related categories of service, such as adult day care and case management, that unlicensed and unregistered providers could legally advertise. Consumers seeking to file complaints against home care workers are faced with a somewhat complex regulatory structure in some states, while in other cases, no administrative complaint is possible, and they must file a criminal complaint with the local police or a civil suit. In most states, the responsibility for licensure of nursing home and home care workers was spread across multiple agencies, units, or professional boards. Similarly, although 20 states issued only one type of license to home care organizations, nearly as many recognized two or more types of home care organizations in their licensing processes. A minority of states (14) reported routinely including at least some types of home care workers in their state’s mandatory registry for nurses’ aides. When we interviewed officials of home care organizations, some expressed concerns about entering qualified employees on any public register that their competitors might use for recruitment. In addition, while administrative expenses for the initiation of the nurse’s aide registry have already been incurred, adding home care workers to the established registry process would require additional resources, such as increased time for data entry. More importantly, there may be greater difficulty in substantiating complaints about home care workers since they are generally subject to less oversight than nursing home workers. 
Fifteen states reported requiring criminal background checks of at least some of the individuals who may be employed as home care workers. In these states, checks were usually a condition of agency licensure or certification, although in a few states, they were also a condition of individual licensure or registration. States reporting criminal background check requirements for home care workers included Alaska, California, Florida, Idaho, Indiana, Louisiana, Nevada, Ohio, Oklahoma, Oregon, Rhode Island, Texas, Utah, Virginia, and Washington. Twelve of these states indicated that state statutes or regulations specified the jurisdictions that these checks must cover, but the breadth of the check was generally limited to a state’s own criminal records, which may be problematic where many workers come from neighboring states. At least three states—Idaho, Nevada, and Ohio—reported using national FBI data, but officials of other states we interviewed cited the expense of FBI data as a reason for not using it. Almost all states that reported requiring such checks indicated that a criminal background could be grounds for denying employment or refusing licensure or for some other type of adverse action, such as probationary certification. While it is difficult to assess the extent of problems with abuse, neglect, or misappropriation of property in the home care industry, we found that federal regulations do not require criminal background checks of home care workers. While some states and localities have instituted criminal background checks or other safeguards, states’ approaches to regulating home care are quite varied, and in some instances, there may be few formal safeguards to protect potentially vulnerable elderly persons from unscrupulous operators. Every state must maintain a registry of nursing home workers noting those who have been involved in incidents of abuse, neglect, or misappropriation from patients. Although some states incorporate home care workers in a registry, and such registries may be valuable for identifying individuals who have met particular training standards, the effectiveness of such registries in identifying workers with histories of poor performance may depend heavily on the capacity to document incidents occurring in a largely unsupervised environment. While some states have instituted criminal background checks for some home care providers, few have used the FBI’s national data, citing cost concerns. “While we agree with and support efforts designed to eliminate fraud and abuse and increase quality of care, we are unconvinced that there is enough consensus on the mechanisms which should be employed to address these concerns. Consequently, we believe States should be given the flexibility to determine how best to address these issues.” “The FBI’s user-fee program is the sole basis by which it funds processing of noncriminal justice applicant fingerprints card submissions for licensing and employment purposes. The FBI’s budget does not contain any Congressional authorization or appropriation for funding of this part of its fingerprints operation.” As we arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of the report to interested congressional committees, the Secretary of HHS, the Administrator of HCFA, the Assistant Secretary for Aging, the Director of the FBI, and other federal and state officials. 
We will also make copies available to others upon request. If you have any questions or would like additional information, please contact me at (202) 512-3092 or Sushil K. Sharma, Assistant Director, at (202) 512-3460. Other major contributors to this report are listed in appendix III.

Chore services: Performance of tasks such as shopping, yard work, and housecleaning that generally do not require hands-on contact with the consumer.
Choreworker: A title sometimes given to home care workers who perform chore services.
Companion: A title sometimes given to home care workers who perform companion services.
Companion services: “Friendly visitor” services that may include assisting home care clients as they perform basic or instrumental activities of daily living. This term may connote services to clients who are less dependent than those receiving services labeled “home health” or “personal care.”
Criminal background check: A review of police or other law enforcement records designed to determine whether a particular person has been convicted of unlawful acts.
Home care organization: A for-profit or nonprofit entity that provides any of or all the home care services described below under “home care services.” This term excludes home care providers who are self-employed or personal employees of home care consumers.
Home care services: Services provided to ill or disabled persons in private residences to assist them either in recovering from illness or in continuing to live in their own homes. Such services may be of any duration (short term, long term). They include assistance with basic activities of daily living (for example, bathing, eating, toileting), instrumental activities (for example, shopping, meal preparation, laundry), or chronic medical conditions (for example, catheter care, parenteral nutrition, wound care). Home care services are provided under the rubric of home health care, personal care, companionship, homemaker services, chore services, occupational therapy, and physical therapy. The services are provided by a range of personnel, from nurses and therapists to workers with little formal training or education.
Home care worker: Anyone who performs home care services. This designation includes professional nurses and therapists as well as paraprofessional workers with little formal training.
Home health aide: A title sometimes applied to a paraprofessional home care worker who generally provides basic home health care and incidental personal care services. Agencies that are Medicare-certified must ensure that home health aides in their employ meet specific Medicare training requirements.
Home health care: Assistance with health care provided in private residences to persons with disabilities. This designation may include services of professional nurses and therapists as well as those of paraprofessionals, such as home health aides and personal care aides. Specific services may include administration of medication, catheter care, wound care, and occupational or physical therapy.
Homemaker services: Housekeeping assistance that may include cooking, cleaning, and laundry and generally does not involve hands-on contact with the consumer.
License: Permission to practice or operate in a certain capacity (for example, as a home health aide) that is extended only to license holders. Licenses may be extended to either businesses or individual workers. They are generally subject to renewal at designated intervals and may presume satisfaction of certain standards or submission to certain oversight processes. 
Medicare-certified: A designation given to an entity that is approved for participation in the Medicare program and is therefore eligible for Medicare reimbursement (for example, a “certified” home health agency). Nurse’s aides who are qualified to work in nursing homes that participate in Medicare or Medicaid are also sometimes called “certified” nurse’s aides.
Occupational therapist: A title used by professionals who help persons with disabilities to develop, recover, or maintain daily living and work skills or to compensate for loss of function. An occupational therapist may also design or make special equipment needed at home or at work. Persons who use this title generally hold a bachelor’s degree in occupational therapy.
Personal care aide: A title sometimes given to paraprofessional home care workers who perform personal care services. People who perform these functions are also sometimes known as personal attendants or personal care workers.
Personal care services: Functional assistance, including help with basic activities of daily living (such as eating, bathing, toileting, and transferring from bed to chair), that generally involves hands-on contact with the consumer.
Physical therapist: A title used by professionals who perform tasks designed to improve mobility, relieve muscle pain, and prevent or limit the permanent physical disabilities of patients suffering from injuries or disease. Unlike an occupational therapist, a physical therapist focuses exclusively on physical disabilities. Physical therapists are usually licensed and graduates of an accredited physical therapy program.
Practical nurse: A home care worker who cares for the sick, injured, convalescing, and handicapped under the direction of a physician or registered nurse. Such workers perform tasks such as taking vital signs, treating bedsores, administering injections and enemas, providing assistance with personal care, giving alcohol rubs and massages, and inserting catheters. In some states, practical nurses are referred to as vocational nurses. Many of the functions performed by workers in this category are also performed by persons called home health aides, but practical nurses are more likely to have passed through a state licensing process and are consequently referred to as licensed practical nurses or licensed vocational nurses.
Registered nurse: A graduate of any accredited nursing school who has passed the national licensing examination. A registered nurse may be a person with an associate’s degree in nursing (A.D.N.), a bachelor’s degree in nursing (B.S.N.), or a hospital nursing diploma.
Registry: A list of individuals who work in a designated capacity or meet specific qualifications. Depending on state rules, participation in a registry may either be voluntary or mandatory. The content of registry entries may range from worker name and address to detailed information about worker qualifications, background, and past performance. Registration, unlike licensure, may not imply possession of a document, evidence of fulfillment of particular requirements, or periodic submission to oversight.
Social worker: A person who helps individuals and families cope with problems such as inadequate housing, serious illness, financial mismanagement, handicaps, or substance abuse, generally through direct counseling or referral. Social workers generally hold a bachelor’s degree in social work or a related field.
Temporary staffing agency: A type of temporary employment service that provides health care workers on a contract basis. Such workers are generally considered employees of the temporary staffing agency rather than of the organizations or individuals for whom they perform services. 
Pursuant to a congressional request, GAO examined federal and state provisions for protecting vulnerable elderly and disabled persons from home care workers with histories of crime and patient abuse, focusing on: (1) federal or state requirements for licensure, registration, or certification of home care workers and organizations; (2) the extent to which states have used the federally mandated registry for nursing home aides, or a similar mechanism, to identify home care workers with past involvement in abuse, neglect, or misappropriation of property; and (3) the extent to which states have required criminal background checks of home care workers. GAO found that: (1) assessing the extent of problems with abuse, neglect, or misappropriation in the home care industry is a difficult task; (2) formal safeguards for home care consumers vary widely across political jurisdictions, public programs, and types of providers; and in some instances, few safeguards exist; (3) GAO has three key results of its evaluation to report; (4) few states have licensure requirements for types of workers that are among the most common providers of home care services; (5) however, the vast majority of states license or otherwise regulate some types of home care organizations or professionals, and some states indicate that all types of home care organizations are subject to a state licensure requirement; (6) while all states must maintain a registry for nursing home aides in accordance with federal law, only about a quarter have incorporated home care workers into it or have developed a separate registry for home care workers; (7) finally, GAO found that slightly over a quarter of the states require criminal background checks on some types of home care workers, though these checks are generally limited to a state's own criminal records; (8) states with a statute requiring such checks may access Federal Bureau of Investigation (FBI) data for this purpose, but few states have made use of this capacity with respect to home care workers; and (9) although there is no charge for checking these data for criminal justice purposes, fees are charged for checks for employment screening, and some state officials have cited these fees as a factor in their reluctance to make greater use of the FBI data.
In 1862, Congress enacted the Morrill Act to help states establish and maintain land-grant colleges. The act carefully specified the grant’s objectives, placed conditions on the use of revenue derived from the sale of the granted lands, and required annual reports. This established the pattern of categorical grants—providing needed resources for specific purposes in exchange for acceptance of minimum national standards. In the 1960s, the number and dollar amount of federal assistance programs grew substantially. (See fig. 1.) During this timeframe, major steps were taken to broaden elementary, secondary, and higher education opportunities; promote development in economically depressed areas; help finance health services and medical care for the indigent; launch a war on poverty; and attempt a comprehensive physical, social, and economic program to transform slum and blight-ridden cities into model neighborhoods. Growth in both the number of new grant programs and the level of funding created greater complexity. During the 1960s and into the 1970s, various reforms were begun to address the complexity in the grant system. In 1968, Congress passed the Intergovernmental Cooperation Act of 1968, which sought to improve the cooperation and coordination of activities among levels of government. From 1969 to 1973, the President initiated the Federal Assistance Review—a government-wide effort with a goal to streamline, simplify, and speed up the flow of federal assistance and improve the federal government’s responsiveness to its state and local partners. In addition, Federal Management Circular 74-7, issued in 1974, provided for standardized administrative provisions across grant programs. The Joint Funding Simplification Act of 1974 permitted grantees to streamline federal assistance by enabling them to combine funding from several grants administered by one or more federal agencies. As previous congressional committee reports have noted, these administrative simplification initiatives, while useful in addressing certain administrative burdens associated with grants, did not address the more fundamental challenges stemming from the fragmented nature of the grant system. For example, the House Government Operations Committee, the predecessor to the House Government Reform Committee, noted that the legislative consolidation of closely related categorical programs into broader purpose grants and the placement of similar programs in a single federal agency have more potential for significantly improving grant-in-aid administration. Over the years, Congress at times has acted to improve the grant system through consolidation. The Omnibus Budget Reconciliation Act of 1981 consolidated a number of social service programs into nine block grants, which allowed for greater state and local autonomy and flexibility in the fashioning of local strategies to address federal objectives. More recently, in 1996 the 104th Congress consolidated a number of welfare-related programs into the Temporary Assistance for Needy Families block grant. Notwithstanding these efforts, as figure 2 shows, over the last 20 years each period of consolidation was followed by a proliferation of new federal programs. Moreover, some of the block grants were later recategorized, as Congress added new set-asides and cost ceilings to address national programmatic concerns, thereby limiting the grants’ flexibility. 
A sizable increase in the number of grant programs could be justified and simply be an indication that, as society evolves, the nation’s needs also change and we need new tools—in the form of new programs—at our disposal to address those needs. As such, program proliferation may be an indication that there is heightened congressional interest in ensuring that federal funds are directed in such a way as to meet specific—more narrowly defined—national goals and objectives. Nonetheless, the problems associated with a proliferation of federal programs are compounded when multiple grants are available for the same or similar purposes, forcing grant recipients to package different programs with potentially conflicting requirements to address common problems. Moreover, the total funds available for many of these programs are quite small. As figure 3 shows, the vast majority of available federal funds—78 percent—are concentrated in 20 large grant programs. Stated differently, Mr. Chairman, in 2001, 169 federal grant programs were funded at less than $5 million. Cumulatively, these small programs receive less than 1 percent of all federal funds provided through the grant system. As you can imagine, at the recipient level, the funds available can be quite small, particularly—as you may hear in the statements of members of the second panel—in relation to the administrative effort and costs incurred in applying for and managing the grant. For example, FEMA’s Hazardous Materials Assistance program provided grants from “a few dollars to $20,000” per applicant, according to the Catalog of Federal Domestic Assistance. FEMA’s State Fire Training Systems Grants ranged from only $25,000 to $30,000 per state. While these funds undoubtedly served important purposes, the question is whether the funds could have been provided through more efficient means. Many of the same grants management challenges from the past are still with us today. GAO’s work over the years has repeatedly shown that mission fragmentation and program overlap are widespread in the federal government and that crosscutting program efforts are not well coordinated. As far back as 1975, GAO reported that many of the fundamental problems in managing federal grants were the direct result of the proliferation of federal assistance programs and the fragmentation of responsibility among different federal departments and agencies. While we noted that the large number and variety of programs tended to ensure that a program is available to meet a defined need, we found that substantial problems occur when state and local governments attempt to identify, obtain, and use the fragmented grants-in-aid system to meet their needs. More recently, GAO has addressed mission fragmentation through the framework provided under the Government Performance and Results Act (the Results Act). The Results Act's key stages include defining missions and outcomes, developing a strategy, measuring performance, and using performance information. For example, we reported in 2000 on the 50 programs for the homeless that were administered by 8 federal agencies. Housing services were provided under 23 programs operated by 4 agencies, and food and nutrition services were provided under 26 programs administered by 6 agencies. We recently identified 44 programs administered by 9 different federal agencies that provided a range of employment and training services. 
In the late 1990s, the Congress tried to bring some unity to this fragmented employment and training system by requiring states to provide most federally funded employment-related services through a centralized service delivery system—one-stop centers. Two years earlier, welfare reform legislation provided states with the flexibility to focus on helping needy adults with children find and maintain employment. Despite the similar focus, the welfare program was not required to be a part of the new workforce investment system. We recently reported that nearly all states report some coordination of their welfare and workforce system services at the state and local level, but that several challenges remain. For example, different definitions of what constitutes work as well as complex reporting requirements under both programs hamper state and local coordination efforts. Though some states and localities have found creative ways to work around these issues, the differences remain barriers to coordination for many others. Each of these programs is operated out of a different federal agency; the welfare program is administered by the Department of Health and Human Services (HHS), and the Department of Labor (Labor) administers the workforce investment program. We found that HHS and Labor have not addressed differences in program definitions and reporting requirements. It falls to the 108th Congress to redesign the nation’s homeland security grant programs in light of the events of September 11, 2001. In so doing, Congress must balance the needs of our state and local partners in their call for both additional resources and more flexibility with the nation’s goals of attaining the highest levels of preparedness. This goal is too important, and federal resources too scarce, to wait until after our partners have spent the funds to worry about holding them accountable. Funding increases for combating terrorism have been dramatic and reflect the high priority that the administration and Congress place on this mission. These increases bring an added responsibility to ensure that this large investment of taxpayer dollars is wisely applied. We recently reported on some of the management challenges that could stem from increased funding and noted that these challenges—including grants management—could impede the implementation of national strategies if not effectively addressed. GAO testified before this subcommittee last year on the development of counter-terrorism programs for state and local governments that were similar and potentially duplicative. We have identified at least 16 different grant programs that can be used by the nation’s first responders to address the nation’s homeland security. These grants are currently provided through two different directorates of the new Department of Homeland Security, the Department of Justice, and HHS and serve state governments, cities and localities, as well as counties and others. Multiple fragmented grant programs can create a confusing and administratively burdensome process for state and local officials seeking to use federal resources for pressing homeland security needs. This is illustrated in figure 4, which shows the complex delivery structure for these 16 preparedness grant programs. To illustrate the level of fragmentation across homeland security programs, we have shown in table 1 significant features for the major assistance programs targeted to first responders. 
As the table shows, substantial differences exist in the types of recipients and the allocation methods for grants addressing similar purposes. For example, some grants go directly to local first responders such as firefighters, others go to state emergency management agencies, and at least one goes to state fire marshals. The allocation methods differ as well—some are formula grants while others involve discretionary decisions by federal agency officials on a project basis. Grant requirements differ as well—DHS’ Assistance to Firefighters Grant has a maintenance-of-effort (MOE) requirement while the State Fire Training Systems Grant has no similar requirement. Table 2 shows considerable overlap in the activities that these programs support—for example, funding from both the State and Local Domestic Preparedness Exercise Support Program and the State Domestic Preparedness Equipment Support Program can be used for planning and conducting exercises. The fragmented delivery of federal assistance can complicate coordination and integration of services and planning at state and local levels. Homeland security is a complex mission requiring the coordinated participation of many federal, state, and local government entities as well as the private sector. As the National Homeland Security Strategy recognizes, preparing the nation to address the new threats from terrorism calls for partnerships across many disparate actors at many levels in our system. Within local areas, for example, the failure of local emergency communications systems to operate on an interoperable basis across neighboring jurisdictions reflects coordination problems within local regions. Local governments are starting to assess how to restructure relationships among contiguous local entities to take advantage of economies of scale, promote resource sharing, and improve coordination on a regional basis. The complex web of federal grants depicted in figure 4 suggests that by allocating federal aid to different players at the state and local level, federal grant programs may continue to reinforce state and local fragmentation. Some have observed that federal grant restrictions constrain the flexibility state and local officials need to tailor multiple grants to address state and local needs and priorities. For example, some local officials have testified that rigid federal funding rules constrain their flexibility and prevent grant funds from being used for activities that meet their needs. We have reported that overlap and fragmentation among homeland security assistance programs foster inefficiencies and concerns in first responder communities. State and local officials have repeatedly voiced frustration and confusion about the burdensome and inconsistent application processes among programs. We concluded that improved coordination at both federal and state and local levels would be promoted by consolidating some of these first responder assistance programs. In addressing the fragmentation prompted by the current homeland security grant system, Congress has several alternatives available. Actions taken by federal agencies under the rubric of the Federal Financial Assistance Management Improvement Act of 1999 will help to streamline the process for obtaining aid across the myriad of programs and standardize administrative requirements. These initiatives promise to reduce administrative burdens at all levels and promote a more efficient grants management process in general. 
Going beyond these initiatives to address the underlying fragmentation of grant programs remains a challenge for our federal system in the homeland security area, as well as across other program areas. Several alternatives have been pursued in the past to overcome problems fostered by fragmentation in the federal aid structure. I will discuss three briefly here—block grants, performance partnerships, and grant waivers. Block grants are one option that Congress has chosen to consolidate related programs. Block grants currently are used to deliver assistance in such areas as welfare reform, community development, social services, law enforcement, public health, and education. While such initiatives often involved the consolidation of categorical grants, block grants also typically devolve substantial authority for setting priorities to state or local governments. Under block grants, state and local officials bear the primary responsibility for monitoring and overseeing the planning, management, and implementation of activities financed with federal grant funds. Accordingly, block grant proposals generally call for Congress to make a fundamental decision about where power and authority to make decisions should rest in our federal system for a particular program area. While block grants devolve authority for decisions, they can and have been designed to facilitate some accountability for national goals and objectives. Since federal funds are at stake, Congress typically wants to know how federal funds are spent and what state and local governments have accomplished. Indeed, the history of block grants suggests that the absence of national accountability and reporting for results can either undermine continued congressional support or prompt more prescriptive controls to ensure that national objectives are being achieved. For instance, the block grants enacted as part of the Omnibus Budget Reconciliation Act of 1981 were not implemented in a manner that encouraged consistent reporting of program data. These block grants have been subject to at least 58 subsequent congressional actions, many of which served to recategorize the programs by tightening program requirements and limiting the grantees’ flexibility. The consolidation of categorical grants, however, need not be structured as a block grant. In fact, federal funding streams can be combined while retaining strong performance-oriented accountability by state and local governments for discrete federal goals and objectives. State and local governments can be provided greater flexibility in using federal funds in exchange for more rigorous accountability for results. One example of this model involves what became known as “performance partnerships,” exemplified by the initiative of the Environmental Protection Agency (EPA). Under this initiative, states may voluntarily enter Performance Partnership Agreements with their EPA regional offices, which can include major federal environmental grant programs. These agreements delineate which problems would receive priority attention within a state and how the state’s performance will be measured. Congress provided states with the authority to use funds from two or more environmental program grants in a more flexible and streamlined manner. The benefits of the EPA performance partnership system are ones that should also be helpful for other areas such as homeland security. 
EPA partnerships (1) allowed states to shift resources to address priority needs and fund crosscutting efforts that are difficult to support with traditional grants, (2) provided a way to support innovative or unique projects, (3) increased the focus on environmental results and program effectiveness, and (4) fostered reduced reporting burden and improved information management. But we reported some significant implementation issues for the performance partnership approach as well. In 1999, we reported that the initiative was hampered by an absence of baseline data against which environmental improvements could be measured, the inherent difficulty of quantifying certain results and linking them to program activities, and the considerable resources needed for high-quality performance measurement. The challenge for developing performance partnerships for homeland security grants will be daunting because the administration has yet to develop clearly defined federal and national performance goals and measures. We have reported that the initiatives outlined in the National Strategy for Homeland Security often do not provide performance goals and measures to assess and improve preparedness at the federal or national levels. The strategy generally describes overarching objectives and priorities, but not measurable outcomes. The lack of such measures and outcomes at the national level will encumber the federal, state, and local partners’ ability to establish agreements on what goals are expected of our state and local partners, much less how those goals could be measured. A third approach to overcoming fragmentation could be to provide in law for waivers of federal funding restrictions and program rules when requested and sufficiently justified by state or local governments. In the homeland security area, legislation has been introduced to provide waivers for states to use funds from one category of federal assistance, such as equipment, to support other homeland security activities such as training. This approach could help recipients adjust available federal funds to unique needs and conditions in each state. Unlike full grant consolidation—which is legislated—each waiver must be approved by federal agency officials before grantees can have the kind of flexibility they desire. Some might view the approval requirement as an additional administrative burden while others consider the federal role essential to ensuring accountability. Mr. Chairman, we are eager to work with your subcommittee and others to improve the efficiency and effectiveness of our federal grant system. Improving the grant partnership among federal and nonfederal officials is vital to achieving important national goals. The Federal Financial Assistance Management Improvement Act of 1999 offers promising opportunities to help those officials achieve their mutual goals through the use of federal assistance programs. We look forward to reviewing the activities undertaken pursuant to the Act with an eye toward both highlighting progress and identifying further improvements that can be made at all levels of our federal system. We are also ready to assist Congress in identifying the problems stemming from the underlying nature of the grant system and in sorting through the tradeoffs Congress will face in resolving these problems. This concludes my prepared statement. I would be pleased to answer any questions you or the members of the subcommittee may have at this time.
The Federal Financial Assistance Management Improvement Act of 1999 is one of the most recent in a series of efforts to reform the federal grants management system. The act seeks to improve the effectiveness and performance of federal financial assistance programs; simplify application and reporting requirements; improve delivery of services to the public; and facilitate greater coordination among those responsible for delivering such services. GAO has a responsibility to evaluate the implementation of this Act by 2005 and will soon begin developing an approach and methodology for the study. This testimony describes the problems fostered by proliferation and fragmentation, which the Act addresses indirectly. While the Federal Financial Assistance Management Improvement Act of 1999 (FFAMIA) offers promising opportunities to improve the federal grant system, there remain over 600 different federal financial assistance programs used to implement domestic policy. Federal grant recipients must navigate through a myriad of federal grant programs in order to find the appropriate source of funds to finance projects that meet local needs and address local issues. Despite the process reforms initiated under FFAMIA, the federal grant system continues to be highly fragmented, potentially resulting in a high degree of duplication and overlap among federal programs. Since the 1960s, the number and dollar amount of federal grant programs have grown substantially. Growth in both the number of grant programs and the level of funding has created a high level of complexity in the system. While the act seeks to improve the effectiveness and performance of federal assistance programs by simplifying grant administration and facilitating coordination among grant recipients, Congress could also consider consolidating grants that have duplicative objectives and missions. Consolidation can be achieved in a variety of ways, including combining multiple programs into block grants, establishing performance partnerships, and providing waiver authority for federal funding restrictions and program rules when requested and sufficiently justified by state or local governments. Each of these alternatives has implications for accountability that Congress will face as it considers improvements to the federal grant system.
Declining letter mail volumes and increasing electronic substitution, which have accelerated with the recent recession, have reduced postal revenues for both the foreign posts and USPS. Additionally, foreign posts told us they, like USPS, faced high infrastructure and workforce costs. Some foreign posts began to modernize their delivery or retail networks, or both, as early as the 1990s. One factor precipitating these changes was the European Union’s 1997 Council Directive, which began to establish a timetable for the gradual ending of letter market monopolies. In 2008, the European Union directed its members to fully “open” their domestic letter markets to competition by 2010 (i.e., remove their monopoly on delivering letter mail). In doing so, the European Union stated that it wanted to “allow sufficient time to put in place the…modernization and restructuring measures required to ensure long-term viability under the new market conditions” of the posts. In all six countries we reviewed, the posts were experiencing challenges related to declining mail volumes or high infrastructure, labor, or operating costs, or both, which created the need for modernization. See figure 1 for a list of the six foreign posts we reviewed, including how we identified them in this report. USPS faces challenges similar to those of other industrialized nations’ posts. For example, USPS has a brick and mortar retail infrastructure it cannot afford to maintain. However, USPS currently manages more retail outlets—approximately 32,500—than all of the foreign posts we reviewed combined. In addition, USPS has twice as many delivery points as any foreign post we studied—USPS delivers to 151 million homes, businesses, and post office boxes. Another challenge that USPS experienced was a record loss of $8.5 billion in fiscal year 2010. Thus, as we have reported, without substantial changes to its business model, USPS’s ability to fulfill its mission through self-supporting, businesslike operations is not sustainable. All of the foreign posts discussed in this report have implemented various initiatives to complement or enhance traditional delivery by offering customers more alternatives to send and receive letters and packages. Foreign posts’ traditional methods for delivering letters, packages or parcels, and advertising mail include carrier-to-home delivery (to a customer’s door or mailbox) and cluster boxes (groupings of boxes for customers in a particular neighborhood, instead of individual mailboxes at a personal residence, often found within a housing development). Although letter mail volumes have been decreasing, some foreign posts expect parcel volumes to strengthen as economic conditions improve, particularly with the emergence of online shopping and small business growth. According to foreign post officials we spoke with, customers have been increasingly changing the way they use and receive their mail. With increasing Internet and broadband access and related changing use of the mail, use of traditional mail has been declining as customers find more convenient and faster ways to communicate. Several of the posts we met with said that customers were using various electronic means to conduct their daily business, such as paying bills online, sending postcards and greeting cards electronically, or receiving news and other periodicals via the Internet instead of hard copy (all referred to as e-substitution). 
This changing use of the mail was a primary factor driving changes in the way the posts do business. In addition, foreign post officials told us they have a significant amount of costs associated with delivering traditional letter mail and packages. Some posts have been experiencing increases in the number of delivery points every year, at the same time that mail volumes overall have been declining. Australia Post and Canada Post officials told us the number of addresses has generally grown each year by about 200,000 addresses. A few of the posts attribute a major part of their costs to delivering the mail (also known as the “last mile”). Given all of these factors, the foreign post operators we met with have begun to focus on adjusting or downsizing their delivery networks and providing multi-channel approaches to give their customers more choices in how they send and receive mail. All six posts we met with offered some form of digital or hybrid means of sending and receiving mail, and several of the posts reported increased customer service and satisfaction as a result of these offerings. Digital mail is communication between a sender and a recipient through completely electronic communication (this could include personal e-mail and electronic bills and other communication with businesses). While postal experts have a variety of definitions for hybrid mail, there is no consensus on the exact definition. In this report, we use the term “hybrid mail” to describe different ways of sending and receiving mail and communication using a blend or combination of electronic or digital mail and physical forms. For example, an item—a bill, letter, card, or other communication—is electronically transmitted, processed and converted into a letter post item for physical delivery to the addressee. Figure 2 illustrates examples of digital and hybrid mail services, and figure 3 shows examples of their availability through the six foreign posts we reviewed. E-Postbrief™ – Allows customers to communicate securely via the Internet. When customers register for the service, they are required to authenticate their identity by presenting an identification card or passport. All communications sent and received via the E-Postbrief™ secure gateway are end-to-end encrypted. The cost is the same as for a letter. In cases when an E-Postbrief™ is sent to a recipient who is not registered for the service, or if the sender chooses this option, the E-Postbrief™ will be printed by DP DHL and delivered physically. Documents sent and signed through E-Postbrief™ are legally binding and confidential. e-Letter – A hybrid mail delivery service that allows customers to e-mail a letter to be delivered physically to a non-e-mail recipient. Posten AB prints the letters at dedicated print locations and then delivers them physically via the post. SuisseID – Offers secure identification and digital signature capabilities for its customers. Customers receive an identification card or USB device after presenting proof of identification at a Swiss Post retail office. SuisseID uses high-grade security technology for secure login and legally valid digital signatures. Swiss Post Box – Registered customers can have their physical mail received, scanned, and delivered by Swiss Post. Customers can choose how they would like to accept their mail. After receiving an initial scanned image of the front and back of a physical piece of mail (for example, a letter), a customer can request that Swiss Post (1) open the mail, scan the entire content of the mail, and send the content electronically to the customer; (2) leave the mail unopened and have the mail sent to the customer’s physical address; or (3) shred the mail piece. 
“We want to change the way people think about the last mile—as no longer physical, but digital.” A video depiction of Swiss Post Box, described previously, was available online (accessed on Feb. 9, 2011). While there is no consensus on a definition of hybrid mail, there does seem to be agreement that the way customers communicate is changing, and moving more toward a digital environment. According to Canada Post officials, in order to stay relevant to customers, the Post began to make a number of revisions to its corporate plans and strategies in 2005, focusing more on electronic and hybrid options. Over the years, Canada Post has been engaged in a number of efforts to reinvent, update, and enhance customer use of the mail through digital alternatives. Canada Post officials observed that some initiatives were successful, while others failed because they may have been ahead of their time. For example, according to a Canada Post official, epost 2.0 will be an improvement on a former digital offering and will be introduced within the next 2 years. Some postal officials see digital and hybrid mail as the “new last mile” and are gearing toward incorporating more digital mail delivery options for their customers. Foreign postal officials told us that their digital and hybrid mail options offered three benefits. First, some of the officials said these digital and hybrid mail services offer enhanced security and trust. They told us that they use their customers’ trust and loyalty in the “brand” of the postal operator to illustrate that digital and hybrid mail communications through the postal operator are secure. According to an Itella official, the biggest advantage of NetPosti was mail security. The official said that a NetPosti account and its contents were not as easily accessible as an e-mail account. The primary concern for Itella was ensuring security of the mail. Swiss Post created its SuisseID to incorporate security encryption on the actual identification card or Universal Serial Bus (USB) device used by customers to sign into their accounts. This provided customers with secure, encrypted access whenever they logged in. Customers must have their identification verified in person at a post office before receiving the SuisseID. Second, officials at three of the six posts we spoke with offered their customers who use digital or hybrid mail solutions the opportunity to archive not only their digitally transmitted mail, but other documents as well. NetPosti offered archiving of all data and documents for 7 years free of charge. Customers could also bring into their NetPosti accounts personal files, such as scanned receipts, for a fee. Swiss Post offered customers the option to archive their documents. Third, three of the six foreign posts offered hybrid mail options combined with a “platform of services.” This platform offered integration between the sending and receipt of mail (letters, bills, cards, etc.) in a mixture of electronic and physical forms, along with other electronic capabilities such as archiving data for the customer (including the mail itself, as mentioned above), and financial services, such as bill presentment or bill pay and bill consolidation. For example, Swiss Post offered its SuisseID customers a number of options, including bill pay and receipt, marketing services, and electronic archiving. Posts reported that these digital and hybrid mail initiatives have resulted in increased convenience and customer satisfaction, primarily because they make mail more accessible. 
A number of the foreign posts mentioned wanting to be where their customers shop or work, a goal that affected posts’ decisions about where to locate services. Officials with Swiss Post stated that their products are not only about making money but also about decreasing the cost of delivering the mail, which Swiss Post officials estimated at more than 50 percent of its operating costs. Several of the posts we spoke with have partnered with major retailers as well as individual stores and shops, such as grocery and drug stores, to provide a number of postal products and services to their customers, including parcel pick up. These stores offer more convenient locations and longer hours than traditional post offices. This partnering was part of a major transformation in the foreign posts’ retail networks, and part of an approach to provide multiple alternatives and choices for customers. In Sweden, Posten AB customers receive a notice, sent as a text message or a hard copy by the letter carrier, that they have a package available for pick up at a retail store counter convenient to them. They then take the paper notice, or show the text message, to a postal retail partner outlet, such as a grocery store, and pick up the package. Posten AB officials noted that customer service has improved, evidenced by customers’ more timely pick up of these packages and parcels (reduced from an average of about 10 days to about 3), longer hours of access to these services, and an increased number of service points. Canada Post offers a variation on alternative parcel pick up with its community boxes, similar to cluster boxes (see figure 4). Some community boxes have parcel boxes built in. Carriers leave a note along with a key in the private box of a customer with a parcel delivery, and leave the parcel itself in the parcel section of the community box. If there is no parcel section, the carriers leave the parcel elsewhere for customer pick up (for example, at a convenient retail outlet). Swiss Post customers can use Swiss Post’s PickPost service at no additional cost. With PickPost, customers can select a time and place most convenient for them to pick up their parcels. Customers register online and receive a personal customer number. When customers place an order, for example, for a parcel delivery, they provide the sender with the address of their chosen PickPost collection point (there are approximately 350 countrywide). The customers are then notified by text or e-mail as soon as the item has been delivered. They have 7 days to retrieve the parcel from the location. Four of the posts we spoke with use partnerships or contracts with private sector businesses and shops to offer their customers parcel pick up. Both Posten AB and Australia Post use retail partners to offer postal products and services in place of traditional post offices. Canada Post has a network of authorized dealers it contracts with to offer postal products and services. As mentioned previously, foreign post officials told us they improved customer service by offering parcel pick up at more convenient locations with later hours and realized some reduction in costs due to, for example, offering services and products through shops that were nonpost-owned. This helped posts to decrease their infrastructure costs and rightsize their workforces to accommodate changes in their networks. In 2002, DP DHL introduced its “Packstation,” a set of automated, stand-alone lockers where packages are sent and received, and as of 2010 had 2,500 nationwide. 
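The PickPost arrangement amounts to a simple notification workflow: a registered customer directs a parcel to a chosen collection point, is alerted by text or e-mail when it arrives, and has 7 days to collect it. The following sketch is a hypothetical rendering of that flow; the class names, field names, and sample values are ours rather than Swiss Post's, and the only figures drawn from this report are the 7-day pickup window and the roughly 350 collection points.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List, Optional

PICKUP_WINDOW_DAYS = 7  # customers have 7 days to retrieve a delivered parcel

@dataclass
class PickPostCustomer:
    """Illustrative registration record; the field names are ours, not Swiss Post's."""
    customer_number: str
    contact: str            # mobile number or e-mail address for notifications
    collection_point: str   # one of roughly 350 collection points countrywide

@dataclass
class Parcel:
    parcel_id: str
    recipient: PickPostCustomer
    delivered_on: Optional[date] = None
    notifications: List[str] = field(default_factory=list)

    def deliver_to_collection_point(self, on: date) -> None:
        """Record arrival at the chosen collection point and notify the customer."""
        self.delivered_on = on
        self.notifications.append(
            f"To {self.recipient.contact}: parcel {self.parcel_id} is ready for pickup "
            f"at {self.recipient.collection_point}."
        )

    def pickup_deadline(self) -> date:
        """Last day the parcel may be retrieved (7 days after delivery)."""
        if self.delivered_on is None:
            raise ValueError("parcel has not yet been delivered")
        return self.delivered_on + timedelta(days=PICKUP_WINDOW_DAYS)

if __name__ == "__main__":
    customer = PickPostCustomer("CH-001234", "customer@example.com", "Bern main station")
    parcel = Parcel("P-98765", customer)
    parcel.deliver_to_collection_point(date(2011, 2, 1))
    print(parcel.notifications[0])
    print(parcel.pickup_deadline())  # 2011-02-08: last day to retrieve the parcel
```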
Packstations are typically located in high-traffic, public locations such as train stations. Figure 5 shows an example of a DP DHL Packstation. Customers can sign up for delivery (free of charge to both the sender and the receiver) and have their identity verified in person by the Post. When customers want to receive parcels at a Packstation, they can give the Packstation as the delivery address (instead of their home address). The parcel is then delivered to the Packstation and the customers are notified via text message or e-mail that they have a package. Delivery to a Packstation eliminates the need for home parcel delivery for the postal operator and allows customers to pick up their parcels more conveniently, according to DP DHL. Packstations can also be used to dispatch small packets and parcels 24 hours a day, 7 days a week. As of the end of January 2011, DP DHL had more than 1 million registered users who could collect or send parcels. According to the DP DHL officials, self-service parcel pick up machines have enhanced customer service and convenience because customers can use them at any hour of the day. We observed a demonstration of DP DHL’s Packstation at its Innovation Center. According to DP DHL officials, DP DHL created its Innovation Center in part for specialists from academic, industrial, and technological fields to exchange information about logistics solutions. The center offers a method for networking and development of solutions from start to finish, and visitors to the center can view DP DHL’s developments. Swiss Post also offered a parcel locker-style delivery option for its customers. Customers could receive their parcels at a parcel box, which is a large, lockable storage box at the entrance to a neighborhood. Both DP DHL and Swiss Post noted that their parcel locker options were aimed at increasing customer service and convenience, and decreasing costs related to delivering packages (the last mile). Perhaps one of the most dramatic examples of changes to delivery frequency and hybrid mail offerings was Itella’s Antilla Living Lab trial. Antilla is a small village in the countryside near Porvoo, Finland, where Itella was testing new ideas for service in remote areas. Changes in customers’ use of the mail and digitization, among other reasons, motivated Itella to begin this trial, which involved 124 households and 17 small companies. Mail was physically delivered to customers’ homes or businesses 2 days a week. On the 3 other days that mail was delivered, it was placed in assigned post lockers located near the Antilla village main shop. Customers received a text message daily letting them know if they had mail, so they could decide whether to travel to the postal locker and retrieve their mail. Any mail not retrieved was delivered to the customer on 1 of the 2 days when mail was delivered to the home or business. First- and second-class letters were scanned and then delivered electronically to the customer’s NetPosti digital account. These letters were then placed back in their envelopes and delivered in paper form. Customers could also receive letters directly to NetPosti from selected senders digitally without having to convert the letter to physical format. All of the foreign posts we met with have modernized their legacy brick and mortar retail networks in response to customers’ changing use of the mail. All of the foreign posts we met with offer increased access to retail products and services. 
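The Antilla trial combined physical and digital delivery according to a simple weekly rule: delivery to the home or business on 2 days, delivery to the village lockers with a text alert on the other 3 delivery days, and scanning of first- and second-class letters into the customer's NetPosti account. The sketch below is a hypothetical rendering of that routing rule for illustration only; the particular day assignments and the function and class names are our assumptions, not Itella's implementation.

```python
from enum import Enum

class Channel(Enum):
    HOME_DELIVERY = "delivered to the customer's home or business"
    VILLAGE_LOCKER = "placed in the customer's assigned locker near the village shop; SMS alert sent"

# Assumed day assignments, for illustration only: two home-delivery days and
# three locker days, matching the trial's 2-day/3-day split.
HOME_DELIVERY_DAYS = {"Tuesday", "Friday"}
LOCKER_DAYS = {"Monday", "Wednesday", "Thursday"}

def route_mail(weekday: str, is_letter: bool) -> list:
    """Return the handling steps for a mail piece on a given delivery day."""
    steps = []
    if is_letter:
        # First- and second-class letters were scanned and sent to the customer's
        # NetPosti account before physical delivery.
        steps.append("scan letter and deliver the image to the customer's NetPosti account")
    if weekday in HOME_DELIVERY_DAYS:
        steps.append(Channel.HOME_DELIVERY.value)
    elif weekday in LOCKER_DAYS:
        steps.append(Channel.VILLAGE_LOCKER.value)
    else:
        steps.append("no delivery scheduled; hold for the next delivery day")
    return steps

if __name__ == "__main__":
    for day in ["Monday", "Tuesday", "Saturday"]:
        print(day, "->", route_mail(day, is_letter=True))
```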
As communication has changed and broadband usage has increased, foreign posts have adapted their offerings to better fit their customers’ needs. Alternative retail points are points of service other than traditional postal retail facilities or post offices owned and operated by the postal operator. Examples include postal counters at retail partners such as a grocery store in Sweden or a pharmacy in Canada, as well as the postal operators’ Web sites, among others. All of the posts we met with provided customer access to online services. The six posts offered a range of services on their Web sites, from sales of stamps and other products to online bill payment. Figure 6 shows Swiss Post’s online store, where a customer can buy stamps and chocolate, create postcards, and perform numerous other transactions, such as subscribing to magazines and registering for Swiss Post Box, described previously. In addition to online offerings, all six of the foreign posts we reviewed have developed postal applications for mobile devices. Foreign posts told us they have decreased operating costs (largely facility- and labor-related) by forming partnerships with private-sector entities that own and operate retail sites. As table 1 indicates, the majority of the foreign posts we met with maintained a network with a majority of partner-owned and -operated retail facilities rather than their own traditional post offices. For example, in 2009, about 98 percent of DP DHL’s retail network was owned and operated by partners. Posten AB in Sweden has dramatically changed its postal retail network since 2001, replacing all traditional brick and mortar post-owned and -operated post offices with partner-owned and -operated retail facilities. When we visited a postal retail partner outlet at a grocery store in Stockholm, a sign in front clearly displayed the services offered inside (see fig. 7). Four or five of the store’s 20–25 employees were trained to work at the postal counter. One employee said he spent about 4 hours a day selling postal services and products and the rest of the time he sold nonpostal items. We witnessed him moving back and forth, like a photo counter clerk at a drugstore. If he had questions, he could call a Posten AB-operated phone assistance service line. He said he liked the work because there was little down time. Figure 8 depicts another example of a partnership between a postal operator and a private business. All of the posts we interviewed reported positive results from retail modernization in one or both of two key areas: (1) customer service, which has improved because, for example, the retail partner is open more hours each day, and on weekends, compared with a traditional post office, and (2) savings in operating and labor costs achieved by closing post-owned and -operated facilities. In Australia, 81 percent of retail facilities were owned and operated by partners and were mostly located in commercial areas. When Australia Post modernized its retail network, some post offices were moved to newer, smaller, lower-cost structures, such as facilities in a commercial area “where the shoppers are.” For example, we visited an Australia Post partner’s retail facility (licensed post office) in Kilmore, Australia, that was relocated from the outskirts of town to a strip mall; Australia Post officials said the move saved the Post 20 percent in operating costs. Some foreign posts have traditionally offered banking or financial services, while others no longer do. 
Three of the posts that we visited were continuing or seeking to expand these services. For example, Swiss Post and Australia Post, which have traditionally offered financial or banking services, offered in-person bill paying services. Swiss Post also provided some financial services and was seeking a banking license. Australia Post provided banking services as an agent for more than 70 banks and financial institutions. Similarly, DP DHL and its partners offer banking and financial services. Other posts—Itella and Posten AB—have experienced mixed results from offering nonpostal services. Several foreign posts made government services available. Australia Post and Itella offered government services in their retail facilities, including those in remote or rural areas where citizens may have difficulty traveling to a large city. For example, Itella offered voter registration services as well as fishing licenses. Itella and the Finnish Ministry of Justice first established a framework for the voter registration service under which local government officials could choose whether to make advance voting available in post offices in their municipalities. Itella officials reported that in 2010 Itella offered this service to 127 post offices and its goal was for 93 post offices to provide this service. Australia Post reported its identity verification service was used, for example, when applying for a government service or to open a bank account. This service was a successful revenue-generator, according to Australia Post, leveraging the trusted brand of the Post and provided a needed government service in rural areas. For example, citizens seeking to work with children could fill out an identity verification application in order to initiate a required background check. A senior official noted that services such as assisting a customer with a passport application would generally need to be provided in person, with assistance, rather than online. Figure 9 depicts a retail employee in Australia assisting a customer with questions related to nonpostal services. All foreign posts have traditionally sold postal products, such as stamps and envelopes at their retail facilities. In the past 5 years, several posts began selling other retail products as a means to increase revenue. For example, Itella exclusively partnered with a major Finnish design firm to license merchandise including designed packaging, gift wrapping, postcards, and gift items, resulting in what it reported as a successful partnership. Itella officials noted that sales of other nonpostal products, such as jewelry and candy, have not been as successful. They said selling luxury items such as jewelry required focused work by the salespeople to make the sale. In addition, they said the market for luxury items was limited. Officials reported that although candy was inexpensive, sales have not been successful because candy was available for purchase in so many other places. Figure 10 depicts nonpostal items for sale. In Australia, officials told us some nonpostal products such as children’s books sold well. However, their future plans place less emphasis on selling products and more on selling services, including identity and financial services. Both Posten AB and Canada Post introduced alternative retail products. Canada Post officials stated they discontinued many of the sales because the products were not making money. 
However, a number of the services—including the selling of money transfers, wireless prepaid, and long distance services, among other government services—have been maintained and are growing. In most cases, we were unable to determine the revenue generated from retail products and services because the information was proprietary or it was not reported. Foreign posts have faced resistance to change and challenges similar to those USPS faces, and they have developed communication, outreach, and labor transition strategies to address stakeholder concerns and address resistance to changes. Lessons to be learned from foreign posts’ experiences modernizing their retail and delivery networks include the following: 1. Strategic outreach and coordination with governments can address political resistance. Foreign posts have developed public relations campaigns to inform both national and local government officials as well as the public about the need for modernization and the benefits of improved access to services. They have also coordinated with local governments to resolve concerns raised by communities affected by facility closures. For example, Sweden’s Posten AB developed a comprehensive public campaign and communication effort to inform its stakeholders of what is perhaps one of the most expansive and earliest examples of retail network transformations of any of the posts we met with. (Video of Sweden’s national campaign effort, courtesy of Posten AB.) Part of this communication effort was intended to help change the perception of “the post as a place” to “the post as a service.” Despite this extensive and planned effort, Posten AB officials said the Post’s brand and customer satisfaction ratings suffered, taking 10 years to recover. 2. Strategic communication and outreach with mailers, small businesses, and retail customers. Foreign posts communicated with and reached out to customers to increase acceptance of changes and to better meet customers’ needs, including providing alternatives before implementing major retail network changes. In some cases, they have established ongoing outreach with their customers to determine how they can best serve them. In addition, when adjustments to the posts’ retail networks meant that alternative facilities would be further away for some customers than their traditional post offices, the posts emphasized that the new locations would have extended operating hours. Posten AB and DP DHL officials, in Sweden and Germany respectively, told us they made efforts to show customers and other stakeholders that although post-owned and -operated retail facilities were closing, the new retail partnerships offered more access points and made postal products and services more convenient to obtain. If Swiss Post planned to close, replace, or relocate a post office, the Post was required by law to attempt to reach an agreement with the affected community, according to officials. Typically, Swiss Post approached the area officials first and offered a variety of solutions, such as creating an alternative retail access point if a partner could be found. The Post negotiated with the local community on the best way to address closures or moves. Although Swiss Post owned 88 percent of retail facilities at the end of 2009, it reported that it was considering increasing the number of nonpost-owned facilities. In some cases, such as in Australia, when closures or relocation were planned, the affected community first had an opportunity to propose an alternative. 3. 
A labor relations strategy can ease transition. Foreign posts also anticipated employee resistance and developed strategies to address how changes could affect employees and to assist employees in making the necessary transition to changes in operations. For example, for retail network changes—such as closures—strategies were developed to assist employees in making the necessary transition to any downsizing. Foreign postal officials told us that although they viewed the modernization efforts as necessary, some of the labor changes were unpopular; however, they also said that support for affected employees helped alleviate some of the resistance the posts faced. In most cases, either the foreign posts negotiated changes with the labor unions or created industrial relations (labor relations) strategies to help transition the workforce, or both. For example: Australia Post created a labor relations policy that was central to how it managed structural change, which involved reducing excess staff. This policy was known as the RRR Agreement: Redeployment to other areas of the business. Australia Post sometimes needed to retain and attract more employees in certain areas. To do this, Australia Post could redeploy an employee from one area to another job or facility. Retraining to provide skills required in new positions. In some cases, if employees were not redeployed, Australia Post could retrain the employees. Redundancy. When Australia Post needed to reduce the number of positions, it tried to obtain them through voluntary means, using incentives to encourage employees to retire. Itella reported a number of ways in which it supported its employees through changes in Finland: Career consultants worked with employees to find new placements within Itella when they would lose their jobs. Job-seeking training was provided in collaboration with the Finnish government’s employment office, which helped employees fill out job applications and search for jobs. Outplacement services within Itella helped employees find new jobs in other companies or helped employees set up their own business. Monetary support measures helped to alleviate the effects on the individual in case of redundancy, relocation, or a change from full-time to a part-time employment. Posten AB officials noted that labor was not as resistant to changes in Sweden as in some other countries, in part, because labor was represented on Posten AB’s governing board, which ultimately helped the relationship between management and labor. USPS has taken steps to modernize its retail and delivery networks, but, as we have previously reported, it faces ongoing challenges related to legislative restrictions and stakeholder resistance to changes. Some stakeholders have been concerned about how USPS’s network changes might affect jobs, services, employees, and communities, particularly in small towns or rural areas. According to some members of Congress, and others, post offices are an important aspect of small towns, providing them with an economic and social anchor. Another concern is that inadequate USPS financial resources could impede efforts to optimize postal mail processing, retail, and delivery networks by limiting available funding for transition costs. In addition, concerns have been raised about whether USPS should be allowed to introduce new products and services that are not directly related to postal activities. Some of the steps USPS has taken to modernize include the following: 1. 
Issued an action plan that outlined a general modernization strategy. USPS issued a plan on March 2, 2010, that identified modernization strategies for addressing financial challenges associated with declining letter mail volumes. Among other changes, USPS requested flexibility to change delivery frequency (adjusting delivery days to better reflect current mail volumes and customer usage); expand access (modernizing customer access by providing services “where the customers are” and increasing and enhancing customer access through partnerships, kiosks, and improved online offerings, while reducing costs); and expand products and services (permitting USPS to evaluate and introduce more new products consistent with its mission, allowing it to better respond to changing customer needs). 2. Identified the need for legislative clarification and changes. After issuing its action plan, USPS developed legislative proposals for the changes needed to implement it. In September 2010, Senator Carper introduced a bill that included some of USPS’s legislative proposals related to these areas. Senator Collins introduced another bill that included a provision to encourage USPS to enhance its retail presence and consider the impact of any changes to retail facilities on small communities and rural areas. 3. Conducted initial outreach to stakeholders and is developing a more detailed long-term strategy for modernizing its retail network and services. Senior USPS officials we met with stated that they are continuing to work on developing strategies for modernizing their retail and delivery networks. USPS plans to make these details public in early 2011. USPS has held a number of meetings to involve stakeholders in this ongoing discussion, such as a 2010 innovation symposium whose goal was to discuss potential initiatives to help address its financial sustainability. Similarly, USPS has begun a dialogue with its customers, including major mailers, through its previously mentioned innovation efforts. As USPS moves forward with its modernization efforts, the experiences of foreign posts suggest that outreach and communication strategies will be critically important to help it address concerns raised by the public, Congress, postal unions, and the Postal Regulatory Commission. In recent testimony we discussed some of the public policy questions related to legislative proposals that require congressional decisions to determine how USPS should modernize to achieve sustainable financial viability. For example: Should USPS have greater flexibility to rightsize its retail network and workforce, which may involve closing post offices and moving retail services to alternative commercial locations that are often open more days and longer hours? Should it retain retail facilities and provide new nonpostal products and services to help it cover costs and generate net revenues? Foreign posts did not plan and implement changes to their networks, nor realize results, overnight. In many cases they planned for the modernization 10–20 years in advance and were met with a great deal of stakeholder resistance. Thus, Congress and USPS urgently need to make critical decisions about actions USPS will take to restore its financial viability. In doing so, they can potentially learn from a number of strategies that foreign posts have used to modernize their networks. We provided a draft of this report to USPS for review and comment. 
USPS provided comments in a letter from the Chief Financial Officer/Executive Vice President dated February 4, 2011. These comments are presented in appendix III and our evaluation of them is summarized below. USPS also provided technical comments, which we incorporated where appropriate. USPS generally agreed with our conclusion that it must pursue modernization and that modernization could be more quickly and effectively instituted if legal and political barriers are removed. USPS suggested that we use the word “improve,” rather than “modernize”, when discussing its retail and delivery networks. USPS believes some stakeholders interpret modernize to mean automate. Further, USPS stated that many of its modernization and optimization efforts in the areas of delivery, networks, access, and products and services have been underway for some time and that political barriers have slowed these efforts. We recognize that USPS has made changes to its retail networks by transitioning to alternative retail locations, and we are evaluating these efforts in an ongoing review. However, in this report we focused not only on delivery and retail changes made by foreign posts, but also on the strategies used to address resistance to change. We thus use the term modernization to emphasize much more than automation. USPS also stated that incremental steps are not going to get the USPS where it needs to be to serve its customers in the 21st century. USPS listed some modernization efforts it described as similar to those mentioned in this report. Finally, we share USPS’s view that there is an urgent need for actions that will restore its financial viability and modernize it to meet the challenges of the 21st century. We are sending copies of this report to the appropriate congressional committees, the Postmaster General, the Chairman of the Postal Regulatory Commission, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Retail and delivery modernization efforts of postal operators encompass aspects of “universal service” or “community service,” such as requirements related to delivery of the mail and access to postal retail services and facilities, among other areas. In the United States, annual appropriations language mandates mail delivery 6 days a week. The Postal Reorganization Act of 1970 placed requirements on delivery and retail access standards by mandating that the USPS provide “prompt, reliable, and efficient services to patrons in all areas and shall render postal services to all communities” as well as “…a maximum degree of effective and regular postal services to rural areas, communities, and small towns where post offices are not self-sustaining.” The objectives of our work were to describe (1) what major initiatives foreign postal operators (foreign posts) have implemented to improve or enhance mail delivery, and what have been the results; (2) what key initiatives foreign posts have implemented to improve or enhance their retail network and what have been the results and (3) what strategies used by foreign posts in their modernization efforts can help U.S. 
Postal Service (USPS) as it tries to improve its financial condition and customer service. To describe the major initiatives implemented by foreign posts in mail delivery and retail networks, and the results of those initiatives, we initially reviewed documents, presentations, and articles from a variety of postal experts, trade journals, and textbooks related to postal innovations in the digital, retail, and delivery areas. We also conducted interviews with experts in the international postal field. We then employed a case study approach to select six countries from among industrialized countries, using the following criteria: change in delivery frequency, change in retail network, use of innovations and technology, rank of broadband usage per 100 inhabitants, and convenience and cost of travel. In addition to the above criteria, we also considered the following to ensure that our selection of the countries included a range of characteristics: location and size of the country, number of delivery points in 2008 (the last year for which data were available), 2008 mail volume (the last year for which data were available), and regulatory framework and business model. After applying these criteria, we selected six countries—Australia, Canada, Finland, Germany, Sweden, and Switzerland—and conducted site visits to each, where we met with the foreign posts and a number of postal stakeholders to gain their perspectives on the effects of modernization efforts. These stakeholders included private sector companies and businesses with partnerships with foreign posts; mailer and marketing associations; private sector mailers; and labor unions. In addition, we observed a number of demonstrations of foreign posts’ innovations, toured their retail facilities, and received briefings on their retail and delivery networks, strategic planning, innovation strategies, and other areas. We reviewed foreign post documentation including annual reports from 2006 to 2010, strategic plans related to retail and delivery network changes and innovations, reports on postal services and products, fact sheets on foreign post innovations and initiatives, and sustainability reports and strategic reviews. In addition, we reviewed documentation from foreign posts’ stakeholders, such as foreign post regulator studies and position papers related to foreign posts’ modernization efforts. We also distributed a request for information to the six foreign posts we reviewed to gain additional information regarding the goals, strategies, and results of the initiatives implemented by the posts. Some of the information we received from the foreign posts in response to our request for information was not complete, and most data we requested were not quantified by the posts. In addition, the information we received was not consistent across the six foreign posts. We did not assess whether the foreign posts’ operational changes were profitable since that information is either proprietary or not consistently reported. To describe how the strategies used by foreign posts could help USPS, we reviewed USPS’s action plan entitled Ensuring a Viable Postal Service for America: An Action Plan for the Future, reports from the USPS Office of Inspector General and the Congressional Research Service, relevant congressional testimonies, pending postal reform legislation, and our past work. We also interviewed key officials from USPS to address innovations and strategies related to its retail and delivery networks. 
We conducted this performance audit from January 2010 to February 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Teresa Anderson (Assistant Director), Tonnyé Conner-White, Elizabeth Eisenstadt, Brandon Haller, Armetha Liles, Margaret McDavid, Sarah McGrath, Joshua Ormond, and Crystal Wesco made key contributions to this report.
The foreign postal operators (foreign posts) in industrialized countries in GAO's review have been experiencing declining letter mail volumes and have modernized their delivery and retail networks to address this challenge. As requested, GAO reviewed the innovations and initiatives that foreign posts are using and the lessons the U.S. Postal Service (USPS) might learn to help it address plummeting mail volumes and record financial losses. This report examines initiatives foreign posts have implemented to improve mail delivery and retail networks and related results, and modernization strategies used by foreign posts that can inform consideration of proposals to improve USPS's financial condition and customer service. GAO selected foreign posts in Australia, Canada, Finland, Germany, Sweden, and Switzerland as case studies based on characteristics, such as delivery and retail changes and country size and location. GAO reviewed foreign posts' documents, including annual reports and strategic plans related to delivery and retail network changes and innovations. GAO met with foreign post officials, toured their retail facilities, received briefings on their delivery and retail networks and other areas, and met with regulators, labor unions, and mailers to obtain their views on the effects of their posts' modernization efforts. USPS generally agreed with GAO's findings and mentioned both its own modernization efforts and the barriers it faces. The foreign posts GAO reviewed have developed alternative delivery choices for customers that, according to the posts, have reduced costs and improved customer satisfaction and service. All of these posts now offer digital (purely electronic) or hybrid mail (a blend of physical and digital) options. Some posts offer parcel pick up at retail facilities like grocery stores, which are open longer than post offices, and are often owned and operated by businesses that partner with the posts, thus reducing costs. One post allows customers to pick up parcels from a publicly-located machine, or parcel locker, that is available 24 hours a day. The selected foreign posts have modernized their legacy brick and mortar retail networks in response to customers' changing use of the mail. For example, they have expanded retail access through alternatives such as Internet sales and partnerships with retail businesses such as grocery stores or pharmacies, while reducing the number of post-owned and -operated facilities. According to all of the posts, retail modernization has either (1) improved customer service, in some cases because the partner stays open longer, or (2) reduced operating and labor costs, by closing post-owned and -operated facilities, or both. The foreign posts faced resistance to change and challenges similar to those USPS faces, and they have used strategies that could be helpful to USPS as it moves forward with plans to modernize its own delivery and retail networks. In particular, they relied on outreach and communication strategies to inform public officials and customers of increased access to products and services and help gain acceptance for retail network changes. A few posts also developed labor transition plans or strategies under which they provided training, relocation and job search services, and financial incentives to support employees who were negatively affected by the modernizations. 
While USPS has taken steps in the past year to generate ideas for modernizing its retail and delivery networks, the experiences of foreign posts suggest that it will be critically important for USPS to fully develop and implement similar outreach, communication, and labor transition strategies.
The United States, along with its coalition partners and various international organizations and donors, has embarked on a significant effort to rebuild Iraq. The United States is spending billions of dollars to reconstruct Iraq while combating an insurgency that has targeted military and contractor personnel and the Iraqi people. The United States has relied heavily on private-sector contractors to provide the goods and services needed to support both the military and reconstruction efforts in Iraq. DCAA is responsible for providing contract audits for DOD, along with general accounting and financial advice to DOD acquisition officials negotiating government contracts. DCAA performs many types of audits for DOD, including audits of contractor proposals, audits of estimating and accounting systems, and incurred cost audits. Generally, the results of a DCAA audit are intended to assist contracting officials in negotiating reasonable contract prices. Normally, DCAA audits contractors’ proposals and provides contracting officials advice on the reasonableness of contractor costs prior to negotiations. DCAA also conducts audits of cost- type contracts after they are negotiated to ensure costs incurred on these contracts are acceptable. Relying on cost information provided by the contractor and assessing whether the costs comply with government regulations, DCAA may identify certain costs as questioned or unsupported. DCAA defines questioned costs as costs considered to be not acceptable for negotiating a reasonable contract price, and unsupported costs as costs that lack sufficient supporting documentation. DCAA reports its findings to contracting officers for consideration in negotiating reasonable contract prices. DCAA audit reports represent one way DCAA can assist contracting officials as they negotiate government contracts. Also, contracting officials may invite DCAA to participate in contract negotiations to explain audit findings and recommendations, and may factor DCAA audit findings into evaluations of contract proposals. The Federal Acquisition Regulation (FAR) acknowledges that DCAA’s role is advisory, and assigns the contracting officer responsibility for ensuring that the contractor’s proposed price is fair and reasonable. While DCAA audit recommendations are nonbinding, DCAA’s Contract Audit Manual states that contracting officials deviating from DCAA advice during negotiations should explain the reasons why they disagreed with DCAA. DOD Directive 5105.36 also enables DCAA to withhold payments by (1) suspending payment for specific incurred costs lacking documentation or (2) disapproving costs that do not conform with applicable regulations. DCAA can issue a Form 1 to a contractor notifying it that funds will be withheld, which initiates review of the challenged costs by the contracting officer. Between February 2003 and February 2006, DCAA issued hundreds of audit reports that collectively identified $3.5 billion in questioned and unsupported costs on Iraq-related contracts, primarily through audits of contractor proposals. In some cases, DCAA was asked to audit multiple iterations of contractor proposals as these proposals were revised over a period of time. Based on information provided by DCAA, contracting officials have responded to audit findings that questioned $1.4 billion. As a result, contracting officials negotiated contract cost reductions of $386 million according to DCAA. 
DCAA does not render an opinion about costs it determines to be unsupported; therefore, it does not track the resolution of unsupported costs. Between February 2003 and February 2006, DCAA tracked 349 audit reports that identified about $3.5 billion in questioned and unsupported costs on Iraq contracts. Of this total, DCAA classified $2.1 billion as questioned and $1.4 billion as unsupported. DCAA identified these questioned and unsupported costs in audits related to 99 different contractors. Most of the questioned and unsupported costs were identified through audits of contractor proposals. Specifically, in DCAA’s database, more than three-quarters of the audits with questioned and unsupported costs were classified as “forward pricing activity,” which primarily involves auditing contractor proposals or parts of proposals. In some cases, DCAA reviewed multiple contractor proposals for the same work. DOD officials told us that contractors submitted multiple proposals because requirements changed or the proposal was considered inadequate for negotiations. For example, over a 6-month period, DCAA issued four audit reports on three different proposals for a task order related to an oil mission. Each audit superseded the prior one, and DCAA updates its information to reflect the most recent report. While DCAA tracks and records how questioned costs are addressed by the contracting official in negotiation, it does not track similar information about unsupported costs. Based on the data provided to us by DCAA, as of July 3, 2006, contracting officials had responded to 169 of DCAA’s 349 Iraq audit reports. DCAA considered the findings to be addressed because the contracting official had documented the result of negotiations with the contractor. Based on information provided by DCAA, of the $1.4 billion in questioned costs addressed by contracting officials, the officials sustained $386 million. DCAA defines sustained costs as the costs reduced through negotiations directly attributable to findings reported by the DCAA auditor for proposal audits. Based on the information provided by DCAA, as of July 2006, the remaining $700 million in questioned costs is still in process. DCAA’s information does not reflect what actions, if any, the contracting officials have taken to respond to these audit report findings. According to a DCAA official, DCAA does not track how unsupported costs are addressed. DCAA guidance implementing the FAR requires the contracting official to report on the disposition of questioned amounts, but not specifically on whether contractors provided sufficient documentation to eliminate unsupported costs. Because unsupported costs indicate a lack of contractor information that is needed to assess costs, DCAA cannot and does not render an opinion on those costs. Therefore, DCAA does not track the resolution of unsupported costs. For the 18 audit reports selected for this review, we found that DOD took a variety of actions in response to audit findings, including not allowing some contractor costs. Based on contract documentation we reviewed, the DOD contracting officials generally considered DCAA’s questioned and unsupported cost findings when negotiating with the contractor. We found that DOD contracting officials were more likely to use DCAA’s advice when negotiations were timely and occurred before contractors had incurred substantial costs. 
In contrast, DOD officials were less likely to remove questioned costs from a contract proposal when the contractor had already incurred these costs. In addition to identifying questioned and unsupported costs, DCAA can also withhold funds from the contractor, which it chose to do in eight of the cases included in our review. Other actions taken by the DOD contracting officials included inviting DCAA to attend meetings or negotiations with the contractor and conducting additional analyses to respond to audit findings. Our review of the government’s documentation of contract negotiations for the selected audit reports showed that DOD generally considered DCAA audit findings. The majority, or 13 of the 15 memorandums, identified how the contracting officials addressed DCAA audit findings. Most memorandums discussed questioned and unsupported costs identified by DCAA in areas such as labor, equipment, material, and subcontracts, but some lacked specific detail on how DCAA audit findings were addressed. However, in two cases we were unable to determine from the negotiation documentation how DCAA audit findings were used. To address the more than $1 billion in questioned costs related to the audits selected, we found that the DOD contracting officials were less likely to remove questioned costs from a contractor proposal when the contractor had already incurred these costs. For example, in 5 audit reports comprising about $600 million of questioned costs reviewed, we found that the DOD contracting officials determined that the contractor should be paid for nearly all of the questioned costs (all but $38 million), but reduced the base used to calculate the contractor’s fee (by $205 million). By reducing the base, the DOD contracting official reduced the contractor’s fee by approximately $6 million. Generally, when entering into a contract, the government and contractor reach agreement on the key aspects of the contract, including the scope and price of the work, before the work is authorized to start. However, the FAR enables the government to authorize the contractor to begin work before doing so in certain cases, such as when the government demands the work start immediately and it is not possible to negotiate a fully defined contract in sufficient time to meet the requirement. Our past work has shown that the use of undefinitized contract actions can pose risks to the government, such as potentially significant additional costs. Acquisition regulations generally require that the contracting official define the scope and costs of such contracts within 180 days of the contract’s start date. According to contract officials, urgent conditions in Iraq led the government to initiate such contract actions without first specifying their scope of work and agreeing to the contract price. In many cases we reviewed, contractors completed some work and incurred substantial costs well before the government negotiated the contract price. Of the 18 audits covered in our review, 11 audit reports corresponded to contract actions where more than 180 days had elapsed from the beginning of the period of performance to final negotiations. For nine of these audits, the period of performance DOD initially authorized for each contract action concluded before final negotiations took place. For example, DCAA questioned $84 million in its audit of a task order proposal for an oil mission. 
In this case, the contractor did not submit a proposal until a year after the work was authorized, and DOD and the contractor did not negotiate the final terms of the task order until more than a year after the contractor had completed work (see fig. 1). The DOD contracting officer paid the contractor for all questioned costs but reduced the base used to calculate contractor profit by $45 million. As a result, the contractor was paid about $3 million less in fees. In the final negotiation documentation, the DOD contracting official stated that payment of incurred costs is required for cost-type contracts, absent unusual circumstances. This same rationale was used in negotiations on several other task orders. In contrast, in the few audit reports we reviewed where the government negotiated prior to starting work, we found that the portion of questioned costs removed from the proposal was substantial. For example, in 3 audit reports related to a logistics support task order, DCAA questioned $204 million. Since the government and the contractor negotiated the terms of this task order prior to the onset of work, the contractor had not incurred any costs at the time of negotiation (see fig. 2 for timeline). According to DCAA’s calculations, $120 million of these questioned costs was removed from the contractor’s proposal as a result of its audit findings. In addition to identifying questioned and unsupported costs, DCAA can withhold funds from contractors in certain situations. The cognizant administrative contracting officer may subsequently determine that the withheld costs should be approved for payment to the contractor. We found DCAA withheld $236 million from contractors related to 8 of the audits included in our review. Subsequently, as a result of either additional documentation provided by the contractor or the DOD administrative contracting officer determination, $148 million of the withheld funds was released to the contractor and the government did not pay $36 million to the contractor. The remaining $51 million has not been settled yet—a DOD contracting official is reviewing the available information to make a decision about whether or not these costs will be reimbursed. In the audit reports reviewed, the vast majority of funds withheld by DCAA ($171 million of the $236 million) related to dining facilities services provided at U.S. troop camps in Iraq. The contractor was directed by the Army to build, equip, and operate the dining facilities located at U.S. troop base camps in Iraq and to provide four meals a day to the camp populations. The population at each camp was specified in the Army’s description of the work to be performed by the contractor. In addition, contractor and government representatives counted the number of troops served at each mealtime. However, the Army’s description of the work did not specify whether the contractor should bill the government for the camp population identified in the work description or the actual head count for each meal. Generally, the government was billed based on the estimated base camp population, but DCAA stated that the billings should be based on the actual head count, which was lower than the estimated base camp population included in the work description. As a result, DCAA withheld funds by reducing payment for dining facilities costs on contractor billings by 19.35 percent. Ultimately, the DOD and the contractor negotiated a settlement where it was agreed that $36 million would not be paid to the contractor. 
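The fee effects in the two base-reduction examples described earlier in this section can be roughly back-calculated from the reported figures. On cost-type contracts, the negotiated fee is generally computed as a percentage of a cost base, so reducing the base reduces the fee proportionally. The report does not state the negotiated fee rates; the rates below are only implied by the rounded dollar amounts and are shown as an illustrative sketch, not as figures reported by DCAA or DOD:

\[ \Delta\text{fee} \approx r \times \Delta\text{base} \]
\[ r_1 \approx \frac{\$6\text{ million}}{\$205\text{ million}} \approx 2.9\%, \qquad r_2 \approx \frac{\$3\text{ million}}{\$45\text{ million}} \approx 6.7\% \]

Here \( r_1 \) is the rate implied by the five audit reports in which the fee base was reduced by $205 million and the fee by about $6 million, and \( r_2 \) is the rate implied by the oil mission task order in which the base was reduced by $45 million and the fee by about $3 million.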
In another example, DCAA withheld a payment to the contractor for its subcontractor’s costs of $12 million related to electrical repair services. DCAA determined the cost to be unreasonable based on a comparison of price quotes from other subcontractors for similar electrical repair services. As a result of the action taken by DCAA to withhold payment, an Army Corps of Engineers contracting official reviewed contractor data. The contracting official determined the subcontractor’s price was reasonable given the short contract time frames. A memo outlining the Army Corps of Engineers’ rationale for paying the contractor for the subcontractor costs states, “Corps of Engineers representatives in Baghdad directed all of the contractors there to do whatever was necessary, regardless of cost, to meet schedule commitments.” The $12 million withheld was released to the contractor. DCAA officials expressed concerns that the DOD contracting official had not sought their assistance when settling this issue with the contractor. We found that DOD contracting officials took a variety of other actions to address DCAA audit findings. In many of the cases we reviewed, DCAA was invited to participate in meetings or negotiations with the contractor. For example, to address the questioned costs identified in the task orders for the oil mission, the DOD contracting official convened a meeting that included contractor representatives, DCAA officials, and Army Corps of Engineers officials. A DCAA official told us that, as a result of discussions in this meeting, DCAA stopped questioning some costs, such as the percentage paid to contractor employees for working in a dangerous area and price adjustments the contractor paid to its subcontractors for fuel from Turkey. In addition, the DOD contracting official asked DCAA to provide alternative negotiation positions, and in response DCAA developed memorandums outlining several options for each task order. Ultimately, the DOD contracting official used a negotiation position presented in each memorandum to establish the government’s negotiation position with the contractor. When asked if he was satisfied with the resolution of the questioned costs, a DCAA official involved in the process told us he thought the DOD contracting official did the best job he could. In another example, DCAA attended negotiations between the contractor and the government for an electricity contract. DCAA’s role at this meeting was to answer questions and to reiterate its opinion. Subsequent to negotiations, DCAA participated in additional meetings with the contractor and the Army Corps of Engineers to ensure that its concerns with the contractor’s purchasing system were addressed. DCAA officials told us that problems with the contractor’s purchasing system were related to the unsupported costs identified in the audit. DCAA officials involved in this process told us that they were generally satisfied with the actions taken by DOD and the contractor to resolve their audit findings. In some cases, DOD officials conducted additional analyses in response to DCAA’s audit findings. For example, for the audit reports we reviewed related to the oil mission, DCAA questioned the cost of fuel and transportation based on a comparison between the price paid by the contractor and the price paid by the Defense Energy Support Center (DESC) when it took over the mission from the contractor in April 2004. In response, DOD collected additional information to update the fuel and transportation cost comparison. 
For example, although DESC negotiated prices based on trucks shipping fuel to Iraq three times per month, in practice the trucks were only able to make two trips per month, a fact that increased the cost of the mission to DESC. Overall, the additional analyses provided a rationale for the DOD contracting official to pay the contractor for some of DCAA’s questioned costs. We requested comments from DOD on a draft of this report, but none were provided. We are sending copies of this report to the Secretary of Defense, appropriate congressional committees, and other interested parties. We will make copies of this report available on request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me at (202) 512-4841 or by e-mail at [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Other major contributors to this report were Penny Berrier Augustine, Tim Bazzle, Greg Campbell, David E. Cooper, Tim DiNapoli, Julia Kennon, John Krump, Eric Lesonsky, Janet McKelvey, Guisseli Reyes-Turnell, Raffaele Roffo, and Jeffrey Rose. To determine the costs identified by the Defense Contract Audit Agency (DCAA) as questioned or unsupported, we analyzed data from DCAA’s management information system on all DCAA Iraq-related audit reports with questioned or unsupported costs issued between February 2003 and February 2006. To develop an understanding and assess the reliability of the information included in the database, we held discussions with and obtained documentation from DCAA officials located in Fort Belvoir and we conducted electronic testing for obvious inconsistencies and completeness. During our review of the database we identified records for 20 audits, 1 percent of the database, which listed 2 different totals for questioned and/or unsupported costs. For these 20 audits, based on our conversation with a DCAA official, we selected the amount that reflected the most current total for questioned and/or unsupported costs. We determined the data used in our review to be sufficiently reliable for our purposes. To determine the actions taken by the Department of Defense (DOD) in response to DCAA audit findings, including the extent to which funds were withheld from contractors, we selected 18 audit reports comprised of (1) the 10 reports with the highest dollar amounts of questioned and unsupported costs and (2) a selection of 8 of the remaining audit reports with questioned and unsupported cost dollars above $5 million. We excluded audit reports issued after November 1, 2005 from the selection to ensure that DOD contracting officials had adequate time to resolve the audit findings. The questioned and unsupported costs for the 18 audits total approximately $1.8 billion, or about 50 percent of all questioned and unsupported costs identified through DCAA’s database on Iraq contracts. These reports include (1) 11 audits of 8 task order proposals to provide logistics support for U.S. troops, (2) 3 audits of task order proposals to provide fuel and fuel transportation, (3) 3 audits of task order proposals for electricity services, and (4) 1 audit of a proposal for contract management and administrative support functions. The 18 selected audit reports represent work performed by 4 contractors. 
For the selected audits, we held discussions with DCAA officials located in Fort Belvoir; Arlington, Texas; Lexington, Massachusetts; Seattle, Washington; Kent, Washington; and Iraq. We collected key documentation related to each audit report, such as DCAA’s calculation of the resolution of questioned costs and dollars withheld (Form 1). For each audit, we also held discussions with DOD contracting officials located in Rock Island, Illinois; Dallas, Texas; Winchester, Virginia; and Iraq. We interviewed these officials to determine the actions taken by DOD to address DCAA’s audit findings and obtained key documentation such as price negotiation memorandums. We conducted our review from March 2006 through September 2006 in accordance with generally accepted government auditing standards.
The government has hired private contractors to provide billions of dollars’ worth of goods and services to support U.S. efforts in Iraq. Faced with uncertainty as to the full extent of rebuilding Iraq, the government authorized contractors to begin work before key terms and conditions were defined. This approach allows the government to initiate needed work quickly, but can result in additional costs and risks being imposed on the government. Helping to oversee their work is the Defense Contract Audit Agency (DCAA), which examined many Iraq contracts and identified costs it considers to be questioned or unsupported. The Conference Report on the National Defense Authorization Act for Fiscal Year 2006 directed GAO to report on audit findings regarding contracts in Iraq and Afghanistan. As agreed with the congressional defense committees, GAO focused on Iraq contract audit findings and determined (1) the costs identified by DCAA as questioned or unsupported; and (2) what actions DOD has taken to address DCAA audit findings, including the extent to which funds were withheld from contractors. To identify DOD actions in response to the audit findings, GAO selected 18 audit reports representing about 50 percent of DCAA's questioned and unsupported costs on Iraq contracts. GAO requested comments from DOD on a draft of this report, but none were provided. Defense Contract Audit Agency audit reports issued between February 2003 and February 2006 identified $2.1 billion in questioned costs and $1.4 billion in unsupported costs on Iraq contracts. DCAA defines questioned costs as costs that are unacceptable for negotiating reasonable contract prices, and unsupported costs as costs for which the contractor has not provided sufficient documentation. This information is provided to DOD for its negotiations with contractors. Based on information provided by DCAA, DOD contracting officials have taken actions to address $1.4 billion in questioned costs. As a result, DOD contracting officials negotiated contract cost reductions of $386 million, according to DCAA. Based on the information provided by DCAA, as of July 2006, the remaining $700 million in questioned costs is still in process. Because unsupported costs indicate a lack of contractor information that is needed to assess costs, DCAA cannot and does not render an opinion on those costs. Therefore, DCAA does not track the resolution of unsupported costs. For the 18 audit reports selected for this review, GAO found that DOD contracting officials took a variety of actions to address DCAA's audit findings, including not allowing some contractor costs. In the contract documentation GAO reviewed, DOD contracting officials generally considered DCAA's questioned and unsupported cost findings when negotiating with the contractor. GAO found DOD contracting officials were more likely to use DCAA's advice when negotiations were timely and occurred before contractors had incurred substantial costs. For example, in three audit reports related to a logistics support task order negotiated prior to the onset of work, DCAA questioned $204 million. According to DCAA's calculations, $120 million of these questioned costs was removed from the contractor's proposal as a result of its audit findings. In contrast, DOD officials were less likely to remove questioned costs from a contract proposal when the contractor had already incurred these costs. 
For example, in five audit reports comprising about $600 million of questioned costs reviewed, GAO found that the DOD contracting officials determined that the contractor should be paid for all but $38 million of the questioned costs, but reduced the base used to calculate the contractor's fee by $205 million. By reducing the base, the DOD contracting official reduced the contractor's fee by approximately $6 million. In addition to identifying questioned and unsupported costs, DCAA has the option of withholding funds from the contractor and chose to withhold a total of $236 million for eight cases included in this review.
The 21st Century program is authorized to provide a wide range of activities for K-12 students and their families to: 1. provide opportunities for academic enrichment, including providing tutorial services to help students—particularly students who attend low-performing schools—meet state and local academic standards in core subjects such as reading and mathematics; 2. offer students a broad array of additional services, such as drug and violence prevention, counseling, art, music, recreation, and technology programs that are designed to reinforce and complement instruction in the regular school day; and 3. offer literacy and related educational development opportunities to families of students served. Education has established performance objectives for the 21st Century program, including objectives related to student outcomes. These objectives state that 21st Century participants will demonstrate educational and social skills and exhibit positive behavioral changes, and that they will improve in outcomes such as academic performance, school attendance, and rates of disciplinary incidents and other adverse behaviors (see fig. 1). The 21st Century program is administered by state educational agencies. Education provides state agencies with annual formula grants, which in 2015 ranged from about $6 million to $132 million. The formula that determines the funding amount for a particular state is based in part on the percentage of total students from low-income families enrolled in K-12 public schools and how much that state spends per pupil on education. States, in turn, competitively award funds to sub-grantees, which may be school districts or community-based organizations, such as those that focus on youth development. By law, states must award sub-grants of a minimum of $50,000 per year for periods of 3 to 5 years. Sub-grantees oversee one or more physical locations, referred to as “centers,” where grant-funded services are provided to participating students and adult family members. Centers may be located in schools, churches, community centers, or other spaces (see fig. 2). They must provide services during non-school hours or periods when school is not in session, such as before school, after school, on weekends, or during summer vacations and school breaks. In making awards, states must give priority both to applications that propose to serve students who attend schools identified as needing improvement and that are submitted jointly by at least one school district receiving funds under Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA), as amended, and at least one community-based organization or other public or private entity. In addition, states are authorized to include additional priorities in their sub-grant competitions so long as they are aligned with the statute’s requirements and priorities. Education’s Office of Academic Improvement, under the Office of Elementary and Secondary Education, is responsible for overseeing states’ implementation of the 21st Century program. The Office of Academic Improvement is also responsible for providing ongoing technical assistance to states and monitoring state performance through on-site and desk monitoring visits and ongoing communications with state program officials. The Office of Academic Improvement conducts on-site monitoring for the 21st Century program for each state grantee every 3 years, or more frequently if needed. 
The Office of Academic Improvement is also responsible for reviewing states’ grant applications for, among other things, compliance, including reviewing certain assurances that states are required to provide under ESEA. For example, Education must review states’ assurances that they will grant awards only to eligible entities that propose to serve students who primarily attend schools with at least 40 percent of students from low-income families. In addition, states must collect plans from applicants describing how activities funded by the sub-grant will continue after 21st Century funding ends, as well as plans for how centers will address participants’ transportation needs, among other things.

Education collects program data from states in order to report annually on whether 21st Century programs have met performance targets for their objectives. These data include information on sub-grantee characteristics, center activities, and participants’ demographics and outcomes. States monitor sub-grantee performance and the submission of program data to Education. Under ESEA, states are required to conduct ongoing program evaluations to assess the effectiveness of their 21st Century programs and activities and disseminate the findings. Education reviews state evaluations as a part of its monitoring process.

Together, states received more than 4,000 21st Century sub-grant applications and funded nearly 60 percent of them (about 2,400) in their most recent sub-grant competitions, according to our survey of all 50 states and the District of Columbia. States’ criteria for these competitions vary, and most states score applications on a point system based on a variety of criteria. To help determine whether applicants have the capacity to implement high-quality programs, over half of states award applicants additional points based on the quality of their program design (33 states) and use of evidence-based practices (28 states). To sustain these programs once grant funding ends, about half of the states reported awarding additional points based on the applicants’ level of support from schools and school districts (25 states) and other external organizations (23 states). (See fig. 3.) Additionally, 10 states reported in our survey that they provided additional points to applicants serving schools identified as the lowest performing or with the largest gaps in student achievement based on comparisons by race or socio-economic levels.

To provide adequate time to implement successful programs, a majority of states reported that they offer sub-grantees 21st Century funding for 5 years, the maximum number of years allowed under law. Specifically, 39 states said in our survey that they provide 21st Century funding to sub-grantees for 5 years. Education’s 2003 program guidance states that research suggests it takes approximately 5 years of continual revision and improvement for a community to fully implement a successful 21st Century program. Further, existing 21st Century sub-grantees are also eligible to re-apply for funding outside the competition period if their grant is close to expiring. Of the applicants that received an award in the most recent competition, about 45 percent (nearly 1,100) had previously received a 21st Century grant award.

States have provided different levels of funding for sub-grantees and their centers. Fifteen states reported the minimum and maximum amounts sub-grantees allocated to their centers.
Specifically, the amount that a sub-grantee’s center received ranged from a minimum of $100,000 to a maximum of about $660,000, with a median amount of $185,000 per center in school year 2015-2016. Officials in Idaho told us the amounts vary because each center has different needs and student numbers. They said their program’s average cost per student was $1,700 per year. They noted that they encourage centers to target students most in need of services instead of serving all students in a school, even if the school primarily serves students from low-income families. Officials in Texas told us that funding amounts to centers can vary based on other state requirements, such as a requirement to employ a full-time program director or coordinator at the sub-grantee or center level. Sub-grantees have used 21st Century funds to provide a broad array of services to K-12 students. Education’s 2014 annual report on the characteristics of 21st Century programs found that the most commonly provided activities in school year 2012-2013 were those to enrich academics, provide tutoring and homework help, and offer recreational activities. According to Education’s 2003 guidance, academic enrichment can include tutoring in core academic subjects and providing extra learning opportunities so that students can practice their academic skills through hands-on activities. The 2014 annual report also found that many centers supported building students’ academic skills in reading and math as well as in the arts, science, technology, cultural activities, and health. Further, at least two states gathered information on the time spent on specific activities as part of their state 21st Century program evaluations. In New Jersey, for example, a 2015 state evaluation examined time spent by students on specific 21st Century activities, including academic improvement/remediation, tutoring, academic enrichment, community service, and recreation. Of these, academic improvement/remediation was the activity on which students spent the most time. Oregon’s 2012 state evaluation also found centers were most likely to offer weekly activities they categorized as enrichment, homework help, or recreational. In terms of subjects targeted by the program, centers were most likely to report in the state evaluation that their centers’ weekly activities focused on reading, math, and arts or music. Officials in all 13 centers we visited in our four selected states said they offered a mix of academic support and enrichment activities. These activities generally focused on reading and math, science and technology, art and music, and fitness and nutrition. For example, centers we visited used computers and tablets to reinforce math skills or introduce new concepts, such as computer programming, for students who had limited access to technology. Officials at 7 of the 13 centers also said 21st Century activities can fill gaps in a school’s offerings such as providing music or computer instruction that schools may not have the opportunity to provide during the regular school day. (See fig. 4.) 21st Century centers serve different grade levels and use different staffing models. Education’s 2014 annual report and officials in the states we visited said that centers were primarily for elementary school students, but some also served middle and high school students. Further, Education’s report found that centers relied primarily on paid staff, but staff characteristics varied widely. 
For example, the report found that some centers used regular school-day teachers and non-school-based staff, such as youth development workers or staff without college degrees. In the 13 centers we visited, officials told us their staff included a mix of certified teachers and paraprofessionals who work during the regular school day, as well as staff from outside the school who work in the program only part-time.

Education’s guidance encourages sub-grantees to identify other sources of related funding, and Education’s monitoring protocols ask sub-grantees to describe how all of these resources will be combined or coordinated to offer a high-quality, sustainable program. Eight states in our survey reported that they require sub-grantees to match Education funds with funding from other sources. For example, one state required sub-grantees to match 30 percent of their 21st Century grant award, with at least 10 percent required from sources outside the participating school district. Even if a state does not require matching funding, 21st Century applicants for sub-grants must identify federal, state, and local programs with which they will combine or coordinate their 21st Century program to make the most effective use of public resources. In our survey, states reported that the two most frequent funding sources sub-grantees used to supplement their 21st Century funding were the local school district (23 states) and other federal government programs (21 states), such as the Department of Health and Human Services’ Child Care and Development Fund and the Department of Agriculture’s Child and Adult Care Food Program. (See fig. 5.) Officials at several centers we interviewed also said they provide snacks and meals during the afterschool programs using Department of Agriculture funds. Ten states reported in our survey that sub-grantees frequently use other Education funding, such as Title I, Part A of ESEA, as amended, which provides financial assistance to districts and schools with high numbers or percentages of children from low-income families. In addition, officials in Idaho told us their sub-grantees also use funds from Education’s Migrant Education program, which provides educational and support services to migrant children.

Officials from 46 states reported in our survey that they allow sub-grantees to charge student fees. These states, however, generally had a policy that no child would be turned away due to a family’s inability to pay. Of these states, 27 reported they require sub-grantees to have a sliding scale for fees based on family income.

To encourage sub-grantees to secure funding from alternate sources, most states reported that they gradually reduce 21st Century sub-grants over the funding cycle. Specifically, officials in 37 states reported in our survey that they reduce funding for sub-grantees after an initial period. For example, one state reported that it awards full funding in the first 2 years, but then reduces funding to 80 percent of that amount in the third year, 60 percent in the fourth year, and 40 percent in the fifth year. Further, 15 states reported that they reduce funding levels for sub-grantees that have received 21st Century grants in previous funding cycles (“repeat sub-grantees”). Officials in Texas told us they fund repeat sub-grantees for 3 years rather than the usual 5 to preserve funds for new recipients. In Massachusetts, repeat sub-grantees can only apply for 50 to 75 percent of their previous grant amount.
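The declining schedule that one state described (full funding in years 1 and 2, then 80, 60, and 40 percent of that amount) can be translated into year-by-year amounts as in the sketch below; the initial award amount is hypothetical.

```python
# Minimal sketch of the declining funding schedule one state described:
# full funding in years 1 and 2, then 80, 60, and 40 percent of that amount.
initial_award = 185_000  # hypothetical annual award; actual amounts vary by sub-grantee

schedule = [1.00, 1.00, 0.80, 0.60, 0.40]  # share of the initial award by program year

for year, share in enumerate(schedule, start=1):
    print(f"Year {year}: ${initial_award * share:,.0f}")

print(f"Five-year total: ${initial_award * sum(schedule):,.0f}")
```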
Few evaluations of the 21st Century program use methodologies that are appropriate for determining the effect of program participation on student outcomes. Of the 10 studies that do use such methodologies and that we included in our review of program effectiveness—4 state evaluations and 6 other studies—3 state evaluations found a positive relationship between program participation and school-day attendance and/or discipline. (See appendix I for more details on our process for selecting studies.) These state evaluations—from New Jersey, Texas, and Washington—examined behavioral outcomes such as school-day attendance and discipline. All three of these evaluations found there was a positive effect on school-day attendance, and two of them found there was also a positive effect on school-day discipline. For example, Washington’s state evaluation found that school-day attendance improved for students in grades 6 through 12 who participated in the program when compared to similar students who had not participated. Similarly, Texas’s state evaluation found participating students in grades 4 through 11 had improved school-day attendance. This effect was particularly strong for students who participated in the program 60 days or more, especially in grades 6 through 11. These students had an absentee rate that was more than 20 percent lower than that of non-participants. Additionally, Texas’s evaluation found that students in a statewide 21st Century pilot program with intensive academics showed improved school-day attendance. In New Jersey’s evaluation of students in grades 4 through 8, students in all grades had improved attendance; this improvement was generally larger in the higher grades, particularly 8th grade.

Further, one of the three meta-analyses of afterschool programs in general also identified similar positive effects on school-day attendance. This meta-analysis, published by Education in 2014, synthesized the results of 30 studies, including 6 studies of 21st Century programs. It identified positive effects on students’ academic motivation, a broad category of outcomes which included measures such as school-day attendance and homework completion.

Two of the four state evaluations we included in our review also examined school-day disciplinary incidents—which include fighting, bullying, or disruptive conduct that results in a student’s removal from the classroom—and both found that 21st Century program participation may improve outcomes, particularly for students who participated in the program for 60 days or more. Specifically, Washington’s state evaluation found an association between program participation and fewer disciplinary incidents for students in grades 3 through 12 who attended 60 days or more. In Texas, student participation in the program for 60 days or more was also associated with fewer disciplinary incidents. Separately, Texas’s evaluation also found that the presence of certain program characteristics in the state’s 21st Century academic pilot program was associated with lower rates of disciplinary incidents. In particular, the evaluation showed that centers in the pilot that taught students face-to-face rather than via computer were associated with fewer disciplinary incidents. Moreover, this positive association was stronger for centers whose curricula focused on general learning strategies rather than on specific subject area skills.
Regarding the effect of the 21st Century program on reducing delinquency, a 2016 meta-analysis of 12 studies examining afterschool programs in general, including one on programs funded through the 21st Century program, found no significant effect. This meta-analysis identified levels of delinquency—incidents such as arrest rates and violent behavior—as an important measure for afterschool programs because many of them focus on reducing such behaviors by providing a safe, supervised environment where children spend less time with potentially delinquent peers. Nevertheless, when the researchers averaged the results of all afterschool programs in their review, they found no significant effect on delinquency rates. Results of some studies we reviewed demonstrated a positive association between participation in 21st Century programs and improved academic outcomes for selected groups of students or for particular types of activities. For example, in Texas the program was associated with higher math scores for students who participated 60 days or more. However, none of the 10 studies in our review observed consistently better scores in either math or reading in program participants’ state assessments. (See appendix I for more details on our process for selecting studies.) The 10 studies we reviewed identified differing effects of participation in 21st Century programs on students’ math scores. For example, evaluations for Virginia in grades 4 through 12 and for Washington in grades 4 through 8 found the program had no significant effect on math scores. On the other hand, two other state evaluations indicated that 21st Century programs were associated with improved math scores for certain grade levels. Specifically, New Jersey’s evaluation found a positive association between program participation and higher math scores, but it was not observable for students in 4th or 6th grade. Additionally, results from Texas’s evaluation found an association between program participation and increased math scores among middle school students. None of the state evaluations we reviewed showed a significant association between participation in a 21st Century program and increased reading scores. In fact, Texas’s evaluation showed lower reading scores among students who participated in the program compared to students who did not participate. In particular, participants in grades 4 and 5 had lower reading scores than non-participants. However, the Texas evaluation showed no observable decline in reading scores among students in middle school who attended 30 days or more. Evaluations for Washington, Virginia, and New Jersey found no significant relationship between program participation and increased reading scores for grades 4 through 8. Additionally, a 2014 academic study examined differences in program characteristics among 58 21st Century programs in New York City and found no association with improved reading scores for students. Afterschool programs’ inconsistent effects on math and reading scores may be the result of these programs serving students with different needs, according to recent research. In particular, two studies we reviewed indicated that the impact of specific types of activities in 21st Century programs may differ depending on the students being served. Specifically, they used statistical modeling to analyze the relationships between program features and the different educational needs among 21st Century program participants. 
One study found that programs focusing solely on academic content were associated with larger increases in reading scores for students with limited English proficiency than programs that mixed academic content with other activities, although students overall benefited from both solely academic programs and those that mixed academic content with other activities. The other study found that activities with structured interactions with adults—including opportunities for collaboration and meaningful verbal exchanges—were associated with increased reading scores for middle school students. For elementary students, however, there was no association between these activities and improved reading scores.

While Education has developed performance measures to align with some 21st Century program objectives—primarily student academic outcomes—it has not aligned its measures with other program objectives related to key student behavioral and socio-emotional outcomes. As previously noted, these objectives describe a goal of program participants demonstrating improvement in three areas—educational, social, and behavioral—with outcomes such as improved academic performance and school attendance, and lower rates of disciplinary incidents and other adverse behaviors. Education’s current performance measures were established in 1998. They address participating students’ English and math grades and state test scores as well as some behavioral outcomes, including homework completion, class participation, and classroom behavior. However, Education does not measure two other behavioral outcomes that are included in 21st Century program objectives: improved school-day attendance and a decrease in disciplinary incidents, although research has shown positive program effects for these two outcomes more often than for academic outcomes. Some states also recognize the importance of measuring behavioral outcomes associated with these programs, with about half of states (26 states) reporting in our survey that they choose to measure at least one of them. The remaining states, however, are not measuring either of these behavioral outcomes.

In addition, Education has not established any performance measures for socio-emotional outcomes, although social skills are also included in program objectives, and socio-emotional learning is an important component of 21st Century implementation across states we visited and surveyed. According to Education’s website, socio-emotional learning involves students’ knowledge and skills necessary to understand and manage emotions and establish positive relationships, among other things. Twenty-seven states reported in our survey that they currently measure or are developing measures for at least one socio-emotional outcome, including student relationships with adults or communication skills. Again, the remaining states are not measuring any of these socio-emotional outcomes. At the centers we visited, programs offered various activities geared toward socio-emotional learning, including activities that paired students with adult mentors and developed students’ problem-solving skills. Officials in two states we visited told us that socio-emotional learning is a major component of their 21st Century programs’ philosophy. (See fig. 6 for a comparison of 21st Century program objectives and Education’s program performance measures.)

Several factors contributed to Education’s decision to maintain its current 21st Century performance measures for over 15 years.
In particular, Education officials told us they have not substantially revised performance measures since 1998 in part because the program’s authorization lapsed from 2008 through 2016, with ESSA providing funding authorization starting in fiscal year 2017. As a result, the officials said this created uncertainty about potential future program changes. In addition, they said they were reluctant to revise the measures given the costs to states of tracking and reporting on new measures, as well as concerns about maintaining the continuity of data collected over time. Education officials told us they may revise the program’s performance measures when they implement program changes pursuant to ESSA and that in doing so, they will consider including additional behavioral performance measures as well as socio-emotional measures. Education’s lack of 21st Century performance measures for some key program objectives, however, has limited the usefulness of its performance data. Specifically, without measures for student behavioral and socio-emotional outcomes, Education lacks useful data on the extent to which the program is achieving its stated objectives, especially in areas where the program is likely to have positive effects on student outcomes. While about half of states reported they already measure at least one behavioral or socio-emotional outcome, absent measures for all states, Education will continue to report incomplete program results to Congress and the public. Further, leading practices in performance management call for agencies to use performance information as the basis for decision making. These practices also state that aligning performance measures with program objectives can enhance the use of information for management decision making. While Education has taken steps to promote data quality, including data accuracy and completeness, it lacks reasonable assurance that the data submitted by states are accurate. Education officials told us they began using a new online data system in school year 2014-2015, called 21APR, in part to improve data quality. Among other changes, 21APR does not permit users to advance to the next page until the data on the current page are complete. Education officials said that this can help prevent missing data—a feature that was absent in the previous data system. In addition, in 2016 Education provided states with regional training sessions and instructional documents for data entry. However, Education has not independently assessed the accuracy of 21st Century data submitted in 21APR. Education officials told us they require states to check their data submissions rather than Education conducting these checks because the agency does not have access to sub-grantees’ and centers’ source information. Education’s own Information Quality Guidelines state that data should be processed and edited to help ensure they are accurate. Performing accuracy checks would not necessarily require that Education have access to the source data. For example, it could perform basic logic checks between fields—such as verifying that the number of student participants in one grade is fewer than the total number of participants in all grades. Education officials told us they may explore additional checks for accuracy, but they did not provide a timeframe for doing so. Education officials do review states’ procedures for data quality assurance, although some states have expressed concerns about the quality of data they receive from sub-grantees. 
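A basic logic check of the kind described above could be implemented simply; the sketch below illustrates the example given in the text—verifying that no single grade's participant count exceeds the reported total—using hypothetical field names, since the actual 21APR record layout is not described here.

```python
# Minimal sketch of a field-level logic check like the one described in the text.
# Field names are hypothetical; the actual 21APR record layout may differ.
def check_grade_counts(record: dict) -> list[str]:
    """Return a list of problems found in one center's participation record."""
    problems = []
    total = record["total_participants"]
    for grade, count in record["participants_by_grade"].items():
        if count > total:
            problems.append(f"Grade {grade} reports {count} participants, exceeding the total of {total}.")
    if sum(record["participants_by_grade"].values()) > total:
        problems.append("Sum of grade-level counts exceeds the reported total.")
    return problems

# Example with invented figures: grade-level counts sum to more than the reported total.
record = {"total_participants": 120, "participants_by_grade": {"4": 50, "5": 45, "6": 40}}
print(check_grade_counts(record))
```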
Since 2015, when Education implemented new monitoring protocols that included a review of states’ data quality procedures, Education officials have monitored 20 states and have not found any states that are noncompliant. However, in our interviews and survey responses, officials from 10 states commented that they had concerns about the quality of program data they collect from sub-grantees. For example, unlike Education’s previous data system, 21APR does not allow states to upload data files, and several states expressed concerns that they must instead enter data manually. One state official commented that this can increase the risk of errors.

Without independently checking accuracy, Education may be unable to detect and address potential data errors in the 21APR system—data that Education eventually needs to use to conduct analyses that inform management decisions, identify sub-grantees needing technical assistance, and contribute to annual performance reports and budget justifications. Without reasonable assurance that data in the 21APR system are of sufficient quality, Education may not be able to effectively use the data for such decision making and reporting going forward, despite the fact that the new system was created, in part, to improve data quality. Further, absent better information on the quality of its data, Education will be unable to communicate any data limitations in its reports to Congress and the public. Our prior work has identified leading practices in performance management, which state that agencies should disclose limitations associated with data used in reporting and indicate what action they plan to take to address the limitations.

States in our survey gave mixed reviews on the usefulness of Education’s technical assistance for developing and conducting state-level evaluations. Twenty-nine states reported that they found Education’s technical assistance on conducting state evaluations to be very or moderately useful. On the other hand, 12 states reported they found it only slightly or not at all useful; 8 states reported they had not received any technical assistance on this topic; and one state said it was unaware that Education provided such technical assistance. Further, 40 states commented in our survey that they face challenges in evaluating sub-grantee performance that may limit their capacity to conduct high-quality evaluations. For example, six states reported they have difficulty designing evaluations that can determine whether a change in student outcomes results from a student’s participation in a 21st Century program or from other factors, such as interventions during the school day. This difficulty is evidenced by the fact that we identified very few state evaluations that met our evidence standards for inclusion in our review of program effectiveness, as noted above. In addition, four states reported that they face challenges in defining and establishing performance measures for students who participate in the program, such as measures for behavioral and academic outcomes. For example, one state commented that it is difficult to measure the benefits of enrichment activities on academic performance.

When reviewing state evaluations as part of its monitoring process, Education also found that states are having difficulty conducting their required evaluations. For example, in recent monitoring findings, Education officials said they found that five states did not have plans in place to conduct these evaluations.
In addition, according to technical assistance reports provided as a part of the monitoring process, three states had difficulty conducting comprehensive evaluations, such as establishing the appropriate scope of work and timeframes for conducting evaluations and developing appropriate performance measurement tools.

According to Education officials, the agency has not provided written guidance to states on 21st Century program evaluations since 1999, when it provided states non-regulatory guidance in the form of a guide to help them evaluate their programs and use the results to make program improvements. However, since 1999, there have been significant changes in the 21st Century program. Specifically, the program was reauthorized twice, resulting in changes to requirements for measuring and evaluating program performance. First, the 2002 reauthorization added new requirements for states to describe in their funding applications how the state will evaluate the effectiveness of programs, including developing performance indicators and measures to evaluate programs and activities. It also required sub-grantees to conduct periodic local evaluations to assess progress toward achieving goals. Second, as a result of requirements under ESSA, which reauthorized the 21st Century program in 2015, additional changes are planned starting in school year 2017-2018. Specifically, states will have to evaluate their programs in conjunction with new data collection and evaluation planning requirements, including requirements to track student progress over time and to include state standardized test scores and other indicators of student success, such as improved school-day attendance.

Federal standards for internal control state that when significant changes occur in how an agency achieves its objectives, it should periodically review policies and procedures for continued relevance and effectiveness in achieving its objectives or addressing related risks. Further, these standards state that written policies and procedures can help ensure that necessary actions are taken to address risks to achieving the entity’s objectives. In addition, our prior work has identified several leading practices in agencies’ use of evaluations, which state that agencies should promote capacity building to evaluate program activities as a key strategy for supporting objectives.

Education officials told us they are developing a technical assistance plan to improve the support the agency provides to states to implement the 21st Century program. These officials told us they are considering the topics and areas of concern that they will address, including developing and conducting effective evaluations and assessments, and they expect to finalize the list of topics in summer 2017. However, Education officials told us that the agency has not determined what type of technical assistance it will provide to states on conducting evaluations. Absent new written, non-regulatory guidance to states that addresses the areas in which they struggle most, Education may miss opportunities to help states improve their capacity to conduct high-quality evaluations and ultimately improve their programs.

Education’s technical assistance to states does not effectively address the challenges most states face in helping their 21st Century sub-grantees continue to operate their programs once grant funding ends, according to state and sub-grantee officials.
In our survey, 35 states reported that their centers often face challenges providing the same level of services to students when the grant funding ends. Officials in 20 states reported that centers in their states generally reduce the level of services or cease to operate once 21st Century funding ends. Two states commented that less than 10 percent of centers had been able to continue operating after the 21st Century grants expired. These concerns were echoed by officials at 10 of the 13 21st Century centers we visited, who told us they had major concerns about sustaining their programs. Twenty centers that commented that they will be unable to sustain operations or maintain the same level of services after 21st Century funding ends had difficulty securing funds from other sources. For example, officials in one state commented that it is often difficult for centers to obtain funding from local school districts because district budgets are already strained by supporting school-day programs. Officials in three other states commented that there is no state funding dedicated exclusively to programs outside the regular school day.

By law, states must require sub-grant applicants to submit a preliminary plan for how their programs will continue to operate once grant funding ends. Education’s monitoring protocols call for Education officials to ask states if applicants have sustainability plans and how states monitor sub-grantees’ implementation of the plans. A majority of states (42 states) reported in our survey that they require sub-grantees to provide a written sustainability plan, while 6 states reported they do not. However, in its monitoring efforts, Education told us it did not identify any states that are currently out of compliance with this requirement.

Education may be missing opportunities in its monitoring efforts to collect information on states’ strategies and practices for program sustainability—information that could be useful for sharing promising practices across states. As a part of its monitoring process, Education officials told us they discuss with state officials how they ensure compliance with the requirement to have such sustainability plans in place, but they do not examine the quality of the plans. Officials in 11 states in our survey commented they would benefit from opportunities to collaborate across states. States do appear to have valuable information on sustainability to share with each other. For example, 25 states—in our survey of all 50 states and the District of Columbia—reported providing additional points or credit based on applicants’ ability to leverage external funds and sustain programs when selecting sub-grantees. Further, in the states that collect information on what happens after 21st Century funding ends, most reported centers having some success in finding private or nonprofit funds to help replace Education’s grant funds. States reported that the most common sources of replacement funding were student fees charged to participants (17 states), private foundation funding (14 states), and non-profit funding (e.g., universities and community organizations) (13 states).

Education officials told us they provide technical assistance on 21st Century program sustainability through interactions such as an annual summer training conference on the program and an online learning portal.
However, Education’s July 2016 training conference—a forum where 21st Century participants share experiences and best practices—was generally not focused on state practices and policies. The sessions were generally focused on centers’ collaboration with the school or community, centers’ activities (e.g., literacy, science, and math), or how to report performance data. Officials in about a third of states reported in our survey that they did not know about or had no basis to judge Education’s technical assistance on sustainability. Another third of states reported that Education’s technical assistance in this area was slightly or not at all useful. For example, two states reported that the content of Education’s professional development, training, and other assistance is heavily geared toward sub-grantees. Education officials said they do have other interactions and forums, such as regional meetings, to share information with states. For example, Education started in-depth training in 2016 for state-level officials at four regional meetings for the Midwest, Northwest, South, and East, but the sessions did not cover program sustainability.

Federal standards for internal control state that information should be communicated in a form that enables an agency to achieve its objectives. In addition, our prior work has identified leading practices in federal collaboration that have shown that information sharing among grant participants, such as states, is important for effective grants management. Education’s efforts to help facilitate information sharing among states can help states identify practices that can be tailored to meet their individual needs and leverage their knowledge to address common challenges in continuing their programs over the long term. Unless Education undertakes such efforts for the 21st Century program, students and families who participate may be at greater risk of not receiving the full range of program benefits for as long as possible.

The 21st Century program is designed to fund programs outside the regular school day to improve academic and behavioral outcomes for K-12 students who are from low-income families or who attend low-performing schools. High-quality research on the effectiveness of this program is very limited due to several factors, including difficulty determining whether a change in students’ outcomes results from their participation in a 21st Century program or from other factors, such as interventions during the school day. Although existing research on effectiveness points to greater positive behavioral effects than academic effects, Education’s current performance measures do not address some key behavioral outcomes. The lack of performance measures for behavioral outcomes makes it difficult to determine whether the program is achieving some of its stated objectives—especially in areas where research has shown that the program is likely to have the greatest effect on student outcomes. Specifically, Education does not measure socio-emotional outcomes or two other behavioral outcomes included in program objectives: improved school-day attendance and discipline in the classroom. Absent these performance measures, Education is missing an opportunity to assess the full range of benefits of this program. In addition, Education has not sufficiently assessed the quality of its program data, further limiting its ability to assess program effectiveness.
Another factor hindering the agency’s ability to determine program effectiveness is the significant difficulty that states are experiencing in evaluating their programs. Although Education officials told us they are considering topics and areas of concern for additional technical assistance, Education has not yet provided states with sufficient guidance on developing rigorous evaluations. Unless Education takes steps to reasonably ensure the accuracy of data and provides written guidance to help states develop high-quality evaluations, it cannot ensure the program is effectively meeting its goals. Lastly, states are experiencing substantial difficulty in sustaining their programs after 21st Century funding ends, and many states expressed interest in collaborating with other states to address such challenges. Education is uniquely situated to take the lead in sharing information with states to help them address their sustainability challenges by helping them identify state policies and practices that have had some success. Sharing such information could help states address the challenges they face in continuing their programs over the long term.

We recommend that the Secretary of Education direct the Office of Academic Improvement to:

1. Expand its performance measures for the 21st Century program to address all program objectives. Specifically, Education should establish performance measures related to key behavioral outcomes, including student attendance and disciplinary incidents, as well as socio-emotional outcomes.

2. Conduct federal-level data checks on the accuracy of 21st Century program data submitted by states. Such checks could test for logical relationships between fields. Education should also publicly disclose and address any data limitations it identifies, as appropriate.

3. Provide written, non-regulatory guidance to states on developing and conducting high-quality 21st Century state evaluations to help address the difficulties states face in measuring program performance and effectiveness.

4. Use the information it collects from its monitoring visits and ongoing interactions with states to share effective practices across states for sustaining their 21st Century programs once program funding ends. This information could be shared using existing mechanisms such as Education’s meetings with 21st Century state coordinators.

We provided a draft of this report to the Department of Education for comment, and its written comments are reproduced in appendix III. Education also provided technical comments that we incorporated in the report as appropriate. Education neither agreed nor disagreed with our recommendations; rather, it generally noted that it will keep our recommendations in mind as it continues to implement changes in the program as a result of ESSA, and outlined steps it will take to address our recommendations.

In response to our recommendation that it expand its performance measures for the 21st Century program to address all program objectives, including those related to behavioral and socio-emotional outcomes, Education said it will keep our recommendation in mind. Specifically, the department stated that it is in the process of re-examining whether additional or revised measures should be developed to align more closely with the program’s statutory objectives under ESSA. Education also acknowledged the need to develop measures that provide sufficient information and data on program outcomes.
As stated in our draft report, the Department currently measures some student behavioral outcomes; in its response, Education described one such measure: the percentage of all program participants with teacher-reported improvements in behavior. Therefore, we modified our recommendation to focus on measuring the program’s key behavioral outcomes. In its response, Education also expressed concern about collecting data on student attendance and disciplinary measures, noting that it will require effective collaboration between states, districts, and other eligible entities. However, as we stated in our report, about half of states already collect data on at least one of these two measures. Further, as we stated, research has shown that 21st Century programs more often have positive effects on student attendance and reducing disciplinary incidents than on improving students’ academic outcomes. Given these effects, we continue to believe that it is critical for Education to measure student attendance and disciplinary incidents to obtain more complete, accurate information on this program’s effect on student outcomes.

In response to our recommendation that it conduct federal-level data checks on the accuracy of 21st Century program data submitted by states, Education commented that it plans to build additional data checks into the data system beyond its current checks on the data’s completeness. Specifically, Education anticipates that new technology enhancements in the data system will be designed to flag inconsistencies in data reporting. For example, the system may flag participation data that is significantly lower or higher than previously reported participation data. Further, Education indicated that it will consider whether auditors performing audits under the Single Audit Act can be asked and guided to do more checks on the accuracy and reliability of 21st Century program data.

Regarding our recommendation to provide written, non-regulatory guidance to states on developing and conducting high-quality 21st Century state evaluations, Education outlined several steps it has taken to assist states in the past. For example, Education said that it provided six states with individualized technical assistance on strategies related to developing statewide evaluations and measures. Education also noted that, to date, it has conducted two webinars on state evaluations and is in the process of including presentations from those webinars on its online learning portal so that states will have easy access to the information. In addition, Education stated that it included presentations on evaluation strategies in the past during its Summer Institute. Education also said it would consider whether additional guidance for all states was needed. While these are important steps, we do not believe they are sufficient. Twenty-one of 51 states (41 percent of states) reported in our 2016 survey that they found Education’s technical assistance for developing and conducting state-level evaluations only slightly or not at all useful; they had not received any technical assistance on this topic; or they were unaware that Education provided such assistance. Therefore, we continue to believe that Education should prepare written guidance to assist all states in developing and conducting high-quality program evaluations.
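The year-over-year flag Education described could work along the lines of the sketch below; the 25 percent threshold and the participation figures are assumptions used only to illustrate the idea.

```python
# Minimal sketch of the year-over-year consistency flag Education described.
# The 25 percent threshold and the figures are assumptions for illustration.
def flag_participation_change(prior: int, current: int, threshold: float = 0.25) -> bool:
    """Return True if current-year participation differs from the prior year by more than the threshold."""
    if prior == 0:
        return current > 0  # any participation after a zero year is worth a second look
    return abs(current - prior) / prior > threshold

# Example: 400 participants reported last year, 180 this year -> flagged for review.
print(flag_participation_change(prior=400, current=180))  # True
```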
Finally, regarding our recommendation to share effective practices across states for sustaining their 21st Century programs once program funding ends, Education stated that it hosts meetings twice a year for 21st Century state coordinators. At these meetings, Education officials share strategies with states related to program sustainability. Education stated that these meetings covered topics such as reducing the amounts of 21st Century grant awards by a percentage each year. However, Education officials told us in February 2017 that these meetings have not focused on topics related to program sustainability for several years. We continue to believe that Education should take the lead in sharing information with states to help them address their sustainability challenges by sharing information on state policies and practices that have shown some success. In its comments, Education stated that it has not held regional meetings with states for several years; however, in 2016 it held four regional meetings with states on implementing its new 21APR data system. Therefore, we modified our recommendation to emphasize that such information be shared through these types of existing mechanisms.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff should have any questions about this report, please contact me at (617) 788-0580 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

Our study of the 21st Century program was framed around four objectives: (1) how 21st Century funds are awarded and used, (2) what is known about the effectiveness of 21st Century programs, (3) the extent to which Education has effectively managed and used program data to inform decisions about the 21st Century program, and (4) the extent to which Education’s technical assistance has helped states assess and sustain high-quality programs once 21st Century funding ends. We focused our review on 21st Century afterschool programs because centers have historically provided more services after school than before school or during the summer. To address our four objectives, we used a variety of methods, including a Web-based survey of 21st Century state coordinators; a review of selected state evaluations and academic literature; a review of federal laws, regulations, and agency documents such as annual performance reports and guidance; interviews with federal and other officials; and site visits to four states.

To obtain information about the state sub-grantee award process, program sustainability, and technical assistance for the 21st Century program, we conducted a Web-based survey of the 21st Century state coordinator at each state educational agency in all 50 grantee states and the District of Columbia. We conducted the survey from August 2016 through November 2016. The survey covered several topics, including state processes for awarding 21st Century sub-grants to local communities, state-level program evaluations, performance measures, and sub-grantees’ financial sustainability, among other things. We received responses from all 50 states and the District of Columbia for a 100 percent response rate.
The survey included an introductory statement specifying that the survey focused on afterschool programs funded by the 21st Century program and collected information about federal-level guidance provided to state officials on this program. The quality of survey data can be affected by nonsampling error, which includes variations in how respondents interpret questions, respondents’ willingness to offer accurate responses, and data collection and processing errors. To minimize such error, we took the following steps in developing the survey and in collecting and analyzing survey data. In developing the Web survey, we pre-tested draft versions of the instrument with 21st Century program officials in four states to check the clarity of the questions and the flow and layout of the survey. On the basis of the pretests, we made revisions to the survey. Further, using a Web-based survey and allowing 21st Century coordinators to enter their responses directly into an electronic instrument created an automatic record for each state in a data file and eliminated the errors associated with a manual data entry process. In addition, the program used to analyze the survey data was independently verified to ensure the accuracy of this work.

In order to examine what is known about the effectiveness of 21st Century programs, we reviewed select state-level program evaluations and academic studies that reported outcomes for students who participated in the program. To identify relevant research, we took a two-pronged approach. First, we reviewed three state evaluations that Education determined in a 2012 report on 21st Century program grantee evaluation practices (the most recent available Education report on this topic) to have used quasi-experimental methods or statistical controls, such as a comparison group, to account for other plausible influences on the outcomes reported. We retrieved the most recent evaluations from those three states and, through interviews with researchers in the field, we identified three other quasi-experimental state evaluations published since 2012. We reviewed each of these state evaluations for the soundness of its methodology and data. In four cases where the published evaluation did not include enough information to complete our review, we contacted the states and the researchers to obtain additional data from their samples. Ultimately, we determined that four of the six state evaluations were appropriate for purposes of our research objective about the 21st Century program’s effectiveness.

Second, we conducted a comprehensive literature review of 104 academic studies of afterschool programs, including the 21st Century program, and we ultimately determined that six of these academic studies were sufficiently rigorous and appropriately scoped to include in our review. To identify academic studies on the effectiveness of 21st Century programs, we conducted a literature search through ProQuest. Our initial search terms were adapted from a 2014 report produced by Education that looked at the effect of 21st Century programs, along with other programs that increased the amount of time that children spent in school. Because our review was focused on afterschool programs, rather than the broader range of out-of-school-time or extended learning time programs covered in that study, we only searched for studies of afterschool programs and studies of 21st Century programs particularly.
We identified 104 papers initially and selected 25 of them on the basis of the following criteria:

Topic relevance – the studies covered 21st Century programs or afterschool programs that measured student outcomes and had some instructional component (i.e., programs were not only a study hall or a sports program).

Timeframe relevance – studies must have been published since 2012, in order to find studies that may not have been included in Education’s 2014 study, which examined studies published since 1998.

Publication status – studies were in their final form, not drafts.

Sample relevance – studies looked at afterschool programs in K-12 settings in the United States. Foreign school systems were excluded.

Design relevance – studies used experimental or quasi-experimental designs with well-formulated comparison groups or controls; or they used some statistical method to account for other plausible influences on the outcomes of the study.

Next, two analysts reviewed those studies a second time, retaining 13 studies that had a sample size above 100 students in the intervention group and evaluated more than one program or intervention site. The analysts further narrowed this sample to six studies, which we selected because they were focused on 21st Century programs. In addition to reviewing Education’s 2014 meta-analysis, in our initial search of 104 papers, the analysts also identified two additional meta-analyses focused on afterschool programs, which we included in our sample because they were of appropriate quality and scope to assess afterschool program effects.

We then conducted detailed reviews of the studies. These reviews entailed an evaluation of each study’s research methodology, including its research design, sampling frame, selection of measures, data quality, limitations, and analytic techniques, as well as a summary of its major findings. We also assessed the extent to which each study was relevant to assessing what is known about the effectiveness of 21st Century programs. Three studies had major research design limitations resulting from the lack of a rigorously formed comparison group. As such, these studies were not able to demonstrate whether the programs were responsible for the effects being measured or whether other factors may have contributed. After eliminating the three studies with major research design limitations, six studies remained in our review.

For all four objectives, we reviewed agency documents and interviewed agency officials and researchers who had conducted work on the 21st Century program. To examine how states and other entities have used 21st Century funds, we reviewed Education’s 2014 annual characteristic report for the 21st Century program. These annual reports provide summaries and analyses over time, based on program data collected from states in school years 2001-2002 through 2012-2013 using data from the Profile and Performance Information Collection System. In order to assess the reliability of the data underlying the reports, we reviewed agency documents regarding this data system, spoke with agency officials, and conducted electronic tests of data from the system for select data elements. We determined that the underlying data summarized and analyzed in Education’s 2014 annual characteristic report was sufficiently reliable for our purpose of reporting on program characteristics.
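The quantitative portion of the screening described above (publication year, intervention-group size, number of sites, and program focus) amounts to applying a set of filters to candidate studies, as in the simplified sketch below; the study records are invented, and the qualitative judgments involved in the actual review are not captured.

```python
# Simplified sketch of the quantitative screening criteria described above.
# Study records are invented; the actual review also applied qualitative judgments
# about topic, design, and relevance that are not captured here.
studies = [
    {"title": "Study 1", "year": 2013, "n_intervention": 250, "sites": 12, "focus": "21st Century"},
    {"title": "Study 2", "year": 2010, "n_intervention": 500, "sites": 30, "focus": "21st Century"},
    {"title": "Study 3", "year": 2015, "n_intervention": 80,  "sites": 5,  "focus": "afterschool"},
]

def passes_screen(study: dict) -> bool:
    return (study["year"] >= 2012                  # published since 2012
            and study["n_intervention"] > 100      # more than 100 students in the intervention group
            and study["sites"] > 1                 # evaluated more than one program or site
            and study["focus"] == "21st Century")  # focused on 21st Century programs

print([s["title"] for s in studies if passes_screen(s)])  # ['Study 1']
```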
To examine Education’s management of program data, we also reviewed Education documents, including budget justifications and documents on the 21st Century data systems, such as user guides, technical assistance documents for states, data dictionaries, and a 2013 Education Office of Inspector General report on federal and state oversight for the 21st Century program. To examine Education’s technical assistance, we reviewed Education’s program guidance to states, monitoring protocols and tools, and annual grantee satisfaction survey results, which generally cover Education’s largest grant programs, including the 21st Century program. In order to compare the program to established criteria, we reviewed relevant legislation; Office of Management and Budget guidance; federal internal control standards; and past GAO work pertaining to leading practices in performance management, information quality, program evaluation, and federal collaboration for grants management. We interviewed federal officials from Education’s Office of Academic Improvement, within the Office of Elementary and Secondary Education, regarding data management and use, research and evaluation, and program challenges. In addition, we interviewed stakeholders of the 21st Century program, including contractors and researchers. Finally, we observed Education’s annual Summer Institute, where Education provides technical assistance and training to state, sub-grantee, and center officials.

To learn about program activities and challenges, we conducted site visits to four states—Idaho, Massachusetts, Rhode Island, and Texas—to visit 21st Century centers and speak with program staff and state officials. We selected these states for diversity in geographic region, 21st Century formula grant funding levels to states, and student demographics. In each state, we interviewed the 21st Century state coordinator and other officials responsible for the program. In total, we visited 13 centers in the four states. During these visits, we interviewed center staff and observed afterschool program activities. Information we gathered on our site visits represents only the conditions present in the states and local areas at the time of our visits. Furthermore, our fieldwork focused on in-depth analysis of only a few selected states. On the basis of our site visit information, we cannot generalize our findings beyond the states we visited.

We conducted this performance audit from April 2016 to April 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Elizabeth Sirois (Assistant Director), Sheranda Campbell (Analyst-in-Charge), Lucas Alvarez, Ashley Chaifetz, Jamila Kennedy, and Kelsey Kennedy made significant contributions to this report. Assistance, expertise, and guidance were provided by Susan Aschoff, Carl Barden, Deborah Bland, James Bennett, Angie Jacobs, Thomas James, Kirsten Lauber, Ben Licht, Edward Malone, John Mingus, Amy Moran Lowe, Sara Pelton, and James Rebbe.
Education's 21st Century program—funded at about $1 billion annually since 2002—supports a broad array of activities outside the school day to improve student outcomes in high-poverty or low-performing K-12 schools. A statement accompanying the Consolidated and Further Continuing Appropriations Act of 2015 included a provision for GAO to review Education programs outside the regular school day. GAO examined (1) how 21st Century funds are awarded and used, (2) what is known about the effectiveness of these programs, (3) how Education manages and uses program data to inform decision making, and (4) Education's technical assistance for evaluating and sustaining programs. GAO conducted a 50-state survey of program officials, obtaining a 100 percent response rate. GAO also reviewed selected state program evaluations and academic studies on student outcomes, and observed program activities and interviewed officials in four states representing a range of grant size and location. The Department of Education (Education) awards 21st Century Community Learning Centers (21st Century) grants to states, which, in turn, competitively award funds to local organizations, which use them to offer academic enrichment and other activities to improve students' academic and behavioral outcomes. In their most recent grant competitions, states awarded 21st Century funds to nearly 2,400 organizations—including school districts and community-based organizations—based on a variety of criteria, such as the quality of their proposed program designs. Relevant research we reviewed that compared the outcomes of program participants with those of non-participants suggests that the 21st Century program is effective in improving students' behavioral outcomes, such as school-day attendance and reduced disciplinary incidents, more often than their academic outcomes. However, because Education's current 21st Century performance measures primarily focus on students' reading and math scores on state tests, Education lacks useful data about whether the program is achieving its objectives of improving students' behavioral outcomes, such as attendance and discipline—the areas where the program most frequently has a positive effect. Education officials have not substantially revised the program's performance measures since 1998, in part because its authorization lapsed from fiscal years 2008 through 2016. Leading practices in performance measurement call for federal agencies to align performance measures with program objectives. Education's technical assistance to states does not adequately address challenges states face in evaluating their 21st Century programs and sustaining them when program funding ends. About a third of states reported in GAO's 50-state survey that they face challenges in evaluating program performance, such as difficulty designing evaluations that shed light on program effects. Further, the 21st Century program was reauthorized twice, which resulted in significant changes to state requirements for evaluating programs. However, Education has not provided states written guidance on developing and conducting high-quality evaluations since 1999. Federal standards for internal control state that when significant changes occur, agencies should periodically review policies for continued relevance and effectiveness in achieving their objectives.
Absent written guidance to states on conducting high-quality evaluations, Education may miss opportunities to help states improve their capacity to conduct such evaluations to assure the program is meeting its goals. GAO is making four recommendations, including that Education expand its performance measures for behavioral outcomes and provide written guidance to states on conducting high-quality program evaluations. Education neither agreed nor disagreed with the recommendations, and outlined steps it is taking to address them.
The Aviation and Transportation Security Act established TSA as the federal agency with primary responsibility for securing the nation's civil aviation system, which includes the screening of all passengers and property transported by commercial passenger aircraft. TSA currently has direct responsibility for, or oversees the performance of, security operations at approximately 457 TSA-regulated airports in the United States, which implement security requirements in accordance with TSA-approved security programs and other TSA direction. At TSA-regulated airports, prior to boarding an aircraft, all passengers, their accessible property, and their checked baggage are screened pursuant to TSA-established procedures, which include, for example, passengers passing through security checkpoints where they and their identification documents are checked by TSOs and Travel Document Checkers, or by Screening Partnership Program employees. TSA uses multiple layers of security to deter, detect, and disrupt persons posing a potential risk to aviation security. These layers include three principal types of screening employees at airport checkpoints—Travel Document Checkers, who examine tickets, passports, and other forms of identification; TSOs, who examine persons and property, including checked baggage, using x-ray equipment, magnetometers, and other devices; and BDOs, who use SPOT to assess passenger behaviors and appearance. BDOs are the only type of TSA screening employee not deployed to all TSA-regulated airports, nor are they regularly deployed to all checkpoints within the airports where SPOT is in place. TSA deployed SPOT as an added layer of security to help deter terrorists attempting to exploit TSA's focus on prohibited items and other potential security weaknesses. Other security layers cited by TSA include intelligence gathering and analysis; passenger prescreening; random canine team searches at airports; federal air marshals; reinforced cockpit doors; federal flight deck officers; and the passengers themselves, as well as other measures both visible and invisible to the public. Figure 1 shows TSA's 20 aviation security layers. The grey area in figure 1 highlights four layers that apply to passengers and their property as they seek to board an aircraft. Airport LEOs, another layer of security cited by TSA, do not report to TSA and may not maintain a physical presence at smaller TSA-regulated airports. According to TSA, each one of these layers alone is capable of stopping a terrorist attack. TSA states that, in combination, their security value is multiplied, creating a much stronger system, and that a terrorist who has to overcome multiple security layers in order to carry out an attack is more likely to be preempted or deterred, or to fail during the attempt. The SPOT program utilizes behavior observation and analysis techniques to identify potentially high-risk passengers. Individuals who exhibit suspicious behaviors, including both physical and appearance indicators, may be required to undergo additional screening. Field agents and law enforcement officers of other federal agencies and entities—such as the FBI, the Secret Service, CBP, and FAMS—utilize elements of behavior detection analysis as a part of their work. In addition, some foreign entities, such as Israel's El Al airlines, use behavior detection and analysis techniques as part of their security efforts.
However, TSA emphasized to us that the SPOT program is unique among these entities because it uses a point system to help identify suspicious persons on the basis of their behavior and appearance and because behavior detection and analysis are the central focus of SPOT. Officials from the other agencies stated that their field personnel incorporate behavior detection as one of many skills used in their work; in contrast, behavior detection is the primary element of the BDOs' work. SPOT trains BDOs to look for and recognize facial expressions, body language, and appearance that indicate the possibility that an individual is engaged in some form of deception and fears discovery. These behaviors and appearances are listed on a SPOT score sheet used in SPOT training. Passenger behavior and appearance are to be compared against this checklist by the BDOs, who typically work in two-person teams. BDOs are expected to "walk the line"—that is, to initiate casual conversations with passengers waiting in line, particularly if their observations lead them to question someone exhibiting behaviors or appearances on the SPOT checklist. As the BDOs walk the line and reach a passenger exhibiting SPOT indicators, they use casual conversation to determine whether there is a basis for the observed behaviors or appearances on the checklist. In most instances, these conversations provide information to the BDOs that permits them to consider the issue resolved, and hence not a security concern. Figure 2 below illustrates the first step of the three-step SPOT process, the BDO-passenger interaction at a checkpoint prior to the passenger passing through a magnetometer. As shown in figure 2, passenger behavior and appearance are observed by the BDOs as passengers wait in line for screening at a security checkpoint. Even if the checkpoint is busy, the BDOs must attempt to visually scan all the passengers waiting in line, as well as persons near the checkpoint, to determine if any are showing behaviors or appearances on the SPOT checklist. According to TSA, on average a BDO has approximately 30 seconds to assess each passenger while the passenger waits in line. For passengers exhibiting indicators above baseline conditions, the BDOs are to (mentally) add up the points assigned to each indicator they observe. Both BDO team members must agree that observed indicators have exceeded the predetermined numerical threshold, although they do not have to identify the same indicators the passenger exhibited. When a passenger's SPOT indicators place him or her above the numerical threshold, and the passenger has placed his or her property on the conveyor belt for x-raying and has walked through the magnetometer or equivalent passenger screening device, he or she will be directed to the second step of SPOT, referral screening. This involves additional questioning and a physical search of the passenger's person and property by BDOs and TSOs. This referral screening occurs in the checkpoint area. If the passenger's behavior escalates further—accumulating more points based on the SPOT checklist—the BDOs are to refer the passenger to a LEO. A referral to a LEO is a potential third step in the SPOT process. BDOs are not LEOs—they do not conduct criminal investigations, carry weapons, or make arrests.
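The point-based referral logic described above can be summarized in a short sketch. The following Python example is purely illustrative: the indicator names, point values, and thresholds are hypothetical placeholders of our own, since the actual SPOT checklist and scoring values are sensitive security information, and the function names are likewise our own.

```python
# Illustrative sketch of the SPOT point-scoring and referral decision.
# All indicator names, point values, and thresholds below are hypothetical;
# the actual SPOT checklist and its scoring values are not public.

REFERRAL_THRESHOLD = 6   # hypothetical threshold for referral screening
LEO_THRESHOLD = 10       # hypothetical threshold for referral to a LEO

INDICATOR_POINTS = {     # hypothetical indicator weights
    "indicator_a": 2,
    "indicator_b": 3,
    "indicator_c": 1,
}

def score(observed):
    """Sum the points for the indicators a single BDO observed."""
    return sum(INDICATOR_POINTS.get(i, 0) for i in observed)

def referral_decision(bdo1_observed, bdo2_observed):
    """Both BDO team members must agree the threshold is exceeded,
    though they need not have observed the same indicators."""
    if (score(bdo1_observed) >= REFERRAL_THRESHOLD
            and score(bdo2_observed) >= REFERRAL_THRESHOLD):
        return "referral screening"
    return "no action"

def escalation_decision(total_points):
    """During referral screening, further escalation beyond a higher
    threshold results in a referral to a law enforcement officer."""
    return "LEO referral" if total_points >= LEO_THRESHOLD else "resolved"

print(referral_decision(["indicator_a", "indicator_b", "indicator_c"],
                        ["indicator_b", "indicator_a", "indicator_c"]))
```

The sketch reflects the two requirements described above: each BDO team member must independently exceed the threshold, though not necessarily on the basis of the same indicators, and further escalation beyond a higher threshold triggers a LEO referral.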
After a passenger has been referred by the BDOs to a LEO, the LEO is then expected to independently determine whether sufficient grounds exist to take further action, such as detaining or arresting the passenger, through additional investigation, such as questioning the passenger and, if appropriate, conducting an identity verification and background check through the FBI's National Crime Information Center (NCIC). TSA officials who are LEOs, such as an airport's Assistant Federal Security Director for Law Enforcement or federal air marshals, also have access to NCIC. NCIC is the FBI's computerized index of criminal justice information (i.e., criminal record history information, fugitives, stolen property, and missing persons), available to federal, state, and local law enforcement and other criminal justice agencies at all times. Similarly, other federal LEOs, including CBP and Drug Enforcement Administration (DEA) personnel, also have such access. However, since both local and federal LEOs have other responsibilities, and may not be present at each operating checkpoint, BDOs may have to seek them out to request an NCIC check. According to TSA, aside from requiring that an airport maintain a law enforcement presence, it exercises no jurisdiction over the law enforcement activities of non-TSA officers or entities at an airport; thus, it cannot require LEOs to conduct an NCIC check or to provide BDOs with information about the ultimate disposition of cases that BDOs refer to LEOs. Once the LEO concludes his or her investigation and determines whether the passenger will be arrested or detained, TSA officials are to evaluate the security concerns to determine whether to allow the passenger to proceed to the boarding gate. (In some instances, a LEO might choose not to arrest or detain a passenger; TSA would then decide whether the infraction was sufficiently serious to necessitate barring the passenger from boarding.) After a referral incident has been resolved, BDOs are to enter information about the incident into TSA's SPOT referral database. The data entered are to include time, date, location of the incident, behaviors witnessed, prohibited items found (if any), and information on the LEO's response (if applicable), such as whether the LEO questioned the passenger, arrested the individual, or released the passenger. The SPOT referral database contains no personal identifying information about passengers. The SPOT program began with pilot tests in 2003 and 2004 at several New England airports, in which TSA began using uniformed BDOs at airport checkpoints. After some initial pilot projects and test deployments, 644 BDOs were deployed to 42 airports in the first phase of the program from November 2006 through June 2007. As of March 2010, about 3,000 BDOs utilizing SPOT were deployed at 161 of 457 TSA-regulated airports. BDO eligibility is restricted to TSOs with at least 12 months of TSO experience, or others with related security experience. Applicants must apply and be accepted into the BDO training program. The training includes 4 days of classroom courses, followed by 3 days of on-the-job training. BDOs must memorize all of the behaviors and appearances on the SPOT checklist, as well as the point value assigned to each, in order to be able to add these up to determine if a passenger should be sent to SPOT referral screening. BDO applicants must also pass a job knowledge test at the conclusion of the training.
The test includes related multiple-choice questions, true-or-false statements, and case-based scenarios. Although DHS is in the process of validating the way in which the SPOT program utilizes the science of behavior detection in an airport environment, TSA deployed SPOT nationwide before first determining whether there was a scientifically valid basis for using behavior and appearance indicators as a means for reliably identifying passengers as potential threats in airports. TSA reported that, in response to the need to address potential threats to the aviation system that would not necessarily be detected by existing layers of aviation security, it deployed SPOT before a scientific validation of the program was completed. TSA stated that no other large-scale U.S. or international screening program incorporating behavior- and appearance-based indicators has ever been rigorously scientifically validated. While TSA deployed SPOT on the basis of some risk-related factors, such as threat information and airport passenger volume, it did not use a comprehensive risk assessment to guide its strategy of selectively deploying SPOT to 161 of the nation's 457 TSA-regulated airports. TSA also expanded the SPOT program over the last 3 years without the benefit of a cost-benefit analysis of SPOT. Additionally, TSA's strategic plan for SPOT could be improved by the inclusion of desirable characteristics identified in our prior work, such as risk assessment information, cost and resources analysis, and a means for collaboration with other key entities. TSA proceeded with deploying SPOT on a nationwide basis before determining whether the list of passenger behaviors and appearances underpinning the SPOT program was scientifically validated, and whether these techniques could be applied for counterterrorism purposes in an airport environment. In 2008, a report issued by the National Research Council of the National Academy of Sciences noted that behavior and appearance monitoring might be able to play a useful role in counterterrorism efforts but stated that a scientific consensus does not exist regarding whether any behavioral surveillance or physiological monitoring techniques are ready for use in the counterterrorist context given the present state of the science. The report also stated that the scientific evidence for behavioral monitoring is preliminary in nature. According to the report, an information-based program, such as a behavior detection program, should first determine if a scientific foundation exists and use scientifically valid criteria to evaluate its effectiveness before going forward. The report added that programs should have a sound experimental basis and that documentation on the program's effectiveness should be reviewed by an independent entity capable of evaluating the supporting scientific evidence. The report also stated that often scientists and other experts can help independently assess the scientific evidence on the effectiveness of a program. A contributor to the National Research Council report also stated that no conclusive research has been conducted to determine if behavior detection can be reliably used on a larger scale, such as in an airport setting, to identify persons intending to cause harm to the aviation system. While TSA and DHS's Science and Technology (S&T) Directorate officials agreed that SPOT was deployed before its scientific underpinnings were fully validated, they stated that no large-scale U.S.
or international operational screening program incorporating behavior- and appearance- based indicators has been rigorously scientifically validated. These officials also questioned the findings of the National Research Council report and stated that the study lacked sufficient information for its conclusions because it did not consider recent findings from unpublished DHS, defense, and intelligence community studies. However, National Research Council officials stated that an agency should be cautious about relying on the results of unpublished research that has not been peer reviewed, such as that generated by DHS and the defense and intelligence community, and using unpublished work as a basis for proceeding with a process, method, or program. Moreover, we have previously reported that peer review is widely accepted as an important quality control mechanism that helps prevent the dissemination of potentially erroneous information. In addition to the unpublished research, TSA told us that the SPOT program was based on operational best practices from law enforcement, defense, and the intelligence communities. According to TSA officials, the agency based its choice of SPOT behavior, appearance, and deception indicators on existing research and training programs. For example, TSA cited research on emotions and their behavior indicators by Dr. Paul Ekman, interviewing and interrogation by Stan Walters, and nonverbal indicators by Dr. David Givens and Dr. Mark Frank as support for the choice of several of the behavior indicators. According to TSA, its development of the SPOT program was based on related DHS research and information from the training curricula of other federal agencies, such as the Federal Transit Administration and the Bureau of Alcohol, Tobacco, Firearms, and Explosives. As with the SPOT behavior indicators, TSA told us that it sought input in creating the SPOT point scoring system from subject matter experts and from participants in TSA’s SPOT working group, which consisted of law enforcement officials from agencies such as FBI, DEA, and local law enforcement officials. While TSA officials said that they coordinated with relevant subject matter experts, such as Dr. Ekman, and based the SPOT scoring system on existing research and training programs, no validation of the behavior, appearance, and deception indicators was conducted prior to the deployment of SPOT in November 2006. According to TSA officials, they used professional judgment in developing the SPOT point system and stated that the purpose of developing the scoring system was to increase the objectivity of the SPOT process. Dr. Ekman stated that, in his opinion, and after reviewing the scoring system and observing the program in operation, it was not clear whether the SPOT behaviors and appearances, and the related point system, could be used effectively in an airport environment because no credible validation research on this issue had been conducted. He noted, for example, that research is needed to identify how many BDOs are required to observe a given number of passengers moving at a given rate per day in an airport environment, or the length of time that such observation can be conducted before observation fatigue affects the effectiveness of the personnel. 
He commented that observation fatigue is a well-known phenomenon among workers whose duties involve intense observation, and that it is essential to determine the duration of effective observation and to ensure consistency and reliability among the personnel carrying out the observations. DHS has recognized the need to conduct additional research to scientifically validate the use of the SPOT behavioral indicators in an airport environment. DHS's S&T Directorate began research in 2007 to determine if there is a statistically significant correlation between the SPOT behaviors exhibited by airport passengers and the likelihood of finding airport passengers with prohibited items (such as weapons), false documents, or illegal drugs, or passengers who pose a potential risk to aviation security. According to S&T, this research is expected to be completed in fiscal year 2011 and is to include three key elements. First, the study's purpose is to assess the reliability of the SPOT program by analyzing TSA's SPOT database to determine patterns of BDO scoring and to measure consistency across BDOs, teams, locations, and other variables. Second, the study aims to compare the current implementation of SPOT to random passenger screening. Specifically, according to S&T officials, 130,000 passengers are to be randomly selected for additional SPOT referral screening. The study's design states that data collected about these passengers will be compared to data for passengers screened through the normal SPOT process. S&T officials expect that the results of this element of the study will provide a better understanding of how SPOT compares to random selection, as well as provide a baseline of how often each indicator is present in the traveling public. Third, the study also aims to utilize live and video data, as available, to compare BDOs' SPOT score ratings of behaviors exhibited by passengers against ratings of the same passengers by subject matter experts. This element of the study could help determine whether BDOs are using, or are continuing to use, the SPOT score sheet correctly as time passes after their initial training. According to S&T officials, the study is to form the basis for BDO performance and training requirements. The S&T Directorate reported some preliminary findings associated with this research in February 2008. The Directorate reported that although some of the existing literature supported the possibility of using behavioral and physiological cues, the results are not methodologically strong enough to support standardized applications in an operational setting. The preliminary findings also noted that it is not known whether behavioral and physiological cues linked to deception in planning a hostile action will be the same as, or different from, those indicators linked to deception by an individual after he or she has already engaged in a hostile action. However, an S&T program director stated that although early literature can be characterized as methodologically weak, more recent unpublished research sponsored by DHS, the Department of Defense, and the intelligence community is promising in that it has demonstrated some linkages between behavioral and physiological indicators and deception. In March 2009, the Under Secretary (Acting) for DHS's S&T Directorate testified that the Directorate had performed an initial validation of the behavior indicators used by BDOs.
The Under Secretary stated that this analysis provided statistically significant support that persons demonstrating select behavioral indicators are more likely to possess prohibited items and that behaviors can distinguish deceptive from nondeceptive individuals. According to S&T, this validation was the result of statistical analyses performed by S&T using operational data from the SPOT program database. However, we identified weaknesses in TSA’s process for maintaining these data. For example, controls over the SPOT database to help ensure the completeness and accuracy of the data were missing. Specifically, the SPOT database did not have computerized edit checks built into the system to review the format, existence, and reasonableness of data. For example, we found that discrepancies existed between the number of passengers arrested by local law enforcement at the screening checkpoints and the number of screened passengers recorded as arrested. In another example, we found that the total number of LEO referrals differed from the number of passenger records with information on the reasons for LEO referral. Internal control standards state that controls should be installed at an application’s interfaces with other systems to ensure that all inputs are received and are valid and that outputs are correct and properly distributed. TSA officials explained these issues as data anomalies and planned to change instructions to staff entering data to reduce these problems. Although TSA is taking steps to update the SPOT database, which are discussed later in this report, the data used by S&T to conduct its preliminary validation of related behaviors lacked such controls. In addition, BDOs could not input all behaviors observed in the SPOT database because the database limits entry to eight behaviors, six signs of deception, and four types of prohibited items per passenger referred for additional screening. Because of these data-related issues, meaningful analyses could not be conducted to determine if there is an association between certain behaviors and the likelihood that a person displaying certain behaviors would be referred to a LEO or whether any behavior or combination of behaviors could be used to distinguish deceptive from nondeceptive individuals. As a result, TSA lacks assurance that the SPOT data can be used effectively to determine that the person poses a risk to aviation security. S&T has recognized weaknesses in the procedures for collecting data on passengers screened by SPOT and plans to more systematically collect data during its study by, for example, requiring BDOs to record more complete and accurate information related to a passenger referral immediately following resolution. The S&T study is an important step to determine whether SPOT is more effective at identifying passengers who may be threats to the aviation system than random screening. However, S&T’s current research plan is not designed to fully validate whether behavior detection and appearances can be effectively used to reliably identify individuals in an airport terminal environment who pose a risk to the aviation system. 
For example, research on other issues, such as determining the number of individuals needed to observe a given number of passengers moving at a given rate per day in an airport environment or the duration that such observation can be conducted by BDOs before observation fatigue affects effectiveness, could provide additional information on the extent to which SPOT can be effectively implemented in airports. In another example, Dr. Ekman told us that additional research could help determine the need for periodic refresher training since no research has yet determined whether behavior detection is easily forgotten or can be potentially degraded with time or lack of use. While S&T officials agree on the need to validate the science of behavior detection programs, they told us that some of these other issues could be examined in the future but are not part of the current plan due to time and budgetary constraints. According to S&T, some additional analysis is underway to inform the current BDO selection process. This analysis is intended to provide information on the knowledge, skills, abilities, and other characteristics of successful BDOs. Since the analysis is scheduled for completion in May 2010, it remains unclear to what extent the findings will help to validate the science related to SPOT. While we recognize the potential benefits of these efforts, we believe that an assessment by an independent panel of experts of the planned methodology of DHS’s study could help DHS assess the costs and benefits associated with a more comprehensive methodology designed to fully validate the science related to SPOT. Our prior work has recommended the use of such independent panels for comprehensive, objective reviews of complex issues. In addition, according to the National Research Council, an independent panel could provide an objective assessment of the methodology and findings of DHS’s study to better ensure that SPOT is based on validated science. Thus, an independent panel of experts could help DHS develop a comprehensive methodology to determine if the SPOT program is based on valid scientific principles that can be effectively applied in an airport environment for counterterrorism purposes. According to DHS’s National Infrastructure Protection Plan (NIPP), risk assessments are to be documented, reproducible (so that others can verify the results), defensible (technically sound and free of significant errors), and complete. The NIPP states that comprehensive risk assessments are necessary for determining which assets or systems face the highest risk, for prioritizing risk mitigation efforts and the allocation of resources, and for effectively measuring how security programs reduce risks. For a risk assessment to be considered complete, the NIPP states that it must specifically assess threat, vulnerability, and consequence; after these three components have been assessed, they are to be combined to produce a risk estimate. According to TSA, SPOT was deployed to TSA-regulated airports on the basis of threat information in TSA’s Current Airport Threat Assessment list. TSA deployed SPOT to 161 of 457 TSA-regulated airports. TSA officials told us that this selective deployment creates unpredictability for persons seeking to cause harm to the aviation system because they would not know which airports had BDO teams and because BDOs are occasionally sent out to the smaller airports that do not have BDOs on a permanent basis. 
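As context for the discussion that follows, the NIPP requirement that threat, vulnerability, and consequence be combined into a risk estimate can be illustrated with a minimal sketch. One common formulation treats risk as the product of the three components; the formulation, the airports, and the scores below are hypothetical illustrations of our own and do not reflect any actual TSA or DHS assessment.

```python
# Illustrative only: one common formulation estimates risk as the product of
# threat, vulnerability, and consequence scores. The airports and scores
# below are hypothetical and do not reflect any actual TSA or DHS assessment.

def risk_estimate(threat, vulnerability, consequence):
    """Combine the three NIPP risk components into a single estimate."""
    return threat * vulnerability * consequence

# Hypothetical component scores on a 0-to-1 scale for notional airports.
airports = {
    "Airport A": {"threat": 0.8, "vulnerability": 0.6, "consequence": 0.9},
    "Airport B": {"threat": 0.5, "vulnerability": 0.7, "consequence": 0.4},
    "Airport C": {"threat": 0.9, "vulnerability": 0.3, "consequence": 0.7},
}

# Rank airports by estimated risk to help prioritize BDO deployment.
for name, scores in sorted(airports.items(),
                           key=lambda item: risk_estimate(**item[1]),
                           reverse=True):
    print(f"{name}: risk estimate = {risk_estimate(**scores):.2f}")
```

In such a calculation, deploying on the basis of threat information alone, as TSA did, corresponds to ranking airports on only one of the three components.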
Although TSA's selective deployment of SPOT was based on threat information, TSA did not conduct vulnerability and consequence assessments to inform the deployment of BDOs. As a result, it could not combine the results to conduct a comprehensive risk assessment to inform the deployment of BDOs to those airports with the highest risks. TSA officials told us that while they have not completed a comprehensive risk assessment for airport security, they have prepared and are currently reviewing a draft of a comprehensive, scenario-based Aviation Modal Risk Assessment—known as the AMRA—which is to serve as a comprehensive risk assessment for aviation security. According to TSA officials, the AMRA is to address all three elements of risk for domestic commercial aviation, general aviation, and air cargo. Although TSA planned to release the AMRA in February 2008, it now expects to finalize the AMRA in 2010. According to TSA, the AMRA may help provide information for the prioritization of BDO deployment within airports, but officials could not provide specifics on how it would do so. Further, TSA officials noted that information from the AMRA would inform BDO deployment in conjunction with other TSA priorities not related to SPOT. Since the AMRA is not yet complete, it is not clear whether it will provide the risk analysis—including assessments of vulnerability and consequence—needed to inform TSA's decisions and planning for any revisions or future deployment of SPOT. If the AMRA lacks information relevant to the deployment of SPOT and further research determines that SPOT has a scientifically validated basis for using behavior detection for counterterrorism purposes in the airport environment, then conducting a comprehensive risk assessment of airports could strengthen TSA's ability to establish priorities and make cost-effective resource decisions regarding the deployment of BDOs to those airports deemed to have the highest priority risks. DHS and other federal guidance recommend conducting a cost-benefit analysis before implementing new programs to avoid unnecessary costs and identify the best way to achieve goals at the lowest costs among potential alternatives. Our prior work has also supported the use of cost-benefit analyses during retrospective reviews to validate the agency's original assumptions regarding costs and benefits. In addition, the DHS February 2006 Cost-Benefit Analysis Guidebook and OMB guidance both recommend the use of cost-benefit analysis in the planning stage for a program and when significant milestones or financial options are to be assessed. The DHS Guidebook states that a cost-benefit analysis is designed to identify optimal financial solutions among competing alternatives. OMB guidance also identifies cost-benefit analysis as one of the key principles to be considered when making capital expenditures and states that the expected benefits of proposed actions should be explained and that a baseline should be identified that discusses costs and benefits in comparison with clearly defined alternatives. DHS's 2006 and 2009 NIPPs also state that priority is to be given to those protective measures that provide the greatest mitigation of risk for the resources that are available. The DHS NIPPs add that effective protective programs seek to use resources efficiently by focusing on actions that offer the greatest mitigation of risk for any given expenditure. In addition, measuring the cost effectiveness of SPOT was a key TSA goal in an October 2005 version of the SPOT strategic plan.
Although the DHS and OMB guidance recommend that a cost-benefit analysis be conducted prior to deploying a program nationwide—and potentially incurring substantial costs—TSA did not conduct such an analysis of SPOT to inform its pilot testing prior to full-scale nationwide deployment. In early 2003, TSA began conducting a pilot test of the SPOT program at Boston Logan airport to better understand the benefits of the program. According to Boston Logan's Federal Security Director, the primary purpose of this pilot test was to understand the potential of the program, not to validate its success. TSA officials stated that the program had several benefits, one of which was its "negligible cost." However, TSA did not analyze the pilot test results to determine if SPOT was more cost effective than other alternatives, such as random screening of passengers. In October 2004, TSA implemented additional pilot programs in Providence, Rhode Island, and Portland, Maine, with the goal of providing Federal Security Directors with an additional layer of security to identify high-risk passengers for additional screening using behavior detection techniques. TSA concluded that these pilot programs were successful and cited several of their security benefits. For example, TSA personnel in Providence identified two individuals in possession of illegal drugs, who were then arrested. Law enforcement officers also arrested another individual referred to them for providing a fraudulent passport. In another example, BDOs in Portland discovered a passenger with multiple passports and a hidden luggage compartment. The passenger was interviewed by LEOs and later released. TSA determined that these initial pilot tests at three airports were successful without comparing pilot test data to other possible security alternatives. For example, the results of random screening of passengers at the pilot airports could have provided TSA with objective baseline data. Specifically, these data could have been compared to data collected during the SPOT pilots to determine if SPOT was more effective than random screening in detecting passengers who pose a potential risk to aviation security. According to TSA, it concluded that the pilot tests were successful because pilot airports were able to easily incorporate SPOT into their security programs, train personnel in SPOT, and implement procedures for an additional layer of security. TSA conducted additional pilot tests at the Minneapolis-St. Paul, Minnesota, and Bangor, Maine, airports in October 2005. TSA also deployed the program to nine additional airports in response to TSA's holiday preparedness plan in December 2005 to further operationally test the program. Senior SPOT program officials explained that TSA did not conduct an analysis of the pilot testing because the program was in its infancy and officials were focused on deploying SPOT to additional airports. Since that time, TSA has not conducted a cost-benefit analysis, which could help the agency establish the value of the program relative to other layers of aviation security. Moreover, a cost-benefit analysis could also be useful considering recent program growth. For example, from fiscal year 2007 through fiscal year 2009, TSA allotted about $383 million for SPOT. During this period, SPOT's share of TSA's total screening operations budget increased from 1 percent to 5 percent.
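To illustrate the kind of comparison the DHS and OMB guidance calls for, the following minimal sketch compares notional alternatives, such as SPOT and random referral screening, by net benefit and benefit-cost ratio. All figures are hypothetical placeholders, not TSA data, and monetizing security benefits is itself a substantial analytic challenge that any real analysis would need to address.

```python
# Illustrative cost-benefit comparison of competing screening alternatives.
# All figures are hypothetical placeholders (in millions of dollars per year),
# not TSA data; they exist only to show the structure of such a comparison.

alternatives = {
    "Behavior detection (SPOT)":     {"cost": 100.0, "benefit": 140.0},
    "Random referral screening":     {"cost": 40.0,  "benefit": 70.0},
    "No additional screening layer": {"cost": 0.0,   "benefit": 0.0},
}

def net_benefit(alt):
    return alt["benefit"] - alt["cost"]

def benefit_cost_ratio(alt):
    return alt["benefit"] / alt["cost"] if alt["cost"] else None

for name, alt in alternatives.items():
    ratio = benefit_cost_ratio(alt)
    ratio_text = f"{ratio:.2f}" if ratio is not None else "n/a"
    print(f"{name}: net benefit = {net_benefit(alt):.1f}M, "
          f"benefit-cost ratio = {ratio_text}")
```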
The conference report accompanying the fiscal year 2010 DHS appropriations act designates $212 million of the appropriated aviation security funding for the SPOT program. A cost-benefit analysis could have provided TSA management with analysis on whether this allocation was a prudent investment, as well as whether this level of investment in SPOT is appropriate. Figure 3 shows the growth in the budget and personnel numbers for SPOT from fiscal years 2007 through 2010. Our previous work, and the Government Performance and Results Act, set forth several key elements of a strategic plan. Such plans can guide agencies in planning and implementing an effective government program. Table 1 summarizes the desirable characteristics of an effective strategic plan, as identified in our prior work. In April 2009, we reported that these characteristics are the starting point for developing a strategic plan. TSA officials at Boston Logan airport told us that they completed the first strategic plan for SPOT in 2006. The strategic plan was last updated in March 2007. The March 2007 plan includes some of the desirable characteristics described above, such as an overall purpose. However, incorporating additional characteristics of an effective strategic plan could enhance the plan’s usefulness in program management and resource allocation decisions to effectively manage the deployment of SPOT if TSA determines that the program has a scientifically valid basis. TSA officials stated that they believed the plan was sufficiently comprehensive to develop a national program, such as SPOT. However, these officials told us that the plan was not updated after TSA expanded the program in 2008 and 2009. They also stated that the program’s focus remained on deploying SPOT to additional airports. Our assessment of the extent to which the SPOT strategic plan addresses these characteristics is presented below. Purpose, scope, and methodology: The SPOT strategic plan addresses why the plan was developed (i.e., purpose) and the scope of its coverage. Specifically, the plan describes a strategy to utilize behavior detection screening as an additional layer of security. The plan also notes that the primary focus is to expand SPOT in the aviation environment while also developing a capability to deploy BDOs to support security efforts in all modes of transportation. However, the plan does not discuss the process by which it was developed (i.e., methodology). According to TSA, officials responsible for developing the plan received input from relevant stakeholders at Boston Logan airport and TSA headquarters. We believe incorporating the methodology into the plan could make the document more useful to TSA and other organizations, such as local law enforcement, responsible for implementing the plan. Problem definition and risk assessment: The plan addresses the particular threat it is directed towards. Specifically, the plan describes the need to implement SPOT to counter terrorist activities, improve security, and incorporate additional layers of protection within aviation security. However, the plan does not incorporate risk assessment information to identify priorities or guide program implementation because TSA has not conducted a comprehensive risk assessment related to the deployment of SPOT. Using available risk assessment information to inform the development of a strategic plan would help ensure that clear priorities are established and focused on the areas of greatest need. 
Specifically, incorporating the results of a risk assessment in the program’s strategic plan could help inform TSA’s decisions such as whether to deploy SPOT to additional TSA-regulated airports, to shift SPOT teams from one airport to another, or to remove SPOT at airports where the benefit of addressing the risk does not outweigh the costs, as well as to identify and communicate the risks to aviation security if SPOT was not deployed to all TSA- regulated airports. Goals, subordinate objectives, activities, and performance measures: The plan outlines several goals, objectives, and activities for the SPOT program to achieve. For example, the plan outlines a goal to develop multimodal partnerships, including at the local level, to support SPOT. An associated objective for this goal includes identifying and fostering advocates within each mode of transportation by developing transportation, intelligence, and law enforcement working groups with relevant officials to share information and foster cooperation. The plan also includes a goal to develop and implement performance measures for SPOT. However, the plan did not include performance measures for SPOT. Incorporating performance measures into the plan could help TSA officials measure progress in implementing the plan’s goals, objectives, and activities. Resources, investments, and risk management: The plan does not identify the costs and resources needed to achieve program objectives discussed in the plan. Incorporating information about cost and resources would facilitate TSA’s ability to allocate resources across programs according to priorities and constraints, track costs and performance, and shift such investments and resources as appropriate. Organizational roles, responsibilities, and coordination: The SPOT program relies on a close partnership with law enforcement officers at airports. TSA provides briefings to law enforcement on the SPOT program, and TSA officials conduct outreach efforts to local law enforcement as needed. The SPOT SOP guidance and SPOT training include guidance about ensuring that LEOs receive complete and accurate information about each SPOT referral. However, while the strategic plan identifies TSA officials and offices as responsible parties for implementing the strategic plan, it does not provide guidance on how to effectively link the roles, responsibilities, and capabilities of federal, state, and local officials providing program support. Moreover, although SPOT SOP guidance discusses the need for BDOs to coordinate with other TSA personnel, such as TSOs and TDCs, TSA does not identify their roles and responsibilities in regards to the SPOT program in the program’s strategic plan. Integrating these elements into the strategic plan could help to clarify the relationships between these various implementing parties, which would thereby increase accountability and improve the effectiveness of implementation. Integration and implementation: The SPOT strategic plan does not discuss how its scope complements, expands upon, or overlaps with other related strategic documents. For example, TSA’s April 2008 Office of Security Operations Organizational Business Plan for Fiscal Year 2010 describes how its goals—including those for SPOT—relate to DHS and TSA strategic goals. 
However, TSA does not link goals in the SPOT strategic plan with other related strategic documents, such as the Aviation Implementation Plan of DHS’s Transportation Systems Sector-Specific Plan and the Passenger Checkpoint Screening Program Strategic Plan. By linking goals in its SPOT strategic plan to other TSA efforts, TSA could better ensure that the program’s objectives are integrated with other TSA security programs and that resources are used effectively by minimizing any unnecessary duplication with these other actions. Inconsistencies in the use of available information technology to aid in the collection and recording of data on passengers by BDOs during referrals to LEOs, lack of guidance on, or a mechanism for, BDOs to request the TSA’s Transportation Security Operations Center to run the names of passengers exhibiting suspicious behaviors against law enforcement and intelligence databases, and the Center’s not checking all of the databases available to it—have limited TSA’s ability to identify potential terrorist threats to the aviation system. Among other information, these databases include terrorism-related watch lists. TSA is not fully utilizing the resources it has available to systematically collect the information obtained by BDOs on passengers whose behaviors and appearances resulted in either SPOT referral screening, or in a referral to LEOs, and who thus may pose a risk to the aviation system. TSA’s July 2008 Privacy Impact Assessment on the TSA Transportation Security Operations Center, and its August 2008 Privacy Impact Assessment on SPOT, state that information may be obtained by BDOs to check an individual’s identity against intelligence, terrorist, and law enforcement databases and to permit intelligence analysts to conduct trend analysis. The August 2008 SPOT Privacy Impact Assessment states that information about a passenger who has exceeded the SPOT behavior threshold, leading to LEO referral, may be collected and entered into DHS’s Transportation Information Sharing System. According to the SPOT Privacy Impact Assessment, information collected may be submitted to the Transportation Information Sharing System database for analysis, and, through it to other linked intelligence databases and the intelligence analysts who study them, to detect, deter, and defeat a criminal or terrorist act in the transportation domain before it occurs. The SPOT Privacy Impact Assessment notes that terrorist acts that threaten transportation security are most vulnerable in the planning stages and that the timely passage of SPOT referral information may assist in identifying such efforts before they become operational. A June 2008 Transportation Information Sharing System Privacy Impact Assessment similarly states that one goal is to use the system data to find trends and patterns that may indicate preoperational terrorist or criminal activity—that is, to “connect the dots” about a planned terrorist attack or criminal enterprise. Information in TSA’s Transportation Information Sharing System is primarily activity or behavioral information but may also contain personal information regarding the individuals identified by the BDO through SPOT. According to TSA, BDOs do not analyze the data obtained during referrals; if they have the appropriate training, they may enter the data by computer into the Transportation Information Sharing System, where they can be analyzed by intelligence analysts. 
Other appropriately trained and officially designated TSA officials, such as Federal Security Directors, may also enter data into the system. According to TSA, a 2008 pilot program it conducted that involved BDOs entering data into the Transportation Information Sharing System database was “invaluable,” in part because over 40 referrals have since been passed on to other LEO organizations for further investigation, most of which came from BDO input. A February 2006 TSA memorandum describes the Transportation Information Sharing System as “a critical element in the success of SPOT” because it provides the necessary platform for the reporting of information obtained as a result of SPOT referrals. TSA noted that through the use of the Transportation Information Sharing System, two different BDO teams had separately identified and selected the “same extremist” for secondary questioning. TSA officials also told us about an incident in which an individual sought to board an aircraft with a handgun on two separate occasions, at two different airports. Although the handgun was detected both times, the individual was released after providing what seemed to be a credible explanation. After the second incident, however, intelligence analysts who reviewed the system information saw that this individual had tried twice in 2 weeks to bring a weapon onto an aircraft. A LEO was dispatched to the person’s home, and an arrest was made. Without the data inputted into the system both times, no pattern would have been detected by the analysts, according to TSA. Although the pilot program illustrated the benefits of BDOs entering data into the system, access to the system was not expanded to all SPOT airports in 2008 or 2009. Internal control standards call for management to develop policies, procedures, and techniques to help enforce management directives. TSA does not provide official guidance on how or when BDOs or other TSA personnel should enter data into the Transportation Information Sharing System or which data should be entered. Official guidance on what data should be entered into the system on passengers could better position TSA personnel to be able to consistently collect information to facilitate synthesis and analysis in “connecting the dots” with regard to persons who may pose a threat to the aviation system. On March 18, 2010, TSA officials told us that TSA recognizes the value of recording SPOT incidents for the purposes of intelligence gathering. As a result, TSA decided that certain data would be entered into the Transportation Information Sharing System, and would, in turn, be analyzed as a way to potentially “connect the dots” with other transportation security incidents. TSA officials said that the Federal Security Director at each SPOT airport has been given the discretion to decide which personnel should have access to the Transportation Information Sharing System. However, TSA has not developed a plan detailing how many personnel would have access to the system, or when they would have access at SPOT airports. TSA officials said that training is currently being provided to personnel responsible for using the system at all SPOT airports although they did not provide information on the number being trained. Standard practices for defining, designing, and executing programs include developing a road map, or program plan, to establish an order for executing specific projects needed to obtain defined programmatic results within a specified time frame. 
However, TSA stated that it has not developed a schedule or milestones by which database access will be deployed to SPOT airports, or a date by which access at all SPOT airports will be completed. Setting milestones for expanding Transportation Information Sharing System access to all SPOT airports, and setting a date by which the expansion will be completed, could better position TSA to identify threats to the aviation system that may otherwise go undetected and help TSA track its progress in expanding Transportation Information Sharing System access as management intended. Internal control standards state that policies, procedures, techniques, and other mechanisms are essential to help ensure that actions are taken to address program risks. The current process makes the BDOs dependent on the LEOs with regard to the timeliness with which LEOs respond to BDO calls for service, as well as with regard to whether the LEOs choose to question the passengers referred to them or conduct a background check. Our analysis of the SPOT referral database found a wide variation in the percentage of times that LEOs responded to calls for service at SPOT airports. Moreover, if local LEOs decide to run a background check on a passenger referred to them, they would be accessing the FBI's NCIC and not other intelligence and law enforcement databases. Although LEOs may not always respond to calls for service, question passengers, or check passenger names against databases available to TSA, TSA has not developed a mechanism allowing BDOs to send information to the Transportation Security Operations Center about passengers whose behavior indicates that they may be a possible threat to aviation security. According to TSA's July 2008 Transportation Security Operations Center Privacy Impact Assessment, passenger information may be submitted to the Transportation Security Operations Center to ascertain, as quickly as possible, the individual's identity and whether he or she is already the subject of a terrorist or criminal investigation, or to analyze suspicious behavior that may signal some form of preoperational surveillance or activity. Our survey of Federal Security Directors at SPOT airports found a notable inconsistency in the rates at which BDOs at different airports contacted the Transportation Security Operations Center. Developing additional guidance in the SPOT operating procedures could help improve consistency in the extent to which BDOs utilize Transportation Security Operations Center resources. Given the range of responses we received from SPOT airports about whether the BDOs contact the Transportation Security Operations Center to verify passenger identities and run their names against terrorist and intelligence databases, and the inconsistencies identified related to LEO responses to BDO requests for service, developing a standard mechanism and providing BDOs with additional guidance could help TSA achieve greater consistency in the SPOT process. Such a mechanism would provide designated TSA officials with a means of verifying passenger identities and help them determine whether a passenger was the subject of a terrorist or criminal investigation and thus posed a risk to aviation security. Standards for internal control state that effectively using available resources, including key information databases, is one element of functioning internal controls.
In this connection, it is widely recognized among intelligence entities and police forces that a capability to "run" names against databases that contain criminal and other records is a potentially powerful tool both to identify those with outstanding warrants and to help discover an ongoing criminal or security-related incident. Additionally, TSA recommended in an April 2008 Organizational Business Plan for its Office of Security Operations that the SPOT program should establish a mechanism and policy for allowing real-time checks of federal records for individuals whose behavior indicates they may be a threat to security. The Office of Security Operations plan also states that BDOs should communicate the data to U.S. intelligence centers, with the purpose of permitting rapid communication of this information to local LEOs to take action. However, TSA officials told us that because of safety concerns, the Transportation Security Operations Center does not provide information from database checks directly to BDOs because BDOs are not LEOs, are unarmed, and do not have the training needed to deal with potentially violent persons. If the mechanism discussed in the Office of Security Operations business plan were implemented, it would allow the Transportation Security Operations Center to use BDO information to conduct real-time record checks of passengers and communicate the results to LEOs for action. Such a mechanism could increase the chances of detecting ongoing criminal or terrorist plans. The final report of the National Commission on Terrorist Attacks Upon the United States (the "9/11 Commission Report") recommends that in carrying out its goal of protecting aviation, TSA should utilize the larger set of information maintained by the federal government, that is, the entire Terrorist Screening Database—the U.S. government's consolidated watch list that contains information on known or suspected international and domestic terrorists—as well as other government databases, such as intelligence or law enforcement databases. However, the Transportation Security Operations Center is not using all the resources at its disposal to support BDOs in verifying potential risks to the aviation system. This reduces the opportunities to "connect the dots" that would increase the chances of detecting terrorist attacks in their planning stage, which the SPOT Privacy Impact Assessment states is when they are the most vulnerable. According to TSA, the Transportation Security Operations Center has access to multiple law enforcement and intelligence databases that can be used to verify the identity of airline passengers; these include, among others: (1) the Selectee list, which identifies persons who must undergo enhanced screening at the checkpoint prior to boarding; (2) the No-Fly list, which lists persons prohibited from boarding aircraft; and (3) the Terrorist Identities Datamart Environment (TIDE) terrorist list. TSA stated that the Transportation Security Operations Center checks passenger names submitted to it against these three databases if the passenger has been referred by a BDO to a LEO but has not been arrested. Of the three databases that the Transportation Security Operations Center is to check in the case of a referral, passengers would have already been screened against two—the Selectee and No-Fly lists—in accordance with TSA passenger prescreening procedures when purchasing a ticket.
The third database checked—the Terrorist Identities Datamart Environment—tracks terrorists but not persons wanted for other crimes. The FBI's NCIC information system would contain names of such persons, but is not among the three databases checked for nonarrest referrals. If the passenger has been arrested, the Transportation Security Operations Center will run the passenger's name against the additional law enforcement and intelligence databases available to it. In addition, TSA told us that the Operations Center does not have direct electronic access to the Terrorist Screening Database and must call the FBI's Terrorist Screening Center to provide it with a name to verify. TSA stated that this is done if a passenger's identity could not be verified using the Operations Center databases. In effect, if a passenger has been referred to a LEO, but not arrested, the Operations Center is to check the three databases shown above to verify the passenger's identity. If a passenger has been arrested, but the three databases do not list the person, the Center can check the additional databases available to it. If none of these databases can verify the person's identity, the Operations Center can contact the Terrorist Screening Center by telephone to request further screening. For passengers who have risen to the level of a LEO referral at an airport checkpoint, having the Transportation Security Operations Center consistently check their names against all the databases available to it could potentially help TSA identify threats to the aviation system and aid in "connecting the dots." TSA indicated that there are no obstacles to rapidly checking all databases rather than the three listed. We did not analyze the extent to which the law enforcement and intelligence databases available to TSA may contain overlapping information.

TSA has established some performance measures by tracking SPOT referral and arrest data, but lacks the measures needed to evaluate the effectiveness of the SPOT program and, as a result, has not been able to fully assess SPOT's contribution to improving aviation security. TSA emphasized the difficulty of developing performance measures for deterrence-based programs, but stated that it is developing additional measures to quantify the effectiveness of the program. The SPOT program uses teams to assess BDO proficiency, provide individual and team guidance, and address issues related to the interaction of BDOs with TSA checkpoint personnel. However, TSA does not systematically track the teams' recommendations or the frequency of the teams' airport visits. TSA states that it is working to address these issues and plans to do so by the end of fiscal year 2010. TSA agreed that the SPOT program lacked sufficient performance measures in the past, but stated that it has some performance measures in place, including tracking data on passengers referred for additional screening and the resolution of this screening, such as whether prohibited items were found or whether law enforcement arrested the passenger and the reason for the arrest.
TSA is also working to improve its evaluation capabilities to better assess the effectiveness of the program. DHS's NIPP, internal control standards, and our previous work on program assessment state that performance metrics and associated program evaluations are needed to determine whether a program works and to identify adjustments that may improve its results. Moreover, standard practices in program management for defining, designing, and executing programs include developing a road map, or program plan, to establish an order for executing specific projects needed to obtain defined programmatic results within a specified time frame. Congress also needs information on whether and in what respects a program is working well or poorly to support its oversight of agencies and their budgets; and agencies' stakeholders need performance information to accurately judge program effectiveness. For example, in the Senate Appropriations Committee report accompanying the fiscal year 2010 DHS appropriations bill, the committee noted that while TSA has dramatically increased the size and scope of SPOT, resources were not tied to specific program goals and objectives. In addition, the conference report accompanying the fiscal year 2010 DHS appropriations act requires TSA to report to Congress, within 60 days of enactment, on the effectiveness of the program in meeting its goals and objectives, among other things. This report was completed on March 15, 2010.

Although TSA tracks data related to SPOT activities, including prohibited items, law enforcement arrests related to SPOT referrals, and reasons for the arrests (output measures), it has not yet developed measures to gauge SPOT's effectiveness in meeting TSA strategic goals (outcome measures), such as identifying individuals who may pose a threat to the transportation system. OMB encourages the use of outcome measures because they are more meaningful than output measures, which tend to be more process-oriented or means to an end. For example, TSA's Assistant General Manager for the Office of Operation Process and Performance Metrics told us that SPOT staffing levels are currently used as one performance metric. The official said that since SPOT is an added layer of security, additional SPOT staffing would add to security effectiveness. While staffing levels may help gauge how fast the program is growing, they do not measure the program's effectiveness in meeting strategic goals. Similarly, TSA cited the number of prohibited items discovered by BDOs in SPOT metrics reports as a measure of program success. However, TSA told us that possession of a prohibited item is often an oversight and not an intentional act; moreover, other checkpoint screening layers, such as the TSOs and the property screening equipment, are intended to find such items. TSA also cited measures of BDO job performance as some of the existing measures of program effectiveness, but noted that these are "pass/fail" assessments of individual BDOs, rather than overall program measures. TSA notes that one purpose of the SPOT program is to deter terrorists but that proving it has succeeded in deterring terrorists is difficult, and that the lack of data has presented challenges for the SPOT program office in developing performance measures. We agree that developing performance measures, especially outcome measures, for programs with a deterrent focus is difficult.
Nevertheless, such measures are an important tool to communicate what a program has accomplished and provide information for budget decisions. TSA uses proxy measures—indirect measures or indicators that approximate or represent the direct measure—to address deterrence, other security goals, or a combination of both. For example, TSA tracks the number of prohibited items found and individuals arrested as a result of SPOT referrals. According to OMB, proxy measures are to be correlated to an improved security outcome, and the program should be able to demonstrate—such as through the use of modeling—how the proxies tie to the eventual outcome. In using a variety of proxy measures, failure in any one of the identified measures could provide an indication of the overall risk to security. However, developing a plan that includes objectives, milestones, and time frames to develop outcome-based performance measures could better position TSA to assess the effectiveness of the SPOT program.

With regard to more readily quantifiable output performance measures, such as the number of referrals by BDOs or the ratio of arrests to referrals, TSA was limited in its ability to analyze the data related to these measures. The SPOT database includes information on all passengers referred by BDOs for additional SPOT screening, including the behaviors of the passengers that led to the additional screening, as well as the resolution of the screening process (e.g., no further action taken, law enforcement notification, law enforcement investigation, arrested, and reason for arrest). However, TSA reported that any analysis of the data had to be done manually. In March 2010, TSA migrated the SPOT referral data to its Performance Management Information System, allowing for more statistical and other analyses. According to TSA, migrating the SPOT referral database will enhance the SPOT program's analytic capabilities. For example, TSA stated that it would be able to conduct trend analyses, better segregate data, and create specific reports for certain data. This includes better tracking of performance data at specific airports, analyzing by categories of airports (threat or geographic location), and tracking the performance data of individual BDOs, such as number of referrals, number of arrests, arrest-to-referral ratios, and other analyses. However, since these changes to the database were not complete at the time of our audit, we could not assess whether the problems we identified with the database had been corrected.

The SPOT referral database records the total number of SPOT referrals since May 29, 2004, how many were resolved, how many passengers BDOs referred to LEOs, the recorded reasons for the referral, and how many referrals led to arrests, among other things. As shown in figure 4, we analyzed the SPOT referral data for the period May 29, 2004, to August 31, 2008. Figure 4 shows that approximately 2 billion passengers boarded aircraft at SPOT airports from May 29, 2004, through August 31, 2008. Of these, 151,943 (less than 1/100th of 1 percent) were sent to SPOT referral screening, and of these, 14,104 (9.3 percent) were then referred to LEOs. These LEO referrals resulted in 1,083 arrests, or 7.6 percent of those referred, and less than 1 percent of all SPOT referrals (0.7 percent of 151,943). We also analyzed the reasons for arrests resulting from SPOT referrals, for the May 29, 2004, through August 31, 2008, period. Table 2 shows, in descending order, the reasons for the arrests.
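These referral and arrest rates follow directly from the counts in the SPOT referral database. The short calculation below is purely illustrative—it is not TSA or GAO analysis code—and it uses the rounded counts reported above, so the resulting percentages may differ slightly from the published figures because of rounding.

```python
# Illustrative check of the SPOT referral funnel, May 29, 2004, through Aug. 31, 2008.
# Counts are the rounded figures reported above; this is not TSA or GAO analysis code.
passengers_boarded = 2_000_000_000  # approximate boardings at SPOT airports
spot_referrals = 151_943            # passengers sent to SPOT referral screening
leo_referrals = 14_104              # passengers then referred to LEOs
arrests = 1_083                     # arrests resulting from LEO referrals

print(f"Referral rate:          {spot_referrals / passengers_boarded:.4%} of boardings")
print(f"LEO referral rate:      {leo_referrals / spot_referrals:.1%} of SPOT referrals")
print(f"Arrest rate (LEO refs): {arrests / leo_referrals:.1%} of LEO referrals")
print(f"Arrest rate (all refs): {arrests / spot_referrals:.1%} of SPOT referrals")
```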
While SPOT personnel did not determine a specific reason for arrest for the 128 cases categorized as "other" or the 16 cases categorized as "no reason given," our analysis of the SPOT database found that a specific reason for arrest could have been determined for these cases by using the LEO resolution notes included in the database (an illustrative sketch of this kind of notes-based recategorization follows the examples below). For example, we identified 43 additional arrests related to fraudulent documents, illegal aliens, and suspect documents, among others. The remaining 101 arrests originally characterized as "other" or "no reason given" included arrests for reasons such as intoxication, unruly behavior, theft, domestic violence, and possession of prohibited items. Many of the arrests resulting from BDO referrals would typically fall under the jurisdiction of various local, state, and federal agencies and are not directly related to threats to aviation security. For example, the 427 individuals arrested as illegal aliens, and the 166 arrested for possession of fraudulent documents, are subject to the enforcement responsibilities shared by U.S. Immigration and Customs Enforcement (ICE) and CBP. Although outstanding warrants and the possession of fraudulent or suspect documents could be associated with a terrorist threat, TSA officials did not identify any direct links to terrorism or any threat to the aviation system in any of these cases.

According to TSA, anecdotal examples of BDO actions at airports show the value added by SPOT to securing the aviation system. However, because the SPOT program has not been scientifically validated, it cannot be determined if the anecdotal results cited by TSA were better than if passengers had been pulled aside at random, rather than as a consequence of being identified for further screening by BDOs. Some of the incidents cited by TSA include the following.

A BDO referred two passengers who were traveling together to referral screening due to suspicious behavior. During secondary screening, one passenger presented fraudulent travel documents. The other could not produce any documentation of his citizenship, and it was determined he was in the United States illegally. ICE responded and interviewed both passengers. ICE stated that one passenger was also in possession of $10,000, which alerted positive for narcotics when swept by a K-9 team. ICE arrested one passenger on a federal charge of possession of fraudulent identification documents and entry without inspection. ICE stated that charges related to the possession of the $10,000 are still pending. The second passenger was charged federally with entry without inspection.

A BDO referred a passenger to referral screening for exhibiting suspicious behavior. Port Authority of Portland (Oregon) Police responded and interviewed the passenger, who did not give a statement. LEOs conducted an NCIC check, which revealed that there was an outstanding warrant for the failure to appear for a theft charge. LEOs arrested the passenger on a state charge for the outstanding warrant for failure to appear for theft.

A BDO referred a passenger for referral screening due to suspicious behavior. During the referral, the passenger admitted that he was unlawfully present in the United States. The Orlando (Florida) Police Department and CBP responded and interviewed the passenger, who stated he had $100,000 in his checked baggage, which was confirmed by CBP. The passenger was arrested on a federal charge of illegal entry.
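To illustrate the recategorization described above—assigning a specific reason for arrest to cases recorded only as "other" or "no reason given" by reading the free-text LEO resolution notes—a simple keyword-matching approach is sketched below. This is a hypothetical illustration only: the field names, keywords, and function are assumptions made for the example and do not reflect the SPOT referral database schema or the method we used.

```python
# Hypothetical sketch of notes-based recategorization; the record layout and
# keywords are illustrative assumptions, not the SPOT referral database schema.
import re

KEYWORD_PATTERNS = {
    "fraudulent documents": r"fraudulent",
    "illegal alien": r"illegal(ly)?\s+(alien|present|entry)",
    "suspect documents": r"suspect\s+document",
    "outstanding warrant": r"warrant",
    "intoxication": r"intoxicat",
}

def recategorize(arrest_reason: str, resolution_notes: str) -> str:
    """Return a more specific reason for arrest when the recorded reason is
    'other' or 'no reason given' and the LEO resolution notes contain a clue."""
    if arrest_reason.strip().lower() not in ("other", "no reason given"):
        return arrest_reason
    for label, pattern in KEYWORD_PATTERNS.items():
        if re.search(pattern, resolution_notes, re.IGNORECASE):
            return label
    return arrest_reason  # leave unchanged if no keyword matches

# Example: a record coded "Other" whose notes mention an outstanding warrant
print(recategorize("Other", "NCIC check revealed an outstanding warrant for theft"))
# -> "outstanding warrant"
```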
Because these are anecdotal examples, they cannot be used to reliably generalize about the SPOT program's overall effectiveness or success rate. Our analysis of the SPOT referral database found that the referral data do not indicate whether any of the passengers sent to referral screening, or those arrested by LEOs after being referred to them, intended to harm the aircraft, its passengers, or other components of the aviation system. Additionally, SPOT officials told us that it is not known whether the SPOT program has ever resulted in the arrest of anyone who is a terrorist or who was planning to engage in terrorism-related activity. Studying airport video recordings of the behaviors exhibited by persons who waited in line and moved through airport checkpoints and who were later charged with or pleaded guilty to terrorism-related offenses could provide insights about behaviors that may be common among terrorists or could demonstrate that terrorists do not generally display any identifying behaviors. TSA officials agreed that examining video recordings of individuals who were later charged with or pleaded guilty to terrorism-related offenses, as they used the aviation system to travel to overseas locations allegedly to receive terrorist training or to execute attacks, may help inform the SPOT program's identification of behavioral indicators. In addition, such images could help determine whether BDOs are looking for the right behaviors or seeing the behaviors they have been trained to observe.

Using CBP and Department of Justice information, we examined the travel of key individuals allegedly involved in six terrorist plots that have been uncovered by law enforcement agencies. We determined that at least 16 of the individuals allegedly involved in these plots moved through 8 different airports where the SPOT program had been implemented. Six of the 8 airports were among the 10 highest-risk airports, as rated by TSA in its Current Airport Threat Assessment. In total, these individuals moved through SPOT airports on at least 23 different occasions. For example, according to Department of Justice documents, in December 2007 an individual who later pleaded guilty to providing material support to Somali terrorists boarded a plane at the Minneapolis-Saint Paul International Airport en route to Somalia to join terrorists there and engage in jihad. Similarly, in August 2008 an individual who later pleaded guilty to providing material support to Al-Qaeda boarded a plane at Newark Liberty International Airport en route to Pakistan to receive terrorist training to support his efforts to attack the New York subway system.

Our survey of Federal Security Directors at 161 SPOT airports indicated that most checkpoints at SPOT airports have surveillance cameras installed. As we previously reported, best practices for project management call for conducting feasibility studies to assess issues related to technical and economic feasibility, among other things. In addition, Standards for Internal Control state that effectively using available resources is one element of functioning internal controls. TSA may be able to utilize the installed video infrastructure at the nation's airports to study the behavior of persons who were later charged with or pleaded guilty to terrorism-related offenses, and determine whether BDOs saw the behaviors.
The Director of Special Operations in TSA's Office of Inspection told us that video recordings could be used as a teaching tool to show the BDOs which behaviors or activities they did or did not observe. In addition, TSA indicated that although the airports may have cameras at the security screening checkpoints, the cameras are not owned by TSA, and in many cases, they are not accessible to TSA. However, TSA officials lack information on the scope of these potential limitations because, prior to our work, TSA did not have information on the number of checkpoints equipped with video surveillance. We obtained this information as part of our survey of Federal Security Directors at SPOT airports. While TSA officials noted several possible limitations of the use of the existing video surveillance equipment, these images provide TSA with a means of acquiring information about terrorist behaviors in the checkpoint environment that is not available elsewhere. If current research determines that the SPOT program has a scientifically validated basis for using behavior detection for counterterrorism purposes in the airport environment, then conducting a study to determine the feasibility of using images captured by video cameras could better position TSA to identify behaviors to observe.

TSA sends standardization teams to SPOT airports on a periodic basis to conduct activities related to quality control. Teams observe SPOT operations at an airport for several days, working side by side with the BDOs, on multiple shifts, observing their performance, offering guidance, and providing training when required. According to TSA, the purpose of a standardization team visit is to provide operational support to the BDOs, which includes additional training, mentoring, and guidance to help maintain a successful SPOT program. The standardization teams are composed of at least two G-Band, or Expert, BDOs who have received an additional week of training on SPOT behaviors and mentoring skills. SPOT officials stated that the SPOT program uses its standardization teams to assess overall BDO proficiency by observing BDOs and reviewing SPOT score sheet data and other relevant data. Standardization teams may also provide a Behavior Observation and Analysis review class to refresh BDOs if the team determines that such a class is needed. The SPOT program director also said that the standardization teams aim to monitor the airport's compliance with the SPOT program's Standard Operating Procedures. As part of this mentoring approach, the standardization teams provide individual and team guidance to the BDOs, offer assistance in program management, and cover issues related to the interaction of BDOs with other TSA checkpoint personnel.

TSA reported to us that it does not systematically track the standardization teams' recommendations or the frequency of the teams' airport visits. Standards for Internal Control state that programs should have controls in place to assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. Managers are to (1) promptly evaluate findings from audits and other reviews, including those showing deficiencies and recommendations reported by auditors and others who evaluate agencies' operations; (2) determine proper actions in response to findings and recommendations from audits and reviews; and (3) complete, within established time frames, all actions that correct or otherwise resolve the matters brought to management's attention.
Although the standardization teams may provide an airport Federal Security Director with recommendations on how to improve SPOT operations, the SPOT program director stated that Federal Security Directors are not required to document whether they have implemented the team recommendations. TSA officials told us that standardization teams can follow up on recommendations made during previous visits. However, TSA did not track whether corrective actions were implemented or the frequency of the teams' airport visits to ensure the implementation of the airport's SPOT program. TSA officials stated that they are currently examining ways to compile data to address this issue and expect to have a system in place in fiscal year 2010.

Although TSA has taken steps to incorporate all four elements of an effective training program by planning, designing, implementing, and evaluating training for BDOs, further action could help enhance the training's effectiveness. TSA initially consulted outside experts for help in developing the training, which began as a half-day course and has grown to include classroom, on-the-job, and advanced training. TSA also has efforts underway to improve its training program, such as the deployment of SPOT recurrent training. However, TSA evaluations of SPOT program instructors found mixed quality among them from 2006 onward. Additionally, TSA has ongoing plans to evaluate the SPOT training for effectiveness, but has not yet developed time frames and milestones for completing the evaluation.

In 2003, TSA officials at Boston Logan International Airport developed the initial half-day training course for SPOT based on an existing course developed for the Massachusetts State Police. Their goal was to take the behavior detection program designed for law enforcement and apply it to screeners at airport checkpoints. According to TSA officials at Boston Logan, after they recognized that the lecture-style course they originally designed was not effective, they tasked an instructional system designer from TSA's Workplace Performance and Training (the former name of TSA's Operational and Technical Training Division) and an industrial psychologist from the Office of Human Capital to redesign and expand the course, which was piloted in 2005. The 2007 SPOT strategic plan included training objectives for the SPOT program as follows: reviewing existing behavior observation training providers, establishing and prioritizing multimodal training and assistance efforts based on threat assessments and critical infrastructure, establishing a Center of Excellence for Behavior Detection Program training that would continually enhance the quantity and quality of training to selected candidates, and developing a recurrent training program designed to refresh and hone skills needed for an effective Behavior Detection Program. Since that time, the SPOT program has implemented, or is in the process of implementing, some of these objectives. For example, in 2008, as part of its effort toward establishing a center of excellence in behavior detection training (third objective), the SPOT program participated in a meeting with behavior detection training officials from various DHS components facilitated by DHS's Screening Coordination Office to promote the sharing of information about behavior detection training and foster future collaboration. Additionally, the SPOT program worked with TSA's Operational and Technical Training Division to create a recurrent training component for BDOs (fourth objective).
For example, in 2008, the SPOT program office added a course called Additional Behavior Detection Techniques. This 3-day course builds on the behavior detection skills taught in basic training by teaching BDOs how to detect microfacial expressions. After pilot testing, nationwide implementation of the course began in January 2009. We previously reported that, in developing an effective training program, consultation with subject matter experts and expert entities is a core characteristic of the strategic training and development process. TSA SPOT program staff told us that they consulted with experts on behavior detection and observed existing behavior detection courses before deploying the SPOT training program. According to SPOT program officials, a TSA staff member from Boston Logan International Airport attended other training programs offered by other federal agencies and private training organizations to inform the design of SPOT training. TSA officials told us that information from the training courses was used to help develop the list of behaviors or "stress elevators" for the program, and that the point system used to identify passengers for referral screening was based in part on consultations with several subject matter experts. TSA documentation also notes that a SPOT working group created in February 2004 consulted with the FBI's Behavioral Science Unit. The Behavioral Science Unit specializes in developing and facilitating training, research, and consultation in the behavioral sciences for the FBI, law enforcement, intelligence, and military communities. While TSA officials from Boston Logan told us that the FBI was included in this initial SPOT working group, these officials agreed that coordination with the FBI lapsed until June 2009, when the SPOT Program Office reengaged with the Behavioral Science Unit and held a meeting with the unit at the FBI Academy in Quantico, Virginia. Since that meeting, a subject matter expert from the SPOT Program Office has been invited to be a member of the Terrorism Research and Analysis Project, which is an ongoing working group sponsored by the unit. In July 2008, DHS's Screening Coordination Office facilitated a collaborative discussion on behavior detection that included TSA, CBP, and Secret Service officials to better ensure that components within DHS share information regarding their efforts in behavior detection and provide a forum for components to have an informed and collaborative discussion on current capabilities, best practices, and lessons learned. According to TSA, no further contact has occurred between the DHS Behavior Detection Working Group and the SPOT program. Thus, the extent to which the working group's expertise will be used to refine or augment SPOT training in the future is not yet clear.

Along with basic and remedial training required by the Aviation and Transportation Security Act, TSA policy requires its screening force to regularly complete recurrent (refresher) training. TSA recognized that ongoing training of screeners on a frequent basis and effective supervisory training are critical to maintaining and enhancing skills learned during basic training. According to agency officials, TSA is currently working with DHS S&T to determine the necessary frequency for refresher training for each training course within the SPOT program. Furthermore, TSA plans to place BDOs under TSA's Performance and Accountability Standards System (PASS) beginning in fiscal year 2010.
This will include a recertification module. In 2008, the SPOT program office began the process of developing recurrent SPOT training. Our internal control standards and training assessment guidance suggest that such refresher training should be considered integral to an effective training program from the start because work conditions and environments can be expected to change over time, and additional or updated training is essential to ensuring that the program mission continues to be accomplished. According to the SPOT program office, the recently deployed recurrent training will be semiannual. TSA's Operational and Technical Training Division initially planned to pilot test recurrent training in April 2009, followed by full implementation of the course in approximately May 2009. Because the Operational and Technical Training Division's focus shifted to completing the revisions to the SPOT basic certification course, recurrent training was delayed until September 2009, when the division released the training on TSA's Online Learning Center.

Our previous work on elements of effective training states that instructors must be knowledgeable about the subject matter and issues involved, as well as able to effectively transfer these skills and knowledge to others. Moreover, internal control standards state that all personnel need to possess and maintain a level of competence that allows them to accomplish their assigned duties. Management needs to identify appropriate knowledge and skills needed for various jobs and provide needed training, as well as to ensure that those teaching the skills are themselves competent. TSA conducted internal assessments of SPOT instructors episodically from 2006 through March 2008. These assessments rated a few instructors at a time and found a wide range of competency among the instructors. In January 2009, TSA's Office of Inspections and Investigations began an investigation of the SPOT training manager, who resigned shortly thereafter. TSA investigators determined that the training manager and other trainers had created a hostile training environment that intimidated some trainees. To address this problem, TSA stated that the program office reexamined the SPOT training program nationally. This included recertifying 47 of 54 SPOT instructors in March 2009, with evaluation by TSA's Office of Human Capital, Quality Assurance assessors. Additionally, in July 2009, TSA centralized SPOT training at five permanent, regional training facilities in Orlando, Florida; Houston, Texas; Phoenix, Arizona; Denver, Colorado; and Philadelphia, Pennsylvania. According to the SPOT program director, this will allow the SPOT program office greater oversight of training. Previously, training was provided at individual airports.

After the March 2009 recertification training, rating scores of SPOT instructors showed less variation than did previous ratings. We reviewed the quality assurance instructor evaluations of two SPOT instructors conducted by TSA's Office of Human Capital, Training Standards and Evaluation Branch, and the 167 SPOT program instructor evaluations of 54 SPOT instructors conducted by the SPOT program office and TSA's Operational and Technical Training Division since the program started in October 2006. After the recertification training, 93 percent of instructors were rated as exceeding expectations, compared to 30 percent in the 2006 to September 2008 ratings.
Table 3 shows the ratings of instructors for March 2009 compared to the period of 2006 to September 2008. In addition to the variation in numeric scores and rating levels for the 2006 to September 2008 period, as shown in table 3, we found substantial variation in the comments about instructor competency for the same period. For example, in 32 of the 74 instructor evaluation forms from prior to 2009 that we reviewed and that contained comments about the instructor, the comments ranged from "superb" to "needs more experience as an instructor" and "needs more time performing the job as a BDO to be able to teach others." In the comments on an instructor who was rated as "meets expectations," the instructor was described as having "limited experience within the SPOT program," this was noted as "a major concern," and it was recommended that the instructor spend as much time as possible functioning as a BDO. In other cases, however, SPOT instructors were described as competent, solid, and outstanding. For example, one instructor who received a rating of "exceeds expectations" was described as a superb instructor who "is a valued member of the National Training Team."

As noted above, following the March 2009 recertification training, 93 percent of the instructors received a rating of "exceeds expectations," with only 1 percent rated as "needing improvement." Of the 94 instructor evaluations completed in March 2009, 82 contained written comments. Of these, multiple SPOT instructors were described as excellent, knowledgeable, and effective. For example, an instructor who received a rating of "exceeds expectations" was noted as demonstrating a high degree of material knowledge and great presentation skills. TSA attributed the increase in instructor ratings to two factors. The first is low turnover among SPOT instructors, which allows instructors to hone both their technical and instructor skills. The second factor cited by TSA is that TSA conducted a 2-day instructor refresher training immediately prior to the evaluations in March 2009. To ensure that all instructors were reevaluated within a specific time frame, evaluations were scheduled and conducted in a controlled environment. Instructors knew in advance they were going to be evaluated and delivered modules of the BDO certification course to other BDO instructors.

We previously reported that evaluation is an integral part of training and development efforts, and that agencies need to systematically plan for and evaluate the effectiveness of training and development. Employing systematic monitoring and feedback processes can help by catching potential problems at an early stage, thereby saving valuable time and resources that a major redesign of training would likely entail. Similarly, in 2006, TSA's Operational and Technical Training Division issued general evaluation standards for training programs, stating that training programs should be comprehensively evaluated on a periodic basis to identify program strengths and weaknesses. Moreover, standard practices in program management for defining, designing, and executing programs include developing a road map, or program plan, to establish an order for executing specific projects needed to obtain defined programmatic results within a specified time frame.
The former SPOT training manager told us that the SPOT program internally evaluates the effectiveness of SPOT training through the job knowledge tests that BDO candidates must pass following the classroom portion of the training and the SPOT Proficiency/On-the-Job Training Checklist following the on-the-job portion of the training. Furthermore, the former training manager told us that TSA knows that the SPOT training is effective because BDOs are able to recognize behaviors at the checkpoint and because of BDOs' demonstrated ability to identify criminals—such as drug couriers or people with outstanding arrest warrants—through the screening process. Although TSA has not conducted a comprehensive analysis of the effectiveness of the SPOT training program, TSA's Office of Human Capital, Training Standards and Evaluation Branch conducted training evaluations to assess how students use what they were taught in the SPOT basic training course. Specifically, from July through September 2008, the Training Standards and Evaluation Branch conducted evaluations at 5 of the 161 airports where the SPOT program is currently operating. Based on BDO feedback at the 5 airports, the Training Standards and Evaluation Branch's final report contained a series of recommendations for improving the SPOT training program. These recommendations and TSA's actions to address them are summarized in table 4. Additionally, in conjunction with S&T, TSA conducted a training effectiveness evaluation on the Additional Behavior Detection Techniques course, which showed a statistically significant increase in knowledge and skills following completion of the course.

S&T is currently conducting a BDO job task analysis, which may be used to evaluate and update the SPOT training curriculum. Following the completion of the job task analysis—anticipated in mid-May 2010—TSA's Operational and Technical Training Division intends to conduct an in-depth training gap analysis, which will take approximately 2 months to complete. Following completion of the training gap analysis, the agency will develop project plans, including milestones for future development efforts, to address any training concerns. However, to date, the agency does not have an evaluation plan including time frames and milestones for completion. According to the Operational and Technical Training Division, TSA will conduct periodic evaluations as the BDO position evolves. By conducting a comprehensive evaluation of the effectiveness of its training program, TSA will be in a better position to determine whether BDOs are being taught the knowledge and skills they need to perform their job. Furthermore, by developing milestones and time frames for conducting such evaluations systematically, as well as on a periodic basis, TSA could help ensure that the SPOT training program is evaluated in accordance with its directives and that the program continues to provide BDOs with the necessary tools to carry out their responsibilities.

TSA developed the SPOT program in the wake of September 11, 2001, in an effort to respond quickly to potential threats to aviation security by identifying individuals who may pose such a threat, including terrorists planning or executing an attack who were not likely to be identified by TSA's other security screening measures.
Because TSA did not ensure that SPOT's underlying methodology and work methods were scientifically validated prior to its nationwide deployment, an independent panel of experts could help determine whether a scientific foundation exists for the way in which the SPOT program uses behavior detection analysis for counterterrorism purposes in the aviation environment. With approximately $5.2 billion devoted to screening passengers and their property in fiscal year 2009, it is important that TSA provide effective stewardship of taxpayer funds, ensuring a return on investment for each layer of its security system. As one layer of aviation security, the SPOT program has a projected cost of about $1.2 billion over the next 5 years if the administration's requested funding of $232 million for fiscal year 2011 remains at this level. The nation's constrained fiscal environment makes it imperative that careful choices be made regarding which investments to pursue and which to discontinue. If an independent expert panel determines that DHS's study is sufficiently comprehensive to determine whether the SPOT program is based on valid scientific principles that can be effectively applied in an airport environment for counterterrorism purposes, then conducting a comprehensive risk assessment that includes threat, vulnerability, and consequence could strengthen TSA's ability to make resource allocation decisions and prioritize its risk mitigation efforts. Moreover, conducting a cost-benefit analysis could help TSA determine whether SPOT provides benefits greater than or equal to other security alternatives and whether its level of investment in the SPOT program is appropriate. Revising its strategic plan for SPOT to incorporate risk assessment information, cost and resource analysis, and other essential components could enhance the plan's usefulness to TSA in making program management and resource allocation decisions to effectively manage the deployment of SPOT.

Providing guidance on how to use TSA's resources for running passenger names against intelligence and criminal databases available to the Transportation Security Operations Center and helping DHS to connect disparate pieces of information using the Transportation Information Sharing System and other related intelligence and crime databases and data sources could better inform DHS and TSA regarding the identity and background of certain individuals and thereby enhance aviation security. In addition, implementing the steps called for in the TSA Office of Security Operations plan to provide BDOs with a real-time mechanism to verify passenger identities and backgrounds via TSA's Transportation Security Operations Center could strengthen their ability to rapidly verify the identity and background of passengers who have caused concern and increase the likelihood of detecting and disrupting potential terrorists intending to cause harm to the aviation system. Additionally, developing outcome-oriented performance measures, making improvements to the SPOT database, and studying the feasibility of using video recordings of individuals who transited checkpoints and who were later charged with or pleaded guilty to terrorism-related offenses could help TSA evaluate the SPOT program, identify potential vulnerabilities, and assess the effectiveness of its BDOs.
Further, developing a plan for systematic and periodic evaluation of the training provided to BDOs, along with time frames and milestones for its completion, could help ensure that the SPOT training program is evaluated in accordance with its directives and that the program continues to provide BDOs with the necessary tools to carry out their responsibilities.

To help ensure that SPOT is based on valid scientific principles that can be effectively applied in an airport environment, we recommend that the Secretary of Homeland Security convene an independent panel of experts to review the methodology of the DHS S&T Directorate study on the SPOT program to determine whether the study's methodology is sufficiently comprehensive to validate the SPOT program. This assessment should include appropriate input from other federal agencies with expertise in behavior detection and relevant subject matter experts.

If this research determines that the SPOT program has a scientifically validated basis for using behavior detection for counterterrorism purposes in the airport environment, then we recommend that the TSA Administrator take the following four actions: Conduct a comprehensive risk assessment to include threat, vulnerability, and consequence of airports nationwide to determine the effective deployment of SPOT if TSA's ongoing Aviation Modal Risk Assessment lacks this information. Perform a cost-benefit analysis of the SPOT program, including a comparison of the SPOT program with other security screening programs, such as random screening or already existing security measures. Revise and implement the SPOT strategic plan by incorporating risk assessment information, identifying costs and resources, linking it to other related TSA strategic documents, describing how SPOT is integrated and implemented with TSA's other layers of aviation security, and providing guidance on how to effectively link the roles, responsibilities, and capabilities of federal, state, and local officials providing program support. Study the feasibility of using airport checkpoint-surveillance video recordings of individuals transiting checkpoints who were later charged with or pleaded guilty to terrorism-related offenses to enhance understanding of terrorist behaviors in the airport checkpoint environment.

Concurrent with the DHS S&T Directorate study of SPOT, and an independent panel assessment of the soundness of the methodology of the S&T study, we recommend that the TSA Administrator take the following six actions to ensure the program's effective implementation: To provide additional assurance that TSA utilizes available resources to support the goals of deterring, detecting, and preventing security threats to the aviation system, TSA should: Provide guidance in the SPOT Standard Operating Procedures or other TSA directives to BDOs, or other TSA personnel, on inputting data into the Transportation Information Sharing System and set milestones and a time frame for deploying Transportation Information Sharing System access to SPOT airports so that TSA and intelligence community entities have information from all SPOT LEO referrals readily available to assist in "connecting the dots" and identifying potential terror plots.
Implement the steps called for in the TSA Office of Security Operations Business plan to develop a standardized process for allowing BDOs or other designated airport officials to send information to TSA's Transportation Security Operations Center about passengers whose behavior indicates that they may pose a threat to security, and provide guidance on how designated TSA officials are to receive information back from the Transportation Security Operations Center. Direct the TSA Transportation Security Operations Center to utilize all of the law enforcement and intelligence databases available to it when running passenger names for passengers who have risen to the level of a LEO referral.

To better measure the effectiveness of the program and evaluate the performance of BDOs, TSA should: Establish a plan that includes objectives, milestones, and time frames to develop outcome-oriented performance measures to help refine the current methods used by Behavior Detection Officers for identifying individuals who may pose a risk to the aviation system. Establish controls to help ensure completeness, accuracy, authorization, and validity of data collected during SPOT screening. To help ensure that TSA provides BDOs with the knowledge and skills needed to perform their duties, TSA should: Establish time frames and milestones for its plan to systematically conduct evaluations of the SPOT training program on a periodic basis.

We provided a draft of our report to DHS and TSA on March 19, 2010, for review and comment. On May 3, 2010, DHS provided written comments, which are reprinted in appendix II. In commenting on our report, DHS stated that it concurred with 10 of our recommendations and identified actions taken, planned, or under way to implement them. However, the actions that DHS reported it plans to take or has under way do not fully address the intent of our first recommendation. DHS also concurred in principle with an eleventh recommendation, stating that it had convened a working group to determine the feasibility of implementing it. DHS commented on the scientific basis underlying SPOT and on two statements in our report that it believed were inaccurate—specifically, DHS disagreed with our reliance on a 2008 National Research Council report, published under the auspices of the National Academy of Sciences, on issues related to behavior detection and, second, with our statements regarding unpublished research it had cited as a partial validation of some aspects of the SPOT program. Finally, DHS commented on our conclusion regarding the use of the SPOT referral data.

Regarding our first recommendation that DHS convene an independent panel of experts to review the methodology of DHS's Science and Technology Directorate (S&T) study on SPOT, and to include appropriate input from other federal agencies with relevant expertise, DHS concurred and stated that the current process includes an independent review of the program that will include input from other federal agencies and relevant experts. Although DHS has contracted with the American Institutes for Research to conduct its study, it remains unclear who will oversee this review and whether the reviewers are sufficiently independent of the current research process. DHS's response also does not describe how the review currently planned is designed to determine whether the study's methodology is sufficiently comprehensive to validate the SPOT program.
As we noted in our report, research on other issues, such as determining the number of individuals needed to observe a given number of passengers moving at a given rate per day in an airport environment or the duration for which such observation can be conducted by BDOs before observation fatigue affects effectiveness, could provide additional information on the extent to which SPOT can be effectively implemented in airports. Dr. Paul Ekman, a leading research scientist in the field of behavior detection, told us that additional research could help determine the need for periodic refresher training since no research has yet determined whether behavior detection is easily forgotten or can be potentially degraded with time or lack of use. Thus, questions exist as to whether behavior detection principles can be reliably and effectively used for counterterrorism purposes in airport settings to identify individuals who may pose a risk to the aviation system. To help ensure an objective assessment of the study's methodology and findings, DHS could benefit from convening an independent panel of experts from outside DHS to determine whether the study's methodology is sufficiently comprehensive to validate the SPOT program.

DHS also concurred with our second recommendation to conduct a comprehensive risk assessment to determine the effective deployment of SPOT. DHS stated that TSA's Aviation Modal Risk Assessment is designed to evaluate overall transportation security risk, not deployment strategies. DHS noted, however, that TSA is in the process of conducting an initial risk analysis using its risk management analysis tool and plans to update this analysis in the future. However, it is not clear from DHS's comments how this analysis will incorporate an assessment of TSA's deployment strategy for SPOT.

DHS also concurred with our third recommendation to perform a cost-benefit analysis of SPOT. DHS noted that TSA is developing an initial cost-benefit analysis and that the flexibility of behavior detection officers already suggests that behavior detection is cost-effective. However, it is not clear from DHS's comments whether its cost-benefit analysis will include a comparison of the SPOT program with other security screening programs, such as random screening or already existing security measures, as we recommended. Completing its cost-benefit analysis and comparing it to other screening programs should help establish whether the SPOT program is cost-effective compared to other layers of security.

With regard to our fourth recommendation to revise and implement the SPOT strategic plan using risk assessment information, DHS concurred and noted that analysis facilitated by the risk management analysis tool will allow the program to revise the SPOT strategic plan to incorporate the elements identified in our recommendation. DHS also concurred with our fifth recommendation to study the feasibility of using airport checkpoint-surveillance video recordings to enhance its understanding of terrorist behaviors. DHS noted that TSA agrees this could be a useful tool and is working with DHS's S&T Directorate to utilize video case studies of terrorists, if possible. These case studies could help TSA determine what behaviors were demonstrated by these persons convicted of terrorism-related offenses who went through SPOT airports, and what could be learned from the observed behaviors.
DHS concurred with our sixth recommendation that TSA provide guidance in the SPOT SOP or other directives to BDOs, or other TSA personnel, on how to input data into the Transportation Information Sharing System database. DHS stated that the SPOT SOP is undergoing revision, and that the revised version will provide guidance directing the input of BDO data into the Transportation Information Sharing System. DHS anticipates release of the updated SPOT SOP in fiscal year 2010. DHS also agreed that TSA should set milestones and a time frame for deploying Transportation Information Sharing System access to SPOT airports so that TSA and intelligence community entities have information from all SPOT LEO referrals readily available to assist in "connecting the dots" and identifying potential terror plots. DHS stated that TSA is currently drafting a plan to include milestones and a time frame for deploying System access to all SPOT airports.

DHS concurred with our seventh recommendation to develop a standardized process to allow BDOs or other designated airport officials to send information to TSA's Transportation Security Operations Center about passengers whose behavior indicates they may pose a threat to security, and to provide guidance on how designated TSA officials are to receive information back from the Center. DHS stated that TSA has convened a working group to address this recommendation. Moreover, DHS stated that TSA is developing a system and procedure for sending and receiving information from the Center and anticipates having a system in place later in fiscal year 2010.

DHS concurred in principle with our eighth recommendation that the Transportation Security Operations Center utilize all of the intelligence and criminal databases available to it when conducting checks on passengers who rise to the level of a LEO referral. DHS stated that TSA has convened a working group to address this recommendation. According to DHS, this group will conduct a study during fiscal year 2010 to determine the feasibility of fully implementing this recommendation. As such, the study is to review the various authorities, permissions, and limitations of each of the databases or systems cited in our report. DHS stated that access to some of the systems requires more justification than a BDO referral. Further, according to DHS, because some of the databases or systems contain classified information, TSA will also need to adopt a communication strategy to transmit the passenger information between the BDO and the Transportation Security Operations Center. DHS stated that TSA will work on a process to collect the passenger information, verify the passenger's identity through checks of databases, and analyze that information to determine whether the passenger is the subject of an investigation and may pose a risk to aviation security.

With regard to our ninth recommendation to establish a plan with objectives, milestones, and time frames to develop outcome-oriented performance measures for BDOs, DHS concurred and stated that TSA intends to consult with experts to develop outcome-oriented performance measures. DHS also concurred with our tenth recommendation to establish controls for SPOT data. DHS noted that TSA established additional controls as part of the SPOT database migration to TSA's Performance Management Information System and is exploring an additional technology solution to reduce possible errors.
As noted in our report, since these changes to the database were not complete at the time of our audit, we could not assess whether the problems we identified with the database had been corrected. Regarding our eleventh recommendation to establish time frames and milestones to systematically evaluate the SPOT training program on a periodic basis, DHS concurred and stated that TSA intends to develop such a plan following completion of DHS's S&T Directorate's BDO Job Task Analysis and TSA's training gap analysis, which identifies gaps in the training curriculum.

DHS also commented on the scientific basis underlying SPOT. Specifically, DHS stated that decades of scientific research have shown the SPOT behaviors to be "universal in their manifestation." However, according to DHS, its S&T Directorate is examining the extent to which behavior indicators are appropriate for screening purposes and lead to appropriate and correct security decisions. DHS also commented that the results of this work, which is currently under way, will establish a scientific basis for the extent to which the SPOT program instruments and methods are valid. Thus, DHS's comments suggest that additional research is needed to determine whether these behaviors can be used in an airport environment for screening passengers to identify threats to the aviation system.

Moreover, DHS took issue with our use of a report from the National Research Council of the National Academy of Sciences, stating that we improperly relied upon this report. We disagree. DHS questioned the findings of the National Research Council report and stated that it lacked sufficient information for its conclusions because it principally focused on privacy as it relates to data mining and behavioral surveillance and was not intended to represent an exhaustive or definitive review of the research or operational literature on behavioral screening, including recent unpublished DHS, defense, and intelligence community studies. DHS also stated that the National Research Council report did not study the SPOT program and that the researchers did not conduct interviews with SPOT personnel. As we noted in our report, although the National Research Council report addresses broader issues related to privacy and data mining, a senior Council official—and one of the authors of the study—stated that the committee included behavior detection as a focus because any behavior detection program could have privacy implications. This official added that the primary objective of the report was to develop a framework for sound decision making for programs, such as SPOT, and help ensure a sound scientific and legal basis. According to this official, the National Academy of Sciences' Committee on Technical and Privacy Dimensions of Information for Terrorism Prevention and Other National Goals—which had oversight of the report—was briefed on the SPOT program as part of the study. The Committee also conducted meetings with three experts in behavior detection as part of its research. During the course of our review, we interviewed three Committee members responsible for developing the report's findings, as well as four other behavior detection experts, including the three who participated in the National Research Council study. Our discussions with these experts corroborated the report's findings. Thus, we believe that our use of the Council report was an appropriate and necessary part of our review.
However, the National Research Council report was only one of many sources that we analyzed with regard to the science of behavioral and physiological screening and its applicability to an airport environment. As we noted in the description of our methodology, our study included interviews with officials from DHS as well as several of its components and other U.S. government agencies—each of which uses elements of behavior detection in its daily work. We also interviewed El Al airline officials, a former director of security at Israel’s Ben-Gurion airport, and seven nationally recognized experts in behavior detection as part of our review. Moreover, as we explained in the discussion of our scope and methodology, we surveyed all 118 Federal Security Directors responsible for SPOT airports and conducted site visits to 15 SPOT airports. In addition, to the extent the data permitted, we analyzed the SPOT referral database, which covers a 4-year period and the results from 2 billion passengers passing through SPOT airports. Moreover, we attended both the basic and advanced training courses in behavior detection provided by TSA to BDOs in order to better understand how the program is carried out. Therefore, our analysis of the program was not derived from or based on a single study by the National Research Council, as DHS suggested, but rather is based on all of the information we gathered and synthesized from multiple, diverse, expert sources, each of which provided different perspectives about the program, as well as about behavior detection in general. DHS also disagreed with the accuracy of a statement included in our report that noted DHS S&T could not provide us with specific contacts related to sources of information for certain research it cited as support for the SPOT program. In its comments, DHS stated that it had provided us with all requested documents that represent DHS’s S&T Directorate-sponsored research. We agree. However, DHS did not provide us with contact information for the sources of unpublished studies by the Department of Defense and other intelligence community entities that DHS S&T had cited as support for the SPOT program. Without such information, we are unable to verify the contents of these unpublished studies. Finally, DHS stated that while we were unable to use the SPOT referral data to assess whether any behavior or combination of SPOT behaviors could be used to reliably predict the final outcome of an incident involving the use of SPOT, we were able to analyze the SPOT referral database successfully after working with TSA to verify scores assigned to different indicators. Our concern with the data did not involve the question of whether some behaviors were entered erroneously, nor whether errors in coding were excessive or non-random. Rather, we were concerned with whether the data on behaviors were complete. Specifically, it cannot be determined from the SPOT referral database whether all behaviors observed were included for each referred passenger by each BDO or whether only the behaviors that were sufficient for a LEO referral were recorded in the database. It is not possible to determine from the database whether the number of observed behaviors entered for a given passenger was the total number of observed behaviors or whether additional behaviors were observed. 
A rigorous analysis of the relative effects of the different behaviors on the outcomes of the use of SPOT would require each BDO to record, for each of the observable behaviors, whether it was or was not observed. TSA also provided technical comments that we incorporated as appropriate. We will send copies of this report to the Secretary of Homeland Security, the Acting TSA Administrator, and interested congressional committees as appropriate. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4379 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are acknowledged in appendix III. To determine the extent to which the Transportation Security Administration (TSA) determined whether the Screening of Passengers by Observation Techniques (SPOT) program had a scientifically validated basis for identifying passengers before deploying it, we reviewed literature on behavior analysis by subject matter experts, interviewed seven experts in behavior analysis, interviewed officials from other federal agencies and entities about how they use behavior detection techniques, and analyzed relevant reports and books on the topic. These included a 2008 study by the National Research Council of the National Academy of Sciences that discusses deception and behavioral surveillance, as well as other issues related to behavioral analysis. We interviewed Dr. Herbert S. Lin, a primary author of the report, as well as Dr. Robert W. Levenson and Dr. Stephen E. Fienberg, both members of the Academy committee that oversaw the report, about the report’s findings with regard to behavior detection and the extent to which behavior detection in a complex environment, such as an airport terminal, has been validated with regard to its effectiveness in identifying persons who may be a risk to aviation security. Other behavior detection experts we consulted were Dr. Paul Ekman; Dr. Mark Frank; Dr. David Givens; Dr. David Matsumoto; and Mr. Rafi Ron, former director of security at Israel’s Ben-Gurion Airport. Dr. Ekman, Dr. Frank, and Mr. Ron provided expert advice for the National Research Council study. Dr. Givens was identified by TSA as having been its principal source for the nonverbal behavior indicators used by the SPOT program. We also interviewed Dr. Lawrence M. Wein, an expert in emergency responses to terror attacks and mathematical models in operations management. In addition, we interviewed officials from the Department of Homeland Security’s (DHS) Science and Technology (S&T) Directorate regarding their ongoing research into behavior detection. Although the views of these experts cannot be generalized across all experts in behavior analysis because we selected individuals based on their publications on behavioral analysis or related topics, their recognized accomplishments and expertise, and, in some cases, TSA’s use of their work or expertise to design and review the SPOT program’s behaviors, they provided us with an overall understanding of the fundamentals of behavior analysis and how it could be applied. 
To determine the basis for TSA’s strategy to develop and deploy SPOT and evaluate to what extent SPOT was informed by a cost-benefit analysis and a strategic plan, we reviewed program documentation, including briefings prepared by the SPOT program office during the course of developing and fielding SPOT, two versions of a strategic plan for SPOT, and the 2009 SPOT standard operating procedures guidance. We compared the plans and analyses used by TSA to develop and implement SPOT to criteria on how to develop and implement programs in DHS’s 2006 Cost Benefit Analysis Guidebook, as well as to Office of Management and Budget guidance on the utility of cost-benefit analyses in program implementation. We also analyzed the development of SPOT in light of the standards and criteria cited in DHS’s 2006 National Infrastructure Protection Plan. We met with relevant TSA officials to discuss these issues. To assess whether DHS developed an effective strategic plan for SPOT prior to implementing the program, we interviewed TSA officials involved in development of the SPOT strategic plan. We analyzed whether the SPOT plan incorporated the desirable characteristics of an effective strategic plan as identified by previous GAO work on what strategic plans should include to be considered effective, such as a risk assessment, a cost and resources analysis, and a means for collaboration with other key entities. We also examined it in light of the requirements of the Government Performance and Results Act of 1993, which specifies the elements of strategic plans for government programs. We assessed whether the SPOT strategic plan was followed by TSA. As part of our analysis of the planning for SPOT before it was implemented on a nationwide basis, we reviewed TSA documentation related to the development and pilot testing of SPOT, such as a TSA white paper on SPOT, and interviewed key program officials from both headquarters and field offices. We also interviewed cognizant officials from other U.S. government agencies and agency entities that utilize behavior detection in their work, including U.S. Customs and Border Protection (CBP), the U.S. Secret Service, TSA’s Federal Air Marshal Service (FAMS) component, and the Federal Bureau of Investigation (FBI). We sought their views on the utility of various behavior detection methods and their experience with practicing behavior detection, and we asked them about the extent to which TSA had consulted with them in developing and implementing the SPOT program. To better understand how SPOT incorporated expertise about the use of behavior detection in an airport setting, we interviewed officials from Israel’s El Al Airlines, which is cited by TSA as having provided part of the basis of the SPOT program. We asked about El Al’s methods to ensure the security of its passenger aircraft, and we also interviewed a former head of security at Israel’s Ben-Gurion airport, who has advised TSA on security issues. We asked TSA and SPOT program officials about their consultations with El Al and about the ways in which they had utilized El Al’s expertise, as well as about any other entities whose expertise they may have incorporated into SPOT. To determine the challenges, if any, that emerged during implementation of the SPOT program, we interviewed headquarters and field personnel about how the program has utilized the resources available to it to ensure that it is effective. 
These resources included the support of law enforcement officers (LEOs), to whom passengers are referred by Behavior Detection Officers (BDOs) for additional questioning. In addition, we interviewed SPOT program and TSA officials about the databases available to them at TSA’s Transportation Security Operations Center to determine whether a suspect passenger is being sought by other U.S. law enforcement or intelligence entities, and whether there is guidance for BDOs on when and how to contact the Transportation Security Operations Center. We also asked whether there is guidance and training for BDOs on how to access TSA’s Transportation Information Sharing System database, which is owned by FAMS and is available through the Transportation Security Operations Center. To determine whether any challenges related to management controls had emerged in developing and implementing SPOT, we compared TSA’s approach for implementing and managing the SPOT program with GAO’s Standards for Internal Control in the Federal Government and with risk management principles we had previously identified. Our legal counsel office reviewed court decisions relevant to the SPOT program. In addition, we interviewed attorneys from the American Civil Liberties Union and obtained and reviewed TSA’s Privacy Impact Assessments for SPOT, the Transportation Security Operations Center, and the Transportation Information Sharing System. We also discussed relevant privacy and legal issues with TSA’s Offices of Privacy and Civil Rights/Civil Liberties. To obtain data about certain aspects of the SPOT program that the SPOT program office did not have, we conducted a survey of Federal Security Directors whose responsibilities included security at all 161 SPOT airports at the time of our survey. (Some Federal Security Directors have responsibility for more than one airport.) We obtained a 100 percent response rate. This survey asked, among other things, whether there were cameras at security checkpoints that record the interactions of Transportation Security Officers (TSOs), BDOs, and passengers; whether the airport authority had an agreement with TSA that specifies certain law enforcement actions during a SPOT referral; and whether there was an agreement or any other comparable guidance that specified a time limit for LEOs to come to checkpoints after being called for help by BDOs. To determine the extent to which TSA has measured SPOT’s effect on aviation security, we obtained and analyzed the TSA SPOT referral database, which records all incidents in which BDOs refer passengers to secondary, more intensive questioning, and which also records all incidents in which BDOs chose to refer passengers to LEOs. We found that the SPOT database was sufficiently reliable for counting the number of arrests resulting from referrals from BDOs to LEOs, for examining the reasons for each arrest, and for determining the percentage of times that LEOs responded to BDO calls for service and the length of time required. Use of these data required us to resolve apparent contradictions and anomalies in the database to make the data usable. Because of data problems, we were unable to conduct analyses to assess whether any behavior or combination of behaviors could be used to predict the final outcome of an incident involving the use of SPOT. In addition, we reviewed relevant standardization team reports and observed a standardization team visit in operation. 
In addition, we spoke with BDO managers, Federal Security Directors, and Assistant Federal Security Directors to determine how BDOs are evaluated. To do so, we conducted site visits to 15 commercial airports at which BDOs and SPOT have been deployed, or almost 10 percent of the 161 airports with SPOT. We chose these airports taking into account the following criteria, among others: (1) each airport had BDOs deployed, and at each, the SPOT program had been in effect for no less than 3 months; (2) the airports provided a variety of sizes, as measured in annual passenger volume; physical locations within the country (northeast, southwest, central, Pacific Coast, rural, urban); and estimated risks of terrorist incident, using DHS’s Current Airports Threat Assessment list (visiting 6 that were in the top 10, and others much lower); (3) some airports had BDOs employed by contractors rather than directly by TSA; and (4) some airports had LEOs who were identified to us by TSA as having received some form of behavior detection training, while at other airports LEOs were not known to have received such training. In addition, we took into account the location of the airports with regard to their proximity to subject matter experts on behavior detection whom we wished to interview, as well as the time and cost required to reach certain airports. At each of the airports we visited, we interviewed cognizant officials, including the Federal Security Director or Assistant Federal Security Director assigned to the airport, the BDO program manager, one or two BDOs, and one or two LEOs who have interacted with BDOs. Since each of these airports differs in terms of passenger volume, physical size and layout, geographic location, and potential value as a target for terrorism, among other things, the results from these visits are not generalizable to other airports. However, these visits provided helpful insight into the operation of SPOT at airports. In addition, to determine whether individuals who were later charged with or pleaded guilty to terrorism-related offenses had transited SPOT airports, we reviewed information contained in (1) the Treasury Enforcement Communication System II database maintained by CBP; (2) Department of Justice information and court documents, including indictments and related documents; and (3) media accounts of individuals accused of terrorism-related activities. We compared information pertaining to these individuals’ dates of transit to the dates when SPOT was deployed to the various airports identified in the Treasury Enforcement Communication System and Justice Department data to determine whether SPOT had been deployed at a given airport when the transits occurred. Further, we used our survey of Federal Security Directors at SPOT airports to determine the extent to which video surveillance cameras are present at checkpoints. To assess the extent to which SPOT training incorporates the attributes of an effective training program, we had training experts at TSA headquarters complete a training assessment tool that we developed using our prior work on assessing training courses and curricula. To address training-related issues, including to better understand how other entities train their employees in behavior detection and what their curricula include, we conducted site visits to the Secret Service, FAMS, CBP, and the FBI, and also interviewed nongovernmental experts on behavior detection (our selection of these experts is discussed above). 
As part of our assessment of SPOT training, we attended the basic SPOT training course given to BDOs, as well as the advanced SPOT course on behavior detection. We interviewed BDOs and BDO managers about the SPOT training, as well as officials of El Al Airlines with regard to how El Al trains and tests its personnel who utilize behavior recognition and analysis as part of their assessment of El Al passengers. We conducted this performance audit from May 2008 through May 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, David M. Bruno, Assistant Director, and Jonathan R. Tumin managed this assignment. Ryan Consaul, Jeff C. Jensen, Kevin Remondini, and Julie E. Silvers made significant contributions to the work. Arthur James, Jr., Amanda Miller, and Douglas Sloane assisted with design, methodology, and data analysis. Chris Dionis assisted with issues related to training. Katherine Davis and Debra Sebastian provided assistance in report preparation; Tracey King and Tom Lombardi provided legal support; and Pille Anvelt and Barbara Hills developed the report graphics.
To enhance aviation security, the Transportation Security Administration (TSA) began initial testing in October 2003 of its Screening of Passengers by Observation Techniques (SPOT) program. Behavior Detection Officers (BDO) carry out SPOT's mission to identify persons who pose a risk to aviation security by focusing on behavioral and appearance indicators. GAO was asked to review the SPOT program. GAO analyzed (1) the extent to which TSA validated the SPOT program before deployment, (2) implementation challenges, and (3) the extent to which TSA measures SPOT's effect on aviation security. GAO analyzed TSA documents, such as strategic plans and operating procedures; interviewed agency personnel and subject matter experts; and visited 15 SPOT airports, among other things. Although the results from these visits are not generalizable, they provided insights into SPOT operations. Although the Department of Homeland Security (DHS) is in the process of validating some aspects of the SPOT program, TSA deployed SPOT nationwide without first validating the scientific basis for identifying suspicious passengers in an airport environment. A scientific consensus does not exist on whether behavior detection principles can be reliably used for counterterrorism purposes, according to the National Research Council of the National Academy of Sciences. According to TSA, no other large-scale security screening program based on behavioral indicators has ever been rigorously scientifically validated. DHS plans to review aspects of SPOT, such as whether the program is more effective at identifying threats than random screening. Nonetheless, DHS's current plan to assess SPOT is not designed to fully validate whether behavior detection can be used to reliably identify individuals in an airport environment who pose a security risk. For example, factors such as the length of time BDOs can observe passengers without becoming fatigued are not part of the plan and could provide additional information on the extent to which SPOT can be effectively implemented. Prior GAO work has found that independent expert review panels can provide comprehensive, objective reviews of complex issues. Use of such a panel to review DHS's methodology could help ensure a rigorous, scientific validation of SPOT, helping provide more assurance that SPOT is fulfilling its mission to strengthen aviation security. TSA is experiencing implementation challenges, including not fully utilizing the resources it has available to systematically collect and analyze the information obtained by BDOs on passengers who may pose a threat to the aviation system. TSA's Transportation Security Operations Center has the resources to investigate aviation threats but generally does not check all law enforcement and intelligence databases available to it to identify persons referred by BDOs. Utilizing existing resources would enhance TSA's ability to quickly verify passenger identity and could help TSA to more reliably "connect the dots." Further, most BDOs lack a mechanism to input data on suspicious passengers into a database used by TSA analysts and also lack a means to obtain information from the Transportation Security Operations Center on a timely basis. TSA states that it is in the process of providing input capabilities, but does not have a time frame for when this will occur at all SPOT airports. Providing BDOs, or other TSA personnel, with these capabilities could help TSA "connect the dots" to identify potential threats. 
Although TSA has some performance measures related to SPOT, it lacks outcome-oriented measures to evaluate the program's progress toward reaching its goals. Establishing a plan to develop these measures could better position TSA to determine if SPOT is contributing to TSA's strategic goals for aviation security. TSA is planning to enhance its evaluation capabilities in 2010 to more readily assess the program's effectiveness by conducting statistical analysis of data related to SPOT referrals to law enforcement and associated arrests.
EEOICPA, as amended, generally provides compensation to employees of the Department of Energy (Energy) and its contractors employed in the production of U.S. nuclear weapons who developed illnesses related to their exposure to radiation and many other toxins at Energy facilities. During and shortly after World War II, the United States sponsored the development and production of nuclear weapons using a network of facilities. During the Cold War, this network expanded into a complex of as many as 365 industrial sites and research laboratories throughout the country that employed more than 600,000 workers in the production and testing of nuclear weapons. Some of the production sites were owned by Energy or its predecessor agencies, and in many instances contractors managed operations at the facilities. Workers in these facilities used manufacturing processes that involved handling very dangerous materials, and they often were provided inadequate protection from exposure to radioactive elements, although protective measures have increased over time. Because of national security concerns, they also worked under great secrecy, often facing severe criminal penalties for breaches of secrecy. Workers were often given minimal information about the materials with which they worked and the potential health consequences of their exposure to the materials. In some cases, the extent of the potential negative effects of the toxins may not have been fully understood at the time of workers’ exposure. Active production of nuclear weapons was halted at the end of the Cold War, and federally sponsored cleanup of some of these sites has been underway since that time. Other sites remain active for research, storage, uranium production, and weapons assembly and disassembly. In passing EEOICPA, Congress recognized that many of these employees were unknowingly exposed to radiation, beryllium, and other toxic materials at Energy facilities. EEOICPA, as amended, consists of two compensation programs, Part B and Part E. The Part B program generally provides for $150,000 to eligible workers or their survivors, as well as coverage of future medical expenses associated with certain radiogenic cancer, chronic beryllium disease, and chronic silicosis. Part E provides up to $250,000 for wage loss and impairment, as well as coverage of medical expenses, to claimants with any illness who demonstrate that it is at least as likely as not that 1) exposure to a toxic substance at an Energy facility was a significant factor in aggravating, contributing to, or causing the illness, and 2) the exposure to such a toxic substance was related to employment at an Energy facility. If the employee is deceased, certain eligible survivors can also claim benefits. While the Division of Energy Employees Occupational Illness Compensation (DEEOIC) within DOL’s Office of Workers’ Compensation Programs has primary responsibility for administering the compensation program, other federal agencies, including the Department of Health and Human Services and Department of Energy, also have a role in implementing the program. The National Institute for Occupational Safety and Health (NIOSH) conducts activities to assist claimants and support the role of the Department of Health and Human Services under EEOICPA. 
Some of these activities include developing scientific guidelines for determining whether a cancer is related to the worker’s occupational exposure to radiation, developing methods to estimate worker exposure to radiation (dose reconstruction), using the dose reconstruction to develop estimates of radiation dose for workers who apply for compensation, and overseeing the process by which classes of workers can be considered for inclusion in the Special Exposure Cohort. Energy’s role, in part, is to ensure that all available worker and facility records and data are provided to DOL and NIOSH. This includes information related to individual claims, such as employment records to establish periods of covered employment and facilities, and exposure records for use in adjudicating claims. DEEOIC’s four district offices, located in Cleveland, Denver, Jacksonville, and Seattle, are responsible for claims development, determining causation, and issuing a Recommended Decision. Claims examiners within each district office can also authorize compensation and medical benefits, respond to inquiries from interested parties, and maintain case files. Claims examiners rely on the EEOICPA Procedure Manual, among other resources, to process and develop claims. The Procedure Manual is supplemented by EEOICPA Bulletins and Circulars and is updated periodically. In addition, DOL has issued regulations that set forth the general policies and guidelines governing its administration of EEOICPA. DOL also has 11 resource centers to assist with claims processing. The resource centers, situated in key geographic locations throughout the United States, are responsible for providing assistance and information to the EEOICPA claimant community and other interested parties. They provide claim development support and program outreach as well as initial claim intake. While the resource centers gather substantial information and documentation, and perform certain initial development and limited follow-up tasks, they do not make any decisions regarding the claim. The district office further develops the claim and issues an initial Recommended Decision. Since Part E’s creation in 2004 and through December 2015, just over 123,000 claims have been processed by DOL. The process starts when the employee or their survivor(s) files a claim with a district office or resource center (see fig. 1). If the claim is filed with a resource center, a center employee conducts outreach and initiates actions to verify employment. Once complete, the resource center forwards the claim to the district office for further development. Upon receipt of a claim from the resource center, or directly from a claimant, the district office assigns it to a claims examiner, who develops the claim and issues a Recommended Decision. During this process, the claims examiner will often request additional information from the claimant to verify employment, document a diagnosed claimed illness, and determine survivor eligibility, as applicable. The claims examiner will also verify the claimant’s employment history with Energy, the Social Security Administration, or other organizations. After the district office has established that the claimant meets the employment criteria and has a diagnosed illness, it determines if the illness was a result of exposure to radiation or other toxic substances during the claimant’s contract employment with Energy. 
To do this, the district office requests and reviews additional information, as applicable, from NIOSH, medical consultants, the claimant’s treating physician, certified toxicologists, and industrial hygienists, as well as web-based information regarding the relationship between the illness and toxic substances. Based on a review of all the evidence gathered, the claims examiner determines whether it was at least as likely as not that the exposure was a significant factor in causing, contributing to, or aggravating the illness, and issues a Recommended Decision to accept or deny the claim. The claims examiner then notifies the claimant and the Final Adjudication Branch (FAB) of the Recommended Decision. Claimants have 60 days from the date the district office issues the Recommended Decision to object and request a hearing. The FAB reviews the Recommended Decision and the claimant’s objections, if any, and reaches a Final Decision to accept or reverse the Recommended Decision, or remand the claim to the district office for further processing. If the claimant provides new evidence before a Final Decision to deny the claim is issued, FAB may return the claim to the district office for additional development or, if the new evidence warrants reversal in favor of the claimant, FAB may issue a reversal. After the Final Decision is issued, claimants can request reconsideration within 30 days of the Final Decision, or a reopening of the claim at any time. For such requests, DOL will consider the claimant’s reason for the request and either accept or deny the request. DOL claims examiners determine workers’ eligibility for Part E compensation in part by using a centralized database of information on Energy facilities, toxic substances, and their related illnesses. Known as the Site Exposure Matrices (SEM), the web-based database was developed by DOL’s contractor to organize, display, and communicate information on the toxic substances workers were potentially exposed to at specific Energy sites, buildings at the sites, and during specific job processes conducted in those buildings. It also cross-references the toxins with diseases for which there is an established link. DOL officials noted that although the creation of the SEM is not mandated, the agency developed it partly as a tool to help claimants establish a link between a facility and possible exposure to toxins. The SEM is continually updated as new exposure data are obtained and is publicly available on the Internet for anyone seeking this type of information. Claims examiners will typically use SEM information during the adjudication of a claim, while a member of the public can use it as a research tool. Upon accessing the SEM, the claims examiner will retrieve relevant information by entering search terms specific to the claim being processed. For example, by entering the facility where the employee worked, the examiner will generate a list of toxins known to have been at the site. By entering the illness being claimed, the examiner will generate a list of toxins known to be linked with that illness. Public users retrieve SEM information in a similar fashion, and the information they obtain may factor into their decision to file a claim for benefits under EEOICPA Part E. However, a claimant need not obtain or consider SEM information before filing a claim. 
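To illustrate the kind of two-way lookup the SEM supports, the following is a minimal sketch in Python of a cross-reference between facilities, toxins, and linked illnesses. The facility, toxin, and illness names, the data structures, and the function names are hypothetical illustrations of the general idea, not DOL's actual SEM data, software, or search logic.

```python
# Minimal sketch of SEM-style cross-reference lookups.
# All facility, toxin, and illness names below are hypothetical examples;
# they are not drawn from DOL's actual Site Exposure Matrices.

# Toxins documented as present at each facility.
toxins_by_facility = {
    "Example Gaseous Diffusion Plant": {"trichloroethylene", "beryllium", "asbestos"},
    "Example National Laboratory": {"beryllium", "mercury"},
}

# Illnesses with an established causal link to each toxin.
illnesses_by_toxin = {
    "trichloroethylene": {"kidney cancer"},
    "beryllium": {"chronic beryllium disease"},
    "asbestos": {"asbestosis", "mesothelioma"},
    "mercury": {"peripheral neuropathy"},
}

def toxins_at_site(facility):
    """Entering a facility returns the toxins known to have been present there."""
    return toxins_by_facility.get(facility, set())

def toxins_linked_to(illness):
    """Entering an illness returns the toxins with an established link to it."""
    return {toxin for toxin, illnesses in illnesses_by_toxin.items() if illness in illnesses}

def candidate_links(facility, illness):
    """Toxins both present at the facility and linked to the claimed illness."""
    return toxins_at_site(facility) & toxins_linked_to(illness)

if __name__ == "__main__":
    print(candidate_links("Example Gaseous Diffusion Plant", "kidney cancer"))
    # -> {'trichloroethylene'}
```

A real SEM query would also account for specific buildings and job processes at a site, which this sketch omits for brevity.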
Based on our review of a stratified random probability sample of 200 EEOICPA Part E claims filed from 2010 through 2014 by employees or their survivors, we found DOL claims examiners generally followed established adjudication procedures. Overall, we estimate that approximately 90 percent of adjudicated claims were consistent with the selected procedures we tested that are contained in DOL’s EEOICPA Procedure Manual. The remaining estimated 10 percent of adjudicated claims were inconsistent with at least one procedure, and the nature of the inconsistency varied. While our sample may not necessarily reflect the reasons for inconsistency across the entire population of Part E claims, we found many examples of deficiencies in DOL’s written correspondence to claimants. In other instances, we found deficiencies in how the claim was developed. However, in the vast majority of occurrences, the inconsistencies we identified in our sample would likely not have affected adjudication outcomes because DOL officials verified that most of the affected claims were accepted or denied for reasons unrelated to the problems we found.

Our claim file review found that most of the inconsistency with the Procedure Manual pertained to deficiencies in written correspondence to claimants, such as in Recommended and Final Decision letters, and in one instance, a letter requesting additional evidence. The Procedure Manual explicitly states that claims examiners must ensure that written decisions, in particular, are clear, concise, and well written, with language that clearly communicates the necessary information. It specifically cautions that a poorly written Recommended Decision increases the likelihood that a claimant will not understand the outcome of the claim and the probability of objection. Our sample identified letters to claimants that included inaccurate, inconsistent, or incomplete information, though these deficiencies may not necessarily be reflective of the general population of claims.

Inaccurate information: Some written correspondence from DOL to claimants, such as the letters accompanying the Recommended Decision, contained factual errors, as illustrated by the following examples:
- For a claim that covered myocardial infarction, a heart condition, DOL’s decision letter erroneously stated that the condition was “hyperthyroidism,” a glandular condition.
- One decision letter was dated incorrectly.
- One letter requesting additional evidence listed the incorrect address for the district office. This error resulted in the claimant mailing evidence to the incorrect address, which we found led to processing delays.

Inconsistent information: Some Recommended Decision letters to claimants included inconsistent information within the same letter, or contradicted other information sent to the claimant, such as the Final Decision letter. Examples include:
- In some parts of a Recommended Decision letter, the claimant’s condition was correctly cited as “bladder cancer,” while in other places it was incorrectly cited as “prostate cancer.”
- One Recommended Decision letter initially stated there was a medical diagnosis of the claimed condition, but later stated there was no diagnosis.
- Another Recommended Decision letter stated there was no diagnosis for any of the seven conditions claimed, but the Final Decision letter stated there was a diagnosis for four conditions. Moreover, the same Final Decision letter stated, in a latter part of the letter, that there was a diagnosis for six conditions.
- In two instances, the claimant’s employment dates differed between the Recommended and Final Decision letters sent to the claimant.

Further, dates listed on letters sent to claimants sometimes differed from what the claims examiner entered into DOL’s electronic case management system.

Incomplete information: We also found that some Recommended Decision letters omitted the required notice to the claimant informing them of their right to request a copy of the Contract Medical Consultant (CMC) report if the Recommended Decision used the opinion of a CMC. More specifically, we identified 47 claims in our sample in which a CMC report was used to substantiate a Recommended Decision to deny the claim, but of that group, 17 did not include the required notice. DOL officials informed us that, subsequent to our file review, a July 2015 email from DOL headquarters instructed district offices to automatically attach any CMC, industrial hygienist, or toxicologist reports to Recommended Decision letters for claims that were denied.

In addition, our review revealed deficiencies associated with claims development, which pertains to steps the claims examiner must take to verify employment and medical condition, and establish causation. While the deficiencies among claims in our sample may not reflect those within the general population of Part E claims, we found examples of incorrect or missing SEM searches, no evidence of a referral to a specialist, and untimely response to claimant requests for reopening:
- In a hearing loss claim, the claims examiner ran an incorrect SEM search on “maintenance mechanic” although the claimant actually worked as a maintenance supervisor.
- In a claim for sarcoidosis, the Procedure Manual directs the claims examiner to determine whether the claimant may have been exposed to beryllium. However, we saw no evidence that a SEM search, or other type of determination, had been performed.
- In a Parkinsonism claim, the claims examiner was uncertain as to whether that condition was linked to exposure to a certain toxin. Established adjudication policy suggests claims be referred to an industrial hygienist or toxicologist under these circumstances; however, we saw no evidence that such a referral had been made.
- A claim was denied for lack of causation but there was no evidence that a request was made to the claimant or a physician asking for more causation evidence.
- Two claimants had requested reopening of their denied claim but the district office had not responded to their request.

Upon their reexamination of the claims involved, DOL officials agreed procedural inconsistencies had occurred and said they would flag certain issues for additional review and for claims examiner training, such as those related to development of sarcoidosis and hearing loss claims. For the claim we determined should have been referred to an industrial hygienist or toxicologist, DOL officials said the claim would be referred to both in order to determine if it warrants reopening. As part of our examination of claim files, we also found evidence that a review of the file was often not performed. Under the Procedure Manual, claims examiners are to route the Recommended Decision and case file to the “appropriate signatory” for review, signature, date, and release. The Procedure Manual does not, however, clarify who an “appropriate signatory” is, or when such a review would be necessary. 
DOL officials said there is no national policy on performing such reviews and that they occur at the discretion of the district office. Officials added that the decision is often based on the claims examiner’s level of experience, meaning that claims adjudicated by senior claims examiners would not typically undergo a review. We estimate that 50 percent of adjudicated claims did not undergo this review. Reviews are one of DOL’s quality controls, but their inconsistent application may increase the likelihood of issues with the Recommended Decision, which in turn increases the potential for claimant confusion and delays in adjudication. Lastly, review by the Final Adjudication Branch (FAB), required after the issuance of the Recommended Decision, also acts as a quality control to help identify and correct any issues with adjudication. For example, we estimate that FAB remands 1 percent of claims back to the district office due to development errors, including incorrectly performed SEM searches. However, it is notable that while FAB’s role includes identifying and correcting procedural errors, it had not detected the several instances of procedural inconsistency we had identified during our review. As a result, none of the deficiencies we identified during our claim file review had led to the claim being remanded and corrected. DOL continuously captures new links between toxic substances and illnesses as they are identified and documents them in the SEM. New links are primarily drawn from a database of hazardous toxins and associated diseases—known as Haz-Map—maintained by the National Library of Medicine. According to DOL officials, as new links are added to Haz-Map, they are also added to the SEM for claims examiners’ use, and added to the public SEM on the Internet about every 6 months. In general, the SEM contains only causal links that are based on epidemiological studies, and for which there is consensus within the medical and scientific communities. In addition, on its public SEM website, DOL encourages the public to submit site- or disease-related information to be considered for inclusion in the SEM. According to DOL officials, to determine whether publicly submitted information should be added, DOL relies on two individuals: a toxicologist it employs and the National Library of Medicine’s physician responsible for updating Haz-Map. Both monitor ongoing research on toxin-illness causation. Users can check the public SEM website for the status of proposed submissions to see if they have been accepted for inclusion in the SEM. According to DOL officials, the amount of information contained in the SEM has dramatically increased since 2006 when it became available to claims examiners for adjudication purposes. For example, they estimated that from 2006 to 2015, the number of links between toxins and illnesses has increased from around 300 to over 3,000. Officials said the SEM represents the largest collection of information ever assembled by a government entity for the purpose of assessing occupational hazards at nuclear weapons facilities, both current and past. Despite its scope, the SEM has come under scrutiny from claimant advocacy groups and the Ombudsman for the Energy Employees Occupational Illness Compensation Program, both of whom expressed concerns about its accuracy and completeness. 
DOL officials said that the complex nature of the information associated with the SEM makes it challenging to ensure absolute completeness, due in part to lack of consensus about whether there is a link between a specific substance and an illness. Further, in response to a request by DOL to evaluate the scientific rigor of the SEM, the Institute of Medicine published a 2013 report that also questioned the SEM’s completeness. The report noted several examples of potential causal linkages missing from the SEM. Further, it questioned the SEM’s exclusive dependence on Haz-Map as its source for disease and causal link information, and suggested other sources be considered, such as the U.S. Department of Health and Human Services and the World Health Organization. During our interviews, DOL officials said the SEM is therefore not complete, and the agency states this on the SEM website. DOL officials said they have taken several steps over recent years, largely in response to the recommendations of an external and internal review, to improve the SEM’s content and usefulness. These include continuous updates and refinements to the information contained in the SEM and increasing functionality, such as enhancing the user’s ability to filter search results according to certain terms. They also highlighted several efforts to engage the general public, including increasing the public’s access to the SEM, publishing SEM resource documentation, and responding to individuals who submit suggestions for new causal links or other information. In addition, a law was enacted in December 2014 requiring the establishment of an advisory board to, among other things, advise the Secretary of Labor on the SEM. According to DOL officials, the agency has appointed a Designated Federal Officer, hired staff and a contractor to help support the board’s work, and is in the process of recommending board members representing the scientific, medical, and claimant communities. DOL estimates that the board’s membership will be in place in early 2016. DOL provides limited notification to claims examiners and the public regarding new links between toxic substances and illnesses due to the large volume of information being continuously added to the SEM, according to officials. Therefore, it is usually incumbent upon claims examiners and the public to make themselves aware of new links. DOL officials said claims examiners typically become aware of new links when they check the SEM as part of the adjudication of an individual claim. Since each claim necessitates tailored searches of the SEM based on the specific facility, toxic substances, and illnesses associated with the particular claim, the claims examiner will learn of a new link if it is relevant to that claim during adjudication. DOL also conveys information on some new links through notices issued to claims examiners and posted on its public website. From fiscal years 2006 through 2015, DOL issued six Circulars and four Bulletins pertaining to new links. Five of these notices announced plans to reopen claims thought to be potentially affected by the information contained in the notice. For example, Circular 15-04, issued in fiscal year 2015, informed claims examiners that the substance trichloroethylene had been linked to kidney cancer and that Haz-Map had been updated to reflect this new link. 
The circular also announced that DOL had compiled a list of previously denied Part E kidney cancer claims and instructed claims examiners on the reopening of those claims for reevaluation in light of the new link. DOL officials told us that such steps are limited to instances in which it is believed a relatively large number of claims are potentially affected. Other than these notices, claims examiners typically learn of a new link and apply it to adjudication to the extent they check the SEM for updates. However, if the SEM is not searched regularly for the latest information on new links, claims examiners could be rendering decisions using outdated information. For that reason, DOL’s Procedure Manual instructs claims examiners to check the SEM for any updates just prior to issuing a Recommended Decision to deny a claim; however, documentation of that step is not always required. According to the Procedure Manual, the SEM will show the latest date it was updated. If that date has changed since the prior search was conducted, the claims examiner must search the SEM again and document the results of the query. On the other hand, if that date is unchanged since the original search was conducted, the claims examiner will know that no new information was added to the SEM and, consequently, no new search is required. However, examiners are not required to document this latter check in which they determined that no new information was added to the SEM that might affect adjudication. The absence of such documentation impedes the ability to effectively monitor whether this critical step is carried out. Because effective internal controls require that monitoring be conducted to assess the quality of program performance, such documentation is essential. Our file review showed evidence of some claims in which the claims examiner clearly documented an updated SEM search just prior to issuing the Recommended Decision, but we could not assess the full extent to which claims examiners followed this final step due to the lack of documentation. In addition, DOL officials said that independent SEM searches required to be performed by FAB also serve as a check of those performed by district office claims examiners. As with district offices, FAB must also ensure the SEM record reflects the most complete and updated data available, and that no significant changes have been made before FAB issues its decision on the claim. Unlike district offices, if an updated SEM search is not needed, FAB makes an entry in the electronic case management system to indicate that no significant changes have been made in the SEM that would alter the Recommended Decision. Similarly, claimants must usually rely on repeatedly checking the SEM to learn of any newly added causal links. The online SEM available to the public is identical to the one claims examiners use when adjudicating claims, differentiated only by a time lag of around 6 months, according to DOL officials. This lag is primarily because Energy needs to review and approve all SEM information before it is made public. New public versions of the SEM are announced online, but details accompanying the specific updates are limited. In its 2013 report on the SEM, the Institute of Medicine noted that although a SEM record indicates when it was last updated, there is no indication as to what specific information or field was updated. 
The report added that this lack of information makes it extremely difficult for the user to know if the most current information has been incorporated. For example, the public SEM version dated May 18, 2015, was accompanied by a notice on the SEM website that data for 29 worksites had changed since the previous update, although details were displayed for only two of the sites. Moreover, it was not possible to determine whether any new causal links had been added because this information was not contained in the notice. Absent a check of the SEM for specific illnesses or toxic substances, claimants could remain unaware that a new link had been added that may warrant the reopening of their claim. This could be a significant issue given that claimants are authorized to request the reopening of a previously denied claim. In general, DOL’s monitoring from fiscal years 2010 through 2014 of how well EEOICPA Part E claims are adjudicated concluded that the process is working satisfactorily and meeting DOL’s established acceptability standards. However, there were also some areas of concern that were consistent with those we identified during our review. DOL has monitored the adjudication of EEOICPA Part E claims primarily using its annual Accountability Reviews. In 2015, it also reviewed referrals to CMC and Second Opinion Medical Specialists, which focused on the use of physician opinions during the adjudication process to assist in the resolution of claims, and deemed both referral processes satisfactory. DOL conducts Accountability Reviews annually to evaluate its performance in processing and adjudicating EEOICPA claims under Parts B and E. According to DOL, the objective of the reviews is to provide management with an effective means to evaluate program performance and consider corrective action both program-wide and in individual district offices. Each year, DOL typically reviews five entities: two selected district offices, the two corresponding FABs co-located in these offices, and the national FAB in Washington, D.C. The claims subject to review are randomly selected from all claims adjudicated by these entities during the review period, generally a period of 365 days prior to the date of the review. The Accountability Reviews encompass the entire claims process, from the time a claim is filed to when benefits are paid. For example, in addition to claims adjudication, the reviews also look at data entry accuracy, post-decision actions, calculation of benefits, and payment processing. (See appendix II for additional information on the specific areas of focus for the Accountability Reviews for fiscal years 2010 through 2014.) While some aspects of adjudication are examined each year, each year’s review is also targeted to assess areas of the claims process in which there may be deficiencies based on input from policy personnel from the national office, district office and FAB representatives, and other DOL stakeholders. DOL officials told us that focusing on different areas from year to year allows the reviews to be targeted to areas needing attention while avoiding re-evaluation of areas that have already shown improvement. Shifting the focus also helps minimize the predictability of the review questions, though it precludes the ability to track trends over time. In addition, the sample size of claims in DOL’s review has also varied from year to year depending on the area of emphasis for that year. 
Because of the changing areas of focus and sample sizes, it was difficult for us to determine whether the percentage of errors has increased or decreased over time and, therefore, we did not make comparisons across years. Although DOL’s monitoring concluded that the claims adjudication process was generally working satisfactorily, DOL identified some recurring deficiencies among the elements it reviewed each year. However, many of these recurring deficiencies were deemed by DOL to be significant only for the Accountability Reviews conducted in fiscal years 2010 to 2012. The recurring deficiencies identified in all the Accountability Reviews were primarily in three components of the adjudication process: claims development, written quality of the Recommended Decision letter, and written quality of the Final Decision letter.

With regard to claims development, DOL found:
- Instances in which the claims examiner did not undertake full development of claimed employment, medical condition, and survivorship. For example, the examiner did not use all available resources to develop the claim, such as requesting additional information or clarification from claimants, former employers, the Social Security Administration, and other appropriate sources to help substantiate the claim.
- Development letters requesting additional information from claimants were not clear about the evidence needed. In some cases, the letters were lengthy and confusing or very broad, requesting a copy of all medical records.
- Lack of sufficient use of appropriate program resources to determine causation, such as referring the case to appropriate experts, requesting additional information from claimants, and properly using the SEM.

With regard to the Recommended Decision letter to claimants, DOL found:
- Many cover letters did not properly summarize medical conditions that were accepted or denied. This includes missing or incorrect accepted or denied conditions. For example, in one year both district offices reviewed had errors in over 60 percent of the Recommended Decision cover letters which were examined.
- Various other sections of the decision letter contained errors, inconsistencies, conflicting information, or excluded relevant information. For example, some letters contained incorrect identifying information, such as the claimant’s name, address, filing dates, and claimed medical condition. Other letters contained contradictory statements in different sections of the letter regarding what was being accepted and denied, or even the decision itself, or did not address all medical conditions, or explain how the evidence was evaluated to arrive at the decision.

For correspondence that communicated the Final Decision, DOL’s reviews of FAB found:
- Instances in which FAB did not summarize what conditions were being accepted or denied under Parts B or E in the cover letter.
- Various sections of the decision letter contained errors, conflicting information, or excluded relevant information. For example, one claimant provided a diagnosis for chronic bronchitis but the letter noted the diagnosis was insufficient to support the claimed condition without explaining why. Moreover, a subsequent section of the same letter stated that there was no diagnosis.

In 2015, DOL completed an audit to assess the quality of the process used by claims examiners to make referrals to certain physicians—CMCs and Second Opinion Medical Specialists. 
In making these referrals, DOL procedures noted that claims examiners are responsible for ensuring that all the necessary components of a referral are prepared accurately, the content of the referral is appropriate and specific to the issue under determination, and sufficient factual documentation is prepared so that the physician clearly understands the medical questions to be addressed. Furthermore, the procedures noted the referral should include a Statement of Accepted Facts, which summarizes the facts of the claim, such as the accepted medical conditions and potential toxic substance exposure encountered by the employee. DOL’s audit was designed to assess two main elements of the referral process:
- Quality of district office inputs: This element assessed the appropriateness of the claims examiner’s referral, the quality and completeness of the Statement of Accepted Facts, and the appropriateness of the questions posed by the claims examiner.
- Quality of the medical review and opinion: This element evaluated whether the physician’s written medical report was complete and appropriate, and assessed whether a physician’s response was well-rationalized and consistent with the totality of the evidence in the case.

With respect to CMC referrals, DOL concluded that the process was working satisfactorily and did not require a formal corrective action plan. However, DOL did identify four specific areas for improvement:
- More effort is needed to better interact with the claimant’s physician before proceeding with a CMC referral. Under DOL policy, the claims examiner is to seek the input of a treating physician before deciding to make a referral to a CMC. The audit uncovered instances in which CMC referrals were made without first properly interacting with the treating physician.
- Claims examiners need to undertake more development of exposure data to offer better explanations of the nature, extent, and duration of exposure in their referrals for causation. According to DOL, providing CMCs with better information on exposure will produce more probative and compelling medical causation outcomes.
- Claims examiners need further guidance on making proper referrals. There must be an “obvious defect” in case evidence to necessitate obtaining a medical opinion. According to DOL, when medical evidence clearly contains a diagnosis of a medical condition, for example, a CMC referral for diagnosis is unnecessary.
- Claims examiners need to evaluate the rationale presented by the CMC to ensure that it presents a clear, compelling, and medically substantiated position. According to DOL, a medical opinion based on a poorly justified medical analysis of the relevant evidence reduces the probative value of the opinion and reduces the likelihood that the program will be able to use the opinion.

With respect to referrals to Second Opinion Medical Specialists, DOL concluded that its process was also working satisfactorily. However, it recommended that these physicians be provided better guidance from the district office regarding the format of the specialist’s written medical report and rationale for their conclusion. DOL’s audit report also noted that because the Second Opinion physician’s report affects the outcome of a claim, it necessitates a more concise response with reasonable explanation of their rationale. 
DOL took steps to address the significant deficiencies identified in the Accountability Reviews from fiscal years 2010 through 2012, but determined that deficiencies in 2013 and 2014 were not significant enough to warrant corrective action. However, our review of DOL's monitoring indicates that the deficiencies have nonetheless persisted. To address the significant deficiencies found in Accountability Reviews conducted in years 2010 through 2012, DOL's corrective actions included providing office-wide training to claims staff on (1) properly written development letters, Recommended Decisions, and Final Decisions; (2) claims development and the use of appropriate resources in establishing exposure and causation; and (3) the need to clearly explain to claimants what evidence is needed to adjudicate their claim. Despite the additional training for claims examiners, these deficiencies have persisted. For example, in the Accountability Reviews conducted in fiscal years 2013 and 2014, DOL found similar problems with the quality of correspondence sent to claimants. DOL officials stated that because these problems did not reflect specific trends, they did not require a corrective action plan. Nonetheless, according to DOL, the managers of the district and FAB offices followed up on specific errors to ensure that training or other actions were taken as appropriate. DOL officials acknowledged that parts of the adjudication process remain challenging. According to officials, it is particularly challenging to establish toxic substance exposure and determine causation, which involves establishing a causal link between the claimed medical conditions and a known exposure to a toxic substance. Verifying employment is also difficult, partly because of the need to retrieve employment records from as far back as the early 1940s. Officials also said that errors or other issues with correspondence still occur, including listing the wrong medical condition in the decision cover letter. They added that while such errors are usually due to carelessness on the part of the claims examiner and typically have no impact on the decisions, deficiencies in any correspondence to the claimant may affect customer service as well as claimants' overall impression of the program. EEOICPA was enacted to compensate workers who carried out the nation's nuclear weapons production. These workers were often unaware of the extreme personal hazards they faced while serving their nation, and many became fully aware only when they were later stricken by illness. In light of this, it is imperative that their claims for compensation be given the due attention and care they deserve. Though we found DOL's EEOICPA Part E adjudication process was generally consistent with the steps outlined in its procedure manual, we identified the need for improvements in two areas, one of which DOL also identified through its own monitoring. First, all decisions are to be clearly written, but without additional actions to help identify and correct mistakes within claimant correspondence, such problems may persist and claimants may experience confusion or processing delays. Second, although claims examiners are required to check the SEM for updates just prior to issuing a Recommended Decision to deny a claim, they are only required to document this step if the check reveals that the SEM had been updated since the examiner's last check. 
This gap in required documentation hinders the ability to monitor, consistent with federal internal control standards, whether claims examiners are performing a final check of the SEM to ensure that their decisions are based on the most up-to-date information. Given the importance of this program, which serves so many who sacrificed for their nation, it is vital that DOL has controls in place to help ensure that compensation claims for workers and their survivors are being handled correctly. To enhance consistency with DOL policy and procedures in adjudicating EEOICPA Part E claims, we recommend that the Secretary of Labor strengthen internal controls by:

- requiring district offices to take steps to ensure that all claimant correspondence for Recommended and Final Decisions receives supervisory review; and
- requiring district offices to document that the SEM was checked for updates just prior to issuing a Recommended Decision to deny a claim in cases in which the date of the last SEM update has not changed since the claims examiner's prior check.

We provided a draft of this report to the Secretary of Labor for review and comment. DOL's comments are reproduced in full in appendix III. DOL also provided technical comments, which we have incorporated as appropriate. In its comments, DOL agreed with our recommendations and said that they will ultimately allow the agency to better fulfill its mission of making timely, appropriate, and accurate claims decisions. With regard to our recommendation to ensure all decisions receive supervisory review, DOL stated it will evaluate the current signatory process, work with its district offices to implement a second level review, and conduct an internal review following implementation. With regard to our recommendation to document that the SEM was checked for updates before issuing a decision to deny a claim, DOL stated it plans to implement this recommendation and will assess its options for capturing and documenting the final SEM search, such as in its electronic case management system or its digital imaging system. This report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. To assess the extent to which the Department of Labor (DOL) follows its adjudication procedures for Part E claims, we examined a stratified random sample of 200 claims, which is generalizable to all Part E claims in our sampling universe. We derived our sampling universe from Energy Employees Occupational Illness Compensation Program (EEOICP) data contained in DOL's electronic case management system. The data were the most recent available at the time we selected our sampling universe in March 2015. On the basis of our analysis of these data, and through discussions with DOL, we determined this data source was sufficiently reliable for the purposes of identifying a sampling universe. Our sampling universe consisted of 15,932 Part E claims filed within 5 calendar years, between 2010 and 2014. Because we wanted to determine if DOL processes Part E claims in accordance with key aspects of its adjudication procedures, we excluded certain types of claims that would not allow us to review all steps. 
Specifically, we excluded claims that were still in process and for which a Final Decision had not yet been issued. We also excluded claims that had already been accepted under Part B, since such claims are automatically accepted under Part E. Similarly, we excluded "Special Exposure Cohort" claims, which are associated with certain designated facility locations and do not undergo typical adjudication. Because the Part E sampling universe is based on recent calendar years and included only claims that received a Final Decision, the selected sample may exclude a greater number of claims from more current years and claims that take longer to adjudicate. From our sampling universe, we randomly selected 200 claims. These 200 claims were selected separately within each of two major groups. One major group contained all claims associated with three selected medical conditions—hearing loss, chronic beryllium disease, and Parkinson's disease—defining three substrata, and the second group contained the remaining universe of claims not associated with these three medical conditions. We selected the three particular medical conditions for the first group based on document reviews and interviews with DOL officials. We determined that although such claims comprise a relatively small portion of the sampling universe (about 9 percent), they are potentially more challenging to adjudicate. Because of the potentially unique nature of claims associated with these three medical conditions, and to ensure our file review included each of these claim types, we further divided the first group into three separate substrata, one for each specific medical condition (i.e., hearing loss, chronic beryllium disease, and Parkinson's disease), and then randomly selected claims for review within each substratum. We allocated the sample of 100 claims across these three substrata according to their relative frequency. Combining this subsample of claims associated with the selected medical conditions with the sample of general claims allowed us to address our objectives in a manner that maximized return on limited resources while still allowing for an independent and objective analysis of the entire universe of claims that met our selection criteria. Table 1 shows the breakdown of our stratified random sample. The way we selected our sample size for claims associated with specific medical conditions increased the likelihood of encountering these types of claims during our review (known as oversampling). However, we used sampling weights to ensure that our sample of 200 claims properly reflected the sampling universe. Our sample of claims reflected a variety of characteristics. For example, approximately two-thirds of the claims (68 percent) were employee claims, with the remaining third being survivor claims. Just under one-half (48 percent) of the claimants had applied for benefits for one medical condition and the rest (51 percent) had applied for benefits for multiple conditions. In fact, nearly a third (32 percent) of the claims were for three or more conditions. Across all 200 claims, the number of conditions being claimed totaled 427. In addition to the claims that were associated with the three specific medical conditions we selected, the sample contained claims for a wide variety of medical conditions, including cancer (21 percent of all conditions), chronic obstructive pulmonary disease (6 percent), and asthma (4 percent). 
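To illustrate the weighting step described above, the following sketch shows the standard design-weight calculation for a stratified sample with oversampling: each sampled claim carries a weight equal to its stratum's universe size divided by its stratum's sample size, and weighted sample counts are then summed to estimate a universe-wide rate. This is an illustrative sketch only, not GAO's analysis code; for brevity it collapses the three condition-specific substrata into a single stratum, and the stratum split and inconsistency counts shown are hypothetical (only the 15,932-claim universe, the 200-claim sample, and the approximately 9 percent share of selected-condition claims reflect figures from this appendix).

```python
# Minimal sketch of design weights for a stratified sample with oversampling.
# Stratum counts and "inconsistency" counts below are hypothetical illustrations.

strata = {
    # stratum name: (claims in sampling universe, claims sampled)
    "selected_conditions": (1_434, 100),   # roughly 9 percent of the universe (illustrative split)
    "all_other_claims":    (14_498, 100),  # remainder of the 15,932-claim universe
}

# Design weight: each sampled claim stands in for (universe size / sample size) claims.
weights = {name: universe / sample for name, (universe, sample) in strata.items()}

# Hypothetical per-stratum counts of claims found procedurally inconsistent.
found = {"selected_conditions": 12, "all_other_claims": 9}

# Weighted estimate of the universe-wide inconsistency rate.
weighted_total = sum(found[name] * weights[name] for name in strata)
universe_size = sum(universe for universe, _ in strata.values())
estimated_rate = weighted_total / universe_size

print(f"Design weights: {weights}")
print(f"Estimated universe-wide inconsistency rate: {estimated_rate:.1%}")
```

Because the selected-condition group is oversampled relative to its share of the universe, its claims receive smaller weights than claims in the larger general group, which is what keeps the combined estimate representative of the full universe.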
Based on the results of our file review and the use of a weighted sample, we were able to provide generalizable results for our sampling universe of Part E claims. We were also able to provide generalizable results within the two major groups—claims based on selected medical conditions and claims based on all other conditions—as well as make generalizable comparisons between the two groups. Because of the relatively small substratum sample sizes, we cannot generalize results for claims involving any of the selected medical conditions—hearing loss, chronic beryllium disease, or Parkinson's disease. Because we used a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample as a 95 percent confidence interval. For each claim in our sample, we reviewed the case file materials and documented whether the claims examiners followed DOL's EEOICPA Part E claim adjudication procedures from development to the Recommended and Final Decisions. We reviewed claim files at all four DOL district offices. Each office gave us access to the claim files in our sample that had been adjudicated at that district office. Most of the claims were in paper form, a few recent claims had been scanned into electronic format, and some were a combination of paper and electronic formats. To document our review, we developed and used an electronic data collection instrument (DCI) that consisted of questions on the many individual claim development steps outlined in DOL's current Procedure Manual. We evaluated older claims based on the policies and procedures that were in effect at that time. Our review focused only on procedural steps required by the Procedure Manual, and we did not attempt to evaluate the accuracy of the Recommended or Final Decisions or the scientific and medical judgments used to support them. For each adjudication step noted in the DCI, the reviewer documented whether and how the step was taken, and each reviewer's DCI responses were checked and verified by another reviewer. Upon analyzing our DCI data, we developed a list of 44 claims that we flagged as being potentially inconsistent. DOL reviewed our list and agreed with our findings for many claims. DOL also provided an explanation or clarification regarding other claims that justified their removal from the list. As a result of this verification step, we removed 11 claims from our list, leaving 33 that we deemed inconsistent. To determine how new links between toxic substances and diseases are captured and applied in the adjudication process, we reviewed adjudication guidance, including DOL's EEOICPA Procedure Manual, Circulars, and Bulletins. In all, we reviewed 80 Circulars and 147 Bulletins and determined their relevance to our objective using two criteria: (1) those that pertained to the development and adjudication of Part E claims, and (2) those that provided guidance related to new links between toxic substances or radiological exposure and diseases, or that provided guidance on whether a facility was covered under EEOICPA Part E. In addition, we interviewed DOL officials and SEM contractor staff to obtain an understanding about how the SEM was created, what it contains, and DOL's process for incorporating newly identified links into the SEM and making the information available to claims examiners and the public. 
We also asked about how the SEM has changed over time and what efforts DOL has taken to improve the database. We reviewed relevant reports, including past EEOICP Ombudsman's reports, a report from the Institute of Medicine, and reports from advocacy groups. To understand what DOL's monitoring efforts indicated about the Part E adjudication process, we reviewed findings from DOL's annual Accountability Reviews and other audits. Specifically, we reviewed DOL's Accountability Review results from 2010 through 2014—corresponding to the 5 calendar years of our file review—and focused on the procedures that were within our scope. The procedures encompassed the steps involved in claim development, adjudication, the Recommended Decision, and the Final Decision. We reviewed the procedure manual issued by the Office of Workers' Compensation Programs for planning and conducting Accountability Reviews, as well as handbooks, manuals, and worksheets developed by its Division of Energy Employees Occupational Illness Compensation (DEEOIC) that are specific to EEOICPA Accountability Reviews. We also reviewed the Office of Workers' Compensation Programs Accountability Review Procedure Manual to obtain an understanding of DOL's methodology for sample selection, accountability review team selection, and reporting methods. We reviewed the DEEOIC handbooks and other guidance to identify the areas of focus for each year's review, the specific questions used for each review, and scoring criteria. We also reviewed DOL's Contractor Medical Consultant and Second Opinion Medical Specialist referral audit completed in 2015 to obtain an understanding of the procedures examined during this audit and its results. Throughout the course of our work, we compared the steps taken during DOL's adjudication of Part E claims against federal standards for internal control. In addition, we reviewed available corrective action plans and interviewed agency officials to learn more about their monitoring of EEOICPA Part E and the corrective actions taken as a result of the findings. Our interviews specifically focused on the methodologies DOL used to conduct its monitoring. We also asked for information on the specific corrective actions DOL has taken to address program deficiencies identified during these internal audits. Finally, we reviewed applicable federal laws and regulations. We conducted this performance audit from May 2014 through February 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. [Table: elements examined in the Accountability Reviews, including basic development (including Part E), Part B causation development, Part E causation development and causation assessment, impairment development, wage loss development, Recommended Decision outcome and written quality, decision outcome notification, claim assessment and narrative explanation, factual findings of the claim, and reopening requests (the last applicable only in fiscal year 2013).] In addition to the contact named above, Meeta Engle (Assistant Director), Susan Chin, Karen Granberg, David Perkins, Cody Raysinger, Walter Vance, and Sonya Vartivarian made significant contributions to this report. 
Also contributing to this report were Lucas Alvarez, Susan Baker, Marcia Crosse, Jennifer Cook, Holly Dye, Alexander Galuten, Kathy Leslie, Diane Lofaro, Mimi Nguyen, and Minette Richardson.

Energy Employees Compensation: Additional Independent Oversight and Transparency Would Improve Program's Credibility. GAO-10-302. Washington, D.C.: March 22, 2010.

Energy Employees Compensation: Actions to Promote Contract Oversight, Transparency of Labor's Involvement, and Independence of Advisory Board Could Strengthen Program. GAO-08-4. Washington, D.C.: October 26, 2007.

Energy Employees Compensation: GAO's Prior Work Has Identified Needed Improvements in Various Aspects of the Program. GAO-07-233T. Washington, D.C.: December 5, 2006.

Energy Employees Compensation: Adjustments Made to Contracted Review Process, but Additional Oversight and Planning Would Aid the Advisory Board in Meeting Its Statutory Responsibilities. GAO-06-177. Washington, D.C.: February 10, 2006.
EEOICPA was enacted in 2000 to compensate employees and contractors of the Department of Energy whose illnesses are linked to their work in the nuclear weapons industry. Part E of the Act, enacted in 2004, compensates these contractor and subcontractor workers, or their eligible survivors, for medical expenses, impairments, and lost wages up to $250,000. GAO was asked to review DOL's management of this program. GAO examined (1) the extent to which DOL follows its procedures to adjudicate Part E claims, (2) how DOL captures new links between toxic substances and diseases and applies them to adjudication, and (3) what DOL's monitoring indicates about the adjudication process and whether any corrective actions have been taken to address identified problems. GAO reviewed a generalizable stratified random sample of 200 Part E claims filed from 2010 through 2014; reviewed applicable federal laws, regulations, guidance, internal audit reports, and other agency documentation associated with internal monitoring; and interviewed DOL officials. The Department of Labor's (DOL) adjudication process for compensating Department of Energy contract workers or their survivors for illnesses linked to work in the nuclear weapons industry generally follows guidance and procedures implementing Part E of the Energy Employees Occupational Illness Compensation Program Act of 2000 (EEOICPA). Although GAO's analysis of a generalizable sample of 200 claims filed by workers from 2010 through 2014 found the adjudication process generally followed DOL's guidance and procedures, GAO identified some inconsistencies in an estimated 10 percent of the claims, including errors in correspondence to claimants and in development of claims. The procedure manual stipulates that written decisions should clearly convey information that led to the decision and that decisions are to be reviewed by the appropriate signatory. GAO found that decisions sometimes contained inaccurate, conflicting, or incomplete information, such as listing the wrong medical condition. DOL also did not always run accurate searches of its Site Exposure Matrices (SEM)—an online electronic database of facilities, toxic substances, and associated illnesses—when processing claims, or responding to requests for reopening claims. In addition, GAO found that supervisory review is at the discretion of each district office and, as a result, recommended decisions on claims were not always reviewed. This may increase the likelihood of poorly written decisions, which is inconsistent with procedures and which, in turn, increases the potential for claimant confusion and delays in adjudication. DOL uses the SEM to, among other things, document newly identified causal links between toxins and diseases on the basis of medical research. According to DOL officials, since 2006 the number of such links listed in the SEM has increased from about 300 to over 3,000. They said that due to the large volume of information updates, DOL provides limited notification to claims examiners and the public when they occur. It has issued 10 notices specifically on new links since 2006. Therefore, it is usually incumbent on claims examiners and claimants to make themselves aware of new links by continuously checking the SEM for updates. As a result, new links are applied to claims largely to the extent these checks are performed. 
However, claims examiners are not always required to document that they checked whether the SEM had been updated prior to issuing a recommended decision to deny a claim. This gap in documentation hinders DOL's ability to monitor program performance, consistent with federal internal control standards. According to DOL's monitoring, its process for adjudicating Part E claims is working satisfactorily, but persistent deficiencies remain. DOL conducted reviews from fiscal years 2010 through 2014 based on random sampling and found that the process for adjudicating claims met DOL's acceptability standards in any given year. Nonetheless, DOL consistently found deficiencies in certain adjudication steps across all years, including insufficient use of program resources to fully develop claims and improperly written decisions, as GAO also identified in its claim file review. DOL took corrective actions, such as training for claims examiners, to address deficiencies in 2010 through 2012, but determined that corrective actions were not warranted in 2013 and 2014. GAO recommends that DOL take steps to ensure all decision letters receive supervisory review, and require that claims examiners document that they checked whether the SEM had been updated just prior to issuing a decision to deny a claim. DOL agreed with the recommendations and indicated it would take steps to implement them.
Prior to recent congressional deliberations on the Navy's fiscal year 1995 budget, the Navy planned to spend over $2.5 billion to add limited ground attack capability and other improvements to 210 F-14 Tomcat fighter aircraft (53 F-14Ds, 81 F-14Bs, and 76 F-14As). According to the Navy, the ground attack capabilities were required to partially compensate for the loss in combat capabilities during the period starting in 1997, when all of its A-6E Intruder attack aircraft are scheduled to be retired, to the turn of the century, when the F/A-18E/F, the next-generation strike fighter, is scheduled to arrive. The F-14 was to undergo two upgrades. An initial upgrade, commonly called the A/B upgrade, included structural modifications to extend the F-14's fatigue life to 7,500 hours, improved defensive capabilities and cockpit displays, and incorporation of digital architecture and mission computers to speed data processing time and add software capacity. The A/B upgrade had to be incorporated into 157 F-14 aircraft before the second upgrade, called the Block I, could be added. Block I was to add a Forward-Looking Infrared (FLIR) pod with a built-in laser to designate targets and allow F-14s to independently drop laser guided bombs (LGBs), a modified cockpit for night attack operations (night vision devices and compatible lighting), and enhanced defensive countermeasures. Concerned about the Navy's capability to maintain carrier-based power projection without A-6Es and with only limited F-14 upgrades, the Joint Conference Committee on the fiscal year 1994 Defense Authorization Act directed the Navy to add an F-15E equivalent capability to its F-14D aircraft, including the capability to use modern air-to-ground stand-off weapons. The act restricted the obligation of fiscal year 1994 F-14 procurement funds until 30 days after the Navy submitted a report outlining its plans to add more robust ground attack capability. The report, submitted on May 20, 1994, reiterated the Navy's intent to add only the A/B and Block I upgrades. During recent fiscal year 1995 deliberations, the defense authorization act conferees eliminated funding for F-14 Block I ground attack upgrades, authorizing funds for only the A/B structural and survivability modifications. In a subsequent similar action, defense appropriation act conferees did not appropriate funds for the Block I upgrades. The Navy eliminated the Block I ground attack upgrade from its Program Objectives Memorandum. However, Navy officials continue to believe a ground attack upgrade is necessary. A final decision on the extent of the upgrade depends upon the results of a cost and operational effectiveness analysis (COEA) and an acquisition milestone decision scheduled for the first quarter of fiscal year 1995. In a related response to congressional direction to add more robust capability to the F-14, beyond that mentioned above, the Navy estimated it would cost $1.8 billion to add F-15E-equivalent capability to 53 F-14Ds and another $9 billion to upgrade 198 F-14A/Bs. According to the Navy, an upgrade of that magnitude was not affordable. Most F-14s, even after receiving the Block I upgrade, will lack some important capabilities that the F/A-18C currently has or will gain in the near future. The absence of these capabilities could limit the combat effectiveness and utilization of the F-14 under some adverse conditions. The Block I upgrade will permit F-14s to drop LGBs, which are more accurate than unguided gravity bombs. 
But the usefulness of laser targeting is limited when targets are obscured by clouds, smoke, haze, and moisture that prevent laser beams from illuminating and marking the targets and from providing a clear path for the bomb guidance system to follow. Thus, to assist crews in locating and identifying targets, attack aircraft need synthetic aperture radar with ground mapping capability. The F-14A/B models’ AWG-9 radar is one of the most powerful U.S. military aircraft radars for detecting multiple air targets approaching at long range, but it is not ideally suited to pinpointing ground targets under some conditions. For example, it does not provide a ground mapping capability that permits crews to locate and attack targets in adverse weather and poor visibility or to precisely update the aircraft’s location relative to targets during the approach, a capability that improves bombing accuracy. Only the 53 F-14Ds, with their improved APG-71 synthetic aperture ground mapping radar, will have this capability. The 157 F-14A/Bs in the Block I program, lacking the APG-71 radar, will not be as effective in locating, identifying, and attacking targets, except in daylight and clear visibility conditions. F/A-18Cs, which have synthetic aperture ground mapping radar with a doppler beam sharpening mode to generate ground maps, have greater capability, and they will get even more precise and clear radar displays when they receive the APG-73 radar upgrade later this decade. New production F/A-18Cs are scheduled to receive APG-73 radars later in 1994. The Navy, in a COEA summary dated May 1992 comparing the F/A-18 to various alternatives, wrote that “a strike fighter should be capable of effectively employing all Navy strike and fighter weapons in the inventory and under development.” However, the Block I upgrade will not add any weapon capability new to the F-14, except the ability to independently drop LGBs. No Block I F-14s will be able to launch precision stand-off attack weapons such as the High-speed Anti-Radiation Missile (HARM), Harpoon antiship missile, Maverick anti-armor missile, Walleye guided bomb, and Stand-off Land Attack Missile (SLAM). F/A-18Cs and A-6Es can. Block I aircraft will not be able to employ future precision stand-off weapons, including the Joint Direct Attack Munition (JDAM) and the Joint Stand Off Weapon (JSOW). F/A-18Cs will. The Navy does plan to add the capability to launch the Advanced Medium Range Air-to-Air Missile (AMRAAM) to F-14Ds when their computer software is updated. (AMRAAM is the Defense Department’s newest air-to-air missile.) The Navy has stated that it cannot afford to add stand-off weapon capability to other F-14s. Currently, F/A-18Cs have AMRAAM capability. Table 1 shows the weapons carried by F-14s and F/A-18Cs. In defending the F-14 upgrade, Navy officials said F-14s have a combat range and/or endurance approaching that of the A-6E, which is considerably longer than the F/A-18. While range (distance) and endurance (loiter time in the target area) are important capabilities, they are not as critical in littoral warfare, when carriers may operate close to shore. Operating close to the shore decreases the distance to targets and increases the amount of loiter time the aircraft has at or near the target. The Secretary of the Navy, in the 1994 Posture Statement, stated that 85 percent of the Navy’s potential targets are within 200 miles of the world’s shorelines. 
Although the F-14 generally has greater range and endurance than the F/A-18C, the majority of littoral targets should be within the F/A-18C's range, even with an aircraft carrier operating 100 miles or more offshore. The Navy's Atlantic Fleet officials told us that F/A-18Cs carrying four 1,000-pound bombs and external fuel tanks have an unrefueled mission radius of about 340 miles. Future F/A-18Es are projected to carry the same weapon load up to 520 miles without refueling. While the longer range F-14s could potentially reach the 15 percent of the targets beyond 200 miles of shorelines, alternatives are available. The Navy's Tomahawk cruise missile can strike fixed targets up to a range of about 700 miles. Air Force bombers, with mid-air refueling, have an even greater range. If aerial refueling is available, as should be the case with U.S. forces operating jointly, an aircraft's range, including the F/A-18's, can be extended significantly. The Block I F-14 aircraft will not have all of the capability of the Air Force's F-15E Strike Eagle (a long-range, all-weather, multimission strike fighter with precision weapons capability), the Navy's own F/A-18C Hornet, or its A-6E Intruder (see table 2). F-14A/Bs can drop most unguided bombs, including 500-, 1,000-, and 2,000-pound gravity bombs, as well as cluster munitions. They can also drop LGBs if another aircraft marks the target with a laser beam. Block I will add the capability to independently drop LGBs without external assistance. F-14A/B aircraft will not have a radar ground mapping capability to assist crews in locating, identifying, and attacking targets when visibility is poor. No F-14s, including the D model, will be able to launch precision stand-off weapons, and none will have all-weather terrain following capability. Although the Navy justified the F-14 upgrade as necessary to fill the gap between A-6E retirements and delivery of F/A-18E/Fs, no F-14s, under the original Block I plan, were scheduled to begin receiving upgrades until fiscal year 1998, a year after the last A-6Es are scheduled to be retired. The Navy plans to procure F/A-18E/F aircraft starting in fiscal year 1997 and expects the aircraft to enter service in the year 2000. In the interim, two carrier air wings have retired their A-6Es, and these air wings will operate for 5 years, at a minimum, before the first upgraded F-14s are delivered in 1999. The USS Constellation is scheduled to deploy late in 1994, without A-6Es. Its F-14Ds cannot drop bombs because they lack the necessary computer software. The first carrier air wing equipped with Block I F-14s will not deploy until fiscal year 1999 or 2000. The last F-14s will not complete the upgrade until fiscal year 2003. By that time, if not earlier, the Navy should start receiving squadrons of F/A-18E/Fs to replace F-14s and older F/A-18s. As the Navy eliminates A-6Es from carrier air wings, it plans to add a third squadron of F/A-18s to each wing, increasing the number of F/A-18s in each air wing from 20 to 36. The Navy also plans to eliminate one F-14 squadron from each air wing, reducing the number from 20 to 14 planes. Two air wings, including the USS Constellation's, will receive this modified air wing mix in fiscal year 1994. Two more air wings are expected to change their aircraft mix in fiscal year 1995, and three more wings are expected to change in each of fiscal years 1996 and 1997, until the configuration of all 10 active air wings is changed. 
As noted earlier, most F-14s, even after undergoing the Block I upgrade, will lack some important capabilities that the F/A-18C has or will gain in the near future. The absence of these capabilities could limit the F-14's combat effectiveness and utilization under some adverse conditions. This view is supported by an April 1992 Navy COEA summary, which compared the F/A-18 to various alternatives, including an upgraded F-14D called Quick Strike. This version was to have more capability than is planned for Block I. The analysis concluded that the F-14 Quick Strike was a less capable strike aircraft than the F/A-18C. Because the Navy faces an uncertain budget environment and system affordability concerns, and because planned F-14 upgrades offer little or no improvement over current capabilities and may not be fielded before F/A-18E/Fs are delivered, the upgrades do not appear to be cost-effective. Current Navy plans will not provide F-14s with F-15E-equivalent capabilities. If the Congress wishes to add these capabilities, Navy estimates show that it will cost much more. Therefore, the Congress may wish to defer authorizing or appropriating additional monies for the F-14 until the Navy can demonstrate that planned upgrades are essential when considering (1) the current F/A-18C capabilities; (2) the net weapon capability gain over current F-14A/B levels; (3) the absence of a ground attack radar in 157 of the 210 aircraft; (4) the lack of precision stand-off weapons capability in all 210 F-14 aircraft, which limits the versatility and use of these aircraft in combat; (5) the nearly simultaneous delivery of upgraded F-14s and F/A-18E/Fs; and (6) the Navy's willingness to deploy carriers without A-6Es or upgraded F-14s, as evidenced by the upcoming deployment of the USS Constellation. Navy officials, commenting on a draft of this report, defended the F-14 upgrade as necessary, even though they were aware that the Block I ground attack upgrade capability had been eliminated from the Navy's budget by the House and Senate defense authorization conferees and from the Navy's 1996 Program Objectives Memorandum. Navy officials said the upgrade was only eliminated from the Program Objectives Memorandum for the present. They defended the need for this upgrade, which is one of several possible upgrades being considered in an ongoing COEA. The Navy could resubmit the ground attack upgrade in a future budget. However, if this upgrade is delayed, it is likely that new F/A-18E/Fs will be deployed before upgraded F-14s enter the fleet, making a need based on capability more questionable. Navy officials said the key issue discussed in our report is not whether planned F-14 upgrades duplicate strike capabilities available in the Navy as well as in the other services, as suggested by us, but rather the contribution these aircraft would make to the capability of each carrier air wing. Commenting on the Navy's willingness to immediately deploy carriers without A-6Es, relying completely on F/A-18s for its strike capability, Navy officials said this decision is a reflection of affordability constraints, not a willingness to forgo the capability. We agree that affordability is part of the issue. Affordability provided the impetus for the Navy to set priorities. In setting its priorities, the Navy eliminated the F-14 upgrade from its Program Objectives Memorandum, which was a clear admission that the Navy weighed its needs and found it had more important priorities. 
Our data gathering and analysis focused on the Navy's decision to upgrade 210 F-14 aircraft. We interviewed officials and reviewed documents from the Office of the Chief of Naval Operations (Director for Air Warfare); the Naval Air Systems Command; and Headquarters, U.S. Air Force, in Washington, D.C. We also interviewed personnel at the U.S. Naval Air Forces, Atlantic Fleet and Pacific Fleet; Headquarters, U.S. Air Force Air Combat Command; the Naval Strike Warfare Center, Naval Air Station, Fallon, Nevada; Carrier Air Wings Two and Fifteen at Naval Air Station, North Island, California, and Naval Air Station, Miramar, California; and Hughes Aircraft Company, Los Angeles, California. We conducted our review between June 1993 and May 1994 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense, the Navy, and the Air Force; the Director, Office of Management and Budget; and the Chairman, Commission on Roles and Missions of the Armed Forces. Please contact me at (202) 512-3504 if you or your staff have any questions concerning this report. The major contributors to this report are William C. Meredith, Kenneth W. Newell, and Frances W. Scott. Richard Davis, Director, National Security Analysis
GAO reviewed the Navy's decision to spend about $2.5 billion between fiscal years 1994 and 2003 for a limited ground attack upgrade and other modifications to about 200 F-14 Tomcat fighters. GAO found that: (1) although the Navy has justified F-14 attack upgrades as necessary to replace the loss of A-6E aircraft, most upgraded F-14 aircraft will be less capable than F/A-18C aircraft; (2) although upgraded F-14 aircraft have greater range than F/A-18C aircraft, this capability may not be needed, since the Navy's shift to a littoral warfare strategy will bring the Navy's targets within range of the F/A-18C and other weapons systems; (3) delivery of upgraded F-14 aircraft is not scheduled to begin until after the A-6E fleet is retired, even though the Navy stated that the aircraft are needed to fill a gap between A-6E retirement and the introduction of the F/A-18E/F aircraft; (4) for several years, the Navy will deploy carriers without A-6E or upgraded F-14 aircraft and will rely on its F/A-18C fleet for all attack missions; and (5) the Navy has not adequately justified its $2.5-billion plan to upgrade the F-14 fleet.
The F/A-22 is the Air Force’s next-generation air superiority fighter aircraft and incorporates a low observable (stealth) and highly maneuverable airframe, advanced integrated avionics, and a new engine capable of sustained supersonic flight without the use of afterburners. It was originally designed to counter threats posed by the Soviet Union and was intended to replace the F-15 fighter in the air-to-air combat role. Over the years, the Air Force decided to add a more robust air-to-ground capability not previously envisioned but now considered necessary to increase the utility of the aircraft. In 2002, the F-22 aircraft was redesignated the F/A-22, with the “A” representing the expanded ground attack capabilities. Officials initiated a modernization program to develop and integrate these new capabilities. The F-22 acquisition program started in 1986 with an intended development period of 9 years and an initial operational capability in March 1996. The Air Force’s plan at that time was to procure 750 aircraft. In the years since, the original business case has been severely weakened as threats, missions, and requirements have changed. Further, the program milestones have slipped, the development period lengthened to more than 19 years, development costs more than doubled, and a modernization program was added. The initial operational capability date is now December 2005. Amidst concerns about escalating costs and schedule, Congress placed cost limitations on both development and production budgets in 1997, later removing the development cost cap. (The current production cost cap is $37.3 billion.) Concomitantly, the planned procurement quantity has steadily decreased due to affordability concerns and changes in missions and combat requirements. Two major reviews of defense force structure and acquisition plans, the 1993 Bottom-Up Review and the 1997 Quadrennial Defense Review, both significantly reduced F/A-22 quantities. In addition, OSD’s “buy to budget” acquisition strategy, which essentially placed a ceiling on the total program costs, has resulted in further cuts to quantity as development cost increased. In December 2004, OSD issued Program Budget Decision 753, which reduced F/A-22 funding by $10.5 billion and cut 96 aircraft from the planned procurement quantity. The decision ends procurement in 2008, instead of 2011, and would reduce total procurement quantity to 178 aircraft. Figure 1 illustrates the downward trend in procurement quantity over the years juxtaposed with a rise in program acquisition unit costs, which has resulted in a significant loss in buying power. Program acquisition unit costs have increased largely due to (1) increased development and production costs; (2) decreased procurement quantities; and (3) increased costs to modernize and enhance capability. The current plan supporting the fiscal year 2006 defense budget request submitted in February 2005 is to acquire 178 aircraft for about $63.8 billion. Appendix II illustrates other changes in cost, quantity, and schedule experienced by the program since its commencement. Our March 2004 report discussed the significant changes in cost, quantity, capabilities, and mission since F-22 development began in 1986. We reported continued problems and delays in the development and testing schedules. We recommended that DOD complete a new business case that justifies the continued need for the F/A-22 and the quantities needed to carry out the air-to-air versus air-to-ground missions. 
The business case was to also consider alternatives within the constraints of future defense spending. Later, in testimony, we stated that there are competing priorities both internal and external to DOD's budget that require a sound and sustainable business case for DOD's acquisition programs based on clear priorities, comprehensive needs assessments, and a thorough analysis of available resources. DOD partially concurred with our report recommendation but did not prepare a new business case, stating that it evaluates the F/A-22 business case elements as part of the routine acquisition and budget processes with the results reflected in the defense budget. We did not think these kinds of activities sufficiently analyzed and addressed the specific business case elements—analysis of need for original and new capabilities, assessment of alternatives, justification of needed quantities, and evidence that planned quantities were affordable. The Air Force, in the face of significant changes to the F/A-22, has not prepared a new business case to justify the resources needed to add a much more robust ground attack capability and to assume new missions. The requirements for the F/A-22 have changed significantly since its original business case and the available resources are in flux—both are key components of a business case needed to support further investments. A December 2004 budget decision reduced procurement funding and quantities but did not cut funding for modernization. This has made the current modernization plan obsolete because key ground attack and intelligence gathering enhancements had been slated for aircraft that have now been eliminated from the F/A-22 procurement program. While its total cost is not clear at this time and program content is subject to change, in 2003 OSD cost analysts estimated the modernization program would cost about $11.7 billion. The Air Force embarked on the expensive and wide-ranging modernization program without a new business case to support investments of billions of dollars to develop and deliver new capabilities and missions. The modernized F/A-22 would differ so significantly from the original aircraft in capabilities and missions that it should have been developed in an entirely separate acquisition program. Instead, the Air Force opted to incorporate modernization efforts within the existing acquisition program. A business case should match requirements with resources—proven technologies, sufficient engineering capabilities, time, and funding—when undertaking a new product development. First, the user's needs must be accurately defined, alternative approaches to satisfying these needs properly analyzed, and quantities needed for the chosen system well understood. The developed product must be producible at a cost that matches the users' expectations and budgetary resources. Finally, the developer must have the resources to design the product with the features that the customer wants and to deliver it when it is needed. If the financial, material, and intellectual resources to develop the product are not available, a program incurs substantial risk in moving forward. The original business case for the F/A-22 was made in 1986 to support acquiring large quantities of air superiority fighters to engage in conventional warfare and counter Cold War-era threats. These threats never materialized as expected. 
Because the program was in development for over 19 years, tactical fighter requirements, projected threats, and operational war plans changed. To enhance the utility of the F/A-22 for today and the future, the Air Force now plans to develop a robust air-to-ground attack capability to allow the aircraft to engage a greater variety of ground targets, such as surface-to-air missile systems, that pose a significant threat to U.S. aircraft. It also plans to equip most of the F/A-22 fleet with improved capabilities to satisfy expanded warfighter requirements and to take on new missions, including intelligence data gathering and the suppression of enemy air defenses and interdiction. The Air Force established a time-phased modernization program to develop and insert new capabilities required to implement the Air Force's Global Strike concept of operations. Table 1 shows how the Air Force intended to integrate new capabilities incrementally before the December 2004 budget decision reduced quantities by 96 aircraft. At the time of our review, officials were still determining the impacts of the budget decision on the modernization program content and quantities. Initial development work on modernization enhancements started in 2003 and is planned to extend over a 12-year period, with the first set of new capabilities inserted into the production line in fiscal year 2007. By the end of development in 2015, the Air Force plans to have three different configurations (or blocks) of F/A-22s, each with distinct operational capabilities. Based on the current modernization road map, the Global Strike Basic configuration (block 20) will include 56 F/A-22s built primarily to perform air-to-air missions but with limited air-to-ground capability. The Global Strike Enhanced configuration (block 30) includes 91 aircraft that will perform the bulk of air-to-ground and electronic attack missions using advanced radars to track targets and small diameter bombs to destroy them. Block 40 encompasses both the Global Strike Full and Enhanced Intelligence, Surveillance, and Reconnaissance increments. This configuration of 128 aircraft is expected to perform such missions as suppression of enemy air defenses and gathering up-to-date information on potential adversaries' locations, resources, and personnel to improve target identification and increase kill capabilities. According to program officials, these latter two increments are still conceptual in nature and subject to revision. The modernization program as currently planned is much in doubt because of the recent budget cut and the likely prospects for more changes. The instability in F/A-22 resources and upcoming DOD-wide reviews of capabilities and requirements may result in further revisions and cutbacks, further affecting modernization plans. Budget and programmatic decisions also cause ripple effects on other resource plans tied to the modernization, which may open up budgeted funds for other uses. In March 2003, OSD's Cost Analysis Improvement Group (CAIG) estimated that the Air Force would need $11.7 billion for the planned modernization programs through fiscal year 2018. The CAIG estimate included costs for development, procurement, and retrofit of modernized aircraft. The Air Force's latest estimated cost for the modernization program is about $5.4 billion through 2011. Future modernization costs beyond 2011 have not been definitized and are subject to change. 
The modernization program manager projected that annual funding of $700 million to $750 million would be needed for the currently planned modernization program after 2011. The December 2004 budget decision places much of the modernization program in doubt, particularly the latter stages. OSD substantially reduced the F/A-22 budget, which will require another strategy for the modernization program. It reduced F/A-22 funding by $10.5 billion, stopped procurement of aircraft after 2008, and reduced the quantity by 96 aircraft. This and other events will reduce the Air Force's expected buy to no more than 178 aircraft. While the OSD funding decision changed the baseline F/A-22 program, it did not change the planned funding for the modernization program to add advanced ground attack and intelligence gathering capabilities between 2007 and 2015. However, many of these new and advanced capabilities had been planned for aircraft that will not be built because the budget eliminates F/A-22 aircraft that had been planned for production after 2008. Air Force officials told us they hope to reverse these changes, but officials acknowledge that a major restructuring is likely if the proposed funding cuts are sustained. If the budget cut is sustained, the modernization program as currently planned is largely obsolete, and funding for these advanced capabilities to be incorporated after 2008 would be available for other uses. This could include up to $1.2 billion now budgeted for the start-up of the latter modernization increments. The Air Force's desire to upgrade the F/A-22's computer architecture and avionics processors in order to support the block 40 expanded capabilities may also be affected by the recent budget cut. Program officials do not expect the new architecture to be fully developed and ready for installation in the F/A-22 until fiscal year 2010. However, early indications show that the effort to upgrade the computer architecture—expected to cost between $400 million and $500 million—is already experiencing schedule problems and increased risks. As a result, the 2010 insertion date may not be achievable as planned for the F/A-22. Furthermore, DOD's proposed termination of procurement after 2008 raises questions about the need to proceed with the planned computer upgrade. The existing processors, with some minor upgrades, would support up to 155 aircraft and most Global Strike Enhanced capabilities. Additionally, since our March 2004 report, the program office has identified new requirements needed to implement the modernization program. The F/A-22 program office has concluded that the F/A-22 infrastructure, including government laboratories such as software avionics integration labs, flying test beds, and test ranges, needs to be upgraded to ensure a successful modernization program. According to program officials, the existing facilities have major resource and capacity limitations and are inadequate to support needed software integration activities and system performance and operational testing for most planned enhancements. The program office has budgeted about $1.8 billion through fiscal year 2009 for the infrastructure upgrades, including funds for engineering and maintenance personnel support. According to program officials, the current infrastructure limitations have caused some modernization efforts to be deferred to later blocks. If modernization plans are curtailed, some infrastructure improvements may not be needed. 
Even if funding were restored to the F/A-22 program and the above-mentioned concerns were resolved, previous funding shortfalls and schedule slippages have already resulted in planned capabilities being deferred to later years. For example, block 20 enhancements required to conduct autonomous search and improve target recognition have been deferred to block 30. Similarly, funding problems have caused the Air Force to scale back some efforts and delay development of block 30 electronic attack and small diameter bomb enhancements. In November 2004, the Defense Contract Management Agency reported that the contractor proposes to reduce the amount of planned tasks, defer development of software specifications, and incrementally develop a key communication component in order to meet an April 2005 system design review. DOD officials stated that they believe the budget cut has some diseconomies that may result in procuring even fewer than 178 aircraft. They said that stopping aircraft production early affects production economies and efficiencies that were expected from a multiyear procurement contract and from production line efficiencies. The multiyear contract was to begin in fiscal year 2008, the year procurement is now curtailed by the budget decision. Now that opportunity is gone. Officials also said that cutting production quantities from the final years of the program eliminates projected savings in annual unit procurement costs. Typical of many DOD acquisitions, Air Force program officials had projected future budgets assuming that the marginal costs for buying F/A-22s would decrease with each passing year of production as a result of manufacturing efficiencies, productivity projects, and more economical buying quantities. This means that aircraft bought late in the production program usually cost less than those bought earlier in the program. For example, the average unit flyaway cost paid for F/A-22s was $212 million per aircraft bought in 2002 and $178 million in 2003. Before the budget decision, officials had projected average unit flyaway costs to decrease to $127 million, $111 million, and $108 million in fiscal years 2007, 2008, and 2009, respectively. Now that the program has been truncated after 2008, the less expensive aircraft in 2009 and beyond will not be bought, and unit costs are now projected at $135 million in 2007 and $149 million in 2008 (the increase is associated with the close-out of production). OSD has directed that the 2005 Quadrennial Defense Review include an assessment of joint air dominance in future warfare and the contributions provided by all tactical aircraft, including the F/A-22. An announced defense goal is to redirect investment from areas of conventional warfare, where the United States enjoys a strong combat advantage, toward more transformational capabilities needed to counter "irregular" threats, such as the insurgency in Iraq and the ongoing war on terror. DOD is also conducting a set of joint capability reviews to ensure acquisition decisions are based on providing integrated capabilities rather than focused on individual weapon systems. The study results, although still months away, could further impact the future of the F/A-22 program, including the modernization plan. The F/A-22 will have to compete for funding, priority, and mission assignments with operational systems, such as the F-15 and F/A-18, and future systems, such as the Joint Strike Fighter and the Joint Unmanned Combat Air Systems. 
Air Force leadership and the Air Combat Command continue to support the multi-mission role for the F/A-22 and do not want to reduce or eliminate the new capabilities and missions. Therefore, if restructuring is required, program officials are considering other options to accommodate the program within reduced funding and fleet size. They are considering the possibility of moving forward with blocks 20 and 30 but curtailing block 40, because its enhancements are slated for those aircraft that have been cut by the budget decision (refer to table 1). Officials said that some of the enhancements planned for block 40 could be retrofitted into the block 20 and 30 aircraft. At the time of our review, Air Force officials were considering alternative strategies and plans for rephasing funds in order to execute the program changes enumerated above. It is paramount that these issues be settled before moving forward in the program. Reports detailing the results from IOT&E were not available for our review, but Air Force test officials told us that testing showed the F/A-22 was “overwhelmingly effective” as an air superiority fighter and that its supporting systems were “potentially suitable.” Some deficiencies were noted, particularly in reliability and maintainability, but Air Force officials believe these deficiencies can be corrected in time to meet the warfighter’s needs by the scheduled initial operational capability date in December 2005. They also believe test results support making the full-rate production decision. Testing to demonstrate the limited air-to-ground attack capability was not accomplished but is scheduled to be done as part of the follow-on operational test planned to start in July 2005. The F/A-22 initial operational test and evaluation was conducted by the Air Force Operational Test and Evaluation Center from April through December 2004 to support the full-rate production decision planned for March 2005. Its operational test plan was designed to assess the F/A-22’s combat effectiveness and suitability in an operationally representative environment. The warfighter had established five critical operational issues for evaluation during operational testing to demonstrate effectiveness and suitability: effectiveness—demonstrate operational performance to effectively execute selected counter-air missions; survivability—assess the ability to evade and survive against air-to-air and surface-to-air threats; deployability—evaluate the timely transportability and setup of F/A-22 personnel and equipment into a theater of operations; sortie generation—assess how well air crews can generate and launch sorties, including maintenance and supply support capabilities; and ground attack—demonstrate limited air-to-ground attack with the Joint Direct Attack Munition. The first two issues assess combat effectiveness in completing selected counter-air missions and in surviving against representative air and ground threats. The second two issues assess the suitability of the F/A-22 to support combat by transporting, deploying, and sustaining forces and equipment. These four critical operational issues were addressed in IOT&E. The fifth critical operational issue—ground attack—was not addressed in IOT&E and will be assessed during follow-on operational test and evaluation, scheduled to start in July 2005. 
This follow-on testing is also planned to include demonstrations of corrective actions for some deficiencies identified during IOT&E and other testing needed to achieve initial operational capability in December 2005. Additional follow-on operational tests are planned to evaluate new, more robust attack capabilities and other enhancements added by the modernization program. Combat effectiveness and survivability testing included extensive flight tests to evaluate air-to-air capabilities, including (1) offensive counter-air missions against aggressor aircraft and (2) defensive counter-air missions to accompany and protect friendly strike and high-value support aircraft from attack by aggressor aircraft. These tests incorporated ground and air threats resident at the Air Force’s Nevada test range. Computer simulations and models were also used to evaluate performance against future threats and in other scenarios that cannot be replicated in open flight tests. Test officials told us that the F/A-22 performed all the air-to-air missions very satisfactorily, demonstrating “overwhelming effectiveness” in their words. Officials also said that, in direct comparability tests, the F/A-22 demonstrated a clear advantage, often proving many times more effective than the F-15C. Testing did reveal some areas needing improvement, including avionics reliability and defensive systems; corrective actions for these and other deficiencies will need to be addressed in follow-on testing. Test officials characterized the F/A-22 aircraft and its support systems, based on the suitability demonstrations, as “potentially suitable.” The ability to transport and deploy F/A-22 personnel and equipment was adequately demonstrated and met the interim goal set by the warfighter regarding the number of airlift planes needed to transport forces and support equipment in the required amount of time. Of the four critical operational issues assessed, sortie generation experienced the most problems. Officials rated the sortie generation area as unsatisfactory. Problems were noted in aircraft reliability and maintainability, including maintenance of the aircraft’s critical low observable characteristics. Problems were also noted in the maturity of integrated diagnostic systems, key assets expected to greatly improve and accelerate field maintenance activities for meeting sortie rates with constrained personnel. Officials believe these and other deficiencies can be corrected in time to meet the warfighter’s needs. For example, officials said the mission capability rate demonstrated during testing has continued to improve and is close to achieving the warfighter’s desired rate, which is not required until December 2005. However, the testing and implementation of most corrective actions will not occur until after the full-rate production decision. Sortie generation and support activities also required the extensive involvement of contractor personnel to provide technical assistance, off-aircraft maintenance, and engineering, including troubleshooting and the use of special test equipment. Air Force officials said that extensive contractor involvement has long been planned for the F/A-22 system, particularly during initial fielding, and that reliance on contractor personnel and special test equipment should lessen somewhat as Air Force personnel gain experience. Before full-rate production can start, the Office of the Director of Operational Test and Evaluation must still review test results and report to Congress and defense leadership. 
In addition, the F/A-22 program must demonstrate that it satisfies criteria established by the Defense Acquisition Board in November 2004. Among other things, those criteria include delivering a fully resourced plan for follow-on testing to correct deficiencies identified in IOT&E, achieving design stability of the avionics software, demonstrating mature manufacturing processes, and validating technical order data. The Air Force, in the face of significant changes to the F/A-22, has not prepared a new business case to justify the resources needed to add a much more robust ground attack capability and to assume new missions. Over the 19 years that the program has been in development, the world threat environment has changed, and the capabilities the Air Force once needed and planned for in the F-22 may not satisfy the warfighter’s future needs. Additionally, cost growth over time and affordability concerns have driven down planned aircraft quantities from 750 to 178 aircraft. The Air Force is now planning a modernization program that will substantially change the role of the F/A-22. Because budget cuts have eliminated F/A-22 procurement after 2008, the modernization program as planned is obsolete. Even if aircraft are restored to the procurement plan beyond 2008, this modernization is projected to occur over a 12-year period. Based on the program’s current knowledge, there is significant risk that the planned modernization would not move ahead and deliver capability to the warfighter on schedule. The original plan to develop and deliver an initial capability for the F-22 was 9 years—it has taken nearly 20 years. Our body of work on best practices tells us one thing for certain: the chances of attaining successful outcomes are substantially increased when a business case is made that matches requirements with the resources needed to develop a product. Right now, both requirements and resources for the F/A-22 program are in a state of flux, and the program lacks a business case to move forward with billions of dollars in planned investments. Over the immediate horizon, planned studies present OSD with opportunities to answer questions about the need for and affordability of the F/A-22. The 2005 Quadrennial Defense Review is expected to make a strategic assessment of available and planned tactical air capabilities to help determine where to target resources. Likewise, an ongoing series of joint capabilities reviews, to include the F/A-22, could help determine where the F/A-22 now fits in the force structure. These top-level studies would provide information needed for a specific F/A-22 business case that would place DOD leaders in a better position to decide on remaining F/A-22 investments in concert with other tactical aircraft and DOD needs. However, the F/A-22 full-rate production decision is currently planned for March 2005, before the results of these studies will be available, and production is already near full-rate quantities. Because of evolving threats against the United States; pending changes in U.S. defense plans; the lack of clarity regarding F/A-22 required capabilities, quantities, and resources; the recent budget decision; and upcoming reviews on joint air capabilities, we are reiterating and expanding upon the recommendation in our March 2004 report for a new and comprehensive business case to justify future investment in the F/A-22 program. 
We recommend that the Secretary of Defense complete a new business case that determines the continued need for the F/A-22 and that specifically: (a) justifies the F/A-22’s expanded air-to-ground capabilities based on an assessment of alternatives, including both operational assets and planned future weapon systems; (b) justifies the quantity of F/A-22 aircraft needed to satisfy requirements for air-to-air and air-to-ground missions; (c) provides evidence that the planned quantity and capabilities are affordable within current and projected budgets and the statutory funding limitation; (d) addresses impacts of the recent budget decision on the need for and cost of future developmental activities, long-term logistical support and basing decisions, and the ability to take advantage of cost reduction efforts, such as multiyear contracting and productivity improvement; and (e) justifies the need for investments in a new computer architecture and avionics processor and in addressing F/A-22 infrastructure deficiencies. In written comments on a draft of this report, DOD concurred with our recommendation and identified the following planned actions that would accomplish elements of the business case: (1) the 2005 Quadrennial Defense Review will address the quantity of aircraft needed for air-to-air and air-to-ground missions; (2) Defense Acquisition Board reviews of the F/A-22 program will ensure that initial modernization efforts have validated requirements and are tested; and (3) the plan to break out the latter stages of modernization as a separate acquisition program will require the Air Force to develop requirements, perform an analysis to substantiate those requirements, and justify investments in new capabilities. DOD also stated its concern that, by reporting only total program acquisition unit cost (pp. 5 and 6 herein), the report does not provide a balanced picture. DOD asked us to also present information concerning the steady reduction in unit flyaway costs over the course of the program. Flyaway costs do not include “sunk” costs and fixed expenses for program start-up, development, test, construction, and support but focus on the procurement costs of buying additional systems, costs that generally decrease as a production program matures and manufacturing efficiency improves. In response, we provided additional information about flyaway costs and potential diseconomies from truncating the procurement program (see p. 12). We also incorporated other technical comments from DOD where appropriate. We are sending copies of this report to the Secretary of Defense; the Secretary of the Air Force; and the Director, Office of Management and Budget. Copies will also be made available to others on request. Please contact me or Michael J. Hazard at (202) 512-4841 if you or your staff have any questions concerning this report. Other contributors to this report were Robert Ackley, Michael W. Aiken, Lily J. Chin, Bruce D. Fairbairn, Steven M. Hunter, and Adam Vodraska. To determine the Air Force’s F/A-22 modernization plans and funding requirements, we analyzed budget documents, cost reports, acquisition plans, and project listings to identify the purpose, scope, and cost of the modernization efforts. Officials from the Air Force and the Office of the Secretary of Defense (OSD) briefed us on program details, specific candidate projects, and program history. We compared current plans and project listings with those from previous time periods to determine changes in modernization projects and schedules. 
We also compared cost estimates prepared by the Air Force and OSD’s cost analysts in order to identify key differences in assumptions used, cost factors applied, and time periods, and to reconcile how these differences affected final results. To determine the results and implications of the initial operational test and evaluation for the F/A-22 program, we first reviewed test plans, laws and regulations governing operational tests, and management direction affecting the scope and schedule of testing. We then discussed summary results and program impacts, including schedule issues, with testing and evaluation officials from the Air Force and OSD. We also reviewed briefing materials used by testing officials to inform DOD management and congressional staffs on the results of initial operational test and evaluation (IOT&E). However, at the time of our review, the final reports on IOT&E results from the Air Force’s Operational Test and Evaluation Center and the OSD Director of Operational Test and Evaluation had not been issued, nor were drafts made available to us. Accordingly, our analysis of actual results and data was somewhat constrained, and our reporting is limited to summary-level observations on test scope, results, and corrective actions identified. Nevertheless, DOD officials gave us access to sufficient information to make informed judgments on the matters covered in this report. In performing our work, we obtained information and interviewed officials from the Office of the Secretary of Defense, Washington, D.C., including the offices of the Under Secretary of Defense for Acquisition, Technology and Logistics, the Director of Operational Test and Evaluation, Program Analysis and Evaluation, and the Cost Analysis Improvement Group; Air Force Headquarters, Washington, D.C.; F/A-22 System Program Office, Wright-Patterson Air Force Base, Ohio; Air Combat Command, Langley Air Force Base, Virginia; Air Force Operational Test and Evaluation Center, Kirtland Air Force Base, New Mexico; and the Combined Flight Test Center, Edwards Air Force Base, California. We performed our work from November 2004 through February 2005 in accordance with generally accepted government auditing standards. Defense Acquisitions: Assessments of Major Weapon Programs. GAO-04-248. Washington, D.C.: March 31, 2004. Tactical Aircraft: Status of the F/A-22 and Joint Strike Fighter Programs. GAO-04-597T. Washington, D.C.: March 25, 2004. Tactical Aircraft: Changing Conditions Drive Need for New F/A-22 Business Case. GAO-04-391. Washington, D.C.: March 15, 2004. Best Practices: Better Acquisition Outcomes Are Possible If DOD Can Apply Lessons from F/A-22 Program. GAO-03-645T. Washington, D.C.: April 11, 2003. Tactical Aircraft: Status of the F/A-22 Program. GAO-03-603T. Washington, D.C.: April 2, 2003. Tactical Aircraft: DOD Should Reconsider Decision to Increase F/A-22 Production Rates While Development Risks Continue. GAO-03-431. Washington, D.C.: March 14, 2003. Tactical Aircraft: F-22 Delays Indicate Initial Production Rates Should Be Lower to Reduce Risks. GAO-02-298. Washington, D.C.: March 5, 2002. Tactical Aircraft: Continuing Difficulty Keeping F-22 Production Costs Within the Congressional Limitation. GAO-01-782. Washington, D.C.: July 16, 2001. Tactical Aircraft: F-22 Development and Testing Delays Indicate Need for Limit on Low-Rate Production. GAO-01-310. Washington, D.C.: March 15, 2001. Defense Acquisitions: Recent F-22 Production Cost Estimates Exceeded Congressional Limitation. 
GAO/NSIAD-00-178. Washington, D.C.: August 15, 2000. Defense Acquisitions: Use of Cost Reduction Plans in Estimating F-22 Total Production Costs. GAO/T-NSIAD-00-200. Washington, D.C.: June 15, 2000. F-22 Aircraft: Development Cost Goal Achievable If Major Problems Are Avoided. GAO/NSIAD-00-68. Washington, D.C.: March 14, 2000. Defense Acquisitions: Progress in Meeting F-22 Cost and Schedule Goals. GAO/T-NSIAD-00-58. Washington, D.C.: December 7, 1999. Fiscal Year 2000 Budget: DOD’s Production and RDT&E Programs. GAO/NSIAD-99-233R. Washington, D.C.: September 23, 1999. Budget Issues: Budgetary Implications of Selected GAO Work for Fiscal Year 2000. GAO/OCG-99-26. Washington, D.C.: April 16, 1999. Defense Acquisitions: Progress of the F-22 and F/A-18E/F Engineering and Manufacturing Development Programs. GAO/T-NSIAD-99-113. Washington, D.C.: March 17, 1999. F-22 Aircraft: Issues in Achieving Engineering and Manufacturing Development Goals. GAO/NSIAD-99-55. Washington, D.C.: March 15, 1999. F-22 Aircraft: Progress of the Engineering and Manufacturing Development Program. GAO/T-NSIAD-98-137. Washington, D.C.: March 25, 1998. F-22 Aircraft: Progress in Achieving Engineering and Manufacturing Development Goals. GAO/NSIAD-98-67. Washington, D.C.: March 10, 1998. Tactical Aircraft: Restructuring of the Air Force F-22 Fighter Program. GAO/NSIAD-97-156. Washington, D.C.: June 4, 1997. Defense Aircraft Investments: Major Program Commitments Based on Optimistic Budget Projections. GAO/T-NSIAD-97-103. Washington, D.C.: March 5, 1997. F-22 Restructuring. GAO/NSIAD-97-100R. Washington, D.C.: February 28, 1997. Tactical Aircraft: Concurrency in Development and Production of F-22 Aircraft Should Be Reduced. GAO/NSIAD-95-59. Washington, D.C.: April 19, 1995. Tactical Aircraft: F-15 Replacement Issues. GAO/T-NSIAD-94-176. Washington, D.C.: May 5, 1994. Tactical Aircraft: F-15 Replacement Is Premature as Currently Planned. GAO/NSIAD-94-118. Washington, D.C.: March 25, 1994.
The Air Force is preparing a modernization plan that expands the capabilities of the F/A-22, which was first designed to serve as an air-to-air fighter aircraft with very limited ability to strike targets on the ground. The Air Force now intends to transform it by adding robust air-to-ground capabilities to attack enemy ground threats and by adding onboard intelligence-gathering capabilities. After the recent budget cut, DOD estimates the F/A-22's cost at $63.8 billion for 178 aircraft. The aircraft has been in development for more than 19 years, a decade longer than originally envisioned. In the face of significant cost and schedule overruns, Congress has mandated that GAO annually assess the F/A-22 program. In this report, GAO addresses (1) the Air Force's business case for the F/A-22 modernization plan and (2) the recently completed initial operational test and evaluation. The Air Force has yet to produce a business case for the next-generation F/A-22. Much has changed in the years since the F/A-22 program began nearly 2 decades ago--adversarial threats against U.S. aircraft have evolved, and a plan is in progress to modernize the F/A-22 into an aircraft significantly different from the original. A DOD cost estimate in 2003 projected the Air Force's modernization plan to cost $11.7 billion through 2018. A December 2004 budget decision reduced procurement funding and quantities but did not cut funding for modernization. The decision to terminate procurement after fiscal year 2008 places the current modernization plan in doubt because key ground attack and intelligence-gathering enhancements had been slated for aircraft now eliminated from the program. Without a new business case for adding a more robust ground attack capability and for new intelligence missions, the Air Force may be at a disadvantage when the time comes to justify the modernization plan in the face of future budget constraints. DOD is set to conduct the 2005 Quadrennial Defense Review to weigh the merits of transformational priorities and investments to determine if the best choices are being made to meet military needs within available funding levels. This may further influence an F/A-22 business case. The F/A-22 program recently underwent initial operational testing, but testing did not include the air-to-ground missions that the Air Force envisions for the aircraft. The Air Force does not expect to conduct testing of these capabilities until after a decision is made to enter full-rate production. Although a final test report was not available for GAO's review, Air Force officials told GAO that the F/A-22 was extremely effective in performing its air-to-air missions. Evaluation results of capabilities needed to sustain combat operations and maintain aircraft were not as favorable. Additional testing will be required to assess corrective actions for identified deficiencies and to evaluate new ground attack and intelligence-gathering capabilities added by the modernization program.
Nanotechnology encompasses a wide range of innovations based on the understanding and control of matter at the scale of nanometers—the equivalent of one-billionth of a meter. To illustrate, a sheet of paper is about 100,000 nanometers thick and a human hair is about 80,000 nanometers wide. At the nanoscale level, materials may exhibit electrical, biological, and other properties that differ significantly from the properties the same materials exhibit at a larger scale. Exploiting these differences in nanoscale materials has led to a range of commercial uses and holds the promise of innovations in virtually every industry, from aerospace and energy to health care and agriculture. In 2006, an estimated $50 billion in products worldwide incorporated nanotechnology, and this figure has been projected to grow to $2.6 trillion by 2014. One research institute estimates that over 500 consumer products already on the market may contain nanoscale materials. The National Nanotechnology Initiative (NNI) was established in 2001 as a federal, multiagency effort intended to accelerate the discovery, development, and deployment of nanoscale science, engineering, and technology to achieve economic benefits, enhance the quality of life, and promote national security. Management of the NNI falls under the purview of the National Science and Technology Council (NSTC), which coordinates science and technology policy across the federal government. The NSTC is managed by the Director of the Office of Science and Technology Policy (OSTP), who also serves as the Science Advisor to the President. The NSTC’s Committee on Technology established the Nanoscale Science, Engineering, and Technology (NSET) subcommittee to help coordinate, plan, and implement the NNI’s activities across participating agencies. In 2003, the NSET subcommittee further established a Nanotechnology Environmental and Health Implications (NEHI) working group. The purpose of the NEHI working group, composed of representatives from 16 research and regulatory agencies, is to, among other things, coordinate agency efforts related to the environmental, health, and safety (EHS) risks of nanotechnology. Similar to the NNI, the NEHI working group has no authority to mandate research priorities or to ensure that agencies adequately fund particular research. In December 2003, Congress enacted legislation to establish a National Nanotechnology Program to coordinate federal nanotechnology research and development. Among other things, the act directs the NSTC to establish goals and priorities for the program and to set up program component areas that reflect those goals and priorities. To implement these requirements, the NSTC has established a process to categorize research projects and activities undertaken by the various federal agencies into seven areas. Six of the seven focus on the discovery, development, and deployment of nanotechnology, while the seventh relates to the societal dimensions of nanotechnology, which include issues such as the EHS risks of nanotechnology. As part of the annual federal budget process, agencies also report their research funding for each area to the Office of Management and Budget (OMB). The NNI’s annual Supplement to the President’s Budget, prepared by the NSTC, includes EHS research figures from the agencies and a general description of the research conducted by the agencies in each of the areas. 
For reporting purposes, the NSET subcommittee has defined EHS research as efforts whose primary purpose is to understand and address potential risks to health and to the environment posed by nanotechnology. Eight of the 13 agencies that funded nanotechnology research in fiscal year 2006 reported having devoted some of those resources to research that had a primary focus on potential EHS risks. Under the NNI, each agency funds research and development projects that support its own mission as well as the NNI’s goals. While agencies share information on their nanotechnology-related research goals with the NSET subcommittee and NEHI working group, each agency retains control over its decisions on the specific projects to fund. While the NNI was designed to facilitate interagency cooperation and identify goals and priorities for nanotechnology research, it is not a research program. It has no funding or authority to dictate the nanotechnology research agenda for participating agencies. The NNI used its fiscal year 2000 strategic plan and its subsequent updates to delineate a strategy to support long-term nanoscale research and development, among other things. A key component of the 2000 plan was the identification of nine specific research and development areas—known as “grand challenges”—that highlighted federal research on applications of nanotechnology with the potential to realize significant economic, governmental, and societal benefits. In 2004, the NNI updated its strategic plan and described its goals as well as the investment strategy by which those goals were to be achieved. Consistent with the 21st Century Nanotechnology Research and Development Act, the NNI reorganized its major subject categories of research and development investment into program component areas (PCAs) that cut across the interests and needs of the participating agencies. These seven areas replaced the nine grand challenges that the agencies had used to categorize their nanotechnology research. Six of the areas focus on the discovery, development, and deployment of nanotechnology. The seventh, societal dimensions, consists of two topics: research on environmental, health, and safety issues; and education and research on ethical, legal, and other societal aspects of nanotechnology. PCAs are intended to provide a means by which the NSET subcommittee, OSTP, OMB, Congress, and others may be informed of the relative federal investment in these key areas. PCAs also provide a structure by which the agencies that fund research can better direct and coordinate their activities. In response to increased concerns about the potential EHS risks of nanotechnology, the NSET subcommittee and the agencies agreed in fiscal year 2005 to separately report their research funding for each of the two components of the societal dimensions PCA. The December 2007 update of the NNI’s strategic plan reaffirmed the program’s goals, identified steps to accomplish those goals, and formally divided the societal dimensions PCA into two PCAs: “environment, health, and safety” and “education and societal dimensions.” Beginning with the development of the fiscal year 2005 federal budget, agencies have worked with OMB to identify funding for nanoscale research that would be reflected in the NNI’s annual Supplement to the President’s Budget. OMB analysts reviewed aggregated, rather than project-level, data on research funding for each PCA to help ensure consistent reporting across the agencies. 
Agencies also relied on definitions of the PCAs developed by the NSET subcommittee to determine the appropriate area in which to report research funding. Neither NSET nor OMB provided guidance on whether or how to apportion funding for a single research project to more than one PCA, if appropriate. However, representatives from both NSET and OMB stressed that the agencies were not to report each research dollar more than once. About 18 percent of the total research dollars reported by the agencies as being primarily focused on the study of nanotechnology-related EHS risks in fiscal year 2006 cannot actually be attributed to this purpose. Specifically, we found that 22 of the 119 projects funded by five federal agencies were not primarily related to studying EHS risks. These 22 projects accounted for about $7 million of the total that the NNI reported as supporting research primarily focused on EHS risks. Almost all of these projects—20 out of 22—were funded by NSF, with the two remaining projects funded by NIOSH. We found that the primary purpose of many of these 22 projects was to explore ways to use nanotechnology to remediate environmental damage or to identify environmental, chemical, or biological hazards not related to nanotechnology. For example, some NSF-funded research explored the use of nanotechnology to improve water or gaseous filtration systems. Table 1 shows our analysis of the nanotechnology research projects reported as being primarily focused on EHS risks. We found that the miscategorization of these 22 projects resulted largely from a reporting structure for nanotechnology research that does not easily allow agencies to recognize projects that use nanotechnology to improve the environment or enhance the detection of environmental contaminants, and from the limited guidance available to the agencies on how to consistently report EHS research. From fiscal years 2001 to 2004, the NSET subcommittee categorized federal research and development activities into nine categories, known as “grand challenges,” that included one focused on “nanoscale processes for environmental improvement.” Agencies initiated work on many of these 22 projects under the grand challenges categorization scheme. Starting in fiscal year 2005, NSET adopted a new categorization scheme, based on PCAs, for agencies to report their nanotechnology research. The new scheme eliminated the research category of environmental improvement applications and asked agencies to report research designed to address or understand the risks associated with nanotechnology as part of the societal dimensions PCA. The new scheme shifted the focus from applications-oriented research to research focused on the EHS implications of nanotechnology. However, the new scheme provided no way for agencies to categorize environmentally focused research that was already underway. As a result, according to program managers, NSF and NIOSH characterized these projects as EHS-focused for lack of a more closely related category in which to place them. Furthermore, neither NSET nor OMB provided agencies with guidance on how to apportion the dollars for a single project to more than one program component area, when appropriate. This is especially significant for broad, multiphase research projects, such as NSF’s support to develop networks of research facilities. Of the five agencies we reviewed, only NSF apportioned funds for a single project to more than one PCA. 
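A short calculation shows how the "about 18 percent" figure follows from the project counts and dollar amounts discussed in this statement. The $37.7 million denominator is the total fiscal year 2006 EHS research funding reported by the NNI, a figure cited later in this statement; this is illustrative arithmetic only.

```python
# Illustrative arithmetic using figures cited in this statement (dollars in millions).
reported_ehs_total = 37.7      # NNI-reported fiscal year 2006 EHS research funding
miscategorized_funding = 7.0   # approximate funding for the 22 miscategorized projects
total_projects = 119
miscategorized_projects = 22

print(f"Share of reported EHS dollars not primarily EHS-focused: "
      f"{miscategorized_funding / reported_ehs_total:.1%}")      # roughly 18-19 percent
print(f"Share of reported EHS projects not primarily EHS-focused: "
      f"{miscategorized_projects / total_projects:.1%}")          # roughly 18-19 percent
```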
In addition to research reported to the NNI as being primarily focused on the EHS risks of nanotechnology, some agencies conduct research that is not reflected in the EHS totals provided by the NNI, either because the agencies are not considered federal research agencies or because the primary purpose of the research was not to study EHS risks. For example, some agencies conduct research that results in information highly relevant to EHS risks but that was not primarily directed at understanding or addressing those risks and therefore is not captured in the EHS total. This type of research provides information that is needed to understand and measure nanomaterials to ensure safe handling and protection against potential health or environmental hazards; however, such research is captured under other PCAs, such as instrumentation, metrology, and standards. Because the agencies that conduct this research do not systematically track it as EHS-related, we could not establish the exact amount of federal funding that is being devoted to this additional EHS research. All eight agencies in our review have processes in place to identify and prioritize the research they need related to the potential EHS risks of nanotechnology. Most agencies have developed task forces or designated individuals to specifically consider nanotechnology issues and identify priorities, although the scope and exact purpose of these activities differ by agency. Agencies communicate their identified EHS research priorities to the public and to the research community in a variety of ways, including publication in agency documents that specifically address nanotechnology issues, agency strategic plans or budget documents, agency Web sites, and presentations at public conferences or workshops. We determined that each agency’s nanotechnology research priorities generally reflected its mission. For example, the priorities identified by FDA and CPSC are largely focused on the detection and safety of nanoparticles in the commercial products they regulate. On the other hand, EHS research priorities identified by NSF reflect its broader mission to advance science in general and include a more diverse range of priorities, such as the safety and transport of nanomaterials in the environment and the safety of nanomaterials in the workplace. In addition to the efforts of individual agencies, the NSET subcommittee has engaged in an iterative prioritization process through its NEHI working group. Beginning in 2006, NEHI identified but did not prioritize five broad research categories and 75 more specific subcategories of needs where additional information was considered necessary to further evaluate the potential EHS risks of nanotechnology. NEHI obtained public input on its 2006 report and released another report in August 2007, in which it distilled the previous list of 75 unprioritized specific research needs into a set of five prioritized needs for each of the five general research categories. The NEHI working group has used these initial steps to identify the gaps between the needs and priorities it has identified and the research that agencies have underway. NEHI issued a report summarizing the results of this analysis in February 2008. Although a comprehensive EHS research strategy had not been finalized at the time of our review, the prioritization processes taking place within individual agencies and the NNI appeared to be reasonable. 
Numerous agency officials said their agency’s EHS research priorities were generally reflected in both the NEHI working group’s 2006 research needs report and its 2007 research prioritization report. Our comparison of agency nanotechnology priorities to the NNI’s priorities corroborated these statements. Specifically, we found that all but one of the research priorities identified by individual agencies could be linked to one or more of the five general research categories. According to agency officials, the alignment of agency priorities with the general research categories is particularly beneficial to the regulatory agencies, such as CPSC and OSHA, which do not conduct their own research but rely instead on research agencies for data to inform their regulatory decisions. In addition, we found that the primary purposes of agency projects underway in fiscal year 2006 were generally consistent with both agency priorities and the NEHI working group’s research categories. Of the 97 remaining projects (the 119 reported projects minus the 22 miscategorized ones), 43 were focused on Nanomaterials and Human Health, including all 18 of the projects funded by NIH. EPA and NSF funded all 25 projects related to Nanomaterials and the Environment. These two general research categories accounted for 70 percent of all projects focused on EHS risks. Furthermore, we determined that, while agency-funded research addressed each of the five general research categories, it focused on the priority needs within each category to varying degrees. Specifically, we found that the two highest-priority needs in each category were addressed only slightly more frequently than the two lowest-priority needs. Moreover, although the NEHI working group considered the five specific research priorities related to human health equally important, 19 of the 43 projects focused on a single priority—“research to determine the mechanisms of interaction between nanomaterials and the body at the molecular, cellular, and tissular levels.” Table 2 shows a summary of projects by agency and specific NEHI research priority. Agency and NNI processes to coordinate research and other activities related to the potential EHS risks of nanotechnology have been generally effective and have resulted in numerous interagency collaborations. All eight agencies in our review have collaborated on multiple occasions with other NEHI-member agencies on activities related to the EHS risks of nanotechnology. These EHS-related activities are consistent with the expressed goals of the larger NNI—to promote the integration of federal efforts through communication, coordination, and collaboration. The NEHI working group is at the center of this effort. We found that regular NEHI working group meetings, augmented by informal discussions, have provided a venue for agencies to exchange information on a variety of topics associated with EHS risks, including their respective research needs and opportunities for collaboration. Interagency collaboration has taken many forms, including joint sponsorship of EHS-related research and workshops, the detailing of staff to other NEHI working group agencies, and various other general collaborations or memoranda of understanding. Furthermore, the NEHI working group has adopted a number of practices GAO has previously identified as essential to helping enhance and sustain collaboration among federal agencies. For example, in 2005 NEHI clearly defined its purpose and objectives and delineated roles and responsibilities for group members. 
Furthermore, collaboration through multiagency grant announcements and jointly sponsored workshops has served as a mechanism to leverage limited resources to achieve increased knowledge about potential EHS risks. Finally, all agency officials we spoke with expressed satisfaction with their agency’s participation in the NEHI working group, and specifically with the coordination and collaboration on EHS risk research and other activities that have resulted from that participation. Many officials described NEHI as unique among interagency efforts in terms of its effectiveness. Given limited resources, the development of ongoing relationships between agencies with different missions but compatible nanotechnology research goals is particularly important. NIH officials commented that their agency’s collaboration with NIST to develop standard reference materials for nanoparticles may not have occurred as readily had it not been for regular NEHI meetings and workshops. In addition, NEHI has effectively brought together research and regulatory agencies, which has enhanced planning and coordination. Many officials noted that participation in NEHI has frequently given regulators the opportunity to become aware of and involved with research projects at a very early point in their development, which has resulted in research that better suits the needs of regulatory agencies. Many officials also cited the dedication of individual NEHI working group representatives, who participate in the working group in addition to their regular agency duties, as critical to the group’s overall effectiveness. A number of the members have served on the body for several years, providing stability and continuity that contributes to a collegial and productive working atmosphere. In addition, because nanotechnology is relatively new with many unknowns, these officials said the agencies are excited about advancing knowledge about nanomaterials and contributing to the informational needs of both regulatory and research agencies. Furthermore, according to some officials, there is a shared sense among NEHI representatives of the need to apply lessons learned from the development of past technologies, such as genetically modified organisms, to help ensure the safe development and application of nanotechnology. In closing, Mr. Chairman, while nanotechnology is likely to affect many aspects of our daily lives in the future as novel drug delivery systems, improved energy storage capability, and stronger, lightweight materials are developed and made available, it is essential to consider the potential risks of this technology in concert with its potential benefits. Federal funding for studying the potential EHS risks of nanotechnology is critical to enhancing our understanding of these new materials, and we must have consistent, accurate, and complete information on the amount of agency funding that is being dedicated to this effort. However, this information is not currently available because the totals reported by the NNI include research that is focused more on uses for nanotechnology than on the risks it may pose. Furthermore, agencies currently have limited guidance on how to report projects with more than one research focus across program component areas, when appropriate. As a result, the inventory of projects designed to address these risks is inaccurate and cannot be used to ensure that the highest-priority research needs are met. Mr. Chairman, this concludes my prepared statement. 
I would be happy to respond to any questions that you and other Members may have. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information about this testimony, please contact Ms. Anu Mittal at (202) 512-3841 or at [email protected]. Individuals who contributed to this statement include Nancy Crothers, Elizabeth Erdmann, David Lutter, Rebecca Shea, and Cheryl Williams. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In March 2008, GAO issued a report entitled Nanotechnology: Better Guidance Is Needed to Ensure Accurate Reporting of Federal Research Focused on Environmental, Health, and Safety Risks (GAO-08-402). In this report, GAO reviewed the National Nanotechnology Initiative (NNI), a multiagency effort administered by the Office of Science and Technology Policy (OSTP). The NNI coordinates the nanotechnology-related activities of 25 federal agencies that fund nanoscale research or have a stake in the results. A key research area funded by some agencies relates to studying the potential environmental, health, and safety (EHS) risks that may result from exposure to nanoscale materials. For this testimony, GAO was asked to summarize the findings of its March 2008 report, focusing on (1) the extent to which selected agencies conducted EHS research in fiscal year 2006; (2) the reasonableness of the agencies' and the NNI's processes to identify and prioritize EHS research; and (3) the effectiveness of the agencies' and the NNI's processes to coordinate EHS research. In fiscal year 2006, federal agencies devoted $37.7 million--or 3 percent of the $1.3 billion total nanotechnology research funding--to research that was primarily focused on the EHS risks of nanotechnology, according to the NNI. However, about 20 percent of this total cannot actually be attributed to this purpose. GAO found that 22 of the 119 projects identified as EHS research in fiscal year 2006 were not primarily related to understanding the extent to which nanotechnology may pose an EHS risk. Instead, many of these projects were focused on how to use nanotechnology to remediate environmental damage or detect hazards not related to nanotechnology. GAO determined that this mischaracterization is rooted in a reporting structure that does not allow these types of projects to be easily categorized and in the lack of guidance for agencies on how to apportion research funding across multiple topics, when appropriate. In addition to the EHS funding reported by the NNI, federal agencies conduct other research that is not captured in the EHS totals. This research was not captured by the NNI either because it was funded by an agency not considered to be a research agency or because its primary purpose was not to study EHS risks. Federal agencies and the NNI, at the time of GAO's review, were in the process of identifying and prioritizing EHS risk research needs, and the overall process they were using appeared reasonable. For example, identification and prioritization of EHS research needs was being done collaboratively by the agencies and the NNI. The NNI also was engaged in an iterative prioritization effort through its Nanotechnology Environmental and Health Implications (NEHI) working group. Through this process, NEHI identified five general research categories as a priority for federally funded research. GAO found that most of the research projects that were underway in fiscal year 2006 were generally consistent with agency and NEHI priorities. NEHI released its new EHS research strategy on February 13, 2008. Agency and NNI processes to coordinate activities related to potential EHS risks of nanotechnology have been generally effective. The NEHI working group has convened frequent meetings that have helped agencies identify opportunities to collaborate on EHS risk issues, such as joint sponsorship of research and workshops to advance knowledge and facilitate information-sharing among the agencies. 
NEHI also has incorporated several practices that GAO has previously identified as key to enhancing and sustaining interagency collaborative efforts, such as defining a common outcome and leveraging resources. Finally, all agency officials GAO spoke with expressed satisfaction with the coordination and collaboration on EHS risk research that has occurred through NEHI. They cited several factors they believe contribute to the group's effectiveness, including the stability of the working group membership and the expertise and dedication of its members. Furthermore, according to these officials, this stability, combined with common research needs and general excitement about the new science, has resulted in a collegial, productive working environment.
Although its effect on communities can be devastating, wildland fire is a natural and necessary process that provides many benefits to ecosystems, such as maintaining habitat diversity, recycling soil nutrients, limiting the spread of insects and disease, and promoting new growth by causing the seeds of fire-dependent species to germinate. Wildland fire also periodically removes brush, small trees, and other vegetation that can otherwise accumulate and increase the size, intensity, and duration of subsequent fires. Over the past century, however, various management practices—including fire suppression, grazing, and timber harvest—have reduced the normal frequency of fires in many forest and rangeland ecosystems and contributed to abnormally dense, continuous accumulations of vegetation, which can fuel uncharacteristically large or severe wildland fires. Federal researchers have estimated that unnaturally dense fuel accumulations on 90 million to 200 million acres of federal lands in the contiguous United States place these lands at an elevated risk of severe wildland fire. In response to the growing wildland fire problem, the five federal agencies responsible for managing wildland fires—the Forest Service in the Department of Agriculture and the Bureau of Indian Affairs, Bureau of Land Management, Fish and Wildlife Service, and National Park Service in the Department of the Interior—adopted the 1995 federal wildland fire management policy, which formally recognized the essential role that fire plays in maintaining natural systems. This policy was subsequently reaffirmed and updated in 2001. Two important implications of the new policy are the agencies’ recognition that (1) they needed to reduce accumulated vegetation that could fuel intense wildland fires and (2) it was not appropriate to continue attempting to suppress all fires. Acknowledging the problem caused by accumulated fuels, Congress substantially increased appropriations for fuel reduction treatments—appropriating more than $3.2 billion to the Forest Service and Interior since 2001—and, in 2003, passed the Healthy Forests Restoration Act, with the stated purpose of, among other things, reducing wildland fire risk to communities, municipal water supplies, and other at-risk federal land. After receiving its annual appropriation, the Forest Service allocates funds to its nine regional offices, which in turn allocate funds to individual national forests and grasslands. Interior, upon receiving its annual appropriation, allocates funds to its four fire management agencies—with the Bureau of Land Management receiving the largest share, about 50 percent of Interior’s funding. Interior’s agencies then allocate funds to their regional or state offices, which in turn allocate funds to individual field units, such as national parks or wildlife refuges. Forest Service and Interior agency field units are generally responsible for selecting individual fuel reduction projects to undertake; projects are typically conducted through mechanical treatments (using chainsaws, chippers, mowers, and the like) or by using prescribed fire (which land managers deliberately set to restore or maintain desired vegetative conditions). The agencies used the tools and fuel reduction funding provided by Congress to treat more than 18 million acres from 2001 through August 2007. Over the last decade, Congress, the Office of Management and Budget, federal agency officials, and others have expressed concerns about mounting federal wildland fire expenditures. 
These concerns have led GAO, the Department of Agriculture’s Office of Inspector General, the Forest Service, Interior, and others to conduct numerous reviews of the federal wildland fire program. These reviews identified many issues the agencies need to address if they are to contain costs—issues generally related to reducing accumulated fuels, acquiring and using firefighting personnel and equipment, and selecting firefighting strategies. Land managers and incident management teams (specialized fire-response teams that include personnel to handle command, planning, logistics, operations, and finance functions) have a wide spectrum of strategies available to them when responding to wildland fires; some of these strategies can be significantly more costly than others. These strategies range from having a few personnel monitor a fire while allowing it to burn to achieve ecological benefits—a practice known as wildland fire use—to mobilizing all available personnel and equipment to try to control the entire perimeter of a fire or otherwise suppress it as quickly as possible. In selecting a strategy for a particular fire, land managers are required to consider the cost of suppression, the value of structures and other resources threatened by the fire, and the potential ecological effects of the fire. The agencies use the term “appropriate management response” for a strategy that considers these factors. Recent reports by GAO and others, however, have identified barriers to the agencies’ increasing their use of less aggressive strategies, which often cost less. If the agencies and Congress are to make informed decisions about an effective and affordable long-term approach for addressing wildland fire, the agencies need a cohesive strategy that identifies the long-term options and associated funding for reducing excess vegetation and responding to fires. We first recommended the development of a cohesive strategy for addressing excess vegetation in 1999. By 2005, the agencies had yet to develop such a strategy, and that year we reiterated the need for one and broadened our recommendation’s focus to include options not only for reducing fuels but also for responding to wildland fires when they do occur, in order to better address the interrelated nature of the two activities. We repeated our call for a cohesive strategy in 2006 and 2007. Although the agencies had consistently concurred with our recommendation to develop a cohesive strategy, in 2007 they retreated from their commitment to develop one. The Department of Agriculture’s Under Secretary for Natural Resources and Environment testified before the Senate Committee on Energy and Natural Resources in January 2007, and before the House Subcommittee on National Parks, Forests and Public Lands in June 2007, that he did not think it useful to provide specific funding estimates for fuel treatments years into the future because conditions on the ground change over time and may change priorities in future years. Forest Service and Interior officials subsequently told us in January 2008 that they have no plans to develop a cohesive strategy that identifies long-term options and associated funding requirements. Despite the agencies’ retreat from their commitment to develop a cohesive strategy, a strategy of this sort nevertheless remains fundamental if the agencies and Congress are to fully understand the potential choices, and associated costs, for addressing wildland fire problems. 
We also believe the agencies have mischaracterized our recommendation to develop long-term options, and associated funding, for reducing fuels. Our intent was not to have the agencies identify the specific areas they would treat each year into the future, but rather to have them develop broad options for reducing fuels, including estimated costs, and analyze the effects of the different options on the predicted costs of preparing for and responding to wildland fires in the future. One such analysis was developed in 2002 by a team of Forest Service and Interior experts, who produced an estimate of the funds needed to implement each of eight different fuel reduction options for protecting communities and ecosystems across the nation over the next century. The team determined that effectively reducing the risks to communities and ecosystems across the nation could require an approximate tripling of fuel reduction funding, to about $1.4 billion annually, for an initial period of several years. These initially higher costs for fuel reduction would decline after fuels had been sufficiently reduced to allow less expensive prescribed burning methods in many areas. More importantly, the team estimated that the reduction in fuels would allow the agencies to suppress more fires at lower cost and would reduce total wildland fire management costs and risk after 15 years. Alternatively, the team concluded that maintaining the then-current level of investment in fuel reduction would increase costs as well as risks to communities and ecosystems in the long term. However, the Office of Management and Budget raised concerns about the accuracy of the long-term funding estimates used by the study; as a result, agency officials told us in 2006 that they needed to improve the data before they could develop a cohesive strategy. Now, however—and despite agency efforts to improve their data—this concern appears to be moot, as the agencies have abandoned their commitment to develop the strategy. We reported in 2007 that although the Forest Service and Interior agencies had taken several steps intended to help contain wildland fire costs, they had not clearly defined their cost-containment goals or developed a strategy for achieving those goals—steps that are fundamental to sound program management. As we reported, the agencies are implementing a number of steps designed to help them contain wildland fire costs—such as improving how they acquire and use firefighting assets, updating policies to require officials to consider the full spectrum of available strategies when selecting a firefighting strategy, and developing new decision support tools that help officials select the most appropriate strategy. However, we also found that the agencies had neither clearly defined the goals of their cost-containment efforts nor developed a clear plan for how the various steps they are taking to help contain costs fit together. Without such a strategy, we believe the agencies will have difficulty determining whether they are taking the most important steps first, as well as the extent to which the steps they are taking will help contain costs. As a result, we recommended that the agencies take several steps to improve the management of their cost-containment efforts, including establishing clearly defined goals and measurable objectives and a strategy to achieve them. 
Because of the importance of these actions and continuing concerns about the agencies’ response to the increasing cost of wildland fires—and so that the agencies could use the results of these actions to prepare for the 2008 fire season—we recommended the agencies provide Congress with this information no later than November 2007, a step they have yet to take. The Forest Service and Interior, in commenting on a draft of that report, generally disagreed with the characterization of many of our findings, but they neither agreed nor disagreed with our recommendations. In particular, they identified several agency documents that they argue provide clearly defined goals and objectives and that make up their strategy to contain costs. Although the documents cited by the agencies provide overarching goals and objectives, they lack the clarity and specificity needed by land management and firefighting officials in the field to help manage and contain wildland fire costs. Agency policy, for example, established an overarching goal of suppressing wildland fires at minimum cost, considering firefighter and public safety and the importance of resources being protected, but the agencies have established neither clear criteria by which to weigh the relative importance of the often-competing elements of this broad goal nor measurable objectives by which to determine if the agencies are meeting the goal. As a result, despite improvements the agencies continue to make to policy, decision support tools, and oversight, we believe that managers in the field lack a clear understanding of the relative importance that the agencies’ leadership places on containing costs, and—as we concluded in our 2007 report—are therefore likely to continue to select firefighting strategies without due consideration of the costs of suppression. We continue to believe that our recommendations, if effectively implemented, would help the agencies better manage their cost-containment efforts and improve their ability to contain wildland fire costs. In 2007, we also identified several shortcomings in the agencies’ processes for allocating fuel reduction funds to field units and selecting fuel reduction projects, which the agencies should correct in order to use their fuel reduction funds more effectively. Specifically, we noted that the agencies (1) did not consistently use systematic allocation processes—that is, processes that are based on criteria and applied consistently—in all agencies or at all levels, often relying instead on historical funding levels and professional judgment to allocate funds and select projects; (2) did not consistently consider the potential risk from wildland fire or the potential effectiveness of fuel reduction treatments when allocating funds and selecting projects; and (3) had not clarified the relative importance of the numerous factors they consider when allocating funds and selecting projects, including factors (such as funding stability or the use of forest products resulting from fuel reduction treatments) unrelated to risk or effectiveness. Accordingly, we recommended that the agencies improve their allocation processes in three areas. First, we recommended that the agencies develop and routinely use a systematic allocation process that is based on criteria, applied consistently, and common to all the agencies. 
Second, we recommended that the agencies work to improve the information they use to make allocation decisions, particularly information on wildland fire risk and fuel treatment effectiveness. Third, we recommended that the agencies clarify the relative importance of the various factors they consider when allocating funds. Without improvements in these three areas, we noted that the agencies would likely continue relying on “allocation by tradition”—that is, allocating fuel reduction funds on the basis of past funding levels rather than on calculated need. Some agencies have begun implementing systematic processes for allocating funds. In 2007, the Forest Service began using a computer model to influence funding allocations to its nine regions, and it continues to refine and expand its use of the model, including introducing improved data about the likelihood of fire in a particular area. In addition, all nine Forest Service regions are required, beginning in 2008, to use the model as part of their process for allocating funds to national forests. Interior is developing a similar computer model for allocating funds to its agencies, in part based on the Forest Service’s model. For fiscal year 2007, Interior allocated 5 percent of its fuel reduction project funds to its four agencies using the model; for fiscal year 2008, according to an Interior official, Interior will use the model to allocate all of its fuel reduction project funds to its four agencies, within constraints designed to reduce the potential impact of funding changes. Officials from both the Forest Service and Interior told us that the agencies are working closely with each other on model development. Of Interior’s agencies, the Bureau of Land Management is developing a model similar to Interior’s for allocating funds to its state offices; the Fish and Wildlife Service uses its own computer model when allocating funds to regional offices; the Bureau of Indian Affairs allocates funds to its regions using a formula that considers past performance and proposed work; and the National Park Service allocates funds to its regions primarily on the basis of historical funding levels. However, Interior is working to standardize the allocation process within these agencies as well; a department official told us that Interior plans to use its model to allocate funds down to the agencies’ state and regional levels in fiscal year 2009. Although the models some of the agencies are developing represent substantial steps forward in systematically allocating funds, these steps are incomplete and not fully coordinated. Specifically, not all the agencies have models; none consistently uses models at the national, regional, and local levels; and the models that are in use are not common to all agencies. Further, the models, even where used, often exert only a small influence on allocation decisions, partly because the agencies do not yet have full confidence in the models’ data. Until the models serve as the foundation for allocation decisions, such decisions will continue to rely mainly on historical funding patterns and professional judgment. Accordingly, we urge the agencies to continue developing an allocation process that is systematic and that is common to all agencies. The agencies are also continuing to investigate ways to develop and use measures of risk and treatment effectiveness. 
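To make concrete what a systematic, criteria-based allocation could look like, in contrast to allocation by tradition, the following sketch distributes a fixed budget in proportion to weighted measures of wildland fire risk and expected treatment effectiveness. This is a simplified, hypothetical illustration: the unit names, scores, weights, and dollar amounts are invented and are not drawn from the Forest Service or Interior models.

```python
# Hypothetical sketch of a criteria-based allocation of fuel reduction funds.
# Unit names, scores, weights, and dollar amounts are invented for illustration
# and are not drawn from the Forest Service or Interior allocation models.

BUDGET = 100.0        # total funds to distribute (millions of dollars)
RISK_WEIGHT = 0.6     # assumed relative importance of wildland fire risk
EFFECT_WEIGHT = 0.4   # assumed relative importance of treatment effectiveness

units = {
    # unit: (risk score 0-1, effectiveness score 0-1, prior-year funding)
    "Unit A": (0.9, 0.7, 40.0),
    "Unit B": (0.4, 0.8, 35.0),
    "Unit C": (0.7, 0.3, 25.0),
}

def criteria_based(units):
    """Allocate the budget in proportion to each unit's weighted score."""
    scores = {u: RISK_WEIGHT * risk + EFFECT_WEIGHT * effect
              for u, (risk, effect, _) in units.items()}
    total = sum(scores.values())
    return {u: BUDGET * s / total for u, s in scores.items()}

def by_tradition(units):
    """'Allocation by tradition': repeat prior-year funding shares."""
    total = sum(prior for (_, _, prior) in units.values())
    return {u: BUDGET * prior / total for u, (_, _, prior) in units.items()}

for label, allocation in (("Criteria-based", criteria_based(units)),
                          ("By tradition", by_tradition(units))):
    formatted = ", ".join(f"{u}: ${amt:.1f}M" for u, amt in allocation.items())
    print(f"{label}: {formatted}")
```

Under a criteria-based approach, changes in measured risk or effectiveness shift funding shares directly, whereas the historical approach simply carries prior-year shares forward.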
Forest Service and Interior officials told us, for example, that researchers are looking at areas burned in past wildland fires to assess the extent to which fuel treatments altered fire behavior. Although efforts such as these are likely to be long-term undertakings and involve considerable research investment and activity, developing such measures would improve the agencies’ ability to assess and compare the cost-effectiveness of potential treatments in deciding how to optimally allocate scarce funds. Finally, such information could also help the agencies address our third recommendation—that is, to clarify the relative importance of the various factors they consider when allocating funds. Such an effort is already under way at Interior, according to an Interior official, and the agency hopes to complete its work before the 2008 fire season. A Forest Service official stated that the Forest Service is also working to prioritize the various factors, but did not provide a timetable for completing this effort. Agency officials plan to complete the FPA model by June 30, 2008, but preliminary results from our ongoing review raise questions about the extent to which the current model will be able to meet all of the key goals established for FPA. FPA—a common interagency performance-based system for program planning and budgeting for the full scope of fire management activities, including preparedness, large fire suppression, and fuel reduction treatments—was proposed and funded to address shortcomings that Congress, GAO, and the Office of Management and Budget identified in the agencies’ existing budget allocation frameworks. FPA also is critical to developing and implementing a cohesive strategy, and to the agencies’ efforts to contain wildland fire costs. Development of FPA commenced in 2002. According to a 2001 report commissioned by the agencies that serves as the foundation of FPA, FPA was intended to establish a common framework for the agencies to (1) determine national budget needs by analyzing budget alternatives at the local level—using a common, interagency process for fire management planning and budgeting—and aggregating the results; (2) determine the relative costs and benefits for the full scope of fire management activities, including potential trade-offs among investments in fuel reduction, fire preparedness, and fire suppression activities; and (3) identify, for any given budget level, the most cost-effective mix of personnel and equipment to carry out these activities. In addition, because responding to wildland fires often requires coordination and collaboration among federal, state, tribal, and local firefighting entities to effectively protect lives, homes, and resources, the agencies were directed to develop FPA in conjunction with their nonfederal partners and to recognize the availability of adjacent nonfederal firefighting resources when determining the appropriate amount and location of federal resources. FPA program and senior agency officials told us that, when completed, FPA will allow the agencies to meet the key goals established for it, but preliminary results from our ongoing review have raised questions about FPA’s ability to do so. In particular, FPA likely will analyze only 5 years of fuel reduction treatments when modeling the effect such treatments will have on future large fire events, according to FPA program officials, although they have not yet made a final determination on the number of years to be analyzed. 
The officials said that it is not possible to identify fuel treatment projects more than 5 years into the future with sufficient accuracy to include in the analysis. Such a limited time frame, however, substantially impairs the ability of the model to analyze long-term trade-offs between annual fuel reduction treatment costs and future expected suppression costs for large fires, a key goal of FPA. Officials say that the FPA model expected to be completed in 2008 is the first step in an iterative development process and can be improved to increase its capability to analyze the trade-offs, but they could not provide a time frame for doing so. In addition, in 2006, after 4 years of model development, the agencies initiated substantial changes to the process FPA will use to analyze needed firefighting resources and determine where best to locate these resources; they are also still deciding how senior officials will use the model’s output to allocate funds among agencies and geographic regions of the country. The extent to which FPA will meet the key goal of identifying the most cost-effective allocation of resources for a given budget level is not yet clear, because the agencies are still developing the FPA model and determining how it will be used. A full assessment of FPA cannot be conducted, however, until the agencies complete the model; at that time, we plan to assess the extent to which FPA will meet the key goals established for it. Faced with an incendiary mix of accumulated fuels, climate change, and burgeoning development in fire-prone areas, and constrained by our nation’s long-term fiscal outlook, the federal wildland fire agencies need to commit to a more considered, long-term approach to managing their resources in order to address the wildland fire problem more effectively and efficiently. They have taken an important first step by establishing and updating federal wildland fire policy. Development of strategies and management tools for agency officials to use in achieving the policy’s vision, however, has been uneven. The agencies are making progress in certain areas, including improving funding allocation processes for reducing fuels and requiring appropriate management response to fires that occur. In addition, the agencies are continuing to develop FPA, which, if implemented appropriately, could significantly improve the agencies’ ability to allocate their resources effectively. But broader efforts have stalled—as in the development of cost-containment goals and objectives—or even lost ground, as evidenced by the agencies’ retreat from their earlier commitment to develop the cohesive wildland fire strategy we have called for. If the agencies are to achieve lasting results in their efforts to address the wildland fire problem, they will need a sustained commitment by agency leadership to developing both a long-term strategy that identifies potential options (and their costs) for managing wildland fires and the tools for carrying out such a strategy. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Steve Gaty, Assistant Director; David P. Bixler; Ellen W. 
Chu; Jonathan Dent; and Richard Johnson made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The nation's wildland fire problems have worsened over the past decade. Recent years have seen dramatic increases in the number of acres burned and the dollars spent on preparing for and responding to wildland fires. As GAO has previously reported, a number of factors have contributed to worsening fire seasons and increased firefighting expenditures, including an accumulation of fuels due to past land management practices; drought and other stresses, in part related to climate change; and an increase in human development in or near wildlands. Recent GAO reports have identified shortcomings in the approach to wildland fire management taken by the responsible federal agencies--the Department of Agriculture's Forest Service and four agencies within the Department of the Interior. GAO was asked to testify on agency efforts to (1) develop a cohesive strategy for preparing for and responding to wildland fire, (2) contain federal expenditures related to wildland fire, and (3) improve the processes used to allocate funds for reducing accumulated fuels and to select fuel reduction projects. GAO also is providing preliminary findings from its ongoing review of an interagency budget allocation and planning model known as fire program analysis (FPA). This testimony is based on issued GAO reports, reviews of agency documents related to the development of FPA, and discussions with agency officials. In recent years, GAO has recommended a number of actions federal wildland fire agencies should take to better diagnose the extent of the nation's wildland fire problems and develop a strategic approach for addressing them. The agencies have taken some steps to respond to GAO's recommendations, but have not completed other needed steps. Specifically, the agencies should: (1) Recommit to developing a cohesive strategy that identifies options and associated funding to reduce fuels and address wildland fire problems. In several reports dating to 1999, GAO recommended that a cohesive strategy be developed that identifies the available long-term options and associated funding for reducing hazardous fuels and for responding to wildland fires. Such a strategy would assist Congress and the agencies in making informed decisions about effective and affordable long-term approaches to addressing the nation's wildland fire problems. As of January 2008, the agencies had not developed such a strategy and, in fact, had retreated from earlier commitments to do so. (2) Establish clear goals and a strategy to help contain wildland fire costs. In 2007, GAO reported that the agencies had taken several steps to contain wildland fire costs, including developing new decision support tools to help officials select the most appropriate strategy for fighting wildland fires, but lacked clearly defined cost-containment goals and a strategy for achieving them. As a result, GAO believes managers in the field lacked a clear understanding of the relative importance agency leadership placed on containing costs and were therefore likely to select firefighting strategies without duly considering the costs of suppression. Although the agencies have continued to implement individual cost-containment steps, they still have not developed clear goals or a strategy for achieving them. (3) Continue to improve their processes for allocating fuel reduction funds and selecting fuel reduction projects. Also in 2007, GAO recommended several improvements to the agencies' processes for allocating fuel reduction funds to field units and selecting projects. 
Specifically, GAO recommended that the agencies use a more systematic allocation process, improve the information they use to make allocation decisions, and clarify the relative importance of the various factors they consider when allocating funds. The agencies are currently taking steps to implement these improvements, although none have yet been completed. In addition, GAO's ongoing review of FPA suggests that the current model, which the agencies expect to complete in June 2008, may not allow the agencies to meet all of the key goals established for FPA. Specifically, preliminary results from GAO's review suggest that the model will not allow the agencies to analyze long-term trade-offs between the costs of annual fuel reduction treatments and expected future suppression costs for large fires. GAO intends to conduct a full assessment of FPA once it is completed.
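The significance of the analysis window noted above can be illustrated with a simple, hypothetical calculation. The sketch below is not an FPA output; every figure is invented solely to show that if fuel treatments continue to reduce suppression costs for many years, an analysis that counts only a few years of those effects will understate, or even reverse the sign of, their estimated net benefit.

```python
# Hypothetical illustration of how the analysis window affects an estimated
# trade-off between fuel treatment costs and suppression-cost savings.
# All figures are invented for illustration and are not FPA model outputs.

ANNUAL_TREATMENT_COST = 1.0   # assumed cost of one year of treatments (billions)
ANNUAL_SAVINGS = 0.15         # assumed yearly suppression savings per treatment year
SAVINGS_DURATION = 15         # assumed years over which a treatment reduces costs

def net_benefit(treatment_years, window):
    """Net benefit of the treatments, counting only savings within the window."""
    cost = ANNUAL_TREATMENT_COST * treatment_years
    savings = 0.0
    for start in range(treatment_years):
        # each year's treatment yields savings for SAVINGS_DURATION years,
        # but only the years that fall inside the analysis window are counted
        counted = max(0, min(window, start + SAVINGS_DURATION) - start)
        savings += ANNUAL_SAVINGS * counted
    return savings - cost

print("5 treatment years, 5-year window :", round(net_benefit(5, 5), 2))
print("5 treatment years, 20-year window:", round(net_benefit(5, 20), 2))
```

Under these invented assumptions, the same five years of treatments appear to be a net cost when only five years of effects are counted but a net benefit over a longer window, which is one reason a short analysis horizon can obscure long-term trade-offs.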
Many federal agencies have yet to adopt succession planning and management initiatives. In 1997, the National Academy of Public Administration reported that of the 27 agencies responding to its survey, 2 agencies had a succession planning program or process in place; 2 agencies were planning to have one in the coming year; and 4 agencies were planning one in the next 2 years. In 1999, a joint OPM and Senior Executive Association survey reported that more than 50 percent of all career members of the Senior Executive Service (SES) said that their agencies did not have a formal succession planning program for the SES, and almost 75 percent said that their agencies did not have such a program for managers. Of those who reported that their agencies did have succession planning programs for either executives or managers, 54 percent of the career senior executives said that they had not participated in the executive-level programs and 65 percent said they had not participated at the manager level. On the basis of this survey and anecdotal evidence, OPM officials told us in 2000 that most agencies likely did not have a formal, comprehensive succession plan. Further, we have reported that a lack of succession planning has contributed to two specific human capital challenges currently facing the federal government. The first challenge is the large percentage of career senior executives who will reach regular retirement eligibility over the next several years. In 2000, we reported that 71 percent of the SES members employed as of October 1998 would reach regular retirement eligibility by the end of fiscal year 2005. More recently, we estimated that more than half of the SES members in federal service as of October 2000 will have left the government by October 2007. We concluded that without careful planning, these separations pose the threat of an eventual loss in institutional knowledge, expertise, and leadership continuity. The second challenge to which a lack of succession planning has contributed is the limited diversity in agencies’ executive and managerial ranks. As the demographics of the public served by the federal government change, a diverse executive corps can provide agencies with an increasingly important organizational advantage that can help them to achieve results. We have reported that, as of 2000, minority men and women made up about 14 percent of the career SES. If current promotion and hiring trends continue, the proportions of minority men and women among senior executives will likely remain virtually unchanged over the next 4 years. The literature shows that public and private sector organizations use a range of approaches when planning for, and managing, succession-related challenges. These approaches span a continuum from the “replacement” approach, which focuses on identifying particular individuals as possible successors for specific top-ranking positions, to the “integrated” succession planning and management approach. Under the integrated approach, succession planning and management is a strategic, systematic effort that works to ensure a suitable supply of potential successors for a variety of leadership and other key positions. 
These two approaches essentially reflect a shift in emphasis of succession planning from a risk management tool, focused on the near-term, operational need to ensure backup people are identified in case a top position becomes vacant, to a strategic planning tool, which identifies and develops high-potential individuals with the aim of filling leadership and other key roles in the future. GAO, similar to other federal agencies, faces an array of succession planning challenges. The succession planning and management approach we are using to respond to our internal challenges is consistent with the practices we identified in other countries. To manage the succession of their executives and other key employees, agencies in Australia, Canada, New Zealand, and the United Kingdom are implementing succession planning and management practices that work to protect and enhance organizational capacity. Collectively, these agencies’ succession planning and management initiatives receive active support of top leadership; link to strategic planning; identify talent from multiple organizational levels, early in careers, or with critical skills; emphasize developmental assignments in addition to formal training; address specific human capital challenges, such as diversity, leadership capacity, and retention; and facilitate broader transformation efforts. Effective succession planning and management programs have the support and commitment of their organizations’ top leadership. Our past work has shown that the demonstrated commitment of top leaders is perhaps the single most important element of successful management. In other governments and agencies, to demonstrate its support of succession planning and management, top leadership (1) actively participates in the initiatives, (2) regularly uses these programs to develop, place, and promote individuals, and (3) ensures that these programs receive sufficient financial and staff resources, and are maintained over time. For example, each year, the Secretary of the Cabinet, Ontario Public Service’s (OPS) top civil servant, convenes and actively participates in an annual 2-day succession planning and management retreat with the heads of every government ministry. At this retreat, they discuss the anticipated leadership needs across the government as well as the individual status of about 200 high-potential executives who may be able to meet those needs over the next year or two. Similarly, in New Zealand, the State Services Commissioner—an official whose wide-ranging human capital responsibilities include the appointment and review of public service chief executives—developed, with the assistance of a group of six agency chief executives who met regularly over a period of 2 years, a new governmentwide senior leadership and management development initiative. This effort culminated in the July 2003 roll out of the Executive Leadership Programme and the creation of a new central Leadership Development Centre. The Royal Canadian Mounted Police’s (RCMP) senior executive committee regularly uses the agency’s succession planning and management programs when making decisions to develop, place, and promote its top 500-600 employees, both officers and civilians. RCMP’s executive committee, consisting of the agency’s chief executive, the chief human capital officer, and six other top officials, meets quarterly to discuss the organization’s succession needs and to make the specific decisions concerning individual staff necessary to address those needs. 
In 2001-2002, this process resulted in 72 promotions and 220 lateral transfers. Top leaders also demonstrate support by ensuring that their agency’s or government’s succession planning and management initiatives receive the funding and staff resources necessary to operate effectively and are maintained over time. Such commitment is critical since these initiatives can be expensive because of the emphasis they place on participant development. For example, a senior human capital manager told us that the Chief Executive of the Family Court of Australia (FCA) pledged to earmark funds when he established a multiyear succession planning and management program in 2002 despite predictions of possible budget cuts facing FCA. Although human capital training and development programs are sometimes among the first programs to be cut back during periods of retrenchment, FCA’s Chief Executive has repeatedly stated to both internal and external stakeholders that this will not happen. Similarly, at Statistics Canada—the Canadian federal government’s central statistics agency—the Chief Statistician of Canada has set aside a percentage, in this case over 3 percent, of the total agency budget for training and development, thus making resources available for the operation of the agency’s four leadership and management development programs. According to a human capital official, this strong support has enabled the level of funding to remain fairly consistent over the past 10 years. Finally, the government of New Zealand has committed NZ$19.6 million (about U.S.$11.2 million in July 2003) over 4 years, representing both central government and agency contributions, for the implementation of its new governmentwide senior leadership and management development strategy. Leading organizations use succession planning and management as a strategic planning tool that focuses on current and future needs and develops pools of high-potential staff in order to meet the organization’s mission over the long term. That is, succession planning and management is used to help the organization become what it needs to be, rather than simply to recreate the existing organization. We have previously reported on the importance of linking succession planning and management with the forward-looking process of strategic and program planning. In Canada, succession planning and management initiatives focus on long-term goals, are closely integrated with their strategic plans, and provide a broader perspective. For example, at Statistics Canada, committees composed of line and senior managers and human capital specialists consider the human capital required to achieve the agency’s strategic goals and objectives. During the 2001 strategic planning process, the agency’s planning committees received projections showing that a majority of the senior executives then in place would retire by 2010 and that the number of qualified assistant directors in the executive development pool was insufficient to replace them. In response, the agency increased the size of the pool and introduced a development program of training, rotation, and mentoring to expedite the development of those already in the pool. According to a Statistics Canada human capital official, these actions, linked with the agency’s strategic planning process, have helped to ensure that an adequate number of assistant directors will be sufficiently prepared to succeed departing senior executives. 
In Ontario, succession planning and management has been a required component of the government’s human capital planning framework since 1997. OPS requires that the head of each ministry develop a succession plan that (1) anticipates the ministry’s needs over the next couple of years, (2) establishes a process to identify a pool of high-potential senior managers, and (3) links the selection of possible successors to both ministry and governmentwide opportunities and business plans. These plans, which are updated annually at the deputy ministers retreat, form the basis for Ontario’s governmentwide succession planning and management process. While OPS has not conducted a formal evaluation of the impact of this process, a senior human capital official told us that succession planning and management has received a much greater level of attention from top leadership and now plays a critical role in OPS’ broader planning and staffing efforts. For RCMP, succession planning and management is an integral part of the agency’s multiyear human capital plan and directly supports its strategic needs, and it also uses this process to provide top leadership with an agencywide perspective. RCMP is responsible for a wide range of police functions on the federal, provincial, and local levels, such as illegal drug and border enforcement, international peacekeeping services, and road and highway safety. In addition, RCMP provides services in 10 provinces and three territories covering an area larger than the United States. Its succession planning and management system provides the RCMP Commissioner and his executive committee with an organizationwide picture of current and developing leadership capacity across the organization’s many functional and geographic lines. To achieve this, RCMP constructed a “succession room”—a dedicated room with a graphic representation of current and potential job positions for the organization’s top 500-600 employees covering its walls—where the Commissioner and his top executives meet at least four times a year to discuss succession planning and management for the entire organization. For each of RCMP’s executive and senior manager-level positions in headquarters and the regions, the incumbent and one or more potential successors are depicted on individual movable cards that display relevant background information (see fig. 1). An electronic database provides access to more detailed information for each incumbent and potential successors, including skills, training, and past job experience that the executive committee considers when deciding on assignments and transfers. In addition, high-potential individuals as well as employees currently on developmental assignments outside RCMP are displayed. According to a senior human capital official, because the succession room actually surrounds the RCMP’s top leadership with an accessible depiction of their complex and wide-ranging organization, it provides a powerful tool to help them take a broader, organizationwide approach to staffing and management decisions. Effective succession planning and management initiatives identify high- performing employees from multiple levels in the organization and still early in their careers. In addition, leading organizations use succession planning and management to identify and develop knowledge and skills that are critical in the workplace. RCMP has three separate development programs that identify and develop high-potential employees at several organizational levels. 
For example, beginning at entry level, the Full Potential Program reaches as far down as the front-line constable and identifies and develops individuals, both civilians and officers, who demonstrate the potential to take on a future management role. For more experienced staff, RCMP’s Officer Candidate Development Program identifies and prepares individuals for increased leadership and managerial responsibilities and to successfully compete for admission to the officer candidate pool. Finally, RCMP’s Senior Executive Development Process helps to identify successors for the organization’s senior executive corps by selecting and developing promising officers for potential promotion to the senior executive levels. The United Kingdom’s Fast Stream program targets high-potential individuals early in their civil service careers as well as recent college graduates. The program places participants in a series of jobs designed to provide experiences such as developing policy, supporting ministers, and managing people and projects—each of which is linked to strengthening specific competencies required for admission to the Senior Civil Service. According to a senior program official, program participants are typically promoted quickly, attaining mid-level management in an average of 3.5 years, and the Senior Civil Service in about 7 years after that. Other agencies use their succession planning and management initiatives to identify and develop successors for employees with critical knowledge and skills. For example, Transport Canada estimated that 69 percent of its safety and security regulatory employees, including inspectors, would be eligible for retirement by 2008. Faced with the urgent need to capture and pass on the inspectors’ expertise, judgment, and insights before they retire, the agency embarked on a major knowledge management initiative in 1999 as part of its succession planning and management activities. To identify the inspectors whose departure would most severely affect the agency’s ability to carry out its mandate, Transport Canada used criteria that assessed whether the inspectors (1) possessed highly specialized knowledge, skills, or expertise, (2) held one-of-a-kind positions, (3) were regarded as the “go-to” people in critical situations, and/or (4) held vital corporate memory. Next, these inspectors were asked to pass on their knowledge through mentoring, coaching, and on-the-job training. To assist this knowledge transfer effort, Transport Canada encouraged these inspectors to use human capital flexibilities including preretirement transitional leave, which allows employees to substantially reduce their workweek without reducing pension and benefits payments. The Treasury Board of Canada Secretariat, a federal central management agency, found that besides providing easy access to highly specialized knowledge, this initiative ensures a smooth transition of knowledge from incumbents to successors. Leading succession planning and management initiatives emphasize developmental or “stretch” assignments for high-potential employees in addition to formal training. These developmental assignments place staff in new roles or unfamiliar job environments in order to strengthen skills and competencies and broaden their experience. In the United States, training and development opportunities—including developmental assignments—must be offered fairly, consistent with merit system principles. 
However, according to a 1999 survey of career SES in the United States, 67 percent reported that they had never changed jobs by going to a different component within their agency or department. Moreover, 91 percent said that they never served in more than one department or agency during their entire executive careers. Agencies in other countries use developmental assignments, accompanied by more formal training components and other support mechanisms, to help ensure that individuals are capable of performing when promoted. Participants in RCMP’s Full Potential Program must complete at least two 6- to 12-month developmental assignments intended to enhance specific competencies identified in their personalized development plans. These assignments provide participants with the opportunity to learn new skills and apply existing skills in different situations and experience an increased level of authority, responsibility, and accountability. For example, a civilian from technical operations and a police officer were given a 1-year assignment to create balanced scorecards that are linked to RCMP’s goals. Another program assignment involved placing a line officer, previously in charge of a single RCMP unit, in the position of acting district commander responsible for the command of multiple units during a period of resource and financial constraint. To reinforce the learning that comes from the developmental assignments, participants attend a 6-week educational program provided by Canada’s Centre for Management and Development that covers the personal, interpersonal, managerial, and organizational dimensions of leadership. Each participant also benefits from the support and professional expertise of a senior-level mentor. Staff who complete this program will be required to continue their formal development as RCMP officer candidates. In Canada’s Accelerated Executive Development Program (AEXDP), developmental assignments form the cornerstone of efforts to prepare senior executives for top leadership roles in the public service. Canada created AEXDP in 1997 to strategically manage the development of senior executives who have the potential to become assistant deputy ministers within 2 to 6 years. AEXDP prepares individuals for these senior leadership positions through the support of coaches and mentors, formal learning events, and placements in a series of challenging developmental assignments. These stretch assignments help enhance executive competencies by having participants perform work in areas that are unfamiliar or challenging to them in any of a large number of agencies throughout the Canadian Public Service. For example, a participant with a background in policy could develop his or her managerial competencies through an assignment to manage a direct service delivery program in a different agency. Central to the benefit of such assignments is that they provide staff with the opportunity to practice new skills in a real-time setting. Further, each assignment lasts approximately 2 years, which allows time for participants to maximize their learning experience while providing agencies with sufficient opportunity to gain a real benefit from the participants’ contributions. AEXDP reinforces the learning provided by the developmental assignments with activities such as “action learning groups” where small groups of five or six program participants meet periodically to collectively reflect on and address actual work situations or challenges facing individual participants. 
A senior official involved in the program told us that the developmental placements help participants obtain in-depth experience in how other organizations make decisions and solve problems, while simultaneously developing a governmentwide network of contacts that they can call on for expertise and advice in the future. One challenge sometimes encountered with developmental assignments in general is that executives and managers resist letting their high-potential staff leave their current positions to move to another organization. Agencies in other countries have developed several approaches to respond to this challenge. For example, once individuals are accepted into Canada’s AEXDP, they are employees of, and paid by, the Public Service Commission, a central agency. Officials affiliated with AEXDP told us that not having to pay participants’ salaries makes executives more willing to allow talented staff to leave for developmental assignments and it fosters a governmentwide, rather than an agency-specific, culture among the AEXDP participants. In New Zealand, a senior official at the State Services Commission, the central agency responsible for ensuring that agencies develop public service leadership capability, told us that the Commission has recommended legislation that would require that agency chief executives work in partnership with the State Services Commissioner to find ways to release talented people for external developmental assignments. In addition, the government has appropriated NZ$600,000 (about U.S.$344,000 in July 2003) over the next 4 years to help the Commissioner assist agency chief executives who might like to release an individual for a developmental assignment but are inhibited from doing so because of financial constraints, including those associated with finding a replacement. Leading organizations stay alert to human capital challenges and respond accordingly. Government agencies around the world, including in the United States, are facing challenges in the demographic makeup and diversity of their senior executives. Agencies in other countries use succession planning and management to achieve a more diverse workforce, maintain their leadership capacity as their senior executives retire, and increase the retention of high-potential staff. Achieve a More Diverse Workforce. Leading organizations recognize that diversity can be an organizational strength that contributes to achieving results. Our work has shown that U.S. federal agencies will need to enhance their efforts to improve diversity as the SES turns over. In addition, OPM has identified an increase in workforce diversity, including in mission critical occupations and leadership roles, as one of its human capital management goals for implementing the President’s Management Agenda. Both the United Kingdom and Canada use succession planning and management systems to address the challenge of increasing the diversity of their senior executive corps. For example, the United Kingdom’s Cabinet Office created Pathways, a 2-year program that identifies and develops senior managers from ethnic minorities who have the potential to reach the Senior Civil Service within 3 to 5 years. This program is intended to achieve a governmentwide goal to double the representation of ethnic minorities in the Senior Civil Service from 1.6 percent in 1998 to 3.2 percent by 2005. 
Pathways provides executive coaching, skills training, and the chance for participants to demonstrate their potential and talent through a variety of developmental activities such as projects and short-term work placements. A Cabinet Office official told us that the program is actively marketed through a series of nationwide informational meetings held in locations with large ethnic minority populations. In addition, program information is sent to government agency chief executives, human capital directors, and the top 600 senior executives across the civil service, and executives are encouraged to supplement the self-nominating process by nominating potential candidates. This official noted that although the first Pathways class will not graduate until November 2003, 2 out of the 20 participants have already been promoted to the Senior Civil Service. Rather than a specific program, Canada uses AEXDP, an essential component of its succession planning and management process for senior executives, as a tool to help achieve a governmentwide diversity target. For example, the government has set a goal that by 2003, certain minorities will represent 20 percent of participants in all management development programs. After conducting a survey of minorities, who showed a considerable level of interest in the program, officials from AEXDP devoted 1 year’s recruitment efforts to identifying and selecting qualified minorities. The program reported that, in the three prior AEXDP classes, such minorities represented 4.5 percent of the total number of participants; however, by March 2002, AEXDP achieved the goal of 20 percent minority participation. In addition, an independent evaluation by an outside consulting firm found that the percentage of these minorities participating in AEXDP is more than three times the percentage in the general senior executive population. Maintain Leadership Capacity. Both at home and abroad, a large percentage of senior executives will be eligible to retire over the next several years. In the United States, for example, the federal government faces an estimated loss of more than half of the career SES by October 2007. Other countries that face the same demographic trend use succession planning and management to maintain leadership capacity in anticipation of the turnover of their senior executive corps due to expected retirements. Canada is using AEXDP to address impending retirements of assistant deputy ministers—one of the most senior executive-level positions in its civil service. As of February 2003, for example, 76 percent of this group are over 50, and approximately 75 percent are eligible to retire between now and 2008. A recent independent evaluation of AEXDP by an outside consulting firm found the program to be successful and concluded that AEXDP participants are promoted in greater numbers than, and at a significantly accelerated rate over, their nonprogram counterparts. Specifically, of the participants who joined the program at the entry level, 39 percent had been promoted one level and another 7 percent had been promoted two levels within 1 year compared to only 9 percent and 1 percent for nonparticipants during the same period. This evaluation further concluded that AEXDP is a “valuable source” of available senior executives and a “very important source of well-trained, future assistant deputy ministers.” Increase Retention of High-Potential Staff. 
Canada’s Office of the Auditor General (OAG) uses succession planning and management to provide an incentive for high-potential employees to stay with the organization and thus preserve future leadership capacity. Specifically, OAG identified increased retention rates of talented employees as one of the goals of the succession planning and management program it established in 2000. According to a senior human capital official, OAG provided high-potential employees with comprehensive developmental opportunities in order to raise the “exit price” that a competing employer would need to offer to lure a high-potential employee away. The official told us that an individual, who might otherwise have been willing to leave OAG for a salary increase of CN$5,000, might now require CN$10,000 or more, in consideration of the developmental opportunities offered by the agency. Over the program’s first 18 months, annualized turnover in OAG’s high-potential pool was 6.3 percent compared to 10.5 percent officewide. This official told us that the retention of members of this high-potential pool was key to OAG’s efforts to develop future leaders. Effective succession planning and management initiatives provide a potentially powerful tool for fostering broader governmentwide or agencywide transformation by selecting and developing leaders and managers who support and champion change. Our work has shown the critical importance of having top leaders and managers committed to, and personally involved in, implementing management reforms if those reforms are to succeed. Agencies in the United Kingdom and Australia promoted the implementation of broader transformation efforts by using their succession planning and management systems to support new ways of doing business. In 1999, the United Kingdom launched a wide-ranging reform program known as Modernising Government, which focused on improving the quality, coordination, and accessibility of the services government offered to its citizens. Beginning in 2000, the United Kingdom’s Cabinet Office started on a process that continues today of restructuring the content of its leadership and management development programs to reflect this new emphasis on service delivery. For example, the Top Management Programme supports senior executives in developing behaviors and skills for effective and responsive service delivery, and provides the opportunity to discuss and receive expert guidance in topics, tools, and issues associated with the delivery and reform agenda. These programs typically focus on specific areas that have traditionally not been emphasized for executives such as partnerships with the private sector and risk assessment and management. A senior Cabinet Office official responsible for executive development told us that mastering such skills is key to an executive’s ability to deliver the results intended in the government’s agenda. The United Kingdom’s Department of Health has embarked on a major reform effort involving a 10-year plan to modernize the National Health Service by, among other things, devolving power from the government to the local health services that perform well for their patients, and breaking down occupational boundaries to give staff greater flexibility to provide care. 
A National Health Service official told us that the service recognizes the key contribution that succession planning and management programs can have and, therefore, selects and places executives who will champion its reform and healthcare service delivery improvement efforts. For example, the Service’s National Primary Care Development Team created a leadership development program specifically tailored for clinicians with the expectation that they will, in turn, champion new clinical approaches and help manage the professional and organizational change taking place within the health service. At the FCA, preparing future leaders who could help the organization successfully adapt to recent changes in how it delivers services is one of the objectives of the agency’s Leadership, Excellence, Achievement, Progression program, established in 2002. Specifically, over the last few years FCA has placed an increased emphasis on the needs of external stakeholders. This new emphasis is reflected in the leadership capabilities FCA uses when selecting and developing program participants. For example, one of these capabilities, “nurturing internal and external relationships,” emphasizes the importance of taking all stakeholders into account when making decisions, in contrast to the FCA’s traditional internally focused culture. In addition, according to a senior human capital manager, individuals selected to participate in the FCA’s leadership development program are expected to function as “national drivers of change within the Court.” To this end, the program provides participants with a combination of developmental assignments and formal training opportunities that place an emphasis on areas such as project and people management, leadership, and effective change management. As governmental agencies around the world anticipate the need for leaders and other key employees with the necessary competencies to successfully meet the complex challenges of the 21st century, they are choosing succession planning and management initiatives that go beyond simply replacing individuals in order to recreate the existing organization, to initiatives that strategically position the organization for the future. Collectively, the experiences of agencies in Australia, Canada, New Zealand, and the United Kingdom demonstrate how governments are using succession planning and management initiatives that receive the active support of top leadership, link to strategic planning, identify talent throughout the organization, emphasize developmental assignments in addition to formal training, address specific human capital challenges, and facilitate broader transformation efforts. Taken together, these practices give agencies a potentially powerful set of tools with which to strategically manage their most important asset—their human capital. While there is no one right way for organizations to manage the succession of their leaders and other key employees, the experiences of agencies in these four countries provide insights into how other governments are adopting succession practices that protect and enhance organizational capacity. 
While governments’ and agencies’ initiatives reflect their individual organizational structures, cultures, and priorities, these practices can guide executive branch agencies in the United States as they develop their own succession planning and management initiatives in order to ensure that federal agencies have the human capital capacity necessary to achieve their organizational goals and effectively deliver results now and in the future. We provided drafts of the relevant sections of this report to cognizant officials from the central agency responsible for human capital issues, individual agencies, and the national audit office for each of the countries we reviewed as well as subject matter experts in the United States. They generally agreed with the contents of this report. We made technical clarifications where appropriate. Because we did not evaluate the policies or operations of any U.S. federal agency in this report, we did not seek comments from any U.S. agency. However, because of OPM’s role in providing guidance and assistance to federal agencies on succession planning and leadership development, we provided a draft of this report to the Director of OPM for her information. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report for 30 days from the date of this letter. At that time, we will provide copies of this report to other interested congressional committees, the directors of OPM and the Office of Management and Budget, and the foreign government officials contacted for this report. In addition, we will make copies available to others upon request and the report will be available at no charge on the GAO Web site at www.gao.gov. If you have any questions concerning this report, please contact me or Lisa Shames on (202) 512-6806 or at [email protected] and [email protected]. The major contributors to this report were Peter J. Del Toro and Rebecka L. Derr. To meet our objective to identify how agencies in other countries are adopting a more strategic approach to managing the succession of senior executives and others with critical skills, we selected Australia, Canada, New Zealand, the United Kingdom, and the Canadian Province of Ontario based on our earlier work where we examined their implementation of results-oriented management and human capital reforms. We reviewed the public management and human capital literature and spoke with subject matter experts to obtain additional context and analysis regarding succession planning and management. A key resource was the National Academy of Public Administration’s work on the topic, including their maturity model and subsequent revisions, which describe the major succession planning process elements of initiatives that take a strategic approach to building organizational capacity. We identified the examples illustrating the practices through the results of over 30 responses to a questionnaire sent to senior human capital officials at selected agencies. We analyzed written documentation including reports, procedures, guidance, and other materials concerning succession planning and management programs for agencies in these countries along with government-sponsored evaluations of these programs when available. We interviewed more than 50 government officials from Australia, Canada, New Zealand, and the United Kingdom by telephone, or in person during a visit to Ottawa, Canada. 
To obtain a variety of perspectives, we spoke to officials from the countries’ national audit offices, central management, and human capital agencies. The scope of our work did not include independent evaluation or verification of the effectiveness of the succession planning and management initiatives used in the four countries, including any performance results that agencies attributed to specific practices or aspects of their programs. We also did not attempt to assess the prevalence of the practices or challenges we cite either within or across countries. Therefore, countries other than those cited for a particular practice may, or may not, be engaged in the same practice. Because of the multiple jurisdictions covered in this report, we use the term “agency” generically to refer to entities of the central government including departments, ministries, and agencies, except when describing specific examples where we use the term appropriate to that case. We conducted our work from January through June 2003 in Washington, D.C., and Ottawa, Canada, in accordance with generally accepted government auditing standards. We provided drafts of the relevant sections of this report to officials from the central agencies responsible for human capital issues, individual agencies, and the national audit office for each of the countries we reviewed as well as subject matter experts in the United States. We also provided a draft of this report to the Director of OPM for her information.
Leading public organizations here and abroad recognize that a more strategic approach to human capital management is essential for change initiatives that are intended to transform their cultures. To that end, organizations are looking for ways to identify and develop the leaders, managers, and workforce necessary to face the array of challenges that will confront government in the 21st century. GAO conducted this study to identify how agencies in four countries--Australia, Canada, New Zealand, and the United Kingdom--are adopting a more strategic approach to managing the succession of senior executives and other public sector employees with critical skills. These agencies' experiences may provide insights to executive branch agencies as they undertake their own succession planning and management initiatives. GAO identified the examples described in this report through discussions with officials from central human capital agencies, national audit offices, and agencies in Australia, Canada, New Zealand, and the United Kingdom, and a screening survey sent to senior human capital officials at selected agencies. Leading organizations engage in broad, integrated succession planning and management efforts that focus on strengthening both current and future organizational capacity. As part of this approach, these organizations identify, develop, and select their human capital to ensure that successors are the right people, with the right skills, at the right time for leadership and other key positions. To this end, agencies in Australia, Canada, New Zealand, and the United Kingdom are implementing succession planning and management initiatives that are designed to protect and enhance organizational capacity.
Asbestos is the name given to a number of naturally occurring fibrous silicate minerals mined for their useful properties, such as thermal insulation, chemical and thermal stability, and high tensile strength. Asbestos has been used intentionally in the manufacture of products ranging from insulation and roofing materials to floor tiles and automotive brakes, and it may occur as a contaminant in a variety of mineral products, including vermiculite, talc, and gravel. However, asbestos fibers embedded in lung tissue over time may cause serious lung diseases, including pleural abnormalities, reduced lung function, asbestosis, lung cancer, and mesothelioma. Diseases caused by inhalation of asbestos fibers may not appear until years after exposure has occurred. Multiple federal agencies, including OSHA and EPA, have roles and responsibilities for regulating or otherwise addressing hazards associated with exposure to asbestos. In July 1989, EPA issued a final rule banning most asbestos-containing products. In October 1991, the United States Court of Appeals for the Fifth Circuit vacated and remanded EPA’s rule as it applied to existing asbestos-containing products, but left intact that portion banning products that were not being manufactured, produced, or imported when the rule was published on July 12, 1989, which includes all new uses of asbestos as defined in the ban. Specifically with regard to asbestos in automotive brakes and clutches, OSHA’s asbestos standard requires the use of controls and safe work practices to protect employees of automotive repair facilities. State and local governments with employees who perform brake and clutch work in states without OSHA-approved state plans must follow the identical regulations found under the EPA Asbestos Worker Protection Rule. EPA also provides information for home mechanics outside the automotive repair industry. Asbestos is a hazard for which agencies use both rules and informational communication products to protect the health of workers and the general public. Rules and nonrule communication products affect the public differently and serve different purposes. The Administrative Procedure Act (APA) defines a rule, in part, as "the whole or a part of an agency statement of general or particular applicability and future effect designed to implement, interpret, or prescribe law or policy or describing the organization, procedure, or practice requirements of an agency." The APA established the most long-standing and broadly applicable federal requirements for informal rulemaking, also known as notice and comment rulemaking. Among other things, the APA generally requires that agencies publish a notice of proposed rulemaking in the Federal Register. After giving interested persons an opportunity to comment on the proposed rule by providing "written data, views, or arguments," and after considering the public comments, the agency may then publish the final rule. OSHA rulemaking is conducted pursuant to separate—although analogous—provisions found in the Occupational Safety and Health Act of 1970, as amended. Rules affect regulated entities by creating binding legal obligations and are subject to judicial review by the courts if, for example, a party believes that an agency did not follow required rulemaking procedures. In contrast, communication products, such as guidance documents and other informational products for the public, are generally advisory in nature and informational in content. 
In fact, under the APA, general statements of policy and interpretive rules are statutorily exempt from notice and comment rulemaking. Agencies sometimes include disclaimers in guidance and other communication products to specifically note that the documents have no binding effect on regulated parties or the agencies themselves. OSHA and EPA officials noted that their offices produce a large number and wide variety of communication products that may include, but are not limited to, brochures and pamphlets, compliance guides, educational and training materials, guidance, and regulatory fact sheets. These products have different characteristics and purposes. For example, in most cases OSHA develops safety and health information bulletins (SHIBs) to address a new hazard or refocus the public’s attention on a recurring hazard in light of a recent incident, while informational fact sheets are limited to discussing OSHA standards and technical information, and "quick cards" are a simplified form of fact sheets that are targeted to a specific worker audience. Figure 1 illustrates some of the different types of products disseminated by OSHA and EPA. Despite the general distinctions between rules and communication products, determining whether an agency action is a rule is sometimes difficult and has been the subject of much litigation. Legal scholars and federal courts have at times struggled to determine whether an agency action is a rule that should be subject to the APA’s notice and comment requirements or is simply guidance or a policy statement, and therefore exempt from these requirements. Even though not legally binding, communication materials and guidance documents can have a significant impact, both because of agencies’ reliance on large volumes of such products and the fact that the products can prompt changes in the behavior of regulated parties and the general public. Concerns about the effects of agency guidance documents and how to ensure that agencies do not cross the line into rulemaking when drafting guidance are part of what prompted OMB to issue a bulletin on good guidance practices in January 2007. We have published prior work on both agencies’ actions to address hazards associated with asbestos and the rulemaking process in general. Several reports and testimonies that we released in 2007 contained findings and recommendations about opportunities to improve federal agencies’ communication of information about potential asbestos hazards. These products underscored the need to provide information to the public in a timely manner. For example, had additional or more complete information been provided, people might have made different decisions or taken different actions to protect themselves. In addition, Congress has often asked us to review aspects of federal rulemaking procedures and practices. However, with rare exceptions, such as a report on agencies’ small entity compliance guides, we have not previously been asked to review agencies’ general processes regarding communication products. Our prior reports and testimonies contained a variety of recommendations to improve various aspects of rulemaking procedures and practices. OSHA and EPA’s Office of Prevention, Pesticides and Toxic Substances (OPPTS) followed different paths from 2000 through 2007 to prepare their SHIB and brochure, respectively, on asbestos in automobile brakes and clutches. 
Among the primary differences, the two agencies initiated work on their asbestos products in response to different triggers, OSHA took longer than OPPTS to produce a final product, and OPPTS’ process incorporated more steps to obtain input from external parties. Each agency initiated the development of its product in response to external events that agency officials decided needed to be addressed through the publication of communication products. In total, OSHA and OPPTS took years to complete all the steps of their processes from initiation through dissemination of their products on asbestos in automotive brakes—approximately 5-½ years for OSHA and approximately 3-½ years for OPPTS. In doing so, both OSHA and OPPTS generally followed applicable agency policies and procedures for preparing communication products, as described below. The following is a description of the steps that OSHA and OPPTS took to initiate, develop, review, and disseminate the communication products on asbestos in automobile brake and clutch repairs. From 2000 through 2007, OSHA and OPPTS responded to the potential hazards associated with exposure to asbestos in brake and clutch repairs by developing and publishing their own communication products. (Fig. 2 illustrates one of the potential hazards.) However, each agency initiated its product in response to different triggering events. In December 2000, an OSHA regional office became aware of a media report that discussed the potential exposure to asbestos during brake and clutch repairs and its effect on automobile mechanics. According to the article, there were indications that mechanics were being exposed to asbestos levels potentially much higher than the level recommended in the standards. The article also raised concerns that many people were unaware that the EPA ban on asbestos products had been partially overturned and that asbestos-related products—including automobile brakes—were still being sold and used. Therefore, mechanics and automobile shop owners might not have been taking preventive measures to avoid exposure to asbestos fibers. OSHA regional officials suggested that the agency could either issue a hazard alert to automotive associations via the Internet as a means of disseminating information to the public or implement a local emphasis program (LEP) to address this issue. After being notified by its regional office, the OSHA National Office decided that the agency’s response would be to develop a hazard information bulletin. According to agency officials, LEPs are developed by the regional or area office and reviewed by the Directorate of Enforcement Programs; however, the regional office did not develop an LEP to address the issue of exposure to asbestos in automotive brakes. Officials decided that among the OSHA communication products available, the hazard information bulletin would alert the public in the most efficient manner. However, according to OSHA officials, the asbestos SHIB was unique because, in most cases, a SHIB is developed to address a new hazard or refocus the public’s attention on a recurring hazard in light of a recent incident. This was not the case for the asbestos bulletin because there had not been any recent incidents associated with asbestos in automobile brakes. OPPTS began to develop its communication product in 2003 in response to two events. The first was an EPA-initiated asbestos strategy project that recommended in its 2003 report that the agency revise its materials on asbestos. 
This project focused on how oversight, outreach, and education could help identify priorities and promote innovative approaches and best practices to address and manage costs and risks associated with asbestos. The other triggering event was a request for correction under the Information Quality Act (IQA) that asked EPA to withdraw its 1986 Gold Book. Among other things, the IQA allows "affected persons" to seek and obtain correction of information maintained and disseminated by agencies. In essence, the requester asserted that the Gold Book contained statements that were based on inadequate and inappropriate scientific information, and that the book itself was badly outdated given the scientific studies published since 1986. Once the agency received the request for correction, updating the Gold Book became a higher priority. OPPTS officials acknowledged that, although the information provided by the Gold Book was still accurate, the format and presentation of the information could be perceived as very technical and not "user-friendly." Therefore, officials decided to develop a product that would provide the necessary information and meet the needs of professional automobile mechanics and home mechanics in a simple and user-friendly format. They agreed that the best approach would be a brochure. However, according to OPPTS officials, the brochure was also a unique communication product. In most cases, OPPTS develops a communication product in response to a need that is identified by the agency itself or is brought to the attention of agency officials. According to agency officials, the existing Gold Book was being revised as a brochure to provide more relevant context and illustrations and to conform with communications practices developed in the years since its last publication in 1986 (including the use of plain language and Web site addresses for additional information). Revision of the asbestos brochure did not address a new need and did not provide new information that was not available elsewhere on EPA’s Web site. As illustrated in figure 3, the preparation of the OSHA and OPPTS communication products on asbestos in automobile brakes occurred over several years, but OSHA’s SHIB was in development longer than OPPTS’ brochure. From initiation to public dissemination of a final SHIB, OSHA’s process took approximately 5-½ years. OPPTS’ process took approximately 3-½ years. OSHA and OPPTS officials stated that one reason for the delay in developing and disseminating the asbestos communication products was that other priorities, such as responding to Hurricane Katrina in the fall of 2005, overtook the development of these communication products. However, officials from both agencies pointed out that previously released information about the dangers of exposure to asbestos, applicable protective standards, and protective measures remained available during the products’ development. OSHA began developing its SHIB in 2001 and posted the final version of the SHIB to the agency’s Web site in the summer of 2006. During these 5-½ years, OSHA officials drafted and reviewed the SHIB but, on two separate occasions, decided not to clear it (see fig. 5). In 2003, agency officials decided not to publish the SHIB because they were unsure of the extent to which asbestos in brake products was a problem. They were concerned about raising an unnecessary alarm about the possible exposure to asbestos in automobile brakes because they found information about the problem was limited and inconclusive. 
In 2004, OSHA received a draft of OPPTS’ brochure addressing the same issue. At that time, OSHA was still conducting research to determine the extent to which asbestos-containing products were still available in the market. For example, OSHA staff reviewed data from the U.S. Geological Survey that indicated that there were still friction products with asbestos available in the market, but it was difficult to determine exactly how many automobile brake and clutch products contained asbestos. However, officials determined that none of the sources were able to provide information on the extent to which asbestos-containing brakes and clutches were still available in the market. In 2005, OSHA again decided against issuing a revised SHIB because it repeated existing standards, and agency officials were still uncertain as to the extent to which automobile brakes and clutches containing asbestos were still available in the market. OSHA officials said that the development of the SHIB was given lower priority when the agency staff became involved with the response to Hurricane Katrina, including the production of compliance assistance materials related to this event. In 2006, OSHA officials received OPPTS’ draft brochure prior to its publication in the Federal Register and also became aware of another media report that raised concerns about the delays and the lack of activity at OSHA on the SHIB. OSHA officials consulted with an automobile manufacturer to determine if asbestos-containing brakes and clutches were still being used in the manufacture of new automobiles and the extent to which these parts were still available in the market. While the information was still inconclusive, at the end of July 2006, OSHA officials decided to issue the SHIB—which included a cross-reference to the EPA asbestos Web site—and posted it to the agency’s Web site. (See app. III for a copy of the final asbestos SHIB.) In 2003, OPPTS officials began to develop their brochure in response to the request for correction and the agency’s internal review of asbestos information products. OPPTS officials reviewed existing data to determine the prevalence of asbestos-containing automobile brakes and clutches in the market. OPPTS staff also consulted with officials at the U.S. Geological Survey, as well as with industry officials, to determine if asbestos-containing products were still available in the market. According to their contacts, there were still products with asbestos available in the market, but it was difficult to determine how many. OPPTS officials decided that, given the uncertainty about the prevalence, there was a need to inform the public about the potential hazard. By April 2004, after developing a draft of the brochure, OPPTS was ready to submit its draft for comments from other federal agencies. In July, OPPTS staff learned about the SHIB that OSHA had begun to draft in 2001 that addressed the same hazard. At various points during the rest of the development of the brochure, staff at OSHA and OPPTS worked together to ensure that the OPPTS brochure incorporated language from the OSHA SHIB and cross-referenced the OSHA SHIB and Web site. By the fall of 2004, OPPTS officials decided to defer to OSHA. They halted further development of the brochure. According to OSHA officials, in early 2005, EPA officials indicated to OSHA that they were no longer interested in pursuing a joint communications product on exposure to asbestos in automotive brakes. 
In 2006, after OSHA officials confirmed their decision not to publish the information bulletin, OPPTS officials moved forward with the development of their brochure because they were responding to a request for correction, and they finalized the draft by the summer. In August 2006, OPPTS published its draft brochure, and in March 2007 OPPTS published the final brochure. (See app. IV for a copy of the final brochure.) OPPTS also consulted and coordinated with officials at OMB. Because OPPTS was responding to a request for correction, OMB, in its oversight role under the IQA, monitored the agency’s response to the request. However, there was no formal requirement for interagency coordination between OMB and OPPTS in developing communication products. According to OSHA officials, OMB’s inquiries into the SHIB development were due to EPA reporting to OMB that it was not developing its own response to the request for correction because OSHA was developing a SHIB that addressed the same hazard. However, there was no requirement for OMB to monitor or review the development of the SHIB. While OSHA and OPPTS developed new products that addressed the same health hazard and varied in the amount of time needed for development and review, agency officials stated that neither product contained any information that was not already available to the public. OSHA’s information bulletin was based on the existing workplace asbestos standards, and EPA’s brochure was an update to the Gold Book (a 16-page booklet). According to agency officials, these were products that were intended not only to inform the public about the potential health hazard, but also to provide other sources of information within each agency in a more user-friendly format. However, the EPA brochure differs from the previous Gold Book in several ways. The Gold Book not only drew attention to what it considered to be very serious health consequences that resulted from exposure to asbestos during brake and clutch repair, but also stated that it was very difficult to make the repair of asbestos-containing parts safe. The new brochure lists the health consequences of exposure to asbestos, but also outlines best practices that, when followed, can reduce the potential for exposure to asbestos so that repair work on asbestos brakes can be conducted in a safe manner. While the brochure does not elaborate on the reasons for the discussion on best practices, OPPTS officials stated that the shorter brochure (a trifold pamphlet) was intended to be more user-friendly and not a compilation of all of the available information on the potential health consequences associated with asbestos exposure in a single publication. (Within the brochure, officials provided the link to the agency’s Web page that has more information on the health consequences associated with asbestos exposure.) Some of the respondents to OPPTS’ request for public comments questioned these differences in content. 
For example, one organization said that the draft failed to provide sufficient information concerning the risks of asbestos and appropriate risk practices. It recommended that the final brochure address in more detail the issue of latency in the effects of asbestos disease and that the language of the EPA document mirror the language of the OSHA SHIB, for example by stating that "Mechanics should assume that all brakes have asbestos-type shoes." Another respondent, while generally supportive of the changes made in the new brochure, stated that warnings of health effects associated with exposure to asbestos listed in the new document should be expanded and should include information about the danger of exposing family members by wearing work clothes home. OPPTS officials stated that the intent of the brochure was to update the Gold Book and convey the work practice information in a more user-friendly format, and that other information related to asbestos could be found on the agency’s Web site. Under both OSHA’s and OPPTS’ processes, review of a communication product always includes internal review and may also include external review. This external review may come from other federal agencies, industry groups, or the general public. In developing its brochure, OPPTS sought comments from external parties and the general public. In comparison, OSHA’s process had more limited participation from external parties. As part of its process, OPPTS consulted with other federal agencies in the development of the brochure. In addition, in order to determine the extent to which asbestos was still present in automobile brakes and clutches, OSHA and OPPTS staff consulted officials at the U.S. Geological Survey. OSHA also consulted with an automobile manufacturer, and OPPTS consulted with some automobile parts manufacturers and retailers, to determine if asbestos-containing products were still prevalent. According to their contacts, there were still products with asbestos available in the market, but it was difficult to determine their prevalence. Once OPPTS officials decided to develop their own brochure, they submitted the draft to OMB for review and coordination of the interagency review. Once the interagency review was completed, OPPTS published a notice of availability in the Federal Register and asked for public comments on the brochure. After agency officials revised the draft brochure in response to comments, they resubmitted the brochure to OMB for final review. OSHA officials did not generally include external parties in the development of OSHA’s information bulletin, and its collaboration with OPPTS staff was a result of outreach by OPPTS officials. For example, when officials were trying to determine the extent to which asbestos-laden brakes and clutches were still available, OSHA officials consulted the U.S. Geological Survey as well as an automobile manufacturer to determine if asbestos-containing brakes and clutches were still being used in the manufacture of new automobiles and the extent to which these parts were still available in the market. However, there was no evidence of attempts to obtain data from other parties, such as automobile parts distributors or retailers. OSHA also did not seek public comments on its draft bulletin. When OPPTS officials develop a communication product, they also develop a communication plan to ensure that the agency’s announcement and publication of the product reach the intended audience. 
In developing the brochure, OPPTS also developed a communication plan that included a projected issuance date, identified the audiences and other stakeholders, and specified the method(s) for dissemination. According to the communication plan for the asbestos brochure, OPPTS officials notified OSHA officials about the dissemination of the brochure prior to its publication in the Federal Register and posting on the EPA Web site. OPPTS officials also notified the media by announcing the brochure in the agency’s weekly media advisory, which also provided the Web link to the agency’s asbestos information page (www.epa.gov/asbestos). After submitting the brochure for final review by OMB, OPPTS officials published the brochure in the Federal Register and on the agency’s Web site. After posting the brochure, EPA removed the Gold Book from its Web site. OSHA guidance, unlike that for OPPTS, does not require the agency to develop in advance a communications strategy to ensure that communication products reach their intended audience. Once OSHA officials developed and reviewed their information bulletin, they posted it to their Web site (www.osha.gov/dts/shib/shib072606.html) and announced its issuance in their biweekly e-news memo, Quick Takes, an OSHA publication that is available to interested parties. This publication has a circulation of more than 50,000 subscribers. In addition, the release of the SHIB was listed on the opening page of the agency’s public Web site under the What’s New feature. However, according to OPPTS officials, OSHA officials did not notify them of OSHA’s decision to release the SHIB prior to its posting on the OSHA Web site. Both OSHA and OPPTS have standard policies, procedures, and practices that guide the initiation, development, review, and dissemination of their communication products, but agency officials noted that not all of the processes are documented. OSHA and OPPTS officials identified for us the main processes that their agencies use. In particular, the officials provided detailed descriptions of the processes applicable to preparing OSHA SHIBs and OPPTS communication materials—those that applied to the preparation of the agencies’ products on asbestos in automotive brakes and clutches. Because of the great variety of products that the agencies produce, there may be other processes applicable to a given communication product, but the processes identified are those that should most often apply to communication products. We reviewed these processes to determine how they addressed four generic phases: (1) initiation, (2) development, (3) review, and (4) dissemination of communication products. In the following sections, we identify the key OSHA and EPA/OPPTS processes and summarize the process steps the agencies said they typically follow to prepare OSHA SHIBs and OPPTS communication materials, such as brochures. OSHA primarily follows agency-specific instructions, rather than any Department of Labor (DOL)-wide procedures, when preparing compliance assistance products, although DOL’s Office of the Solicitor is included in the review and clearance process. Agency officials identified several specific OSHA instructions as most helpful in understanding their review and clearance process and aspects of OSHA’s compliance assistance material production. These include the OSHA directives on clearance of policy issuances, nonpolicy issuances, and SHIBs. 
In September 2007, OSHA issued an instruction on preparing Safety and Health Compliance Assistance Products that may now provide the most relevant process guidance for preparing such products. Compliance assistance products or materials covered by this instruction include, but are not limited to, SHIBs, quick cards, fact sheets, posters, and pamphlets. OSHA has made all of its directives publicly available on the agency’s Web site. However, agency officials said that not all details about their processes and standard practices appear in the written directives. Figure 4 illustrates the process that OSHA officials said they follow to prepare SHIBs. OSHA officials noted that the flowchart, although based on the SHIB directive, shows additional intricacies and review loops that can occur in the actual development and review of a SHIB (unwritten elements of the process). In general, the officials noted that everything goes through the clearance process, and there is little room for discretion, although they could deviate in an emergency situation if the Assistant Secretary of OSHA approves it. Although the flowchart and the following narrative summary focus on the process for SHIBs, we also include in the discussion below information to illustrate how other key OSHA directives are similar to or different from the SHIB process, with a particular emphasis on OSHA’s new directive for preparing compliance assistance products. OSHA officials noted that a variety of triggers can initiate a decision to update or create a product, including, for example, evidence of inadequacies of controls in the workplace or lessons learned from catastrophic or major incidents. OSHA’s directive on SHIBs specifically identifies seven circumstances when it might be appropriate to use a SHIB and eight types of safety and health issues that might be covered by a SHIB (although OSHA does not limit SHIBs to only these issues). For example, the SHIB directive states that it might be appropriate to disseminate information to or through OSHA field offices as a SHIB when OSHA becomes aware of new, unusual, noteworthy, previously unrecognized, or little known but significant occupational safety and health hazards. Officials said that most ideas for SHIBs come from the field, and most come out of OSHA’s inspections. Among the types of safety and health issues that a SHIB might address are common misunderstandings or misnomers involving worker safety and health issues (such as the misunderstanding that asbestos was banned). The development phase includes two main steps: management approval to proceed with the development of a product and the actual drafting of the product. Selecting the appropriate type of product is an important element in the initial approvals, because this helps to determine which agency policies and procedures should apply. OSHA officials said that, in general, the specific procedures and clearances that would be required are driven mostly by whether a product is a policy or nonpolicy issuance. The OSHA instruction on nonpolicy issuances includes a process flow checklist to determine whether a proposed issuance is appropriate for release as a nonpolicy issuance. One distinguishing feature of OSHA’s instructions is that for SHIBs in particular and compliance assistance products in general, the Assistant Secretary of OSHA must approve the proposed product before development of a draft can proceed. 
There are also earlier steps during which field, regional, and national office officials determine whether an issue merits national attention. These approvals serve as an important internal control. For example, according to agency officials, OSHA developed its instruction on compliance assistance products to (1) implement a process that ensures that the development of guidance is appropriately coordinated between the national office and field operations before resources are spent to develop the products and (2) establish a process by which guidance projects are approved by OSHA management before the expenditure of resources. Centralized top-management approval is a prominent feature of OSHA’s new instruction on compliance assistance products. Under that instruction, the initiating OSHA region, directorate, or office must obtain approval from the Assistant Secretary of OSHA before development of any such products. To do so, OSHA will filter the proposals through OSHA’s Compliance Assistance Coordinating Group (CACG). Proposals are to be entered into a database and, unless expedited review has been requested, CACG will coordinate requests for presentation to the Assistant Secretary on a quarterly basis. (OSHA’s directive indicates that the agency will use the “Compliance Assistance Products under Development” database to track not only the initiation and approval of proposed products, but also their development and clearance.) CACG will submit all requests to the Assistant Secretary and note the ones that the group recommends for development. OSHA’s instructions also prompt the initiator of the request to indicate the potential economic significance of the compliance assistance product. If an approved idea merits a national product, OSHA will begin development of a SHIB by going through the appropriate subject matter office to prepare a draft. OSHA’s Directorate of Science, Technology, and Medicine (DSTM) is responsible for developing and issuing most SHIBs, but other directorates may forward ideas for, or contribute to, a SHIB. During the development phase, national and field office staff may consult with each other. However, according to agency officials, OSHA typically does not survey or consult with outside parties for additional information when developing a SHIB. OSHA’s instructions require that draft SHIBs and other compliance assistance products include a disclaimer, noting, for example, that the product is not a standard or regulation and creates no new legal obligations. The review phase requires internal agency reviews and approvals and might also include interagency reviews, external reviews, or both. During the formal internal review process, a draft SHIB will go through the Directors of OSHA’s offices. Agency officials told us that, ultimately, Directors are responsible for approving the product and are instructed to “look at the totality of the document when signing it.” For draft SHIBs, internal reviews are to include coordination with the Office of Communications, the Office of the Solicitor, and other OSHA Directorates (such as the Directorate of Enforcement Programs and the Directorate of Standards and Guidance). Other internal stakeholders who may review a draft SHIB include officials in OSHA regional offices. In some cases, the SHIB process may include seeking a review of the draft SHIB by entities or individuals outside of OSHA, such as recognized experts, state or federal agencies, and professional organizations. 
The SHIB directive suggests that the Director of DSTM refer to current OSHA Alliances to ensure inclusion of appropriate stakeholders (for example, trade associations connected with a topic). However, OSHA officials pointed out that their process for SHIBs and other guidance documents is largely internal, unless there is some reason to go outside OSHA. Officials told us that some products, such as guidance on pandemic flu, go through interagency and OMB review. If OSHA consults external stakeholders, agency officials said that these stakeholders are usually involved after a draft has been prepared. However, in some circumstances, such as if a fatality helped to trigger development of a SHIB, OSHA could involve external stakeholders up front. Since issuing the SHIB on asbestos in brakes, OSHA revised its review process to provide that draft SHIBs be referred to the DOL Executive Secretariat, on a case-by-case basis, for concurrence before the Assistant Secretary of OSHA signs and disseminates the completed product. For SHIBs prepared by DSTM, part of the review package includes a table that contains all comments made during the review process and their disposition. The officials noted that there can be an iterative "loop" to this process, not reflected in the written SHIB directive. Specifically, if major issues surface during reviews, but the agency still wishes to proceed with a SHIB, officials would revise the document to address the concerns, and the draft would have to go through appropriate review steps again. Under the September 2007 OSHA instructions on compliance assistance products, the review and clearance processes are very similar to those outlined in the SHIB directive. However, unlike the SHIB directive, the instructions on compliance assistance products include some specific time frames for reviews. For example, offices generally are required to allow at least 20 working days for review of compliance assistance products. After incorporating appropriate changes, OSHA management determines whether a second review is needed. The instructions also note, however, that when a product is submitted for approval by the Assistant Secretary, clearances or concurrences from reviewers may not be more than 120 days old; otherwise, another review is needed. The directives on SHIBs and compliance assistance products encourage staff to coordinate with the Office of Communications regarding design and issuance of the product, including appropriate public notification. The directives identified by OSHA officials include provisions specifying responsibilities for posting, distributing, and maintaining the final products. The final products are posted on OSHA’s Web site, by product type. OSHA officials told us that OSHA does have processes to allow public comments on SHIBs or to provide public notification before the SHIBs are posted in final form; however, there is no requirement for either of these actions except in the case of significant guidance as defined by OMB. OSHA officials said that when approval is received per the review process, they simply post the signed SHIB. Sometimes there is a press release, but not always. The officials said there have been a few exceptions—not involving SHIBs—where the agency asked for comments on the Web before drafting guidance documents. 
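To make the two review-timing rules described above concrete, the following is a minimal sketch of how they could be checked for an individual product: allowing at least 20 working days for review, and treating a clearance that is more than 120 days old at the time of submission for approval as requiring another review. The sketch is hypothetical; the function names, the example dates, and the simplified working-day count (which ignores federal holidays) are assumptions for illustration and are not part of OSHA’s directives or tracking systems.

from datetime import date, timedelta

REVIEW_WINDOW_WORKING_DAYS = 20   # minimum review period in the instructions
CLEARANCE_MAX_AGE_DAYS = 120      # older clearances trigger another review

def add_working_days(start: date, working_days: int) -> date:
    # Count forward Monday through Friday; federal holidays are ignored here.
    current = start
    remaining = working_days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:
            remaining -= 1
    return current

def earliest_review_close(circulated_on: date) -> date:
    # Earliest date on which the minimum review period could end.
    return add_working_days(circulated_on, REVIEW_WINDOW_WORKING_DAYS)

def clearance_needs_rereview(cleared_on: date, submitted_on: date) -> bool:
    # True if a clearance is more than 120 days old when the product is
    # submitted for the Assistant Secretary's approval.
    return (submitted_on - cleared_on).days > CLEARANCE_MAX_AGE_DAYS

print(earliest_review_close(date(2007, 3, 1)))                        # late March 2007
print(clearance_needs_rereview(date(2007, 1, 5), date(2007, 5, 15)))  # True

The sketch shows only how the two stated thresholds interact; the actual review and clearance steps are governed by the directives themselves.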
OSHA’s directives establish no specific time frames or benchmarks for how long the entire process for producing a final product should take from initiation through development, review and dissemination, although the compliance assistance directive identifies time frames for a few review steps. There is not likely to be one single standard that would be appropriate for all products and in all circumstances, but the absence of time frames or benchmarks leaves OSHA’s processes with no mechanisms to prompt the timely release of communication products. In fact, some aspects of OSHA’s processes, such as the possibility of repeating development and review steps (as shown in the asbestos SHIB example) may contribute to delays. Timeliness is only one of a range of performance indicators that agencies should use to measure whether they are achieving their goals—others include the quantity, quality, cost, and outcome of agencies’ program activities—and this range is important because managers must balance competing goals. Nevertheless, it is an indicator that merits attention, especially once an agency has determined that there is a need to communicate information about how people can protect themselves from health and safety hazards. The very nature of such communication products indicates that timeliness is a necessary element for their effectiveness. OPPTS officials identified both OPPTS-specific and a number of EPA-wide internal processes that they use to prepare communication products. In general, agency officials told us that they do not follow the same procedures or conduct the same level of review for all products, although there may be a standard procedure and level of review for some categories of products. The detailed steps of the internal procedures may vary according to multiple factors, such as the specific type of product; the offices involved in the process; the significance of the document and the type of information it contains—for example, whether the information to be provided is new or an update; and the complexity and sensitivity of the subject. Agency officials noted that not all of their processes are documented in written guidance. EPA and OPPTS have different processes that apply to different types of nonrule products. At EPA, nonrule products include, among others, communication materials, scientific documents, analyses, reports, guidance, and compliance assistance materials. Among the main EPA-wide procedures or guidelines that may affect the procedural steps followed to prepare communication products are (1) EPA’s Action Development Process: Guidance for EPA Staff on Developing Quality Actions, (2) Policy and Implementation Guide for Communications Product Development and Approval (guidelines from EPA’s Office of Public Affairs, also referred to as the agency’s Product Review Process), (3) the agency’s procedures for notices to be published in the Federal Register, and (4) EPA’s information quality guidelines. The Product Review Process and the information quality guidelines are publicly available on EPA’s Web site, but the other guidelines are not. Other policies, procedures, and guidelines also might apply depending on the type of document that is being created. In addition, each EPA office and region, including OPPTS, has its own internal procedures and guidelines for the development and dissemination of the various products. OPPTS therefore follows applicable EPA-wide processes, as well as its own processes, when preparing its products. 
In particular, OPPTS officials identified a general five-phase process for preparing communication products based on the EPA Product Review Process: (1) initiation, (2) development, (3) review within EPA, (4) interagency/external review, and (5) dissemination. According to the officials, regardless of whether written procedures are developed for a particular category of products, the process that OPPTS follows is built around these core phases. Figure 5 illustrates the OPPTS process for preparing communication products. OPPTS officials characterized this process as one that they typically follow to prepare products, such as the brochure on preventing asbestos exposure among brake and clutch repair workers. However, it is not necessary for each communication product to follow each of these steps. During the initiation phase, OPPTS officials generally will identify the need for the product, identify the type of product to consider developing (for example, Web page, fact sheet, brochure, or Q&A document), consult with stakeholders (if officials determine there is a need for consultation), and obtain approval for the concept from the appropriate officials. Agency officials told us that the need to develop a new communication product, or to revise or update an existing product, might arise from several sources, including a legal mandate, identification by their staff responding to inquiries or implementing a program, or OPPTS management. In some cases, OPPTS may consult with another agency or affected stakeholders to identify the need for creating or revising a communication product. In the case of a product developed as a result of a regulatory program, the stakeholders may be representatives of the regulated community or other interested members of the public. Pursuant to EPA’s Product Review Process for Communications Materials, an important step in initiation is that OPPTS management agrees that the product needs to be developed or revised, and EPA’s Office of Public Affairs also reviews and approves the concept. Once a concept for a product is approved, the program officials consult with stakeholders (as appropriate), develop a time frame for product completion and a plan for disseminating the product (a communications plan), and determine whether a specific process applies to the development of a particular product. Once a development process is either identified or developed, the process must be approved by OPPTS management prior to developing the product. OPPTS officials told us that development time frames may vary for different product types depending on the specific needs identified and circumstances related to that product, and also depending on whether a process has been developed or needs to be developed. Unless specifically mandated by statute, or driven by other legal deadlines or an identified critical need, the time frame for developing the product is flexible and subject to change based on competing demands for the staff’s attention and other resources. The agency’s processes set no specific time frames for how long development of a product should take. In addition, OPPTS may decide to engage partner agencies, stakeholders, or both at different points during the development of the product, based primarily on the circumstances specific to the particular product. 
According to OPPTS officials, because the nature of communication products and the circumstances surrounding their development vary significantly, the process provides sufficient flexibility to ensure the development of a quality product. (EPA’s Action Development Process, the detailed guidance that the agency as a whole follows when developing its most significant actions—such as regulations, policy statements, risk assessments, and guidance documents—is similarly flexible. The required process steps for development vary according to the agency’s determinations about the priority of the action, from those that require the attention of the EPA Administrator to those that are delegated to one of EPA’s offices.) All OPPTS communication products must undergo internal review pursuant to the agency’s Product Review Process. Specifically, agency processes require that a communication product be approved by OPPTS management both at initiation and again at the final draft stage. Any issues or comments that might arise during the OPPTS management review must be addressed before the product undergoes broader EPA review. OPPTS also circulates draft communication products to those EPA offices that work on similar issues (including regional offices, depending on the issue) and central offices, such as EPA’s Office of General Counsel and Office of Public Affairs. Although not required, depending on the nature of the particular communication product, OPPTS may seek reviews by other agencies (such as those interested in programs or topics related to the product) and OMB before finalizing the product. In some cases, this may involve more than one agency. OPPTS has on occasion provided advance copies of certain high-profile products to OMB for an informal review. This usually has been in response to a request from OMB but also on occasion when OPPTS wanted OMB’s input. OPPTS might also seek comments from nonfederal parties. In general, communication products developed by OPPTS do not all undergo a formal notice and comment stage. OPPTS considers whether such a step is necessary as part of the planning process based on the nature and circumstances surrounding the particular product. Even when not required to do so, OPPTS may still seek public comments on the product in circumstances involving new types of communication products, stakeholder interest, external commitments for comment opportunities, potentially controversial issues, or for other reasons. OPPTS officials told us that when the agency uses notice and comment for a particular product, it opens a public docket. Public comments are submitted to that docket, and the public can access the product, other relevant information (if any), and any comments received. Once the product has undergone internal review, interagency/external review (if necessary), and final OPPTS management approval and signature, OPPTS disseminates the product to the general public. To obtain management approval for public release, OPPTS staff will prepare a final version of the product, along with any related materials, using EPA’s Product Review Process for all communications products. OPPTS typically develops a Communication Plan to ensure that its announcement and release of a particular product is tailored to reach the intended audience. EPA’s written guide on communication products includes guidance to agency staff about communications planning. 
In cases where the product is related to a well-established program area, OPPTS might maintain a list of interested parties who wish to be notified whenever OPPTS releases anything related to that established program area. EPA’s Product Review Process includes a mechanism for OPPTS to coordinate the development and review of the Communication Plan for the particular product with communication specialists across the agency. OPPTS also consults with EPA’s Office of Public Affairs on all releases. EPA officials told us that it would be difficult to compile a list of all disseminated communication products because of the great variety and number of products they produce. However, the agency maintains several lists of some of the available products for the public; for example, officials noted that the National Service Center for Environmental Publications is a central repository for EPA documents available for distribution, but this is not all-inclusive. An EPA official also pointed out that almost all communication products—whether from OPPTS or other EPA program offices—ultimately are reviewed by EPA’s Office of Public Affairs, which maintains an inventory of all public communication products that it has reviewed. As OPPTS officials said, it may also be less meaningful to attempt to catalog communication materials as the agency increasingly posts information to its Web site for quicker dissemination and wider accessibility and uses a variety of simpler, more focused formats to convey that information. They said that the differences between EPA’s 1986 Gold Book and the agency’s 2007 asbestos brochure illustrate this change. While the agency’s goal for the Gold Book was to compile all of the available information into a single publication, OPPTS now provides links to source documents, rather than repeating all the details. OPPTS officials noted that using a link or reference ensures that the public has up-to-date information and minimizes the need to correct or revise the brochure when the source information changes. Nevertheless, the ability to track and monitor the communication products that the agency is disseminating is important for internal control purposes—specifically to ensure that relevant, reliable, and timely information is available for management decision making and for external reporting purposes. As was the case with OSHA’s procedures, the EPA/OPPTS procedures establish no specific time frames or benchmarks for how long the entire process of producing communication products should take. Although OPPTS prepares schedules for individual products during the development phase, agency officials indicated that the time frames for the agency’s products are flexible and subject to change based upon competing demands for the staff’s attention and other resources, unless specifically mandated by statute, or driven by other legal deadlines or an identified critical need. While we recognize, as previously stated, that there is not likely to be a single standard appropriate for all products and in all circumstances, without some suggested time frames or benchmarks—such as limits on the length of intra- or interagency reviews—the EPA/OPPTS processes may not prompt the timely release of communication products. There are significant differences in the requirements that apply to rulemaking compared to the preparation of communication products, because rulemaking must comply with legal requirements that are not applicable to the preparation of communication products. 
Overall, there is less need for transparency and documentation regarding the preparation of communication products, which are not legally binding, compared to rules, which are. This is reflected in the requirements that apply to each. In January 2007, the administration amended the executive order on OMB’s oversight of draft rules and issued an OMB bulletin on good guidance practices. Among other provisions, these initiatives expanded coverage of some requirements for OMB review of significant draft rules to also include significant guidance documents and also required agencies to disclose more information about significant guidance. These changes bring the treatment of significant guidance closer to that for rules. However, the initiatives do not cover any other types of communication products, nor will they extend the transparency and documentation requirements applicable to OMB’s reviews of draft rules to its reviews of significant guidance. Although OSHA and OPPTS follow the same basic procedural steps—initiation, development, review, and dissemination—for producing communication products and rules, we identified at least five general areas in which the procedures governing rules and communication products can differ significantly. These differences are to be expected, given the legal effect and consequences of rules. The differences in each of these areas are rooted in legal requirements that apply to rulemaking. For communication products in general, there are no statutory requirements, and the specific processes used by the two agencies we reviewed also do not impose requirements in the five areas outlined below. Providing a justification – Under the APA, agencies are required to reference the legal authority under which a rule is proposed in a Federal Register notice and either the terms and substance of the proposed rule or a description of the subjects and issues involved. Under other statutes and executive orders—such as the Paperwork Reduction Act, Regulatory Flexibility Act, Unfunded Mandates Reform Act, Congressional Review Act, and Executive Order 12866 on regulatory planning and review—agencies may also be required to complete and publish analyses supporting the rule and the options selected by the agency. In some cases, statutes impose additional requirements on specific kinds of rules, such as requirements for public hearings. There are no such general statutory requirements for agencies to provide justification for their communication products, although, as discussed above, OSHA and OPPTS procedures typically involve a step where agency officials determine that there is a need for a proposed communication product. Interagency reviews – Under Executive Order 12866, OMB’s Office of Information and Regulatory Affairs (OIRA) reviews significant draft rules (for example, rules expected to have an annual effect of $100 million or more on the economy or that raise other coordination, budgetary, or policy issues) before they are published as proposed or final rules. The executive order generally requires OIRA to complete its reviews of significant rules within 90 days after an agency formally submits a draft regulation. In contrast, officials from OMB, OSHA, and EPA all noted that there generally are no formal procedures and requirements governing interagency and OMB reviews of communication products—with the exception of a recently implemented requirement for OMB reviews of significant guidance documents (discussed below). 
Agency officials confirmed that such reviews do take place informally for some communication products (although they are not necessarily required). Transparency of the process – In prior work, we identified transparency as a regulatory best practice, noted that transparency requirements help to make agencies’ processes more open (and promote participation), and quoted an Administrator of OIRA who pointed out that openness can help transform the public debate about regulation to one of substance rather than process. However, the transparency of the processes used to prepare communication products is much more limited than for rulemaking. During rulemaking, agencies typically maintain a rulemaking record, in the form of a public docket. Moreover, Executive Order 12866 requires OIRA and the agencies to document and disclose certain information about OIRA’s reviews of draft rules, including the substantive changes made to rules during OIRA’s review and at OIRA’s suggestion or recommendation, as well as any documents exchanged between the agencies and OIRA. OIRA is also required to disclose its substantive communications (including telephone calls, meetings, and incoming correspondence) with outside parties (persons not employed by the executive branch) regarding rules under review. However, as discussed in our 2003 report on this process, such requirements do not necessarily ensure transparency. OMB and agencies may engage in informal reviews that are not subject to any of the documentation and disclosure requirements that apply when a draft rule is undergoing formal review. Agencies’ preparation of communication products is not subject to the same requirements as rulemaking for documentation and disclosure of the processes and steps taken. Further, information related solely to the internal practices of an agency is exempt from public disclosure under the Freedom of Information Act. Therefore, while OSHA and OPPTS officials confirmed that they document the internal review processes followed to prepare communication products, such documentation is not subject to public disclosure. Also, as we noted earlier, the basic processes that the agencies use are not always documented in writing or made publicly available. Public comment – In rulemaking, agencies are required to give interested persons an opportunity to comment on proposed rules by providing "written data, views, or arguments," and also to consider the public comments before issuing a final rule. There generally are no such requirements for the agencies to provide the public an opportunity to comment on draft communication products. However, OSHA and OPPTS officials noted that they still may choose to seek public comments on certain products. For example, OPPTS officials said that they may provide external stakeholders an opportunity to comment on a communication product in circumstances involving new products, stakeholder interest, external commitments for comment opportunities, potentially controversial issues, or for other reasons. OSHA officials told us that they sometimes provide opportunities for public comment on communication products, although they have not done so for SHIBs. Monitoring development and review – The public is better able to track the status of the development and review of significant rulemaking. 
In response to provisions of Executive Order 12866, as amended, agencies make general information on rulemaking in process publicly available through mechanisms such as the Unified Agenda of Federal Regulatory and Deregulatory Actions, the Regulatory Plan, and OMB’s database on the status of draft rules submitted for review under the executive order. No similar mechanisms are available for publicly tracking communication products. OSHA and OPPTS have, or are creating, databases on the status of their communication products, but these are for internal management purposes, and are not available to the public. Per OSHA’s September 2007 directive on compliance assistance products, the agency will compile information on all proposed concepts in a centralized database, including information tracking the initiation, development, and reviews of those products. An OPPTS official told us that her agency uses several different databases to track the development and review of various products. She also noted that EPA has a publications catalog that is a master inventory of all numbered publications, but this is not all-inclusive. In January 2007, the President issued Executive Order 13422 to amend Executive Order 12866, and OMB released a related Final Bulletin for Agency Good Guidance Practices. The principal change made by the executive order amendments was to establish a process regarding interagency coordination and review of significant guidance documents prior to their issuance. The OMB bulletin established policies and procedures for the development, issuance, and use of significant guidance documents by agencies. In April 2007, the Administrator of OIRA issued a memorandum providing more specific instructions on the implementation of the OMB bulletin and Executive Order 13422. According to the OMB Director, the primary focus of Executive Order 13422 and the OMB bulletin is on improving the way the federal government does business with respect to guidance documents by increasing their quality, transparency, accountability, and coordination. OMB noted that well-designed guidance documents can serve many important or even critical functions in regulatory programs and, among other things, can channel the discretion of agency employees, increase efficiency, and enhance fairness. OMB cited various reasons for issuing the bulletin, noting, for example, that as the impact of guidance documents on the public has grown, so too has the need for good guidance practices. OMB also stated that guidance documents may not receive the benefit of careful consideration accorded under the procedures for development and review of rules, and OMB raised the concern that because it is procedurally easier to issue guidance documents, there may be an incentive for regulators to issue guidance documents in lieu of rules. OMB also cited potential benefits from enhancing the quality and transparency of agency guidance practices—including, when practical, using opportunities for public input to increase the quality of products and provide for greater public confidence in and acceptance of agency judgments. 
Among other things, the executive order, bulletin, and implementation memorandum require agencies to (1) develop clearance procedures for significant guidance documents; (2) provide OMB advance notice and an opportunity for consultation on significant guidance; (3) create and maintain a current list of all significant guidance on their Web sites and establish a means for the public to submit comments electronically on significant guidance, as well as requests for issuance, reconsideration, modification, or rescission of significant guidance documents; and (4) provide public notice and seek public comments on any economically significant guidance. These changes move the treatment of significant guidance closer to the requirements for rules. However, the changes only apply to significant guidance documents, not to any other types of communication products. The OMB bulletin outlines basic standards expected for significant guidance, including both approval procedures and standard elements of each significant guidance document. OSHA officials said that although their directive on compliance assistance products was not developed specifically to implement OMB’s bulletin and the revised executive order, its procedures appropriately reflect those requirements. EPA also revised its processes to reflect the new requirements for guidance documents. As required by the bulletin, both OSHA and EPA have listed the significant guidance documents subject to Executive Order 12866, as amended, and OMB’s bulletin on their Web sites. Under Executive Order 12866, as amended, and OIRA’s implementation memorandum, the requirements regarding notification to OIRA of a significant guidance document are similar, but not identical, to those applicable to OIRA’s reviews of significant rules. Agencies are required to provide advance notification to OIRA of a significant guidance document—as a general rule, no less than 10 days prior to intended dissemination. If the Administrator of OIRA determines that additional consultation is warranted, OIRA will review the guidance and coordinate review among appropriate executive branch departments and agencies. The Executive Order does not specify a time period for review of significant guidance documents, but according to the implementing memorandum, OIRA will complete its consultation on the guidance document within 30 days or, at that time, will advise the agency when consultation will be complete. However, the executive order amendments, OMB bulletin, and OIRA memorandum did not extend the transparency and documentation requirements applicable to OIRA’s review of draft rules (such as disclosing changes made at OIRA’s suggestion or documenting contacts with external parties) to its reviews of draft guidance. OSHA and OPPTS initiated work on their asbestos communication products for different reasons, but in both cases the agencies’ processes took years to complete. OSHA initiated work in 2000 in response to news reports that workers were not aware that asbestos had not been banned from automotive products and might still pose a potential hazard. OPPTS initiated work in 2003 in response to a request that the agency correct information in its Gold Book. From initiation to dissemination of final products, OSHA took approximately 5-½ years to publish its asbestos SHIB, while OPPTS took approximately 3-½ years to publish its final asbestos brochure. 
OSHA’s iterative review process contributed to delays in producing its SHIB, as OSHA officials cited the need to address uncertainties regarding the prevalence of asbestos in brake products. OPPTS officials also cited a number of explanations for the time required to produce their final brochure, including their external coordination and review activities and competing demands on resources. Officials from both agencies pointed out that, during the time that they worked on their asbestos products, information about the potential hazard and protective measures that could be taken remained available on the agencies’ Web sites. Ultimately, both OSHA and OPPTS determined that new asbestos communication products were needed, and the products were publicly released. Communication products are an important tool that OSHA and OPPTS (as well as other agencies) use to support and augment their regulatory activities. Communication products provide crucial information to regulated parties and the general public. Therefore, it is important that communication products be issued in a timely manner. Timeliness is but one of a range of performance indicators that agencies may use to measure whether they are achieving their goals, as managers balance competing priorities. But timeliness seems especially relevant once an agency has determined that there is a need to communicate information about how people can protect themselves from health and safety hazards to which they might be exposed. Having such information might lead people to make different decisions or take different actions to protect themselves than they would in the absence of such information. As the various OSHA and OPPTS processes for preparing communication products are currently designed, they contain few, if any, performance time frames or benchmarks to help ensure that the processes can produce final products in a timely fashion. Although there can be no single standard for how long the entire process should take, OSHA’s and OPPTS’ processes could benefit from general time frames or benchmarks to provide some impetus for moving products the agencies identified as needed through to dissemination. It should also be remembered that one of the reasons why agencies use alternatives to rulemaking—such as guidance or general communication products—is because these alternatives have the advantage of being less time consuming than rulemaking. It is also important that the processes the agencies use to prepare communication products be documented, transparent, and understood. Differences between the processes for preparing communication products and rules are to be expected, given the legal effect and consequences of rules. Preparation of communication products should not require the same level of justification, documentation, disclosure, and public comment as rulemaking. However, communication products are also important and can affect the actions of regulated parties and the public, so enhancing the general transparency and accountability of agencies’ processes could be beneficial. Knowing the many steps that agencies take when preparing communication products could not only help external parties contribute, when appropriate, to the preparation of the agencies’ products, but could also help those parties to understand why the process is sometimes lengthy. There are opportunities for both OSHA and OPPTS to enhance the transparency and accountability of the processes they use to prepare communication products. 
Those processes are not always easy to identify and understand, in part because of the great variety of the agencies' products and processes, but also because not all key elements of the processes the agencies may follow are documented. For example, with the exception of required OMB reviews of significant guidance documents, OMB, OSHA, and OPPTS officials noted that they have no formal written procedures governing interagency and/or OMB reviews of communication products. Nevertheless, agency officials confirmed that such reviews do occur (although they are not necessarily required). As another example, OSHA's process includes a potential review "loop" that OSHA officials said would not be apparent from reading their directive on SHIBs but can result in staff having to revise the product and repeat the review process. The transparency and accountability of the agencies' processes can also be limited if they are not publicly disclosed. For both OSHA and OPPTS, this would include disclosing the unwritten elements of their key processes mentioned above, once documented. In addition, EPA/OPPTS could do more to publicize existing written guidelines about key processes for preparing communication products. In contrast to OSHA, which has posted its key written process instructions, EPA/OPPTS has not always made its instructions publicly available. In particular, EPA's Action Development Process is not publicly available but applies to the agency's most significant actions, including rules. Although the agency's process guidelines focus primarily on internal policies and procedures, the final products generated by the agency may be of interest to and affect a variety of external parties, from Congress and other federal agencies to regulated parties and the general public. Greater disclosure about OSHA's and OPPTS' processes could be limited to providing more information about their general processes and would not require the agencies to reveal the actual details of internal policy deliberations for individual communication products. We also observed that EPA/OPPTS had difficulty identifying the communication products it has disseminated, even when we limited our request to a subset of product types. OPPTS officials told us that their agency increasingly relies on disseminating information through a variety of formats and links on its Web site. They believe this is a more effective approach to disseminating information to the public, but it may also make it more difficult for the agency to catalog what has been disseminated. We think that it is important, as a matter of basic internal controls, for an agency to maintain an inventory of the products it produces. We recognize that EPA and OPPTS already have a number of separate databases to track various types of communication products, but we remain concerned that some of the products and information disseminated might not be captured by existing databases. Adopting a mechanism such as the centralized database that OSHA is implementing might enhance OPPTS' ability to track, identify, and manage the inventory of its disseminated products. OSHA also could enhance its existing processes for preparing communication products. For example, the OPPTS processes, both in general and as illustrated during preparation of the asbestos brochure, prompt more and earlier consultation with external parties than seems to be the case with OSHA's SHIB process.
Although OSHA may seek external reviews in some cases, agency officials said that their processes for preparing SHIBs and other guidance documents are largely internal. We recognize that this, in part, reflects the different purposes and context for OSHA communication products, and that outreach to external parties comes at a cost to the agency in terms of both time and resources. However, consultation, outreach, and coordination also can provide important benefits, as OMB cited when explaining the need for agency good guidance practices. Just as the OMB guidance was intended to increase the quality and transparency of agency guidance practices—including, when practical, using opportunities for public input to increase the quality of products and provide for greater public confidence in and acceptance of agency judgments—so too may the preparation of other communication products benefit from appropriate outreach efforts. Similarly, OSHA might wish to enhance its existing process instructions regarding dissemination of communication products by considering elements of the EPA/OPPTS process. While OSHA's directives prompt agency officials to post final products to the agency's Web site and encourage OSHA staff to consult with the agency's Office of Communications about whether an announcement should be made, the directives provide more guidance on distribution of the final products within OSHA than on distribution to regulated workplaces and the public. The EPA/OPPTS processes prompt early and ongoing attention to effective notification about and dissemination of communication products, through tools such as a communications plan, and also provide more guidance to agency staff about communications planning. While we recognize that OSHA and EPA/OPPTS have taken some steps in each of the following areas, more could be done to improve the transparency, accountability, and timeliness of their processes for the initiation, development, review, and dissemination of communication products. Therefore, we are making the following six recommendations:
1. The Assistant Secretary for OSHA and the Administrator of EPA should ensure that their key general policies and procedures for preparing communication products include, as appropriate, time frames or benchmarks to help ensure that products that the agencies have determined are needed are developed, reviewed, and disseminated in a timely manner.
2. The Assistant Secretary for OSHA and the Administrator of EPA should take steps to ensure that their key general policies and procedures for preparing communication products are fully documented. To the extent feasible, this should include identifying the applicable policies and procedures governing OMB/interagency coordination and reviews of such products, as well as any other key processes that the agencies believe are important to understanding how they prepare their products.
3. The Assistant Secretary for OSHA and the Administrator of EPA should ensure that their agencies make public the key general policies and procedures for preparing communication products, including any updated in response to the previous recommendation.
4. The Administrator of EPA should consider adopting for OPPTS—and other EPA offices, as appropriate—a centralized database or databases to more completely account for the inventory of communication materials disseminated by the agency.
5. The Assistant Secretary for OSHA should augment existing OSHA directives on the preparation of SHIBs and other communication products to prompt OSHA staff to identify opportunities to solicit input from external parties, as practical, during the preparation of communication products.
6. The Assistant Secretary for OSHA should augment existing OSHA directives on the preparation of SHIBs and other communication products to provide more guidance to OSHA staff on developing a communications strategy during the product development process (for example, to identify who the agency needs to inform of the product, how notification and dissemination will be done, and who will be responsible for specific notification and dissemination tasks).
We provided a draft of this report to the Secretary of Labor, the Administrator of EPA, and the Director of OMB for their review and comment. In comments on the report, EPA generally agreed with the recommendations and concurred that a formal, well-understood process for coordination and review of communication materials is important to ensure quality information products (see app. V). With regard to the first recommendation, EPA also commented that a fair amount of flexibility and discretion is necessary for the development of communication materials. We agree and had already stated in our conclusions that there can be no single standard for how long the process should take and in our recommendation that agencies should incorporate time frames and benchmarks "as appropriate." EPA also noted that the time frame associated with its development of the brakes brochure was an anomaly and may not be a useful standard to compare to other cases. However, we based our recommendations on our review of EPA's (and OSHA's) general policies and procedures, not on our review of the specific products on asbestos in brakes. With regard to the second, third, and fourth recommendations, EPA identified steps that it already has taken, such as more fully documenting the agency's process guidance, making guidance available to the public on the agency's Web site, and having a centralized approach and database on the development of communication materials. We recognized in our conclusions and recommendations that EPA (and OSHA) were already taking steps that addressed some elements of our recommendations. However, as discussed in our conclusions, we believe that more could be done to enhance the transparency and accountability of the agencies' processes. EPA and OSHA also provided technical comments and suggestions that we incorporated as appropriate. OMB did not provide comments. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to the Secretary of Labor, the Administrator of EPA, the Director of OMB, and appropriate congressional committees. We will also provide copies to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.
Our objectives for this report were to (1) describe the processes that the Department of Labor's (DOL) Occupational Safety and Health Administration (OSHA) and the Environmental Protection Agency's (EPA) Office of Prevention, Pesticides, and Toxic Substances (OPPTS) used to initiate, develop, review, and disseminate updated communication products on exposure to asbestos in automotive brakes and clutches, identify how long the processes took, and assess the extent to which the agencies followed applicable policies and procedures; (2) describe the general policies and procedures that OSHA and OPPTS have for the initiation, development, review, and dissemination of communication products; and (3) compare the agencies' policies and procedures for communication products with those applicable to the initiation, development, review, and dissemination of rules, and describe what might be the effects of 2007 administration initiatives regarding guidance documents. To address the first objective, we obtained and analyzed information on the preparation of the OSHA and OPPTS products on asbestos in automotive brakes and clutches. We asked agency officials to provide a chronology and description of events that led to the initiation, development, review/clearance, and eventual dissemination of the products. We also asked the officials to provide any documentation, to the extent available, that would corroborate the events and processes as described in their respective chronologies. We compared the policies and procedures to identify the steps for (1) initiating the development of the asbestos communication products, (2) developing, or drafting, the asbestos communication products, (3) reviewing—internally, externally, or both—the asbestos communication products, and (4) disseminating the asbestos communication products. For some of the steps, the processes are informal and, therefore, difficult to document. Because the information was based, in large part, on testimonial evidence, we prepared statements of facts on our review at each agency and provided these statements to the agencies for vetting and confirmation of the information. The agency officials verified the information and provided comments and technical corrections that we incorporated, as appropriate. To address the second objective, we reviewed the agencies' applicable internal policies, procedures, and guidance documents governing the preparation of communication products. We reviewed relevant and available primary documents, such as the agencies' Information Quality guidelines, EPA-specific guidance on the development and review of communication products, and OSHA directives governing the development and review of guidance documents, in particular, Safety and Health Information Bulletins (SHIBs). Further, we interviewed agency officials at DOL/OSHA and EPA/OPPTS who are involved in the development and review of their respective agencies' communication products, as well as officials at the Office of Management and Budget (OMB) to obtain information on interagency reviews of communication products. We compared the policies and procedures to identify the steps for (1) initiating, (2) developing or drafting, (3) reviewing (internally and externally), and (4) disseminating a communication product. For some of the steps, the processes are informal and not documented.
Therefore, because some of the key information supporting our findings was based on testimonial evidence, we prepared statements of facts on our review at each agency and provided these statements to the agencies for vetting and confirmation of the information. The agency officials verified the information and provided comments and technical corrections that we incorporated, as appropriate. To address the third objective, we again reviewed applicable documents and interviewed officials at the three agencies to identify information on the similarities and differences between rulemaking and the processes used to develop and review communication products. We also solicited the views of agency officials regarding effects they anticipated from implementation of the amended executive order on regulatory review and planning and the OMB good guidance bulletin—both of which were promulgated in final form during the course of our review. Our review was limited to applicable processes of OSHA and OPPTS, the two agencies responsible for preparing communication products on asbestos in automotive brakes, although some of the applicable processes were DOL- or EPA-wide. Our scope and methodology for the first two objectives focused on the broad category of communication products at these two agencies, but did not encompass all nonrule regulatory or technical products that they produced. To illustrate the application of the agencies' processes for preparing such products, we performed a detailed examination of their asbestos communication products. While we initially had expected to compare the processes used in developing the two asbestos products with the processes used to prepare a sample of like products, we concluded that it would not be possible to identify a representative sample of issued products in order to do a comparative analysis that would be meaningful and generalizable to a larger population of products. Agency officials told us that the timeline and process undertaken for one product could be quite different from the timelines for other products of that type. Although our observations on the implementation of these processes are limited to OSHA and OPPTS and not generalizable to other parts of DOL and EPA, our review did encompass examination of DOL- and EPA-wide policies and procedures for communication products. We conducted our review in Washington, D.C., from September 2006 through October 2007 in accordance with generally accepted government auditing standards. The descriptions in this appendix of the events in the preparation of the OSHA and OPPTS communication products on asbestos in automotive brakes were provided by officials at OSHA and OPPTS. For some of these events, the agency officials were able to provide documentary evidence for corroboration. However, because agencies are not required to document their processes, much of this chronology is based on testimonial evidence obtained from agency officials during the course of our review. The OSHA Seattle Regional Office reported on a media account revealing that a large number of employees and employers in the automotive industry mistakenly believed that the 1989 ban on asbestos in automotive products was still in effect. While the regional office suggested two options for informing the public—a local emphasis program (LEP) or an e-mail alert to industry groups—the OSHA national office decided to develop a Hazard Information Bulletin on asbestos in automotive products.
The national office decided that the LEP, the e-mail alert, or both would inform only a select segment of the population, and it wanted to inform the general public about this hazard. OSHA's national office therefore decided to develop the bulletin. The Global Environment and Technology Foundation issued its Asbestos Strategy Report, commissioned by EPA to develop approaches for asbestos oversight, outreach, and education. Among the foundation's recommendations was the update of certain existing asbestos guidance—specifically on asbestos in buildings. Internal discussion took place within OSHA on the advisability of publishing the bulletin. OSHA officials decided that there were still unanswered questions about the prevalence of asbestos-containing automotive brakes and clutches that needed to be addressed before disseminating the bulletin. EPA received a challenge under the Information Quality Act to its Guidance for Preventing Asbestos Disease Among Auto Mechanics, commonly referred to as the Gold Book. As a result of the recommendations from the Global Environment and Technology Foundation and the request for correction, EPA officials developed a "top 6 high priority" list of documents to update. The first document listed was the EPA Gold Book. OPPTS officials developed initial drafts of a brochure and shared this information at the staff level with other agencies, including OSHA. OSHA's Salt Lake City Technical Center received OPPTS' draft brochure for review. On a parallel track, OSHA officials recirculated the draft SHIB for further agency review. Review of OPPTS' document alerted OSHA officials to the lack of information and evidence concerning the extent of asbestos use in brakes in the United States. OSHA and OPPTS officials agreed that this needed to be addressed and supported the issuance of a joint product. OPPTS and OSHA staff began collaboration to develop a joint product after OPPTS officials became aware that OSHA was also considering development of new materials regarding asbestos in brakes. OSHA suggested a number of technical corrections to OPPTS' version of the brochure with the understanding that those corrections needed to be made before OSHA could cosponsor the brochure. OPPTS placed a hold on the development of the asbestos brochure when the agency learned that OSHA was developing a bulletin that would address the same concerns. OPPTS informed OSHA that it no longer wanted to be part of a joint OSHA/OPPTS information bulletin. Based on concerns about the use and prevalence of asbestos in brake friction products, OSHA contacted the U.S. Geological Survey to determine the exact amount of asbestos imported for use in the United States. OSHA obtained a study that supported the dissemination of the information bulletin on asbestos exposure in brake and clutch repairs. OSHA also obtained a study that cast doubt on the ability of asbestos brake dust to cause cancer. This study was referred to OSHA's Salt Lake City Technical Center for its assessment on whether the bulletin should be published. OMB contacted OSHA inquiring about the status of the information bulletin on asbestos exposure in automotive brake and clutch repairs. OSHA's understanding was that OMB was following up on discussions with OPPTS on the need to revise the Gold Book on asbestos exposure in brake and clutch repair, since OPPTS was responding to a request for correction and OMB monitors agencies' responses to these requests.
OMB officials were concerned since OPPTS officials had indicated that they would not be revising the Gold Book because OSHA was publishing an information bulletin. OSHA staff participated in a conference call with OMB staff. OMB was interested in the status of the information bulletin and its relationship to OPPTS' Gold Book revision. OSHA staff explained to OMB the background on the original OPPTS/OSHA informal agreement to issue a joint document and OPPTS' subsequent decision not to proceed. OPPTS officials had indicated that although their Gold Book was the subject of a request for correction, they would rather wait for OSHA to issue its bulletin, which would include a statement about potential exposure to home mechanics. OSHA officials explained that the bulletin was primarily a reiteration of the OSHA asbestos standards and that there were still issues under review. The agency had not yet decided whether to issue the bulletin. OSHA officials decided there was no need to issue the bulletin since the document, in essence, reiterated the mandatory requirements found in Appendix F of the asbestos standards. Subsequent to this decision, OSHA's Salt Lake City Technical Center recommended to the agency that the bulletin should be issued. According to agency officials, the decision not to issue the SHIB was not reexamined in response to this recommendation because of the higher-priority demands related to the agency's response to Hurricane Katrina. OPPTS officials learned that OSHA officials had decided not to proceed with the dissemination of the information bulletin. However, because OPPTS was committed to issuing an update of the Gold Book in response to the request for correction, it proceeded with the development and review of its brochure. A newspaper article raised concerns about the length of time and the lack of activity by OSHA and EPA in disseminating their communication products on asbestos exposure in automotive brake and clutch repairs. Once reviewed and approved within OPPTS, the draft brochure was also reviewed by management in other EPA offices and by other agencies with primary roles in the area of asbestos—OSHA, the National Institute for Occupational Safety and Health, and the Agency for Toxic Substances and Disease Registry. Additionally, although not formally required, OMB participated in a review of the draft brochure. OMB coordinated the interagency review and provided OPPTS officials with comments on their draft brochure from other federal agencies. OSHA officials reconsidered their prior decision not to publish the SHIB and began to recirculate their draft bulletin for review and final preparations for dissemination. OSHA's Assistant Secretary approved the dissemination of the asbestos SHIB on the agency's Web site. OMB officials informed OPPTS that OMB had completed its review of the revised draft brochure and that all the agencies were satisfied with the revisions. OPPTS proceeded to publish the draft brochure in the Federal Register for public comment. After the bulletin was posted on OSHA's Web site, a former OSHA Assistant Secretary contacted the agency and suggested that the agency might want to reconsider publication of the SHIB based on whether brake dust is a "substantial source for exposure" to asbestos. The agency reviewed the existing data and found that there was a need to warn workers in the brake and clutch repair industry about the potential risk of exposure, albeit at much lower levels.
Agency staff drafted a revision to the SHIB to reflect this finding and to acknowledge the fact that there is a scientific debate on the relationship between brake dust and mesothelioma. However, OSHA officials decided against revision of the SHIB. OPPTS submitted its final draft of the brochure to OMB (because the brochure was a response to a 2003 request for correction). OPPTS published the final brochure. OPPTS released the final brochure in the Federal Register, and posted the document on the agency's Web site. In addition to the contact named above, key contributors to this report were Tim Bober, Assistant Director; Andrea Levine; Shawn Mongin; Joseph Santiago; John Sauter; and Crystal Williams. In addition, Tom Beall, Robert Cramer, Donna Miller, Michael Volpe, and Greg Wilmoth provided key assistance.
Agencies address their missions not only through regulations but also by issuing communication products--such as guidance, fact sheets, and brochures--that can provide crucial information to regulated parties and the public. Since 2000, the Occupational Safety and Health Administration (OSHA) and the Environmental Protection Agency's (EPA) Office of Prevention, Pesticides, and Toxic Substances (OPPTS) have developed new versions of such products to address the potential hazards of exposure to asbestos in automotive brakes. GAO was asked to describe (1) how OSHA and OPPTS prepared their products on asbestos in automotive brakes, (2) the general processes that OSHA and OPPTS use to prepare their communication products, and (3) how these processes compare to those for rulemaking and how recent administration initiatives might affect them. GAO reviewed and analyzed available documents and interviewed officials at OSHA, OPPTS, and the Office of Management and Budget (OMB). OSHA and OPPTS followed different paths from 2000 through 2007 to update communication products on asbestos in automotive brakes and clutches. OSHA took longer than OPPTS to produce a final product, and OPPTS' process incorporated more steps to obtain input from external parties. Twice before final posting, OSHA officials had decided not to release drafts that had been prepared, because they needed more data to understand how pervasive asbestos was in brake products and wanted to avoid raising unnecessary alarm. For a time, staff from OSHA and OPPTS considered releasing a joint product. Overall, OSHA and OPPTS took years to complete all the process steps to produce their products on asbestos in automotive brakes and clutches--approximately 5-1/2 years for OSHA and approximately 3-1/2 years for OPPTS. In preparing their respective communication products, both OSHA and OPPTS generally followed applicable agency policies and procedures. Both OSHA and OPPTS have standard processes that guide the initiation, development, review, and dissemination of their communication products. OSHA publicly posts all of its applicable instructions, while OPPTS publicly posts only some. Under both agencies' processes, communication products may be initiated by various sources, are developed only after management approval, and undergo intraagency coordination and management-level clearance. But interagency (including OMB) or other external reviews are not always required. OSHA's policies for disseminating products focus on responsibilities for posting and maintaining final products on the agency's Web site. Beginning at the development phase, OPPTS policies call for the formulation of a communication plan intended to ensure that the dissemination of a particular product is tailored to reach the intended audience. The agencies' processes establish no specific time frames or benchmarks for how long the preparation of a product should take. GAO identified at least five areas where the agencies' processes for preparing communication products and those for rules have significant differences. In contrast to the agencies' processes for communication products, rulemaking imposes requirements on agencies regarding (1) justification of the rule, (2) interagency reviews of drafts, (3) transparency of the processes used, (4) opportunities for public comment, and (5) the public's ability to monitor development and review.
These differences are to be expected, given the binding effect of rules, and are each rooted in legal requirements that apply to rulemaking, but not to the preparation of communication products. In January 2007, the administration imposed new requirements for agencies' significant guidance documents, for example, requiring agencies to provide OMB advance notice and an opportunity to consult on significant guidance before issuance. These changes move the treatment of significant guidance closer to the requirements for rules but do not cover any other types of communication products.
Military missions differ from nonmilitary missions on a variety of factors, as shown in table 1. Military missions involve coordinated military actions, such as campaigns, engagements, or strikes, by one or more of the services' combat forces. Operations Desert Storm in 1991 and Iraqi Freedom in 2003 are examples of overseas military missions, and Operation Noble Eagle is a domestic military mission started on September 11, 2001, and continuing today. In the latter mission, the President directed the Commander, North American Aerospace Defense Command, to order combat air patrols to identify and intercept suspect aircraft operating in the United States. Since these are military missions, DOD is the lead federal agency and is prepared to apply its combat power if needed. Requests for nonmilitary missions are evaluated against criteria contained in DOD's directive, Military Assistance to Civil Authorities. These requests generally seek DOD support to help alleviate suffering, recover from disasters, or assist indirectly with law enforcement. DOD's directive specifies that requests for nonmilitary support be evaluated against the following criteria: legality (compliance with laws), lethality (potential use of lethal force by or against DOD forces), risk (safety of DOD forces), cost (who pays, impact on the DOD budget), appropriateness (whether the requested mission is in the interest of DOD to conduct), and readiness (impact on DOD's ability to perform its primary mission). According to DOD, in fiscal years 2001 and 2002, it supported over 230 nonmilitary missions in a variety of settings, such as assisting in fighting wildfires, recovering from tropical storms, providing post-September 11, 2001, assistance to New York City and Virginia, and providing support for the presidential inauguration, among other purposes. According to DOD, during this same period, the Department rejected a handful of missions based on the above criteria. The 1878 Posse Comitatus Act prohibits the use of the Army and Air Force "to execute the laws" of the United States except where authorized by the Constitution or Acts of Congress. Federal courts have interpreted "to execute the laws" to mean that the Posse Comitatus Act prohibits the use of federal military troops in an active role of direct civilian law enforcement. Direct involvement in law enforcement includes search, seizure, and arrest. The act does not apply to military operations at home or abroad. Further, it does not apply to National Guard personnel when under the direct command of states' governors. Congress has expressly authorized the use of the military in certain situations. For example, DOD can use its personnel and equipment to:
assist with drug interdiction and other law enforcement functions (10 U.S.C. §§371-378 (excluding §375));
protect civil rights or property, or suppress insurrection (the Civil Disturbance Statutes; 10 U.S.C. §§331-334);
assist the U.S. Secret Service (18 U.S.C. §3056 Notes);
protect nuclear materials and assist with solving crimes involving nuclear materials (18 U.S.C. §831);
assist with terrorist incidents involving weapons of mass destruction (10 U.S.C. §382); and
assist with the execution of quarantine and certain health laws (42 U.S.C. §§97-98).
The President identified as a major homeland security initiative a review of the legal authority for military assistance in domestic security, which would include the Posse Comitatus Act.
The President maintained that the "threat of catastrophic terrorism requires a thorough review of the laws permitting the military to act within the United States in order to determine whether domestic preparedness and response efforts would benefit from greater involvement of military personnel and, if so, how." In addition to this review, the Congress directed DOD to review and report on the legal implications of members of the Armed Forces operating on United States territory and the potential legal impediments affecting DOD's role in supporting homeland security. In March 2003, the Commander of U.S. Northern Command stated, "We believe the [Posse Comitatus] Act, as amended, provides the authority we need to do our job, and no modification is needed at this time." At the time of our review, neither the President's nor the congressionally directed legal reviews had been completed. It is too early to assess the adequacy of DOD's new management organizations or its plans, although forces may not be fully tailored to the current domestic missions. DOD has established new organizations for domestic missions at the policy and operational levels and written a new campaign plan for the defense of the United States. At the same time, DOD has used existing forces for these missions since September 11, 2001. However, at the time of our review, the organizations were not yet fully operational; plans had been developed before issuance of a counterterrorism threat assessment and before DOD officials had reached agreement on the nature of the threat; and force capabilities were not well matched to their domestic missions, potentially leading to an erosion of military readiness. Two new organizations—the Office of the Assistant Secretary of Defense for Homeland Defense and U.S. Northern Command—together provide long-term policy direction, planning, and execution capability but are not yet fully operational, because they have only recently been established and are not fully staffed. Because these organizations had only recently been activated and were still being staffed and structured, we did not evaluate the adequacy of these organizations for their missions. The Senate confirmed the President's nominee to be Assistant Secretary of Defense for Homeland Defense in February 2003, but this office was not fully operational at the time of our review, with approximately one-third of the staff positions filled. The new Assistant Secretary is to provide overall supervision for domestic missions. U.S. Northern Command was established by the President in an April 2002 revision to the Unified Command Plan and was activated in October 2002. However, the command is not scheduled to be fully operational until October 2003. As of last week, only about 46 percent of the command's positions had been filled. During our trip to U.S. Northern Command, we found that a key challenge for the command is the need to conduct its ongoing missions while filling the command's positions. The activation of the command marks the first time that there has been a unity of command for military activities within the continental United States. Prior to U.S. Northern Command's activation, U.S. Joint Forces Command was responsible for military actions to defend U.S. territory from land- and sea-based threats. The North American Aerospace Defense Command defended the United States from airborne threats (and still does). The Commander of U.S.
Northern Command is also the Commander of the North American Aerospace Defense Command, providing the new unity of command for the three missions. DOD's planning process requires the Department and the services to staff, train, and equip forces for their military missions as outlined in campaign plans and deliberate plans developed by the combatant commanders, including the Commander of U.S. Northern Command. U.S. Northern Command's campaign plan was completed in October 2002 and is classified. However, I can note that although it may reflect current intelligence from DOD and other intelligence community sources, it was completed before the January 2003 issuance of the Federal Bureau of Investigation's counterterrorism threat assessment, so it may not take all threats into account. Moreover, an official in the Office of the Secretary of Defense acknowledged that DOD officials continue to debate the nature of the threat to U.S. territory; thus, DOD itself has not yet reached internal agreement on the nature of the threat facing the United States. Based on our review, DOD's forces are not tailored for some of the missions that they have been performing since September 11, 2001, and the result could be eventual erosion of military readiness. To respond to the terrorist attacks of that day, the President identified the need to protect U.S. cities from air attack, and in response, DOD deployed 338 Air Force and about 20 Navy aircraft within 24 hours of the attacks. Air Force fighter aircraft flew continuously from September 11, 2001, through March 2002, and intermittently thereafter. These combat patrols continue today. While these forces may obtain some training benefit from actually conducting the mission, the benefit is limited by the narrow scope of maneuvers performed during these missions. Specifically, Air Force and Air National Guard fighter units performing domestic combat air patrols are inhibited from executing the full range of difficult tactical maneuvers with the frequency that the Air Force requires to prepare for their combat missions. In one Air National Guard wing that we reviewed, pilots on average could not meet their training requirements in 9 out of 13 months between September 2001 and September 2002. Consequently, such units may need to resume training after domestic combat air patrols end or they are reassigned, to ensure their readiness for combat operations, their primary missions. Similarly, DOD identified the need to enhance installation security, and it subsequently deployed active, reserve, and National Guard military police units for the mission. However, these units were designed for a different mission and received limited training benefit from the domestic mission. For example, officials at a military police internment and resettlement battalion told us that while the battalion can provide installation security, its primary mission is to operate enemy prisoner of war camps. Instead, for nearly a year, the battalion carried out a domestic installation security mission, which, while important, prevented the battalion from completing required training for its primary overseas combat mission. As a result, the battalion's military readiness may erode, which could mean accepting an increased risk to the battalion if it deploys or resuming training before it deploys again. Current overseas and domestic missions are stressing U.S. forces as measured in personnel tempo data.
DOD believes that if servicemembers spend too much time away from home, a risk exists that they will leave the service and military readiness may ultimately suffer. The National Defense Authorization Act for Fiscal Year 2000 requires DOD to formally track and manage the number of days that each member of the armed forces is deployed and established two thresholds—servicemembers deployed more than 182 or more than 220 days away from home out of the preceding 365 days. The National Defense Authorization Act for Fiscal Year 2001 established a third threshold, which requires that servicemembers who are deployed for 401 or more days out of the preceding 730-day (2-year) period receive a $100 high deployment per diem allowance. Between September 2001 and December 2002, personnel tempo increased dramatically for Army and Air Force personnel due to ongoing missions or commitments around the world and their increasing support of Operations Noble Eagle and Enduring Freedom. DOD data that we obtained indicated that tempo is high and increasing. For example, as shown in figure 1, in September 2001, over 6,600 Army personnel (including active, reserve, and National Guard personnel) had exceeded a desired threshold, spending 182 to 219 days away from home during the previous 365 days. By December 2002, that number had risen to over 13,000. During the same period, the number spending 220 to 365 days away had risen from about 800 to over 18,000. The Air Force reported similar trends. As shown in figure 2, in September 2001, about 2,100 Air Force servicemembers were away from home for 182 to 219 days, but that number had risen to about 8,300 by December 2002. Also, as with the Army, the number of Air Force servicemembers away 220 to 365 days had risen from about 1,600 to over 22,100. The number of active Air Force, Air Force reserve, and Air National Guard personnel exceeding the third personnel tempo threshold of 401 or more days away from home in the preceding 730-day period also increased during the latter period of 2002, starting at about 3,700 personnel in September 2002 and rising to more than 8,100 servicemembers in December 2002. About one-half of these personnel were Air National Guard personnel, some of whom were tasked with conducting air sovereignty alert missions in the continental United States. In September 2002, 1,900 had spent more than 401 days away from home over a 2-year period. By December 2002, the number of Air National Guard personnel spending more than 401 days away from home had increased to about 3,900. Exceeding the threshold on a sustained basis can indicate an inadequacy in the force structure or the mix of forces. DOD has recognized the potential for retention problems stemming from the current high personnel tempo but has balanced that against immediate critical skill needs to support ongoing operations. Therefore, to keep servicemembers with key skills from leaving the services and thereby prevent degradation in combat capabilities, DOD issued orders under what is known as stop loss authority. DOD took these actions because it recognized that individuals with certain key skills—such as personnel in Army military police and Air Force fighter units—were needed, in some cases, to perform the increasing number of military domestic missions. These orders affected personnel with designated individual job skills or, in some cases, all of the individuals in specific types of units that were critical for overseas combat and military domestic missions.
Officials from the four services who manage the implementation of these orders cautioned that they are short-term tools designed to maintain unit-level military readiness for overseas combat and military domestic missions. Moreover, the officials added that the orders are not to be used as a long-term solution to address mismatches or shortfalls in capabilities and requirements, or as a substitute for the routine recruiting, induction, and training of new servicemembers. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions that you or members of the subcommittee may have.
Contacts and Staff Acknowledgments
For future questions about this statement, please contact Raymond J. Decker at (202) 512-6020. Individuals making key contributions to this statement include Brian J. Lepore, Deborah Colantonio, Richard K. Geiger, Kevin L. O'Neill, William J. Rigazio, Susan K. Woodward, and Michael C. Zola.
Combating Terrorism: Observations on National Strategies Related to Terrorism. GAO-03-519T. Washington, D.C.: March 3, 2003.
Major Management Challenges and Program Risks: Department of Homeland Security. GAO-03-102. Washington, D.C.: January 2003.
Homeland Security: Management Challenges Facing Federal Leadership. GAO-03-260. Washington, D.C.: December 20, 2002.
Homeland Security: Effective Intergovernmental Coordination Is Key to Success. GAO-02-1013T. Washington, D.C.: August 23, 2002.
Reserve Forces: DOD Actions Needed to Better Manage Relations between Reservists and Their Employers. GAO-02-608. Washington, D.C.: June 13, 2002.
Homeland Security: Key Elements to Unify Efforts Are Underway but Uncertainty Remains. GAO-02-610. Washington, D.C.: June 7, 2002.
Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001.
Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001.
Combating Terrorism: Observations on Options to Improve the Federal Response. GAO-01-660T. Washington, D.C.: April 24, 2001.
Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001.
Military Personnel: Full Extent of Support to Civil Authorities Unknown but Unlikely to Adversely Impact Retention. GAO-01-9. Washington, D.C.: January 26, 2001.
Combating Terrorism: Federal Response Teams Provide Varied Capabilities: Opportunities Remain to Improve Coordination. GAO-01-14. Washington, D.C.: November 30, 2000.
Combating Terrorism: Linking Threats to Strategies and Resources. GAO/T-NSIAD-00-218. Washington, D.C.: July 26, 2000.
Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism. GAO/T-NSIAD-00-50. Washington, D.C.: October 20, 1999.
Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999.
Combating Terrorism: Issues to Be Resolved to Improve Counterterrorism Operations. GAO/NSIAD-99-135. Washington, D.C.: May 13, 1999.
Combating Terrorism: Observations on Federal Spending to Combat Terrorism. GAO/T-NSIAD/GGD-99-107. Washington, D.C.: March 11, 1999.
Combating Terrorism: Observations on Crosscutting Issues. GAO/T-NSIAD-98-164. Washington, D.C.: April 23, 1998.
Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998.
Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination. GAO/NSIAD-98-39. Washington, D.C.: December 1, 1997.
Combating Terrorism: Federal Agencies' Efforts to Implement National Policy and Strategy. GAO/NSIAD-97-254. Washington, D.C.: September 26, 1997.
The way in which the federal government views the defense of the United States has dramatically changed since September 11, 2001. Consequently, the Department of Defense (DOD) is adjusting its Cold War strategic focus (of defending against massed combat forces) to better encompass defense against the asymmetric threats that small terrorist cells represent to U.S. territory. GAO was asked to review DOD's participation in domestic missions. This testimony represents GAO's preliminary work in response to the request. It addresses (1) the primary differences in military and nonmilitary missions; (2) how DOD evaluates requests for nonmilitary missions; (3) how the 1878 Posse Comitatus Act impacts DOD's nonmilitary missions; (4) whether current management organizations, plans, and forces are adequate to support DOD's domestic missions; and (5) the impact of overseas and domestic missions on military personnel tempo. GAO is making no recommendations in this testimony. DOD's military and nonmilitary missions differ in terms of roles, duration, discretion to accept or reject, and capabilities normally employed. DOD evaluates nonmilitary mission requests on the basis of legality, lethality, risk to DOD forces, cost, appropriateness of the mission, and impact on military readiness. The 1878 Posse Comitatus Act prohibits the direct use of federal military troops in domestic civilian law enforcement, except where authorized by the Constitution or Acts of Congress. Congress has expressly authorized the use of the military in certain situations, such as to assist with drug interdiction or to assist with terrorist incidents involving weapons of mass destruction. It is too early to assess the adequacy of DOD's new management organizations or plans, but some forces may not be tailored for their domestic missions. DOD established the Office of the Assistant Secretary of Defense for Homeland Defense and U.S. Northern Command to plan and execute domestic missions. U.S. Northern Command's plan for domestic military missions was developed before DOD officials had agreed on the nature of the threat. Forces are not adequately tailored for some domestic missions, and readiness could erode as a result. For example, Air Force fighter units deployed since September 11, 2001, to perform combat air patrols cannot also complete required combat training. Overseas and domestic missions are stressing U.S. forces as measured in personnel tempo data. In September 2001, about 1,600 Air Force personnel had spent 220 to 365 days away from their homes over the previous year, but by December 2002 over 22,100 Air Force personnel had been away that long. The Army reported similar increases. To prevent erosion in combat capabilities, DOD issued orders, known as stop loss, to involuntarily retain critical personnel.
OPS administers the national regulatory program to ensure the safe operation of nearly 2.2 million miles of natural gas and hazardous liquid pipelines in the United States. The agency develops, issues, and enforces pipeline safety regulations. These regulations contain minimum safety standards that the pipeline companies that transport natural gas or hazardous liquids must meet for the design, construction, inspection, testing, operation, and maintenance of their pipelines. In general, OPS retains full responsibility for inspecting pipelines and enforcing regulations on interstate pipelines, and certifies states to perform these functions for intrastate pipelines. In fiscal year 2000, OPS employed 97 people, 55 of whom were pipeline inspectors. Several federal statutes enacted since 1988 contain requirements designed to improve pipeline safety and enhance OPS’ ability to oversee the pipeline industry. In addition, the Safety Board makes recommendations designed to improve transportation safety to OPS and other federal agencies. These recommendations are based on the Safety Board’s investigations of transportation accidents, including significant pipeline accidents (such as those involving fatalities). Many of these recommendations address the same issues as the statutory requirements. OPS has made progress in implementing some of the 22 statutory requirements that it reported as open in our May 2000 report but has not fully implemented some significant, long-standing requirements. As of September 1, 2001, 6 of the 22 requirements have been closed as a result of OPS’ actions, 11 requirements are still open, and the remaining 5 have been closed because OPS now considers them to be superseded by or amendments to other requirements or because the agency does not believe it is required to take further action. The agency has fully implemented 6 of the 22 statutory requirements that it classified as open in May 2000. (See table 1.) Three of these six requirements were implemented in the last 16 months; OPS issued a final rule to define underwater abandoned pipeline facilities that present a hazard to navigation and specify how operators shall report these facilities, issued a report on its Risk Management Demonstration Program, and conducted activities to address population encroachment near pipelines. OPS had completed action on the other three requirements prior to May 2000, but did not report these actions to us at that time. (Appendix I provides the status of OPS’ actions to implement all 22 requirements as of September 1, 2001.) As of September 1, 2001, 11 requirements—including several from 1992 or earlier that could significantly improve pipeline safety—remain uncompleted. While OPS has made some progress on these requirements over the last year, the agency estimates that it will take from several months to more than a year to complete actions on them. For example, OPS is issuing a series of rules requiring pipeline operators to develop an integrity management program to assess and improve, where necessary, the safety of pipeline segments in areas where the consequences of a pipeline failure could be significant (called “high consequence areas.”) This series represents a broad-based, comprehensive effort designed to improve pipeline safety, as well as fulfill several specific statutory requirements such as requirements to inspect pipelines periodically and install valves to shut off the flow of product in the pipeline if a failure occurs. 
In December 2000, OPS issued a final integrity management rule for hazardous liquid pipelines that are at least 500 miles long. OPS still needs to issue similar integrity management rules for hazardous liquid pipelines that are less than 500 miles long, expected in late fall 2001, and for natural gas transmission pipelines. The agency expects to issue a proposed rule for transmission pipelines by the end of 2001 and a final rule in fall 2002. To facilitate the natural gas transmission rule, OPS officials have been meeting with representatives of the pipeline industry, research institutions, state pipeline safety agencies, and public interest groups to understand how integrity management principles can best be applied to improve the safety of gas pipelines. OPS also requested information and clarification in June 2001 and plans to hold a public meeting with its Natural Gas Technical Advisory Committee on this subject. According to OPS officials, they are close to reaching consensus with the pipeline industry and state agencies on safety standards for natural gas transmission pipelines. In addition, in response to a 1988 requirement to establish standards to complete and maintain a pipeline inventory, OPS is establishing multiple methods of collecting this information, such as annual reports, the integrity management process, and a national pipeline mapping system. According to OPS officials, they are collecting the necessary information for hazardous liquid and gas transmission pipelines, but still need to establish methods to collect additional information for gas distribution pipelines. OPS does not plan to complete forms that will allow it to collect such information until spring 2002—more than 13 years after the original requirement. Finally, in response to a 1992 requirement to define “gathering line” and “regulated gathering line,” OPS is still conducting studies to identify which lines should be regulated. OPS does not plan to issue a final rule before mid-2002. OPS officials estimate that it will take a year or more to implement 10 of the 11 open requirements. OPS does not plan to take action on the remaining open requirement to submit a report on underwater abandoned pipeline facilities, including a survey of where such facilities are located and an analysis of any safety hazards associated with them. According to OPS officials, the agency did not complete the report because there were insufficient data available, and it would be expensive to develop the needed data. OPS officials said they have analyzed to the extent possible all available data, and they do not plan to proceed further. We did not determine whether sufficient data exist or the cost to develop data to complete the report. OPS has closed the remaining five requirements that it reported as open in May 2000 because it now considers them to be superseded by or amendments to other requirements or because OPS believes it is no longer required to take action. Although OPS did not fulfill these requirements, we agree with OPS’ rationale for considering them closed. OPS closed one requirement because it was replaced by a later requirement. A 1988 statute required OPS to establish standards requiring that new and replacement pipelines accommodate the passage of “smart pigs”—mechanical devices that can travel through the pipeline to record flaws in the pipeline, such as dents or corrosion. 
Although OPS did not meet this requirement, the agency considers it closed because it was superseded by a similar requirement in a 1996 statute, which has not been completed. OPS closed three requirements from a 1996 statute that amended requirements from a 1992 statute that have not been completed: (1) defining “gathering lines” and “regulated gathering lines,” (2) requiring the periodic inspection of pipelines in high-density and environmentally sensitive areas, and (3) establishing criteria to identify all pipeline facilities located in areas that are densely populated and/or environmentally sensitive. In general, the amending provisions gave OPS more flexibility in fulfilling the requirements by adding language such as “where appropriate” or “if needed.” Although OPS considered these actions as open in our May 2000 report, OPS now believes that since these three provisions do not impose additional requirements they should not continue to be counted separately. OPS closed one requirement because it is no longer required to take action. A 1996 statute required OPS to issue biennial reports to the Congress on how the agency carried out its pipeline safety responsibilities for the preceding two calendar years. OPS issued the first report in August 1997 but did not issue a report in 1999. This reporting requirement was eliminated as of May 15, 2000, under the Federal Reports Elimination and Sunset Act of 1995, as amended. The Safety Board is encouraged by OPS’ recent efforts to improve its responsiveness, but it remains concerned about the amount of time OPS has been taking to implement recommendations. The Director of the Safety Board’s Office of Pipeline Investigations views OPS’ responsiveness as generally improving because OPS has recently initiated several activities to respond to recommendations and made efforts to communicate better with the Safety Board. To improve communications with the Safety Board, OPS has changed how it informs the Safety Board of progress made on recommendations by corresponding with the Safety Board as progress occurs on individual recommendations, rather than providing periodic updates that may cover a number of recommendations. While the Safety Board is encouraged by OPS’ recent efforts, it is reserving final judgment on OPS’ progress until the agency demonstrates that it can follow through with actions to fully implement the recommendations. OPS continues to have the lowest rate of any transportation agency for implementing recommendations from the Safety Board; and, in May 2000 we reported that the Safety Board was concerned that OPS had not followed through on promises to implement recommendations. According to the Director of the Safety Board’s Office of Pipeline Investigations, the Safety Board continues to be concerned about the amount of time OPS is taking to follow through with the recommendations. For example, the Safety Board initially recommended in 1987 that OPS require pipeline operators to periodically inspect pipelines. OPS is responding to this recommendation through its series of rules on integrity management that is expected to be completed in 2002—15 years after the Safety Board made the initial recommendation. According to the Safety Board’s records, OPS has completed action on only 1 of the 39 Safety Board recommendations that were open as of May 2000. Since then, the Safety Board has made 6 additional recommendations, resulting in 44 open recommendations on pipeline safety as of September 1, 2001. 
However, OPS officials believe that the agency’s progress is much greater than the Safety Board’s records indicate. The majority of the recommendations are related to damage prevention (damage from outside forces is the leading cause of pipeline accidents) and integrity management; OPS is in the process of implementing several broad-based, complementary efforts in these areas. According to OPS officials, the agency will have fulfilled 19 of the open recommendations by the end of 2001 and expects to complete action on 16 additional recommendations by the end of 2002. OPS has made some progress in implementing statutory requirements over the past 16 months and expects to implement most of the remaining requirements in the next year or so. OPS also believes that it will have completed action on most of the 44 open Safety Board recommendations over this same time period. Ultimately, however, it is the Safety Board’s decision on whether OPS’ actions fulfill the recommendations. While this progress represents an improvement over OPS’ previous performance, the agency has not fully implemented some important requirements and recommendations to improve pipeline safety that were imposed more than 10 years ago. The next 15 months are important to OPS because, among other actions, the agency intends to complete its series of integrity management rules within this time frame. These rules are expected to improve the safety of pipelines and allow OPS to fulfill a large portion of the outstanding statutory requirements and Safety Board recommendations. We are concerned that OPS does not plan to take action in response to the 1992 statutory requirement to report to the Congress on underwater abandoned pipeline facilities. While we did not assess OPS’ claims that it is not feasible to complete the report due to insufficient data and funding, OPS has made no response to this requirement, including advising the Congress that it is not possible to complete the study. If the department believes that it cannot complete a report to the Congress on underwater abandoned pipeline facilities, we recommend that the Secretary of Transportation direct OPS to advise the Congress of the reasons why it is unable to complete this study and, if appropriate, ask the Congress to relieve it of this responsibility. We provided a draft of this report to the Department of Transportation for its review and comment. We met with officials from the department, including OPS’ Associate Administrator, to obtain their comments. The officials generally agreed with the draft report and its recommendation. The officials stated that OPS is taking a long-term, strategic approach to address safety goals by improving pipeline integrity and preventing damage to pipelines. According to the officials, this approach is more beneficial than responding directly to individual requirements and recommendations as discrete actions. For example, OPS’ integrity management rules will require pipeline operators to comprehensively evaluate and respond to the entire range of risks to pipelines; the rules will include, but are not limited to, safety practices that have been required by the Congress or recommended by the Safety Board, such as internal inspections and safety valves. 
The officials stated that OPS has undertaken several broad-based, complementary efforts, particularly focused on pipeline integrity and damage prevention that, when completed, are expected to improve pipeline safety and fulfill many specific statutory requirements and Safety Board recommendations. They said that such a process requires OPS—working cooperatively with state and local officials and the pipeline industry—to thoroughly explore the safety risks faced by different types of pipelines, devise solutions that work for each unique pipeline, and carefully assess the costs and expected benefits of various methods of mitigating risks. The officials expect that, within a year, the results of these efforts will become apparent to the Congress and the public. In response to OPS’ comments, we provided more detailed information on specific actions OPS has taken to improve pipeline safety, where appropriate. To determine OPS’ progress in responding to statutory requirements, we asked OPS officials to identify actions the agency has taken to respond to requirements. We then collected and reviewed documentation on these actions, such as published rules and reports. To determine OPS’ progress in responding to recommendations from the Safety Board, we collected and analyzed information from the Safety Board on the status of pipeline safety recommendations. We also interviewed the Safety Board’s Director of the Office of Railroad, Pipeline, and Hazardous Materials Investigations to discuss OPS’ progress in responding to the Safety Board’s recommendations. Consistent with the approach used for our May 2000 report, we relied on OPS and the Safety Board to identify which actions were open and did not attempt to determine whether these open actions were, in actuality, completed. In addition, we did not assess the adequacy of OPS’ responses to statutory requirements or the Safety Board’s recommendations. We performed our work from July through September 2001 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter. At that time, we will send copies of this report to congressional committees and subcommittees with responsibilities for transportation safety issues, the Secretary of Transportation, the Administrator of the Research and Special Programs Administration, the Director of the Office of Management and Budget, and the Acting Chairman of the National Transportation Safety Board. We will make copies available to others upon request and on our home page at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Key contributors to this report were Helen Desaulniers, Judy Guilliams-Tapia, James Ratzenberger, and Sara Vermillion. Appendix I: OPS’ Actions on Pipeline Safety Statutory Requirements Reported as Open in May 2000 (As of September 1, 2001) Citations included in table 5 are to the United States Code and to the Accountable Pipeline Safety and Partnership Act of 1996.
In a May 2000 report on the performance of the Department of Transportation's Office of Pipeline Safety (OPS), GAO found that the number of pipeline accidents rose four percent annually from 1989 to 1998--from 190 in 1989 to 280 in 1998. GAO also found that OPS had not implemented 22 statutory requirements and 39 recommendations made by the National Transportation Safety Board. Since GAO's May 2000 report, OPS has fully implemented six of the 22 statutory requirements. However, 11 other requirements--including some that are significant and long-standing--have not been fully implemented. The agency does not plan to report on abandoned underwater pipeline facilities--a remaining open requirement--because it believes that insufficient data exist to conduct the study. The Safety Board is encouraged by OPS' recent efforts to improve its responsiveness, but the Board remains concerned about the amount of time OPS has taken to implement recommendations. OPS has the lowest rate of any transportation agency in implementing the Board's recommendations.
The law directs VA to compensate veterans for their service-connected physical or mental conditions according to a schedule of disability ratings, which represents the average impairment in earning capacity that results from these conditions. The first schedule was developed in 1919 and has undergone many changes since then. The Schedule for Rating Disabilities includes a list of physical and mental conditions with disability ratings assigned to each. These ratings are used to determine the amount of compensation that veterans are entitled to receive on the basis of their specific conditions. Federal law (38 U.S.C. 1110 and 1155) requires VA to “adopt and apply a schedule of ratings of reductions in earning capacity from specific injuries or combination of injuries” to determine the amount of compensation disabled veterans are entitled to receive. The ratings are to be based, “as far as practicable, upon the average impairments of earning capacity resulting from such injuries in civil occupations.” The law gives the chief administrator of VA the discretion to define “average impairments in earning capacity” and the authority to readjust the schedule to help ensure that disability ratings reflect VA’s experience. The War Risk Insurance Act of 1917 called for the creation of the first rating schedule. The schedule was developed in 1919 and provided an early framework for the basic design of the current compensation and pension programs for disabled veterans. It underwent major revisions in 1921, 1925, 1933, and 1945, becoming more comprehensive with each major revision (see fig. 1). The last major revision to the schedule was made in 1945. The Schedule for Rating Disabilities contains medical criteria and disability ratings. The medical criteria consist of a list of diagnoses organized by body system and a number of levels of medical severity specified for each diagnosis. The schedule assigns a percentage evaluation, commonly referred to as a disability rating, to each level of severity associated with a diagnosis. The disability rating conceptually reflects the average impairment in earning capacity associated with each level of severity. For example, VA presumes that the loss of a foot as a result of military service results in a 40-percent impairment in earning capacity, on average, among veterans with this injury. All veterans who lose a foot as a result of military service, therefore, are entitled to a 40-percent disability rating whether this injury actually reduces their earning capacity by more than 40 percent or not at all. Ratings for individual diagnoses in the schedule range from 10 percent to 100 percent in gradations of 10 (see table 1). The amount of compensation veterans are awarded for their disabilities is based on (1) the disability rating the schedule assigns to a veteran’s specific condition and (2) the specific benefit amount the Congress sets for each of these disability rating levels. To determine what basic compensation a veteran with a service-connected condition is due, first the veteran’s condition is medically evaluated to determine its severity. Then VA compares the results of the evaluation with the medical criteria in the schedule to determine what disability rating is warranted given the severity of the veteran’s condition. The veteran will receive the amount the Congress has set for that disability rating. The Congress has adjusted the benefit amounts for each disability rating level annually. 
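To make the two-step determination described above concrete, the following is a minimal Python sketch of it: the schedule maps a diagnosed condition and its severity to a disability rating, and a separate table maps each rating level to the monthly amount the Congress has set. The condition names and severity labels here are hypothetical placeholders, and only the two fiscal year 1995 dollar amounts cited in the next paragraph ($89 at 10 percent and $1,823 at 100 percent) come from this report; all other values are omitted rather than invented.

```python
from typing import Optional

# Step 1: the rating schedule maps (condition, severity) to a disability rating
# (10-100 percent in gradations of 10). Entries below are illustrative only,
# except that the report cites a 40-percent rating for the loss of a foot.
RATING_SCHEDULE = {
    ("loss of foot", "total"): 40,
    ("hypothetical condition", "mild"): 10,
    ("hypothetical condition", "severe"): 100,
}

# Step 2: the Congress sets a monthly benefit amount for each rating level.
# Only the fiscal year 1995 endpoints given in the report are included here.
MONTHLY_BENEFIT_FY1995 = {
    10: 89,
    100: 1823,
    # amounts for the 20- through 90-percent levels are omitted in this sketch
}

def basic_monthly_compensation(condition: str, severity: str) -> Optional[int]:
    """Look up the rating for a condition, then the amount set for that rating."""
    rating = RATING_SCHEDULE[(condition, severity)]
    return MONTHLY_BENEFIT_FY1995.get(rating)

print(basic_monthly_compensation("hypothetical condition", "severe"))  # 1823
print(basic_monthly_compensation("loss of foot", "total"))  # None (40-percent amount not listed here)
```

The sketch also illustrates the point made above: the payment depends only on the rating assigned to the condition, not on the individual veteran's actual loss of earnings.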
In fiscal year 1995, the basic monthly benefit amount ranged from $89 for conditions assigned a rating of 10 percent to $1,823 for conditions assigned a rating of 100 percent (see table 2). Although the primary purpose of VA’s disability compensation program is compensation for impairment in earning capacity, the program also provides for additional monthly compensation over and above the amount based on the schedule, for loss of “physical integrity.” Loss of physical integrity is defined as tissue loss, loss of body parts, or any disease or injury that makes an individual less functionally whole. The law (38 U.S.C. 1114) provides for additional monthly compensation for such things as the loss of a hand, foot, eye, or procreative organ. VA regulations also allow veterans to receive “extra-schedular” awards when VA determines that the severity of a veteran’s condition is not adequately captured by the rating the schedule assigns to it. Extra-schedular awards allow veterans to receive compensation for a rating higher than the one specified in the schedule for their condition. In a case of unemployability, for example, if the criteria in the schedule indicate that a veteran’s condition warrants at least a 60-percent disability rating but VA determines that, on the basis of that veteran’s unusual circumstances, he or she is unable to obtain and sustain gainful employment, VA can raise the compensation for that veteran to the amount provided for a 100-percent rating. VA regulations also allow veterans to be compensated for “social inadaptability” or “social impairment” to the extent it affects industrial adaptability. Social inadaptability contemplates those abnormalities of conduct, judgment, and emotional reactions that affect economic adjustment, that is, that impair earning capacity. In 1995, about 70 percent of the 2.2 million veterans on the rolls were being compensated for conditions with disability ratings of 30 percent or less for a total of nearly $2.8 billion, or about 25 percent of total benefits paid to veterans that year. Those rated 100 percent accounted for only 6 percent of those on the disability rolls that year and received $3.7 billion, or about 32 percent of the total amount of benefits paid (see table 3). Disability ratings in the current schedule may not reflect the actual economic loss that disabled veterans, on average, now experience. While the law contains no definition of “impairments in earning capacity,” ratings assigned to conditions in the schedule are based more on judgments of the loss in functional capacity, rather than in earning capacity, resulting from these conditions. Advances in medicine and technology and changes in the economy and public policy and in the field of rehabilitation since 1945 raise questions about whether ratings for specific conditions set 50 years ago reflect the average loss in earning capacity today among veterans with these conditions. In addition, studies conducted in the mid-1950s and the late 1960s concluded that the ratings in the schedule did not accurately reflect the reduction in earning capacity that disabled veterans experienced at those times and that the ratings needed to be updated. The law gives the Secretary of Veterans Affairs the authority to determine what is meant by “average impairments in earning capacity” in civilian occupations. 
Although VA’s Economic Validation of the Rating Schedule (ECVARS) in the late 1960s defined reduction in earning capacity as “the loss or lowering of average income from wages or employment,” VA has not defined in regulations what is meant by average impairment in earning capacity other than to generally describe it as an economic or industrial handicap. Beginning at least as early as 1923, when assigning a rating to a condition, VA used the loss in physical or overall functional capacity resulting from that condition (or some other proxy, such as the average veteran’s ability to compete for employment in the job market) as an indicator of average impairment in earning capacity. According to an official in VA’s Office of General Counsel, the average impairment in earning capacity in civilian occupations means the impairment of an individual’s ability to engage in any type of work available in the economy. The actual loss in earnings associated with a service-connected condition has not been considered when determining the degree to which that condition impairs earning capacity. Nor has it been considered when determining the rating that condition should be assigned in the schedule. In 1945, when the framework for the current schedule was developed, the job market was oriented toward physical labor, and physical capacity was expected to have a major influence on earning capacity. At that time, a Disability Policy Board, consisting of doctors and lawyers, set the disability ratings for the conditions contained in the schedule. According to a former Director of VA’s Compensation and Pension Service, VA’s Department of Medicine and Surgery, now the Veterans Health Administration, provided the Board with a medical monograph—a detailed description of etiology and manifestations—for each of the conditions included in the schedule at that time. The Board used these monographs to estimate the relative effects different levels of severity of a condition have on the average veteran’s ability to compete for employment in the job market. It set disability ratings on this basis. Thus, ratings for conditions that limited physical ability, such as the loss of the use of an arm or leg, were expected to greatly impair veterans’ average earning capacity and were given a relatively high rating. Since 1945, VA has made many revisions to the schedule. The revisions have included modifications to medical criteria associated with the ratings, changes in the maximum convalescence period allowed before requiring reevaluation of the condition, and addition of more levels of evaluations or ratings. The revisions, however, have not been based on empirical data on the effects certain conditions have on veterans’ earnings. According to VA Compensation and Pension officials, the basic procedure used to determine what disability rating to assign to a condition has not changed since 1945. This determination has been and continues to be based on the judgment of individuals with knowledge and expertise in this area. When adjusting ratings for conditions already in the schedule or assigning ratings to new conditions added to the schedule, VA’s goal has been to maintain the internal consistency of the schedule over time. In doing so, VA tries to ensure that new or adjusted ratings are consistent with the ratings of analogous conditions and reasonable relative to all other conditions. 
As a result, the ratings in the 1945 schedule have been, in effect, the benchmark for all the ratings adjusted and added since then, and VA officials acknowledge that the ratings in the current schedule are consistent with the ratings developed in 1945. Even if functional capacity accurately approximated disabled veterans’ reduction in earning capacity in 1945, changes have occurred since then that have implications for how accurately those ratings reflect disabled veterans’ reduction in earning capacity today. Numerous technological and medical advances have taken place, as well as economic changes, that have created more potential for people to work with some conditions and less potential for people to work with other conditions. There have also been changes in the labor market and social attitudes toward the disabled that may affect disabled veterans’ ability to work. Since 1945, medical and technological advances have enabled individuals with some types of disabilities to obtain and sustain employment. Advances in the management of disabilities, like medication to control mental illness or computer-aided prosthetic devices that return some functioning to the physically impaired, have helped reduce the severity of the functional loss caused by both mental and physical disabilities. Electronic communications and assistive technologies, such as synthetic voice systems, standing wheelchairs, and modified automobiles and vans, have given people with certain types of disabilities more independence and potential to work. There has also been a shift in the U.S. economy since 1945 from predominantly labor and manufacturing to skill- and service-based jobs. In the 1960s, earning capacity became more related to a worker’s skills and training than to his or her ability to perform physical labor. Advancements in technology, including computers and automated equipment, following World War II and the Korean Conflict reduced the need for physical labor. The goods-producing sector’s share of the economy—mining, construction, and manufacturing—declined from about 44 percent in 1945 to about 21 percent in 1994. The service-producing industry’s share, on the other hand—such areas as wholesale and retail trade; transportation and public utilities; federal, state and local government; and finance, insurance, and real estate—increased from about 57 percent in 1945 to about 80 percent in 1994. While the shift to a more service-oriented economy may have had a positive effect on job opportunities for veterans with physical disabilities, it may have had the opposite effect for those with some mental impairments. However, new treatments and medications have made it possible for individuals with some mental illnesses to function more fully today. About 20 percent of the veterans on the disability rolls as of September 30, 1995, were receiving compensation for psychiatric and neurological conditions, whereas 80 percent were being compensated for general medical or surgical conditions, or physical disabilities (see table 4). In addition, in recent decades there has been a trend toward greater inclusion of and participation by people with disabilities in the mainstream of society. Changes in public attitudes toward people with disabilities have resulted, over the past 2 decades, in public policy requiring the removal of environmental and social barriers that prevent the disabled from fully participating in the workforce as well as in their communities. 
The Americans With Disabilities Act of 1990 (ADA), which supports the full participation of people with disabilities in society, fosters the expectation that people with disabilities can work. The act prohibits employers from discriminating against qualified individuals with disabilities and requires employers to make reasonable workplace accommodations for these individuals. Two major studies have been conducted since the implementation of the 1945 version of the schedule to determine whether the schedule constitutes an adequate basis for compensating veterans with service-connected conditions. One was conducted by a presidential commission in the mid-1950s and a second by VA in the late 1960s. Both concluded, for various reasons, that at least some disability ratings in the schedule did not accurately reflect the average impairment in earning capacity among disabled veterans and needed to be adjusted. The President’s Commission on Veterans’ Pensions, commonly called the Bradley Commission, was created in 1955 “to carry out a comprehensive study of the laws and policies pertaining to pension, compensation, and related nonmedical benefits” for veterans. As part of this study, the Commission examined VA’s Schedule for Rating Disabilities. To determine whether the schedule at that time constituted an adequate and equitable basis for compensating disabled veterans, the Commission examined (1) the medical criteria in the schedule and (2) the disability ratings associated with these medical criteria. On the basis of the results of a survey designed to obtain the views of medical specialists nationwide, the Commission concluded that the medical criteria in the schedule did not reflect the advances that had been made in medicine since 1945. The Commission also asked 169 physicians whether they believed the ratings fairly represented the average impairment of earning capacity resulting from the various degrees of severity of physical impairment. Forty percent of the 153 physicians who responded believed that the ratings fairly represented average impairment in earning capacity, 40 percent believed the ratings did not, and 20 percent did not respond or gave vague responses. Many of those who believed the schedule’s ratings in general fairly represented average impairment of earning capacity, however, believed that the ratings for the lower disability percentages (usually below 30 percent) did not. The Commission’s comparison of the earnings and income of disabled veterans with the earnings and income of nondisabled veterans and others indicated that, with the exception of totally disabled veterans and elderly disabled veterans, there was little difference in combined median annual earned income of these groups. The Commission concluded that the amount of disability compensation seemed to make up for the difference in overall income between the two groups. But this compensation was not based on the average impairment in earning capacity. The Commission observed that no studies had been conducted to measure the actual impairment in earning capacity among the disabled, and the standard used to set disability ratings in the schedule was geared to the impairment of the individual who performs manual labor. Thus, because “functional physical capacity” has a major effect on a laborer’s ability to work, the Commission concluded that physical impairment has been VA’s predominant standard for setting disability ratings. 
In addition to presenting the results of its study, the Commission pointed out that advances have been made in surgery, prosthetics, medical treatment, and rehabilitation since the schedule was revised in 1945 and that these advances could change the extent to which physical impairment affects earning capacity. The Commission also noted that the job market has shifted from predominantly manual labor jobs to more clerical and service-oriented jobs. Thus, the Commission concluded that the rating schedule tended to be less representative of the average impairment in earning capacity of veterans who performed nonmanual labor jobs. The Commission’s overall recommendation with regard to the schedule was that it should be revised thoroughly on the basis of factual data to ensure that it reflects veterans’ average reduction in earning capacity, as required by law. The Commission stated that the basic purpose of the program is economic maintenance and, therefore, it is appropriate to compare periodically the average earnings of the working population and the earnings of disabled veterans, and update the schedule accordingly to help ensure that veterans are adequately compensated for the average reduction in earnings they experience as a result of their service-connected conditions. In the late 1960s, VA conducted the ECVARS in response to the Bradley Commission recommendations and recurring criticisms that ratings in the schedule were not accurate. This study was designed to estimate the average loss in earning capacity among disabled veterans by calculating the difference between the earnings of disabled veterans, by condition, and the earnings of nondisabled veterans, controlling for age, education, and region of residence. The ECVARS is the most comprehensive assessment of the validity of the ratings ever done. On the basis of the results, VA concluded that of the approximately 700 diagnostic codes reviewed, the ratings for 330 overestimated veterans’ average loss in earnings due to their conditions, and about 75 underestimated the average loss among veterans. For example, for the disarticulation of an arm (amputation through the joint where the shoulder and arm join), VA estimated a 60-percent rating more closely approximated veterans’ average reduction in earning capacity than the 90-percent rating listed in the schedule. VA also estimated that a 40-percent rating was more representative of veterans’ average reduction in earning capacity for the disarticulation of the thigh (with the loss of extrinsic pelvic girdle muscles) than the 90 percent that was listed in the schedule. Some of the ratings that underestimated veterans’ reduction in earning capacity were assigned to mental conditions. For example, VA estimated that pronounced neurotic symptoms so severe that they would impair a veteran’s ability to obtain or retain employment would result in an 80-percent reduction in earning capacity as opposed to the 70 percent listed in the schedule. VA has not systematically reviewed and adjusted the disability ratings in the schedule to reflect the current average impairment in earning capacity. Although the ECVARS found that many of the ratings in the schedule did not correspond to the actual earnings loss experienced by veterans, no changes were made to the schedule on the basis of these findings. 
Current revisions VA is making to the schedule focus on updating medical criteria, not on ensuring that disability ratings accurately represent the effect that service-connected conditions have on the average earning capacity of disabled veterans, and few adjustments are being made to ratings in conjunction with these revisions. When making adjustments to the ratings or adding conditions to the schedule, VA relies on its experience implementing the schedule and the responses it receives from the proposed rule-making process to help ensure that ratings are appropriate. On the basis of the results of the ECVARS, VA proposed adjustments to the disability ratings and produced a revised schedule that included ratings it believed more accurately represented the reduction in earning capacity that veterans experience as a result of their service-connected conditions. However, VA did not adopt this revised schedule. According to VA and VSO officials, the schedule was not adopted because VA believed that the Congress did not support it. Since the ECVARS was conducted, VA has not done another comprehensive study to systematically measure the effect of service-connected conditions on earnings. In a 1988 report, we reviewed the medical criteria in VA’s rating schedule to determine whether they were sufficiently current to ensure veterans were being given accurate and uniform percentage ratings. We found that VA could not ensure that veterans were given accurate and uniform ratings because the schedule had not been adjusted to incorporate recent medical advances at that time. We recommended that VA update the medical criteria in the schedule and keep them current. In response to these recommendations, VA is in the process of systematically updating the medical criteria in the rating schedule. VA is reviewing each major body system in the schedule to ensure that the medical criteria for each diagnosis are up to date. The objectives of the current update are to make the criteria for assigning the disability ratings clearer, more objective, and accurate. To date, VA has revised the medical criteria for 8 of the 16 body systems contained in the schedule. Revisions generally consist of such things as (1) wording changes for clarification or reflection of current medical terminology, (2) addition of alternative criteria, (3) addition of medical conditions not in the schedule, (4) deletion of conditions that through advances in treatment are no longer considered disabling, and (5) reductions in the time period for reevaluating unstable conditions. Few revisions involved the disability ratings themselves. Of about 68 diagnostic codes subject to revision in the first 4 body systems VA reviewed, the ratings for 12 were modified in some way. Of these 12 modifications, 3 resulted in obvious reductions in ratings, while none resulted in obvious increases. None of these reductions in ratings, however, will result in lower ratings for veterans currently on the disability rolls. Federal law (38 U.S.C. 1155) specifies that changes in the rating schedule will, in no event, reduce a veteran’s rating in effect when a change occurs, unless the veteran’s condition has improved. When a revision in the medical criteria or the addition of a new condition to the schedule requires VA to adjust or set ratings for conditions, these adjustments are generally based on the judgments of VA’s Compensation and Pension staff. 
VA’s goal is to maintain the internal consistency of the schedule over time by trying to ensure that new or adjusted ratings are consistent with the ratings of analogous conditions and reasonable relative to all others. For example, when VA added endometriosis to the schedule, it tried to find a condition already listed in the schedule that was analogous or comparable in terms of the physical impairment. On the basis of the Veterans Health Administration’s medical monograph for this condition, VA determined that the most severe outcome of having endometriosis would be a hysterectomy, which was already in the schedule under another diagnosis and has a disability rating of 50 percent. VA, therefore, set the maximum evaluation for endometriosis at 50 percent. VA then set disability ratings for the less severe symptoms associated with endometriosis. In setting the rates for the less severe symptoms, VA Compensation and Pension personnel told us that they used their best judgment or experience, or both, to estimate the amount of time an individual might lose from work as a result of this condition. VA set the rating at 30 percent for moderate symptoms and 10 percent for milder symptoms (see table 5). When it proposes changes to the schedule, VA relies on its experience in implementing the schedule, on feedback from veterans and VSOs, and on the comments it receives from the public. According to VA officials, the feedback they have received from veterans and VSOs over time about the schedule and VA’s experience implementing it indicate that veterans appear to be generally satisfied with the ratings in the schedule. The VSO officials we contacted believe that VA’s disability rating schedule is a well-constructed document that has withstood the test of time. They also believe that ratings in the schedule generally represent the average loss in earning capacity among disabled veterans. Under the proposed rule-making process, proposed changes to the schedule are published in the Federal Register, and veterans and others are given the opportunity to comment on these changes before they are adopted. According to VA officials, veterans have made relatively few comments on changes currently proposed, which they believe suggests that current changes are acceptable. Because the schedule appears to be widely accepted, VA officials believe that the process they use is adequate to ensure that ratings fairly accurately represent veterans’ average impairment in earning capacity, and therefore there is no need to further assess their appropriateness. Although VA has chosen not to do so, using an estimate of actual loss in earnings to approximate loss in earning capacity would help VA make certain that veterans are compensated to an extent commensurate with the economic losses attributable to service-connected conditions. This would also help to ensure that disability compensation funds are equitably distributed among disabled veterans given today’s work environment. Unlike judgments about loss in functional capacity, estimates of actual loss in earnings are objective and economic indicators of loss in earning capacity. When the 1945 schedule was developed, no study was done to determine whether ratings based on loss in functional capacity correlated with disabled veterans’ loss in earnings. Even if ratings did correlate with loss in earnings at that time, in 1956 the Bradley Commission found that they did not. 
The Commission recognized that the basic purpose of the program was economic maintenance and that it was appropriate to compensate disabled veterans on the basis of the average reduction in earnings they experience as a result of their service-connected conditions. It recommended updating the schedule periodically, primarily by using estimates of the average loss in earnings experienced by disabled veterans. The results of the ECVARS again illustrated that functional loss, even if it had correlated with economic loss in 1945, did not accurately approximate the economic loss associated with service-connected conditions in the late 1960s. When ratings based on functional capacity were compared with the estimated loss in earnings experienced by disabled veterans, they often did not coincide. There are several advantages to using empirical data, as opposed to judgments, to determine impairment in earning capacity. Estimates of the loss in earnings resulting from service-connected conditions based on empirical data are objective and more reliable than individuals’ judgments about the effect these conditions may have. Such judgments can vary greatly, as the results of the Bradley Commission’s survey of physicians illustrate. Half of the physicians who responded to the survey believed the ratings in the schedule fairly represented the average loss in earning capacity resulting from the various degrees of severity of physical impairment. The other half disagreed. Judgments about the effect certain conditions may have on the ability to function, work, or earn money do not allow VA to determine whether the program is compensating disabled veterans to an extent commensurate with their economic loss. If VA compared estimates of loss in earnings, based on empirical data, for specific conditions with the ratings for these conditions, it could objectively determine whether the program was achieving this goal and was distributing disability compensation equitably. The average impairment in earning capacity associated with specific service-connected conditions can be estimated by calculating the difference between what veterans with those conditions earn, on average, and what they would have earned if they did not have those conditions. The average loss in earnings associated with specific service-connected conditions can be determined by using widely applied research designs for estimating the effect of one variable on another. A number of decisions would have to be made, however, with respect to an overall methodology for a study that would produce these estimates, and a number of options are related to each. Each option has implications for the cost of such a study and the validity of its results. Our work suggests that it could cost between $5 million and $10 million to conduct a study like this. Some generally accepted research designs for estimating the effect of one variable on another can be used to estimate the average loss in earnings associated with specific service-connected conditions. These designs are widely applied. While no study that measures the effect of service-connected conditions on earnings loss will give absolutely definitive results, many studies have demonstrated that it is possible to produce acceptable estimates of the impact of one variable on another. 
These designs have been used in policy analyses to examine the factors affecting the growth of Social Security Administration disability programs, the role vocational rehabilitation plays in the tendency of disabled persons to return to work, and the impact of job training on employment among ex-offenders. Such designs have also been used in many studies that specifically measured the impact of such things as military service, functional impairments, and medical conditions such as epilepsy and arthritis on wages and earnings. VA’s ECVARS is an example of one of these. It relied on a design that is often used in policy analysis and program evaluation to estimate the effect of service-connected conditions on the average earnings of veterans on the VA disability rolls at that time. Given that other studies have successfully employed methods for quantifying the effect functional impairment and specific disabilities have on earnings, these methods can also be applied to the question of how service-connected conditions affect disabled veterans’ earning capacity. In deciding how to conduct a study to estimate the effect of disability on earning capacity, questions related to such things as scope and study design, data collection, and analysis would need to be addressed. The feasibility and cost of a study designed to estimate the effect of service-connected conditions on earnings would depend on the options chosen relative to each of these. Following are some options we identified during our review of the literature and discussions with experts. The study’s scope—how comprehensive and specific it should be—would need to be determined. Decisions about the scope will affect the overall cost and feasibility of the study and the validity of the results. The study could attempt to measure every condition’s effect on earnings at each disability rating level or could select only certain conditions, depending on (1) the extent to which a condition is thought to represent or be represented by other conditions in the schedule or (2) the number of veterans on the rolls with that condition. The more conditions examined individually, the more costly and complicated the study is likely to be. However, estimates for individual conditions are more valid if those conditions are examined individually. It is possible to quantify the effect of service-connected conditions on earnings by estimating the difference between the actual earnings of veterans on the disability rolls and what their earnings would have been if they did not have their service-connected conditions. The actual earnings of disabled veterans can be measured directly. If it were possible to control which veterans would incur service-connected conditions, veterans could be randomly assigned to groups with or without a disability, and the difference between the earnings of these two groups would constitute the effect of disability on earnings. Since this is not possible, what disabled veterans would have earned if they were not disabled has to be approximated. The earnings of the disabled prior to the onset of their disabilities, or the earnings of a group of individuals who were not disabled, could be used for this approximation. Given the data requirements associated with estimating loss in earnings by comparing the earnings of veterans before and after the onset of disability, it may be more feasible to estimate this by comparing the earnings of disabled veterans with those of a comparison group of nondisabled individuals. 
When using the difference between the earnings of the disabled and nondisabled to estimate the effect of a service-connected condition on earnings, the goal would be to use a nondisabled group that is similar in as many ways as possible to the disabled group. The more equivalent the two groups are, the more able we are to assume that the difference in earnings is the result of the condition and not some other factor. Veterans who are not on the disability rolls, therefore, would seem to be an appropriate comparison group. However, veterans not on the disability rolls may differ from disabled veterans in other characteristics that could explain earnings differences, including gender, age, and whether the veteran has been out of the workforce for reasons such as institutionalization. Some of these factors could be considered when selecting the final comparison group for the study or conducting the statistical analysis of the data (see next section). If the study design chosen compared the earnings of the disabled with those of the nondisabled, the simple difference between the two would not necessarily represent the effect of the condition on earnings. To isolate the condition’s effect on earnings, other variables that may differ between the disabled and nondisabled group and also influence earnings would have to be controlled for. The more variables influencing earnings that are controlled for simultaneously, the more valid the estimates of the effect of service-connected conditions on earnings. Which variables to control for is another issue that the study’s methodology would need to address. Some of the characteristics of both disabled and nondisabled veterans that are believed to have an impact on earnings are age, education, gender, race, and region of residence. The number of variables controlled for could influence the cost and complexity of the study. Cross-tabulation and multiple regression are two statistical approaches that can be used to control for the differences in the characteristics of disabled and nondisabled veterans, other than disability status, that may account for the difference in earnings. Cross-tabulation would involve making comparisons of disabled with nondisabled veterans within potentially many different subgroups of the control variables (for example, age, gender, and education). Multiple regression allows the analyst to more efficiently analyze a larger number of variables simultaneously than does a series of cross-tabulations. Recent studies have used multiple regression to estimate the influence of different variables on wages and earnings. Where and how to obtain data on earnings and the characteristics of veterans that may influence earnings is another decision to be made when developing an overall approach for this type of study. Existing administrative databases, such as Social Security Administration earnings records and Internal Revenue Service tax records, as well as data from national surveys, including the Survey of Income and Program Participation and the Current Population Survey conducted by the Bureau of the Census, contain information on earnings and, in some cases, other characteristics of the general population. These databases could be used in conjunction with information in VA administrative files to identify the effect service-connected conditions have on disabled veterans’ earnings. If data from these sources do not meet the requirements of this study or it is not feasible to use these sources, original data need to be collected. 
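The following is a minimal sketch, using synthetic data, of the multiple regression approach described above. Earnings are regressed on an indicator for having a service-connected condition plus control variables (here, age and years of education); the coefficient on the indicator approximates the average earnings difference attributable to the condition, holding the controls constant. The variable names, data, and the assumed $6,000 average loss are hypothetical and serve only to show the mechanics; a cross-tabulation approach would instead compare group medians within cells defined by the same control variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: each row is a veteran.
n = 1000
disabled = rng.integers(0, 2, n)        # 1 = has the service-connected condition (hypothetical)
age = rng.uniform(25, 65, n)
education = rng.integers(9, 21, n)      # years of schooling
# Synthetic earnings built with an assumed $6,000 average loss for the condition.
earnings = 20000 + 300 * age + 800 * education - 6000 * disabled + rng.normal(0, 5000, n)

# Multiple regression: earnings ~ intercept + disabled + age + education.
X = np.column_stack([np.ones(n), disabled, age, education])
coef, *_ = np.linalg.lstsq(X, earnings, rcond=None)

# The coefficient on the disability indicator estimates the average earnings
# difference attributable to the condition, holding age and education constant.
print(f"estimated average earnings loss: {-coef[1]:,.0f}")
```

Because the controls enter the same model, this approach can hold many characteristics constant at once, which is the efficiency advantage over a series of cross-tabulations noted above.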
If this approach is necessary, sampling and data collection strategies for surveys of veterans on and off the disability rolls would need to be developed. As a result of their experience with similar studies, officials at the Bureau of the Census estimated that it would cost between $5 million and $10 million to conduct a study to determine the average impairment in earning capacity resulting from all, or nearly all, the conditions in the schedule. The precise cost would depend on the study’s design and methodology. VA’s disability rating schedule has served as a basis for distributing compensation among disabled veterans relative to their level of impairment in earning capacity since 1945. The schedule’s ratings do not, however, reflect the effects that the many changes in medical and socioeconomic conditions may have had on veterans’ earning capacity over the last 51 years. Thus, the ratings may not accurately reflect the levels of economic loss that veterans currently experience as a result of their disabilities. Estimates of disabled veterans’ average loss in earnings attributable to specific service-connected conditions could be (1) compared with the ratings for these conditions to determine whether the ratings correspond to economic loss and (2) used to adjust ratings that do not reasonably reflect this loss. There are pros and cons, however, to developing earnings-based disability ratings. It is uncertain what overall effect earnings-based ratings would have on total program outlays in the short term. Estimates of loss in earnings might show that ratings are appropriate and accurately represent the average loss in earnings veterans experience. On the other hand, they might show that ratings assigned to some conditions are not appropriate and either overestimate or underestimate veterans’ average loss in earnings. Even if a significant number of ratings in the schedule were reduced on the basis of these estimates, this would not result in any short-term reduction in program outlays. Veterans on the rolls are protected by law from being adversely affected if the disability ratings assigned to their conditions are reduced. If estimates indicate that some ratings should be increased, the Secretary of VA has the discretion to increase these ratings for veterans on the rolls at that time. If the Secretary decides to do so, in the short term, total program outlays would increase. The long-term effect of an earnings-based schedule on total program outlays is also uncertain. Depending on (1) the number of ratings increased and reduced, (2) which rating levels change, (3) how much the levels change, and (4) the number of people who are affected by these changes over time, total program outlays might increase, decrease, or remain about the same over the long term. It could cost between $5 million and $10 million to develop estimates of the average loss in earnings veterans experience as a result of specific service-connected conditions. The cost, however, represents a small fraction of the approximately $11.5 billion in disability compensation benefits paid to veterans in fiscal year 1995. In our opinion, there is a distinct benefit to be derived from developing these estimates and using them to adjust disability ratings in the schedule. We recognize the uncertainty surrounding the effect that basing ratings on loss in earnings might have on long-term program outlays. 
However, we believe this uncertainty does not outweigh the benefit of ensuring that disabled veterans receive appropriate and equitable compensation. In addition, the cost of developing these estimates is not substantial relative to the program benefits paid annually. VA’s disability ratings do not reflect the effect economic, medical, and other changes since 1945 may have had on disabled veterans’ earning capacity. Therefore, the Congress may wish to consider directing VA to determine whether the ratings for conditions in the schedule correspond to veterans’ average loss in earnings due to these conditions and adjust disability ratings accordingly. In commenting on a draft of our report, VA said that the “schedule as it is currently structured represents a consensus among Congress, VA and the veteran community” and that the “ratings derived from the schedule generally represent the average loss in earning capacity among disabled veterans.” VA considers total disability to be “a purely medical determination,” and it contends that changing the basis for the ratings in the schedule would serve no useful purpose. In addition, VA believes that “economic factors converge with” disability ratings primarily when the Congress establishes the amount of compensation payable for each disability rating level, and the Congress may adjust these amounts whenever it determines they are not appropriate. VA also expressed concern that basing ratings in the schedule on average loss in earnings would (1) result in disparate awards based on such things as rank or education, (2) preclude the use of extra-schedular evaluations for exceptional disabilities, (3) not allow for meaningful input from VSOs, and (4) require annual revisions to the schedule to keep up with changing economic and vocational conditions. Although the schedule may represent a consensus among the program’s key stakeholders, there is no assurance that this consensus produces ratings for conditions in the schedule that accurately represent the average impairment in earning capacity currently associated with these conditions. Furthermore, while total, or 100 percent, disability may be a reasonable reference point from which to establish ratings for partial disability, we do not agree with VA’s contention that disability is or should be solely a medical determination. Other programs define disability as loss in the ability to earn wages or work as a result of an impairment. An impairment is defined as a medical diagnosis of a specific abnormality, such as “paralysis of upper and lower limbs—one side.” Studies have shown that medical conditions are poor predictors of incapacity to work, that is, disability. We agree with VA that the Congress can adjust the rate—that is, the amount of compensation—it establishes for each rating level (10 through 100 percent) in the schedule when it believes that these benefit amounts are not appropriate. However, the primary responsibility to ensure that veterans are compensated commensurate with the average impairment in earning capacity they experience because of these conditions rests with the VA. This can be done by establishing ratings for conditions contained in the schedule that reflect veterans’ average economic losses attributable to these conditions. Basing ratings on estimates of the average earnings loss among veterans would not necessarily result in disparate treatment of veterans. 
Service-connected conditions that result in a high-percentage loss in earnings, on average, among veterans with these conditions would be assigned a rating higher than conditions that result in a low-percentage loss in earnings. As with the current schedule, veterans who have conditions that are assigned the same disability rating would receive the same basic monthly compensation regardless of such circumstances as their military rank or education. We believe disability ratings in the schedule should be based primarily but not solely on estimates of veterans’ average loss in earnings. Therefore, earnings-based ratings would not preclude extra-schedular evaluations. Nor would an earnings-based schedule prevent VA from obtaining and taking into account comments from VSOs and others when it revises the schedule just as it does today. Finally, the economists we consulted agreed that ratings based on earnings loss would need to be validated only once every 10 to 20 years to keep pace with changes in the economy and advances in medicine and technology that might influence the earning capacity of veterans with service-connected conditions. We have modified the report where appropriate in response to VA’s technical comments on the draft report. The complete text of VA’s comments appears in appendix IV. We are sending copies of this report to the Chairman and Ranking Minority Member of the Senate Committee on Veterans’ Affairs; the Ranking Minority Member, Subcommittee on Compensation, Pension, Insurance and Memorial Affairs, House Committee on Veterans’ Affairs; other appropriate congressional committees; the Secretary of Veterans Affairs; and other interested parties. We will also make copies available to others on request. If you have any questions about this report, please call Clarita Mrena, Assistant Director, at (202) 512-6812, or Shelia Drake, Evaluator-in-Charge, at (202) 512-7172. Other major contributors to this report are listed in appendix V. The Economic Validation of the Rating Schedule (ECVARS) was designed to provide information that could be used to estimate the average economic loss attributable to individual service-connected conditions; recognize trends toward increases or decreases in the rate of economic loss that can be expected with the passage of time and aging of the veteran population; recognize and evaluate the basic differences between the disability evaluation policy of VA and that of other federal agencies for comparable disabilities; and formulate proposals for the refinement of the schedule on the basis of these estimates and evaluations. To determine the average impairment in earning capacity resulting from specific service-connected conditions on the rating schedule, the ECVARS calculated the difference between the median earnings of veterans on the VA disability rolls, grouped by their disability’s diagnosis, and the median earnings of veterans not on the rolls. The earnings of nondisabled veterans were used to approximate what the earnings of disabled veterans would have been if they did not have their disability. To estimate the average loss in earnings experienced by disabled veterans as a result of their specific service-connected condition, all disabled veterans on the disability rolls at that time were stratified into groups by the diagnosis assigned to their disability. While all disabled veterans in strata that contained 500 or fewer veterans were selected for this study, samples of disabled veterans were drawn from strata that contained more than 500.
Sample sizes for each stratum ranged from about 200 to about 1,900 veterans. In total, 485,000 of the approximately 2 million veterans who were receiving disability compensation when this study was done were chosen to participate. Not included were female veterans on the disability rolls, veterans with multiple disabilities, and veterans whose VA disability compensation was based on the 1925 schedule. The ECVARS’ estimates of the median earnings of nondisabled veterans were based on the earnings of a sample of noninstitutionalized, nondisabled veterans selected from lists of individuals in the general population that the Bureau of the Census was using at that time to draw samples for its ongoing Current Population Survey. In total, approximately 14,000 nondisabled veterans were chosen for this survey. The ECVARS did not validate all diagnoses on the schedule, nor did it validate each individually. Diagnoses that accounted for very small numbers of veterans on the VA disability rolls at that time were excluded from the study. Diagnoses with fewer than 200 veterans and similar symptoms were combined and validated as a single diagnosis. Diagnoses accounting for at least 200 veterans were validated individually unless they were what VA referred to as “adequately represented” by another diagnosis or group of diagnoses, in which case they were not validated. The ECVARS validated about 400 diagnosis strata, each containing at least one diagnosis from the schedule. The ECVARS used a mail survey to collect data on earnings from disabled and nondisabled veterans. The Bureau of the Census administered this survey for VA. Census mailed out a total of approximately 500,000 questionnaires in February 1968, which asked the veterans for data on earnings and other characteristics during the prior year. Census mailed out two additional follow-up questionnaires to nonrespondents and conducted telephone and face-to-face interviews to obtain data from those who did not respond to the mail questionnaire. Data collection was completed in the first quarter of fiscal year 1969. In addition to data on earnings, the ECVARS collected data on the age, education, and geographic residence of veterans. The age variable was split into four categories—under age 30; ages 30 to 49; ages 50 to 64; and age 65 and over. Education was classified as less than a high school graduate, high school graduate, and 1 or more years above high school graduate. There were two categories for the regional variable—the South and all other geographical regions. When calculating the difference between the earnings of the disabled and nondisabled, each diagnosis stratum was paired with a unique “control” group that contained nondisabled veterans who were equivalent with respect to age, education, and region of residence to the disabled veterans in that diagnosis stratum. By controlling for the influence of these other variables, the study attempted to isolate the effect that the service-connected condition alone had on earnings. The ECVARS calculated a separate estimate of loss in earnings for each rating level associated with a specific diagnosis stratum. Study results were presented in terms of disabled veterans’ annual dollar loss in earnings, disabled veterans’ median percentage loss in earnings relative to the median earnings of nondisabled veterans, and disabled veterans’ median loss in earnings relative to the median earnings of production workers. 
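As a simplified sketch of how these measures relate, using notation of our own rather than the study’s, the loss estimates for a given diagnosis stratum and rating level can be written as:

\[
\text{annual dollar loss} = M_{\text{nondisabled}} - M_{\text{disabled}}, \qquad
\text{percentage loss} = \frac{M_{\text{nondisabled}} - M_{\text{disabled}}}{M_{\text{nondisabled}}} \times 100
\]

where \(M_{\text{disabled}}\) is the median earnings of the disabled veterans in the stratum and \(M_{\text{nondisabled}}\) is the median earnings of the matched control group of nondisabled veterans with the same age, education, and region-of-residence profile. The third measure reported by the ECVARS expresses the same dollar loss relative to the median earnings of production workers rather than of nondisabled veterans.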
Ovary, removal of: for 3 months after removal - 100; thereafter, complete removal of both ovaries - 30; removal of one with or without partial removal of the other - 0 (review for entitlement to special monthly compensation under 3.350 of this chapter). Reference is to 38 C.F.R. parts 0-17 (1995).
[Table III.3: Changes in Disability Ratings—Number of Diagnoses Changed in Each Body System, by Type of Change. The table covered the genitourinary (27 diagnoses), oral/dental (13 diagnoses), gynecological (17 diagnoses), and hemic/lymphatic (11 diagnoses) body systems and all 68 diagnoses combined; the table’s data values are not reproduced here.]
The following individuals made important contributions to this report: Connie D. Wilson, Senior Evaluator, collected a major portion of the evidence presented; Timothy J. Carr, Senior Economist, reviewed the literature on the relationship between disability and earnings and provided advice on methodology; Steven Machlin, Statistician, provided guidance on research design and statistical methods; and Stefanie Weldon, Attorney, served as legal advisor.
Pursuant to a congressional request, GAO provided information that would enable the subcommittee to assess the need for a comprehensive study of the economic validity of the Department of Veterans Affairs' (VA) disability rating schedule, focusing on: (1) the basis for the disability ratings assigned to conditions in the current schedule; (2) socioeconomic changes that have occurred since the original version of the schedule was developed that may have influenced the earning capacity of disabled veterans; (3) the results of a previous study that examined the validity of ratings in the schedule; (4) VA's efforts to help ensure that the ratings do reflect disabled veterans' average impairment in earning capacity; and (5) the advantage of basing ratings in the schedule on actual loss in earnings, and approaches that could be used to estimate this loss. GAO found that: (1) the disability ratings in VA's current schedule are still primarily based on physicians' and lawyers' judgments made in 1945 about the effect service-connected conditions had on the average individual's ability to perform jobs requiring manual or physical labor; (2) although the ratings in the schedule have not changed substantially since 1945, dramatic changes have occurred in the labor market and in society since then; (3) the results of an economic validation of the schedule conducted in the late 1960s indicated that ratings for many conditions did not reflect the actual average loss in earnings associated with them; (4) it is likely that some of the ratings in the schedule do not reflect the economic loss experienced by veterans today; (5) the schedule may not equitably distribute compensation funds among disabled veterans; (6) VA has done little since 1945 to help ensure that disability ratings correspond to disabled veterans' average loss in earning capacity; (7) despite the results of the economic validation study, VA's efforts to maintain the schedule have concentrated on improving the appropriateness, clarity, and accuracy of the descriptions of the conditions in the schedule rather than on attempting to ensure that the schedule's assessments of the economic loss associated with these conditions are accurate; (8) basing disability ratings at least in part on actual earnings loss rather than solely on judgments of loss in functional capacity would help to ensure that veterans are compensated to an extent commensurate with their economic losses and that compensation funds are distributed equitably; (9) GAO's work demonstrates that there are generally accepted and widely used approaches to statistically estimate the effect of specific service-connected conditions on veterans' average earnings; (10) these estimates could be used to set disability ratings in the schedule that are appropriate in today's socioeconomic environment; and (11) it could cost between $5 million and $10 million to collect the data that produce these estimates, a small fraction of the over $11 billion VA paid in disability compensation to veterans in fiscal year 1995.
NRC’s Office of Nuclear Reactor Regulation provides overall direction for the oversight process, and the Office of Enforcement is responsible for ensuring that appropriate enforcement actions are taken when performance issues are identified. NRC’s regional offices are responsible for implementing the ROP, along with the inspectors who work directly at each of the nuclear power facilities. NRC relies on on-site resident inspectors to assess conditions and the licensees’ quality assurance programs, such as those required for maintenance and problem identification and resolution. With its current resources, NRC can inspect only a relatively small sample of the numerous activities going on during complex operations. NRC noted that nuclear power facilities’ improved operating experience over more than 25 years allows it to focus its inspections more on safety-significant activities. One key ROP goal is to make safety performance assessments more objective, predictable, and understandable. The unexpected discovery, in March 2002, of extensive corrosion and a pineapple-size hole in the reactor vessel head—a vital barrier preventing a radioactive release—at the Davis-Besse nuclear power facility in Ohio led NRC to re-examine its safety oversight and other regulatory processes to determine how such corrosion could be missed. Based on the lessons learned from that event, NRC made several changes to the ROP. NRC continues to assess the ROP annually by obtaining feedback from the industry and other stakeholders, such as public interest groups, and incorporates this feedback and other information into specific performance metrics to assess the program’s effectiveness. In anticipation of licensing new reactors, NRC has accelerated its efforts to build up its new reactor workforce. NRC’s workforce has grown from about 3,100 employees in 2004 to about 3,500 employees as of August 2007, and NRC projects that its total workforce will need to grow to about 4,000 employees by 2010. NRC estimates that the first few COL applications will require about 100,000 hours of staff review and has identified around 2,500 review activities associated with each application’s detailed safety, environmental, operational, security, and financial information, which may total several thousand pages. NRC anticipates that for each application, the review process will take 42 months—including 30 months for its staff review, followed by approximately 12 months for a public hearing. In addition to the COL, NRC has established (1) the design certification, which standardizes the design of a given reactor for all power companies using it, with modifications limited to site-specific needs, and (2) an early site permit, which allows a potential applicant to resolve many preliminary siting issues before filing a COL application. Electric power companies plan to use five different reactor designs in their COL applications. In implementing its ROP, NRC oversees the safe operation of nuclear power facilities through physical inspections of the various complex plant equipment and operations, reviews of reactor operator records, and quantitative measures or indicators of each reactor’s performance. (See table 1 for a more expansive treatment of these tools.) These tools are risk-informed in that they focus on the aspects of operations considered most important to safety.
NRC bases its oversight process on the principle and requirement that licensees have programs in place to routinely identify and address performance issues without NRC’s direct involvement. Thus, an important aspect of NRC’s inspection process is ensuring the effectiveness of licensee programs designed to identify and correct problems. On the basis of the number and risk significance of inspection findings and performance indicators, NRC places each reactor unit into one of five performance categories on its action matrix, which corresponds to graded, or increasing, levels of oversight. NRC assesses overall facility performance and communicates the results to licensees and the public on a semiannual basis. From 2001 through 2005, the ROP identified performance deficiencies through more than 4,000 inspection findings at nuclear power facilities. Ninety-seven percent of these findings were designated green—very low risk to safe facility operations, but important to correct. Two percent (86) were white findings that were considered to be of low to moderate risk significance. Twelve findings were of the highest levels of risk significance—7 yellow and 5 red. More recently, from January 2006 through June 2007, NRC identified an additional 1,174 green findings, 27 white findings, 1 yellow finding, and no red findings. NRC also reviews performance indicators data—used to monitor different aspects of operational safety—that facility operators report to categorize the level of reactor unit performance for each indicator. From 2001 through June 2007, NRC reported that less than 1 percent of over 39,000 indicator reports exceeded acceptable performance thresholds and nearly half of all reactor units have never had a performance indicator fall outside of the acceptable level. Through June 2007, 3 of the 16 performance indicators have always been reported to be within acceptable performance levels—measuring the amount of time that the residual heat removal safety system is unavailable, monitoring the integrity of a radiation barrier, and monitoring radiological releases. Since 2001, three reactor units have reported a yellow indicator for one performance indicator. No red indicators have ever been reported. For varying periods from 2001 through 2005, on the combined basis of inspection findings and performance indicators, NRC has subjected more than 75 percent of the reactor units to oversight beyond the baseline inspections. While most reactors received the lowest level of increased oversight through a supplemental inspection, five reactors were subjected to NRC’s highest level of oversight. Reactor units in this category were generally subjected to this higher oversight for long periods due to the more systemic nature of their performance problems. Currently, 1 unit is receiving the highest level of oversight by NRC, and 10 units at 6 facilities are receiving the second level of oversight. NRC inspectors at the facilities we reviewed indicated that when a reactor unit’s performance declines it is often the result of deficiencies or ineffectiveness in one or more of the three cross-cutting areas—problem identification and resolution, human performance, and a safety-conscious work environment. 
NRC inspectors cited examples of possible cross-cutting issues: (1) a facility does not have an effective corrective action program that appropriately identifies and resolves problems early; (2) a facility employee has not followed correct maintenance procedures, and NRC made a finding associated with the human performance area; and (3) facility management is complacent, not paying attention to detail or adhering to procedures. Our examination of ROP data found that all reactor units that NRC subjected to its highest level of oversight had findings related to one or more of these substantive cross-cutting issues. In addition, recent NRC inspections have found more problems associated with these cross-cutting issues, in part because of new guidance for identifying and documenting them. Our 2006 report found that NRC has generally taken a proactive approach to continuously improving its oversight process, in response to recommendations that grew out of the Davis-Besse incident; independent reviews; and feedback from stakeholders, including its regional and on-site inspectors, that is usually obtained during NRC’s annual self-assessment of its oversight process. Continued efforts will be needed to address other shortcomings or opportunities for improvement, however, particularly in improving its ability to identify and address early indications of declining safety performance at nuclear power facilities. For the most part, NRC considers these efforts to be refinements to its oversight process, rather than significant changes. Specific areas that NRC is addressing include the following: To better focus efforts on the areas most important to safety, NRC has formalized its process for periodically revising its inspection procedures. In particular, NRC completed substantive changes to its inspection and assessment program documents—including those currently guiding the highest level of NRC inspections—to more fully incorporate safety culture. To address concerns about the amount of time, level of effort, and knowledge and resources required to determine the risk significance of some inspection findings, NRC has modified its significance determination process, which, according to NRC’s 2006 self-assessment, has significantly improved timeliness. To address concerns that performance indicators did not facilitate the early identification of poor performance, NRC has modified several indicators to make them more risk-informed for identifying the risks associated with changes in the availability and reliability of important safety systems. In addition, NRC revised an indicator to more accurately reflect the frequency of events that upset reactor unit stability and challenge critical safety functions. NRC is considering options for revising indicators for emergency preparedness and reactor cooling systems. Both NRC’s 2006 self-assessment and internal staff survey cited the need to further improve the performance indicators and their associated guidance. Although NRC and others have long recognized the effects of a facility’s safety culture on performance, NRC did not undertake efforts to better incorporate safety culture into the ROP until 2005, when it formed a working group to lead the agency’s efforts. To date, the group has completed guidance for identifying, addressing, and evaluating cross-cutting issues specific to safety culture.
Our 2006 report concluded that NRC’s efforts to incorporate safety culture into the ROP may be its most critical future change to the ROP and recommended that NRC aggressively monitor; evaluate; and, if needed, implement additional measures to increase the effectiveness of its initial safety culture changes. We also recommended that NRC consider developing specific indicators to measure important aspects of safety culture through its performance indicator program. While NRC has largely implemented initial safety culture enhancements to the ROP that primarily address cross-cutting issues, it does not plan to take any additional actions to further implement either recommendation before it completes its assessment of an 18-month implementation phase at the end of this year. This assessment will include lessons learned that NRC managers have compiled since July 2006, including insights from internal and external stakeholders about the effectiveness of ROP enhancements. In addition, we recommended that NRC, in line with its desire to make the ROP an open process, make available additional information on the safety culture at nuclear power facilities to the public and its other stakeholders to provide a more comprehensive picture of performance. NRC has implemented this recommendation by modifying its ROP Web site to fully explain the review process regarding cross-cutting issues and safety culture, and now provides data and correspondence on the reactor units or facilities that have substantive open cross-cutting issues. NRC has prepared its workforce for new reactor licensing reviews by increasing funding for new reactor activities, reorganizing several offices, creating and partly staffing the Office of New Reactors (NRO), and hiring a significant number of entry-level and midlevel professionals. As of August 2007, NRC had assigned about 350 staff to NRO, about 10 percent of the total NRC workforce; however, some critical positions are vacant, and the office plans to grow to about 500 employees in 2008. To assist its staff in reviewing the safety and environmental portions of the applications, NRC plans to contract out about $60 million in fiscal year 2008 through support agreements with several Department of Energy national laboratories and contracts with commercial companies. NRC also has rolled out several new training courses, but it is still developing content for in-depth training on reactor designs. NRC is using a project management approach to better schedule, manage, and coordinate COL application and design certification reviews. While NRC has made progress, several elements of NRC’s activities to prepare its workforce are still under way, as the following illustrates: NRC has developed plans for allocating resources for a design certification application and an early site permit it is currently reviewing, 20 COL applications, 2 additional design certification applications, and a design certification amendment application. However, NRC has not yet developed specific criteria to set priorities for reviewing these applications if it needs to decide which applications take precedence. Without criteria, NRC managers are likely to find it more difficult to decide how to allocate resources across several high-priority areas. Accordingly, we recommended that NRC fully develop and implement criteria for setting priorities to allocate resources across applications by January 2008, which NRC has agreed to do. 
NRC is developing computer-based project management and reviewer tools to assist staff in scheduling and reviewing multiple applications at the same time. For example, Safety Evaluation Report templates are designed to assist COL reviewers by providing standardized content that will enable them to leverage work completed during the design certification review process. However, the implementation of this and other tools has been delayed. We recommended that NRC provide the resources for implementing reviewer and management tools needed to ensure that the most important tools will be available as soon as is practicable, but no later than March 2008, which NRC has agreed to do. NRO established a cross-divisional resource management board early in 2007 for resolving resource allocation issues if major review milestones are at risk of not being met. However, it has not clearly defined the board’s role, if any, in setting priorities or directing resource allocation. Because NRO expects to review at least 20 COL applications and 6 design certification, early site permit, and limited work authorization applications associated with its new reactor program over the next 18 months, it may not be able to efficiently manage thousands of activities simultaneously that are associated with these reviews. NRC managers we spoke with recognize this problem and plan to address it. We recommended that NRC clarify the responsibilities of NRO’s Resource Management Board in facilitating the coordination and communication of resource allocation decisions, which NRC has agreed to do. NRC has significantly revised most of its primary regulatory framework and review process to prepare for licensing new reactors. Specifically, NRC has revised and augmented its rules, guidance, and oversight criteria for licensing and constructing new reactors primarily to provide for early resolution of issues, standardization, and predictability in the licensing process. In making these changes, NRC has regularly interacted with nuclear industry stakeholders to determine which parts of an application’s technical and operational content could be standardized and to clarify guidance on certain technical matters. In addition, NRC just completed modifications to its acceptance review process to include an evaluation of the application’s technical sufficiency as well as its completeness and made internal acceptance review guidance available last week. While NRC has made progress in these areas, it has not yet completed some ancillary rules and regulatory guidance, or actions to implement certain review process components. For example, because NRC only recently solicited public comments to further update its environmental guidance, applicants may have more difficulty developing specific COL content for unresolved issues. In addition, while NRC proposed a rule to update physical protection requirements in September 2006, officials told us that it will not be made final until 2008. Furthermore, NRC’s limited work authorization rule, while substantially complete, will not be available in final form before October 2007. Lastly, NRC is revising its policy for conducting hearings on both the contested and uncontested portions of applications. In addition, NRC is refining its processes to track its requests for additional information to each applicant. In some instances, applicants using the same reference reactor design may be asked the same question, and one applicant may have already provided a satisfactory answer. 
With a completed tracking process, the second reviewer could access the previously submitted information to avoid duplication. We recommended that NRC enhance the process for requesting additional information by (1) providing more specific guidance to staff on the development and resolution of requests for additional information within and across design centers and (2) explaining forthcoming workflow and electronic process revisions to COL applicants in a timely manner. NRC has agreed to do so. In conclusion, the safe operation of the nation’s nuclear power facilities has always been of fundamental importance and has received even more emphasis recently as the nation faces an expected resurgence in the licensing and construction of new nuclear reactors to help meet our growing electricity needs. Our assessment of the ROP has found that NRC has made considerable effort to continuously improve its oversight activities and to prompt industry to make constant management improvements. However, while the current oversight process appears logical and well-structured, NRC recognizes the need to make further improvements in such areas as the timeliness of its significance determination process and the redefinition of some performance indicators. Regulating the often complex and intangible aspects of safety culture is clearly challenging. While NRC has taken some concrete actions to incorporate safety culture into the ROP and now has a structured process in place through its inspection program, we recommended that NRC continue to act to improve its safety culture efforts. NRC plans to evaluate the effectiveness of its current actions at the end of this year before considering any further implementation of our recommendations. We continue to believe that NRC needs to give this issue attention in further revising the ROP so that it can better identify and address early indications of declining safety performance at nuclear power facilities. NRC has made important strides in revising its regulatory framework and review process for licensing new nuclear reactors to improve timeliness and provide more predictability and consistency during reviews. Nevertheless, NRC’s workforce will face a daunting task in completing certain regulatory actions currently under way and implementing this new process as it faces a surge in applications over the next 18 months—the first of which has just been submitted. We identified four actions that NRC could take to better ensure its workforce is prepared to review new reactor applications and that its review processes more efficiently and effectively facilitate reviews, and NRC agreed to implement them. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or the other Members of the Subcommittee may have at this time. For further information about this testimony, please contact Mark Gaffigan, at (202) 512-3841 or by e-mail at [email protected]. Richard Cheston, Assistant Director; Sarah J. Lynch; Alyssa M. Hundrup; and David Stikkers made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Nuclear Regulatory Commission (NRC) is responsible for overseeing the nation's 104 commercial nuclear power reactors to ensure they are operated safely. Since 2000, NRC has used a formal Reactor Oversight Process (ROP) to oversee safety. NRC is also responsible for licensing the construction and operation of new reactors. Electric power companies have announced plans to submit 20 applications in the next 18 months. This testimony is based on GAO reports that reviewed (1) how NRC implements the ROP, (2) the results of the ROP over several years, (3) the status of NRC's efforts to improve the ROP, (4) NRC's efforts to prepare its workforce and manage its workload for new reactor licensing, and (5) NRC's efforts to develop its regulatory framework and review processes for new reactor activities. In conducting this work, GAO analyzed programwide information and interviewed cognizant NRC managers and industry representatives. In implementing its ROP, NRC uses various tools and takes a risk-informed and graded approach to ensure the safety of nuclear power facilities. The ROP primarily relies on physical inspections of equipment and operations and quantitative measures or indicators of performance at each facility to assess the status of safety and determine appropriate levels of oversight. Since 2001, NRC has made more than 4,000 inspection findings that reactor unit operators had not fully complied with safety procedures. Almost all of these findings were for actions NRC considered important to correct but of low significance to safe operations. As a result of NRC inspections, more than 75 percent of the nation's reactor units received some level of increased oversight while five units were subjected to NRC's highest level of oversight for long periods because their performance problems were more systemic. In 2006, GAO reported that NRC has generally taken a proactive approach to improving its ROP. However, concerted efforts will be needed to address shortcomings, particularly in identifying and addressing early indications of declining reactor safety performance. For example, NRC is implementing several enhancements to the ROP to better assess a facility's safety culture--organizational characteristics that ensure safety issues receive the attention their significance warrants. GAO made recommendations to further improve this effort, and NRC has taken initial steps to implement them. NRC has taken important steps to prepare its workforce for new licensing reviews, but several key activities are still underway and uncertainties remain about its management of the expected surge of applications. For example, NRC has increased funding, hired hundreds of new employees, and created and partly staffed a new office. However, NRC has not completed its development of some computer-based tools for enhancing the consistency and coordination of application reviews and has not fully developed criteria for setting priorities if the workload exceeds available resources. Also, while NRC's Office of New Reactors established a resource management board for coordinating certain office review activities, it has not clearly defined the extent of the board's responsibilities. NRC agreed with recommendations GAO made to further improve its workload management. NRC has revised most of its primary regulatory framework and review processes, including its rules, guidance, and oversight criteria to provide for early resolution of issues, standardization, and enhanced predictability. 
However, NRC has not yet completed some associated rules, guidance, and review process components, including revisions to its environmental guidance, its hearing process, and its process for requesting additional information from applicants. Without these components, expected efficiencies and predictability may be limited regarding the total time an applicant needs to obtain a license. NRC agreed with a recommendation GAO made to further improve its application review process.
The federal government supports two major loan programs for postsecondary students under Title IV of HEA: the Federal Family Education Loan Program (FFELP) and the William D. Ford Direct Loan Program (FDLP). In 2000, FFELP and FDLP provided approximately $23 billion and $10 billion, respectively, in loans and loan guarantees to postsecondary students and their parents. Both programs provide subsidized and unsubsidized Stafford loans, Parent Loans for Undergraduate Students, and Consolidation loans. Under the FFELP, private lenders, such as banks, provide loan capital. The federal government guarantees the loans but uses 36 guaranty agencies to administer many aspects of the program. With federal funding, these guaranty agencies generally provide insurance to the lenders for 98 percent of the unpaid amount of defaulted loans. The guaranty agencies also work with lenders and borrowers to prevent loan defaults and collect on the loans after default. In contrast, under the FDLP the federal government provides the loan capital to borrowers. For over a decade, GAO has included student aid programs on a list of “high-risk” federal programs. These programs are designated high-risk primarily because of deficiencies in Education’s maintenance of the financial and management information required to administer the student aid programs and the internal controls needed to maintain the integrity of the programs. Over the years, Education has addressed many of the high-risk issues identified by GAO; however, these long-standing conditions continue to plague the student aid programs. To achieve FFEL program and cost efficiencies, and to improve the availability and delivery of loans, the VFA legislation of 1998 authorized VFAs between Education and the state-designated guaranty agencies. The VFA legislation restricted Education to six VFAs through fiscal year 2001, and as of January 2002, Education had entered into agreements with four guaranty agencies. Five other guaranty agencies applied for VFAs but either were not selected or failed to reach agreement with Education (see table 1). Since the beginning of fiscal year 2002, Education has had the authority to enter into VFAs with all of the guaranty agencies. In May 1999, Education officials discussed VFAs with guaranty agency representatives who were attending a conference hosted by the National Council of Higher Education Loan Programs, Inc. Two months later, notice of invitation for any of the 36 guaranty agencies to apply for a VFA appeared in the Federal Register. The Register Notice included five “criteria” Education planned to use in its evaluation of the proposals for the VFAs, including (1) how the agency’s proposed VFA could be extrapolated and easily used by other FFEL participants; (2) how the proposal would improve the “system” for delivering and servicing of loans for borrowers and schools; (3) if and how the proposal uses new technology; (4) the impact the proposal would have on overall operating costs for the agency and its partners, including Education; and (5) a description of any proposed waiver of the prohibited inducement restrictions (prohibited inducements are efforts by guaranty agencies to encourage schools, borrowers, or lenders to submit applications for loan guarantees through direct or indirect premiums, payments, or, for example, uncompensated services such as loan processing services normally performed by lenders). The VFA development process did not fully meet the needs of guaranty agencies and other program participants.
Most of the guaranty agency officials we talked to indicated frustration with one or more steps of the process, which began when Education invited all guaranty agencies to submit VFA proposals. Guaranty agency officials were particularly dissatisfied with Education’s lack of communication about the VFA development process and its inability to meet its own timetable. Program participants other than guaranty agencies, such as representatives of lender and loan servicing groups, said that the opportunities for examining the proposed agreements were insufficient. Also, these program participants criticized Education for not using a more formal process for determining VFA selection criteria and inviting VFA proposals. In response to these criticisms, Education explained that some of the delay in the VFA development process was the result of broader changes at Education and turnover of key staff assigned to the VFA project. Additionally, Education noted that it had taken extra actions—such as posting the draft agreements to an Internet site—to facilitate public comment on the VFA draft agreements. In commenting on a draft of the report, Education also noted that some guaranty agencies and other program participants that we consulted had been opposed to the VFA legislation from its inception. According to the guaranty agency officials we talked to, after the invitation process, Education did not communicate adequately with guaranty agencies once it fell behind schedule. Most of these guaranty agency officials, including those that were generally supportive of Education, expressed a variety of concerns about Education’s communication efforts during the VFA development process. For instance, several guaranty agencies indicated a need for more information on Education’s methodology for analyzing the projected federal program costs of the VFAs, or on Education’s five criteria for selecting the VFAs. Furthermore, the established timetable was not met. Education indicated it would select the six initial guaranty agencies within two weeks after the application deadline of August 27, 1999, but the notice of selections did not occur until February 2000. Education set December 1, 1999, as the target date for signing the VFAs; however, the first VFA was not signed until November 2000 and the other three were not signed until March 2001. Guaranty agency officials told us that criticisms of Education’s failure to meet its own timetable would have been somewhat mitigated if Education had done a better job in communicating the status of the VFAs to the guaranty agencies. In response to these criticisms, Education officials explained that the process was hampered by organizational changes and staff turnover that occurred during the VFA development process. For instance, officials told us that delays were partially the result of Education’s decision to place a higher priority on developing regulations for implementing other 1998 HEA amendments and on reorganizing the Office of Student Financial Assistance as a performance-based organization. Education officials also indicated that turnover of key personnel assigned to the VFA project, as well as disagreements within Education concerning, for example, evaluations of the costs of the VFAs, contributed to the delays in the VFA development process.
Although Education provided opportunity for public comment, program participants other than guaranty agencies—for instance, representatives of lender groups such as the Consumer Bankers Association—said these opportunities were insufficient. Education posted each draft agreement for about a 2-week period on the Internet in order to allow interested third parties the opportunity to comment on the agreements. However, some third parties told us that information available on the Web site was insufficient to evaluate the draft agreements and that Education did not provide responses to those who commented on the draft agreements. In response to this, Education officials told us that the Internet posting was not required by the VFA legislation, but that they did so to increase opportunities for public comment. Additionally, Education staff have recently begun meeting with a variety of student loan industry participants to discuss ongoing VFA concerns. Program participants other than guaranty agencies also criticized Education for not using a more formal process in determining VFA selection criteria and inviting VFA proposals. A couple of third-party participants we talked to said the selection criteria should have been developed through a rulemaking process similar to that used to develop federal regulations. Another participant said that VFA proposals should have been solicited through a more formal process, such as those used in federal contracting procedures. According to Education, however, because the agreements were specifically authorized by statute and involved state-designated, not competitively selected, entities, Education was not subject to the legal requirements applicable to the rulemaking process and was not required to use the more formal contracting process. In commenting on a draft of this report, Education noted that some guaranty agencies and third-party program participants had been opposed to the VFA legislation from its inception, and not surprisingly continued to be dissatisfied with the implementation of the VFAs. VFA provisions complied with most of the legislative requirements. For instance, we found that as required by the VFA legislation, the agreements made no changes to the statutory terms and conditions of the loans. However, we were not convinced that the agreements conform to the requirement that the projected program cost to the government not increase due to the VFAs. For one VFA, Education projected federal program costs would increase each year of the 3-year analysis period. Furthermore, the agreements would appear to have violated the cost requirement had Education’s cost determination been based on a different time period or had the analyses incorporated changes in assumptions about certain factors, such as default rates. The authorizing statute specifies, “in no case may the cost to the Secretary of the agreement, as reasonably projected by the Secretary, exceed the cost to the Secretary, as similarly projected, in the absence of the agreement.” Education’s budget service analyzed each of the four VFAs in the course of Education’s negotiations with the guaranty agencies and concluded that each agreement met the requirement. However, Education’s analysis of the Texas VFA projected that federal costs would increase by an average of about $1 million a year. Budget service staff indicated that they regarded this amount as insignificant compared with total federal cash flows being estimated.
Education’s estimates for the Texas agency show that the projected amount of collections on defaulted loans less federal program costs is an average of $161 million per year over fiscal years 2001 to 2003. An alternative basis of comparison could be to use the projected net amount of the agency’s receipts from federal sources and its retentions of collections (an average of $71 million per year over the same time period). In either case, the projected increase is not consistent with the VFA legislative requirement that the projected federal program costs not increase due to the VFA. Our review of Education’s analyses raised two additional questions about Education’s conclusion that the VFAs would not increase projected federal costs. Costs considered for first 3 years only. First, Education based its conclusion on projected costs for only the first 3 years, while Education’s projections show that costs for three VFAs would increase substantially in years 4 and 5. As table 2 shows, during the first 3 years, only the Texas agreement (discussed above) was projected to cause an increase in federal costs. By including projections for the fourth year or for both the fourth and fifth year, however, costs for three of the four VFAs would rise, with costs for the Texas and Great Lakes guaranty agencies rising substantially. These increases would occur as the size of these guaranty agencies’ loan volumes and the cumulative size of their portfolios increase. Education officials and Office of Management and Budget officials said they took this approach because they viewed the VFAs as demonstration programs of limited duration to be evaluated by the Congress during the next reauthorization of HEA. This act is due for reauthorization at the end of fiscal year 2003. Although the American Student Assistance VFA specifies a termination date at the end of fiscal year 2003, the other three agreements have no specified termination date. They each remain in effect until either the guaranty agency or Education chooses to withdraw with advance written notice. Effects of changes in performance not adequately considered. Second, budget service officials reached their conclusions about the cost effects of the VFAs using a set of base-year assumptions that did not adequately consider the effect of changes in guaranty agency performance—that is, they assumed that such things as default rates, collection rates, and delinquency rates would remain unchanged in future years. The VFAs were designed to improve guaranty agency performance, and under the agreements, doing so would mean higher payments to the guaranty agencies for their improved performance. Thus, analyzing the proposed payment structures to estimate how such improvements would affect net federal costs—in the form of lower default rates, for example—seems warranted. However, according to budget service officials, this happened in only one case and to a limited extent. In that particular case, budget service staff analyzed the effect of a decline in loan defaults for the California VFA, and their estimates illustrate the importance of considering the effect of changes in guaranty agency performance on federal costs. A provision in the California VFA provides an incentive payment to the guaranty agency for achieving lower default payments. At the time this VFA was being developed, Education staff calculated that California’s fiscal year 1998 “trigger default rate” was 3.1 percent compared with the aggregate national rate of 2.9 percent.
In an effort to encourage the California guaranty agency to reduce its trigger default rate, the VFA provides for a payment from Education to the guaranty agency equal to half of the amount of claims payments avoided by having a trigger rate below 3 percent. Budget service staff then analyzed the effects of a decline in trigger default rates below 3 percent to see how large the payment would be if the agency was able to reduce its trigger default rate that much. Education found that the payment to California would be greater than the savings from the reduced defaults—and thus would result in increases in federal costs. However, in doing their formal analysis of the California VFA, budget service staff did not include the results of their default analysis and instead assumed no change in the base-year 3.1 percent trigger default rate; thus, as table 2 shows, there are no projected increases or decreases in the costs for that VFA. Subsequently, California’s trigger default rate did drop below 3 percent—down to 2.6 percent for fiscal year 2001. Our analysis based on Education’s estimates shows that the California guaranty agency’s fiscal year 2001 trigger rate of 2.6 percent entitles it to a VFA incentive payment of about $17.3 million—an amount approximately $2.6 million greater than the estimated total the government saved due to the lower volume of defaulted loans. Because there were no other projected cost considerations for this VFA, the decline in loan defaults under the VFA resulted in an increase in projected net federal costs. Appendix II discusses this analysis in more detail. All four agreements contain provisions for incentive payments for improved guaranty agency performance, and all four grant waivers of certain statutory and regulatory requirements. For the most part, these changes are designed to enhance agency performance, such as by reducing delinquencies and defaults, while increasing guaranty agency efficiencies. At the same time, however, guaranty agencies without VFAs told us that they have efforts under way to improve their agencies’ performance—efforts that did not require the incentive payment structure or waivers granted for the VFA agencies. The VFAs establish incentive payments that reward a guaranty agency for better performance. The use of these incentive payments offers an alternative to the traditional guaranty agency payment structure—a structure some participants describe as containing a perverse payment incentive for the guaranty agencies. Under the traditional payment structure that continues to be used for the non-VFA agencies, it is financially more beneficial for a guaranty agency to allow borrowers to default on their loans and to subsequently collect on the loans than to prevent defaults in the first place. A guaranty agency currently retains 24 percent of the money that it recovers from borrowers whose loans are in default—that is, the borrowers who are more than 270 days behind in making payments. According to some guaranty agency officials, this percentage is typically higher than a guaranty agency’s actual cost of collecting on defaulted loans. As a result, a non-VFA guaranty agency has more financial incentive to “allow” borrowers to default than to prevent the default upfront. Three of the four VFAs have incentive provisions that reduce the guaranty agencies’ share of collections on defaulted loans. To compensate for this lower collection retention rate, the VFAs have enhanced incentives for better performance.
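The California example described above can be recapped with simple arithmetic. The dollar amounts are those reported in this section; treating the reported payment as exactly half of the avoided claims payments, and the difference between the payment and the government’s estimated savings as the net cost effect, is an illustrative simplification of Education’s more detailed estimates:

\[
\text{incentive payment} = 0.5 \times \text{claims payments avoided} \approx \$17.3 \text{ million}
\]
\[
\text{net change in federal costs} = \text{incentive payment} - \text{estimated government savings} \approx \$17.3 \text{ million} - \$14.7 \text{ million} \approx \$2.6 \text{ million increase}
\]

In other words, because the government’s estimated savings from the lower volume of defaulted loans were smaller than the incentive payment owed to the agency, the improvement in the trigger default rate increased projected net federal costs.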
For example, the American Student Assistance VFA reduces the collection retention rate from 24 percent to 18.5 percent for regular collections on defaulted loans in exchange for potentially greater incentive payments for lower defaults. To implement such incentive provisions, VFA agencies have created programs aimed at improving their performance, particularly in the areas of reducing delinquencies and net defaults. For example:

To help borrowers with defaulted loans, American Student Assistance created Bright Beginnings. This program focuses on providing support to the borrowers and finding solutions to loan default instead of making payment demands and threatening sanctions for nonpayment, such as wage garnishment and negative reports to credit bureaus. Help may involve, for example, working with the borrowers on a strategy to get the education or training necessary to obtain employment that would provide the income needed to repay their loans. Additionally, the program points out to borrowers the advantages of making payments on their loans. For example, if borrowers make nine consecutive monthly payments, they will be eligible for rehabilitation, a process by which the guaranty agency sells the defaulted loan back to a lender. Rehabilitation is important because, in addition to being current on their loan payments, the borrowers become eligible for additional Title IV student financial aid.

To avert defaults by borrowers who withdraw from school without completing their educational program, the California Student Aid Commission is planning an early-withdrawal counseling program. Individuals who withdraw from school early are at high risk of defaulting on their loans, and the Commission believes that early intervention by the guaranty agency is more likely to result in the borrowers being able to avoid default. Under current regulations, a guaranty agency provides default aversion assistance to borrowers only after they become 60 or more days delinquent on their loan. Under the early-withdrawal counseling program, the Commission will contact borrowers as soon as they withdraw from school. The program plans to educate borrowers through a variety of services and provide information about their responsibilities and options for avoiding default.

To help keep delinquent borrowers from defaulting, Great Lakes Higher Education Guaranty Corporation and the Texas Guaranteed Student Loan Corporation are both requiring lenders to submit requests for default aversion assistance between the 60th and the 70th day of delinquency. Under current regulation, lenders can submit such a request as soon as the 60th day or as late as the 120th day of delinquency. Great Lakes and Texas guaranty agency officials believe that by helping to contact delinquent borrowers earlier, they have a better chance to prevent defaults.

The statutory and regulatory waivers granted under VFAs attempt to improve guaranty agency performance in two ways—by eliminating duplicate or less effective fiscal, administrative, and enforcement requirements and by substituting more efficient and effective alternatives. For example:

The Great Lakes VFA allows for the elimination of some duplicative collection efforts that lenders or loan servicers and the guaranty agency are both required to perform when a borrower becomes delinquent. Officials from Great Lakes explained that they were concerned that the duplication of effort can be confusing and unnecessarily frustrating to borrowers.
The American Student Assistance VFA grants authority to replace certain administrative requirements for collection efforts on defaulted loans with new, more targeted approaches. Current regulations specify in considerable detail what collection actions must be taken and during what time periods. For example, after 45 days of delinquency, the guaranty agencies must "diligently attempt to contact the borrower by telephone." Between 46 and 180 days of delinquency, the agencies must "send at least three written notices to the borrower forcefully demanding immediate commencement of repayment." Under the VFA, American Student Assistance has flexibility to develop procedures it considers to be more efficient, utilizing best practices common to the financial services industry. Agency officials told us of plans to study borrower behavior to determine the characteristics of borrowers that are most apt to respond to particular default aversion or collection efforts.

Guaranty Agencies without VFAs Also Taking Steps to Improve Performance

While VFAs represent a new approach to such matters as reducing perverse payment incentives and allowing guaranty agencies to be more innovative in efforts to prevent defaults, they are not the only avenue through which important attempts are being made to seek improvements and innovations in the FFEL program. Guaranty agencies without VFAs are introducing efforts to reduce delinquencies and defaults. Some of the non-VFA guaranty agency officials we contacted indicated that they were uncertain that VFAs are needed in order to improve performance. They believe their mission provides sufficient motivation to increase efforts to prevent defaults by, for example, devoting more resources to work with delinquent borrowers and improving the exchange of information between guaranty agencies, lenders, schools, and Education. They also said that any innovations in customer service could be accomplished under current regulations. For example, the largest guaranty agency, USA Funds, Inc., is working in cooperation with other guaranty agencies on electronic data exchange and electronic signature authority. The agency is also implementing a program to provide students with current and historical student financial aid information from guarantors, lenders, and secondary markets, as well as to deliver services over the Internet. For most of the guaranty agencies, the trend in recent years has been a decline in default rates. As figure 1 shows, trigger default rates decreased steadily through fiscal year 2000. The reasons for this reduction are likely multiple, including a low unemployment rate (giving more people jobs to pay off their student loans) resulting from generally favorable economic conditions during that period. Although many observers also credit the decline to the effect of more diligent or effective efforts by guaranty agencies, how much these efforts have contributed is unclear. We were not able to identify any study that has isolated the effects of these influences on default rates. Education is not fully prepared to evaluate the results of the VFA agreements. The agreements went into effect without Education having developed a clear way to measure changes in guaranty agency performance. For example, Education does not have a way to uniformly measure satisfaction among the agencies' customers.
Furthermore, it cannot adequately determine what has happened as a result of the VFAs through, for instance, comparisons with the results of past efforts to cure delinquent loans and comparisons of the results of similar efforts by other guaranty agencies. For the latter, a commonly used measure is the “cure rate” (the rate at which guaranty agencies and lenders keep borrowers who are delinquent in their payments from defaulting on their loans). This measure currently varies from guaranty agency to guaranty agency. It is likely to be difficult to distinguish the results of the VFAs from the effects of other factors, such as the general condition of the economy, but without uniform measures the task becomes even more difficult. To measure and compare the benefits that result from VFAs, Education needs uniform performance measures. The data Education routinely collects from guaranty agencies will provide several comparable measures of guaranty agencies’ performance, such as certain default rates and the delinquency status of guaranteed loans in repayment. According to an Education official, Education is working with a consulting firm to develop additional evaluation measures. Additionally, in commenting on a draft of this report, Education noted that it is establishing common measures to evaluate the performance of each VFA. These measures should provide useful data for comparing non-VFA and VFA guaranty agencies. However, other measures of VFA guaranty agency performance might not be as easily compared across the guaranty agencies. For example, Education currently lacks a means of calculating the cost of the VFAs. Specifically, it cannot calculate the amount by which VFA provisions increase federal payments to the VFA agencies, because it does not have a way to determine the amount of default aversion fees that each agency would have received in the absence of the VFA agreements. Also, the Great Lakes guaranty agency plans to measure VFA performance, in part, by measuring customer satisfaction. However, according to guaranty agency and Education officials, no effort is under way to measure other guaranty agencies’ customer satisfaction in a similar manner, thus making comparisons difficult. Another example is the lack of uniformity in calculating a cure rate. Although two of the VFAs specify cure rates as performance measures, these two guaranty agencies calculate cure rates differently and another guaranty agency uses a third method to calculate a cure rate. A uniformly calculated cure rate could be a useful indicator of guaranty agencies’ success in preventing defaults for loans that are prone to default (delinquent loans). The current inconsistencies in methods of calculating cure rates make systematic evaluation of VFA results difficult. The VFA legislation required that Education report on the status of the VFAs, including a description of the standards by which each agency’s performance under the agreement was assessed and the degree to which each agency achieved the performance standards. Additionally, Education was required to include an analysis of the fees paid by the secretary, and the costs and efficiencies achieved under each agreement. The report was due no later than September 30, 2001; however, as of this time, no report has been issued. The VFA development process did not fully meet the needs of the guaranty agencies or other program participants. 
Despite circumstances at Education that hampered VFA development, such as turnover of key staff, Education might have been able to develop the VFAs with fewer frustrations had officials better communicated with participants, particularly with respect to how the cost projections were done. Additionally, a more realistic initial timetable might have lessened some of the criticism from guaranty agency officials. Education's evaluation of the cost effects of the current agreements raises concerns about whether the federal program costs of the current VFAs will grow in the years ahead to the point that they exceed projected costs in the absence of the agreements. In particular, we question the time period Education used for making the cost estimates and the fact that Education did not generally consider potential changes in agency performance in making those estimates. Although projected cost increases were relatively small in comparison with the total amount of program costs during the first 3 years, estimates for years 4 and 5 showed substantial growth. Also, the general lack of a more thorough analysis of VFA costs—including an analysis of how factors such as changing default rates might change projected costs—could leave the government vulnerable to greater than projected costs for the VFAs. VFAs are principally aimed at improving guaranty agency performance through innovative incentive payment structures and through waivers to statutory and regulatory procedures that might be hampering agency performance. To that end, the VFAs afforded the guaranty agencies the opportunity to try new ways of operating. Whether the incentive payments and waivers used by the VFA agencies improve guaranty agency performance more than the self-initiated efforts of the non-VFA agencies remains to be determined. Measuring the benefits of the VFAs is central to deciding whether more VFAs should be entered into and whether current VFA practices should be replicated at other guaranty agencies. We found that Education is not fully prepared to evaluate the success of VFAs in part because it does not have adequate standardized performance measures, such as delinquent loan cure rates. Without adequate performance measures, Education is not well positioned to judge the success or failure of the VFA provisions. To improve the VFA development process for any future VFAs, we recommend that the secretary of Education develop (1) a plan to more regularly communicate with guaranty agencies concerning the status of VFA development efforts, including disclosing to program participants the planned methods for projecting the federal program cost effects of VFAs, and (2) a timetable for selection, negotiation, and completion of agreements based on experience developing the first four VFAs.
In order to ensure that all VFAs are in compliance with statutory requirements, we recommend that the secretary of Education renegotiate the Texas VFA as soon as practicable to obtain changes necessary to ensure that the VFA does not increase projected federal costs; renegotiate the California VFA as soon as practicable to obtain changes necessary to ensure that the VFA does not increase projected federal costs, with or without changing the trigger default rate; renegotiate the Great Lakes and American Student Assistance VFAs for time periods after fiscal year 2003 to ensure that the VFAs do not increase projected federal program costs; and improve projections of the cost effects of renegotiated VFAs and any future VFA proposals by (1) requiring that each VFA specify an effective time period, (2) conducting a cost analysis covering that period, and (3) conducting analyses to project the cost effects of changes in assumptions regarding guaranty agency performance, such as default rates, in making the cost projections. To ensure that the results of the VFAs can be effectively evaluated, we recommend that the secretary of Education develop specific evaluation plans enabling Education to compare VFA guaranty agency performance with past performance and the performance of other guaranty agencies using uniformly defined performance measures, including delinquent loan cure rates. We provided a draft of this report to Education for comment. In its response, Education indicated that it had a number of concerns about the report. Education stated that our mention of GAO’s designation of the student financial assistance programs as “high-risk” (in the Background section) was beyond the scope of our review and that it detracts from the analysis in the report. We disagree. The report contains analyses and descriptive information on many aspects of the FFEL program, which provided approximately $23 billion of loans for postsecondary students in fiscal year 2000. The mention of the student loan programs as high risk and the ensuing discussion are important to help establish the significance that any changes—including the VFAs—might have on the program. Regarding the development of the VFAs, Education said that it appears that our conclusions were based primarily on conversations with individual guaranty agencies that did not apply for a VFA and representatives of various interest groups, many of which had consistently opposed the VFAs. In fact, as indicated in our report, our conclusions are largely based on comments from representatives of 18 guaranty agencies—including representatives from all four agencies with VFAs; representatives from those agencies that had unsuccessfully sought a VFA; representatives from agencies that did not seek a VFA, but may wish to in the future; and representatives from agencies that had opposed the VFA legislation from the beginning. 
Concerning the cost effects of the VFAs, Education stated that it had, in keeping with its standard procedures for estimating costs, (1) used a closed time period (in this case, 3 years) to project costs; (2) not considered the impact of possible changes to borrower or institutional behavior in projecting costs; and (3) appropriately treated the $1 million per year projected cost increase for the Texas VFA as "insignificant." First, in looking at the 3-year time period, Education said that its conclusions about the cost effects of the VFAs were appropriately limited to the first 3 years because there was no reason to expect that the agreements would necessarily remain in effect beyond the time period for reauthorization of HEA, which may bring changes that could alter any cost analyses. We agree that the projected increases in federal costs in the fourth and fifth years would not be relevant if the current agreements no longer remain in effect after the end of fiscal year 2003. However, since three of the VFAs are open-ended, there is reason to believe they could extend beyond 3 years. Therefore, to ensure that projected federal costs do not increase due to the VFAs, Education would need to renegotiate the VFAs for the time period beyond 3 years. Education's statement that "GAO's interpretation of the statute as requiring strict 'cost neutrality' over a long period of time is not supported in the statute or the legislative history" is incorrect. We did not interpret the statute in this manner. Instead, our reading of the statute is that the period of time to be examined should correspond to the projected life of the agreement. As mentioned above, three of the agreements we reviewed were for an open-ended period of time. Education chose a 3-year period for its cost analysis, which is within its discretion and not inconsistent with the statute. However, the report was intended to make clear that, given the open-ended nature of the agreements, a decision by Education not to terminate the agreements after 3 years would warrant a reassessment of the cost projections and a renegotiation of the agreements, if necessary. Second, Education stated that it does not base cost estimates on behavioral assumptions that cannot be supported by available data. We agree that this is appropriate for baseline estimates; however, one of the purposes of the VFAs is to improve guaranty agency performance, and thus the cost effects of potential improvements need to be considered in Education's cost projections. Accordingly, we recommended that Education supplement baseline estimates with sensitivity analyses in order to avoid provisions that increase federal costs when an agency's performance improves, by reducing default rates, for example. Third, with respect to Education's assertion that the projected increase of $1 million per year for the Texas VFA is "insignificant," we disagree. Education based its assertion on a comparison of the $1 million to the total federal cash flows being estimated. The projected amount of collections on defaulted loans less federal program costs averaged $161 million per year for the 3-year period—an amount lower than the "hundreds of millions of dollars per year" Education cited in its comments. Additionally, an alternative basis of comparison could be to use the projected net amount of the agency's receipts from federal sources and its retentions of collections (an average of $71 million per year for the 3-year period).
In either case, the projected increase is not consistent with the VFA legislative requirement that the projected federal program costs not increase due to the VFAs. Regarding preparations to evaluate the VFAs, Education said that it is establishing common, general measures to evaluate the performance of each VFA and, whenever possible, to compare VFA guaranty agency performance with other non-VFA guaranty agencies. Education noted that it has had preliminary discussions with representatives of the 36 guaranty agencies regarding uniform performance measures. Also, it noted that the guaranty agencies are in the process of establishing an eight-member task force to assist in determining the specific formulae for measuring VFA performance. As our report indicates, Education does currently have several possible uniform measures of agency performance. We welcome its efforts to develop additional measures, but conclude that a uniform cure rate measure would assist in evaluating the performance of the VFAs, considering that two guaranty agencies with VFAs specifically identified a cure rate as a performance indicator. We reviewed these and additional Education comments and modified the draft as appropriate. Education’s comments are included in appendix IV. We are sending copies of this report to Honorable Roderick R. Paige, secretary of Education; appropriate congressional committees; the guaranty agencies with VFAs; and other interested parties. Please call me at (202) 512-8403 if you or your staff have any questions about this report. Key contacts and staff acknowledgements for this report are listed in appendix V. As agreed with your office, we focused our review of voluntary flexible agreements (VFA) on addressing the following questions: 1. To what extent did the VFA development process meet the needs of guaranty agencies and other program participants? 2. To what extent do VFAs comply with requirements in the VFA legislation? 3. What changes are being implemented under the VFAs? 4. How well prepared is Education to assess the effects of the VFAs? To determine the extent to which Education’s VFA development process met the needs of guaranty agencies and other program participants, we interviewed Education officials involved in the development of the VFAs, officials at each of the nine guaranty agencies that submitted an application for a VFA, and nine guaranty agencies that did not submit applications. The nine guaranty agencies that did not submit applications included the five guaranty agencies with the largest amounts of loan guarantees and four randomly selected smaller guaranty agencies that did not submit applications. We also reviewed VFA proposals and comments Education received during the public comment period. To determine the extent to which the VFAs complied with statutory requirements we reviewed the VFA agreements, provisions of the Higher Education Act (HEA) concerning the Federal Family Education Loan Program (FFELP), and related regulations. We also discussed the agreements with Education and guaranty agency officials, and representatives of industry associations including the National Council of Higher Education Loan Programs, Inc. and the Consumer Bankers Association. To review Education’s methods for projecting the costs of the VFA agreements, we examined computerized schedules Education used to project each VFA guaranty agency’s costs and financial data compiled by Education staff from submissions by the guaranty agencies. 
We also discussed these projections with Education’s budget service staff and Congressional Budget Office and Office of Management and Budget officials. To identify changes being implemented under the VFAs we reviewed the VFAs and discussed them with the guaranty agency officials and reviewed documents they provided concerning their programs. In addition, to determine how well prepared Education is to identify the effects of the VFAs, we discussed plans for evaluation of the VFAs with guaranty agency officials and Education officials responsible for collecting and analyzing data from guaranty agencies. On the basis of budget service subsidy rate estimates, we projected the level of noninterest federal costs for $263 million of loans—the amount of loans that would default if the California guaranty agency’s trigger default rate were 3 percent in fiscal year 2001. As shown in table 3 below, we estimated the net federal costs of these loans (excluding interest subsidy costs that the budget service indicated would not be affected) under four different scenarios: (1) a 3 percent trigger default rate with all $263 million of these loans defaulting without the VFA in effect, (2) a 3 percent trigger default rate with the VFA in effect, (3) a 2.6 percent trigger default rate with $228 million of the $263 million of loans defaulting without the VFA in effect, and (4) a 2.6 percent trigger default rate with the VFA in effect. As shown in table 3, federal noninterest costs for these loans would be about $107 million under either scenario 1 or scenario 2. In scenario 3, federal costs would decline by about $15 million to $92 million as trigger basis defaults decline from 3 percent to 2.6 percent. Under scenario 4, however, Education would benefit from lower loan defaults, but it would also have to pay the California guaranty agency half of the $34.7 million reduction in the amount of claims payments to lenders (a $17.3 million VFA fee). Because the VFA fee exceeds the benefit Education would realize from the lower level of defaults, federal costs would increase by an estimated $2.6 million. The VFA default rate incentive payment, one-half of the claims payments avoided with a trigger default rate below 3 percent, was identified in the VFA agreement as “50% of the savings in claim payments resulting from its default aversion activities under this VFA.” This calculation, however, fails to take into account two potentially significant factors. First, the federal cost of loan default is mitigated in part by subsequent collections on the defaulted loan. If the guaranty agency receives payment on a loan after the loan defaults it generally is allowed to retain 24 percent of the amount collected. The remaining 76 percent must be remitted to Education. Budget service staff looked to see how the present value of these payments would affect the present value of program costs for Subsidized Stafford loans. They concluded that the federal cost (aside from federal administrative costs) on a subsidized Stafford loan that defaults is on average about 47.5 percent of the amount of the loan. The comparable figure for the same loan without default, but with the VFA incentive payment was 51.7 percent. In other words, the incentive payment to California’s guaranty agency exceeded the present value of the federal cost of the default adjusted for the subsequent collections on the loan. 
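To make the arithmetic behind the scenario comparison above easier to follow, the sketch below reproduces the relationships described in this appendix using only the rounded dollar figures reported in the text and table 3. It is an illustrative calculation, not Education's cost model, and because the inputs are rounded the computed results differ slightly from the report's estimates of about $17.3 million and $2.6 million.

```python
# Illustrative sketch of the California VFA incentive arithmetic described above.
# All dollar figures (in millions) are the rounded values reported in the text and
# table 3; rounding makes the computed results differ slightly from the report's
# estimates ($17.3 million incentive payment, $2.6 million net cost increase).

defaults_at_3_0_pct = 263.0   # loans projected to default at a 3.0 percent trigger rate
defaults_at_2_6_pct = 228.0   # loans projected to default at the actual 2.6 percent rate
claims_avoided = defaults_at_3_0_pct - defaults_at_2_6_pct   # about $34.7 million before rounding

# VFA provision: Education pays the agency half of the claims payments avoided.
incentive_payment = 0.5 * claims_avoided

# Education's estimated noninterest federal costs for these loans (table 3).
cost_at_3_0_pct = 107.0         # scenarios 1 and 2, with or without the VFA
cost_at_2_6_pct_no_vfa = 92.0   # scenario 3, lower defaults, no VFA
savings_from_fewer_defaults = cost_at_3_0_pct - cost_at_2_6_pct_no_vfa

# Net effect at the lower default rate: the incentive payment exceeds the savings,
# so projected federal costs rise rather than fall (scenario 4).
net_cost_increase = incentive_payment - savings_from_fewer_defaults

print(f"Incentive payment: ${incentive_payment:.1f} million")
print(f"Savings from fewer defaults: ${savings_from_fewer_defaults:.1f} million")
print(f"Net increase in projected federal costs: ${net_cost_increase:.1f} million")
```

The same comparison underlies scenario 4 in table 3: the roughly $17.3 million payment to the guaranty agency more than offsets the approximately $15 million the government saves from the lower volume of defaulted loans.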
Instead of benefiting from fewer defaults of loans guaranteed by the California guaranty agency, Education stands to benefit from increases in defaults until the guaranty agency's trigger default rate reaches 3 percent. Above that point, the guaranty agency would not receive an incentive payment and Education would not benefit from higher levels of defaults. The second reason for questioning the provision's definition of federal cost savings resulting from the VFA's default aversion activities is that the entire decline in default costs may not be solely attributable to the VFA. Default rates change for many reasons. According to guaranty agency and Education officials, declines in default rates are due to such factors as a change in the definition of default from 180 to 270 days of delinquency brought about by the VFA legislation, increased default aversion assistance activities by all guaranty agencies, enhancements in loan servicing methods, and a prosperous economy. The VFA incentive payment to California rewards the guaranty agency for any decline in default rates, whether it is due to VFA-prompted efforts or to other factors. As shown in figure 2 below, guaranty agencies have generally seen declines in trigger default rates. Guaranty agencies that received VFAs and guaranty agencies that did not both saw declines in default rates from fiscal year 1997 to fiscal year 2000, with increases in fiscal year 2001. For example, the largest guaranty agency, USA Funds, Inc., had a higher default rate than California's in fiscal years 1997 and 1998. However, by fiscal year 2001, its default rate was slightly lower than California's. In addition to the individuals named above, Jonathan H. Barker, Daniel R. Blair, Christine E. Bonham, Richard P. Burkard, Timothy A. Burke, Aaron M. Holling, Stanley G. Stenersen, and James P. Wright made key contributions to this report.
The relationship between the Department of Education and state-designated guaranty agencies that run the largest federal student loan program is changing in order to achieve program and cost efficiencies and improve delivery of student financial aid. These state or private not-for-profit agencies guarantee payment if students fail to repay loans obtained through the Federal Family Education Loan programs. The 1998 Amendments to the Higher Education Act authorize the Secretary of Education to enter into "voluntary flexible agreements" (VFA) with individual guaranty agencies. These agreements allow a guaranty agency to waive or modify some of the federal requirements that apply to other guaranty agencies. GAO found that the process for developing the agreements did not fully meet the needs of the guaranty agencies and other program participants. The process frustrated guaranty agency officials GAO talked to, especially those who ultimately chose not to apply for a VFA and those who were not granted a VFA. Agency officials said that Education's communication about the VFA development process was poor and that Education was unable to meet its own timetable. The VFAs generally complied with most of the legislative requirements. However, one of the four agreements does not conform to the requirement that projected federal program costs not increase due to the agreements. The key changes implemented under the VFAs include incentive pay structures for guaranty agencies and waivers of certain statutory and regulatory requirements. Each VFA contains provisions for paying the guaranty agency incentive amounts on the basis of specific performance measures, such as default rates. Education is not prepared to assess the effects of VFAs because it lacks a way to adequately measure changes in guaranty agency performance. The lack of uniform measures makes it difficult to distinguish the results of the VFAs from the effects of other factors, such as the general condition of the economy. Although the Department was required to report on the status of the VFAs by September 30, 2001, no report has been issued so far.
In pursuing its mission of aiding small businesses, SBA provides small businesses with access to credit, primarily by guaranteeing loans through its 7(a) and other loan programs, and provides entrepreneurial assistance through partnerships with private entities that offer small business counseling and technical assistance. SBA also administers various small business procurement programs, which are designed to assist small and small disadvantaged businesses in obtaining federal contracts and subcontracts. In addition, SBA makes loans to businesses and individuals trying to recover from a disaster. As figure 1 shows, SBA has experienced many organizational changes over the past 20 years, partly due to changing the way it delivers its services and partly due to budget cuts. Perhaps the largest change to SBA's service delivery has occurred in its lending programs, where the agency went from making loans directly to guaranteeing loans made by commercial lenders. SBA provides small businesses with access to credit, primarily by guaranteeing loans through its 7(a) and 504 programs. For the 7(a) program, SBA can guarantee up to 85 percent of the amount of loans made by private lenders to small businesses. Within the 7(a) program, for smaller loans, SBA offers SBA Express as an option to lenders that agree to a lower guaranty of 50 percent in exchange for using their own applications and underwriting procedures. Within the 7(a) program, there are three classifications of lenders—regular, certified, and preferred lenders—that illustrate the range of responsibilities handed over to lenders. SBA continues to provide final approval of loans made by its regular lenders through the district offices. Certified lenders have the authority to process, close, and service SBA-guaranteed loans and may also liquidate them, and SBA provides them expedited loan processing and servicing. Preferred lenders are given full authority to make loans without prior SBA approval. However, these lender-approved preferred loans are submitted to SBA's Sacramento Processing Center, which, among other things, verifies that the lender has documented eligibility requirements, issues a loan number, and processes the loan guaranty. Under the 504 program, SBA provides its guaranty through certified development companies—private nonprofit corporations—that sell debentures that are fully guaranteed by SBA to private investors and lend the proceeds to qualified small businesses for acquiring real estate, machinery, and equipment, and for building or improving facilities. When a 7(a) or 504 loan defaults, SBA reviews the lender's request for SBA to purchase the guaranty, and if the lender met SBA's program requirements, SBA pays the claim. SBA usually relies on the lender to recover as much as it can by liquidating collateral, or SBA takes over the loan servicing and liquidation itself. SBA's loan programs have also been the focus of a major organizational change with the creation of centers to process and service the majority of SBA's loans—work once handled largely by district office staff. (See fig. 1.) About 92 percent of the processing and servicing of SBA-guaranteed loans is handled in centers instead of district offices. Among other things, these centers process the loan guaranty and review servicing requests submitted by lenders and borrowers.
In response to budget reductions, SBA streamlined its field structure during the 1990s, downsizing the 10 regional offices, moving the workload to either district offices or headquarters offices, and eliminating most of the regions' role as an intermediate management layer between headquarters and the field. SBA created the Office of Field Operations to take over the role of intermediary. SBA's overall workforce has decreased by over 20 percent since 1992 and as of 2002 includes about 4,075 employees, including 956 for the Office of Disaster Assistance and 102 for the Office of the Inspector General. When SBA embarked on this current transformation effort, it planned its implementation in three phases. The key pilot initiatives SBA undertook in phase 1, which began on March 10, 2003, focus on (1) transforming the role of the district office to focus on outreach to small businesses about SBA's products and services and linking these businesses to the appropriate resources, including lenders, and (2) centralizing its loan functions to improve the efficiency and consistency of its loan approval, servicing, and liquidation processes. Later phases will include expanding these pilots to the remaining district offices. As SBA proceeds in transforming the district offices and centralizing many of its processes, it will analyze its business processes to identify opportunities for improvement and reduce its office space to achieve some cost savings. Finally, SBA's plan included initiatives to apply technology and use the Internet to reach out to more small businesses. As part of the first phase of SBA's transformation, the agency began implementing pilot initiatives to test a new marketing focus for its district offices and to centralize some of its loan functions. As the first phase nears completion, SBA has made some progress in implementing the pilot initiatives at three district offices and two centers. While SBA's implementation efforts have been and could continue to be impeded by budget constraints, we found that the agency did not always clearly communicate its budget requirements. SBA's centralization efforts could also be impeded by the challenge of realigning staff from multiple field offices so that it can operate its central locations with experienced employees. SBA's purpose for transformation is to realign its organization, operations, and workforce to better serve its small business customers. Based on SBA transformation documents and discussions with agency officials, the agency planned to approach its transformation in phases to allow it to test a number of initiatives and to make refinements before implementing the initiatives agencywide. In our July 2002 testimony on SBA's workforce transformation plan, we noted that SBA had started to develop a sound implementation plan for its transformation. As part of phase one, SBA intended to test a new marketing and outreach initiative for its district offices that would refocus their efforts on becoming more responsible and accountable for promoting small business growth and development as well as on providing better oversight and management of its lenders and resource partners. Additionally, SBA planned to centralize a number of the offices' loan functions to (1) free up district office staff to reach and respond to the needs of local businesses and to do more lender and partner management and oversight and (2) improve the efficiency and consistency of its loan processing, servicing, and liquidation functions.
To accomplish these initiatives, in March 2003, SBA began its initial pilot initiative at three district offices and two centers, and based on its initial transformation plan, it expected to run the pilots for 6 months before moving to the second phase of its transformation. As of our report date, SBA is nearing the completion of phase one of its district office and centralization pilots and plans to expand the results of phase one to all of its other district offices. Based on our site visits to the pilot offices and discussions with SBA headquarters officials, we identified a number of transformation-related activities that SBA has made progress in since implementing its initial pilot initiative. Specifically, for its district office initiative, SBA took the following steps:

To prepare staff to carry out their new marketing and outreach roles, during March through June 2003, SBA provided training at the three district office pilots on topics such as marketing and outreach, presentation skills, and customer/partner relationships.

To develop the competencies necessary for staff to carry out their new roles and to evaluate gaps in the existing skill sets of its staff, SBA hired a contractor to conduct a skills analysis. In July 2003, the contractor completed the analysis for the three pilot district offices, and according to SBA officials, district office management will use the results to identify its employees' developmental needs in the marketing and outreach areas.

To update and clarify the specific duties that SBA expects its district office staff to perform in their new marketing roles, the agency developed new job descriptions for its marketing and outreach specialist positions at the district office level.

To allow staff at the three pilot district offices more time to conduct marketing and outreach functions, in March 2003, these offices stopped processing any new 7(a) liquidations and guaranty purchase cases and 504 loan origination applications. In addition, the offices also transferred most of their outstanding 7(a) liquidation cases to SBA's liquidation center in Santa Ana, California.

Additionally, a key component of SBA's transformation is to make fundamental changes over the next 5 years at its district office level to reflect the change in the agency's vision for its district offices from making and servicing loans to primarily reaching out to new markets and overseeing its private-sector partners. Based on our site visits to the three pilot district offices, we found that the offices have begun to move toward SBA's new vision for its district offices. Specifically, SBA's Phoenix, Arizona, office has officially changed its organization structure to allow its staff to focus mainly on marketing and outreach-related activities. As shown in figure 2, the office has replaced its portfolio management division with divisions for lender development and marketing and outreach, and it also moved some staff formerly in portfolio management to its business development division. The Miami and Charlotte district pilots have also started to expand their marketing and outreach efforts. For example, a Charlotte official told us that the office plans to use "SBA Days" as a way to reach out to small businesses in its district. SBA Days are events conducted at local chambers of commerce around the district's state where SBA staff along with chamber members and other firms in the area conduct one-on-one counseling sessions with business owners and potential entrepreneurs.
To reach small businesses in the Miami area, officials told us that the office is using one of its resource partners to work with a national chain of office supply stores to provide on-site counseling to small business customers when they are in the stores. SBA headquarters officials provided us with briefing slides that show that the three district office pilots have submitted proposals for establishing alternative customer service sites so that SBA employees can provide direct customer service in areas outside the physical location of the district offices. For example, the Phoenix district office already has one marketing specialist located in Tucson and is proposing two additional positions to support lender relations. Officials also told us they are working with local governments and resource partners to identify free office space for these new sites, but in some cases there may be some rental expenses. Finally, as part of its centralization initiative to improve the efficiency and consistency of its loan approval, servicing, and liquidation processes, in March 2003, SBA’s two pilot centers assumed their new roles and responsibilities. The liquidation center in Santa Ana, California, began processing new 7(a) liquidations and guaranty purchase cases from the three pilot district offices, and the loan processing center in Sacramento, California started processing new 504 loan origination applications from these offices. In May 2003, the Santa Ana center also started working on 1,275 outstanding 7(a) liquidation cases from the three pilot district offices. Based on SBA’s benchmark reporting data for its centralization pilot, as of October 2003, the Santa Ana center had processed 185 of 227 new 7(a) guaranty purchase cases it had received and closed 55 of 450 7(a) liquidation cases. The Sacramento center had processed 582 new 504 applications that it had received since beginning the pilot initiative. According to SBA and representatives from two lender trade associations, the centralization pilot has resulted in a more efficient and consistent processing of SBA’s 7(a) liquidation and guaranty purchases and 504 loan approvals. SBA headquarters officials told us that the agency would be able to perform these functions with far fewer resources than it has to date. According to the officials, based on results from a workload analysis SBA did of the Santa Ana centralization initiative, it found that the 7(a) liquidation and purchase guaranty process could be done by 40 employees in a center, as opposed to the 266 employees that now process the cases in its district offices. SBA officials also told us that centralization results in faster processing times. SBA data indicate that the average turnaround time for processing 7(a) guaranty purchases has decreased from 129 days to 32 days and, for 504 applications, it has gone from about 14 days on average to about 2 days. We reviewed about 450 cases of the 504 application approvals from the pilot and found that most applications were processed and returned to the certified development companies in about 2 days. We did not review data for any of the other measures. When we visited the two centers participating in the pilots, center officials showed us documentation they were using to make the process more efficient and consistent. For example, for the 504 pilot, the Sacramento center developed standardized letters to send to certified development companies in situations where the center receives an incomplete application package from a company. 
According to a center official, some district offices spend a lot of time making telephone calls to the development companies requesting the necessary data to complete the processing. However, by using the letters, the official said the center is saving time because it stops processing the application until it receives the needed information, and in the meantime it can continue processing applications that are complete. One official representing certified development companies told us that the companies participating in the pilot initiative for SBA’s 504 program are pleased with the results of the pilot. Officials representing 7(a) lenders said that some lenders might not be in favor of centralization because they have good working relationships with the local SBA district office and would prefer to continue working directly with them. SBA transformation efforts have been impeded and could continue to be impeded by budget uncertainties and constraints. SBA officials stated that due to inflation and increases in employee compensation and benefits, available operating funds had been declining since 2001 as shown in figure 3. Therefore, SBA requested specific funding for its transformation. According to SBA officials, the agency expected to start its pilot initiative in July 2002 with funds from its 2002 operating budget and then expand the initiative in phase two of its transformation, 6 months later, with funds specifically requested for transformation in its 2003 budget. But SBA delayed the start of the pilot until March 2003 due to a number of uncertainties about its budget. SBA officials explained that language in its appropriations bills requires that SBA notify the appropriations committees 15 days prior to reprogramming its funds for relocating an office or employees, or reorganizing offices. In the summer of 2002, SBA notified the appropriations committees about its intent to go forward with the pilots. However, SBA was told that it should first negotiate with its union before moving forward. Although SBA reached agreement with its union, starting the initiative still remained an issue for SBA because, according to officials, it was too late to use 2002 operating funds as it initially planned. While SBA then planned to use 2003 operating funds to start the pilot initiative, officials said that the government’s 2003 continuing resolution further delayed the start because without an approved operating budget, SBA did not know the portion of its operating budget that would be available for transformation. For its 2003 budget, Congress did not approve any of the $15 million that SBA specifically requested for transformation activities planned for phase two, and SBA officials told us they do not believe the agency will receive any of the $21.1 million for transformation in its 2004 budget request. According to officials, SBA has had to change its transformation plans and the level of funding associated with these plans because of its shrinking operating budget and the lack of specific appropriations for transformation. Specifically, officials stated that SBA actually spent $96,000 in 2003 operating funds on the first phase of its transformation for activities associated with its pilot initiative, including shipping files, training, travel, and pilot office evaluations. 
Officials could not tell us how much money SBA initially planned to spend in phase one when it was going to use 2002 operating funds or whether any of the activities associated with this phase had to be cut back due to the lack of funds. However, many employees in the district offices we visited told us that they had not received the level of funding needed to support marketing and outreach functions including money for travel, laptops, and cell phones that would allow them to cover a wider geographic area in the districts and to test telecommuting and alternative work sites. Although SBA struggled with budget uncertainties and constraints as it began implementation of its transformation, SBA could have provided better information about its budget requirements. Based on our analysis of SBA budget request data for fiscal years 2003 and 2004, SBA has not clearly defined its budgetary needs for transformation. As shown in figure 4, the labeling of specific transformation initiatives varies between SBA’s fiscal years 2003 and 2004 Budget Request and Performance Plans, making it difficult to compare and align its transformation activities from year to year. Also, as shown in figure 4, in its fiscal year 2004 budget request, SBA requested $21.1 million for a number of investment initiatives, of which $8.8 million was for transformation. The $8.8 million figure was also the amount cited by SBA’s Administrator during two congressional hearings. When we met with SBA headquarters officials to discuss the variances in its budget request data, the officials told us that SBA’s 2004 budget request for transformation is the entire $21.1 million, and not the $8.8 million. In response to our questions about the budget data inconsistencies, SBA officials attributed the differences to the agency’s changing environment. However, the inconsistencies we found in SBA budget request data and the lack of a detailed plan make it difficult for outsiders, including congressional stakeholders, to understand the direction SBA wants to take with transformation and the resources it needs to achieve results. To staff its centralization initiatives, SBA will have to relocate employees from its 68 district offices scattered throughout the country. Realigning staff from multiple field offices to central locations is and will be an ongoing challenge for SBA. Relocations could not only prove potentially disruptive for employees but can also have an effect on SBA’s operations by negatively impacting morale and productivity. As part of phase one of its transformation, SBA centralized a number of loan functions from the three pilot district offices to two of its existing loan processing and servicing centers. In phases two and three of its transformation, SBA had planned to expand its centralization initiative until all of its loan functions performed by its remaining 65 district offices were centralized. In addition, SBA had planned to have fewer centers by consolidating some of its existing ones. Based on our discussions with SBA staff in the pilot offices, the staffing of any centralization initiative with experienced staff could be potentially challenging for SBA. Specifically, some staff believed that the two pilot centers would not have a sufficient number of staff to handle the increased workloads when SBA expands its centralization initiative to include more district offices. 
According to one district office employee, unless the two pilot centers or any other center have enough staff with the right skill mix, they will be unable to adequately respond to lenders, which the employee believed could affect relationships between SBA and the lending community. One center official characterized the problem as fundamental because in his view staff are not all equally adept and SBA is faced with matching jobs with people who do not have the skills to do the work. An official representing one of SBA's lender trade associations also expressed concern that if SBA forced employees to move, the centralization initiatives would be staffed with employees with low morale, which could hurt productivity. SBA's first attempt to realign staff with one of its centralization initiatives was to establish a new 7(a) liquidation and guaranty purchase center near Washington, D.C., beginning in early October 2003 and to operate it with 40 liquidation staff relocated to the center from its district offices. Based on SBA transformation documents, SBA plans to relocate those staff with the greatest experience into the center to take advantage of their expertise. According to SBA officials, to identify experienced staff, the agency used results from a cost allocation survey that provided information on the amount of time district office staff spend on loan liquidation functions. On September 10, 2003, SBA sent notification letters to certain district office employees identified as having worked on liquidations, informing them that they were eligible for a monetary buy-out if they separated from federal service not later than September 30, 2003. While the letter also states that the employee has 7 calendar days to accept the buy-out offer, it is unclear how SBA would handle reassigning those staff who do not accept the buy-out offer. Specifically, the letter does not mention where staff are being assigned, or what relocation costs SBA would pay. According to the memorandum of understanding between SBA and its employees' union signed September 9, 2003, the two parties agreed that current district office staff at the GS-9 level and above who reported spending at least 25 percent of their time performing liquidations on SBA's most recent cost allocation study would be directly reassigned to the new liquidation center in the Washington, D.C., metropolitan area, or to one of the six most severely understaffed SBA district offices in New York, New York; Newark, New Jersey; Atlanta, Georgia; Chicago, Illinois; and San Francisco and Los Angeles, California. The memorandum indicates that SBA identified the six offices based on staffing levels for those district offices with the lowest ratio of SBA staff to small businesses in their service area, as of August 1, 2003. Also, the memorandum states that SBA plans to begin relocating staff 30 days from the time it notifies them about their reassignment to the center and that it will pay all of an employee's relocation cost in accordance with the law. While SBA has indicated that it will make reassignments as minimally disruptive for its employees as possible, depending on where the 40 staff being reassigned to the center currently work, logistical factors associated with moving, such as finding a new home, could pose a challenge for these staff. As of our review date, SBA had not informed us about when it expects to begin the reassignments or the number of and office locations for the employees that it intends to relocate.
We compared SBA's implementation process to practices that have been identified in major private and public sector organizational transformations as key for a successful transformation. Building on lessons learned from the experiences of large private and public sector organizations, these practices can help agencies successfully transform their cultures so that they can be more results oriented, customer focused, and collaborative. While SBA applied some key practices, such as involving top leadership, dedicating an implementation team, and developing an implementation plan, it also overlooked key aspects that emphasize transparency and communication. For example, although it developed a draft transformation plan with implementation goals and a timeline, it did not share the plan with employees and stakeholders. SBA developed strategic goals for transformation but still needs to link those goals with performance goals and its performance management system. Finally, SBA's communication approach did not encourage two-way communication to obtain feedback from employees and stakeholders, nor did it involve employees in ways that would capture their ideas and build their ownership of the transformation. According to key transformation practices, people are at the center of any change management initiative—people define the organization's culture, drive its performance, and embody its knowledge base. Experience shows that failure to adequately address—and often even consider—a wide variety of people and cultural issues is at the heart of unsuccessful transformations. Recognizing the "people" element in these initiatives and implementing strategies to help individuals maximize their full potential in the new organization, while simultaneously managing the risk of reduced productivity and effectiveness that often occurs as a result of the changes, is the key to a successful transformation. Thus, transformations that incorporate strategic human capital management approaches will help to sustain agency efforts to improve efficiency, effectiveness, and accountability in the federal government. We convened a forum on September 24, 2002, to identify and discuss useful practices and lessons learned from major private and public sector mergers, acquisitions, and transformations. The invited participants were a cross section of leaders who have had experience managing large-scale organizational mergers, acquisitions, and transformations, as well as academics and others who have studied these efforts. The forum neither sought nor achieved consensus on all of the issues identified through the discussion. Nevertheless, there was general agreement on a number of key practices that have consistently been found at the center of successful mergers, acquisitions, and transformations. In a follow-up report issued on July 2, 2003, we identified specific implementation steps for these key practices. These practices and implementation steps are shown in table 1. One of the key practices important to a successful transformation is for the agency to ensure that top leadership drives the transformation. SBA has followed this practice, with both the Administrator and the Deputy Administrator demonstrating support for the transformation.
The SBA Administrator has articulated the purpose of the agency and the goals of the transformation by addressing district directors and visiting field offices to discuss the importance and goals of transformation—to increase awareness of SBA’s services and to make SBA a better trained, better equipped, and more efficient organization. SBA officials told us that the Deputy Administrator has also visited many field offices to discuss the transformation. Designating a strong and stable implementation team that will be responsible for the transformation’s day-to-day management is also important to ensuring that transformation receives the focused, full-time attention needed to be sustained and successful. SBA has dedicated an implementation team to manage the transformation process, but the team has experienced leadership changes that were not made apparent to employees and stakeholders. The composition of the team is important because of the signal it sends regarding which organizational components are dominant and subordinate or whether the transformation involves a team of equals. Prior to the Deputy Administrator assuming the lead for implementing the transformation, the Chief Operating Officer was responsible. The Chief Operating Officer, along with SBA’s Associate Administrator for the Office of Field Operations, visited the pilot district offices during the kickoff to promote the transformation and to address questions and concerns of the pilot district office staff. However, the Chief Operating Officer left SBA shortly after the first pilot phase was initiated. Similarly, the person who was initially the Associate Administrator for the Office of Field Operations, who was responsible for overseeing the district office pilots, was no longer involved in the transformation shortly after implementation. SBA officials told us that it was not productive for its Chief Operating Officer to be in charge of the transformation because the Chief Operating Officer position was equal in terms of authority to the other key positions on the implementation team. Since the Chief Operating Officer left the agency, SBA has not publicly designated a day-to-day manager for the transformation effort. Based on our discussions with stakeholders and field and union officials, the Counselor to the Administrator appeared to be the manager. However, SBA has not issued any announcement or otherwise clarified the leadership or implementation team to employees and stakeholders. SBA officials told us that the person now serving as the Associate Administrator for the Office of Field Operations leads the weekly conference calls with the district and center directors involved in the pilots and is the person most involved in the day-to-day management of the transformation. The Deputy Administrator, who can direct the other members of the implementation team, leads the current team, which comprises senior executives of the key program areas affected by the transformation, such as the Associate Deputy Administrator for Capital Access, the Associate Administrator for the Office of Field Operations, the Chief Human Capital Officer, and the three pilot district office directors. The team also includes the Counselor to the Administrator and two Regional Administrators. Officials on the implementation team told us that they meet on a weekly basis with the Deputy Administrator and sometimes the Administrator to discuss the status of and concerns related to the pilots’ implementation.
SBA officials also emphasized that the implementation team includes a mix of political appointees and senior career officials. For example, the Associate Deputy Administrator for Capital Access and the Associate Administrator for the Office of Field Operations are political appointees. The Chief Human Capital Officer and the Counselor to the Administrator are career officials. A key practice in organizational transformations is to set implementation goals and a timeline to build momentum and show progress from day one. Although SBA had developed a transformation plan that contains goals, anticipated results, and an implementation strategy, it never made the plan public. SBA headquarters officials told us that all of the plans provided to us were “preliminary” documents because of changes made to the plan; therefore, the plan had not been shared with employees or stakeholders. Making the implementation goals and timeline public is important for transparency and accountability in a transformation and because employees and stakeholders are concerned not only with what results are to be achieved but also with how those results will be achieved. According to SBA’s draft transformation plan, SBA intended to keep its employees apprised of the current status of activities and to continuously inform them of what the agency intended to do. However, SBA has not made much information available to its employees and stakeholders regarding the details of upcoming steps, measures for success, and reasons for decisions. As a result, it appeared to many district office employees and stakeholders that headquarters lacked a plan and direction. Stakeholders, including representatives from lender trade associations, informed us that SBA has not been forthcoming in discussing its transformation plans with them. Generally, district office employees told us they thought SBA had no clear plan and lacked direction. Specifically, two district office employees told us that despite any planning that SBA had done for the transformation, headquarters officials kept adding to the plan and changing goals in midyear, which left employees in the district offices uncertain about what to expect. SBA officials told us that internal and external factors, such as budget uncertainties, caused SBA to alter aspects of the draft transformation plan. Initially, phases two and three of its transformation were to expand its district office and centralization pilot initiatives to additional district offices. SBA had also planned a number of other initiatives as part of the later phases, including analyzing its business processes to identify opportunities for improvement, restructuring its surety bond program, and expanding its technology systems. According to a revised plan dated August 1, 2003, and discussions with SBA officials, the focus of SBA’s transformation is now on creating a new center for centralizing all of its 7(a) loan liquidation and loan guaranty cases. Also, the plan and other documentation describing SBA’s new centralization initiative indicate that SBA’s reason for the initiative is to correct staffing imbalances at its district offices nationwide and to allow these districts to increase the number of people in the field offices who provide direct assistance to small businesses, including assistance in areas that have not had access to SBA services.
While SBA officials told us the focus of the transformation had changed, we had difficulty determining the extent of changes to the specific initiatives in its initial transformation plan, including to what extent SBA would test new marketing and outreach approaches, centralize other functions, and improve business processes. According to a senior SBA official, although there has not been a formal announcement about creating the liquidation center, he expected that staff would be aware that SBA was moving toward centralizing loan-related functions, based on the new marketing and outreach focus in the pilot district offices and because the union had been informed. Similarly, although SBA planned to evaluate the progress of its pilot initiatives, the evaluations provided to us have been limited to measuring the results of its centralization pilots and do not cover the results of the district office pilots or lessons learned from the implementation process. As a result, employees and stakeholders are uncertain about the results of the district office pilots. According to key transformation practices, it is essential to establish and track implementation goals to pinpoint performance shortfalls and suggest midcourse corrections. According to SBA transformation documents and officials, follow-up evaluations of its pilot initiatives were to take place after kickoff—every 90 days for the district office pilots and every 30 days for the center pilots—to evaluate the progress of the pilots and to monitor and validate the information SBA received. In addition, these reviews were intended to identify any problems related to the transformation process, as well as best practices, which would be documented and shared with the others in the pilot to improve efficiency and effectiveness. For its centralization initiative, SBA has established some evaluation standards—such as average turnaround and processing time for the centers—and has generated a benchmark report reflecting the results of these measures. While SBA gathered benchmark measurements to monitor progress in the district office pilots as part of its quality service reviews conducted in January 2003, it did not provide an evaluation of the results of the district office initiative. As of our report date, it is unclear to us whether SBA has begun or completed district office evaluations. SBA officials told us that they are working on developing a way to evaluate the impact of the district office pilots and to link their marketing and outreach focus with their existing performance goals, such as loan volume, so that they would have a road map of lessons learned to use when adding more district offices to the pilot. Establishing a coherent mission and integrated strategic goals is another key practice in organizational transformations. Although SBA has developed strategic goals to guide its transformation and included these goals in its fiscal year 2004 performance plan, SBA has not linked them with measurable performance goals that demonstrate the success of the agency’s expanded focus on marketing and outreach. Under the Government Performance and Results Act, agencies are required to develop annual performance plans that use performance measurement to reinforce the connection between the long-term strategic goals outlined in their strategic plans and the day-to-day activities of their staff and that include performance indicators that will be used to measure performance and describe how the performance information will be verified.
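For illustration only, average turnaround and processing time measures of the kind reflected in the center benchmark report described above can be computed from the request and decision dates in a tracking file. The sketch below uses hypothetical field names and dates rather than SBA’s actual data.

```python
# A minimal sketch of computing average turnaround (processing) time from a
# tracking file's dates, similar in spirit to the center benchmark measures
# described above. Field names and records are hypothetical.
from datetime import date

approvals = [
    {"received": date(2003, 5, 1),  "decided": date(2003, 5, 6)},
    {"received": date(2003, 5, 3),  "decided": date(2003, 5, 5)},
    {"received": date(2003, 5, 10), "decided": date(2003, 5, 17)},
]

turnaround_days = [(a["decided"] - a["received"]).days for a in approvals]
average_turnaround = sum(turnaround_days) / len(turnaround_days)
print(f"Average turnaround: {average_turnaround:.1f} days")  # 4.7 days
```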
District office employees we interviewed generally indicated an understanding of the strategic goals and the purpose of the transformation and had a sense of what the transformation is intended to accomplish. However, some district office employees told us that they did not know what the measures would be for determining whether the new marketing and outreach focus was successful, while others told us that they were unclear on how the district office staff should conduct marketing and outreach. SBA officials told us that the agency was still struggling with how to link its marketing and outreach focus with its existing performance goals, such as the number of loans made by lending partners. SBA currently uses quantitative measures, such as the number of jobs created, the number of loans made, and the dollar volume of loans, to determine how well it is achieving its strategic goals. SBA officials told us that SBA uses an Execution Scorecard, an Intranet-based system, as the internal management tool to track data on each district office’s performance goals and for monthly progress reviews with the Deputy Administrator on key initiatives, including transformation. According to an SBA official, the scorecard shows that loan volume in two of the three pilot district offices has increased more than in nonpilot district offices when compared with last year’s volume. However, we identified other factors that could have contributed to an increase in loan volume. For instance, policy changes made to the SBA Express program, which allows lenders to use their own documentation and applications, most likely also contributed to an increase in loan volume. In fact, other district offices not in the pilot have also seen an increase in loan volume. As a result, the scorecard may be of limited use in measuring success that can be directly attributed to the pilots’ marketing and outreach efforts. Using the performance management system to define responsibility and assure accountability for change is a key practice in organizational transformations. SBA has taken steps toward creating a performance management system that would define responsibility and set expectations for individuals’ roles in the transformed SBA. However, since SBA is still struggling with how to define measurable outcomes for the new marketing and outreach focus, its performance management system may also send a confusing or ambiguous message to employees. We previously reported that as agencies continued to shift toward a greater focus on results, they would need to make progress in connecting employee performance with agency success. An explicit alignment of daily activities with broader results helps individuals see the connection between their daily activities and organizational goals. According to SBA headquarters officials, SBA’s performance management system, modeled after IBM’s, would focus more on results than on activity. SBA officials told us that SBA implemented its performance management system for senior executives and supervisory staff in fiscal year 2003 and is implementing the system for its nonsupervisory staff beginning in fiscal year 2004. SBA officials provided us with documentation of the new position descriptions for the marketing and outreach positions that explained the duties and expectations. However, at the time of our review, SBA was still developing the performance standards and had not yet implemented them for nonsupervisory staff.
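As an illustration of the kind of comparison the Execution Scorecard supports, the sketch below computes year-over-year loan volume growth for pilot and nonpilot offices and compares the averages. The office names and figures are hypothetical, not SBA data, and the comparison is subject to the confounding factors noted above.

```python
# Sketch of a year-over-year loan volume comparison between pilot and
# nonpilot district offices, the kind of comparison the Execution Scorecard
# description above implies. Offices and figures are hypothetical.
offices = {
    "Pilot A":    {"pilot": True,  "last_year": 500, "this_year": 575},
    "Pilot B":    {"pilot": True,  "last_year": 400, "this_year": 440},
    "Nonpilot C": {"pilot": False, "last_year": 600, "this_year": 630},
    "Nonpilot D": {"pilot": False, "last_year": 300, "this_year": 312},
}

def growth(o):
    """Year-over-year growth rate in loan volume for one office."""
    return (o["this_year"] - o["last_year"]) / o["last_year"]

pilot_growth = [growth(o) for o in offices.values() if o["pilot"]]
nonpilot_growth = [growth(o) for o in offices.values() if not o["pilot"]]

print(f"Average pilot growth:    {sum(pilot_growth) / len(pilot_growth):.1%}")
print(f"Average nonpilot growth: {sum(nonpilot_growth) / len(nonpilot_growth):.1%}")
# Such a comparison does not control for other factors (for example,
# SBA Express policy changes) that could also raise loan volume.
```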
SBA recognized that it would need to provide training to help employees make the transition from their former areas of expertise to a new, broader, and, in some respects, more complex job. It was unclear what the linkage would be between these new job responsibilities, performance standards, agency performance goals, and the strategic goals for the transformation. District office employees who have been conducting new marketing and outreach activities told us that they were not sure how their performance would be measured because they had not received information on their performance management standards and were unclear as to how their job responsibilities would change or how they would be rated. Specifically, one district office employee told us that it was easy to measure loan specialist performance prior to the pilot because the standards were clear and concise—he knew from his own self-assessment where his performance stood—and that under the new performance management system, it would be harder to measure results because they are not tangible. In addition, another district office employee told us that although many employees see benefits to the transformation, they do not know how SBA will measure its progress toward reaching more of the public because employees do not understand what exactly they need to accomplish, such as the number of clients staff should contact or the number of marketing events staff should attend, to help SBA reach its goals. While establishing a communication strategy is a key practice in organizational transformations, SBA has not established an effective and ongoing communication strategy that would allow the agency to create shared expectations and report related progress to its employees and stakeholders. Organizations implementing transformations have found that communicating information early and often helps build an understanding of the purpose of planned changes and builds trust among employees and stakeholders. In particular, SBA does not have an effective communication strategy that reaches out to its employees and stakeholders to engage them in the transformation process, encourages two-way communication, and communicates early and often to build trust. A comprehensive communication strategy that reaches out to employees and stakeholders and seeks to genuinely engage them in the transformation process is essential to implementing a transformation. SBA officials acknowledged that it was important for headquarters to communicate with staff and address their concerns. However, when we reviewed SBA’s current methods of communication and asked employees in the pilot offices how they received information, we determined that communication is one-way, flowing through a chain-of-command model, newsletters, or rumors. Communication is not just about “pushing the message out”; it also involves facilitating an honest two-way exchange and allowing for feedback from employees and stakeholders. SBA officials told us that SBA headquarters disseminated information to employees through the regional administrators and the district directors and through a newsletter, The SBA Times. District office employees told us that they generally hear about transformation-related events either through their district director or their immediate supervisor, while other employees stated that they get most of their information through rumors. In addition, stakeholders also told us that they initially hear information through rumors.
For instance, a representative from a lender association informed us that the association gets information through rumors because SBA has not provided it with any information about the transformation. As we noted in an earlier report, it is important for stakeholders to be involved because it helps to ensure that resources are targeted at the highest priorities, and it creates a basic understanding among the stakeholders of the competing demands that confront most agencies, such as the limited resources available. It is also important to consider and use employee feedback and make any appropriate changes to the implementation of a transformation. According to union officials, SBA had set up an e-mail address in June 2002 to which employees could send their questions regarding the transformation. However, although staff submitted questions, district office staff told us they had yet to see a list of the questions or SBA’s responses. According to SBA officials, these e-mails were provided to senior management officials so that they could respond to the questions in conference calls held with field staff. The draft transformation plan we reviewed included a set of questions and answers about the transformation, but they were never made public. SBA officials told us that because all the transformation plans were preliminary, SBA had not drafted a thorough list of questions and answers and therefore had not made them available to employees. SBA also did not communicate sufficiently with its employees through its newsletters. The information on the transformation initiative found in SBA’s monthly newsletters from June 2002 through March 2003 reported on the status of the transformation effort, described the purpose of transformation, announced when the pilots began, and described them. We reviewed all of the newsletters issued after the kickoff of the pilots in March 2003 to see what kind of information was provided to SBA employees. With one exception, the newsletters had no information about the transformation or the creation of the new 7(a) Liquidation and Purchase Guaranty Center and SBA’s intention to reassign staff from overstaffed district offices to understaffed offices. The one transformation-related topic, included in a single issue, was a brief reference to the district office pilot in Phoenix. None of the newsletters mentioned who would replace two people who had been key leaders in the transformation—the Chief Operating Officer, who left the agency, or the Associate Administrator for the Office of Field Operations, who had moved to a different position within SBA. SBA officials told us that no one has filled the Chief Operating Officer position and that the replacement for the Associate Administrator for the Office of Field Operations was announced in an agencywide e-mail. However, as we stated earlier, after the Chief Operating Officer left the agency, SBA had not clarified who was leading the implementation team for transformation. Involving employees from the beginning to obtain their ideas and gain their ownership of the transformation is important to successful transformations. It strengthens the process by including frontline perspectives and experiences. In addition, a study conducted by the National Academy of Public Administration indicates that agencies that have effectively restructured have also worked with their unions to implement changes.
The Academy reported that when Congress mandated in 1998 that the Internal Revenue Service (IRS) restructure, IRS management worked with the National Treasury Employees Union to implement benchmarks and develop alternatives. As a result of this collaboration, according to the Academy, IRS facilitated the process of moving employees into new jobs and made the transition easier. Although SBA officials told us that SBA has involved its union, the American Federation of Government Employees, and signed memorandums of understanding with the union on implementation of the pilot and on establishing a liquidation center, union officials told us that they had very little involvement. A union representative told us that SBA does not involve them in any of the planning and only includes the union after it has decided what it wants to accomplish. In addition, another union representative told us that since signing the memorandum of understanding for the first pilot phase in October 2002, SBA has not included the union in aspects of the transformation, such as creating SBA’s competency models, or following up on training courses. SBA made a presentation to the union in July 2003 regarding the second phase of the pilot—to create a new liquidation center in the Washington, D.C., metropolitan area—prior to signing the second memorandum of understanding but did not give the union an opportunity for input on planning for the second phase. In September 2003, SBA and the union signed a memorandum of understanding on the creation of the new center in which SBA agreed to offer an early retirement for all agency personnel and a buyout option to those employees who performed the liquidation function. SBA’s transformation has not involved employees in the planning or implementation stages. During our field visits, we found that because SBA has not actively involved its employees in the transformation process, there is often anxiety and apprehension, as well as low morale in the pilot district offices. However, based on our field visit, we observed that the Arizona District Office’s former Portfolio Management Team appeared to be less anxious about the transformation than Portfolio Management teams in the other district office pilots, mostly because the team leader and her staff were involved early in the transformation by preparing the loan files for the Santa Ana 7(a) center, and training the center staff. We found that because of this early involvement, they had a better sense of their role and were more optimistic about the transformation. An SBA headquarters official told us that SBA intends to use its employee feedback from training evaluations to modify its training curriculum for the next pilot phase, but we were unable to identify any other examples where employee opinions and perspectives were sought. During our field visits to the pilot offices, we found that the employees had valuable input on lessons learned and on ways that SBA could improve its implementation process. For example, one employee suggested that SBA create a guidebook for its employees on what to expect from the transformation, and that the three district office pilots be a resource for the guidebook. In addition, one district office employee suggested that SBA change the order of the training curriculum so that the course on results management is offered first to help supervisors communicate with their staff regarding the transformation. 
We also observed that employees generally were not opposed to the transformation and saw benefits resulting from it; however, a few employees expressed frustration with the way the process was implemented. If employees had a larger role in planning and implementing the transformation, such as through employee teams, they could help to facilitate the process by sharing their knowledge and expertise, particularly those employees who have had experience in the marketing and outreach area. SBA has made some progress in implementing its transformation plan for phase one. However, continued success and progress in implementing its transformation may be impeded by budget uncertainties and constraints and the difficulties in realigning employees to staff centralization efforts. To some extent, SBA has compounded the budget challenge by not sharing its plan with a key stakeholder—Congress—and not providing clear, consistent budget requests with a detailed plan that shows priorities and links resources to desired results. In addition, as SBA moves forward in centralizing its loan and other functions, realigning staff will likely present additional challenges, such as problems with employee morale and productivity. Transforming an organization is not an easy endeavor. It requires a comprehensive, strategic approach that takes leadership, time, and commitment. Although SBA may achieve progress in the short term by establishing new centers to improve some of its business processes, its long-term success in defining and institutionalizing a new role for its district offices will take more time and commitment. The practices we have identified as being important to successful transformation are especially important as SBA moves forward with its transformation and could also help mitigate the challenges it faces with its budget and staff realignment. However, the weaknesses we identified in SBA’s implementation process could derail or negatively affect its transformation effort as the agency attempts to expand the transformation and affect more of its operations and employees. SBA’s leadership changes, plans, and rationales for decisions have not been made public and therefore have created an environment of confusion about the leadership, specific goals, and timeline for transformation. SBA is in the early stages of developing a link between its broad strategic objectives and measurable performance goals, which will be important for determining the success of transformation. The lack of frequent and two-way communication has exacerbated this environment of confusion, even though many employees understand the goals of transformation. Finally, SBA is missing out on one of its key strengths—its employees—by not adequately involving them in the transformation process. This lack of employee involvement means that SBA does not receive information and perspectives that could improve and facilitate the transformation and promote employee buy-in. In order to improve and build on the transformation efforts under way at SBA, we recommend that the Administrator adopt key practices that have helped other organizations succeed in transforming their organizations. Based on our review of SBA’s initial implementation of phase one of its transformation, we specifically recommend that the Administrator take the following actions:
• Clarify for employees, congressional stakeholders, and other stakeholders the leadership and implementation team members who are guiding the transformation.
• Finalize the draft transformation plan so that it clearly states SBA’s strategic goals for transformation and includes implementation goals, a timeline, and resource requirements, and share the plan with stakeholders and employees.
• Develop performance goals that reflect the strategic goals for transformation and more clearly link the strategic goals of transformation to existing performance goals. In addition, develop budget requests that clearly link resource needs to achieving these strategic and performance goals.
• Ensure that the new performance management system is clearly linked to well-defined goals to help individuals see the connection between their daily activities and organizational goals and to encourage individuals to focus on their roles and responsibilities in helping to achieve those goals.
• Develop a communication strategy that facilitates and promotes frequent and two-way communication between senior managers and employees and between the agency and its stakeholders, such as Congress and SBA’s lenders. For example, SBA could electronically post frequently asked questions and answers on its Intranet.
• Facilitate employees’ involvement by soliciting ideas and feedback from its union and staff and ensuring that their concerns and ideas are considered. For example, SBA could develop employee teams and expand employee feedback mechanisms like those it employed in the pilot training.
We received written comments on a draft of this report from SBA’s Chief Financial Officer, which are reprinted in appendix I. In commenting on the draft, SBA did not state whether it concurred with our recommendations but said it would consider them as it continues to plan for and implement its transformation efforts. SBA specifically noted that it had already addressed the recommendations regarding developing performance goals and using the performance management system to define responsibility as a result of issuing a new strategic plan with revised performance goals and implementing its new performance management system for employees on October 1, 2003. SBA provided us with a draft strategic plan but then told us that the plan was being revised significantly and that we should wait until the revised plan was completed. Since this revised strategic plan was issued after we had completed our work, we did not have time to determine whether SBA had sufficiently addressed our recommendations related to linking its transformation efforts to strategic and performance goals and performance expectations for employees. Therefore, these recommendations will remain in the report, and we will determine whether SBA has implemented them as part of our recommendation follow-up process. SBA disagreed with our finding that its budget requests for transformation were unclear. SBA stated that it clearly lays out its funding requests for transformation in the Fiscal Year 2003 and Fiscal Year 2004 Budget Request and Performance Plans. We used these documents to review SBA’s budget requests for transformation and as the source for our analysis shown in figure 4 of the report. In its comments, SBA said that it had made changes to its budget format in fiscal year 2004 to bring it more in line with the requirements of the Results Act by integrating its budget with performance goals. We clarified some language in the final report to better reflect the issues we identified with SBA’s transformation budget requests.
While the fiscal year 2004 budget request may have, at some level, integrated funding requests with performance goals for SBA’s programs, it did not make clear linkages between the request for transformation funds and performance goals. The budget requests for transformation were not consistent in terms of amounts requested or stated purposes, nor were they accompanied by a detailed plan that showed priorities and linked resources to desired transformation results. Therefore, we still maintain that SBA could improve its transformation budget request presentation to better ensure that it links the request to transformation performance goals and outcomes. SBA also disagreed with our findings related to communication and employee involvement. SBA stated that officials have traveled to the pilot district offices to explain the agency’s transformation plans and solicited comments from district directors at a May 2002 district director conference. However, our draft report did not state that management was not involved or was uninformed, but rather that employees below the district director level in the pilot offices were not sufficiently involved and informed. Furthermore, SBA cited its efforts to communicate prior to the implementation of the pilots, which we acknowledged in our draft report, but employees told us that their level of involvement and the amount of information they received were lacking after the pilots began. In its comments, SBA also stated that it conducts weekly telephone calls with the pilot district directors, who in turn have regular meetings with their employees. Our draft report acknowledged SBA’s use of conference calls with the district directors and the expectation that directors would then have meetings with their employees. However, we also found that notwithstanding communications with district directors, district office employees remained confused and lacked avenues for two-way communication with headquarters about the transformation. SBA also stated that it has worked with its union to gain agreement through memorandums of understanding for different parts of the plan, and these efforts were reflected in our draft report. However, in more than one discussion with us, union officials expressed concerns that SBA had approached them after having already decided what it was going to do and had not adequately informed the union about new initiatives or changes to the plan. We continue to maintain that SBA’s transformation efforts could benefit from improved communication and employee involvement. SBA also provided technical corrections, which we incorporated as appropriate in this report. In preparing this report, we focused on the district office and centralization pilots of phase one of SBA’s transformation effort because (1) they were initiatives that had begun at about the same time we began our review and, therefore, we could observe the implementation process and (2) these pilot initiatives, if expanded, would affect all 68 SBA district offices. To determine SBA’s progress in implementing its transformation effort and the challenges that have impeded or could impede progress, we analyzed planning, budget, and implementation documents related to SBA’s transformation and interviewed key officials at SBA headquarters involved in the transformation planning and implementation processes.
We also conducted site visits at each of the pilot offices involved in the first phase— three district office pilots in Phoenix, Arizona; Miami, Florida; and Charlotte, North Carolina; and two center pilots in Santa Ana and Sacramento, California. At the center locations, we reviewed documents that were developed to make the process more efficient and consistent (for example, checklists and standardized letters). We also reviewed measures that SBA is using to assess the centralization pilots. From data SBA headquarters uses to track the pilots, we reviewed about 450 approvals for the 504 loan program pilot and calculated an average total response and processing time using the dates that were included in the data. At each of the pilot locations, we interviewed all employees who were directly affected by the pilot—in the case of the district offices, we met with virtually all employees. To ensure open communication, we met with directors, supervisors, and employees separately. We asked them to describe how their office, role, and job had changed; how information was communicated to them about transformation; and whether they had been provided training and resources to transition into new roles. We also asked them to identify the top five or fewer challenges and benefits of transformation and lessons learned from the initial implementation process. To assess whether SBA applied practices that are important to organizational change and human capital management in the federal government, we reviewed the literature and our previous work on reorganizations, organizational change, and human capital management to identify key practices that have been recognized as contributing to successful organizational transformation. The main document we relied on in identifying key practices was our recent report Results Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. We used these criteria as a basis to assess SBA’s planning process for transformation, implementation process for the pilots for phase one, leadership of the transformation, communication with employees and key stakeholders, and level of employee involvement. When interviewing SBA employees for objective one, we also asked questions to determine their understanding of the transformation effort and how they received information and communicated their questions or concerns. In addition to talking with employees involved in the pilots, we also interviewed representatives of SBA’s union and asked the extent to which they were involved in the transformation process. To obtain feedback from SBA stakeholders, we interviewed officials representing the National Association of Government Guaranteed Lenders and the National Association of Development Companies, whose members include SBA 7(a) lenders and certified development companies that make 504 loans, respectively. We also met with SBA’s congressional stakeholders who expressed views about their role in SBA’s transformation process. We conducted our work in Washington, D.C.; Phoenix, Arizona; Sacramento and Santa Ana, California; Miami, Florida; and Charlotte, North Carolina, between February and September 2003, in accordance with generally accepted government auditing standards. Unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this report. 
At that time, we will send copies of the report to the Ranking Minority Member of the Senate Committee on Small Business and Entrepreneurship, the Ranking Minority Member of the House Committee on Small Business, other interested congressional committees, the Administrator of the Small Business Administration, and the Director of the Office of Management and Budget. We will make copies available to others on request. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8678, [email protected], or Katie Harris at (202) 512-8415, [email protected], if you or your staff have any questions. Major contributors to this report were Patty Hsieh, Kay Kuhlman, and Rose Schuville.
The Small Business Administration (SBA) has recognized that it needs to realign its current organizational structure and processes to improve its ability to fulfill its primary mission--supporting the nation's small businesses. In July 2002, SBA announced that it was initiating a transformation effort to increase the public's awareness of SBA's services and products and make its processes more efficient. GAO evaluated SBA's progress in implementing its transformation initiatives and challenges that have impeded or could impede implementation and whether SBA's transformation incorporates practices GAO has identified in previous work that are important to successful organizational change. SBA has made some progress in transforming its organization, although efforts could be impeded by budgetary and staffing challenges. SBA started three district office pilots to test marketing and outreach techniques and two pilots to centralize loan processes. However, SBA officials told us that their plans for expanding the pilots and implementing additional initiatives have changed because the agency did not receive any funding for transformation in fiscal year 2003 and may not receive any in fiscal year 2004. GAO found that SBA did not provide consistent, clear budget requests with a detailed plan for transformation results. The challenge of staffing its centralization initiatives, including relocating employees and avoiding undue disruptions to operations, could further complicate SBA's progress. When SBA initially planned and began implementing transformation, it gave some attention to practices important to successful organizational change. SBA drafted a plan and created an implementation team to manage the transformation. However, significant weaknesses in implementation could impede further progress and exacerbate the challenges noted above. The transformation could fail if practices and implementation steps focusing on transparency and communication are not given more attention.
As a result of controversy and litigation surrounding the 1990 Decennial Census, the U.S. Census Bureau recognized the need for a full-scale review of its decennial census program. The Congress, OMB, and GAO also agreed that this review was needed and that it must occur early in the decade to implement viable actions for the 2000 Census and to prepare for the 2010 Census. Early in the 1990s, in reports and testimonies, we stressed the importance of strong planning and the need for fundamental reform to avoid the risk of a very expensive and seriously flawed census in 2000. To address a redesign effort, in November 1990 the bureau formed the Task Force for Planning the Year 2000 Census and Census-Related Activities for 2000-2009. The task force was to consider lessons learned from the 1990 Census, technical and policy issues, constitutional and statutory mandates, changes in U.S. society since earlier decennial censuses, and the most current knowledge of statistical and social measurement. The bureau also established a Year 2000 Research and Development Staff to assist the task force and conduct numerous research projects designed to develop new approaches and techniques for possible implementation in the 2000 Census. In June 1995, the task force issued its report, Reinventing the Decennial Census. Concerns about the 1990 Census also led the Congress to pass the Decennial Census Improvement Act of 1991 (Public Law 102-135) requiring the National Academy of Sciences to study the means by which the government could achieve the most accurate population count possible and collect other demographic and housing data. The academy established a panel on methods to provide an independent review of the technical and operational feasibility of design alternatives and tests conducted by the U.S. Census Bureau. The panel issued its final report in September 1994. A second academy panel on requirements examined the role of the decennial census within the federal statistical system and issued its final report in November 1994. In March 1995, the bureau conducted the 1995 Census Test which provided a critical source of information to decide by December 1995 the final design of the 2000 Census. These efforts resulted in a planned approach for reengineering the 2000 Census which was presented in a May 19, 1995, U.S. Census Bureau report, The Reengineered 2000 Census. In October 1995, we testified on the bureau’s plans for the 2000 Census. In that testimony, we concluded that the established approach used to conduct the 1990 Census had exhausted its potential for counting the population cost-effectively and that fundamental design changes were needed to reduce census costs and to improve the quality of data collected. We also raised concerns about the bureau proceeding with design plans for the 2000 Census without input from the Congress. In the intervening months, the bureau was unable to come to agreement with the Congress on critical design and funding decisions. In February 1997, we designated the 2000 Decennial Census a new high-risk area because of the possibility that further delays could jeopardize an effective census and increase the likelihood that billions of dollars could be spent and the nation be left with demonstrably inaccurate census results. In July 1997, we updated our 1995 testimony on bureau design and planning initiatives for the 2000 Census and assessed the feasibility of bureau plans for carrying out the 2000 Census. 
To respond to Title VIII of Public Law 105-18, which required the Department of Commerce to provide detailed data about the bureau’s plans by July 12, 1997, the bureau issued its Report to Congress, The Plan for Census 2000. This plan also incorporated the bureau’s Census 2000 Operational Plan, which was updated annually. In November 1997, Public Law 105-119 established the Census Monitoring Board to observe and monitor all aspects of the bureau’s preparation and implementation of the 2000 Census. Section 209(j) of this legislation also required the bureau to plan for dual tracks of the traditional count methodology and the use of statistical sampling to identify historically undercounted populations of children and minorities. As 1 of 13 bureaus within the Department of Commerce, the U.S. Census Bureau must submit its annual budget for review and inclusion in the department’s budget. The department must then make choices in assembling its overall budget submission to OMB and will therefore adjust bureau-requested budgets as it deems necessary. OMB reviews and further adjusts department and bureau budgets in light of the programs and priorities of the entire federal government before they become part of the President’s Budget. The Congress may then adjust the President’s Budget through the appropriations process, and the resulting appropriations become the budgets of the departments and bureaus after the President signs them. The appropriations for the decennial census are no-year funds that are available until they are expended, rescinded, or transferred or until the account is closed. As shown in table 1, the Department of Commerce requested a total of $268.7 million for 2000 Census planning and development in the President’s Budgets for fiscal years 1991 through 1997. The program received total funding of $223.7 million from the Congress, or about 83 percent of the amount requested. Although the 2000 Census received all of the funding requested in the President’s Budgets for fiscal years 1991 and 1992, it received reduced funding for each fiscal year from 1993 through 1997. According to the bureau, these reductions resulted in the elimination, deferral, or scaling back of certain projects in planning for the 2000 Census. The bureau subsequently obligated 99 percent of its appropriated 2000 Census funding through fiscal year 1997. Bureau records indicated that the bulk of $86 million of decennial funding received through the end of fiscal year 1995 was obligated for program development and evaluation methodologies, testing and dress rehearsals, and planning for the acquisition of automated data processing and telecommunications support. For fiscal years 1996 and 1997, bureau records indicated that the bulk of $138 million of decennial funding received was obligated for planning the establishment of field data collection and support systems, refining data content and products, evaluating test results, and procuring automated data processing and telecommunications support. For the planning and development phase, personnel costs consumed about 53 percent of planning and development funds; contractual services consumed 16 percent; and space, supplies, travel, and other expenses consumed the remaining 31 percent. Because the bureau used different major program categories from fiscal years 1991 through 1997, we could not present a comprehensive table of funding for the period.
However, we were able to analyze the funding by fiscal year, and a detailed analysis of funding requested, received, and obligated, as well as funds budgeted by major program category, for fiscal years 1991 through 1997 is presented in appendix II. The U.S. Census Bureau was responsible for carrying out its mission within the budget provided, and bureau management determined the specific areas in which available resources were invested. We could not determine what effect, if any, higher funding levels might have had on census operations because this depends on actual implementation and the results of management decisions that may or may not have occurred. However, according to bureau officials, lower than requested funding levels for fiscal years 1993 through 1997 adversely affected the bureau’s planning and development efforts for the 2000 Census. As examples, they cited the following 10 areas where reduced funding levels caused the bureau to curtail planning initiatives. Although lower funding levels may have affected these areas, information from previous bureau and GAO reports and testimony indicated that operational, methodological, and other factors also contributed to weaknesses in the bureau’s planning efforts. 1. Difficulties in retaining knowledgeable staff. Although many key bureau personnel and project managers involved with the 2000 Census had also worked on the 1990 and earlier decennial censuses, bureau officials stated that many experienced people retired or left the bureau after the 1990 Census. According to the bureau, a contributing factor was lower funding levels for personnel compensation and benefits, which in turn affected the number of personnel with institutional knowledge of the decennial census available to support the 2000 Census planning and development effort. We noted that soon after a major event such as the decennial census count, it is not unusual for personnel to leave the bureau, as did three senior executives after the 2000 Census. In addition, Office of Personnel Management data indicated that over half of the bureau’s full-time, nonseasonal work force of 5,345 employees as of March 2002 will be eligible for retirement by 2010. Thus, the human capital issue will remain a key planning area to ensure that the bureau has the skill mix necessary to meet its future requirements. 2. Scaled-back plans for testing and evaluating 1990 Census data. A bureau official stated that the amount of qualitative and quantitative data from the 1990 Census was limited and hampered the quality and results of planning and development efforts for the 2000 Census. Additionally, many opportunities to capitalize on the 1990 Census data that did exist were lost, and more funding to evaluate these data could have facilitated 2000 Census research and planning efforts. Bureau officials stated that as they moved forward with planning for the 2000 Census, they had to scale back plans for testing and evaluating 1990 Census data because of a lack of funding. For example, they cited the inability to update a 1990 Census study of enumerator-supervisor ratios. 3. Delays in implementing a planning database. Bureau officials stated that they were unable to implement an effective planning database in the early years of the 2000 Census. In one of its first plans, the bureau conceived of a planning database that would capture data down to very small geographic levels and would be continuously updated over the decade for a number of census purposes.
This database would have enabled the bureau to target areas where language resources were needed, identify areas where enumeration and recruiting could be difficult, and position data capture centers to support the most cost-efficient and effective infrastructure. However, according to bureau officials, with lower funding through fiscal year 1995, the planning database was put on hold. Later in the decade, the bureau resurrected the planning database but did not develop and use it fully. 4. Limited resources to update address databases. According to bureau officials, obtaining sufficient resources to update and coordinate large databases of addresses and physical locations was a continuing challenge for the bureau. At the end of the 1990 Census, the bureau’s database contained 102 million addresses, each assigned to the census block area in which it was located. At that point, the U.S. Census Bureau’s Geography Division initiated discussions with the U.S. Postal Service to utilize its Delivery Sequence File (DSF), which contained millions of addresses used to deliver the U.S. mail. The bureau planned to use the DSF in updating its address database, which became the Master Address File (MAF). With lower funding through 1995, bureau officials cited limited resources to update the MAF and to assess the quality of the information entered. 5. Program to identify duplicate responses was not fully developed. Bureau officials stated that the program to identify duplicate responses was not fully developed for the 2000 Census and that more emphasis and funding were needed to develop appropriate software and procedures. It is important to be able to identify duplications in the MAF and multiple responses from a person or household that contribute to a population overcount. This includes operations to identify multiple responses for the same address and computer matching of census responses received against all other people enumerated in the block. Duplications also occurred because of college students counted both at school and at home, people with multiple residences, and military personnel residing outside their home state. 6. Abandoned plans to use administrative records. In early planning for the 2000 Census, the bureau funded efforts to use records from nonbureau sources of information (such as driver licenses, voter registrations, and other government programs) to supplement the census count. This administrative records project was the result of extensive research studies conducted by the bureau beginning in 1993 that focused on initial plans for three uses of nonbureau information: to derive census totals for some nonresponding households, to enhance the coverage measurement operations, and to help provide missing content from otherwise responding households. Although bureau officials determined that administrative records had the potential to improve coverage, the bureau abandoned plans to fund and more fully develop an administrative records database in February 1997. While the lack of funding may have been a contributing factor, bureau documents indicated that this action was primarily due to questions about the accuracy and quality of administrative records and issues of privacy protection. 7. Problems with multiple language questionnaires. Bureau officials cited several funding and operational problems with census questionnaires in the five languages other than English that were used.
In 1995, the bureau planned to mail forms in both Spanish and English to areas with high concentrations of Spanish speakers and to produce forms in other languages as needed. In March 1997, in response to requests for forms in other languages, the bureau announced its intent to print questionnaires in multiple languages in an effort to increase the mail response rate. The bureau selected four additional languages as a manageable number based upon a perceived demand. However, the bureau could not determine how to pinpoint the communities that needed the non-English questionnaires. Instead, the bureau indicated in a mailing that the questionnaires were available in five languages and that if an individual wanted a questionnaire in a language other than English, the individual had to specifically request it in that language. As a result, the bureau did not know the number of questionnaires to print in the five languages until late in the process. Finally, the bureau did not have the time to comprehensively assess the demand for questionnaires in other languages. 8. Cost-effective use of emerging data capture technology. Early bureau research assessed current and emerging data capture technologies, such as electronic imaging, optical mark recognition, and hand-held devices, which offered the potential for significant cost reductions in processing large volumes of data. Bureau officials indicated they were unsure of their exact requirements for the emerging data capture technologies, and this resulted in most contracts being cost-reimbursement contracts that required more funding than planned. The bureau estimated that it ultimately spent about $500 million on contracts to improve the data capture process. Bureau officials also stated that they did not have the time to fully develop and test the data capture systems or data capture centers, both of which were contracted for the first time in the 2000 Census. For example, the bureau said it could not adequately prepare for the full development and testing of the imaging contract. As a consequence, the first imaging test did not occur until 1998, and bureau officials stated that it became clear that imaging was not working due to technical and implementation problems. To some extent, this is not unexpected when implementing new technologies. Although the contractor and the bureau felt the system was not ready, it was tested anyway due to the short time frame, and major problems developed. Even though the system eventually became operational in time for the 2000 Census count, bureau officials indicated that this occurred at a higher than anticipated risk and cost. 9. More use of the Internet. In the early 1990s, the full impact of the Internet as a global communications tool was not yet envisioned. Officials indicated that the bureau did not have sufficient time and funding during the planning phase to fully understand and test all the implications of using the Internet as a vehicle for census responses. In addition, the bureau’s major concern was that computer security issues had not been adequately addressed, particularly since census information must be protected and significant penalties may be imposed for unauthorized disclosure. Also, the public perception of using the Internet as a response medium had not been fully explored. Nevertheless, in February 1999, the bureau established a means for respondents to complete the 2000 Census short forms on the Internet, protected by a 22-digit identification number.
According to bureau officials, they received about 60,000 short forms via the Internet. The rapid evolution of the Internet has the potential to significantly reduce bureau workload and the large volume of paper forms for the 2010 Census. 10. Preparation for dress rehearsals. Bureau officials cited many problems during the fiscal year 1998 dress rehearsals for the 2000 Census that were a direct result of funding levels in the early planning and development years. They stated that because of delays in receiving funding in the fall of 1997, they had to delay the dress rehearsal census day from April 4 to April 18, 1998. In addition, because many new items were incomplete or still under development, the bureau said it could not fully test them during the dress rehearsals with any degree of assurance as to how they would affect the 2000 Census. However, despite these problems, the bureau testified in March 1998 that all preparatory activities for the dress rehearsal—mapping, address listing, local updates of addresses, opening and staffing offices, and printing questionnaires—had been completed. In 1999, the bureau issued an evaluation that concluded that, all in all, the Census 2000 dress rehearsal was successful. The evaluation also stated that the bureau produced population numbers on time that compared favorably with independent benchmarks. The evaluation acknowledged some problems but stated that the bureau had devised methods to address those problems. Although the bureau conceded that planning efforts could be improved, the lack of funding did not appear to be a significant issue, except as it affected the bureau's ability to plan the dress rehearsal earlier. The bureau's experience in preparing for the 2000 Census underscores the importance of solid, upfront planning and adequate funding levels to carry out those plans. As we have reported in the past, planning a decennial census that is acceptable to stakeholders includes analyzing the lessons learned from past practices, identifying initiatives that show promise for producing a better census while controlling costs, testing these initiatives to ensure their feasibility, and convincing stakeholders of the value of proposed plans. Contributing factors to the funding reductions for the 2000 Census were the bureau's persistent lack of comprehensive planning and priority setting, coupled with minimal research, testing, and evaluation documentation to promote informed and timely decision making. Over the course of the decade, the Congress, GAO, and others criticized the bureau for not fully addressing such areas as (1) capitalizing on its experiences from past decennial censuses to serve as lessons to be learned in future planning, (2) documenting its planning efforts, particularly early in the process, (3) concentrating its efforts on a few critical projects that significantly affected the census count, such as obtaining a complete and accurate address list, (4) presenting key implementation issues with decision milestones, and (5) identifying key performance measures for success. Capitalizing on experiences from past censuses. In a fiscal year 1993 conference report, the Congress stated that the bureau should direct its resources towards a more cost-effective census design that would produce more accurate results than those from the 1990 Census. 
Further, the Congress expected the bureau to focus on realistic alternative means of collecting data, such as the use of existing surveys, rolling sample surveys, or other vehicles, and expected cost considerations to be a substantial factor in evaluating the desirability of design alternatives. In March 1993 we testified that time available for fundamental census reform was slipping away and important decisions were needed by September 1993 to guide planning for 1995 field tests, shape budget and operational planning for the rest of the census cycle, and guide future discussions with interested parties. We noted that the bureau's strategy for identifying promising census designs and features was proving to be cumbersome and time-consuming, and the bureau had progressed slowly in reducing the design alternatives for the next census down to a manageable number. Documenting early planning efforts. It is particularly important early in the planning process to provide a roadmap for further work. We found that the bureau did not document its 2000 Census planning until late in the planning phase. While the U.S. Census Bureau prepared a few pages to justify its annual budget requests for fiscal years 1991 through 1997, it did not provide a substantive document of its 2000 Census planning efforts until May 1995, and this plan was labeled a draft. Finally, the Congress mandated that the bureau issue a comprehensive and detailed plan for the 2000 Census within 30 days of enactment of the law. On July 12, 1997, the bureau issued its Report to the Congress—The Plan for Census 2000, along with its Census 2000 Operational Plan. Concentrating efforts on a few critical projects. While counting a U.S. population of 281 million residing in 117.3 million households required many activities, a few critical activities significantly affected the Census 2000 count, such as obtaining a complete and accurate address list. Although the bureau was aware of serious problems with its address list development process, it did not acknowledge the full impact of these problems until the first quarter of 1997. Based upon its work with the postal service database, the 1995 Census Test, and pilot testing at seven sites, the bureau had gained sufficient evidence that its existing process would result in an unacceptably inaccurate address list because of inconsistencies in the quality of the postal service database; missing addresses for new construction; difficulties in identifying individual units in multiunit structures, such as apartment buildings; and the inability of local and tribal governments to provide usable address lists. In September 1997, the bureau acknowledged these problems and proposed changes. However, we believe that this action occurred too late in the planning process and was not given a high enough priority to benefit the 2000 Census enumeration. Presenting key implementation issues and decision milestones. The bureau discussed program areas as part of its annual budget requests for fiscal years 1991 through 1997, but the requests did not identify key implementation issues with decision milestones to target its planning activities. Decision milestones did not appear until July 1997, when the bureau issued its Census 2000 Operational Plan. Stakeholders such as the Congress are more likely to approve plans and funding requests when they are thoroughly documented and include key elements such as decision milestones. Identifying key performance measures. 
Census planning documents provided to us through fiscal year 1997 did not identify key performance measures. We believe that identifying key performance measures is critical to assessing success in the planning phase of the census and can provide quantitative targets for accomplishments by framework, activity, and individual projects. Such measures could include performance goals such as increasing mail response rates, reducing population overcount and undercount rates, and improving enumerator productivity rates. The lessons learned from planning the 2000 Census become even more crucial in planning for the next decennial census in 2010, which has current unadjusted life cycle cost estimates ranging from $10 billion to $12 billion. Thorough and comprehensive planning and development efforts are crucial to the ultimate efficiency and success of any large, long-term project, particularly one with the scope, magnitude, and deadlines of the U.S. decennial census. Initial investment in planning activities in areas such as technology and administrative infrastructure can yield significant gains in efficiency, effectiveness, and cost reduction in the later implementation phase. The success of the planning and development activities now occurring will be a major factor in determining whether this large investment will result in an accurate and efficient national census in 2010. Critical considerations are a comprehensive and prioritized plan of goals, objectives, and projects; milestones and performance measures; and documentation to support research, testing, and evaluation. A well-supported plan early in the process that includes these elements will be a major factor in ensuring that stakeholders have the information to make funding decisions. As the U.S. Census Bureau plans for the 2010 Census, we recommend that the Secretary of Commerce direct the bureau to provide comprehensive information, backed by supporting documentation, in its future funding requests for planning and development activities. This information would include, but not be limited to, such items as specific performance goals for the 2010 Census and how bureau efforts, procedures, and projects would contribute to those goals; detailed information on project feasibility, priorities, and potential risks; key implementation issues and decision milestones; and performance measures. In commenting on our report, the department agreed with our recommendation and stated that the bureau is expanding the documents justifying its budgetary requests. For example, the bureau cited a document that outlines planned information technology development and activities throughout the decennial cycle of the 2010 Census. The bureau also included a two-page document, Reengineering the 2010 Census, which presented three integrated components and other plans to improve upon the 2000 Census. In this regard, it is essential that, as we recommended, the bureau follow through with details and documentation to implement these plans, define and quantify performance measures against goals, and provide decision milestones for specific activities and projects. As agreed with your office, unless you announce its contents earlier, we plan no further distribution of this report until 7 days after its issuance date. At that time, we will send copies of this report to the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs, the House Committee on Government Reform, and the House Subcommittee on Civil Service, Census, and Agency Organization. 
We will also send copies to the Director of the U.S. Census Bureau, the Secretary of Commerce, the Director of the Office of Management and Budget, the Secretary of the Treasury, and other interested parties. This report will also be available on GAO's home page at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact Gregory D. Kutz at (202) 512-9095 or [email protected], Patricia A. Dalton at (202) 512-6806 or [email protected], or Roger R. Stoltz, Assistant Director, at (202) 512-9408 or [email protected]. Key contributors to this report were Corinne P. Robertson, Robert N. Goldenkoff, and Ty B. Mitchell. The objectives of our review focused on the planning and development phase of the 2000 Census, which we classified as covering fiscal years 1991 through 1997, and addressed (1) the funding requested, received, and obligated, including funding received and obligated by major planning category, (2) funding and other factors that affected planning efforts, and (3) lessons learned for the 2010 Census. To determine the amount of 2000 Census planning and development funding requested, received, and obligated, we obtained and analyzed annual decennial census budgets included in the President's Budgets for fiscal years 1991 through 1997, budgets subsequently received after appropriation by the Congress, and amounts later obligated for the purchase of goods and services by the bureau against those budgets. We then obtained explanations from senior bureau officials for significant variances in these budgets and the effect on decennial planning and development. However, we did not assess the efficiency of the budgeting process or the validity, accuracy, and completeness of obligations against budgeted amounts received. To determine the funding received and obligated by major planning category for 2000 Census planning and development, we obtained and analyzed funding requested, received, and obligated by framework, activity, project, and object class and examined annual operational plans. However, our analysis was hampered by the bureau's inconsistent use of categories, which evolved from 1 activity of general planning in fiscal year 1991 to 8 major study areas in fiscal years 1992 and 1993 and then to 8 to 15 broad categories called frameworks beginning in 1994. For internal management and reporting, the bureau further identified program efforts by activities and projects that have varied since fiscal year 1991. Additionally, the bureau expanded, contracted, or modified program names and descriptions, making comparisons more difficult. We also obtained explanations from bureau officials for significant efforts and variances in funding received and obligated for 2000 Census planning and development. However, we did not assess the merits of budgeting by program or the subsequent validity, accuracy, and completeness of obligations. To identify funding and other factors that affected planning efforts, we analyzed significant changes in funding requested, received, and obligated at the framework level; identified initiatives that were reduced, eliminated, or severely curtailed; discussed the effect of these areas with bureau officials; and evaluated bureau responses. We also reviewed various reports, testimony, and supporting documents prepared by the bureau, GAO, and others. However, we could not determine what effect, if any, higher levels of funding might have had on 2000 Census operations. 
These factors are dependent upon actual implementation and the results of management decisions that may or may not have occurred. To provide lessons learned for the 2010 Census, we identified areas for improvement and obtained support from bureau, GAO, and congressional reports, testimony, interviews, and other documents. Our work was performed in Washington, D.C., and at U.S. Census Bureau headquarters in Suitland, Maryland, between January and July 2001, when our review was suspended because of an inability to obtain access to certain budget records. After lengthy discussions with senior officials of the bureau, the Department of Commerce, and OMB, and consultation with your staffs, this access issue was resolved in May 2002 and we completed our analysis in June 2002. Our work was done in accordance with U.S. generally accepted government auditing standards, except that we did not audit budget and other financial data provided by the U.S. Census Bureau. On October 16, 2002, the Department of Commerce provided written comments on a draft of this report, including two attachments. These comments are presented in the “Agency Comments and Our Evaluation” section of the report and are reprinted in appendix III, except for the second attachment, Potential Life-Cycle Savings for the 2010 Census, which is currently under revision and is outside the scope of our review. This appendix includes our analysis of 2000 Census funding requested, received, and obligated, and funding received and obligated by major planning category for fiscal years 1991 through 1997. Our analysis was hampered by the bureau's inconsistent use of major planning categories that evolved over the period as follows: 1 activity of general planning in fiscal year 1991, 8 major study areas in fiscal years 1992 and 1993, and 8 to 15 broad categories called frameworks beginning in 1994. For internal management and reporting, the bureau further identified program efforts by activities and projects that have varied since fiscal year 1991. In addition, the bureau expanded, contracted, or modified program names and descriptions, making comparisons more difficult. In March 1991 we testified that fundamental census reform was needed because escalating costs and the apparently increased undercount of the 1990 Census suggested that the current census methodology might have reached the limits of its effectiveness. Of three principles we presented, the last was that the Department of Commerce must be willing to invest sufficient funds early in the decade to achieve cost savings and census improvements in 2000. In fact, OMB deemed some of the Department of Commerce requests to fund early census reform insufficient and doubled the department's requested amounts to $1.5 million for fiscal year 1991 and $10.1 million for fiscal year 1992. These amounts were included in the President's Budgets, and the Congress concurred by authorizing the full amounts requested. Census planning officials said that if OMB had not augmented the department's request, testing of reform options for 2000 would have been constrained. For the first year of the 7-year 2000 Census planning and development phase, the fiscal year 1991 funding received was $1.5 million, and the bureau obligated the entire amount. 
The funding contained only one category of general planning for the 2000 Census, with funds to be used for completion of detailed cost-benefit studies of alternative designs for conducting the decennial census; exploration of new technologies to improve the 2000 Census; establishment of research and development efforts for administrative methods and modeling and estimation techniques; and planning of field tests in fiscal year 1993 to include new census content, methods, technologies, and field structures. Because total amounts were small and involved only general planning, there were no significant variances. We noted that about 46 percent of the funding was obligated for personnel costs relating to 19 full-time equivalent (FTE) staff, 29 percent for services, including consultants, and the remaining 25 percent for space, supplies, travel, and other costs. Fiscal year 1992 funding received was $10.1 million, and the bureau obligated $9.4 million against it. The funding now identified eight major study areas for the 2000 Census, as indicated in table 2. For fiscal year 1992, the bureau experienced almost a six-fold increase in its funding received of $10.1 million over the $1.5 million for fiscal year 1991. About half of the fiscal year 1992 funding was obligated for personnel costs as a result of almost a five-fold increase in FTE staff from 19 in fiscal year 1991 to 111 in fiscal year 1992 to work on decennial planning and development issues. Services, including consultants, accounted for another quarter of the obligations, with the remaining quarter for space, supplies, travel, and other costs. Technology options included a $1.7 million services contract to develop emerging data capture technology to compile census statistics. For fiscal year 1993, the Congress reduced the President's Budget request of $19.4 million for 2000 Census planning and development to $13.7 million, for a reduction of about 29 percent. As a result of this $5.7 million reduction, the bureau cut its funding of techniques for special areas and subpopulations by $2.2 million, or about 70 percent, and also eliminated activities to establish contacts with state and local governments budgeted for $1.6 million, assess customer needs budgeted for $1.0 million, survey public motivation budgeted for $.8 million, and prepare infrastructure for a 1995 Census Test budgeted for $.5 million. In a fiscal year 1993 conference report, the Congress stated that the bureau should direct its resources towards a more cost-effective census design that would produce more accurate results than those from the 1990 Census. For example, the bureau's research in fiscal year 1992 indicated that reducing the number of questions on the census form was an important way to increase response, thereby increasing accuracy and reducing cost. Therefore, the Congress expected the bureau to focus on realistic alternative means of collecting data, such as the use of existing surveys, rolling sample surveys, or other vehicles, and expected cost considerations to be a substantial factor in evaluating the desirability of design alternatives. In March 1993 we testified that time available for fundamental census reform was slipping away and important decisions were needed by September 1993 to guide planning for 1995 field tests, shape budget and operational planning for the rest of the census cycle, and guide future discussions with interested parties. 
The bureau's strategy for identifying promising census designs and features was proving to be cumbersome and time-consuming, and the bureau had progressed slowly in reducing the design alternatives for the next census down to a manageable number. Fiscal year 1993 funding received was $13.7 million, and the bureau obligated $13.5 million against it. The budget continued to identify eight major study areas for the 2000 Census, as indicated in table 3. For fiscal year 1993, the bureau experienced a 36 percent increase in its funding received of $13.7 million over the $10.1 million for fiscal year 1992. About 53 percent of the fiscal year 1993 funding was obligated for personnel costs as a result of a 48 percent increase in FTE staff from 111 in fiscal year 1992 to 164 in fiscal year 1993 to work on decennial planning and development issues. Services, including consultants, accounted for about 11 percent of the funding, with the remaining 36 percent used for space, supplies, travel, and other costs. Fiscal year 1993 was identified by the bureau as the beginning of a 3-year period to identify the most promising changes to be integrated in the 1995 Census Test. For fiscal year 1994, the Congress reduced the President's Budget request of $23.1 million for 2000 Census planning and development to $18.7 million, for a reduction of about 19 percent. As a result of this $4.4 million reduction, the bureau eliminated decennial operational preparation budgeted for $2.5 million and reduced funding for questionnaire design and cost modeling by $1.6 million, or 70 percent. In May 1993 we testified that the U.S. Census Bureau had altered its decision-making approach and refocused its 2000 Census research and development efforts. Driven by its impending September 1993 deadline for deciding which designs to test in 1995 for the 2000 Census, the bureau recommended rejecting all 14 design alternatives that had formed the framework of its research program and that had been under study for a year. Instead, the bureau reverted to an earlier approach of concentrating favorable features into the design for application in the 2000 Census. A fiscal year 1994 House Appropriations Committee report cited our May 1993 testimony and stated that it was unacceptable for the bureau to conduct the 2000 Census under a process that followed the general plan used in the 1990 Census. A fiscal year 1994 conference report expressed concern that the U.S. Census Bureau had not adequately addressed cost and scope issues for the 2000 Census and expected the Department of Commerce and OMB to take a more active role in planning for the decennial census to ensure that data requirements for federal agencies and state and local governments were considered in the planning effort. In October 1993 we testified that the U.S. Census Bureau's research and development efforts had been slowed by its changing planning strategy and that the bureau still faced the difficult task of integrating its Test Design Recommendation proposals into a detailed implementation plan for the 1995 Census Test. We noted that the bureau's plans to conduct research and evaluations for such promising proposals as the one-number census, sampling for nonresponse, and defining the content of the census were in a state of flux. Other important research and planning activities, such as improving the address list and using new automated techniques to convert respondent answers to machine-readable format, were behind schedule. 
Funding for research and test census preparation in fiscal years 1994 and 1995 was in doubt, as evidenced by the budget cuts proposed by the House Appropriations Committee and the opinions expressed in its report accompanying the fiscal year 1994 appropriations bill. The bureau obligated the entire amount of its fiscal year 1994 funding received of $18.7 million. Funding originally contained 6 design areas for 2000 Census research and development, the 1995 Census Test, and decennial operational preparation but was later revised to present funds received and obligated in 13 frameworks of effort, as indicated in table 4. For fiscal year 1994, the bureau experienced a 36 percent increase in its funding received of $18.7 million over the $13.7 million for fiscal year 1993. About 44 percent of the fiscal year 1994 funding was obligated for personnel costs as a result of a 34 percent increase in FTE staff from 164 in fiscal year 1993 to 220 in fiscal year 1994 to work on decennial planning and development issues. Services, including consultants, accounted for another 13 percent of obligations, with the remaining 43 percent for space, supplies, travel, and other costs. We noted that six frameworks received little or no funding and three frameworks accounted for 89 percent of the fiscal year 1994 funds received and obligated, as follows: Framework 5 - Evaluation and development consumed $7.1 million, or 38 percent, of funding received and obligated for research and developmental work to support the 1995 Census Test. This included research on the use of matching keys beyond just a person's residence address to develop matching procedures that would allow the bureau to make use of person-based administrative records files that do not have a current residential address; research on various uses of sampling, including technical and policy issues on conducting the entire census on a sample basis and conducting only the nonresponse follow-up portion of the census on a sample basis; and race and ethnicity studies, including extensive consultation with stakeholders, focus group testing, and planning of field tests. Framework 3 - Test census and dress rehearsal consumed $5.5 million, or 29 percent, of funding received and obligated to increase 1995 Census Test activities from preliminary studies and planning to the full-scale preparatory level program. These included such activities as completion of questionnaire content determination, analysis of a database of population characteristics by geographic area to make selections of test sites, determination of evaluation program objectives for the test, and determination of objectives for, and design of, stakeholder consultation. Framework 11 - Automation/telecommunication support consumed $4.0 million, or 21 percent, of funding received and obligated for automated systems design and acquisition of data capture technology to upgrade the 1990 Census system (FACT90) to a 2000 Census system (DCS 2000). For fiscal year 1995, the Congress reduced the President's Budget request of $48.6 million for 2000 Census planning and development to $42.0 million, for a reduction of about 14 percent. As a result of this $6.6 million reduction, the bureau eliminated $9.0 million for decennial operational preparation and $.8 million for 1996 testing while increasing funding for program development and other areas by $3.2 million. In January 1994 we testified that while we were encouraged by the U.S. 
Census Bureau's recent focus on testing specific proposals to modify the census methodology, we believed that the bureau must aggressively plan for and carefully implement its research, testing, and evaluation programs. Further, the results of those efforts must be available to make fully informed and timely decisions and build needed consensus among key stakeholders and customers for changes in the 2000 Census. A fiscal year 1995 Senate Appropriations Committee report strongly recommended that the bureau adopt more cost-effective means of conducting the next census, as the budgetary caps and strict employment ceilings adopted by the President and the Congress would not accommodate a repeat of the process used in the 1990 Census. Fiscal year 1995 funding received was $42.0 million, and the bureau obligated $40.9 million against it. The number of frameworks increased to 15, as indicated in table 5. For fiscal year 1995, the bureau experienced a 125 percent increase in its funding received of $42.0 million over the $18.7 million for fiscal year 1994. About 51 percent of the fiscal year 1995 funding was obligated for personnel costs as a result of a 211 percent increase in FTE staff from 220 in fiscal year 1994 to 685 in fiscal year 1995 to work on decennial planning and development issues. Services, including consultants, accounted for about 7 percent of the obligations, with the remaining 42 percent for space, supplies, travel, and other costs. We noted that eight frameworks received little or no funding and Framework 3 accounted for over 70 percent of fiscal year 1995 funds received and obligated. The main focus of Framework 3 was conducting the 1995 Census Test in order to select by December 1995 the features to be used for the 2000 Census. According to census plans and our discussions with officials, the bureau focused on the following major areas:
- Complete preparation for the 1995 Census Test, conduct the test, and begin evaluations in order to select the features to be used for the 2000 Census. In addition, the bureau would conduct a full-scale census test in four district office areas that would be the culmination of the research and development program.
- Investigate, develop, test, and evaluate components of a continuous measurement system as a replacement for the 2000 Census sample data questionnaire.
- Develop, test, and evaluate various matching keys for the automated and clerical matching and unduplicating systems developed under the direction of the matching research and specifications working group.
- Conduct activities independent of the research and development program; these were preparatory activities required to implement the 2000 Census regardless of the design. They included such activities as planning the address list updates needed to supplement the Master Address File (MAF) for use in the 2000 Census and beginning initial planning of the field organization structure for the 2000 Census.
- Recommend the broad scope of content that should be included in the 2000 Census questionnaire based on consultation with both federal and nonfederal data users, and begin planning for small special purpose tests to supplement or follow up on the 1995 Census Test.
For fiscal year 1996, the Congress reduced the President's Budget request of $60.1 million for 2000 Census planning and development to $51.3 million, for a reduction of about 15 percent. 
As a result of this $8.8 million reduction, the bureau reduced funding for field data collection and support systems by $9.9 million, or 43 percent, while increasing funding in other areas. In October 1995 we testified that the U.S. Census Bureau had decided to make fundamental changes to the traditional census design, such as shortening census questionnaires, developing an accurate address list, and sampling households that failed to respond to questionnaires. However, we noted that successful implementation of these changes would require aggressive management by the bureau and that the window of opportunity for the Congress to provide guidance on these changes and applicable funding was closing. A fiscal year 1996 conference report continued to express concern about progress related to the next decennial census. It cautioned the bureau that the cost of the 2000 Census had to be kept in check and that only through early planning and decision making could costs be controlled. The report further recognized that fiscal year 1996 was a critical year in planning for the decennial census and that numerous decisions would be made and preparations taken that would have a significant bearing on the overall cost of conducting the census, as well as on the design selected. The bureau obligated the entire amount of its fiscal year 1996 funding received of $51.3 million. Beginning with fiscal year 1996, the number of frameworks was reduced to eight, as indicated in table 6 below. For fiscal year 1996, the bureau experienced a 22 percent increase in its funding received of $51.3 million over the $42.0 million for fiscal year 1995. About 44 percent of the fiscal year 1996 funding was obligated for personnel costs as a result of a 5 percent decrease in FTE staff from 685 in fiscal year 1995 to 653 in fiscal year 1996 to work on decennial planning and development issues. Services, including consultants, accounted for about 13 percent of the obligations, with the remaining 43 percent for space, supplies, travel, and other costs. Three frameworks incurred over 60 percent of funding received and obligated, as follows. Framework 3 - Field data collection and support systems incurred costs of $13.3 million, including $4.4 million to develop personnel and administrative systems for field office enumeration; $3.1 million for precensus day data collection activities; and $2.0 million for automation acquisition and support for field offices. Framework 2 - Data content and products incurred costs of $9.6 million, including $4.4 million to develop and produce questionnaires and public use forms for the census, including the conduct of a National Content Test; $2.9 million for race and ethnicity testing of concepts and respondent understanding and wording of the race and ethnicity questions; and $1.6 million for continued work with federal and nonfederal data users in the content determination process to prepare for the congressional submission by April 1, 1997. Framework 6 - Testing, evaluations, and dress rehearsals incurred costs of $9.4 million, including $3.3 million for an Integrated Coverage Measurement (ICM) special test; $2.6 million for research and development on sampling and sampling methods for the 2000 decennial count; and $2.1 million for 1995 Census Test coverage and evaluation. For fiscal year 1997, the Congress reduced the President's Budget request of $105.9 million for 2000 Census planning and development to $86.4 million, for a reduction of about 18 percent. 
As a result of this $19.5 million reduction, the bureau reduced funding for marketing, communications, and partnerships by $14.4 million, or 76 percent, and for field data collection and support systems by $23.6 million, or 53 percent, while increasing amounts in other areas by $18.5 million. A fiscal year 1996 House Appropriations Committee report expressed concern that the bureau appeared not to have developed options and alternative plans to address issues of accuracy and cost. In addition, sufficient progress had not been made on issues the committee had highlighted many times—the number of questions on the long form and reimbursement from other agencies for the inclusion of such questions to ensure that each question was important. The bureau obligated the entire amount of its fiscal year 1997 budget of $86.4 million. Planning continued in eight frameworks, as indicated in table 7. For fiscal year 1997, the bureau experienced a 68 percent increase in its funding received of $86.4 million over the $51.3 million for fiscal year 1996. About 63 percent of the fiscal year 1997 funding was obligated for personnel costs as a result of a 36 percent increase in FTE staff from 653 in fiscal year 1996 to 891 in fiscal year 1997 to work on decennial planning and development issues. Services, including consultants, accounted for about 25 percent of the obligations, with the remaining 12 percent for space, supplies, travel, and other costs. The bureau viewed fiscal year 1997 as pivotal, since this was the year when research and testing activities culminated in operational activities and marked the end of the planning and development phase of the 2000 Census. For the fiscal year, four frameworks incurred about 85 percent of funding received and obligated, as follows. Framework 3 - Field data collection and support systems incurred $20.9 million for activities under precensus day operations and support systems, and postcensus day operations. Projects included: $4.1 million for geographic patterns, including questionnaire delivery methodologies by area and corresponding automated control systems; $4.0 million for planning of data collection efforts, including activities for truncation and/or the use of sampling for nonresponse follow-up and increased efforts to develop procedures for enumerating special populations such as the military, maritime, institutional, migrant, and reservation populations and those living in other than traditional housing units; $3.8 million for direction and control by 12 regional offices that would provide logistical support and direct enumeration efforts by local census offices; and $3.1 million for planning and developing personnel and administrative systems to support 2000 Census data collection and processing activities, such as types of positions, pay rates, personnel and payroll processes, and systems, space, and security requirements. Framework 5 - Automation/telecommunication support incurred $20.2 million for activities including evaluating proposals for the acquisition of automation equipment and related services, funding the development of prototype systems, and moving toward awarding contracts to implement such systems for the 2000 Census. Projects included setting up data capture systems and support to process census questionnaire responses and telecommunication systems required to provide nationwide toll-free 800 number services to answer respondent questions and to conduct interviews. 
Framework 6 - Testing, evaluation, and dress rehearsal incurred $19.8 million for the following activities: $3.7 million to begin gearing up for the 1998 dress rehearsal in order to prepare personnel to conduct the census testing efficiently and effectively; and $7.0 million to conduct activities for ICM special testing and American Indian Reservation (AIR) test census such as questionnaire delivery and mail return check-in operations, ICM computer-assisted personal visit interviews, computer and clerical matching, follow-up and after follow-up matching, and evaluation studies. Framework 2 - Data content and products incurred $12.3 million for activities related to the development of computer programs and systems for data tabulation and for the production of paper, machine-readable, and on-line data products. Projects included: $4.5 million to move from research in fiscal year 1996 to implementation in fiscal year 1997 of the Data Access and Dissemination System (DADs), including development of the requirements for Census 2000 tabulations from DADs, and development of computer programs and control systems that will format the processed Census 2000 data for use in DADs; and $2.2 million towards development of a redistricting program for Census 2000. The following are GAO's comments on the letter dated October 16, 2002, from the Department of Commerce. 1. The objectives of our report did not include assessing the degree of success of the 2000 Census. 2. See “Agency Comments and Our Evaluation” section of this report.
GAO reviewed the funding of 2000 Census planning and development efforts and the impact it had on census operations. Total funding for the 2000 Census, referred to as the life cycle cost, covers a 13-year period from fiscal year 1991 through fiscal year 2003 and is expected to total $6.5 billion adjusted to year 2000 dollars. This amount was almost double the reported life cycle cost of the 1990 Census of $3.3 billion adjusted to year 2000 dollars. Considering these escalating costs, the experience of the U.S. Census Bureau in preparing for the 2000 Census offers valuable insights for the planning and development efforts now occurring for the 2010 Census. Thorough and comprehensive planning and development efforts are crucial to the ultimate efficiency and success of any large, long-term project, particularly one with the scope, magnitude, and deadlines of the U.S. decennial census. For fiscal years 1991 through 1997, $269 million was requested in the President's Budgets for 2000 Census planning and development, and the program received funding of $224 million from the Congress, or 83 percent of the amount requested. According to U.S. Census Bureau records, the bulk of the $86 million in funding received through the end of fiscal year 1995 was obligated for program development and evaluation methodologies, testing and dress rehearsals, and planning for the acquisition of automated data processing and telecommunications support. The U.S. Census Bureau was responsible for carrying out its mission within the budget provided, and bureau management determined the specific areas in which available resources were invested. GAO could not determine what effect, if any, higher funding levels might have had on bureau operations, as this depends upon actual implementation and the results of management decisions that may or may not have occurred. According to bureau officials, early planning and development efforts for the 2000 Census were adversely affected by lower funding than requested for fiscal years 1993 through 1997. They identified 10 areas where additional funding could have been beneficial. These included difficulties in retaining knowledgeable staff, scaled-back plans for testing and evaluating 1990 Census data, delays in implementing a planning database, and limited resources to update address databases. The bureau's experience in preparing for the 2000 Census underscores the importance of solid, upfront planning and adequate funding levels to carry out those plans.
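As a quick arithmetic check on the figures in this summary (a reader's verification using only the amounts reported above, not a new finding):

\[
\frac{\$224\ \text{million}}{\$269\ \text{million}} \approx 0.83, \qquad \frac{\$6.5\ \text{billion}}{\$3.3\ \text{billion}} \approx 1.97,
\]

that is, the program received about 83 percent of the planning and development funding requested, and the 2000 Census life cycle cost was almost double that reported for the 1990 Census, with both life cycle costs expressed in year 2000 dollars.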
The National Defense Authorization Act for Fiscal Year 1998 (NDAA for FY 1998), as amended by the National Defense Authorization Act for Fiscal Year 1999 (NDAA for FY 1999), required OMB to issue both a classified and an unclassified report on funding to combat terrorism. Under the NDAA reporting requirements, OMB's annual report addressed funding for combating terrorism without differentiating between homeland security and overseas activities. However, in its 2002 unclassified report, OMB, for the first time, explicitly distinguished between overseas combating terrorism activities coordinated by the National Security Council and homeland security activities coordinated by the President's Office of Homeland Security. Section 889 of the Homeland Security Act of 2002 repealed the NDAA reporting requirements in favor of new reporting requirements. In particular, section 889 required the President's budget to include an analysis of “homeland security funding,” which it defined by reference to OMB's 2002 report as activities to detect, deter, protect against, and, if needed, respond to terrorist attacks occurring within the United States. OMB's definition of homeland security activities included activities that the agency had not previously treated as combating terrorism. The 2003 annual report on combating terrorism was the last combating terrorism report issued under the NDAA reporting requirements. OMB's next report to Congress was published as part of the President's fiscal year 2005 budget, which was issued in February 2004 and reflected the changes called for in that act for the first time. In its final 2003 unclassified annual report on combating terrorism, OMB categorized the government's homeland security activities into the six critical mission areas discussed in the National Strategy for Homeland Security. The seven agencies we contacted use different methods to estimate the portion of their authorized funding that supports combating terrorism activities. Although OMB provides guidance to agencies, it does not prescribe a specific methodology for how agencies should determine the portion of their budget authority that relates to combating terrorism activities. One agency we contacted—DOD—reported that OMB determines how much of DOD's funding relates to combating terrorism. While OMB staff said that they expect most executive agencies to provide them with funding data related to combating terrorism activities, they said that they make these determinations for DOD. DOD officials said that they enter budget data into OMB's central database and then OMB staff review the data and extract information that they find consistent with OMB's definition of combating terrorism. The other six agencies we reviewed developed their own methodologies using guidance, such as OMB Circular No. A-11, which includes definitions for combating terrorism activities and instructions for submitting funding data related to combating terrorism activities to OMB. Because these methodologies involve estimations, some level of professional judgment is inherent throughout the process. To implement these methodologies, agencies first identify their combating terrorism activities and then estimate their related funding levels. (See app. VI for guidance agencies most commonly use to identify combating terrorism activities.) 
Officials from two of these six agencies—GSA and DHS—reported that they used methods involving formulas to determine their funding levels related to combating terrorism. Officials from GSA told us that they use a formula-driven methodology for estimating the agency's budget authority for the portion of Real Property Activities within its Federal Buildings Fund that relates to homeland security activities. GSA officials from its Office of Budget said that, to derive this methodology, GSA consulted with OMB, reviewed all activities conducted under the Federal Buildings Fund, looked at historical trends related to homeland security activities associated with the fund, and applied professional judgment. Figure 1 illustrates GSA's methodology and demonstrates how the agency applied it in estimating its fiscal year 2006 budget authority for the portion of Real Property Activities within its Federal Buildings Fund that relates to homeland security activities. Officials from DHS's component offices with whom we met—Customs and Border Protection, Immigration and Customs Enforcement, the Information Analysis and Infrastructure Protection Directorate, the Office of State and Local Government Coordination and Preparedness (SLGCP), the Transportation Security Administration, USCG, the United States Secret Service (USSS), and the Science and Technology Directorate—told us that they also derived formula-driven methodologies for determining their homeland security funding levels. For example, officials from USSS said that they derived a quantitative methodology for determining the portion of their appropriation for Operating Expenses that relates to homeland security. They said that to develop this methodology, USSS reviewed all of its programs, activities, and related staff hours conducted under its two missions—Protective Services and Investigative Services—to determine which activities related to homeland security. See figure 2 for the USSS methodology. USSS officials told us that they discussed their methodology with OMB and received OMB's approval to implement it. USSS officials told us that they have been using this methodology to estimate the portion of their operating expenses budget authority related to homeland security since 2003. The other four agencies—DOE, USDA, USACE, and DOJ—reported having less formula-driven methodologies in place to determine their funding levels for combating terrorism activities. For example, a DOE official told us that, using the definitions contained in OMB Circular No. A-11, DOE personnel review the agency's programs and activities to determine which are related to homeland security or overseas combating terrorism. Then, a DOE official consults with OMB to determine whether OMB would like the agency to make any revisions to the activities it has designated as combating terrorism. Once DOE finalizes its determination that an activity is categorized as a combating terrorism activity, 100 percent of that activity's budget authority is attributed to combating terrorism. Additionally, officials from the component offices in USDA we met with—the Animal and Plant Health Inspection Service (APHIS) and the Agricultural Research Service (ARS)—told us that they used qualitative methods for determining their homeland security funding levels. For example, APHIS officials said that they developed a set of six questions to determine whether their activities relate to homeland security. 
APHIS officials told us that, in reviewing these activities, if any of the questions in figure 3 applies, they consider the activity related to homeland security, and 100 percent of that activity's budget authority is then attributed to homeland security. In addition to estimating funding levels for combating terrorism activities, OMB also requires agencies to align their homeland security activities with the critical mission areas in the National Strategy for Homeland Security. Officials at four of the six agencies we visited that estimate their combating terrorism funding levels said that they used their professional judgment to determine which critical mission area best aligns with their homeland security activities by comparing those activities with the definitions of the national strategy's critical mission areas. As previously discussed, to estimate funding levels, agencies first identify their combating terrorism activities. Officials at two of the six agencies we contacted said that activities with multiple or dual purposes pose a particular challenge to them when determining their combating terrorism activities because they must apply professional judgment to determine which purpose to emphasize. As a result, determining funding levels for combating terrorism activities and aligning homeland security activities to critical mission areas cannot be precise. For example, as previously noted, GSA officials told us that they conduct upgrades to buildings' fire alarm enhancement systems that could be used to alert employees to a fire. However, these officials said the same system could also be used to alert employees to stay in the building in the event of a terrorist attack. Consequently, GSA cannot definitively categorize its fire alarm enhancement systems as a homeland security activity because the efforts within this activity are not exclusively related to homeland security. DHS officials also reported facing similar challenges. For example, SLGCP has multi-use grants that could be used for both combating terrorism and other goals. SLGCP staff noted that the chemical protection suits provided under the Firefighter Assistance Grant program could be used in the field for a fuel spill or for a terrorist incident such as a dirty bomb. Consequently, DHS believes the process of categorizing combating terrorism activities is an estimation exercise, for which the department's staff must apply their professional judgment. Furthermore, officials at three agencies we visited said that an additional challenge in determining whether an activity should be considered a combating terrorism activity involves interpreting OMB's combating terrorism definitions. For example, a DOE official told us that it can be problematic to categorize activities associated with the Defense Nuclear Nonproliferation Program, such as efforts to assist Russia in converting surplus plutonium into fuel for commercial reactors. After its conversion, this material is no longer suitable for use in a nuclear weapon. Although the argument could be made that the program's eventual effect may be to take potential ammunition away from future terrorists, the amount spent in Russia for the conversion to fuel of already well-protected materials is not reported as combating terrorism. DOE cites this as one example of how deciding whether an activity is considered combating terrorism or non-combating terrorism is a judgment call based on interpretation of the definition. 
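To make the two broad approaches described above concrete, the sketch below is a simplified, hypothetical illustration only; it is not any agency's actual system, and every program name, staff-hour figure, and dollar amount in it is invented. The first function allocates budget authority in proportion to staff hours spent on homeland security-related activities, in the spirit of the formula-driven USSS methodology; the second attributes 100 percent of an activity's budget authority once a qualitative screen flags it, in the spirit of the DOE and APHIS approaches.

    def proportional_allocation(total_budget_authority, activity_hours, hs_related):
        # Formula-driven approach: attribute budget authority in proportion to the
        # share of staff hours spent on activities judged related to homeland security.
        total_hours = sum(activity_hours.values())
        hs_hours = sum(hours for activity, hours in activity_hours.items() if activity in hs_related)
        return total_budget_authority * hs_hours / total_hours

    def all_or_nothing(activity_budgets, is_homeland_security):
        # Qualitative approach: if an activity passes the screening questions,
        # 100 percent of its budget authority is counted as homeland security.
        return sum(budget for activity, budget in activity_budgets.items() if is_homeland_security(activity))

    # Hypothetical data for illustration only.
    hours = {"protective operations": 600_000, "financial crime investigations": 400_000}
    print(proportional_allocation(1_200_000_000, hours, {"protective operations"}))  # 720000000.0

    budgets = {"border screening technology": 50_000_000, "routine facility maintenance": 20_000_000}
    screen = lambda activity: "screening" in activity  # stand-in for an agency's screening questions
    print(all_or_nothing(budgets, screen))  # 50000000

In practice, as the agencies emphasized, the difficulty lies not in the arithmetic but in the professional judgment embedded in deciding which activities count and how the screening criteria are interpreted.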
Agencies in our review manage the process of estimating funding levels for combating terrorism activities through OMB oversight and supervisory review. According to OMB, the responsibility for ensuring that homeland security activities are properly categorized is shared by OMB and the agencies involved. For their part, OMB staff review activities determined to be related to homeland security to ensure that they are categorized in accordance with the homeland security definition. OMB staff told us that there is no written guidance for such a review. Instead, OMB staff rely on the definition in OMB Circular No. A-11 and their judgment to decide whether an activity has been reasonably categorized. OMB staff said that they currently do not review agency estimates of funding data for overseas combating terrorism activities because OMB is no longer required to report on overseas combating terrorism funding data. As noted earlier, section 889 of the Homeland Security Act of 2002 repealed the NDAA reporting requirements in favor of new reporting requirements, and the section 889 reporting requirement applies only to homeland security activities, not overseas activities related to combating terrorism. Although OMB still collects overseas combating terrorism funding data, OMB staff said that they have not reviewed or validated this information since fiscal year 2003. As a result, the overseas combating terrorism data for fiscal years 2004–2006 have not received the same level of scrutiny as the homeland security data. Similarly, absent future legislative action, OMB does not plan to review or validate future funding estimates related to overseas combating terrorism activities. As a result, Congress does not receive reports on both the homeland security and overseas combating terrorism portions of combating terrorism funding. In addition to reviewing homeland security data, OMB also reports taking steps to ensure that agencies have properly aligned their homeland security activities with the six critical mission areas outlined in the National Strategy for Homeland Security. OMB staff told us that, annually, they examine the activities agencies have aligned with each critical mission area to ensure consistency across all federal agencies and to determine whether the activities have been properly aligned based on the definitions of the critical mission areas in the National Strategy for Homeland Security. Such alignments can help inform congressional decision makers about the amount of funding that has been allocated to any one critical mission area. In addition to undertaking the previously mentioned reviews, each year since 2002, OMB has provided agencies with an opportunity to make changes to the activities they report as homeland security. In fiscal year 2005, OMB formalized this process by asking agencies to complete a form outlining their proposed changes prior to official submission of annual budget requests. OMB staff said that OMB's examiners use the definition for homeland security in its Circular No. A-11 to review each agency's request; the examiners then discuss the request with staff from OMB's Homeland Security Branch, as well as with the agency, to decide whether the activity has been reasonably categorized. 
Officials at all six of the agencies we contacted who make their own determinations of funding related to combating terrorism reported having controls in place to help ensure that the agency’s combating terrorism activities are appropriately identified and categorized as either homeland security or overseas combating terrorism. For example, DHS officials said that DHS budget desk officers within the Chief Financial Officer’s Budget Division review activities its components recommend as combating terrorism and check for activities that may have been categorized incorrectly. Then, another budget desk officer, within the same division, reviews the data for unusual fluctuations or trends. If necessary, the budget desk officer and component representatives are to contact OMB to resolve any potential disagreements. After agencies submit information to OMB, OMB program examiners review the data and approve the activities or request the agencies to make revisions according to their instruction. OMB has not implemented three recommendations from our 2002 report that would have and still could provide OMB and Congress with additional information for making budget decisions and help them understand performance results. In our 2002 report, we recommended that OMB require agencies to provide information on obligations in the database used by OMB to produce the President’s annual budget request—and that OMB should include obligations as reported in this database in its annual report on combating terrorism. We made this recommendation to help Congress obtain information on spending that supports the President’s annual budget request related to combating terrorism activities and to provide decision makers with insights as to whether programs are being run according to plans established by their budget projections. Without including obligation data in the Analytical Perspectives, along with funding levels authorized by Congress for agencies’ homeland security activities, it is difficult for decision makers to know (1) how much funding from prior years is still available to potentially reduce new spending requests, (2) whether the rate of spending for a program is slower than anticipated, or (3) what the level of effort (i.e., size of the program) is for a particular year as well as for a program over time. OMB staff told us that OMB has not substantially changed its position on this recommendation since we published our 2002 report. OMB staff continue to cite the effort required to produce such data. While OMB staff acknowledged that OMB examiners use obligation data in assessing the appropriateness of agency budget requests overall, they have felt that budget authority data provide the most insight into combating terrorism programs and facilitate follow up on areas of concern. OMB staff also said that including obligation information in its funding analysis is not necessary because at the end of the fiscal year, most agencies with homeland security activities have already obligated the majority of their budget authority. However, OMB staff also said that they might consider reporting obligation information for a targeted set of accounts that receive multiyear funding and might carry balances for homeland security programs from year to year, unlike the majority of accounts that receive funding with only 1 year of availability. 
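To illustrate why obligation data matter for this kind of oversight, the sketch below computes unobligated balances as budget authority minus obligations for a handful of accounts. The account names and dollar amounts are invented for illustration and are not drawn from OMB's database.

# Illustrative only: account names and dollar amounts are hypothetical.
# Shows how reporting obligations alongside budget authority would expose
# unobligated balances that could inform budget deliberations.

accounts = [
    # (account, budget_authority, obligations) in dollars in millions
    ("Grant program A (multiyear funds)", 1200.0, 850.0),
    ("Operations account B (1-year funds)", 640.0, 632.0),
]

total_unobligated = 0.0
for name, ba, obligated in accounts:
    unobligated = ba - obligated
    total_unobligated += unobligated
    print(f"{name}: ${unobligated:.1f}M unobligated "
          f"({unobligated / ba:.0%} of budget authority)")

print(f"Total unobligated balance: ${total_unobligated:.1f}M")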
A conservative analysis suggests that unobligated balances associated with funding for homeland security activities for fiscal years 2002 through 2004 could be between $2 billion and $3.4 billion. Although it would be important to understand how agencies plan to use these balances, information on what funding is unobligated—and in which accounts—is potentially useful for congressional deliberations on the President's budget request. We recognize that collecting these data would create an additional workload for both OMB and agency budget officials, but we continue to believe that such an effort is warranted for congressional oversight because of the high priority placed on combating terrorism. Therefore, we continue to believe that our prior recommendation on this issue from our 2002 report is relevant and should be implemented.

Similarly, implementation of our 2002 recommendation that OMB direct relevant departments to develop or enhance combating terrorism performance goals and measures and include such measures in the governmentwide plan would assist in determining whether funding increases have improved performance results. Although three of the seven agencies in our review told us that OMB did not direct them to develop performance measures for combating terrorism, OMB staff said that they are working with agencies on the development of combating terrorism performance measures at the agency level, primarily with DHS. OMB staff also said that they have not yet taken any action to prepare measures on a governmentwide basis. Additionally, OMB has not yet prepared a governmentwide plan that could include such measures related to combating terrorism. OMB staff said that the Homeland Security Council—which provides advice to the President on homeland security issues—DHS, and OMB are coordinating and planning for the future development of governmentwide performance measures related to combating terrorism. However, OMB staff said that they have not yet established a timeline for developing such measures.

Implementation of our 2002 recommendation to include national-level and federal governmentwide combating terrorism performance measures as a supplement to existing strategies and their future revisions would help to assess and improve preparedness at the federal and national levels. As previously discussed, federal governmentwide performance measures related to combating terrorism have not yet been developed. Moreover, there have been no supplements or revisions to the existing national strategies that include federal governmentwide or national-level combating terrorism performance goals and measures. Without goals and measures at the federal and national levels, it is difficult to organize a coordinated and effective combating terrorism effort. Because numerous agencies are responsible for combating terrorism, federal governmentwide performance planning could better facilitate the integration of federal and national activities to achieve federal goals in that it could provide a cohesive perspective on the goals of the federal government. Furthermore, without governmentwide combating terrorism goals and measures, the Administration does not have an effective means of articulating to Congress or the American people the governmentwide accomplishments related to combating terrorism.
While OMB has not yet developed governmentwide performance goals and measures, OMB established a formal assessment tool for the budget formulation process in fiscal year 2002, the Program Assessment Rating Tool (PART), to help measure program performance. However, OMB has not yet completed all PART reviews for programs that relate to combating terrorism activities, nor has it done a crosscutting combating terrorism or homeland security PART review that could address the appropriateness of performance measures in the larger context. In our recommendation from an earlier report, we stated that targeting PART could help focus decision makers' attention on the most pressing policy and program issues. Furthermore, we recommended that such an approach could facilitate the use of PART assessments to review the relative contributions of similar programs to common or crosscutting goals and outcomes.

It is critical that the federal government, as the steward of the taxpayers' money, ensure that such funds are managed to maximize results. Governmentwide combating terrorism performance measures that support the national strategies would allow the Administration and Congress to more effectively assess the federal government's progress in combating terrorism initiatives and better determine how effectively the government is using valuable resources. Furthermore, they would provide a more effective means of holding agencies accountable for achieving results.

Notwithstanding a lack of progress in developing governmentwide performance measures, some agencies have performance goals and measures that reflect priorities for combating terrorism. Performance measures can provide information on many things, such as outputs, which provide the number of activities, and outcomes, which demonstrate achievement of intended results. Four of the seven agencies we contacted—DHS, DOE, USDA, and DOJ—developed performance measures for combating terrorism activities as part of their efforts under the Government Performance and Results Act of 1993 (GPRA). An example of a DOE output performance measure, designed to help achieve its goal of protecting National Nuclear Security Administration personnel, facilities, and nuclear weapons, is as follows: "Replace, upgrade, re-certify 15 percent of emergency response equipment by 2009." To help DHS evaluate its efforts related to preventing entry of unauthorized individuals and those who pose a threat to the nation, DHS created the following performance measure: "Determine the percentage of foreign nationals entering the United States who have biometric and biographic information on file prior to entry, including the foreign nationals who are referred for further inspection actions and with fraudulent documents identified." This is an output-related performance measure that provides information about the number of foreign nationals who enter the country requiring further inspection.

Under GPRA, virtually every executive agency is required to develop strategic plans covering a period of at least 5 years forward from the fiscal year in which they are submitted and to update those plans at least every 3 years. Under this act, strategic plans are the starting point for agencies to set annual performance goals and to measure program performance in achieving those goals. Although GPRA does not specifically require executive agencies to develop strategic plans related to combating terrorism, DOD has initiated efforts to develop strategic plans that incorporate performance measures for combating terrorism.
In response to our previous recommendation that DOD develop a framework for the antiterrorism program that would provide the department with a vehicle to guide resource allocations and measure the results of improvement efforts, DOD developed an Antiterrorism Strategic Plan. This preliminary framework includes, among other things, a collective effort to defend against, respond to, and mitigate terrorist attacks aimed at DOD personnel. According to DOD officials, the strategic goals established in DOD’s Antiterrorism Strategic Plan directly align with OMB’s definitions of homeland security and overseas combating terrorism. One of DOD’s strategic goals is to conduct effective antiterrorism training and execute realistic antiterrorism exercises. DOD officials reported that they intend to collect antiterrorism performance data from all DOD components and plan to issue the first performance report on antiterrorism in the second quarter of fiscal year 2006. While we have not yet fully evaluated the effectiveness of this framework or plan, such efforts represent important steps taken by an agency to develop performance measures and, consequently, a results-oriented management framework specifically related to combating terrorism activities. Currently, because governmentwide performance measures have not been developed, the executive branch does not have a means to effectively measure and link resources expended and performance achieved related to combating terrorism efforts on a governmentwide basis. Without a clear understanding of this linkage, the executive branch and Congress may be missing opportunities to increase productivity and efficiency to ensure the best use of taxpayer funds. Therefore, we continue to believe that our prior recommendations on this issue from our 2002 report are important and should be implemented. In our 2002 report, we also made recommendations to OMB concerning an analysis of duplication of effort related to combating terrorism activities and the timely reporting of information to support congressional budget deliberations. Our November 2002 report was issued concurrently with the enactment of the Homeland Security Act of 2002, which repealed OMB’s prior reporting requirements, including the duplication analysis, and accelerated the timeline for OMB to report funding data on homeland security activities. The status of these recommendations is discussed further in appendix VII. Given the recent emphasis on and significance of combating terrorism, Congress should have the best available funding information to assist in its oversight role. Since the enactment of the Homeland Security Act of 2002, OMB has not been required to report on overseas combating terrorism data. Moreover, OMB staff said that funding data for overseas combating terrorism activities do not receive the same level of scrutiny as funding data for homeland security activities. Thus, the quality of the overseas combating terrorism data may degrade over time. Reporting such data along with homeland security funding would greatly improve the transparency of funding attributed to combating terrorism activities across the federal government. Although OMB’s analysis of homeland security funding in the Analytical Perspectives of the President’s budget satisfies the current legal requirements under the Homeland Security Act of 2002, it does not provide a complete accounting of all funds allocated to combating terrorism activities. 
If Congress is interested in receiving data on overseas combating terrorism funding as well as data on homeland security funding, then Congress should consider requiring OMB to report on overseas combating terrorism funding data in the Analytical Perspectives of the President's budget along with homeland security funding. We provided a draft of this report to OMB, USDA, DOD, DOE, DHS, DOJ, USACE, GSA, and the National Security Council for review and comment. OMB provided formal written comments on December 21, 2005, which are presented in appendix VIII. OMB said it appreciates the in-depth analyses in the report and the detailed review of the government’s homeland security spending levels, but objected to GAO including information on overseas combating terrorism funding data. OMB questioned the reliability of this information because it has not reviewed agency submissions of overseas combating terrorism data since fiscal year 2003. We believe the overseas combating terrorism data are sufficient for the purposes of this report and therefore have included them in appendix I. As discussed in this report, agencies in our review that provide information on combating terrorism activities—including those with overseas combating terrorism responsibilities—use OMB criteria and their own internal monitoring and review processes to categorize funding by activities. These agencies reported that they have designed controls over the estimation process to help ensure the reliability of the data. However, we agree that OMB’s review would provide an additional level of assurance, which is why we have made this a matter for congressional consideration. In addition to OMB’s comments, GSA provided formal written comments on a draft of this report on December 2, 2005, which are presented in appendix IX. In commenting on the draft report, GSA noted that it concurred with the contents of the report that discussed GSA. USDA, DOD, DOE, and USACE had no comments on the report. OMB, DHS, and DOJ provided technical and clarifying comments that we incorporated as appropriate. The National Security Council did not provide comments. We are sending copies of this report to the Senate Committee on Homeland Security and Governmental Affairs; Senate Committee on the Judiciary, Subcommittee on Terrorism, Technology, and Homeland Security; the House Committee on Government Reform; the House Committee on the Judiciary; the Director of the Office of Management and Budget; the Commanding General and Chief of Engineers of the U.S. Army Corps of Engineers; the Secretary of the Department of Agriculture; the Secretary of the Department of Defense; the Secretary of the Department of Energy; the Attorney General; the Administrator of the General Services Administration; the Secretary of Homeland Security; and other interested parties. We will also make copies available to others on request. In addition, the report will be available on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777 or by e-mail at [email protected]. GAO contacts and staff acknowledgments are listed in appendix X. In this report, we use fiscal year 2002 as the base year for analyzing trends in funding for combating terrorism for a number of reasons. Although fiscal year 2001 may seem like the logical starting point, fiscal year 2002 was the first full year in which decision making was informed by the terrorist attacks in the United States. 
Moreover, to make information comparable for the President's fiscal year 2004 budget request, the Office of Management and Budget (OMB) restructured fiscal years 2002 and 2003 budget data to reflect changes that occurred with the creation of the Department of Homeland Security (DHS) in 2003. This made fiscal year 2002 the earliest year that OMB's MAX database captured funding for combating terrorism that is, for the most part, in the current agency, bureau, and account structure. In addition, OMB for the first time required agencies to identify funding for homeland security and overseas combating terrorism separately from other funding in an account. Finally, fiscal year 2002 marked the earliest year in which OMB presented information organized according to the National Strategy for Homeland Security's six critical mission areas in its Annual Report to Congress on Combating Terrorism (September 2003).

The information for homeland security and overseas combating terrorism (OCT) is shown separately in the following tables. Tables 1 to 3 provide information on homeland security at progressively finer levels of detail. However, neither these tables nor tables 4 and 5 include funding in fiscal years 2004 and 2005 for DHS's Project BioShield. OMB asserts that including this information can distort year-over-year comparisons. The Homeland Security Appropriations Act of 2004 provided $5.6 billion for this project to develop and procure tools to address public health consequences of terrorism. Pursuant to that act, specific amounts became available in fiscal year 2004 ($0.9 billion) and in fiscal year 2005 ($2.5 billion). Tables 4 and 5 display how OMB and agencies characterize funding according to the six critical mission areas for homeland security. Unlike the appropriations account structure, which is based in law, mission area categories and activities can be modified to meet changing needs. OMB has stated that "the Administration may refine definitions or mission area estimates over time based on additional analysis or changes in the way specific activities are characterized, aggregated, or disaggregated." Tables 6 and 7 provide unpublished OMB data on OCT. According to OMB officials, they continue to collect these data from agencies but do not review agency information since OMB is no longer required to report on overseas combating terrorism funding or activities. All comparisons or trends are for fiscal years 2002 through 2005. We have included the President's fiscal year 2006 budget request because it contained the latest data available at the time of this review.

Homeland security activities are funded in over 200 appropriations accounts in 32 agencies and the District of Columbia. As shown in table 1, the Departments of Homeland Security, Defense (DOD), Health and Human Services (HHS), Justice (DOJ), and Energy (DOE) account for over 90 percent of governmentwide homeland security funding annually since fiscal year 2003. As shown in table 2, DHS has received the largest share of funding for homeland security activities. The department's average annual funding has been $22.1 billion, or about 54 percent of the total amount available annually from fiscal years 2002 through 2005. For fiscal year 2006, the President proposed $27.3 billion for DHS's homeland security activities, or 55 percent of total spending. DOD also received a large share of homeland security funding, averaging 18 percent annually over the same period, with $9.5 billion requested for fiscal year 2006.
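The shares and averages discussed around tables 1 and 2 can be derived directly from year-by-year budget authority. The following sketch shows the arithmetic for a hypothetical agency; the dollar figures are placeholders rather than OMB data.

# Illustrative arithmetic only; the dollar figures below are hypothetical
# placeholders, not OMB data. Shows how an agency's average annual funding,
# its share of total homeland security funding, and its average annual
# growth rate can be computed from year-by-year budget authority.

funding = {  # budget authority by fiscal year, dollars in billions
    "Agency X": {2002: 17.0, 2003: 21.0, 2004: 24.0, 2005: 26.5},
    "All agencies": {2002: 33.0, 2003: 41.0, 2004: 44.0, 2005: 47.0},
}

agency = funding["Agency X"]
total = funding["All agencies"]
years = sorted(agency)

average_annual = sum(agency.values()) / len(agency)
average_share = sum(agency[y] / total[y] for y in years) / len(years)

# Average annual (compound) growth rate between the first and last year.
n_years = years[-1] - years[0]
growth_rate = (agency[years[-1]] / agency[years[0]]) ** (1 / n_years) - 1

print(f"Average annual funding: ${average_annual:.1f}B")
print(f"Average share of total: {average_share:.0%}")
print(f"Average annual growth rate, FY{years[0]}-FY{years[-1]}: {growth_rate:.0%}")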
Most of DOD’s homeland security funding is for functions related to security at military installations domestically and for research and development of antiterrorism technologies. Homeland security-related funding for DOD is increasing at an annual rate of about 18 percent, over 5 percent more than the rate of increase for DHS. HHS has had the largest percentage increase since fiscal year 2002, with an average annual rate of about 30 percent. Funding has been provided primarily to improve local response to catastrophic events and for research at the National Institutes of Health to find new ways to detect and combat biological agents. Table 3 provides homeland security data by agency, bureau, and account. OMB first started reporting information by the six critical mission areas in its 2003 Report to Congress on Combating Terrorism. See appendix I for definitions of each of the critical mission areas laid out in the National Strategy for Homeland Security. As shown in table 4, the distribution of funding across the six critical mission areas has been fairly consistent during this period. Table 5 shows which agencies are responsible for activities covered under the six mission areas. According to OMB data, the greatest share of funding between fiscal years 2002 and 2005 has been associated with border and transportation security, followed by funding for protecting critical infrastructure and key assets. According to OMB data shown in table 4, border and transportation security activities’—almost all of which are located in DHS—received between 38 and 41 percent annually of total funding. In fiscal year 2006, the President proposed funding totaling $19.3 billion for these activities, of which $18.2 billion is for activities in DHS. Nearly a third of all homeland security spending for this period has been labeled as protecting critical infrastructure and key assets. For fiscal years 2002 through 2005, DOD generally has received 50 percent or more annually for activities in this critical mission area. DHS and HHS activities are the primary recipients of funding for activities associated with emergency preparedness and response. Prior to the September 11 attacks, OMB’s annual report to Congress on combating terrorism made no distinction between domestic and overseas combating terrorism. With the development of policies and definitions to support the newer concept of homeland security and the creation of the Department of Homeland Security, “overseas combating terrorism” became the term used to describe those activities associated primarily with securing U.S. embassies and military facilities overseas and some intelligence efforts. Tables 6 and 7 show funding for overseas combating terrorism activities. For fiscal years 2002 and 2003, these amounts reflect estimates of gross budget authority that agencies attributed to overseas combating terrorism activities and reported to OMB. OMB then reviewed and validated these amounts—along with funding associated with homeland security activities—and published them in its annual report on combating terrorism. However, the Homeland Security Act of 2002 required that only funding related to homeland security activities be reported. Thus, while OMB continues to collect information on funding associated with overseas combating terrorism activities, it reported that the overseas combating terrorism data for fiscal years 2004 through 2006 have not been reviewed or validated. 
As a result, the overseas combating terrorism data for fiscal years 2004–2006 did not receive the same level of scrutiny as the homeland security data. Nevertheless, on the basis of funding data agencies attributed to overseas combating terrorism, most of that funding was provided to DOD. As shown in table 7, funding for the Administration's oversight of the nation's intelligence programs grew at an average annual rate of 68 percent between fiscal years 2002 and 2005, according to agency data as reported in OMB's MAX database. The second fastest growing category was for international assistance programs, with an average annual growth rate of 40 percent for the period under review. The latter primarily supports foreign governments' efforts to combat terrorism and increase law enforcement capability. In contrast to the DOD funding increase for homeland security activities shown in table 1, DOD's funding for activities defined as overseas combating terrorism has declined an average of 7 percent between fiscal years 2002 and 2005.

The National Strategy for Homeland Security sets out a plan to improve combating terrorism domestically through the cooperation and partnering of federal, state, local, and private sector organizations on an array of functions. The strategy organizes these functions into six critical mission areas:

Intelligence and warning involves the identification, collection, analysis, and distribution of intelligence information appropriate for preempting or preventing a terrorist attack.

Border and transportation security emphasizes the efficient and reliable flow of people, goods, and material across borders while deterring terrorist activity.

Domestic counterterrorism focuses on law enforcement efforts to identify, halt, prevent, and prosecute terrorists in the United States.

Protecting critical infrastructure and key assets stresses securing the nation's interconnecting sectors and important facilities, sites, and structures.

Defending against catastrophic threats emphasizes the detection, deterrence, and mitigation of terrorist use of weapons of mass destruction.

Emergency preparedness and response highlights damage minimization and recovery from terrorist attacks.

As part of our work, we examined the statutory changes in requirements for reporting combating terrorism activities that occurred as the result of the passage of the Homeland Security Act of 2002. This appendix provides additional background on the act, as well as the challenges OMB continues to face in tracking combating terrorism activities and ensuring the transparency of related funding data.

Enacted on November 25, 2002, the Homeland Security Act of 2002 established the Department of Homeland Security and, among other things, changed OMB's requirements for reporting funding data related to combating terrorism. Section 889 of the Homeland Security Act of 2002 repealed the NDAA reporting requirements in favor of new reporting requirements. In particular, section 889 required the President's budget to include an analysis of "homeland security funding," which it defined by reference to OMB's 2002 report as activities to detect, deter, protect against, and if needed, respond to terrorist attacks occurring within the United States. OMB's definition of homeland security activities included activities that the agency had not previously treated as combating terrorism.
Under section 889, OMB is required to report only on funding for homeland security by agency, budget function (i.e., functions that cover 17 areas of the government such as agriculture and health), and initiative areas. OMB staff said that although they do not report funding data related to overseas combating terrorism, they still collect such data as part of the annual budget. Because there is no longer a requirement to report on overseas combating terrorism funding data, OMB staff said that they are not reviewing the information that agencies provide to them. In addition, the definition of overseas combating terrorism activities in OMB Circular No. A-11 has not changed since 2003. As a result, OMB staff said that data on overseas combating terrorism funding are not necessarily valid and could be misleading.

In response to section 889's changes, OMB began showing homeland security funding data by agency, by budget function, by account, and by each of the six critical mission areas established in the National Strategy for Homeland Security in the Analytical Perspectives of the President's fiscal year 2005 budget. OMB also included narrative descriptions of major activities and the administration's priorities in this section of the Analytical Perspectives. To present funding data for homeland security activities by critical mission area, OMB included a table for each critical mission area displaying budget authority for 3 fiscal years (prior year, current year, and budget year request).

Section 889 of the Homeland Security Act also required OMB to include the most recent risk assessment and summary of homeland security needs in each initiative area in the President's annual budget. OMB's prior reporting requirements called for OMB to report on the amounts expended by executive agencies on combating terrorism activities, as well as the specific programs and activities for which funds were expended, while section 889 explicitly mandates a risk assessment and summary of resource needs in each initiative area. According to OMB staff, OMB does not have the expertise or the staff to conduct separate risk assessments, and it relies on the risk assessments of each individual agency to determine areas of high risk in order to meet this requirement.

In addition, section 889 required that OMB include in the President's annual budget an estimate of the user fees collected by the federal government to help offset expenses related to homeland security activities, such as the Transportation Security Administration's passenger security fees, which are added to airline passengers' ticket costs. To meet this requirement, OMB included a table of user fees by major cabinet-level department displaying the related budget authority in the Analytical Perspectives that accompanied the President's fiscal years 2005 and 2006 budgets.

Finally, section 889 of the Homeland Security Act of 2002 accelerated the timeline for reporting funding data by requiring OMB to report funding data in the President's budget, which must be submitted to Congress by the first Monday in February. Under its previous reporting requirement, OMB was required to issue a separate stand-alone report on combating terrorism to Congress by March 1 of each year. OMB complied with the new timeline for the President's fiscal year 2005 and 2006 budgets. Despite these changes, OMB staff report still facing challenges in tracking activities related to combating terrorism and ensuring the transparency of related funding data.
OMB staff said that the creation of DHS helped minimize the difficulties they face in ensuring the transparency of related funding data and tracking activities related to combating terrorism, since the creation of DHS resulted in approximately 60 percent of homeland security funding being merged into funding for one agency at the time DHS became operational. Although OMB is no longer required to report on funding data related to overseas combating terrorism activities, OMB staff said that many of the difficulties cited in our 2002 report still apply and that they will most likely always face these challenges. For example, OMB staff reported that they are still challenged by the large number of agencies involved in combating terrorism activities. To obtain information needed to fulfill its reporting requirements related to funding data, OMB has to interact with 32 other agencies and the District of Columbia that have responsibilities to combat terrorism in addition to DHS. OMB staff also said that it will always require significant effort to identify funding for combating terrorism activities, since such funding is often subsumed in budget accounts that provide funding for other activities. In addition, OMB staff also stated that they were challenged in tracking funding related to combating terrorism, given the wide variety of missions represented, including intelligence, law enforcement, health services, and environmental protection, as well as the global nature of missions for combating terrorism. However, OMB staff told us that they have worked diligently to identify homeland security activities by monitoring agency reviews of homeland security spending and developing an annual crosscut review, which identifies projects with common themes across agencies. Hundreds of budget accounts include activities related to combating terrorism. The following summarizes 15 of the 34 accounts we reviewed. The funding levels shown in these accounts represent the portion of the account that supports combating terrorism efforts by critical mission area as reflected in OMB’s Homeland Security and Overseas Combating Terrorism database that supports the President’s fiscal year 2006 budget request. Our summaries also include descriptions of the combating terrorism activities within these accounts as well as the agencies’ estimates of budget authority that relate to these combating terrorism activities. For purposes of this appendix, we selected one account to display for the Department of Energy, General Services Administration, and the United States Army Corps of Engineers. We also selected to display one account for each component office that we contacted at the Departments of Agriculture, Homeland Security, and Justice—the Animal and Plant Health Inspection Service (APHIS) and the Agricultural Research Service (ARS) within the Department of Agriculture (USDA); Customs and Border Protection (CBP), Immigration and Customs Enforcement (ICE), the Information Analysis and Infrastructure Protection Directorate (IAIP), the Office of State and Local Government Coordination and Preparedness (SLGCP), the Transportation Security Administration (TSA), the United States Coast Guard (USCG), the United States Secret Service (USSS), and the Science and Technology Directorate of DHS; and the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), and the Federal Bureau of Investigation (FBI) of the Department of Justice. 
The activities included in all 34 of the accounts that we reviewed were consistent with OMB's definitions of homeland security and overseas combating terrorism as defined in OMB Circular No. A-11.

Department of Homeland Security: Information Analysis and Infrastructure Protection
Critical Mission Area: Intelligence and Warning
Assessment and Evaluation Account (024-90-0911)

The Assessment and Evaluation account provides funding for threat analysis associated with collecting and fusing law enforcement, intelligence, and other information to evaluate terrorist threats to the homeland.

Infrastructure Vulnerability and Risk Assessment includes efforts to provide analytic tools to promote communication, coordination, collaboration, and cooperation to analyze intelligence information with the Intelligence Community; law enforcement agencies; state, local, and tribal authorities; the private sector; and other critical stakeholders regarding existing threats to the homeland.

Homeland Security Operations Center (HSOC) serves as the nation's center for information sharing and domestic incident management. The HSOC collects and fuses intelligence information from a variety of sources every day to help deter, detect, and prevent terrorist acts. Operating 24 hours a day, 7 days a week, the HSOC is tasked with providing real-time situational awareness and monitoring of the homeland, and it coordinates incidents and response activities.

Analysis and Studies includes efforts by IAIP personnel to develop threat databases, participate in exercises and crisis simulations, and prepare products on threats. It also includes funding for an independent evaluation of IAIP's risk assessment methodology.

Threat Determination and Assessment includes efforts by IAIP personnel to develop terrorist threat situational awareness (i.e., the analytical capability required to develop and integrate timely, actionable, and valuable information based on analysis of terrorist threat intelligence information and infrastructure vulnerability assessments).

Biosurveillance includes efforts by IAIP personnel to integrate biosurveillance data from other federal agencies, such as the Centers for Disease Control and Prevention, with threat information. These activities are conducted to help IAIP become better positioned to provide information to decision makers and others to aid in the response to threats and incidents.

Other Activities include fiscal year 2004 activities that were restructured for the fiscal years 2005 and 2006 budget requests. For instance, in fiscal year 2004, activities to conduct risk assessments were included under the risk assessment division activity, whereas for the fiscal year 2006 budget request, these activities are now included in analysis and studies and threat determination and assessment, as discussed above.

[Table: IAIP Assessment and Evaluation (024-90-0911), dollars in millions]

The funding levels shown in this account represent the portion of the account that supports combating terrorism efforts for the critical mission area Intelligence and Warning.
Critical Mission Area: Border and Transportation Security
Customs and Border Protection, Salaries and Expenses Account (024-50-0530)

The Salaries and Expenses account provides funding for Customs and Border Protection personnel's efforts to enforce laws relating to border security, immigration, customs, and agricultural inspections and regulatory activities related to plant and animal imports; acquisition, lease, maintenance, and operation of aircraft; purchase and lease of police-type vehicles; and contracting with individuals for personal services abroad.

Enforcement funds activities related to CBP personnel's efforts to identify, investigate, apprehend, and remove criminal aliens; maintain and update systems to track criminal and illegal aliens on the border in areas with high apprehensions to deter illegal entry; repair, maintain, and construct border facilities; and collect fines levied against aliens for failure to depart the United States after being ordered to do so.

Border Protection funds activities by CBP personnel to enforce various provisions of law that govern entry and presence in the United States, including detecting and preventing terrorists and terrorists' weapons from entering the United States, seizing illegal drugs and other contraband, determining the admissibility of people and goods, apprehending people who attempt to enter the country illegally, protecting our agricultural interests from harmful pests and diseases, collecting duties and fees, and regulating and facilitating international trade.

Small Airport Facilities includes the collection of user fees by CBP personnel generated from inspection services that are provided to participating small airports, including the airports located at Lebanon, New Hampshire, and Pontiac/Oakland, Michigan, and other small airports designated by the Department of the Treasury based on the volume or value of business cleared through the airport from which commercial or private aircraft arrive from a place outside the United States.

[Table: CBP Salaries and Expenses Account (024-50-0530), dollars in millions]

The funding levels shown in this account represent the discretionary portion of the account that supports combating terrorism efforts for the critical mission area Border and Transportation Security. CBP determined that 67 percent of its discretionary funding relates to combating terrorism efforts.

Transportation Security Administration, Discretionary Fee Funded, Salaries and Expenses Account

The Discretionary Fee Funded, Salaries and Expenses account provides funding for TSA personnel's efforts to provide security services for civil aviation. This account is funded through collections from passenger security and air carrier fees (see descriptions below). These fees offset TSA's appropriated funds as the fees are collected, thereby reducing the general fund contribution. TSA received authority to collect such fees under the Aviation and Transportation Security Act.

Component activities: Aviation Security includes collections from passenger security and air carrier fees. The passenger fee is added to each airline passenger's ticket purchase, and the air carrier fee is paid directly by air carriers. TSA receives its full aviation security appropriation, and these fees offset the appropriated funds as the fees are collected, thereby reducing the general fund contribution for TSA personnel's efforts to provide security services for civil aviation, such as passenger and baggage screening, and establishing Federal air marshals on various commercial flights.
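The offsetting-collections mechanism described above can be summarized with simple arithmetic: fee collections reduce the general fund contribution toward the gross appropriation. The figures in the sketch below are hypothetical placeholders, not TSA data.

# Hypothetical figures; illustrates the offsetting mechanism described above,
# in which passenger security and air carrier fee collections reduce the
# general fund contribution toward the aviation security appropriation.

aviation_security_appropriation = 4_700.0   # dollars in millions (placeholder)
passenger_fee_collections = 1_750.0         # placeholder
air_carrier_fee_collections = 300.0         # placeholder

offsetting_collections = passenger_fee_collections + air_carrier_fee_collections
general_fund_contribution = aviation_security_appropriation - offsetting_collections

print(f"Offsetting collections: ${offsetting_collections:,.1f}M")
print(f"General fund contribution: ${general_fund_contribution:,.1f}M")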
Aviation Security Fee Proposal: In the fiscal year 2006 budget request, the President proposed to increase the air passenger security fee by $3.00, raising the fee on a typical flight to $5.50. For passengers traveling multiple legs on a one-way trip, the President proposed a maximum fee of $8.00. The budget states that such fee increases will allow TSA to almost fully recover the costs of federal airport screening operations, a subset of aviation security activities.

[Table: TSA Discretionary Fee Funded, Salaries and Expenses account, dollars in millions]

The funding levels shown in this account represent the portion of the account that supports combating terrorism efforts for the critical mission area Border and Transportation Security.

United States Coast Guard, Operating Expenses Account (024-60-0610)

Ports, Waterways, and Coastal Security includes efforts by USCG to conduct harbor patrols, vulnerability assessments, and intelligence gathering and analysis to prevent terrorist attacks and minimize the damage from any attacks that could occur. It also includes USCG's efforts to escort and conduct security boardings of any vessel that may pose a substantial security risk to U.S. ports because of the composition of its crew, passengers, or cargo.

Drug Interdiction includes efforts by USCG personnel to interdict illegal drug shipments by apprehending smugglers at sea attempting to import illegal drugs into the United States and halting the destructive influence of drug consumption by disrupting the drug supply and preventing potential funding sources for terrorism.

Migrant Interdiction includes efforts by USCG personnel to maintain a presence in migrant departure areas and to prohibit or deter people who attempt to enter the United States illegally via maritime routes.

Defense Readiness includes efforts by USCG personnel to deploy cutters and other boats in and around harbors to protect the Department of Defense during military operations and meet requirements within the national strategy for homeland security and the national security strategy.

Other Law Enforcement protects U.S. fishing grounds, and therefore the nation's economic security, by keeping out those who mean to do harm and ensuring that foreign fishermen do not illegally harvest U.S. fish stocks.

[Table: U.S. Coast Guard Operating Expenses Account (024-60-0610), dollars in millions]

The sum includes the USCG's fiscal year 2004 supplementals for these activities, totaling $90.6 million.

Immigration and Customs Enforcement, Salaries and Expenses Account (024-50-0540)

Investigations: Immigration and Customs Enforcement personnel conduct investigations to uncover and eliminate vulnerabilities that terrorists and other criminals exploit to harm our nation's citizens, national security, and the economy through an array of investigative processes in the areas of smuggling, finance, and national security. Through these investigations, Immigration and Customs Enforcement personnel work to identify the people, materials, and funding essential to sustaining terrorist threats and criminal enterprises, and to disrupt and dismantle those operations.

[Table: ICE Salaries and Expenses Account (024-50-0540), dollars in millions]

The funding levels shown in this account represent the portion of the account that supports combating terrorism efforts for the critical mission area Domestic Counterterrorism.

Bureau of Alcohol, Tobacco, Firearms and Explosives, Salaries and Expenses Account (011-14-0700)

Firearms includes efforts by ATF personnel to counter firearms violence, including acts of terrorism, through enforcement of the federal firearms laws, regulation of the firearms industry, and participation in outreach efforts to leverage partnerships with federal, state, local, and foreign law enforcement in the fight against terrorism.
Arson and Explosives includes efforts by ATF personnel to enforce federal explosives and arson laws, as well as the regulation of the explosives industry and training through innovation to protect the public from terrorists' use of explosives and acts of arson.

Alcohol and Tobacco includes ongoing efforts by ATF personnel to reduce the rising trend of illegal diversion of tobacco products that may provide financial support to the causes of terrorist groups.

Reduce Violent Crime includes efforts by ATF personnel to deny terrorists access to firearms, explosives, and explosive materials, such as the participation of ATF agents in various terrorism task forces.

Protect the Public includes efforts by ATF personnel to safeguard the public from arson and explosives incidents.

[Table: ATF Salaries and Expenses (011-14-0700), dollars in millions]

Department of Justice: Federal Bureau of Investigation
Critical Mission Area: Domestic Counterterrorism
Salaries and Expenses Account (011-10-0200)

The Salaries and Expenses account provides funding for efforts by FBI personnel to detect, investigate, and prosecute crimes by terrorists against the United States.

Counterterrorism Field Investigations includes efforts by the FBI to lead investigations in countering the threat of terrorism.

Equipment/Technology includes efforts by FBI personnel to provide engineering services, technical support, and equipment to FBI field offices and to conduct research and development to adapt technology for use against criminals and terrorists.

Counterterrorism Headquarters Coordination includes activities by FBI program managers in directing and guiding field investigators by managing investigations and providing training in the latest terrorism investigation techniques and methods.

Terrorist Screening Center funds multi-agency efforts, including components of the Departments of Homeland Security, Justice, and State, to maintain a consolidated watch list of known or suspected terrorists.

Miscellaneous Activities includes a range of activities such as those provided under the Critical Incident Response Group (CIRG). CIRG responds to crimes that pose great dangers and require skills that are not routinely available in law enforcement agencies. For example, CIRG provides trained, experienced negotiators, crisis managers, and tactical and aviation personnel to assist law enforcement agencies. A portion of these activities is considered related to counterterrorism investigations.

[Table: FBI Salaries and Expenses (011-10-0200), dollars in millions]

Department of Homeland Security: United States Secret Service
Critical Mission Area: Protecting Critical Infrastructure and Key Assets
Operating Expenses Account (024-40-0400)

The Operating Expenses account provides funding to support efforts of U.S. Secret Service personnel in providing protective services and conducting investigations.

Protective Services provide for the protection of the President of the United States and immediate family members; the President-elect, the Vice President, or other officer next in order of succession to the Office of the President, and the Vice President-elect, and the members of their immediate families; and a visiting head of a foreign state or foreign government and accompanying spouse.
Investigative Services provide for investigation of counterfeiting of currency and securities; forgery and altering of government checks and bonds; thefts and frauds relating to Treasury electronic funds transfers; financial access device fraud, telecommunications fraud, and computer and telemarketing fraud; fraud relative to federally insured financial institutions; and other criminal and noncriminal cases.

Component activities: Domestic Protection of Persons includes activities conducted by Secret Service officials to protect the President of the United States, the President-elect, the Vice President, or other officer next in order of succession to the Office of the President, and the Vice President-elect and the members of their immediate families; former Presidents, their spouses, and children under the age of 16; visiting heads of foreign states or governments; and major presidential and vice presidential candidates and their spouses. It also includes efforts conducted by Secret Service officials to plan, coordinate, and implement security operations at National Special Security Events, such as the Republican and Democratic National Conventions.

Financial and Infrastructure Investigations includes activities by Secret Service officials to protect the nation's financial and monetary systems, and the critical infrastructure that supports those systems, such as the development of tools to combat cyber terrorism.

Domestic Protection of Government Buildings includes activities conducted by Secret Service officials to protect critical infrastructure and key assets by providing a security perimeter and building security at the White House/Treasury complex, the foreign diplomatic community located within the Washington metropolitan area, and at other Secret Service-secured sites.

[Table: Secret Service Operating Expenses Account (024-40-0400), dollars in millions]

Department of Energy: National Nuclear Security Administration
Critical Mission Area: Protecting Critical Infrastructure and Key Assets
Weapons Activities Account (019-05-0240)

The Weapons Activities account provides for the maintenance and refurbishment of nuclear weapons to sustain confidence in their safety, reliability, and performance; expansion of scientific, engineering, and manufacturing capabilities to enable certification of the enduring nuclear weapons stockpile; and manufacture of nuclear weapon components under a comprehensive test ban. The weapons activities account also provides for continuous maintenance and investment in DOE's enterprise of nuclear stewardship, including maintaining the capability to return to the design and production of new weapons and to underground nuclear testing, if so directed by the President.

National Nuclear Security Administration (NNSA) Safeguards and Security ensures the protection of NNSA personnel, nuclear weapons, information, cyber infrastructure, and other materials at NNSA sites and facilities.

NNSA Secure Transportation Asset provides for the transportation of nuclear weapons, special nuclear material, selected non-nuclear weapons components, limited-life components, and any other DOE materials to and from military locations, between nuclear weapons complex facilities, and to other government locations within the continental United States.

NNSA Safety and Security Cybersecurity provides a foundation to facilitate detection of intrusions (hackers and other forms of attacks) and to conduct vulnerability assessments and take corrective action at each NNSA site.
It also includes actions to implement the Department of Energy's and NNSA's cybersecurity policies and practices and continuously improve NNSA's network and computing systems. The costs of these activities also include personnel time and the acquisition and maintenance of cybersecurity technology (hardware and software) needed to maintain NNSA's cybersecurity posture while addressing cybersecurity threats.

National Nuclear Security Administration Safeguards and Security-HQ Research and Development aids in the efforts to enhance

[Table: National Nuclear Security Administration Weapons Activities (019-05-0240), dollars in millions]

General Services Administration
Critical Mission Area: Protecting Critical Infrastructure and Key Assets
Federal Properties Activities/Fee Funded, Federal Buildings Fund Account (023-05-4542)

The Real Property Activities account provides funding for GSA personnel's efforts to implement security measures at federal buildings.

New Construction includes efforts to implement security enhancements to newly constructed federal buildings, such as implementing a structural design to ensure that support columns are sized, reinforced, and protected so that a terrorist event will not cause collapse; perimeter protection measures; and window systems design to mitigate the hazardous effects of flying glass following an explosive event.

Major Repairs and Alterations includes efforts associated with major repair and alteration projects (that is, requests for repairs and alterations greater than $2.41 million for fiscal year 2006, $2.36 million in fiscal year 2005, and $2.3 million in fiscal year 2004) to implement security measures to modify federal buildings for security enhancements such as installing bollards.

Building Operations includes studies conducted to determine the need for retrofitting federal facilities against threats that will cause building columns or structures to be critically damaged and collapse.

Minor Repairs and Alterations includes efforts associated with minor repair and alteration projects (that is, those requests costing less than $2.41 million for fiscal year 2006, $2.36 million in fiscal year 2005, and $2.3 million in fiscal year 2004) to implement security measures to modify federal buildings for security enhancements such as installing bollards.

[Table: Federal Buildings Fund Account (023-05-4542), dollars in millions]

United States Army Corps of Engineers
Critical Mission Area: Protecting Critical Infrastructure and Key Assets
Operation and Maintenance Account (202-00-3132)

The Civil Works/Operation and Maintenance account provides funding for U.S. Army Corps of Engineers personnel to prepare for emergencies and secure infrastructure owned and operated by, or on behalf of, the United States Army Corps of Engineers, including administrative buildings, facilities, and labs.

Continuity of Operations funds USACE preparedness planning, including exercises related to USACE emergency relocation as a result of either a natural or a man-made disaster.

Critical Project Security provides funds for physical security upgrades such as fences and cameras; guards hired to control access to critical project assets such as hydropower generators; and protection of administrative facilities and laboratories.
[Table: Civil Works/Operation and Maintenance, General (202-00-3132), dollars in millions]

Department of Agriculture: Agricultural Research Service
Critical Mission Area: Defending against Catastrophic Threats
Salaries and Expenses Account (005-18-1400)

The Salaries and Expenses account provides funding for Agricultural Research Service personnel to conduct research that helps counter agricultural bioterrorism, including research that minimizes the risk of contamination (chemical, biological, or genetic) to agriculture, helps ensure the security of the food supply, and allows Agricultural Research Service personnel to provide scientific knowledge and expertise in agriculture to support a response to a bioterrorism attack.

Research includes research activities conducted by Agricultural Research Service personnel to help protect the nation's animal and plant resources by preventing bioterrorism attacks on crops and animal agriculture or providing rapid responses to thwart such attacks, and developing rapid and accurate techniques to monitor the safety of the food supply.

[Table: Agricultural Research Service, Salaries and Expenses (005-18-1400), dollars in millions]

Department of Agriculture: Animal and Plant Health Inspection Service
Critical Mission Area: Defending against Catastrophic Threats
Salaries and Expenses Account (005-32-1600)

The Salaries and Expenses account provides funds for APHIS staff to safeguard U.S. plant and animal resources against the introduction of foreign diseases and pests before they cause significant economic or environmental damage.

Pest Detection/Animal Health Monitoring supports efforts by APHIS staff to track plant and animal disease agents that could be used in acts of bioterrorism.

Overseas Activities supports efforts by APHIS staff to collect information on and track foreign pests and animal diseases.

[Table: APHIS Salaries and Expenses Account (005-32-1600), dollars in millions]

Department of Homeland Security: Science and Technology
Critical Mission Area: Defending against Catastrophic Threats
Research, Development, Operations, and Acquisitions Account (024-80-0800)

The Research, Development, Operations, and Acquisitions account provides funds for Science and Technology personnel to conduct and stimulate research, development, test, evaluation, and the timely transition of domestic combating terrorism capabilities to federal, state, and local agencies.

Component activities: Biological Countermeasures includes research activities on early biowarning systems and their future implementation, as well as analysis and countermeasures of biological threats.

Radiological and Nuclear Countermeasures includes activities associated with radiological detection research and implementation, analysis, and countermeasures of nuclear and radiological threats, and the development of systems that will help to coordinate consequence management and recovery.

Research and Development Support to Department of Homeland Security Agencies includes coordination and collaboration research and development activities with the other components of the department to assist and enhance their technical capabilities.

Man-Portable Air Defense Systems Countermeasures Special Program includes activities associated with the development of countermeasures to mitigate threats posed by shoulder-fired missiles directed toward commercial aircraft.
Chemical Countermeasures includes a range of activities to address chemical defense, such as studies to prioritize efforts for mitigating threats among chemical threats and targets, and the development of new chemical detection and forensic technologies. Miscellaneous Activities includes a range of activities, such as enhancing explosive detection equipment for aviation security, providing funding to the academic community to support qualified students and faculty conducting research and development, and supporting studies and analysis to be conducted by the Homeland Security Institute.

(dollars in millions) Science and Technology Research, Development, Operations, and Acquisitions Account (024-80-0800)

Department of Homeland Security: State and Local Government Coordination and Preparedness
Critical Mission Area: Emergency Preparedness and Response
State and Local Programs Account (024-10-0560)

The State and Local Programs account provides funding for grants, training, exercises, and technical assistance to enhance the terrorism preparedness of first responders, including police, fire, rescue, and emergency response personnel. Component activities: State Homeland Security Grants provide funding to support grants to states for domestic combating terrorism activities such as training, exercises, support costs, and Citizen Corps. Citizen Corps was created to help coordinate volunteer activities that make communities safer, stronger, and better prepared to respond to any emergency situation. High Threat Urban Area Grants provide funding to support grants to states and localities for terrorism preparedness and infrastructure protection in high-threat urban areas. National Exercise Programs provide funding to support the Top Officials Weapons of Mass Destruction Exercise and other federally administered terrorism exercises. Center for Domestic Preparedness provides funds to train state and local first responders to operate within a live agent environment. Miscellaneous Activities includes funding for a range of activities, such as the storage of emergency equipment located at certain National Guard facilities and emergency preparedness training through the National Domestic Preparedness Consortium (terrorism preparedness training centers).

(dollars in millions) State and Local Programs Account (024-10-0560)

To identify the methods agencies use to determine the portion of their annual appropriation that relates to combating terrorism, we met with OMB officials to review OMB’s efforts to define, categorize, and track homeland security and overseas combating terrorism funding both before and after the enactment of the Homeland Security Act. In addition, we met with seven agencies, including 12 directorates, offices, or bureaus at those agencies that reported receiving funding for combating terrorism activities. The seven agencies we contacted are the Department of Homeland Security, the Department of Defense, the United States Army Corps of Engineers, the Department of Justice, the Department of Energy, the General Services Administration, and the United States Department of Agriculture. To reflect a range of funding levels, we selected these agencies from the 33 agencies and the District of Columbia that reported to OMB receiving funding related to combating terrorism activities.
We selected DHS and DOD because they account for 73 percent of the gross budget authority enacted for homeland security activities for fiscal year 2005, and DOD and DOE because they account for 69 percent of the gross budget authority enacted for overseas combating terrorism activities for fiscal year 2005. We also selected four other agencies—two from a list of those with the most fiscal year 2005 budget dollars related to combating terrorism activities (USDA and DOJ) and two from a list of those with the least enacted budget authority related to combating terrorism activities (GSA and USACE)—to ensure that our review included agencies with a range of combating terrorism funding. Because the selection we used was a nonprobability sample, the information we obtained from these seven agencies is not generalizable to all agencies with similar funding for combating terrorism activities. We used a random number generator to select USDA, DOJ, GSA, and USACE from the two categories we established, that is, agencies that were moderately funded and those that were minimally funded. We excluded DOD, DHS, and DOE when performing the random number generation, since we had already included them in our selection. (We also excluded the Postal Service, because it did not estimate receiving any funding for combating terrorism activities in fiscal year 2006, and 10 other agencies that each received less than 0.1 percent of combating terrorism dollars, to ensure that our analysis included the more significant of the minimally funded agencies.) Within the seven agencies, we selected directorates or offices that received the most funding for combating terrorism activities. These included the Animal and Plant Health Inspection Service and the Agricultural Research Service within USDA; Customs and Border Protection, Immigration and Customs Enforcement, the Information Analysis and Infrastructure Protection Directorate, the Office of State and Local Government Coordination and Preparedness, the Transportation Security Administration, the United States Coast Guard, the United States Secret Service, and the Science and Technology Directorate within DHS; and the Bureau of Alcohol, Tobacco, Firearms and Explosives and the Federal Bureau of Investigation within DOJ. We also reviewed the activities contained in 34 budgetary accounts—separate financial reporting units for which all transactions within the budget are recorded—for these agencies designated as related to combating terrorism. We selected accounts with the most combating terrorism funding at each agency as well as some accounts with smaller amounts to ensure we covered a range of funding. On the basis of our selection, we reviewed at least 70 percent of each agency’s estimated gross budget authority related to combating terrorism activities as reported in the President’s fiscal year 2006 budget request. While we initially selected an additional 23 budgetary accounts to review at DOD, we did not review these accounts because DOD does not enter its combating terrorism activities—specific lines of work—into OMB’s Homeland Security and Overseas Combating Terrorism database. Although OMB computed the portion of budget authority DOD receives to combat terrorism and aligned DOD’s budget authority related to homeland security with the six critical mission areas, OMB staff said that they did not enter information on activities conducted by DOD to combat terrorism. Thus, we did not have any activity-level information to review for DOD for the accounts we selected.
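To illustrate the agency selection approach described above, the following is a minimal Python sketch of a comparable stratified random draw: the largest agencies are included with certainty, excluded agencies are dropped, and two agencies are drawn at random from each remaining group. The agency names, funding shares, and the median-based split into moderately and minimally funded groups are hypothetical placeholders, not the data or criteria we used.

```python
import random

# Hypothetical funding shares (percent of governmentwide combating terrorism
# funding); placeholders only, not figures agencies reported to OMB.
reported_shares = {
    "DHS": 40.0, "DOD": 33.0, "DOE": 6.0,
    "Agency A": 3.0, "Agency B": 2.5, "Agency C": 1.5, "Agency D": 0.8,
    "Agency E": 0.4, "Agency F": 0.3, "Agency G": 0.2,
    "Postal Service": 0.0, "Agency H": 0.05,
}

# Select the largest contributors with certainty.
certainty_selections = {"DHS", "DOD", "DOE"}

# Exclude agencies already selected, agencies estimating no combating
# terrorism funding, and agencies below the 0.1 percent threshold.
eligible = {
    name: share for name, share in reported_shares.items()
    if name not in certainty_selections and share >= 0.1
}

# Split the remaining agencies into moderately and minimally funded groups
# (here, around the median share, purely for illustration), then draw two
# agencies at random from each group.
median_share = sorted(eligible.values())[len(eligible) // 2]
moderate = [name for name, share in eligible.items() if share >= median_share]
minimal = [name for name, share in eligible.items() if share < median_share]

random.seed(0)  # fixed seed so the illustration is reproducible
selection = sorted(certainty_selections) + random.sample(moderate, 2) + random.sample(minimal, 2)
print(selection)
```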
We interviewed agency officials at the seven agencies included in our review and OMB to determine how they identified, categorized, and tracked homeland security and overseas combating terrorism activities and estimated the portion of their budget authority that relates to such activities. To supplement interviews with agency budget officials, we also reviewed relevant budget documentation from each agency and asked agency budget officials to describe procedures the agency had in place to ensure that their funding levels were developed in accordance with OMB’s guidelines and definitions. To identify the status of recommendations from our November 2002 report, we met with OMB and attempted to meet with National Security Council (NSC) officials to document what actions have been taken to implement our recommendations and the reasons they did or did not implement them. Additionally, we reviewed the National Strategy for Combating Terrorism, the National Strategy for Homeland Security, and the National Security Strategy of the United States and conducted a literature search to determine if any updates or supplements had been written that included governmentwide performance goals and measures. We also interviewed agency officials at the seven agencies included in our review and reviewed their performance plans to determine whether these plans included performance goals and measures that reflected their combating terrorism activities. To determine the status of recommendations made in our 2002 report regarding an analysis of duplication of effort related to combating terrorism activities in annual reporting on funding data associated with such activities and to report this information in a timely manner to support congressional budget decisions, we met with OMB to determine what actions have been taken to implement our recommendations and the reasons it did or did not implement them. We also reviewed the Analytical Perspectives accompanying the President’s fiscal years 2005 and 2006 budgets to determine whether or not OMB included funding data information on combating terrorism and whether this information was issued in a timely manner. In addition to pursuing our two main objectives, we also identified funding patterns and trends for overseas combating terrorism activities and for homeland security activities between fiscal year 2002 and what is proposed for fiscal year 2006. We extracted, summarized, and analyzed combating terrorism data from the database used to prepare the Budget of the United States for fiscal years 2002 through 2006. To ensure that the database we received was consistent with published sources, we conducted electronic data testing and determined that the data was sufficiently reliable for our purposes. We also analyzed the effects of the Homeland Security Act of 2002 on reporting requirements for funding data related to combating terrorism activities since our 2002 report by reviewing the act and comparing it with prior reporting requirements under the National Defense Authorization Act for Fiscal Year 1998 as amended by the National Defense Authorization Act for Fiscal Year 1999. To supplement this information, we also reviewed OMB’s 2003 Report to Congress on Combating Terrorism and the Analytical Perspectives of the President’s budget from fiscal years 2005 and 2006. We conducted our work from January 2005 through November 2005 in accordance with generally accepted government auditing standards. 
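As an illustration of the kind of extraction and summarization described above, the following Python sketch totals gross budget authority by fiscal year and critical mission area and computes year-over-year change. The records, field names, and dollar amounts are hypothetical placeholders, not the layout or contents of the database used to prepare the President’s budget.

```python
import pandas as pd

# Hypothetical records patterned loosely on fields one might extract;
# agencies, mission areas, and amounts below are placeholders.
records = pd.DataFrame(
    [
        {"fiscal_year": 2005, "agency": "DHS", "mission_area": "Emergency Preparedness and Response", "gross_ba_millions": 4000},
        {"fiscal_year": 2005, "agency": "DOD", "mission_area": "Protecting Critical Infrastructure and Key Assets", "gross_ba_millions": 9000},
        {"fiscal_year": 2006, "agency": "DHS", "mission_area": "Emergency Preparedness and Response", "gross_ba_millions": 4300},
        {"fiscal_year": 2006, "agency": "DOD", "mission_area": "Protecting Critical Infrastructure and Key Assets", "gross_ba_millions": 9500},
    ]
)

# Total gross budget authority by fiscal year and critical mission area.
by_mission = records.groupby(["fiscal_year", "mission_area"], as_index=False)["gross_ba_millions"].sum()

# Governmentwide totals by fiscal year and year-over-year percentage change.
totals = records.groupby("fiscal_year")["gross_ba_millions"].sum()

print(by_mission)
print(totals.pct_change().round(3))
```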
Officials at five of the seven agencies we contacted—the Department of Homeland Security, the Department of Justice, the Department of Energy, the General Services Administration, and the United States Department of Agriculture—most commonly reported using guidance from OMB, the Homeland Security Act of 2002, and agency-specific guidance to identify their combating terrorism activities. Officials at all five of these agencies said they use OMB Circular No. A-11, which includes definitions for homeland security and overseas combating terrorism activities and instructions for submitting information on funding data related to homeland security and overseas combating terrorism to OMB. In addition, officials from four of the seven agencies in our review reported that they consulted with OMB to determine which of their agency’s activities are related to combating terrorism. Three of the seven agencies we contacted—DHS, USDA, and the United States Army Corps of Engineers—have developed additional guidance, which provides agency-specific details to help determine combating terrorism activities. For example, to supplement OMB Circular No. A-11 guidance, DHS established an internal directive, Planning, Programming, Budgeting and Execution, which helps establish policy, procedures, and responsibilities relative to the planning, programming, budgeting, and execution process at DHS. The objective of the directive is to articulate DHS’s goals, objectives, and priorities while guiding the development of the department’s budget request and establishing parameters and guidelines for implementing and executing the current budget. Furthermore, officials in four of DHS’s components told us that they refer to information included in the Homeland Security Act of 2002 to determine which of their activities relate to homeland security. For example, section 888 of the Homeland Security Act designated 5 of the U.S. Coast Guard’s 11 missions as homeland security and the remaining 6 as non-homeland security. Similarly, Information Analysis and Infrastructure Protection (IAIP) officials at DHS stated that they categorized all of their activities as related to homeland security, since section 201 of the Homeland Security Act of 2002 only authorized IAIP to conduct activities related to homeland security.

This appendix discusses the status of recommendations made in our 2002 report for the Office of Management and Budget to (1) include an analysis of duplication of effort related to combating terrorism activities in its annual reporting of funding data associated with such activities and (2) report this information in a timely manner to support congressional budget decisions. To improve the usefulness of OMB’s Annual Report to Congress on Combating Terrorism, we recommended that OMB include, as required by the National Defense Authorization Act for Fiscal Year 1998, an analysis of areas where overlap in programs could result in unnecessary duplication of effort. We also recommended that OMB publish the report by the required annual March 1 deadline to provide information in time for congressional budget deliberations. Although OMB has not implemented our recommendation that it include an analysis of unnecessary duplication of effort in its annual combating terrorism report, this requirement no longer exists. Our November 2002 report was issued concurrently with the enactment of the Homeland Security Act of 2002.
Section 889 of the Homeland Security Act of 2002 repealed OMB’s prior reporting requirements, including the duplication analysis. OMB staff stated that they did not include an analysis of duplication in any of the agency’s prior reports, primarily because they perform an analysis of homeland security initiatives and related resource needs across all federal agencies as part of the annual budget preparation process and take action to address duplication before the publication of the President’s budget. Therefore, OMB staff said that they believe any issue of duplication is addressed in the President’s budget, specifically through his recommendations related to funding needs. Our recommendation that OMB improve the usefulness of its Annual Report to Congress on Combating Terrorism by publishing the report by the then-required March 1 annual deadline was also superseded by section 889 of the Homeland Security Act. This section requires that OMB report on funding data for homeland security activities in the President’s budget. Because the President must submit his budget by the first Monday in February, the Homeland Security Act of 2002 accelerated the timeline for reporting funding data on homeland security activities. OMB complied with this reporting requirement in fiscal years 2005 and 2006.

In addition to the contact named above, Debra B. Sebastian, Assistant Director; Grace A. Coleman; Christine F. Davis; Gerard DeBie; Denise M. Fantone; Michele Fejfar; Jacob Hauskens; Laura R. Helm; Dawn Locke; Sara R. Margraf; and John W. Mingus made key contributions to this report.

Global War on Terrorism: DOD Should Consider All Funds Requested for the War when Determining Needs and Covering Expenses. GAO-05-767. Washington, D.C.: Sept. 28, 2005.
Global War on Terrorism: DOD Needs to Improve the Reliability of Cost Data and Provide Additional Guidance to Control Costs. GAO-05-882. Washington, D.C.: Sept. 21, 2005.
Managing for Results: Enhancing Agency Use of Performance Information for Management Decision Making. GAO-05-927. Washington, D.C.: Sept. 9, 2005.
A Glossary of Terms Used in the Federal Budget Process. GAO-05-734SP. Washington, D.C.: September 2005.
Protection of Chemical and Water Infrastructure: Federal Requirements, Actions of Selected Facilities, and Remaining Challenges. GAO-05-327. Washington, D.C.: Mar. 28, 2005.
Results-Oriented Government: Improvements to DHS’s Planning Process Would Enhance Usefulness and Accountability. GAO-05-300. Washington, D.C.: Mar. 31, 2005.
Homeland Security: Much Is Being Done to Protect Agriculture from a Terrorist Attack, but Important Challenges Remain. GAO-05-214. Washington, D.C.: Mar. 8, 2005.
Homeland Security: Agency Plans, Implementation, and Challenges Regarding the National Strategy for Homeland Security. GAO-05-33. Washington, D.C.: Jan. 14, 2005.
Homeland Security: Further Actions Needed to Coordinate Federal Agencies’ Facility Protection Efforts and Promote Key Practices. GAO-05-49. Washington, D.C.: Nov. 30, 2004.
Military Operations: Fiscal Year 2004 Costs for the Global War on Terrorism Will Exceed Supplemental, Requiring DOD to Shift Funds from Other Uses. GAO-04-915. Washington, D.C.: July 21, 2004.
Homeland Security: Transformation Strategy Needed to Address Challenges Facing the Federal Protective Service. GAO-04-537. Washington, D.C.: July 14, 2004.
Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results. GAO-04-594T. Washington, D.C.: Mar. 31, 2004.
Coast Guard: Relationship between Resources Used and Results Achieved Needs to Be Clearer. GAO-04-432. Washington, D.C.: Mar. 22, 2004.
Summary Analysis of Federal Commercial Aviation Taxes and Fees. GAO-04-406R. Washington, D.C.: Mar. 12, 2004.
Combating Terrorism: Evaluation of Selected Characteristics in National Strategies Related to Terrorism. GAO-04-408T. Washington, D.C.: Feb. 3, 2004.
Performance Budgeting: Observations on the Use of OMB’s Program Assessment Rating Tool for the Fiscal Year 2004 Budget. GAO-04-174. Washington, D.C.: Jan. 30, 2004.
Homeland Security: Challenges Facing the Department of Homeland Security in Balancing its Border Security and Trade Facilitation Missions. GAO-03-902T. Washington, D.C.: June 16, 2003.
Combating Terrorism: Funding Data Reported to Congress Should Be Improved. GAO-03-170. Washington, D.C.: Nov. 26, 2002.
Homeland Security: Title III of the Homeland Security Act of 2002. GAO-02-927T. Washington, D.C.: July 9, 2002.
National Preparedness: Integration of Federal, State, Local, and Private Sector Efforts Is Critical to an Effective National Strategy for Homeland Security. GAO-02-621T. Washington, D.C.: Apr. 11, 2002.
Combating Terrorism: Enhancing Partnerships through a National Preparedness Strategy. GAO-02-549T. Washington, D.C.: Mar. 28, 2002.
Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: Oct. 31, 2001.
Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: Oct. 12, 2001.
Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: Sept. 20, 2001.
Combating Terrorism: Actions Needed to Improve DOD Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: Sept. 19, 2001.
IRS Modernization: IRS Should Enhance Its Performance Management System. GAO-01-234. Washington, D.C.: Feb. 23, 2001.
Managing for Results: The Statutory Framework for Performance-Based Management and Accountability. GAO/GGD/AIMD-98-52. Washington, D.C.: Jan. 28, 1998.
Federal User Fees: Budgetary Treatment, Status, and Emerging Management Issues. GAO/AIMD-98-11. Washington, D.C.: Dec. 19, 1997.
The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven. GAO/GGD-97-109. Washington, D.C.: June 2, 1997.
Executive Guide: Effectively Implementing the Government Performance and Results Act. GAO/GGD-96-118. Washington, D.C.: June 1996.
The President's annual budget reports on federal funding dedicated to combating terrorism activities. Identification of such funding is inherently difficult because a significant portion of combating terrorism funding is embedded within appropriation accounts that include funding for other activities as well. In 2002, GAO reported on the difficulties that the executive branch faced in reporting funding for combating terrorism to Congress (see GAO-03-170). This report updates the information contained in the 2002 report by providing information on (1) the methods agencies use to determine the portion of their annual appropriations related to combating terrorism, and (2) the status of recommendations from GAO's 2002 report. Seven of 34 agencies that reported receiving funding related to combating terrorism activities to OMB used different methodologies to estimate the portion of their authorized funding that supports such activities. These 7 agencies account for about 90 percent of the total fiscal year 2006 budget request that the 34 agencies estimate relate to combating terrorism. All of these methods involve some level of professional judgment. Agencies stated this process is managed through OMB oversight and supervisory review. OMB staff said they do not review the overseas component of combating terrorism funding data since they are no longer required to report it. As a result, Congress does not receive OMB-reviewed data on the entirety of counterterrorism funding. Three recommendations from GAO's 2002 report have not been implemented. The first recommendation requests that OMB include agencies' obligation data in its annual reporting of funding data on combating terrorism. OMB staff continue to cite the effort required to produce such data but said they might consider reporting obligation information for a targeted set of accounts. Without obligation data, it is difficult for Congress to know (1) how much funding from prior years is still available to potentially reduce new spending requests, (2) whether the rate of spending for a program is slower than anticipated, or (3) what the size of the program is for a particular year and over time. The second recommendation was for OMB to direct relevant departments to develop or enhance combating terrorism performance goals and measures and include such measures in the governmentwide plan. Three of the seven agencies told us that OMB had not directed them to develop performance measures or enhance such measures for combating terrorism activities. However, four of the seven agencies had developed such measures. OMB staff said they are working with agencies to improve performance measurement of government programs related to combating terrorism. The development of such measures would assist Congress in determining whether funding increases have improved performance results. The third recommendation calls for the inclusion of national-level and federal governmentwide combating terrorism performance measures in supplements to existing strategies and their future revisions. There have been no supplements or revisions to the existing strategies that include governmentwide or national-level combating terrorism measures.
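A small worked example may help show why the obligation data discussed above matter for budget deliberations. The dollar amounts in the following Python sketch are hypothetical, and the calculation is deliberately simplified (it ignores rescissions, expired balances, and similar complications); it is meant only to illustrate how unobligated balances and spending rates would inform a review of a new request.

```python
# Hypothetical multiyear account, in millions of dollars; the figures are
# placeholders used only to illustrate why obligation data matter.
budget_authority_fy2005 = 100.0   # new budget authority provided for FY 2005
obligations_fy2005 = 60.0         # amounts actually obligated during FY 2005

# Unobligated balance carried into FY 2006 that could offset a new request.
unobligated_balance = budget_authority_fy2005 - obligations_fy2005

# Obligation rate; a rate well below 1.0 suggests slower-than-anticipated
# program execution.
obligation_rate = obligations_fy2005 / budget_authority_fy2005

requested_fy2006 = 120.0
effective_new_need = requested_fy2006 - unobligated_balance

print(f"Unobligated balance: ${unobligated_balance:.0f} million")
print(f"Obligation rate: {obligation_rate:.0%}")
print(f"FY 2006 request net of carryover: ${effective_new_need:.0f} million")
```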
The Internal Revenue Code (IRC) requires IRS to notify taxpayers of taxes they might owe and about actions it plans to take to collect the taxes. Since the Revenue Act of 1928, IRS has been required to send such notifications to a taxpayer’s last known address. A taxpayer’s last known address was not defined by the Revenue Act nor has it been defined by Department of the Treasury regulations. However, over the years, courts have generally defined a taxpayer’s last known address as the address shown on the taxpayer’s most recently filed tax return, unless the taxpayer notified IRS of an address change. Generally, IRS requires that such notifications be in writing. If IRS’ notices cannot be delivered to the taxpayer as addressed, the Postal Service is supposed to return them to IRS. The Postal Service is also supposed to forward mail to taxpayers’ new addresses if a change of address form has been submitted to it. Also, the Postal Service is to return mail that is refused or unclaimed by taxpayers. However, IRS generally does not consider mail that is refused or unclaimed to be undeliverable mail. IRS’ processing of undeliverable mail involves labor-intensive and manual procedures at the 10 IRS service centers. Mail at each service center is separated into two groups—high and low priority. Mail in the high-priority group includes notices such as the final notices that are sent taxpayers regarding delinquent tax returns and collection of taxes and notices returned with change of address information provided by the Postal Service. For mail in the high-priority group, IRS’ procedures require that efforts be made to find the taxpayers’ current addresses. To do this, high-priority undeliverable mail is to be returned to the IRS service center function that originated it. For example, those notices involving proposed assessments for underpayment of taxes are returned to the Examination or Underreporter functions for processing. Similarly, notices involving taxpayers’ failures to file tax returns and pay delinquent taxes are returned to the Collection function for processing. All low-priority mail is destroyed without further processing. All service center functions generally use internal IRS sources, such as W-2 Forms and other types of information returns, as leads to help them contact taxpayers whose high-priority mail was returned as undeliverable. Service center staff may also use forwarding address information provided on mail returned to IRS by the Postal Service. If a different address is located, IRS is to make an attempt to contact the taxpayer at that address to request verification of an address change. If a taxpayer responds to IRS by confirming a new address, IRS is to change its master file address record—a taxpayer’s last known address at IRS. In situations where verification of a different address is not received from the taxpayer and the law requires that a notice be sent to the taxpayer’s last known address, IRS’ procedures require that the notification be sent to both the master file address and the unverified address. This may be done in situations where IRS is sending notices of intent to levy taxpayers’ liquid assets (e.g., bank accounts or wages) that are in the possession of third parties (e.g., financial institutions and employers) or statutory notices of tax deficiencies to taxpayers. Unresolved cases from service center functions, other than Collection, may ultimately become collection cases when proposed taxes are assessed. 
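The service center routing and address-verification steps described above can be summarized in a short sketch. The following Python is a simplified, illustrative reading of those procedures; the function names, notice types, and data structures are ours and are not IRS systems or terminology.

```python
# Simplified sketch of the handling of returned mail described above;
# names and notice types are illustrative only.

HIGH_PRIORITY = {"final_notice", "statutory_notice", "address_change_return"}

def handle_returned_notice(notice, master_file):
    """Route one piece of undeliverable mail returned by the Postal Service."""
    if notice["type"] not in HIGH_PRIORITY:
        return "destroyed"  # low-priority mail gets no further processing

    # High-priority mail goes back to the originating function (for example,
    # Examination, Underreporter, or Collection), which searches internal
    # sources such as W-2s and other information returns for a different address.
    candidate = search_internal_sources(notice["taxpayer_id"])

    if candidate and confirm_with_taxpayer(notice["taxpayer_id"], candidate):
        master_file[notice["taxpayer_id"]] = candidate  # last known address updated
        return "address_updated"

    # If a different address cannot be verified, legally required notices are
    # sent to both the master file (last known) address and the unverified one.
    addresses = [master_file[notice["taxpayer_id"]]]
    if candidate:
        addresses.append(candidate)
    return f"re-mailed to {len(addresses)} address(es)"

def search_internal_sources(taxpayer_id):
    """Placeholder for research against information returns; returns an address or None."""
    return None

def confirm_with_taxpayer(taxpayer_id, address):
    """Placeholder for the written verification generally required before an address change."""
    return False

if __name__ == "__main__":
    master_file = {"TP-0001": "123 Main St., Anytown"}
    notice = {"taxpayer_id": "TP-0001", "type": "final_notice"}
    print(handle_returned_notice(notice, master_file))
```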
After assessments are made against taxpayers, IRS is to begin sending them collection notices. If unresolved by collection notices, cases over a predetermined dollar threshold are to be sent to the next stage of IRS’ collection process—the Automated Collection System (ACS) call sites, where more detailed address searches are to be done. In addition to address sources used by service centers, ACS uses sources such as state employment commissions and motor vehicle records to find better addresses for taxpayers. ACS may also try to contact taxpayers by telephone to get taxpayers to file delinquent tax returns or pay taxes owed. If the case is not resolved while in ACS and it falls within a certain dollar range, it may be referred to a revenue officer in a district office, where the revenue officer is to attempt to contact taxpayers by conducting field investigations and using local information sources. If a taxpayer does not provide IRS written notification of an address change, IRS continues to send notices to the master file address, which is the same as the last known address, even though previous mail sent to that address has been returned as undeliverable. When mail is undeliverable, taxpayers are generally still accountable for the taxes, interest, and penalties IRS says are owed as long as the mail was sent to the taxpayer’s last known address. Our objectives were to obtain information on (1) the number of undeliverable notices that are returned to IRS, (2) the impact these notices have on taxpayers and IRS, and (3) the causes for nondelivery of the mail. Another objective was to assess IRS’ procedures for processing undeliverable mail. To obtain information on the extent of IRS’ undeliverable notices, we reviewed relevant IRS studies and data that estimated the volume of undeliverable mail. We did not test or independently verify the data provided by IRS. To gather information on the impact undeliverable notices can have on taxpayers and IRS, we interviewed staff from IRS’ National Office and all 10 service centers with responsibilities for processing this mail. We also reviewed studies and projects on undeliverable mail by IRS’ Internal Audit, service centers, and other groups within IRS. Because certain notices are legally required to be sent to taxpayers, we discussed the impact that undeliverable mail can have on taxpayers and IRS with representatives of IRS’ Chief Counsel’s Office. To acquire information on the causes of IRS’ undeliverable mail, we reviewed service center policies and procedures for processing undeliverable mail and for accepting changes of address from taxpayers. To help in understanding the possible effects of these policies and procedures, we interviewed IRS officials in both the National Office and service centers with responsibilities for processing undeliverable mail. Also, to obtain information on reasons why mail may be undelivered in general, we contacted Postal Service officials at the National Address Information Center in Memphis, Tennessee. To assess IRS’ procedures for processing undeliverable mail, we reviewed prior IRS studies focusing on ways to better process this mail. We contacted staff at the Cincinnati and Fresno Service Centers to determine (1) how specialized locator services units at these locations were used to locate taxpayers whose mail was undelivered and (2) how their procedures may have varied from the other service centers. 
Because IRS recently implemented the Undeliverable Mail System (UMS) in 9 of its 10 service centers, we discussed how it affects undeliverable mail with National Office Collection officials. We also interviewed Collection staff in the Southeast Region, where UMS was piloted. We limited our work to undeliverable mail at IRS’ service centers because the majority of IRS’ mail originates at the centers, and undelivered mail is returned to its originating location. Because other companies have problems with undeliverable mail, we contacted two large credit card companies to determine whether their procedures offered methods that IRS could possibly use in handling its undeliverable mail. The two companies were judgmentally selected and are similar to IRS in that they send notices and bills to their customers regarding account adjustments and delinquent payments. We did our work between August 1992 and March 1994 in accordance with generally accepted government auditing standards. On October 26, 1994, we met with Collection’s Acting Executive Director, (Operations); the Chief of Document Handling, Taxpayer Services; and other IRS National Office officials responsible for overseeing undeliverable mail to obtain their comments on a draft of this report. Their comments are summarized and evaluated on pages 16 and 17 and incorporated in the report where appropriate. American society is very mobile. According to the U.S. Census Bureau, between 15 and 20 percent of Americans move annually. At that rate, as many as 49 million people may move yearly. Keeping up with address changes for such a mobile society presents IRS with a formidable task. Current addresses are critical to IRS because it mails hundreds of millions of pieces of correspondence to taxpayers yearly. IRS does not have precise information on the volume of mail it sends taxpayers annually that is returned undeliverable. However, from time to time various IRS study groups have made estimates of the undeliverable mail volume. Estimates made by these groups indicated that undeliverable mail rose from 6.5 million pieces in 1986 to as much as 15 million pieces in 1992. Because IRS often sends more than one notice to a taxpayer, the number of taxpayers affected by undeliverable mail was probably less than these estimates, but the exact number is unknown. The volume of IRS’ undeliverable mail may continue to rise if its accounts receivable inventory continues to grow as it has done for years. This is because the volume of mail IRS sends to taxpayers pertaining to their tax cases is directly related to the number of delinquent accounts in the receivables inventory. Virtually all delinquent accounts are sent notices, and some of this mail may be returned to IRS as undeliverable. According to IRS, most undeliverable mail has three principal causes. Taxpayers move and leave no forwarding addresses with either the Postal Service or IRS. When this occurs, the taxpayers may ultimately bear responsibility for the fact that their mail was delayed or undelivered, but IRS still has to process the undeliverable mail. The Postal Service may not deliver or forward mail, and mail is returned to IRS. This happens even though Postal Service policy states that all First-Class Mail, which most IRS notices are, is to be forwarded for up to 1 year when a valid forwarding address is on file. IRS incorrectly records taxpayers’ addresses in its databases. 
Of the three principal causes, IRS can completely control only the one dealing with how its staff records taxpayers’ addresses in its databases. As discussed later in this report, IRS could do more to encourage taxpayers to provide it with address changes even though there is no statute that requires taxpayers to do so. To some extent, IRS is depending on TSM to eliminate some of the errors associated with transcribing taxpayers’ addresses in the future. The Taxpayer Ombudsman has also recommended adopting procedures to help ensure that taxpayers’ addresses are accurately updated in IRS’ databases. Such procedures could help address the problems IRS has disclosed in this area. According to a 1991 IRS Internal Audit report, IRS incorrectly input 450,000 new addresses to the master file when tax returns were filed in 1988. This resulted in approximately 300,000 undeliverable notices, which included balance due notices totaling $49 million. It is very important that IRS’ mail reach the intended parties promptly. When not paid, taxes grow even higher as interest and penalties may be added to the tax liability. In some instances, IRS may be unsuccessful in contacting taxpayers at their home addresses but may have information on where they work or bank. With this information, IRS may eventually levy their bank accounts or garnish their wages to satisfy the debt. Thus, the consequences for taxpayers can be quite severe when IRS has an incorrect address and mail is returned to IRS as undeliverable. The impact that undeliverable mail has on taxpayers varies depending on the reason IRS is attempting to contact them. If IRS is questioning the amount of taxes owed because information it has shows that taxpayers might have underreported their tax liabilities, IRS sends notices containing information about this proposed deficiency and instructions on how to resolve it. If the notices are returned to IRS because they were undeliverable, the taxpayers would be unaware that IRS is attempting to contact them. Ultimately, this lack of information could adversely affect their opportunity to appeal the proposed assessments. Even though IRS is unable to contact taxpayers and obtain current addresses, a proposed tax deficiency would ultimately be legally assessed against them. Once IRS assesses the tax, the taxpayer may be required to pay the tax in order to appeal the case. If IRS is attempting to collect delinquent taxes already assessed against taxpayers, it sends collection notices to them. This occurs when (1) taxpayers file balance due tax returns and do not pay, and (2) IRS determines that taxpayers owe additional taxes on the basis of audits and other means. If the notices are returned to IRS as undeliverable and IRS is unable to contact the taxpayer and obtain a written verification of a different address, IRS’ computers will automatically send future notices to the same address, even though prior notices were returned as undeliverable. Taxpayers may face unanticipated enforcement actions by IRS, such as seizures of assets and garnishment of wages, without first having had an opportunity to negotiate payment arrangements or show proof that IRS’ records may be incorrect. Such actions may occur when IRS’ notices are returned as undeliverable even if the taxpayers and IRS had no contact. This is because IRS may have information on where the taxpayer banks or works. 
If it has this information, IRS’ procedures state that it may seize funds in the bank accounts and garnish wages to cover the amount of the taxes. For some taxpayers who did not receive IRS notices, enforcement actions such as these may be their first indication that IRS has been trying to contact them regarding their tax situations. In circumstances such as this, taxpayers’ situations would probably worsen since IRS may be required by law to assess penalties and interest. In certain instances, IRS may even seize assets, such as real and personal property, to recover the total amount owed if the case is considered to be in jeopardy. A jeopardy case is one in which IRS feels it must take immediate distraint action to protect the government’s interest versus risking further losses. However, IRS officials said that such cases are infrequent, and IRS’ procedures require that field collection staff make additional attempts to contact the taxpayers prior to seizing assets. When mail is undeliverable, taxpayers may incur added expenses and time to resolve their tax situations and this could increase their burden and frustration when dealing with IRS as well as lower their general perceptions of IRS. When IRS sends mail to a taxpayer’s last known address, it fulfills its legal obligation of notifying the taxpayer even if the taxpayer does not receive it. IRS does not have the burden of proving that the taxpayer actually received the mail. In addition to the inconvenience and burden that undeliverable mail can cause taxpayers, IRS is also adversely affected when its mail does not reach taxpayers. The millions of pieces of mail returned to IRS as undeliverable must be processed, adding to IRS’ service center costs. In addition to sorting and routing the mail back to the originating service center functions, attempts are to be made to contact taxpayers to obtain written notification of their address changes. When changes to the taxpayers’ addresses cannot be verified, IRS must send legally required notices to taxpayers at the addresses on their most recently filed tax returns. By mailing statutory notices to taxpayers’ last known addresses, IRS fulfills its legal requirements for notifying taxpayers about their tax situations. However, many taxpayers may not receive IRS’ notices because the addresses on their last tax returns are not their current addresses. IRS’ efforts to collect delinquent taxes may be hampered if the taxpayers do not receive the notices because the addresses on the notices were no longer current. IRS recognizes this problem and generally attempts to notify taxpayers of their tax obligations by sending additional notices to addresses it believes may be more current than the addresses in its records. IRS’ accounts receivable inventory is also affected to the extent that collection bills are not delivered to taxpayers for timely collection. Not only will the accounts receivable inventory show a higher balance, the government is denied access to funds it is owed. According to IRS staff, as delinquent accounts get older, they are generally more difficult to collect; thus, any delay in collecting accounts may pose some risk regarding the ultimate collectibility of older accounts. If the collection notices reflect taxpayer or IRS errors, delays in resolving invalid cases will only result in IRS wasting time and resources pursuing unproductive cases. 
Although IRS could not provide us precise estimates of the total costs of undeliverable mail to IRS—either in added costs of operations or in lost revenues—a few studies have attempted to measure component parts of the overall cost. For example: A 1991 report by IRS Internal Audit showed that 70 percent of the estimated 9 million pieces of undeliverable mail in 1988 were notices to taxpayers who potentially owed $3.4 billion in delinquent taxes or had not filed tax returns. According to that report, IRS spent about $13.9 million to print, mail, and process this mail. In addition, it said these undeliverable notices cost IRS millions of dollars in lost revenue and increased collection costs. In a December 1992 briefing on undeliverable mail for IRS’ Chief Operations Officer, IRS estimated that for fiscal year 1992, it issued 340,000 undeliverable statutory notices valued at $1.7 billion. IRS’ Chief Counsel estimated that undeliverable statutory notices with last known address problems cause losses of $5.5 million annually. A 1992 National Office quality improvement project reported that IRS had at least 1.2 million invalid business addresses in IRS computer files, resulting in about 2.25 million pieces of undeliverable mail annually. According to the report, IRS incurs increased operating costs of at least $3.6 million annually, a minimum revenue loss of $100 million, and decreased taxpayer compliance. This report also noted that the problem of undeliverable mail diminishes taxpayers’ image of IRS because of the undue burdens imposed on them. IRS has recognized the need to reduce the amount of its undeliverable mail and has several studies and projects focusing on ways to deal with it. One project involved the implementation of UMS in 9 of the 10 service centers by early 1994. UMS is an automated system designed to make the Collection function’s search for taxpayers’ addresses easier. In searching for address leads, UMS uses information from IRS’ own databases, such as information returns like W-2 Forms, and external data from credit bureaus. If a different address is located, UMS sends a computer-generated letter to the taxpayer requesting verification of the address. If the taxpayer confirms a different address, IRS changes the taxpayer’s master file address. However, when a different address is not found or confirmed by the taxpayer through UMS, the results of UMS’ research are to be electronically transmitted to ACS. According to Collection officials, UMS should reduce processing time and lower operations costs for handling undeliverable mail. As currently used, UMS researches undeliverable notices regarding only delinquent tax returns and payments for the Collection function. At one time, IRS’ future plans called for adding additional address sources to the UMS database and allowing all functions that process undeliverable mail to use it. However, as we were completing our work, we learned that UMS will become a part of the new Inventory Delivery System. The purpose of the Inventory Delivery System is to further automate the service center Collection processes. The Inventory Delivery System’s enhancements include direct interface with IRS’ computer files to update taxpayers’ addresses in IRS’ records and an increase in the number of sources used for locating addresses. In January 1995, IRS will implement a program designed to speed the resolution of tax cases. 
The program is referred to as an early intervention program because IRS staff are to contact taxpayers by telephone at the same time it sends out collection notices. To implement this program, IRS’ Collection function in the 10 service centers is to start processing undeliverable mail after the first occurrence of this mail as the other service center functions do. Under Collection’s new procedures, taxpayers will be sent only two service center notices instead of the four notices currently being sent. By sending fewer notices, this program should have a potential to reduce the amount of Collection’s undeliverable mail and lower the costs associated with sending notices to undeliverable addresses. One of IRS’ studies on undeliverable mail was a multifunctional study on last known addresses issues sponsored by the IRS Taxpayer Ombudsman. As a starting point, this study looked at the recommendations made in other studies and projects. The study was critical of IRS for not seriously considering prior recommendations and stated that few changes aimed at better handling undeliverable mail had occurred because of a feeling within IRS that the future operational improvements under TSM will resolve the undeliverable mail problems. Recommendations from prior IRS studies and projects, as well as new recommendations, have been summarized in the report by the Taxpayer Ombudsman. In total, 25 recommendations aimed at helping IRS better deal with undeliverable mail have been made. The recommendations included (1) developing standardized procedures for processing undeliverable mail throughout IRS and making the Collection function’s automated systems available to other service center functions; (2) testing alternative methods for taxpayers to provide address changes to IRS, such as the use of a tear-off return stub on notices; and (3) adopting procedures to help ensure that taxpayers’ addresses would be accurately updated in its databases. When we were completing this report, we learned that IRS had approved the report on the Taxpayer Ombudsman’s project in August 1994, and responsible offices were developing action plans to implement its recommendations. TSM, a long-term project to modernize computer operations and enhance customer service, is to shift IRS from a paper-based environment to an electronic one. IRS anticipates that this shift will potentially reduce the volume of undeliverable mail because more taxpayers are expected to file tax returns electronically, which should result in more accurate processing. This should eliminate errors caused by manually keying information, and therefore reduce IRS’ need to contact taxpayers to correct mistakes. Also, under TSM, IRS plans to make many of its contacts with taxpayers by telephone, and this should eliminate some of the need to correspond by mail. In addition, TSM’s new Document Processing System would allow IRS to enter information into its databases by optically scanning paper documents sent to IRS by taxpayers and eliminate the need for IRS’ staff to manually transcribe the data as is currently being done. This will also result in faster and more accurate processing of tax information. Currently, through tax packages and publications, IRS informs taxpayers that they should notify it in writing of address changes. IRS’ instructions tell the taxpayers to use the preaddressed labels supplied on their tax packages. 
If the addresses on these labels are incorrect, taxpayers are instructed to simply cross out the old addresses and write in their new addresses. If a taxpayer’s address changes after the current year’s tax return was filed, the instructions advise taxpayers to notify their service centers or district offices in writing. Taxpayers are told that they can use an IRS change of address form to do this, and they should also notify the Postal Service of the change if they anticipate receiving a tax refund. If a taxpayer voluntarily calls IRS to report an address change, IRS’ procedures require that the staff accept the new address for the sole purpose of mailing the taxpayer a change of address form. According to IRS officials, it will not change the address in its records until the taxpayer returns the change of address form. However, IRS’ procedures allow address changes based on oral statements taken over the telephone when an IRS employee contacts a taxpayer in connection with an unresolved tax case and when a taxpayer calls IRS to inquire about an undelivered income tax refund check. Even though IRS’ procedures require written notification from taxpayers to change their addresses, except for the two circumstances previously mentioned, such notification is not fail-safe because IRS generally accepts it without verification. Thus, the acceptance of address changes over the telephone should pose no greater risk to IRS than accepting written notifications, since both written notifications and orally supplied changes of address can be fraudulently supplied to IRS. IRS’ TSM plans call for systems that would allow taxpayers to change their addresses simply by using the keypad on their telephones. Since IRS currently accepts changes of address by telephone when its staff contacts taxpayers regarding unresolved tax cases and when taxpayers contact IRS about undeliverable income tax refund checks, it might consider accepting address information over the telephone now, especially when taxpayers call IRS to provide it. The general acceptance of changes of address by telephone should help IRS promote its one-stop concept of resolving taxpayers’ concerns with minimum contact and effort. We contacted two large private sector companies in the collection business for information on how they handle address changes. Like IRS, these companies need accurate addresses to contact customers. Officials from these companies told us that they ordinarily do not require written verifications from customers before they make address changes because their experiences have shown that information obtained by telephone was usually reliable. We believe that increasing taxpayers’ awareness of the importance of providing address changes to IRS is fundamental to developing a strategy to minimize the volume of undeliverable mail. If taxpayers are unaware of the importance, planned TSM enhancements will not be much help in resolving the problem of undeliverable mail. We found that IRS publications such as tax packages supplied annually to taxpayers did not discuss the importance of keeping addresses current with IRS. Even though the tax packages requested taxpayers to use the change of address form—Form 8822—to notify IRS of address changes, the form is not included in the packages. Taxpayers must take additional steps to obtain the change of address form, such as (1) visiting an IRS office, (2) calling a toll-free number, or (3) using a special order form. 
In contrast, many businesses make it much more convenient for customers to change their addresses. They often provide customers a conspicuous means for changing their addresses, such as on return envelopes, order forms, or change of address forms accompanying each mailing. To raise taxpayers’ awareness of IRS’ need for current addresses, IRS could explore ways to make (1) taxpayers more aware of the importance of keeping their addresses current and (2) the change of address form more conveniently available. We believe that such actions could help IRS reduce the volume of undeliverable mail and improve customer service. The different IRS service center functions that process undeliverable mail perform similar address searches. They work independently, however, and generally do not coordinate or share results despite the fact that taxpayers’ cases may ultimately be referred to and worked on by the other functions. As a result, each function may perform the same research on the same taxpayer. This duplication of effort increases IRS’ costs and the time associated with obtaining different addresses for taxpayers whose mail was undeliverable. In the Examination function, for example, staff manually maintain a file for each taxpayer’s case that includes information on the results of efforts to locate and contact the taxpayer. While these files are available to all Examination function staff, they are not shared with other service center functions that may be assigned the case at some time in the future. The Inventory Delivery System, which Collection plans to implement, should give it the means to automatically maintain results of prior address searches. However, the results of address searches would still not be shared among service center functions. We identified attempts by two service centers to consolidate efforts to locate taxpayers’ addresses in order to reduce costs and improve effectiveness. In one, the Cincinnati Service Center established a unit to serve service center functions handling undeliverable mail. The purpose of the unit is to (1) search for different addresses for selected notices and (2) identify the best available address and provide it to the function responsible for the mail. However, this unit did not process all of the service center’s undeliverable mail. The other consolidation effort at the Fresno Service Center has been disbanded because of IRS’ Chief Counsel’s objection to it changing addresses without taxpayers’ confirmation. Although IRS did not collect productivity data on these consolidation efforts, staff who were involved in them told us that the consolidation eliminated some of the duplicate research at the service centers. Collection’s experience with UMS and the centralization projects at the Fresno and Cincinnati Service Centers suggested that a centralized means of processing undeliverable mail would be more efficient. Further, as a result of IRS’ Taxpayer Ombudsman’s study, IRS plans to establish centralized units in an effort to standardize their procedures servicewide. By using centralized units to process undeliverable mail, IRS could expect earlier resolution of address problems, reduced rework and duplicative address searches, and lower operations costs. On that basis, IRS could proceed to centralize the process and, in the future, gather the data necessary to continuously improve the centralized process. 
Although it is unlikely that the problem of undeliverable mail can be totally eliminated, IRS needs to give undeliverable mail more attention because it adversely affects operations and can cause undue burden on taxpayers. IRS is aware of the need to better manage its undeliverable mail and is considering ways to better deal with this mail. Although previous efforts to deal with this mail were primarily limited to IRS’ service center Collection functions, new efforts are expected to have Service-wide consequences because IRS agreed in August 1994 to implement the recommendations of the Taxpayer Ombudsman’s study. The implementation of these recommendations should have a significant impact on reducing IRS’ undeliverable mail. One recommendation from this study calls for IRS to standardize its procedures for processing undeliverable mail throughout the service centers and expand Collection’s initiatives, such as UMS, for Service-wide use. The implementation of this recommendation could help IRS implement its planned centralized unit in each service center to process all undeliverable mail starting with the initial occurrence of returned mail. This would further ensure that the duplication of effort that currently exists across service centers would be eliminated and that IRS would resolve the problem of its undeliverable mail sooner. Over time, IRS expects TSM to bring further operational improvements and eliminate some of the paper correspondence between IRS and taxpayers. Ideally, TSM will foster IRS’ goal of accepting taxpayer address changes by telephone and reduce the errors associated with having staff manually update taxpayers’ addresses. However, to minimize the amount of undeliverable mail, IRS should also explore ways to make taxpayers more aware of the importance of keeping it informed of address changes. It should allow taxpayers to make address changes with minimum effort by such means as telephoning IRS or using the change of address form, which should be conveniently available. To help IRS better manage its undeliverable mail we recommend that the Commissioner of Internal Revenue take the following actions: Better encourage taxpayers to make address changes by (1) accepting changes of address over the telephone; (2) making Form 8822, Change of Address, more conveniently available; and (3) emphasizing to taxpayers the importance of keeping their addresses current with IRS. Proceed with plans to establish a centralized unit within each service center to process all service center undeliverable mail starting with the initial occurrence of returned mail. Responsible IRS officials, including the Acting Executive Director, Collection (Operations), and the Chief of Document Handling Services, Taxpayer Services, reviewed a draft of this report and provided oral comments in a meeting on October 26, 1994. The officials agreed that additional steps should be taken to process undeliverable mail more efficiently and reduce the volume of such mail. In this regard, they said that IRS senior management has approved the recommendations in the Taxpayer Ombudsman’s study and action plans are being prepared to implement them. According to the IRS officials, one action plan is to deal with standardizing procedures for address searches and would involve centralizing the processing of undeliverable mail in the service centers as we are recommending. The IRS officials explained that several measures that affect how undeliverable mail is processed are currently being tested or planned. 
They believed that these measures generally address the issues discussed in our recommendations. For example, to encourage taxpayers to make address changes, they said Taxpayer Services is including a change of address form in notices sent to taxpayers on a test basis. The Chief of Document Handling Services, Taxpayer Services, told us that IRS has several planned studies that could potentially affect IRS’ undeliverable mail. These include (1) IRS’ participation in a project with the Postal Service and other federal agencies in which the Postal Service will collect change of address information and provide it to the participating agencies, and (2) IRS’ use of the Postal Service’s National Change of Address database to contact taxpayers in order to verify addresses. To encourage taxpayers to provide IRS address changes, we are recommending that IRS accept address changes by telephone. Although Collection officials agreed with us, they said that IRS’ Chief Counsel must first approve this change. In a draft of this report that IRS reviewed, we proposed that IRS stop sending service center collection notices to known undeliverable addresses while research for a current address is ongoing, except for notices that are legally required to be sent to taxpayers. IRS’ Collection officials disagreed with this proposal. They said that the costs of sending notices are negligible and that because some taxpayers may be located by subsequent mailings to the same addresses, IRS should not begin searches until these mailings are returned undeliverable. Collection officials also said that they would incur increased staff costs if they were to eliminate some notices and accelerate processing of undeliverable mail to the next stage in its collection process. The IRS officials said that costs would be higher in the subsequent collection stages because higher graded staff are used to work unresolved collection cases. We have since dropped our proposal because Collection officials later told us that beginning in January 1995, they will reduce the number of service center notices sent to taxpayers, which will result in earlier processing of undeliverable mail. When this change takes effect, Collection will be sending taxpayers only two service center notices. The effect of IRS’ elimination of two of the four service center notices basically carries out what we had previously proposed. However, we question whether IRS would incur increased staff costs by accelerating a case to the next stage in the collection process. We raise this question because all unresolved cases would eventually move to the next stage of the collection process, and delaying collections and resolution of such cases may actually cost more. As arranged with the Subcommittee, we are sending copies of this report to the Commissioner of Internal Revenue and other interested parties. We will make copies available to others upon request. Major contributors to this report are listed in appendix I. Please contact me on (202) 512-9044 if you or your staff have any questions.
Terry Tillotson, Evaluator-in-Charge
Marvin McGill, Evaluator
Kathy Squires, Evaluator
In response to a congressional request, GAO reviewed the Internal Revenue Service's (IRS) processes for handling undeliverable mail, focusing on the: (1) amount of and reasons for undeliverable mail; and (2) impact of undeliverable mail on taxpayers and IRS. GAO found that: (1) IRS estimates that it had about 6.5 million pieces of undeliverable mail in 1986 and about 15 million pieces in 1992; (2) undeliverable mail is principally caused by taxpayers failing to leave forwarding addresses, the U.S. Postal Service not delivering or forwarding mail, and IRS incorrectly recording taxpayers' addresses in its files; (3) taxpayer interest and penalties can substantially increase because of undeliverable mail, which eventually can lead to IRS attachment of taxpayers' liquid assets; (4) IRS loses millions of dollars in revenue annually and incurs increased operating costs because of undeliverable mail; (5) the IRS Collection Division plans to start processing undeliverable mail after the first occurrence and send only two instead of four service center notices to decrease collection costs; (6) IRS has implemented only a few recommendations to decrease its undeliverable mail because it expects its Tax Systems Modernization initiative to resolve its undeliverable mail problem; (7) in response to a recent internal IRS report, senior IRS management requested responsible IRS offices to develop action plans to decrease the amount of undeliverable mail; (8) IRS needs to increase taxpayers' awareness of the need to provide address changes to minimize the volume of undeliverable mail; and (9) more efficient processing of undeliverable mail could result if IRS consolidates mail processing functions into one centralized unit.
In the late 1980s, the judiciary recognized that it was facing space shortages, security shortfalls, and operational inefficiencies at courthouse facilities around the country. To address this problem, the Judicial Conference of the United States directed each of the 94 judicial districts, with assistance from AOC, to develop long-range space plans to determine where new and additional space was needed. To date, AOC has provided each judicial district with planning guidance in developing 5-, 10-, and 30-year space shortage projections. As a result of this process, the judiciary identified approximately 200 locations that would be out of space within the next 10 years and has estimated that funding for new courthouses at these locations would cost approximately $10 billion. In addition to identifying space shortages, these planning efforts also identified security concerns and operational inefficiencies at many of these facilities nationwide. The judiciary makes requests for new courthouse projects to GSA, the federal government’s central agency for real property operations. GSA requests funding for courthouses as part of the president’s annual budget request to Congress. Under the Public Buildings Act of 1959, as amended, GSA is required to submit to the Senate Committee on Environment and Public Works and the House Committee on Transportation and Infrastructure detailed project descriptions, called prospectuses, that contain project cost estimates and justifications for projects that exceed a prospectus threshold. Under the act, GSA can adjust the prospectus threshold upward or downward based on changes in construction costs during the preceding calendar year—the threshold is $1.74 million for fiscal year 1997. Once projects are funded by Congress, GSA is to contract with private sector firms for design and construction work. In the early 1990s, Congress, we, and the private sector began calling on the judiciary and GSA to prioritize projects for this major initiative. In 1990, we began reporting that Congress needed better information for decisionmaking, including a prioritization of capital investment needs. In 1994, the Conference Committee on GSA’s 1995 appropriations act directed that the courthouse construction requirements established by GSA and the Office of Management and Budget (OMB) include a prioritization of projects by AOC. A year earlier, the Independent Courts Building Program Panel—which was formed in 1993 by GSA and AOC and comprised leading architects, engineers, and construction professionals—recommended that courthouse projects be prioritized into yearly 5-year plans. More recently, in November 1995, we testified that the process for funding new courthouse projects lacked—and could benefit from—a comprehensive capital investment plan that articulates a rationale or justification for projects and presents projects in a long-term strategic context. Furthermore, during the last 6 years, we have reported that Congress lacks quality information to assess the merits of individual projects, understand the rationale for project priorities, and justify funding decisions. In March 1996, the judiciary—through the Judicial Conference of the United States—issued a 5-year plan for courthouse construction for fiscal years 1997 through 2001. 
The plan, which is intended to communicate the judiciary’s urgent housing needs to Congress and GSA, identifies 45 projects for funding based on information from Congress and GSA that $500 million could be used as a planning target in estimating funds that will be available for courthouse construction each year. Appendix I shows the projects in the plan by fiscal year. To determine project urgency, the judiciary developed a methodology for assigning urgency scores to projects. The criteria and related weights applied in assessing urgency include the length of time space shortages have existed as defined by the year a location was or will be out-of-space (30 percent); security concern ratings of 1 through 4 (30 percent), where a 1 indicates the lowest level of security concern; operational inefficiency ratings of 1 through 5 (25 percent), where a 1 indicates minimal operational inefficiencies; and the number of judges affected as defined by the number of judges without courtrooms (15 percent). Under the methodology, each project receives an urgency score on a scale of 100, with a score of 100 indicating the highest level or degree of urgency. Appendix II contains a more detailed description of the judiciary’s urgency score methodology. In addition to the plan, AOC provided us with related material, including a description of the methodology for assessing urgency, an overview of the process used to develop the plan, and urgency scores for the projects in the plan. AOC indicated that it provided the same material to key congressional committees. Our objectives were to determine whether the judiciary’s 5-year plan (1) reflects the judiciary’s most urgent courthouse construction needs and (2) provides information needed by decisionmakers to evaluate the relative merits of project proposals. To meet the first objective, we focused on determining whether the plan contains all the most urgently needed projects and if priorities in the plan correlate with the judiciary’s own project urgency scores. In making this assessment, we relied primarily on the urgency scores the judiciary developed for projects in the plan, its methodology for assessing project urgency, and AOC data related to urgency for projects that were not included in the plan. The judiciary’s methodology for assessing urgency appears to include factors that would be important in gauging the relative urgency of competing projects, and the process used to assign scores for each of the four criteria, though subjective, seems straightforward. However, we did not assess the validity of the methodology or the reliability of the urgency scores developed for each location. To determine whether the plan contains the most urgently needed projects, we developed minimum urgency scores for 80 locations that were not in the plan but, according to AOC, also need new courthouse projects. AOC provided us with security concern and out-of-space year data for these projects. As previously mentioned, security concern and out-of-space year data each have weights of 30 percent that are applied in developing the urgency score. Operational inefficiencies and the number of judges affected—the two other components of the urgency score—have weights of 25 percent and 15 percent of the score, respectively. Therefore, security concern and out-of-space year data equate to 60 percent of the total urgency scores these projects would receive. 
To calculate minimum scores for these locations, we used the security concern and out-of-space year data and applied the judiciary’s urgency score methodology to these 80 other locations. Because data for operational inefficiencies were not available for these locations, we assigned minimum ratings of “1” to each of the 80 locations. AOC officials told us that, according to the scoring methodology, 1 was the lowest score locations could receive for this criterion. For the fourth factor, number of judges affected, AOC did not have data, and thus we used “0” for this factor in our calculation. Therefore, our minimum scores do not include an assessment of operational conditions at these locations or a calculation for the number of judges affected. If actual scores for these two factors were included, urgency scores for these projects could either increase or remain the same—the scores could not decrease. We then compared these minimum scores to the complete scores assigned to the 45 projects in the 5-year plan and discussed the results with AOC officials. Appendix II contains a more detailed description of the urgency score methodology and our calculation of minimum scores for projects not included in the plan. To determine whether priorities in the plan correlate with the project urgency scores the judiciary developed, we compared the urgency scores for the 45 projects in the plan with the yearly sequence of funding priorities specified in the plan for fiscal years 1997 through 2001. We specifically focused on comparing projects that were at similar stages, such as site and design, that are scheduled for funding in different years according to the plan. We also discussed project priorities with AOC and GSA officials to identify other factors that may have been considered in prioritizing projects. To meet the second objective, we compared the information in the plan and related material to the types of information decisionmakers need to effectively assess project proposals and funding requests. Our past work specifically identified the types of information decisionmakers need when making decisions on courthouse construction funding. It includes a capital investment plan that prioritizes individual projects and puts them in some long-term strategic context and provides a rationale or justification for priorities set among competing projects. In making our comparison, we also considered the results of our work on the first objective because knowing whether the plan reflects the judiciary’s most urgently needed projects has ramifications for the amount of information decisionmakers would need to understand the basis for the plan’s priorities. Also, as mentioned before, the judiciary’s intent in developing the plan was to communicate its urgent courthouse construction needs. In addition, we reviewed congressional reports and testimonies pertaining to capital investment planning. We also considered a 1993 report by a GSA/judiciary-sponsored panel of private sector experts that outlined ways to improve the courthouse construction initiative. We did our work between March and November 1996 in accordance with generally accepted government auditing standards. We received written comments on a draft of this report from AOC, which we have included in appendix III. GSA provided oral comments on a draft of this report. We summarize and evaluate AOC’s and GSA’s comments on pages 14 and 15. 
Our analysis showed that the 5-year plan does contain projects with high urgency scores, including 13 projects with scores above 65. However, it also contains others that have scores lower than projects that were not included in the plan. Using the judiciary’s methodology and available data on security and space concerns and assuming the lowest possible scores for operational conditions and number of judges affected, we calculated minimum urgency scores for 80 projects that were not in the plan. Of these, we identified 30 projects that had minimum urgency scores higher than the complete scores for some of the projects in the plan. In fact, according to AOC, 1 of the 30 projects not in the plan would have a complete urgency score that is higher than those for 40 of the 45 projects in the plan. In developing the plan, the judiciary did not develop urgency scores for all competing projects. Instead, the judiciary focused on those projects that were in the GSA pipeline or were previously identified during earlier internal efforts to develop project priorities. Using this approach and the assumption that $500 million would be available for projects in each of the 5 years, the judiciary developed scores for 45 projects. AOC officials told us that they were unable to develop scores for other projects not included in the plan in time for the plan’s March 1996 issuance. They added that, based on their general knowledge of conditions at the other locations, they believed these other projects would not have urgency scores as high as the projects in the plan, except for a few cases. However, AOC did not provide any analysis to support these assertions. It said that it intends to develop scores for all the projects for possible inclusion in future versions of the plan. Urgency scores and related data were not available for projects not included in the plan when we began our review. However, AOC subsequently provided us with out-of-space year and security concern data it had developed for 80 projects identified for funding consideration in the fiscal years 2002 through 2006 timeframe. These two factors have a total weight of 60 percent that is applied in developing the urgency score. We used these data to apply the judiciary’s methodology for assigning project urgency scores to identify minimum scores for these 80 projects. Data for operational inefficiencies at these locations were not available; therefore, we assigned a minimum rating of 1 to each of the 80 locations. According to the scoring methodology, 1 is the score locations receive when operational inefficiencies are minimal. For the fourth factor, number of judges affected, AOC did not have data, and we used 0 in our calculation, which is the minimum score a location can receive for this factor. As shown in table 1, using these data and the judiciary’s methodology, we calculated that 30 locations have a minimum urgency score of 41.3 or higher, which is higher than the complete scores for 3 projects in the plan—San Diego, CA; San Jose, CA; and Cheyenne, WY. In addition to having a minimum score higher than the San Diego, San Jose, and Cheyenne projects, Los Angeles has a minimum score higher than another 16 locations that were included in the plan. Jackson has a minimum score higher than a total of eight projects in the plan. 
We also noted that the projects in San Diego and San Jose are scheduled to begin receiving the $197 million they are estimated to require in fiscal year 1998, which is 4 years before funding is to be considered for any of the projects in table 1. Appendix II shows our calculations of minimum urgency scores for each of the 80 projects that were not included in the plan. It is important to recognize that the minimum score calculations for these 30 locations include a minimum assessment of operational conditions and number of judges affected for these locations, 2 factors that have weights totaling 40 percent that are applied in developing the urgency scores. Although we were unable to determine the extent to which additional data on these other two criteria would increase the urgency scores, our minimum score calculations clearly showed that the 45 projects in the plan do not reflect the 45 most urgent projects according to the judiciary’s methodology. The AOC official responsible for developing the plan told us that the actual urgency score for Los Angeles when taking into account all 4 criteria would be somewhere in the 80s. Only 5 of the plan’s 45 projects have scores of 80 or above. The official said that the judiciary is aware of the conditions in Los Angeles and that the project was left out of the 5-year plan until some key planning decisions are made by GSA and the judiciary. The official added that one major obstacle to moving this project up in the plan is its cost, which is estimated at over $200 million. Another obstacle is the unwillingness of certain judicial districts to have their projects pushed back to make room for this project given that the plan assumes only $500 million will be available each year. We recognize that these factors will need to be considered in the funding process. However, not explaining the situation in Los Angeles in the plan seems questionable given the urgency score it would receive and that the objective of the plan was to communicate the judiciary’s urgent needs. Further, the obstacles to including it in the plan provided by AOC seem to be ones in which Congress has a stakeholder interest since it funds the projects and may not be fully aware of the situation in Los Angeles. We did note that the plan recognizes in a footnote that further study is needed to determine how to resolve the need for a project in Los Angeles. However, the plan provides no indication of the forthcoming challenge of funding this large project with limited resources or the severe conditions that exist in Los Angeles. For example, Los Angeles has a “severe” security concern rating of 4, which is quite different from the two California locations that are scheduled for funding beginning in 1998. These two locations, San Jose and San Diego, had security concern scores of only 1—the lowest score possible. No other locations, including those in the plan, had security concerns lower than 2. In fact, six other locations not included in the plan—including Jackson, MS—had “major” security concerns warranting scores of 3. In addition to not reflecting the most urgent projects, project funding priorities in the plan itself were not always exclusively based on the urgency scores. Our analysis of project priorities and urgency scores showed that several of the projects in the plan identified for funding in fiscal year 1998 had lower urgency scores than several projects scheduled for fiscal years 1999 and 2000. 
The first year of the plan, 1997, does contain several projects with high urgency scores, including projects in Brooklyn, NY; Corpus Christi, TX; Cleveland, OH; and Seattle, WA that have scores ranging from 77.5 to 100. However, eight projects identified for site and/or design funding in 1999 had higher urgency scores than seven projects identified for site and/or design funding in 1998. Table 2 shows these projects and their urgency scores. In fact, one 1999 project—Richmond, VA—has the fifth highest urgency score among projects in the plan, yet 23 other projects in the plan with lower urgency scores have higher funding priority. Furthermore, five projects scheduled for site and/or design funding in 2000 all have urgency scores higher than three of the projects scheduled for site and/or design funding in 1998. These five 2000 projects have scores ranging from 47.9 to 50.8 and are located in Harrisburg, PA; Sioux Falls, SD; Muskogee, OK; Birmingham, AL; and Toledo, OH. Appendix I shows the scores for all these projects as well as for the other projects in the plan. AOC officials said that projects in the plan were not prioritized exclusively on the basis of their urgency scores. As previously mentioned, the plan places heavy emphasis on projects that were already in the GSA pipeline. These pipeline projects include projects in the latter stages of the GSA planning process that GSA had already planned to request funding for in 1997 and 1998. The GSA planning process includes assessing needs, estimating costs, and developing prospectuses for congressional review. GSA officials confirmed that projects in the plan for 1997 and 1998 were in the GSA pipeline. They added that for projects identified for 1999 and beyond, GSA was not prepared to request funding any sooner than is specified in the 5-year plan. For example, they had only recently become aware of the urgent need in Richmond and were not prepared to request funds for a project there any earlier than 1999. According to AOC officials, the plan is transitional in that it addresses projects already identified for funding by GSA in 1997 and 1998, and then begins addressing urgent projects identified through the judiciary’s new process in 1999 and beyond. According to these officials, these pipeline projects should be funded first because their planning efforts are already under way. We recognize that the process for identifying and funding projects is complex and dynamic and that various factors, including planning decisions already made, total funding available, and the political nature of the process, will influence final decisions. Nonetheless, the judiciary’s plan and its related urgency score methodology have the potential to provide important baseline information to help decisionmakers weigh priorities and make more informed decisions. While we believe that pipeline projects should compete for funding, we also believe that the plan should make a convincing argument as to why these projects should be funded first. As discussed in more detail in the next section, the plan and related material do not (1) provide a rationale or justification for why Congress should fund these pipeline projects first or (2) discuss the consequences of or trade-offs involved in funding projects with low urgency scores that GSA had already planned to request instead of others that have higher scores. 
The judiciary’s 5-year plan and related materials do not provide all the information needed by decisionmakers to fully assess the relative merits of project proposals. Over the last several years, we have stressed the importance of placing construction proposals in a priority-based plan. And, our November 8, 1995, testimony on courthouse construction noted that Congress lacked information that (1) puts individual projects in some long-term strategic context and (2) provides a rationale or justification for project priorities. During our current review, we examined the judiciary’s plan and its related material to see whether they contained the type of information we said Congress lacked when making critical capital investment decisions. Our analysis showed that the plan and its related material do not articulate priorities in a long-term strategic context, primarily because they do not reflect an assessment of the urgency of all competing projects. As mentioned earlier, the judiciary focused on projects in the GSA pipeline and others identified during earlier internal efforts to plan for future projects. However, it did not assess other projects that, our work showed, have higher urgency scores than several of the projects in the plan. Although this approach produced a list of the judiciary’s priorities, it did not provide decisionmakers a long-term perspective on the urgency of projects in the plan compared to others that were not included. In addition, the plan did not explain that all needed projects had yet to be assessed. Without this explanation, decisionmakers could get the impression that projects not included in the plan all have lower urgency scores. Furthermore, the plan does not contain a rationale or justification for its project priorities. As discussed earlier, the judiciary fashioned the plan to give higher priority to projects in the GSA pipeline. However, the plan and related material do not explain that pipeline projects did not always have the highest urgency scores or articulate why Congress should fund these projects first. As a result, the plan does not convey to Congress the consequences of or trade-offs involved in not funding higher urgency projects sooner in favor of projects with lower urgency scores that are in the GSA pipeline. Related to not justifying its priorities, the plan and its related material lack specificity about conditions that exist at each location—information that would help decisionmakers better understand priorities. The plan and its related material do not discuss conditions such as security concerns or severe space shortages at different locations. Although the urgency scores and related data were provided, summarizing the specific conditions that are driving the need for individual projects could strengthen the plan and give decisionmakers a better perspective or understanding about why one project is more urgent than another. The judiciary has made an effort to improve capital investment planning for courthouse construction as evidenced by its methodology for assessing project urgency and its efforts to prepare a construction plan. However, the current 5-year plan does not reflect all the judiciary’s most urgently needed projects, and project funding priorities are not always based exclusively on urgency. Furthermore, the plan does not provide key project-specific information needed by decisionmakers to compare and evaluate the merits of individual projects and understand the rationale that supports priorities. 
We recognize that the plan is transitional and that it will evolve. We also recognize that the process for funding courthouse projects is dynamic and that various factors influence funding decisions. Within this context, the judiciary’s plan and the related urgency score methodology have the potential to provide important baseline information to help decisionmakers weigh priorities and make more informed decisions. This plan and its related material do not alert Congress, an important stakeholder, that the projects do not reflect all the judiciary’s most urgent needs, nor do they explain that pipeline projects with high funding priority do not always have the highest urgency scores. Absent this information, decisionmakers may not be aware of the severity of needs in other locations not included in the plan—such as Los Angeles—or that projects in the plan with high scores—such as Richmond—have a lower funding priority than other projects with lower scores. We recommend that the Director of AOC work with the Judicial Conference Committee on Security, Space, and Facilities to make improvements to the 5-year plan. These improvements should be aimed at making the plan more informative and a more useful tool for helping Congress to better understand project priorities and individual project needs. At a minimum, the plan should (1) fully disclose the relative urgency of all competing projects and (2) articulate the rationale or justification for project priorities, including information on the conditions that are driving urgency—such as specific security concerns or operational inefficiencies. AOC, on December 13, 1996, provided written comments on a draft of this report that generally concurred with the draft and our recommendations. AOC said that many of the improvements we recommended were already under consideration (see app. III). AOC recognized that the judiciary is responsible for providing its requirements to GSA as one of GSA’s many tenants and has done its part to project space requirements in a methodical way. However, AOC pointed out that, since the executive branch has not released strategic real property plans for the federal government as a whole, our comment about the lack of a strategic real estate plan would be more appropriately addressed to the executive branch. We agree that a governmentwide strategic plan for real property is needed and have recommended that GSA take the lead in developing such a plan in several of our prior products (see Related GAO Products at the end of this report). However, the development of a governmentwide plan was not the subject of this review. Instead, we reviewed the judiciary’s 5-year plan, which serves as input to GSA’s overall planning efforts. To date, through the 5-year plan and related urgency score methodology, the judiciary has begun playing an important role in improving strategic planning for the courthouse construction initiative. Its approach has the potential to provide important baseline data that are key ingredients to strategic planning. However, the judiciary’s efforts to date have been incomplete and could benefit from the improvements outlined in our recommendations. We believe that any GSA customer with major capital investment needs like the judiciary should think and plan strategically and have significant input into the development of a governmentwide plan. 
The proportion of courthouse projects in GSA’s new construction budget submissions has been significant, far surpassing that of all GSA’s other tenants combined—about $633 million of the $715 million GSA requested for new construction in fiscal year 1997 was for courthouse projects. According to the judiciary’s plan, courthouses could continue to take up a large proportion of GSA’s construction resources given that the plan identifies about $500 million in needs per year between fiscal years 1997 and 2001 and that, as our work showed, 80 additional locations have needs to be addressed beyond 2001. We received oral comments on a draft of this report from key GSA Public Buildings Service officials involved in the courthouse construction initiative—the Assistant Commissioner for Portfolio Management, the Courthouse Management Group (CMG) Program Executive, and the CMG Program Director. These officials agreed with the thrust of the report and said that it was a fair representation of issues related to the 5-year plan. In addition, the Assistant Commissioner pointed out that judiciary needs do not always have to be met through new construction. GSA will consider other options, including leasing and lease-construction, in developing proposals for consideration by Congress. She said that, because the judiciary conveyed its needs in what was called a construction plan, its audience may assume that new construction is the only option for meeting these space needs. The CMG Program Director, speaking for himself and the CMG Program Executive, wanted to reemphasize that the pipeline projects identified for 1997 and 1998, including those with low urgency scores, were the only projects GSA was prepared to request in these years. He said that GSA would need time to plan and develop proposals for locations with high urgency scores identified for 1999 and beyond. Finally, these officials also suggested a few minor clarifying changes to the draft, which we made where appropriate. We are sending copies of this report to the Director of AOC; Chairman of the Judicial Conference Committee on Security, Space, and Facilities; Administrator of GSA; Director, Office of Management and Budget; and other interested congressional committees and subcommittees. The major contributors to this report are listed in appendix IV. If you have any questions or would like additional information, please contact me on (202) 512-8387. The criteria and related weights applied in assessing urgency under the judiciary’s methodology include the length of time space shortages have existed as defined by the year a location was or will be out of space (30 percent); security concern ratings of 1 through 4 (30 percent), where a 1 indicates the lowest level of security concern; operational inefficiency ratings of 1 through 5 (25 percent), where a 1 indicates the lowest level of operational inefficiency; and the number of judges affected as defined by the number of judges without courtrooms (15 percent). Under the methodology, the range of possible conditions for each of the four criteria has a corresponding multiplication factor between 0 and 1. These factors are multiplied by the weight for each of the criteria to determine the urgency score. As a result, each project receives an urgency score on a scale of 100, with a score of 100 indicating the highest level or degree of urgency. 
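To make this arithmetic concrete, the short Python sketch below shows how such a score could be computed. Only the criteria weights, the use of 0-to-1 multiplication factors, and the 5 points associated with the lowest operational inefficiency rating come from this report; the actual tables that convert each security rating, out-of-space year, and other condition into a factor are not reproduced here, so the mappings and the example location in the sketch are illustrative assumptions rather than the judiciary's conversion values.

```python
# Illustrative sketch of the urgency-score arithmetic described above.
# The criteria weights (30, 30, 25, 15) come from the report; the tables
# that map each rating or condition to a 0-1 multiplication factor are
# not published here, so the mappings below are hypothetical placeholders
# used only to show the mechanics of the calculation.

SECURITY_MULTIPLE = {1: 0.0, 2: 1 / 3, 3: 2 / 3, 4: 1.0}        # assumed
# Assumed, except that a rating of 1 yielding 25 * 0.2 = 5 points matches
# the description of the minimum-score calculation in appendix II.
OPERATIONS_MULTIPLE = {1: 0.2, 2: 0.4, 3: 0.6, 4: 0.8, 5: 1.0}


def urgency_score(out_of_space_multiple: float,
                  security_rating: int,
                  operations_rating: int,
                  judges_multiple: float) -> float:
    """Return an urgency score on a scale of 100 (100 = most urgent)."""
    return (30 * out_of_space_multiple                      # out-of-space year (30 percent)
            + 30 * SECURITY_MULTIPLE[security_rating]       # security concern (30 percent)
            + 25 * OPERATIONS_MULTIPLE[operations_rating]   # operational inefficiency (25 percent)
            + 15 * judges_multiple)                         # judges without courtrooms (15 percent)


def minimum_score(out_of_space_multiple: float, security_rating: int) -> float:
    """Minimum score of the kind GAO computed for the 80 locations not in
    the plan: operational inefficiency is held at its lowest rating (1) and
    the judges-affected component at 0."""
    return urgency_score(out_of_space_multiple, security_rating,
                         operations_rating=1, judges_multiple=0.0)


if __name__ == "__main__":
    # Example: a location already out of space (out-of-space multiple assumed
    # to be 1.0) with a "major" security concern rating of 3.
    print(round(minimum_score(1.0, 3), 1))   # 30 + 20 + 5 + 0 = 55.0
```

Under these assumed factors, the minimum-score variant mirrors the calculation described in the next paragraph: because the operational inefficiency and judges-affected components are set at their floor values, adding actual data for those two factors could only raise, never lower, a location's score.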
To calculate minimum scores for the 80 locations not included in the 5-year plan, we used security concern and out-of-space year data AOC provided and applied the judiciary’s urgency score methodology. The data AOC provided are shown in table II.1 under the columns entitled “security concern” and “out-of-space year.” According to the methodology, each level of security concern and each out-of-space year has a corresponding multiple used in calculating the score. These multiples are shown in table II.1 under the columns entitled “security score multiple” and “out-of-space year multiple.” The security concern and out-of-space year portions of the urgency score result from applying the multiple to 30, the weight given to each of these factors. These scores are shown in table II.1 under the columns “security score” and “out-of-space year score.” Although data for operational inefficiencies at these locations were not available, we assigned a minimum rating of 1 to each of the 80 locations. AOC officials told us that, according to the scoring methodology, 1 was the lowest score locations could receive for this criterion. According to the judiciary’s methodology, a score of 1 equates to 5 points in the calculation of the urgency score. For the fourth factor, number of judges affected, AOC did not have data, and thus we used 0 for this factor in our calculation, which is the lowest score a location can receive for this factor. The minimum urgency score total shown in the last column of table II.1, therefore, represents an addition of the security score, out-of-space year score, and minimum scores applied for operational conditions and number of judges affected.
Related GAO Products
Federal Courthouse Construction: More Disciplined Approach Would Reduce Costs and Provide for Better Decisionmaking (GAO/T-GGD-96-19, Nov. 8, 1995).
General Services Administration: Opportunities for Cost Savings in the Public Buildings Area (GAO/T-GGD-95-149, July 13, 1995).
Federal Real Property: Key Acquisition and Management Obstacles (GAO/T-GGD-93-42, July 27, 1993).
Federal Office Space: Obstacles to Purchasing Commercial Properties From RTC, FDIC, and Others (GAO/GGD-92-60, Mar. 31, 1992).
General Services Issues (GAO/OCG-93-28TR, Dec. 1992).
Real Property Management Issues Facing GSA and Congress (GAO/T-GGD-92-4, Oct. 30, 1991).
Long-Term Neglect of Federal Building Needs (GAO/T-GGD-91-64, Aug. 1, 1991).
Federal Buildings: Actions Needed to Prevent Further Deterioration and Obsolescence (GAO/GGD-91-57, May 13, 1991).
The Disinvestment in Federal Office Space (GAO/T-GGD-90-24, Mar. 20, 1990).
Pursuant to a congressional request, GAO reviewed the General Services Administration (GSA) and the Administrative Office of the U.S. Courts' (AOC) 5-year courthouse construction plan, focusing on whether the 5-year plan: (1) reflects the judiciary's most urgent courthouse construction needs; and (2) provides information needed by decisionmakers to evaluate the relative merit of project proposals. GAO found that: (1) while the judiciary has developed a methodology for assessing project urgency and a 5-year construction plan to communicate its urgent courthouse construction needs, GAO's analysis suggests that the 5-year plan does not reflect all of the judiciary's most urgent courthouse construction needs; (2) in preparing the 5-year plan, the judiciary developed urgency scores for 45 projects; (3) it did not develop urgency scores for other locations that according to AOC also need new courthouses; (4) GAO's analysis of available data on conditions at the 80 other locations showed that 30 of them likely would receive an urgency score higher than some projects in the plan; (5) for projects that are in the plan, high urgency scores did not always lead to high funding priority; (6) AOC officials said that this was a transitional plan in that it placed heavy emphasis in assigning funding priorities on the projects already in the GSA pipeline rather than solely on project urgency; (7) GAO's work also showed that the judiciary's plan and related material do not present competing projects in a long-term strategic context or articulate a rationale or justification for proposed projects and their relative priority; (8) they do not contain project-specific information on the conditions that exist at each location that would help decisionmakers compare the merits of individual projects, better understand the rationale for funding priorities, and justify funding decisions; and (9) GAO recognizes that the plan is transitional and that it is reasonable for pipeline projects to receive priority consideration for funding, but the plan and related material should make a convincing argument as to why they should be funded before others that have higher urgency scores.
Federal regulation is a basic tool of government. Agencies issue thousands of rules and regulations each year to implement statutes enacted by Congress. The public policy goals and benefits of regulations include, among other things, ensuring that workplaces, air travel, foods, and drugs are safe; that the nation’s air, water and land are not polluted; and that the appropriate amount of taxes is collected. The costs of these regulations are estimated to be in the hundreds of billions of dollars, and the benefits estimates are even higher. Given the size and impact of federal regulation, it is no surprise that Congresses and Presidents have taken a number of actions to refine and reform the regulatory process within the past 25 years. One goal of such initiatives has been to reduce regulatory burdens on affected parties, but other purposes have also played a part. Among these are efforts to require more rigorous analyses of proposed rules and thus provide better information to decision makers, to enhance oversight of rule making by Congress and the President, and to promote greater transparency and participation in the process. Over the last decade, at the request of Congress, GAO has released over 60 reports and testimonies reviewing the implementation of various regulatory reform initiatives. Some initiatives, such as the Paperwork Reduction Act (PRA), Regulatory Flexibility Act (RFA), Unfunded Mandates Reform Act (UMRA), and Executive Order 12866 on Regulatory Planning and Review, have undergone repeated scrutiny. While our reviews identified specific strengths and weaknesses of individual initiatives, it may be more worthwhile to focus on crosscutting strengths and weaknesses. The common strengths we identified largely mirror the general purposes of various reform initiatives. The common weaknesses reflect issues associated with both the design and implementation of the initiatives. Our reviews suggest at least four overall strengths or benefits that have been associated with existing regulatory reform initiatives: (1) increasing the attention directed to rules and rule making, (2) increasing expectations regarding the analytical support for proposed rules, (3) encouraging and facilitating greater public participation in rule making, and (4) improving the transparency of the rule-making process. First, the simple fact that such initiatives bring added attention to rules and the rule-making process is an important benefit. As we have pointed out in prior reports, oversight of agencies’ rule making can result in useful changes to rules. Furthermore, awareness of this added scrutiny may provide an important indirect effect. For example, in a previous GAO review, Department of Transportation officials told us that they will not even propose certain regulatory provisions because they know that the Office of Management and Budget (OMB), which reviews significant agency draft rules under Executive Order 12866, will not find them acceptable. Similarly, there is evidence that the focus placed on potential mandates under UMRA may have helped to discourage or limit the costs of federal mandates. Second, several of the reform initiatives have increased the analytical requirements and expectations in the regulatory process. These initiatives have raised the bar for agencies regarding the information and analysis needed to support policy decisions underlying regulations. 
Simply put, the initiatives call for more analysis of the effects—both benefits and costs—of proposed regulations before they are implemented. Whether imposed by statute or executive order, these initiatives seek to answer a basic question, “What are the consequences of this rule?” Closely related are other requirements that encourage agencies to identify and consider alternatives when developing regulations. Executive Order 12866, for example, asks agencies to first identify and assess available alternatives to direct regulation. Initiatives such as RFA and UMRA ask agencies to identify regulatory alternatives that will be less burdensome to regulated parties. Third, some of the reform initiatives have encouraged and facilitated greater public participation and consultation in rule making. Initiatives such as the E-Government Act and the Government Paperwork Elimination Act encourage agencies to allow the public to communicate with them by electronic means. Other initiatives require additional consultation by agencies with the parties that might be affected by rules under development. These initiatives ask that agencies seek input earlier in the process, rather than waiting for the public to comment on proposals published in the Federal Register. A final shared strength of many of these initiatives, and one closely connected to the three previous items, is that they help to improve the transparency of the regulatory process. In prior work, we have cited transparency as a regulatory best practice. By providing more information about potential effects and alternatives, requiring more documentation and justification of agencies’ decisions, and facilitating public access to and queries about such information, regulatory reform initiatives can help make the process more open. We recommended that more could be done to increase transparency, and we have also highlighted the value of transparency when agencies had particularly clear and complete documentation supporting their rule making. As the Administrator of OMB’s Office of Information and Regulatory Affairs (OIRA) pointed out, openness can help to “transform the public debate about regulation to one of substance … rather than process.” Despite these strengths, the overall results and effectiveness of regulatory reform initiatives have often been mixed. This may be particularly true when results of the initiatives are compared to the goals and purposes originally established for them. For example, despite the goals set for the reduction of paperwork burdens under PRA, we have repeatedly testified about the growth in burden hours imposed by federal information collections. We similarly reported that initiatives such as UMRA, the executive order on federalism, and requirements imposed under Section 610 of RFA for reviews of existing rules, have had little impact on agencies’ rule making. Our reviews have identified at least four general reasons that might explain why reform initiatives have not been more effective: (1) the limited scope and coverage of various requirements, (2) lack of clarity regarding key terms and definitions, (3) uneven implementation of the initiatives’ requirements, and (4) a predominant focus on just one part of the regulatory process, agencies’ development of rules. First, we have pointed out significant limits in the scope and coverage of certain reform initiatives. UMRA provides one example of the effect of definitional limitations, exceptions, and thresholds on restricting an initiative’s coverage. 
As we noted in a report last year, part of the reason for the relatively small number of rules identified as containing mandates under UMRA could be traced to 14 different restrictions on the identification of federal mandates under the Act. Furthermore, our analysis of all 122 major or economically significant rules (generally, rules with an impact of $100 million or more) published in 2001 and 2002 also showed that more than one of these restrictions applied to 72 percent of the 65 rules that were not identified as containing federal mandates under UMRA but nonetheless appeared to result in significant financial effects on nonfederal parties. UMRA, along with RFA, also illustrates the potential domino effect of building reform requirements on other procedural requirements. Both acts only apply to rules for which an agency publishes a notice of proposed rule making. However, agencies can publish final regulatory actions without notices of proposed rule making using either good cause, categorical, or statute-specific exceptions to the Administrative Procedure Act’s notice and comment requirements. In one of our prior reports, we estimated that about half of all final regulatory actions published by agencies were issued without going through the proposed rule stage. Although many final rules without proposed rules were minor actions, in both that analysis and our recent UMRA review there were major rules that did not have notices of proposed rule making. Another recurring message in our reports has been the effect of unclear terms and definitions that affect the applicability of requirements. When this lack of clarity is combined with the discretion given to rule-making agencies to interpret the requirements in reform initiatives, it is not surprising that we have observed uneven implementation across agencies. In particular, we have often cited the need to clarify key terms in the Regulatory Flexibility Act. RFA requires analyses and other actions to help address concerns about the impact of regulations on small entities, but the requirements do not apply if the agency head certifies that the agency’s rule will not have a “significant economic impact on a substantial number of small entities.” However, the Act neither defines this key phrase nor places clear responsibility on any party to define it consistently across government. As a result, we found that agencies had different interpretations of RFA’s requirements. We said in a series of reports that, if Congress wanted to strengthen the implementation of RFA, it should consider amending the Act to define the key phrases or provide some other entity with clearer authority and responsibility to interpret RFA’s provisions. To date, Congress has not acted on our recommendations. Again, there is a domino effect associated with this uncertainty, because other reform initiatives, such as the requirement for agencies to review existing rules under Section 610 of RFA and a requirement to provide compliance assistance guides to regulated entities, only apply if an agency has determined the rule will have a significant economic impact on a substantial number of small entities. Sometimes, though, it might not be uncertainty over the provisions of an initiative that helps to limit its effectiveness, but rather an agency’s implementation of the requirements. 
For example, as noted in our recent report on the Paperwork Reduction Act, one of the provisions aimed at helping to achieve the goals of minimizing burden while maximizing utility is a requirement for chief information officers (CIO) to review and certify information collections. However, our analysis of case studies showed that CIOs provided these certifications despite often missing or inadequate support from the program offices sponsoring the collections. We recommended that OMB clarify the kinds of support it asks agency CIOs to provide for certifications and that heads of certain agencies direct responsible CIOs to strengthen agency support for CIO certifications, including with regard to the necessity of collection, burden reduction efforts, and plans for the use of information collected. Our reports over the years have also highlighted issues regarding agencies’ implementation of analytical requirements, such as the economic analyses that support regulations. Although the economic performance of some federal actions is assessed prospectively, few federal actions are monitored for their economic performance retrospectively. In addition, our reviews have found that economic assessments that analyze regulations prospectively are often incomplete and inconsistent with general economic principles. Moreover, the assessments are not always useful for comparisons across the government, because they are often based on different assumptions for the same key economic variables. In our recent report on UMRA, we noted that parties from various sectors expressed concerns about the accuracy and completeness of agencies’ cost estimates, and some also emphasized that more needed to be done to address the benefits side of the equation. Our reviews have found that not all benefits are quantified and monetized by agencies, partly because of the difficulty in estimation. Finally, although not an explicit finding in any of our reports, it is clear when stepping back to look at the big picture presented by the set of reform initiatives and our body of regulatory work that these initiatives primarily target one particular phase of the regulatory process, agencies’ development of rules. While rule making is clearly an important point in the process when the specific substance and impact of regulations are most open to public debate, other phases also help determine the effectiveness of regulation. Few of the reform initiatives contain major requirements or processes that address those other phases in the life cycle of regulations— from the underlying statutory authorizations, through effective implementation and monitoring of compliance with regulatory provisions, to evaluation and revision of existing rules. For example, only UMRA explicitly addresses the potential effect of legislative proposals in creating mandates that would ultimately be implemented through regulations, and that element of UMRA has generally been viewed as among its most effective elements. We have reported that agencies sometimes have little rule-making discretion, so in some cases concerns raised about burdensome regulations are traceable to the statutes underlying the regulations, rather than a failure of an agency to comply with rule-making requirements. With regard to other phases in the regulatory process, RFA is unique among statutory requirements in having a provision (Section 610) for reviews of existing rules, although it is limited to rules with significant effects on small entities. 
Executive Order 12866 also includes some provisions to encourage agencies to review and revise existing rules. It is not clear, however, that either the Section 610 or the executive order look back provisions have been consistently and effectively implemented. As this subcommittee begins to develop its regulatory reform agenda, our body of work on regulatory issues, and also on results-oriented government management, suggests two general avenues of effort you may want to consider as useful starting points. One avenue is to revisit the procedures, definitions, exemptions, and other provisions of existing initiatives to determine whether changes might be needed to better achieve their goals. Second, the subcommittee may wish to explore options to more effectively and productively evaluate existing regulations and the results they have generated. Not only could such retrospective evaluations help to inform Congress and other policymakers about ways to improve the design of regulations and regulatory programs, but they could play a part in the overall reexamination of the base of the federal government that we have recommended in our recent work on addressing 21st century challenges. With respect to the first avenue, my testimony to this point indicates that there are ample opportunities to revisit and refine existing regulatory reform initiatives. Although progress has been made to implement recommendations and matters for consideration we have raised in our prior reports, there are still unresolved issues. In particular, Congress may want to consider whether some provisions of existing statutory initiatives need to be amended to make those initiatives more effective. We still believe, for example, that Congress should clarify key terms and definitions in RFA or provide another entity with the authority and responsibility to do so. We also believe there is some value to taking a broader look at how all of the pieces of existing initiatives have, or have not, contributed to achieving the purposes intended. For example, we suggested in our recent review of PRA that a new approach might be required to address burden reduction. As illustrated by our work on lessons learned about UMRA in the 10 years since it was enacted, such reviews can reveal opportunities and options for both reinforcing the strengths and addressing the weaknesses that have emerged in practice. The options can take a number of different directions. For example, in our work on UMRA, concerns about the scope of coverage were most frequently raised by the many knowledgeable parties we consulted, but issues and options were also identified regarding enforcement, consultation, and the analytic framework, among other topics. In undertaking reviews of existing initiatives, it will be important to also revisit the reasons why particular limitations and exceptions were included in the initiatives to begin with. As pointed out in the UMRA work, this probably needs to be an inclusive effort to be successful, involving all affected parties in the debate to find common ground if changes are to be accepted. The second broad avenue I would suggest the subcommittee consider in its reform agenda would be to explore using retrospective evaluations of existing regulations. Such evaluations could help to keep the regulatory process focused on results and identify ways to better meet emerging challenges. 
More retrospective analysis of federal regulations could, among other potential benefits, enable policymakers to better gauge actual benefits and costs and whether regulations are achieving their desired goals, bring additional accountability to the regulatory process, identify opportunities to revise existing regulations, and provide information that could lead to better decisions regarding future regulations. In our work this year on both UMRA and economic performance measures, we clearly heard from the experts we consulted that they believe more retrospective analysis is needed and, further, that there are ways to improve the quality and credibility of the analyses that are done. In the UMRA work, parties had particularly strong views about the need for better evaluation and research of federal mandates, including those imposed by regulations. The most frequently suggested option to address this issue was to do more postimplementation evaluation of existing mandates, or "look-backs" at their effectiveness. As one of the parties pointed out, retrospective evaluation of regulations is useful because rules can change people's behavior in ways that cannot be predicted prior to implementation. In our recent workshop where we obtained the views of experts about the use of economic performance measures, such as a comparison of benefits and costs (net benefits) and cost-effectiveness, participants identified several gaps in the application of these measures to analyze federal regulations and programs. For example, while some agencies have done retrospective economic performance assessments, the participants said that federal agencies generally do not assess the performance of regulations or existing programs retrospectively, even though this information could be useful in managing programs. However, there are also challenges to effectively implementing retrospective evaluations. For example, we previously identified some of the difficulties regulatory agencies face in demonstrating the results of their work, such as identifying and collecting the data needed to demonstrate results, the diverse and complex factors that affect agencies' results (for example, the need to achieve results through the actions of third parties), and the long time period required to see results in some areas of federal regulation. There is also a potential concern about balance because, as I noted earlier, it may be more difficult to quantify the benefits of regulations than the costs. Finally, I want to emphasize that this is a particularly timely point to be reviewing the regulatory process because of the long-term fiscal imbalance facing the United States, along with other significant trends and challenges. The 21st century challenges that we have been highlighting this year establish the case for change and the need to reexamine the base of the federal government and all of its existing programs, policies, functions, and activities. We recognize that a successful reexamination of the base of the federal government will entail multiple approaches over a period of years. No single approach or reform can address all of the questions and program areas that need to be revisited. However, federal regulation is a critical tool of government, and regulatory programs play a key part in how the federal government addresses many of the country's needs.
Asking the questions necessary to begin reexamining the federal regulatory process is an important first step in the long-term effort to transform what the federal government does and how it does it. Madam Chairman, this concludes my prepared statement. Once again, I appreciate the opportunity to testify on these important issues. I would be pleased to address any questions you or other members of the subcommittee might have at this time. If additional information is needed regarding this testimony, please contact J. Christopher Mihm, Managing Director, Strategic Issues, at (202) 512-6806 or [email protected]. Congresses and Presidents have taken a number of actions to refine and reform the regulatory process within the past 25 years. The following paragraphs summarize the general purpose, applicability, and requirements imposed by some of those regulatory reform initiatives. PRA was originally enacted in 1980, then amended in 1986 and 1995. PRA requires agencies to justify any collection of information from the public in order to minimize the paperwork burden they impose and to maximize the practical utility of the information collected. The Act applies to independent and nonindependent regulatory agencies. Under PRA, agencies are required to submit all proposed information collections to the Office of Management and Budget (OMB) for approval. In their submissions, agencies must establish the need and intended use of the information, estimate the burden that the collection will impose on respondents, and show that the collection is the least burdensome way to gather the information. PRA also established the Office of Information and Regulatory Affairs (OIRA) within OMB to provide central agency leadership and oversight of government efforts to reduce unnecessary paperwork and improve the management of information resources. Subsequent reform initiatives, including amendments of PRA, have added responsibilities for OIRA, such as making the office responsible for overseeing and reporting on agencies' compliance with new regulatory requirements. The PRA of 1995, for example, included a requirement that OIRA, in consultation with agency heads, set annual governmentwide goals for the reduction of information collection burdens. RFA was enacted in response to concerns about the effect that federal regulations can have on small entities. RFA requires independent and nonindependent regulatory agencies to assess the impact of their rules on "small entities," defined as including small businesses, small governmental jurisdictions, and certain small not-for-profit organizations. Under RFA, an agency must prepare an initial regulatory flexibility analysis at the time proposed rules are issued unless the head of the agency determines that the proposed rule would not have a "significant economic impact upon a substantial number of small entities." The Act also requires agencies to ensure that small entities have an opportunity to participate in the rule-making process and requires the Chief Counsel of the Small Business Administration's Office of Advocacy to monitor agencies' compliance. Further, Section 610 of RFA requires agencies to review, within 10 years of promulgation, existing rules that have or will have a significant impact on small entities to determine whether those rules should be continued without change or should be amended or rescinded to minimize their impact on small entities. Congress amended RFA in 1996 with the Small Business Regulatory Enforcement Fairness Act (SBREFA). SBREFA made certain agency actions under RFA judicially reviewable.
Other provisions in SBREFA added new requirements. For example, SBREFA requires agencies to develop one or more compliance guides for each final rule or group of related final rules for which the agency is required to prepare a regulatory flexibility analysis, and the Act requires agencies to provide small entities with some form of relief from civil monetary penalties. SBREFA also requires the Environmental Protection Agency and the Occupational Safety and Health Administration to convene advocacy review panels before publishing an initial regulatory flexibility analysis. UMRA was enacted to address concerns about federal statutes and regulations that require nonfederal parties to expend resources to achieve legislative goals without being provided funding to cover the costs. UMRA generates information about the nature and size of potential federal mandates but does not preclude the implementation of such mandates. UMRA applies to proposed federal mandates in both legislation and regulations, but it does not apply to rules published by independent regulatory agencies. With regard to the regulatory process, UMRA requires federal agencies to prepare written statements containing a "qualitative and quantitative assessment of the anticipated costs and benefits" for any rule for which a proposed rule was published that includes a federal mandate that may result in the expenditure of $100 million or more in any 1 year by state, local, and tribal governments in the aggregate, or by the private sector. For such rules, agencies are to identify and consider a reasonable number of regulatory alternatives and from those select the least costly, most cost-effective, or least burdensome alternative that achieves the objectives of the rule (or explain why that alternative was not selected). UMRA also includes a consultation requirement that agencies develop a process to permit elected officers of state, local, and tribal governments (or their designees) to provide input in the development of regulatory proposals containing significant intergovernmental mandates. The Congressional Review Act (CRA) was enacted as part of SBREFA in 1996 to better ensure that Congress has an opportunity to review, and possibly reject, rules before they become effective. CRA established expedited procedures by which members of Congress may disapprove agencies' rules by introducing a resolution of disapproval that, if adopted by both Houses of Congress and signed by the President, can nullify an agency's rule. CRA applies to rules issued by nonindependent and independent regulatory agencies. CRA requires agencies to file final rules with both Congress and GAO before the rules can become effective. GAO's role under CRA is to provide Congress with a report on each major rule (for example, rules with a $100 million impact on the economy), including GAO's assessment of the issuing agency's compliance with the procedural steps required by various acts and executive orders governing the rule-making process. Congress enacted the Government Paperwork Elimination Act (GPEA) in 1998, furthering an expanding trend in the federal government toward using e-government applications to collect and disseminate information and forms. GPEA requires federal agencies to provide the public, when practicable, the option of submitting, maintaining, and disclosing required information—such as employment records, tax forms, and loan applications—electronically, instead of on paper.
GPEA also requires agencies to safeguard privacy and to protect documents from being altered, and it encourages federal government use of a range of electronic signature alternatives when practicable. In 2000, Congress enacted the Truth in Regulating Act (TIRA) to provide a mechanism for Congress to obtain more information about certain rules. TIRA contemplated a 3-year pilot project during which GAO would perform independent evaluations of "economically significant" agency rules when requested by a chairman or ranking member of a committee of jurisdiction of either House of Congress. The independent evaluation would include an evaluation of the agency's analysis of the potential benefits, potential costs, and alternative approaches considered during the rule-making proceeding. Under TIRA, GAO was required to report on its evaluations within 180 calendar days after receiving a committee request. Section 6(b) of the Act, however, provided that the pilot project would continue only if, in each fiscal year, a specific annual appropriation was made. During the 3-year period contemplated for the pilot project, Congress did not enact any specific appropriation to cover TIRA evaluations, and the authority for the 3-year pilot project expired on January 15, 2004. Congress has considered reauthorizing TIRA, and we have strongly urged that any reauthorization of TIRA continue to contain language requiring a specific annual appropriation before we are required to undertake independent evaluations of major rule makings. We have also recommended that TIRA evaluations be conducted on a pilot project basis. Enacted as Section 515 of the Treasury and General Government Appropriations Act of 2001, the Information Quality Act directed OMB to issue governmentwide guidelines to ensure and maximize the quality, objectivity, utility, and integrity of information (including statistical information) disseminated by federal agencies. The Act requires OMB to issue guidelines directing all agencies to issue their own guidelines within 1 year and to establish administrative mechanisms allowing affected persons to seek and obtain correction of information maintained and disseminated by the agency. The Act also requires agencies to report periodically to the Director of OMB on the number and nature of complaints received and how such complaints were handled by the agency. The E-Government Act was intended to enhance the management and promotion of electronic government services and processes. With regard to the regulatory process, the Act requires agencies, to the extent practicable, to accept public comments on proposed rules by electronic means. The Act also requires agencies to ensure that publicly accessible federal Web sites contain electronic dockets for their proposed rules, including all comments submitted on the rules and other relevant materials. The E-Government Act also established an Office of Electronic Government within OMB, headed by an administrator appointed by the President. In addition to congressional regulatory reform initiatives enacted in statutes, it is important to also recognize the key role that presidential initiatives have in the regulatory process. Centralized review of agencies' regulations within the Executive Office of the President has been part of the rule-making process for more than 30 years. The formal process by which OIRA currently reviews agencies' proposed rules and final rules has remained essentially unchanged since Executive Order 12866 was issued in 1993.
Under Executive Order 12866, OIRA reviews significant proposed and final rules from all agencies, other than independent regulatory agencies, before they are published in the Federal Register. The executive order states, among other things, that agencies should assess all costs and benefits of available regulatory alternatives, including both quantitative and qualitative measures. It also provides that agencies should select regulatory approaches that maximize net benefits (unless a statute requires another approach). Among other principles, the executive order encourages agencies to tailor regulations to impose the least burden on society needed to achieve the regulatory objectives. The executive order also established agency and OIRA responsibilities in the review of regulations, including transparency requirements. OIRA provides guidance to federal agencies on implementing the requirements of the executive order, such as guidance on preparing economic analyses required for significant rules. There are also other orders that impose requirements on agencies during rule making, such as Executive Order 13132 on federalism that requires agencies to prepare a federalism summary impact statement for actions that have federalism implications. Also, in January 2005, OMB published a final bulletin on peer review that establishes minimum standards for when peer review is required for scientific information, including stricter minimum standards for the peer review of "highly influential" scientific assessments, and the types of peer review that should be considered by agencies in different circumstances. The selection of an appropriate peer review mechanism is left to the agency's discretion. More detailed information about these various initiatives is available in the related GAO products listed at the end of this testimony.
Economic Performance: Highlights of a Workshop on Economic Performance Measures. GAO-05-796SP. Washington, D.C.: July 2005.
Paperwork Reduction Act: New Approach May Be Needed to Reduce Government Burden on Public. GAO-05-424. Washington, D.C.: May 20, 2005.
Unfunded Mandates: Views Vary About Reform Act's Strengths, Weaknesses, and Options for Improvement. GAO-05-454. Washington, D.C.: March 31, 2005.
21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 2005.
Electronic Government: Federal Agencies Have Made Progress Implementing the E-Government Act of 2002. GAO-05-12. Washington, D.C.: December 10, 2004.
Unfunded Mandates: Analysis of Reform Act Coverage. GAO-04-637. Washington, D.C.: May 12, 2004.
Paperwork Reduction Act: Agencies' Paperwork Burden Estimates Due to Federal Actions Continue to Increase. GAO-04-676T. Washington, D.C.: April 20, 2004.
Rulemaking: OMB's Role in Reviews of Agencies' Draft Rules and the Transparency of Those Reviews. GAO-03-929. Washington, D.C.: September 22, 2003.
Electronic Rulemaking: Efforts to Facilitate Public Participation Can Be Improved. GAO-03-901. Washington, D.C.: September 17, 2003.
Civil Penalties: Agencies Unable to Fully Adjust Penalties for Inflation Under Current Law. GAO-03-409. Washington, D.C.: March 14, 2003.
Regulatory Flexibility Act: Clarification of Key Terms Still Needed. GAO-02-491T. Washington, D.C.: March 6, 2002.
Regulatory Reform: Compliance Guide Requirement Has Had Little Effect on Agency Practices. GAO-02-172. Washington, D.C.: December 28, 2001.
Federal Rulemaking: Procedural and Analytical Requirements at OSHA and Other Agencies. GAO-01-852T. Washington, D.C.: June 14, 2001.
Regulatory Reform: Implementation of Selected Agencies' Civil Penalties Relief Policies for Small Entities. GAO-01-280. Washington, D.C.: February 20, 2001.
Regulatory Flexibility Act: Implementation in EPA Program Offices and Proposed Lead Rule. GAO/GGD-00-193. Washington, D.C.: September 20, 2000.
Electronic Government: Government Paperwork Elimination Act Presents Challenges for Agencies. GAO/AIMD-00-282. Washington, D.C.: September 15, 2000.
Regulatory Reform: Procedural and Analytical Requirements in Federal Rulemaking. GAO/T-GGD/OGC-00-157. Washington, D.C.: June 8, 2000.
Federalism: Previous Initiatives Have Little Effect on Agency Rulemaking. GAO/T-GGD-99-131. Washington, D.C.: June 30, 1999.
Regulatory Accounting: Analysis of OMB's Reports on the Costs and Benefits of Federal Regulation. GAO/GGD-99-59. Washington, D.C.: April 20, 1999.
Regulatory Flexibility Act: Agencies' Interpretations of Review Requirements Vary. GAO/GGD-99-55. Washington, D.C.: April 2, 1999.
Regulatory Burden: Some Agencies' Claims Regarding Lack of Rulemaking Discretion Have Merit. GAO/GGD-99-20. Washington, D.C.: January 8, 1999.
Federal Rulemaking: Agencies Often Published Final Actions Without Proposed Rules. GAO/GGD-98-126. Washington, D.C.: August 31, 1998.
Regulatory Management: Implementation of Selected OMB Responsibilities Under the Paperwork Reduction Act. GAO/GGD-98-120. Washington, D.C.: July 9, 1998.
Regulatory Reform: Agencies Could Improve Development, Documentation, and Clarity of Regulatory Economic Analyses. GAO/RCED-98-142. Washington, D.C.: May 26, 1998.
Regulatory Reform: Implementation of Small Business Advocacy Review Panel Requirements. GAO/GGD-98-36. Washington, D.C.: March 18, 1998.
Congressional Review Act: Implementation and Coordination. GAO/T-OGC-98-38. Washington, D.C.: March 10, 1998.
Regulatory Reform: Agencies' Section 610 Review Notices Often Did Not Meet Statutory Requirements. GAO/T-GGD-98-64. Washington, D.C.: February 12, 1998.
Unfunded Mandates: Reform Act Has Had Little Effect on Agencies' Rulemaking Actions. GAO/GGD-98-30. Washington, D.C.: February 4, 1998.
Regulatory Reform: Changes Made to Agencies' Rules Are Not Always Clearly Documented. GAO/GGD-98-31. Washington, D.C.: January 8, 1998.
Regulatory Reform: Agencies' Efforts to Eliminate and Revise Rules Yield Mixed Results. GAO/GGD-98-3. Washington, D.C.: October 2, 1997.
Managing for Results: Regulatory Agencies Identified Significant Barriers to Focusing on Results. GAO/GGD-97-83. Washington, D.C.: June 24, 1997.
Regulatory Burden: Measurement Challenges and Concerns Raised by Select Companies. GAO/GGD-97-2. Washington, D.C.: November 18, 1996.
Regulatory Reform: Implementation of the Regulatory Review Executive Order. GAO/T-GGD-96-185. Washington, D.C.: September 25, 1996.
Regulatory Flexibility Act: Status of Agencies' Compliance. GAO/GGD-94-105. Washington, D.C.: April 27, 1994.
Federal regulation is a basic tool of government. Agencies issue thousands of rules and regulations each year to achieve goals such as ensuring that workplaces, air travel, and foods are safe; that the nation's air, water, and land are not polluted; and that the appropriate amount of taxes is collected. The costs of these regulations are estimated to be in the hundreds of billions of dollars, and estimates of their benefits are even higher. Over the past 25 years, a variety of congressional and presidential regulatory reform initiatives have been instituted to refine the federal regulatory process. This testimony discusses findings from the large number of GAO reports and testimonies prepared at the request of Congress to review the implementation of regulatory reform initiatives. Specifically, GAO discusses common strengths and weaknesses of existing reform initiatives that its work has identified. GAO also addresses some general opportunities to reexamine and refine existing initiatives and the federal regulatory process to make them more effective. GAO's prior reports and testimonies contain a variety of recommendations to improve particular reform initiatives and aspects of the regulatory process. GAO's evaluations of regulatory reform initiatives indicate that some of these initiatives have yielded mixed results. Among the goals of the initiatives are reducing regulatory burden, requiring more rigorous regulatory analysis, and enhancing oversight. The initiatives have been beneficial in a number of ways, but they also were often less effective than anticipated. GAO's reviews suggest at least four overall strengths or benefits associated with existing initiatives: (1) increasing the attention directed to rules and rule making, (2) increasing expectations regarding the analytical support for proposed rules, (3) encouraging and facilitating greater public participation in rule making, and (4) improving the transparency of the rule-making process. On the other hand, at least four recurring reasons help explain why reform initiatives have not been more effective: (1) limited scope and coverage of various requirements, (2) lack of clarity regarding key terms and definitions, (3) uneven implementation of the initiatives' requirements, and (4) a predominant focus on just one part of the regulatory process, agencies' development of rules. As Congress develops its regulatory reform agenda, the lessons and opportunities identified by GAO's body of work suggest two avenues that might provide a useful starting point. The first would be to broadly revisit the procedures, definitions, exemptions, and other provisions of existing initiatives to determine whether changes are needed to better achieve their goals. As a second avenue, GAO's reviews suggest that the regulatory process could benefit from more attention to evaluations of existing regulations, while recognizing some of the difficulties associated with carrying out such evaluations. The lessons learned from retrospective reviews could help to keep the regulatory process focused on results and inform future action to meet emerging challenges. This is a particularly timely point to be reviewing the regulatory process. The long-term fiscal imbalance facing the United States, along with other significant trends and challenges, establishes the case for change and the need to reexamine the base of the federal government and all of its existing programs, policies, functions, and activities.
No single approach or reform can address all of the questions and program areas that need to be revisited. However, federal regulation is a critical tool of government, and regulatory programs play a key part in how the federal government addresses many of the country's needs. Therefore, reassessing the regulatory framework must be part of that long-term effort to transform what the federal government does and how it does it.
NNSA conducts nuclear weapon and nonproliferation-related national security activities in research and development laboratories, production plants, and other facilities. Specifically, NNSA operates three weapons laboratories—Lawrence Livermore National Laboratory (LLNL), California; Los Alamos National Laboratory (LANL), New Mexico; and the Sandia National Laboratories, New Mexico and California—and four nuclear weapons production sites—the Pantex Plant, Texas; the Y-12 Plant, Tennessee; the Kansas City Plant, Missouri; and the Savannah River Site, South Carolina. NNSA also operates the Nevada Test Site. To implement its nuclear weapons programs, NNSA received about $6.4 billion for fiscal year 2006 and has requested more than $6.4 billion for fiscal year 2007. Between fiscal years 2008 and 2011, NNSA is proposing to spend almost $27 billion for these programs. Over the past decade, NNSA has invested a substantial amount of money in sustaining the cold war stockpile and upgrading the three weapons laboratories with new, state-of-the-art experimental and computing facilities. However, as described in studies over the past decade, the production infrastructure of the weapons complex is aging and increasingly outdated. For example, a 2000 Department of Energy (DOE) Office of Inspector General report concluded that the postponement of repairs to aging and deteriorating facilities had resulted in delays in weapons modification, remanufacture, and dismantlement, among other things. In addition, a 2001 report by the Foster Panel found the state of the production facilities to be troubling and recommended that NNSA restore missing production capabilities and refurbish the production infrastructure. In its fiscal year 2007 budget request, NNSA estimated that it will cost $2.4 billion to reduce the backlog of deferred maintenance at these facilities to an appropriate level consistent with industry best practices. Events over the past several years have served to intensify concern about how the United States maintains its nuclear deterrent and what the nation's strategy should be for transforming the weapons complex. Specifically: The 2001 Nuclear Posture Review stated, among other things, that cold war practices related to nuclear weapons planning were obsolete, and few changes had been made to the size or composition of the nation's nuclear forces. Furthermore, there had been underinvestment in the weapons complex, particularly the production sites. The Nuclear Posture Review called for, among other things, the development of a "responsive infrastructure" that would be sized to meet the needs of a smaller nuclear deterrent while having the capability to respond to future strategic challenges. The terrorist attacks of September 11, 2001, led DOE to increase the size of its Design Basis Threat, a classified document that identifies the size and capabilities of terrorist forces. This increase in the size of the Design Basis Threat has greatly increased NNSA's cost for protecting its weapons-grade nuclear material. The 2002 Moscow Treaty between the United States and Russia set a goal of reducing the number of deployed U.S. nuclear warheads to between 1,700 and 2,200 by 2012. However, a significant number of existing warheads would be kept in reserve to address potential technical contingencies with the existing stockpile. NNSA, at the Congress' direction, created the Reliable Replacement Warhead (RRW) program to study a new approach to maintaining nuclear warheads over the long term.
The RRW program would redesign weapon components to be easier to manufacture, maintain, dismantle, and certify without nuclear testing, potentially allowing NNSA to transition to a smaller, more efficient weapons complex. A design competition between LANL and LLNL is scheduled to end in November 2006. Finally, in recent congressional testimony, the Secretary of Energy and the Administrator of NNSA emphasized to the Congress that while they believe stockpile stewardship is working, the current cold war legacy stockpile is the wrong stockpile for the long term, and the current nuclear weapons infrastructure is not responsive to unanticipated events or emerging threats. Current NNSA plans call for substantial funding to operate the existing weapons complex. For example, according to NNSA's fiscal year 2007 budget request, over the next 5 years, NNSA plans to spend about $7.4 billion to operate and maintain the existing infrastructure of the weapons complex. In addition, NNSA plans to spend $1.8 billion on new construction projects. These construction projects include the Highly Enriched Uranium Materials Facility at the Y-12 Plant, which is estimated to cost $335 million and be completed in fiscal year 2007; the Chemistry and Metallurgy Research Replacement facility at LANL, which is estimated to cost $838 million and be completed in fiscal year 2013; and the proposed Uranium Processing Facility at the Y-12 Plant, which is projected to cost between $600 million and $1 billion. During testimony before this Subcommittee in March 2004, the Secretary of Energy agreed to conduct a comprehensive review of the weapons complex. Subsequently, in January 2005, the Secretary of Energy requested the Secretary of Energy Advisory Board (SEAB) to form the Nuclear Weapons Complex Infrastructure Task Force to assess the implications of presidential decisions on the size and composition of the stockpile; the cost and operational impacts of the new Design Basis Threat; and the personnel, facilities, and budgetary resources required to support a smaller stockpile. The review was also to evaluate opportunities for consolidating Special Nuclear Material, facilities, and operations across the weapons complex in order to minimize security requirements and the environmental impacts of continuing operations. The SEAB task force formally transmitted the final report to the Secretary of Energy in October 2005. According to the report, the SEAB task force assessed the impact of its recommendations on near-term funding requirements, as well as total costs, for the weapons complex over the next 25 years. The report stated that implementing all of the recommendations will increase near-term costs substantially but would result in a substantial reduction in future operating costs after the proposed Consolidated Nuclear Production Center (CNPC) is in full operation. The SEAB task force estimated that the long-term cost savings would be approximately twice the near-term cost increases. Initially, NNSA officials did not provide us with any detailed information concerning their plans for transforming the infrastructure of the weapons complex and for addressing the recommendations in the SEAB task force report. Instead, NNSA officials described to us the following process they were using to establish a detailed vision for the future weapons complex and to identify a "path forward" for achieving that vision: In March 2005, NNSA established a Responsive Infrastructure Steering Committee and created a position within the Office of Defense Programs to lead this effort.
In October 2005, NNSA received the final SEAB task force report. NNSA officials said that they have reviewed the recommendations from this report, along with recommendations from other advisory bodies, such as the Defense Science Board. In November 2005 and January 2006, NNSA held two meetings for senior-level officials within the weapons complex to establish a broad range of planning options, which NNSA refers to as its "preferred infrastructure planning scenario." In January 2006, NNSA held a 3-week session for about 50 key midlevel managers within the weapons complex to evaluate the proposed planning options. As a result of this process, NNSA recently offered a proposal for transforming the weapons complex that it believes is responsive to the recommendations in the SEAB task force report. Specifically, NNSA officials stated that NNSA will decide on the RRW design competition in November 2006 and, assuming that the RRW is technically feasible, will seek authorization to proceed to engineering development and production; NNSA is requesting an additional $15.6 million in its fiscal year 2007 budget request to dismantle legacy weapons material at the Pantex Plant; and NNSA is requesting about $15 million for fiscal year 2007, as well as over $30 million annually from fiscal years 2008 through 2011, to support the implementation of its responsive infrastructure strategy, including the creation of an Office of Transformation within the Office of Defense Programs. However, NNSA does not plan to adopt the SEAB task force's recommendation for a CNPC and the accompanying recommendation of consolidating all Category I and II quantities of Special Nuclear Material at the CNPC. NNSA believes that these recommendations are not affordable or feasible. For example, in recent congressional testimony, the Deputy Administrator for Defense Programs said that the SEAB task force report's recommendation on the timing for a CNPC—i.e., that a CNPC could be designed, built, and operational by 2015—is not plausible and underestimates the challenges of transitioning a unique and highly skilled workforce to a new location. He also stated that the recommendation does not recognize the challenge of meeting near-term requirements of the current stockpile and transforming the weapons complex infrastructure at the same time. In addition, he stated that it may be decades before all existing legacy weapons are fully removed from the stockpile and dismantled. Instead, NNSA has proposed the following plan for the 2030 weapons complex, which it states will achieve many of the benefits of the SEAB task force's approach in a way that is technically feasible and affordable over both the near and longer term: Consolidated plutonium center. All research and development (except certain experiments at the Nevada Test Site), surveillance, and production activities involving Category I and II quantities of plutonium would be transferred to a consolidated plutonium center. The center would have a baseline production capacity of 125 pits per year by 2022 and would be situated at an existing Category I and II Special Nuclear Material site. In the interim, NNSA would upgrade the plutonium facility at Tech Area 55 at LANL to produce 30-50 pits per year and operate the Chemistry and Metallurgy Research Replacement facility at LANL as a Category I and II Special Nuclear Material facility. Consolidation of Category I and II Special Nuclear Material. This material would be consolidated to fewer sites, and to fewer locations within sites.
Specifically, NNSA would remove all Category I and II Special Nuclear Material from the Sandia National Laboratories by 2008 and from LLNL by 2014, and would cease all activities involving this material at LANL by 2022. The remaining NNSA sites with Category I and II Special Nuclear Material would include the consolidated plutonium center, the Nevada Test Site, the Pantex Plant, the Y-12 Plant, and the Savannah River Site. Modernizing the remaining production sites. The planned Uranium Processing Facility at the Y-12 Plant would consolidate existing highly enriched uranium contained in legacy weapons, dismantle legacy warhead secondaries, support associated research and development, and provide a long-term capacity for new secondary production. Tritium production and stockpile support services would remain at the Savannah River Site. All weapons assembly and disassembly would be carried out at a Pantex Plant modernized for increased throughput for the long term. In addition, NNSA would build a new, nonnuclear component production facility by 2012 at an unspecified location. Finally, the Nevada Test Site would become the only site for large-scale hydrodynamic testing, which measures how stockpile materials behave when exposed to explosively driven shocks. Regardless of the approach chosen to transform the weapons complex, any attempt to change such an extremely complex enterprise must be based on solid analysis, careful planning, and effective leadership. We have identified four actions that, in our view, are critical to the successful transformation of the weapons complex. As the Congress oversees NNSA's future actions, it should expect to see each of these actions carefully and fully implemented. The U.S. nuclear weapons stockpile consists of nine weapon types. (See table 1.) The lifetimes of the weapons currently in the stockpile have been extended well beyond the minimum life for which they were originally designed—generally about 20 years—increasing the average age of the stockpile and, for the first time, leaving NNSA with large numbers of weapons that are close to 30 years old. NNSA is currently rebuilding several of these weapon types through the Stockpile Life Extension Program. Already, the W87 has been refurbished. In addition, the B61, W76, and W80 are well into their respective refurbishments. The first production unit for the B61 is scheduled for September 2006, while the first production unit for the W76 is scheduled for September 2007 and for the W80 for January 2009. These are costly and difficult undertakings. According to NNSA's fiscal year 2007 budget request, over the next 5 years, the agency will need $118.4 million for the B61 life extension, $669.9 million for the W76, and $581.5 million for the W80. These efforts place considerable demands on the computational and experimental facilities of the weapons laboratories, as well as the production facilities. Finally, some of the life extensions have experienced significant cost and schedule overruns. For example, the total cost of the W80 life extension has increased by almost $600 million, while the first production unit date has slipped from February 2006 to the current date of January 2009. In its 2001 Nuclear Posture Review, the Department of Defense (DOD) described the need to substantially reduce operationally deployed strategic warheads through 2012. These reductions were subsequently reflected in the Moscow Treaty between the United States and Russia, which was signed in May 2002.
As part of this strategy, DOD has stated its support for the development of an RRW, which could enable reductions in the number of older, nondeployed warheads maintained as a hedge against reliability problems in deployed systems and assist in the evolution to a smaller and more responsive nuclear weapons infrastructure. Currently, LANL and LLNL are developing competing designs for an RRW deployed on a submarine-launched ballistic missile, with the first production unit planned for fiscal year 2012. However, since the RRW design competition will not be completed until November 2006, more information on the viability of the RRW program will be necessary before any firm plans can be drawn up, budgeted, and implemented. In particular, it is not clear at this point whether the RRW can achieve the military characteristics, such as yield, provided by the current stockpile. Some NNSA officials have indicated that the military characteristics may need to be relaxed in order to design a warhead that is safer, easier to build, and easier to maintain. Producing an RRW warhead, while at the same time refurbishing a significant portion of the stockpile and continuing to dismantle retired weapons, will be a difficult and costly undertaking. Given NNSA’s performance to date with the life extension programs and the current unresolved questions about the RRW, in our view, DOD will need to establish clear, long-term requirements for the nuclear stockpile before NNSA can make any final decisions about transforming the weapons complex. Specifically, DOD, working with NNSA through the Nuclear Weapons Council, needs to determine the types and quantities of nuclear weapons that will provide for our nation’s nuclear deterrent over the long term. To facilitate this process, and to provide a foundation for transforming the weapons complex, the Congress may wish to consider setting firm deadlines for DOD, NNSA, and the Nuclear Weapons Council to determine the future composition of the nuclear stockpile. Once a decision about the size and composition of the stockpile is made, NNSA will need accurate estimates of the costs of proposals for transforming the weapons complex. However, historically, NNSA has had difficulty developing realistic, defensible cost estimates, especially for large complex projects. For example, in our August 2000 report on the National Ignition Facility, we found that NNSA and LLNL managers greatly underestimated the costs of creating such a technically complex facility and failed to include adequate contingency funding, which virtually assured that the National Ignition Facility would be over budget and behind schedule. Similarly, as noted in a March 2005 NNSA report, inadequate appreciation of the technical complexities and inadequate contingency funding directly contributed to the cost overruns and schedule slippage experienced by the Dual Axis Radiographic Hydrodynamic Test facility. As noted earlier, NNSA has experienced similar cost and schedule problems with some of its life extension efforts. Some cost estimates to transform the weapons complex were included in the SEAB task force report. Specifically, using the results of computer models developed at LANL and LLNL, the SEAB task force estimated that NNSA would need about $175 billion between now and 2030 to support its current baseline program and modernize the current weapons complex in place, while NNSA would need only $155 billion to carry out the task force’s recommendations. 
According to NNSA officials, NNSA is currently using the same cost models to evaluate its proposal. However, according to SEAB task force and NNSA and laboratory officials, while the LANL and LLNL models are useful for analyzing overall cost trends and evaluating the cost implications of alternative strategies, they are not currently designed to provide overall life-cycle cost estimates. In addition, we found, among other things, that the cost data used in the models have a high degree of uncertainty associated with them and that the models do not currently have the ability to provide any confidence intervals around their estimates. Several of the SEAB task force members told us that they recognize the limitations associated with their cost estimates. Since they did not have the time to fully analyze the costs and implementation issues associated with their recommendations, they expected that the proposed Office of Transformation would perform the necessary, detailed cost-benefit analyses of their recommendations in order to make the most informed decisions. As previously mentioned, NNSA officials have stated that they do not support building a CNPC because they believe that it is neither affordable nor feasible. However, until NNSA develops a credible, defensible method for estimating life-cycle costs and performs detailed cost analyses of the recommendations contained in the SEAB task force report, as well as its own proposal, it will not be possible to objectively evaluate the budgetary impact of any path forward. According to one count, NNSA has established over 70 plans with associated performance measures to manage the Stockpile Stewardship Program. Nevertheless, over the last 6 years, we have repeatedly documented problems with NNSA’s process for planning and managing its activities. For example, in a December 2000 report prepared for this Subcommittee, we found NNSA needed to improve its planning process so that there were linkages between individual plans across the Stockpile Stewardship Program and that the milestones contained in NNSA’s plans were reflected in contractors’ performance criteria and evaluations. However, in February 2006, we reported similar problems with how NNSA is managing the implementation of its new approach for assessing and certifying the safety and reliability of the nuclear stockpile. Specifically, we found that NNSA planning documents did not contain clear, consistent milestones or a comprehensive, integrated list of the scientific research being conducted across the weapons complex in support of the Primary and Secondary Assessment Technologies programs. These programs are responsible for setting the requirements for the computer models and experimental data needed to assess and certify the safety and reliability of nuclear warheads. We also found that NNSA had not established adequate performance measures to determine the progress of the weapons laboratories in developing and implementing this new methodology. However, the need for effective planning applies to more than the Stockpile Stewardship Program. One of the major recommendations of the SEAB task force is to consolidate Category I and II Special Nuclear Material at the CNPC. In our July 2005 report, we noted that the successful consolidation of Special Nuclear Material into fewer locations is a crucial component of several DOE sites’ Design Basis Threat implementation plans. 
Such consolidation requires the cooperation of a variety of entities, including NNSA's Office of Secure Transportation, which moves weapons-grade material from site to site. In our report, we recommended that DOE develop a departmentwide Design Basis Threat implementation plan that includes the consolidation of Special Nuclear Material. However, while DOE has established a Nuclear Material Disposition Consolidation and Coordination Committee, it has yet to develop such a comprehensive plan. The process of transforming the weapons complex will take a long time to complete—as long as two decades, according to some estimates. As a result, NNSA will need to develop a transformation plan with clear milestones that all involved can work toward and that the Congress can use to hold NNSA accountable. For example, as we stated in a 2003 report, one key practice in successful transformations is to set implementation goals and a time line to build momentum and show progress from day 1. In addition, given the demand for transparency and accountability in the public sector, these goals and time lines should be made public. We would note that NNSA should be able to establish milestones for some activities quickly, while others will take more time. For example, NNSA's Deputy Administrator for Defense Programs has indicated a willingness to establish an Office of Transformation and to implement other SEAB task force recommendations, such as developing a consistent set of business practices across the weapons complex. In these situations, the Congress should expect NNSA to move quickly to establish specific milestones needed to create the Office of Transformation, select key staff to fill this office, and implement key initiatives. We recognize that NNSA will not be able to establish specific milestones in some areas until after the Office of Transformation has performed a detailed cost-benefit analysis of both the recommendations in the SEAB task force report and NNSA's own preferred approach. However, once this analysis is complete, the Congress should expect to see specific, detailed plans and accompanying milestones for the RRW program, the establishment of a pit production capability, and the other adopted recommendations from the SEAB task force report. Many of the recommendations in the SEAB report are not new. A number of studies over the past 15 years have stressed the need to transform the weapons complex. However, for a variety of reasons, DOE and NNSA have never fully implemented these ideas. One of the key problems that NNSA has experienced during this time has been its inability to build an organization with clear lines of authority and responsibility. As we noted in our June 2004 report, NNSA, through its December 2002 reorganization, made important strides in providing clearer lines of authority and responsibility. However, we also noted problems in certain oversight functions, such as safety. We are currently evaluating NNSA's management effectiveness for the Subcommittee on Strategic Forces of the House Committee on Armed Services. The Congress, and Chairman Hobson in particular, has offered leadership in supporting the creation of the SEAB task force and in funding the RRW program. However, as we stated in a 2003 report, organizational transformation entails fundamental and often radical change.
As a result, top leadership must set the direction, pace, and tone for the transformation, while simultaneously helping the organization remain focused on the continued delivery of services. One key strategy is to dedicate a strong and stable implementation team that will be responsible for the transformation's day-to-day management. Accordingly, this team must be vested with the necessary authority and resources to set priorities, make timely decisions, and move quickly to implement decisions. Therefore, in our view, it is imperative that the proposed Office of Transformation (1) report directly to the Administrator of NNSA; (2) be given sufficient authority to conduct its studies and implement its recommendations; and (3) be held accountable for creating real change within the weapons complex. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have. For further information on this testimony, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. James Noel, Assistant Director; Jason Holliday; and Peter Ruedel made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Over the past several years, a serious effort has begun to comprehensively reevaluate how the United States maintains its nuclear deterrent and what the nation's approach should be for transforming its aging nuclear weapons complex. The National Nuclear Security Administration (NNSA), a separately organized agency within the Department of Energy, is responsible for overseeing this weapons complex, which comprises three nuclear weapons design laboratories, four production plants, and the Nevada Test Site. At the direction of the Subcommittee on Energy and Water Development, the Secretary of Energy Advisory Board's (SEAB) Nuclear Weapons Complex Infrastructure Task Force issued a report in October 2005 that provided a systematic review of the requirements for the weapons complex for the next 25 years and offered its vision for an agile and responsive weapons complex. GAO was asked to discuss (1) the current actions NNSA is taking to address the SEAB task force's recommendations and (2) the critical steps that will be needed to achieve and sustain a meaningful, cost-effective transformation of the weapons complex. The SEAB task force report contained the following five recommendations: (1) immediately begin to modernize the cold war nuclear stockpile by designing a Reliable Replacement Warhead (RRW); (2) create a Consolidated Nuclear Production Center (CNPC) that contains a modern set of production facilities in one location; (3) consolidate all weapons-grade material and weapons components at the CNPC; (4) aggressively dismantle the cold war stockpile; and (5) create an Office of Transformation to oversee the transformation of the nuclear weapons complex. NNSA has offered a proposal for transforming the nuclear weapons complex that it believes is responsive to the recommendations in the SEAB task force report. Specifically, NNSA officials noted, they (1) will decide on a design competition for the RRW in November 2006, (2) have requested an increase of over $15 million in funding for dismantling legacy weapons in fiscal year 2007, and (3) have requested $15 million in their fiscal year 2007 budget proposal to create an Office of Transformation, among other things. However, NNSA does not support the SEAB task force's recommendation for a CNPC and the accompanying recommendation of consolidating weapons-grade material at the CNPC, primarily because it views these recommendations as too costly. Instead, NNSA has proposed building a consolidated center for processing plutonium, removing weapons-grade material from the three weapons laboratories, and modernizing the remaining production capabilities at their existing locations. Regardless of the approach chosen, any attempt to change an extremely complex enterprise must be based on solid analysis, careful planning, and effective leadership. 
GAO has identified the following four actions that, in its view, are critical to successfully transforming the weapons complex: (1) the Department of Defense will need to establish clear, long-term requirements for the nuclear stockpile by determining the types and quantities of nuclear weapons needed to provide for our nation's nuclear deterrent; (2) after the Department of Defense determines the size and composition of the future stockpile, NNSA will need to develop accurate cost estimates of the proposals for transforming the weapons complex because current estimates of the costs of transforming the weapons complex contain considerable uncertainty; (3) after NNSA selects a proposal based on accurate cost estimates, it will need to develop a clear transformation plan containing measurable milestones so that it can evaluate progress and the Congress can hold it accountable; and (4) the proposed Office of Transformation must have authority to make and enforce its decisions on transformation and must be held accountable by the Congress for achieving timely and cost-effective results.
Most modern nuclear warheads consist of a nuclear explosive package, which contains the primary and the secondary, and a set of nonnuclear components. The nuclear detonation of the primary produces energy that drives the secondary, which produces further nuclear energy of a militarily significant yield. The nonnuclear components control the use, arming, and firing of the warhead. All nuclear weapons developed to date rely on nuclear fission to initiate their explosive release of energy. Most also rely on nuclear fusion to increase their total energy yield. Nuclear fission occurs when the nucleus of a heavy, unstable atom (such as uranium-235) is split into two lighter parts, which releases neutrons and produces large amounts of energy. Nuclear fusion occurs when the nuclei of two light atoms (such as deuterium and tritium) are joined, or fused, to form a heavier atom, with an accompanying release of neutrons and large amounts of energy. The U.S. nuclear stockpile consists of nine weapon types. (See table 1.) The lifetimes of the weapons currently in the stockpile have been extended well beyond the minimum life for which they were originally designed—generally about 20 years—increasing the average age of the stockpile and, for the first time, leaving NNSA with large numbers of weapons that are close to 30 years old. Established in 1993, the Stockpile Stewardship Program faces two main technical challenges: providing (1) a better scientific understanding of the basic phenomena associated with nuclear weapons and (2) an improved capability to predict the impact of aging and remanufactured components on the safety and reliability of nuclear weapons. Specifically: An exploding nuclear weapon creates the highest pressures, greatest temperatures, and most extreme densities ever made by man on earth, within some of the shortest times ever measured. When combined, these variables exist nowhere else in nature. While the United States conducted about 1,000 nuclear weapons tests prior to the nuclear testing moratorium that began in 1992, these tests were conducted mainly to look at broad indicators of weapon performance (such as the yield of a weapon) and were often not designed to collect data on specific properties of nuclear weapons physics. After more than 60 years of developing nuclear weapons, while many of the physical processes are well understood and accurately modeled, the United States still does not possess a set of completely known and expressed laws and equations of nuclear weapons physics that link the physical event to first principles. As nuclear weapons age, a number of physical changes can take place. The effects of aging are not always gradual, and the potential for unexpected changes in materials causes significant concerns as to whether weapons will continue to function properly. Replacing aging components is, therefore, essential to ensure that the weapon will function as designed. However, it may be difficult or impossible to ensure that all specifications for the manufacturing of new components are precisely met, especially since each weapon was essentially handmade. In addition, some of the manufacturing process lines used for the original production have been disassembled. In 1995, the President established an annual assessment and reporting requirement designed to help ensure that nuclear weapons remain safe and reliable without underground testing.
As part of this requirement, the three weapons laboratories are required to issue a series of reports and letters that address the safety, reliability, performance, and military effectiveness of each weapon type in the stockpile. The letters, submitted to the Secretary of Energy individually by the laboratory directors, summarize the results of the assessment reports and, among other things, express the directors’ conclusions regarding whether an underground nuclear test is needed and the adequacy of various tools and methods currently in use to evaluate the stockpile. To address these challenges, in 1999 DOE developed a new three-part program structure for the Stockpile Stewardship Program that included a series of campaigns, which DOE defined as technically challenging, multiyear, multifunctional efforts to develop and maintain the critical capabilities needed to continue assessing the safety and reliability of the nuclear stockpile into the foreseeable future without underground testing. DOE originally created 18 campaigns that were designed to focus its efforts in science and computing, applied science and engineering, and production readiness. Six of these campaigns currently focus on the development and improvement of the scientific knowledge, tools, and methods required to provide confidence in the assessment and certification of the safety and reliability of the nuclear stockpile in the absence of nuclear testing. These six campaigns are as follows: The Primary and Secondary campaigns were established to analyze and understand the different scientific phenomena that occur in the primary and secondary stages of a nuclear weapon during detonation. As such, the Primary and Secondary campaigns are intended to support the development and implementation of the QMU methodology and to set the requirements for the computers, computer models, and experimental data needed to assess and certify the performance of nuclear weapons. The ASC campaign provides the leading-edge supercomputers and models that are used to simulate the detonation and performance of nuclear weapons. Two campaigns—Advanced Radiography and Dynamic Materials Properties—provide data from laboratory experiments to support nuclear weapons theory and computational modeling. For example, the Advanced Radiography campaign conducts experiments that measure how stockpile materials behave when exposed to explosively driven shocks. One of the major facilities being built to support this campaign is the Dual Axis Radiographic Hydrodynamic Test Facility at LANL. The ICF campaign develops experimental capabilities and conducts experiments to examine phenomena at high temperature and pressure regimes that approach but do not equal those occurring in a nuclear weapon. As a result, scientists currently have to extrapolate from the results of these experiments to understand similar phenomena in a nuclear weapon. One of the major facilities being built as part of this campaign is the National Ignition Facility at LLNL. The other two program activities associated with the Stockpile Stewardship Program are “Directed Stockpile Work” and “Readiness in Technical Base and Facilities.” Directed Stockpile Work includes the activities that directly support specific weapons in the stockpile, such as the Stockpile Life Extension Program, which employs a standardized approach for planning and carrying out nuclear weapons refurbishment activities to extend the operational lives of the weapons in the stockpile well beyond their original design lives. 
The life extension for the W87 was completed in 2004, and three other weapon systems—the B61, W76, and W80—are currently undergoing life extensions. Each life extension program is specific to that weapon type, with different parts being replaced or refurbished for each weapon type. Readiness in Technical Base and Facilities includes the physical infrastructure and operational readiness required to conduct campaign and Directed Stockpile Work activities across the nuclear weapons complex. The complex includes the three nuclear weapons design laboratories (LANL, LLNL, and SNL), the Nevada Test Site, and four production plants—the Pantex Plant in Texas, the Y-12 Plant in Tennessee, a portion of the Savannah River Site in South Carolina, and the Kansas City Plant in Missouri. From fiscal year 2001 through fiscal year 2005, NNSA spent over $7 billion on the six scientific campaigns (in inflation-adjusted dollars). (See table 2.) NNSA has requested almost $7 billion in funding for these campaigns over the next 5 years. (See table 3.) Within NNSA, the Office of Defense Programs is responsible for managing the campaigns and the Stockpile Stewardship Program in general. Within this office, two organizations share responsibility for overall management of the scientific campaigns: the Office of the Assistant Deputy Administrator for Research, Development, and Simulation and the Office of the Assistant Deputy Administrator for Inertial Confinement Fusion and the National Ignition Facility Project. The first office oversees campaign activities associated with the Primary and Secondary campaigns—as well as the ASC, Advanced Radiography, and Dynamic Materials Properties campaigns—with a staff of about 13 people. The second office oversees activities associated with the ICF campaign with a single staff person. Actual campaign activities are conducted by scientists and other staff at the three weapons laboratories. LANL and LLNL conduct activities associated with the nuclear explosive package, while SNL performs activities associated with the nonnuclear components that control the use, arming, and firing of the nuclear warhead. NNSA has endorsed the use of a new common methodology, known as the quantification of margins and uncertainties, or QMU, for assessing and certifying the safety and reliability of the nuclear stockpile. NNSA and laboratory officials told us that they have made progress in applying the principles of QMU to the certification and assessment of nuclear warheads in the stockpile. However, QMU is still in its early stages of development, and important differences exist among the three laboratories in their application of QMU. To date, NNSA has commissioned two technical reviews of the implementation of QMU at the weapons laboratories. While strongly supporting QMU, the reviews found that the development and implementation of QMU was still in its early stages. The reviews recommended that NNSA take steps to further define the technical details supporting the implementation of QMU and integrate the activities of the three weapons laboratories in implementing QMU. However, NNSA and the weapons laboratories have not fully implemented these recommendations. Beyond the issues raised in the two reports, we also found differences in the understanding and application of QMU among the three laboratories. When the Primary and Secondary campaigns were established in 1999, they brought some organization and overall goals to the scientific research conducted across the weapons complex. 
For example, as we noted in April 2005, the Primary campaign set an initial goal in the 2005 to 2010 time frame for certifying the performance of the primary of a nuclear weapon to within a stated yield level. However, according to senior NNSA officials, NNSA still lacked a coherent strategy for relating the scientific work conducted by the weapons laboratories under the campaigns to the needs of the nuclear stockpile and the overall Stockpile Stewardship Program. This view was echoed by a NNSA advisory committee report, which stated in 2002 that the process used by the weapons laboratories to certify the safety and reliability of nuclear weapons was ill defined and unevenly applied, leading to major delays and inefficiencies in programs. Starting in 2001, LLNL and LANL began developing what is intended to be a common methodology for assessing and certifying the performance and safety of nuclear weapons in the absence of nuclear testing. In 2003, the associate directors for nuclear weapons at LLNL and LANL published a white paper—entitled “National Certification Methodology for the Nuclear Weapon Stockpile”—that described this new methodology, which they referred to as the quantification of margins and uncertainties or QMU. According to the white paper, QMU is based on an adaptation of standard engineering practices and lends itself to the development of “rigorous, quantitative, and explicit criteria for judging the robustness of weapon system and component performance at a detailed level.” Moreover, the quantitative results of this process would enable NNSA and the weapons laboratories to set priorities for their activities and thereby make rational decisions about allocating program resources to the nuclear stockpile. The process envisaged in the white paper focuses on creating a “watch list” of factors that, in the judgment of nuclear weapons experts, are the most critical to the operation and performance of a nuclear weapon. These factors include key operating characteristics and components of the nuclear weapon. For each identified, critical factor leading to a nuclear explosion, nuclear weapons experts would define performance metrics. These performance metrics would represent the experts’ best judgment of what constitutes acceptable behavior—i.e., the range of acceptable values for a critical function to successfully occur or for a critical component to function properly—as well as what constitutes unacceptable behavior or failure. To use an analogy, consider the operation of a gasoline engine. Some of the events critical to the operation of the engine would include the opening and closing of valves, the firing of the spark plugs, and the ignition of the fuel in each cylinder. Relevant performance metrics for the ignition of fuel in a cylinder would include information on the condition of the spark plugs (e.g., whether they are corroded) and the fuel/air mixture in the cylinder. Once nuclear experts have identified the relevant performance metrics for each critical factor, according to the 2003 white paper, the goal of QMU is to quantify these metrics. Specifically, the QMU methodology seeks to quantify (1) how close each critical factor is to the point at which it would fail to perform as designed (i.e., the performance margin or the margin to failure) and (2) the uncertainty in calculating the margin. 
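To make these two quantities concrete, the following sketch illustrates, in purely notional terms, how a margin and an uncertainty might be quantified for a single critical factor. The factor, the failure threshold, and the simulated values are hypothetical and are not drawn from any actual weapons data or laboratory method.

```python
# Notional illustration of quantifying a margin (M) and an uncertainty (U)
# for one critical factor on a QMU "watch list." All names and numbers are
# hypothetical; real assessments rely on classified simulations, experiments,
# and past underground test data.

from statistics import mean, stdev

# Hypothetical results of repeated calculations of a critical performance
# metric (arbitrary units), obtained by varying uncertain inputs such as
# material properties or as-built dimensions.
simulated_values = [112.0, 118.5, 109.8, 115.2, 111.7, 116.9]

failure_threshold = 100.0  # hypothetical value below which the factor fails

best_estimate = mean(simulated_values)

# Margin: distance between the best-estimate performance and the point at
# which the factor would no longer perform as designed.
margin = best_estimate - failure_threshold

# Uncertainty: a notional measure of the spread in the calculated
# performance; here, simply two standard deviations of the simulated values.
uncertainty = 2 * stdev(simulated_values)

# The 2003 white paper (discussed below) frames confidence as a comparison
# of M with U; confidence requires M to comfortably exceed U for every
# critical factor.
print(f"M = {margin:.1f}, U = {uncertainty:.1f}, M/U = {margin / uncertainty:.1f}")
```

In practice, the uncertainty term would also have to account for model error and other contributions, not just the spread of a single set of calculations, which is part of what makes the methodology difficult to complete.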
According to the white paper, the weapons laboratories would be able to use their calculated values of margins and uncertainties as a way to assess their confidence in the performance of a nuclear weapon. That is, the laboratories would establish a “confidence ratio” for each critical factor —they would divide their calculated value for the margin (“M”) by their calculations of the associated uncertainty (“U”) and arrive at a single number (“M/U”). According to the white paper, the weapons laboratories would only have confidence in the performance of a nuclear weapon if the margin “significantly” exceeds uncertainty for all critical issues. However, the white paper did not define what the term “significantly” meant. In a broad range of key planning and management documents that have followed the issuance of the white paper, NNSA and the weapons laboratories have endorsed the use of the QMU methodology as the principal tool for assessing and certifying the safety and reliability of the nuclear stockpile in the absence of nuclear testing. For example, in its fiscal year 2006 implementation plan for the Primary campaign, NNSA stated as a strategic objective that it needs to develop the capabilities and understanding necessary to apply QMU as the assessment and certification methodology for the nuclear explosive package. In addition, in its fiscal year 2006 budget request, NNSA selected its progress toward the development and implementation of QMU as one of its major performance indicators. Finally, in the plans that NNSA uses to evaluate the performance of LANL and LLNL, NNSA has established an overall objective for LANL and LLNL to assess and certify the safety and reliability of nuclear weapons using a common QMU methodology. Officials at NNSA and the weapons laboratories have also stated that QMU will be vital to certifying any weapon redesigns, such as are envisioned by the RRW program. For example, senior NNSA officials told us that the Stockpile Stewardship Program will not be sustainable if it only involves the continued refurbishment in perpetuity of existing weapons in the current nuclear stockpile. They stated that the accumulation of small changes over the extended lifetime of the current nuclear stockpile will result in increasing levels of uncertainty about its performance. If NNSA moves forward with the RRW program, according to NNSA documents and officials, the future goal of the weapons program will be to use QMU to replace existing stockpile weapons with an RRW whose safety and reliability could be assured with the highest confidence, without nuclear testing, for as long as the United States requires nuclear forces. According to NNSA and laboratory officials, the weapons laboratories have made progress in applying the principles of QMU to the certification of life extension programs and to the annual stockpile assessment process. For example, LLNL officials told us that they are applying QMU to the assessment of the W80, which is currently undergoing a life extension. They said that, in applying the QMU methodology, they tend to focus their efforts on identifying credible “failure modes,” which are based on observable problems, such as might be caused by the redesign of components in a nuclear weapon, changes to the manufacturing process for components, or the performance of a nuclear weapon under aged conditions. 
They said that, for the W80 life extension program, they have developed a list of failure modes and quantified the margins and uncertainties associated with these failure modes. Based on their calculations, they said that they have increased their confidence in the performance of the W80. Similarly, LANL officials told us that they are applying QMU to the W76, which is also currently undergoing a life extension and is scheduled to finish its first production unit in 2007. They said that, in applying the QMU methodology, they tend to focus their efforts on defining “performance gates,” which are based on a number of critical points during the explosion of a nuclear weapon that separate the nuclear explosion into natural stages of operation. The performance gates identify the characteristics that a nuclear weapon must have at a particular time during its operation to meet its performance requirements (e.g., to reach its expected yield). LANL officials told us that they have developed a list of performance gates for the W76 life extension program and are beginning to quantify the margins and uncertainties associated with these performance gates. Despite this progress, we found that QMU is still in its early stages of development and that important differences exist among the weapons laboratories in their application of QMU. To date, NNSA has commissioned two technical reviews of the implementation of QMU at the weapons laboratories. The first review was conducted by NNSA’s Office of Defense Programs Science Council (Science Council)—which advises NNSA on scientific matters across a range of activities, including those associated with the scientific campaigns—and resulted in a March 2004 report. The second review was conducted by the MITRE Corporation’s JASON panel and resulted in a February 2005 report. Both reports endorsed the use of QMU by the weapons laboratories and listed several potential benefits that QMU could bring to the nuclear weapons program. For example, according to the Science Council report, QMU will serve an important role in training the next generation of nuclear weapon designers and will quantify and increase NNSA’s confidence in the assessment and certification of the nuclear stockpile. According to the JASON report, QMU could become a useful management tool for directing investments in a given weapon system where they would be most effective in increasing confidence, as required by the life extension programs. In addition, the JASON report described how LANL and LLNL officials had identified potential failure modes in several weapon systems and calculated the associated margins and uncertainties. The report noted that, for most of these failure modes, the margin for success was large compared with the uncertainty in the performance. However, according to both the Science Council and the JASON reports, the development and implementation of QMU is still in its early stages. For example, the JASON report described QMU as highly promising but unfinished, incomplete and evolving, and in the early stages of development. Moreover, the chair of the JASON panel on QMU told us in June 2005 that, during the course of his review, members of the JASON panel found that QMU was not mature enough to assess its reliability or usefulness. The reports also stated that the weapons laboratories have not fully developed or agreed upon the technical details supporting the implementation and application of QMU. 
For example, the JASON report stated that, in the course of its review, it became evident that there were a variety of differing and sometimes diverging views of what QMU really was and how it was working in practice. As an example, the report stated that some of the scientists, designers, and engineers at LANL and LLNL saw the role of expert judgment as an integral part of the QMU process, while others did not. In discussions with the weapons laboratories about the two reports, LANL officials told us that they believed that the details of QMU as a formal methodology are still evolving, while LLNL officials stated that QMU was “embryonic” and not fully developed.

While supporting QMU, the two reports noted that the weapons laboratories face challenges in successfully implementing a coherent and credible analytical method based on the QMU methodology. For example, in its 2004 report, the Science Council stated that, in its view, the QMU methodology is based on the following core assumptions:

• Computer simulations can accurately predict the behavior of a complex nuclear explosive system as a function of time.
• It is sufficient for the assessment of the performance of a nuclear weapon to examine the simulation of the time evolution of a nuclear explosive system at a number of discrete time intervals and to determine whether the behavior of the system at each interval is within acceptable bounds.
• The laboratories’ determinations of acceptable behavior can be made quantitatively—that is, they will make a quantitative estimate of a system’s margins and uncertainties.
• Given these quantitative measures of the margins and uncertainties, it is possible to calculate the probability (or confidence level) that the nuclear explosive system will perform as desired.

However, the Science Council’s report noted that extraordinary degrees of complexity are involved in a rational implementation of QMU that are only beginning to be understood. For example, in order for the QMU methodology to have validity, it must sufficiently identify all critical failure modes, critical events, and associated performance metrics. However, as described earlier, the operation of an exploding nuclear weapon is highly integrated and nonlinear, occurs during a very short period of time, and reaches extreme temperatures and pressures. In addition, the United States does not possess a set of completely known and expressed laws and equations of nuclear weapons physics. Given these complexities, it will be difficult to demonstrate the successful implementation of QMU, according to the report. In addition, the Science Council stated that it was not presented with any evidence that there exists a method—even in principle—for calculating an overall probability that a nuclear explosive package will perform as designed from the set of quantitative margins and uncertainties at each time interval.

To address these and other issues, the two reports recommended that NNSA take steps to further define the technical details supporting the implementation of QMU and to integrate the activities of the three weapons laboratories in implementing QMU. For example, the 2004 Science Council report recommended that NNSA direct the associate directors for nuclear weapons at LANL and LLNL to undertake a major effort to define the details of QMU.
In particular, the report recommended that a trilaboratory team be charged with defining a common language for QMU and identifying the important performance gates, failure modes, and other criteria in the QMU approach. The report stated that this agreed-upon “reference” set could then be used to support all analyses of stockpile issues. In addition, the report recommended that NNSA consider establishing annual or semiannual workshops for the three weapons laboratories to improve the identification, study, and prioritization of potential failure modes and other factors that are critical to the operation and performance of nuclear weapons. Similarly, the 2005 JASON panel report noted that the meaning and implications of QMU are currently unclear. To rectify this problem, the report recommended that the associate directors for nuclear weapons at LANL and LLNL write a new, and authoritative, paper defining QMU and submit it to NNSA. Furthermore, the report recommended that the laboratories establish a formal process to (1) identify all failure modes and performance gates associated with QMU, using the same methodology for all weapon systems, and (2) establish better relationships between the concepts of failure modes and performance gates for all weapon systems in the stockpile. However, NNSA and laboratory officials have not fully implemented these recommendations, particularly the recommendations of the Science Council. For example, while LLNL and LANL officials are drafting a new “white paper” on QMU that attempts to clarify some fundamental tenets of the methodology, officials from SNL are not involved in the drafting of this paper. In addition, NNSA has not required the three weapons laboratories to hold regular meetings or workshops to improve the identification, prioritization, and integration of failure modes, performance gates, and other critical factors. According to NNSA’s Assistant Deputy Administrator for Research, Development, and Simulation, NNSA has not fully implemented the recommendations of the Science Council’s report partly because the report was intended more to give NNSA a sense of the status of the implementation of QMU than it was to provide recommendations. For example, the 2004 report states that the “friendly review,” as the report is referred to by NNSA, would not have budget implications and that the report’s findings and recommendations would be reported only to the senior management of the weapons laboratories. As a result, the Assistant Deputy Administrator told us that he had referred the recommendations to the directors of the weapons laboratories and told them to implement the recommendations as they saw fit. Furthermore, LLNL and LANL officials disagreed with some of the statements in the Science Council report and stressed that, in using QMU, they do not attempt to assign an overall probability that the nuclear explosive package will perform as desired. That is, they do not attempt to add up calculations of margins and uncertainties for all the critical factors to arrive at a single estimate of margin and uncertainty, or a single confidence ratio, for the entire nuclear explosive package. Instead, they said that they focus on ensuring that the margin for each identified critical factor in the explosion of a nuclear weapon is greater than the uncertainty. However, they said that, for a given critical factor, they do combine various calculations of individual uncertainties that contribute to the total amount of uncertainty for that factor. 
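The report does not specify how the laboratories roll up individual uncertainty contributions, and, as discussed below, LANL and LLNL take different detailed approaches. The sketch below simply contrasts two textbook combination rules, a straight sum and a root-sum-square, using invented numbers; neither rule is presented here as the method actually used by either laboratory.

```python
# Two textbook ways of combining individual uncertainty contributions for a
# single critical factor. The contributions are hypothetical; the report does
# not state which combination rule, if any single rule, the laboratories use.

import math

# Notional uncertainty contributions (in the same units as the margin), for
# example from input data, numerical resolution, and model form.
contributions = {
    "input data": 3.0,
    "numerical resolution": 2.0,
    "model form": 4.0,
}

# Straight (linear) sum: conservative, because it treats every contribution
# as if it pushed the result in the same direction at the same time.
linear_total = sum(contributions.values())

# Root-sum-square: a smaller total that is justified only if the individual
# contributions are independent of one another.
rss_total = math.sqrt(sum(u ** 2 for u in contributions.values()))

print(f"linear sum U = {linear_total:.1f}")    # 9.0
print(f"root-sum-square U = {rss_total:.1f}")  # about 5.4
```

The choice of rule matters because the totals can differ substantially, which is one reason that techniques for combining different kinds of uncertainties are identified later in this report as an area still needing improvement.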
In addition, in addressing comments in the JASON report, LLNL and LANL officials stressed that QMU has always relied, and will continue to rely heavily, on the judgment of nuclear weapons experts. For example, LLNL officials told us that since there is no single definition of what constitutes a threshold for failure, they use expert judgment to decide what to put on their list of failure modes. They also said that the QMU methodology provides a way to make the entire annual assessment and certification process more transparent to peer review. Similarly, LANL officials said that they use expert judgment extensively in establishing performance metrics and threshold values for their performance gates. They said that expert judgment will always be a part of the scientific process and a part of QMU. Beyond the issues raised in the two reports, we found that there are differences in the understanding and application of QMU among the three laboratories. For example, the three laboratories do not agree about the application of QMU to areas outside of the nuclear explosive package. Specifically, LLNL officials told us that the QMU methodology, as currently developed, only applies to the nuclear explosive package and not to the nonnuclear components that control the use, arming, and firing of the nuclear warhead. According to LLNL and LANL officials, SNL scientists can run hundreds of experiments to test their components and, therefore, can use normal statistical analysis in certifying the performance of nonnuclear components. As a result, according to LLNL and LANL officials, SNL does not have to cope with real uncertainty and does not “do” QMU. Furthermore, according to LLNL officials, SNL has chosen not to participate in the development of QMU with LLNL and LANL. However, SNL officials told us that while some of the nonnuclear components are testable to a degree, SNL is as challenged as the other two weapons laboratories in certifying the performance of their systems without actual testing. For example, SNL officials said that they simply do not have enough money to perform enough tests on all of their nonnuclear components to be able to rely completely on statistical analysis to meet their safety performance levels. In addition, SNL scientists are not able to test their components under the conditions of a nuclear explosion but are still required to certify the performance of the components under these conditions. Thus, SNL officials told us that they had been using their own version of QMU for a long time. SNL officials told us that they define QMU as a way to make risk-informed decisions about the effect of variabilities and uncertainties on the performance of a nuclear weapon, including the nonnuclear components that control the use, arming, and firing of the nuclear warhead. Moreover, they said that this kind of risk-informed approach is not unique to the nuclear weapons laboratories and is used extensively in areas such as nuclear reactor safety. However, they told us that they have been left out in the development of QMU by the two other weapons laboratories. Specifically, they said that while SNL scientists have worked with other scientists at LANL and LLNL at a “grass roots” level, there has only been limited cooperation and dialogue between upper-level management at the three laboratories concerning the development and implementation of QMU. 
In addition, we found that while LLNL and LANL both agree on the fundamental tenets of QMU at a high level, their application of the QMU methodology differs in some important respects. For example, LLNL and LANL officials told us that, at a detailed level, the two laboratories are pursuing different approaches to calculating and combining uncertainties. For the W80 life extension program, LLNL officials showed us how they combined calculations of individual uncertainties that contributed to the total uncertainty for a key failure mode of the primary—the amount of primary yield necessary to drive the secondary. However, they said that the scientific support for their method for combining individual calculations of uncertainty was limited, and they stated that they are pursuing a variety of more sophisticated analyses to improve their current approach. Moreover, the two laboratories are taking a different approach to generating a confidence ratio for each critical factor, as described in the 2003 white paper on QMU. For example, for the W80 life extension program, LLNL officials showed us how they calculated a single confidence ratio for a key failure mode of the primary, based on their calculations of margin and uncertainty. They said that the weapon systems for which they are responsible have a lot of margin built into them, and they feel comfortable generating this number. In contrast, in discussions with LANL officials about the W76 life extension program, LANL officials told us that they prefer not to calculate a single confidence ratio for a performance gate, partly because they are concerned that their customers (e.g., the Department of Defense) might think that the QMU methodology is more formal than it is currently. In commenting on the differences between the two laboratories, NNSA officials stated that the two laboratories are pursuing complementary approaches, and that these differences are part of the rationale for a national policy decision to maintain two nuclear design laboratories. In addition, they stated that the confidence in the correctness of scientific research is improved by achieving the same answer through multiple approaches. LLNL officials also made similar comments, stating that the nation will benefit from some amount of independence between the laboratories to assure that the best methodology for assessing the stockpile in the absence of nuclear testing is achieved. NNSA relies on its Primary and Secondary campaigns to manage the development and implementation of QMU. According to NNSA policies, campaign managers at NNSA headquarters are responsible for developing campaign plans and high-level milestones, overseeing the execution of these plans, and providing input to the evaluation of the performance of the weapons laboratories. However, NNSA’s management of these processes is deficient in four key areas. First, the planning documents that NNSA has established for the Primary and Secondary campaigns do not adequately integrate the scientific research currently conducted that supports the development and implementation of QMU. Second, NNSA has not developed a clear, consistent set of milestones to guide the development and implementation of QMU. Third, NNSA has not established formal requirements for conducting annual, technical reviews of the implementation of QMU or for certifying the completion of QMU-related milestones. 
Finally, NNSA has not established adequate performance measures to determine the progress of the laboratories in developing and implementing QMU. As part of its planning structure, NNSA requires the use of program and implementation plans to set requirements and manage resources for the campaigns and other programs associated with the Stockpile Stewardship Program. Program plans are strategic in nature and identify the long-term goals, high-level milestones, and resources needed to support a particular program over a 7-year period, while implementation plans establish performance expectations for the program and each participating site for the current year of execution. According to NNSA policies, program and implementation plans should flow from and interact with each other using a set of cascading goals and requirements. NNSA has established a single program plan, which it calls the “Science campaign program plan,” that encompasses the Primary and the Secondary campaigns, as well as two other campaigns—Advanced Radiography and Dynamic Materials Properties. NNSA has also established separate implementation plans for each of these campaigns, including the Primary and Secondary campaigns. According to NNSA, it relies on these plans—and in particular the plans related to the Primary and Secondary campaigns—to manage the development and implementation of QMU, as well as to determine the requirements for the experimental data and computer modeling needed to analyze and understand the different scientific phenomena that occur in a nuclear weapon during detonation. However, the current Primary and Secondary campaign plans do not contain a comprehensive, integrated list of the relevant scientific research being conducted across the weapons complex to support the development and implementation of QMU. For example, according to the NNSA campaign manager for the Primary campaign, he had to hold a workshop in 2005 with officials from the weapons laboratories in order to catalogue all of the scientific activities that are currently performed under the heading of “primary assessment” regardless of the NNSA funding source. According to this official, the existing Primary campaign implementation plan does not provide the integration across NNSA programs that is needed to achieve the goals of the Primary campaign and to develop and implement QMU. According to NNSA officials, the lack of integration has occurred in large part because a significant portion of the scientific research that is relevant to the Primary and Secondary campaigns is funded and carried out by different campaigns and other programs. Specifically, different NNSA campaign managers use different campaign planning documents to plan and oversee research and funding for activities that are directly relevant to the Primary and Secondary campaigns and the development and implementation of QMU. For example, the ASC campaign provides the supercomputing capability that the weapons laboratories use to simulate and predict the behavior of an exploding nuclear weapon. Moreover, the weapons laboratories rely on ASC supercomputers to quantify their uncertainties with respect to the accuracy of these computer simulations—a key component in the implementation of QMU. As a result, the ASC campaign plans and funds activities that are critical to the development and implementation of QMU. To address this problem, according to NNSA officials, NNSA is taking steps to establish better relationships among the campaign plans. 
For example, NNSA is currently drafting a new plan—which it calls the Primary Assessment Plan—in an attempt to better coordinate the activities covered under the separate program and implementation plans. The draft plan outlines high-level research priorities, time lines, and proposed milestones necessary to support (1) NNSA’s responsibilities for the current stockpile, (2) primary physics design for the development of an RRW, and (3) certification of an RRW in the 2012 time frame and a second RRW in the 2018 time frame. According to NNSA officials, they expect to finalize this plan by the third quarter of fiscal year 2006. In addition, they expect to have a similar plan for the Secondary campaign finalized by December 2006 and are considering combining both plans into a full-system assessment plan. According to one NNSA official responsible for the Primary and Secondary campaigns, NNSA will revise the existing campaign program and implementation plans to be consistent with the Primary Assessment Plan. More fundamentally, some nuclear weapons experts have suggested that NNSA’s planning structure should be reorganized to better reflect the use of QMU as NNSA’s main strategy for assessing and certifying the performance of nuclear weapons. For example, the chair of the LLNL Defense and Nuclear Technologies Director’s Review Committee—which conducts technical reviews of LLNL’s nuclear weapons activities for the University of California—told us that the current campaign structure has become a series of “stovepipes” that NNSA uses to manage stockpile stewardship. He said that in order for NNSA to realize its long-term goals for implementing QMU, NNSA is going to have to reorganize itself around something that he called an “uncertainty spreadsheet” for each element of a weapon’s performance (e.g., implosion of the primary, transfer of energy to the secondary, etc.), leading to the weapon’s yield. He said that the laboratories should develop a spreadsheet for each weapon in the stockpile that (1) identifies the major sources of uncertainty at each critical event in their assessment of the weapon’s performance and (2) relates the laboratory’s scientific activities and milestones to these identified sources of uncertainty. He said that the development and use of these spreadsheets would essentially capture the intent of the scientific campaigns and make them unnecessary. NNSA has established a number of milestones that relate to the development and implementation of QMU. Within the Science campaign program plan, NNSA has established a series of high-level milestones, which it calls “level-1” milestones. According to NNSA policies, level-1 milestones should be sufficient enough to allow strategic integration between sites involved in the campaigns and between programs in NNSA. Within the implementation plans for the Primary and Secondary campaigns, NNSA has established a number of lower-level milestones, which it calls “level-2” milestones, which NNSA campaign managers use to track major activities for the current year of execution. The level-1 milestones related to QMU are shown in table 4, and the level-2 milestones related to QMU for the Primary campaign are shown in table 5. According to NNSA officials, the level-1 milestones in table 4 represent a two-stage path to systematically identify uncertainties and reduce them through analyzing past underground test results, developing new experimental capabilities, and performing new experiments to understand the relevant physical processes. 
According to these level-1 milestones, NNSA expects to complete the second stage or “cycle” of this process by fiscal year 2014 (i.e., milestone M20), at which time NNSA will have sufficiently reduced major sources of uncertainties and will have confidence in its ability to predict the performance of nuclear weapons in the absence of nuclear testing.

However, we identified several problems with the NNSA milestones related to the development and implementation of QMU. Specifically, the level-1 milestones in the Science campaign program plan have the following problems:

• The milestones are not well-defined and never explicitly mention QMU. According to NNSA officials responsible for overseeing the Primary campaign, these milestones are too qualitative and too far in the future to enable NNSA to effectively plan for and oversee the implementation of QMU. They described these milestones as “fuzzy” and said that they need to be better defined. However, NNSA officials also stated that these milestones are not just for QMU but for the entire Science campaign, of which QMU is only a part.
• The milestones conflict with the performance measures shown in other important NNSA management documents. Specifically, while the Science campaign program plan envisions a two-stage path to identify and reduce key uncertainties related to nuclear weapon operations using QMU by 2014, the performance measures in NNSA’s fiscal year 2006 budget request and in Appendix A of the Science campaign program plan call for the completion of QMU by 2010.
• The milestones have not been integrated with other QMU-related level-1 milestones in other planning documents. For example, the current ASC campaign program plan contains a series of level-1 milestones for completing the certification of several weapon systems—including the B61, W80, W76, and W88—with quantified margins and uncertainties by the end of fiscal year 2007. However, these milestones do not appear in and are not referenced by the Science campaign program plan. Moreover, the ASC campaign manager told us that, until recently, he was not aware of the existence of the level-1 milestones for implementing QMU that are contained in the Science campaign program plan.

In addition, we found that neither the Science campaign program plan nor the Primary campaign implementation plan describes how the level-2 milestones on QMU in the Primary campaign implementation plan are related to the level-1 milestones on QMU in the Science campaign program plan. Consequently, it is unclear how the achievement of specific level-2 milestones—such as the development of probabilistic tools and methods to combine various sources of uncertainty for primary performance—will result in the achievement of level-1 milestones for the implementation of QMU or how NNSA expects to certify several major nuclear weapon systems using QMU before the QMU methodology is fully developed and implemented.

NNSA, as well as laboratory officials, agreed that there are weaknesses with the current QMU milestones. According to NNSA officials, when NNSA established the current tiered structure for campaign milestones in 2003, the different tiers of milestones served different purposes and, therefore, were never well-integrated.
For example, NNSA officials said that the level-1 milestones were originally created to reflect measures that were deemed to be important to senior NNSA officials, while level-2 milestones were created to be used by NNSA campaign managers to perform more technical oversight of the weapons laboratories. Furthermore, according to NNSA officials, the current level-2 milestones are only representative of campaign activities conducted by the weapons laboratories. That is, the level-2 milestones were never designed to cover the entire scope of work being conducted by the weapons laboratories and are, therefore, not comprehensive in scope. To address these problems, according to NNSA officials, NNSA is taking steps to develop better milestones to track the implementation of the QMU methodology. For example, in the draft Primary Assessment Plan, NNSA has established 19 “high-level” milestones that cover the time period from fiscal year 2006 to fiscal year 2018. According to these draft milestones, by fiscal year 2010, NNSA expects to “complete the experimental work and methodology development needed to demonstrate the ability of primary certification tools to support certification of existing stockpile system and RRW.” In addition, NNSA expects to certify a RRW in fiscal year 2012 and a second RRW in fiscal year 2018. According to NNSA policies, campaign managers are required to track the status of level-1 and level-2 milestones and provide routine, formal reports on the status of their programs. For example, campaign managers are required to track, modify, and score the status of level-1 and level-2 milestones through the use of an Internet-based application called the Milestone Reporting Tool. On a quarterly basis, campaign managers assign one of four possible scores for each milestone listed in the application: (1) “blue” for completed milestones, (2) “green” for milestones that are on track to be finished by the end of the fiscal year, (3) “yellow” for milestones that may not be completed by the end of the fiscal year, and (4) “red” for milestones that will not be completed by the end of the fiscal year. At quarterly program review meetings, campaign managers brief senior-level NNSA officials on the status of major milestones, along with cost and expenditure data for their programs. In addition, campaign managers are responsible for conducting technical reviews of the campaigns for which they are responsible, at least annually, to ensure that campaign activities are being executed properly and that campaign milestones are being completed. However, NNSA campaign managers have not met all of the NNSA requirements needed to effectively oversee the Primary and Secondary campaigns. For example, we found that the campaign managers for the Primary and Secondary campaigns have not established formal requirements for conducting annual, technical reviews of the implementation of QMU at the three weapons laboratories. Moreover, these officials have not established requirements for certifying the completion of level-2 milestones that relate to QMU. They could not provide us with documentation showing the specific activities or outcomes that they expected from the weapons laboratories in order to certify that the laboratories had completed the level-2 milestones for QMU. Instead, they relied more on ad hoc reviews of campaign activities and level-2 milestones as part of their oversight activities for their campaigns. 
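To illustrate the tracking scheme described above, the sketch below models the four-color quarterly scoring used in the Milestone Reporting Tool. The milestone titles and scores shown are invented for illustration and do not represent actual NNSA entries or the tool's real data structures.

```python
# Illustrative model of the four-color quarterly scoring scheme that NNSA
# campaign managers apply in the Milestone Reporting Tool. The milestone
# titles and scores below are hypothetical examples.

from dataclasses import dataclass
from enum import Enum


class Score(Enum):
    BLUE = "completed"
    GREEN = "on track to be finished by the end of the fiscal year"
    YELLOW = "may not be completed by the end of the fiscal year"
    RED = "will not be completed by the end of the fiscal year"


@dataclass
class Milestone:
    level: int        # 1 = strategic (program plan); 2 = annual (implementation plan)
    title: str
    fiscal_year: int
    score: Score


quarterly_report = [
    Milestone(1, "Example strategic uncertainty-reduction milestone", 2014, Score.GREEN),
    Milestone(2, "Example tools-and-methods development milestone", 2006, Score.YELLOW),
]

# Summarize the quarterly status, as a campaign manager might brief it.
for m in quarterly_report:
    print(f"Level-{m.level}, FY{m.fiscal_year}: {m.title} -> {m.score.name} ({m.score.value})")
```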
According to the Primary campaign manager, the officials at the weapons laboratories are the principal managers of campaign activities. As a result, he views his role as more of a “sponsor” for his program and, therefore, does not require any written reports or evidence from the laboratories to certify that they have completed specific milestones.

In contrast, we found that the ASC campaign manager has established formal requirements for a variety of recurring technical reviews of activities associated with the ASC campaign. Specifically, the ASC campaign relies on semiannual reviews conducted by the ASC Predictive Science Committee—which provides an independent, technical review of the status of level-2 milestones—as well as on annual “principal investigators” meetings that provide a technical review of every program element within the ASC campaign. The ASC campaign manager told us that he relies on these technical reviews to oversee program activities because the quarterly program review meetings are not meant to help him manage his program but are really a way for senior-level NNSA officials to stay informed.

In addition, the ASC campaign manager has established detailed, formal requirements for certifying the completion of level-2 milestones for the ASC campaign. Specifically, the fiscal year 2006 implementation plan for the ASC campaign contains a detailed description of what NNSA expects from the completion of each level-2 milestone, including a description of completion criteria, the method by which NNSA will certify the completion of the milestone, and an assessment of the risk level associated with the completion of the milestone. The ASC campaign manager told us that, when NNSA officials created the level-2 milestones for the campaigns in 2003, the milestones were really just “sentences” and lacked the detailed criteria that would enable NNSA managers to adequately track and document the completion of major milestones. As a result, the ASC campaign has made a major effort in recent years to develop detailed, formal requirements to support the completion of ASC level-2 milestones.

NNSA uses performance measurement data to inform resource decisions, improve the management and delivery of products and services, and justify budget requests. According to NNSA requirements, performance measurement data should explain in clear, concise, meaningful, and measurable terms what program officials expect to accomplish for a specific funding level over a fixed period of time. In addition, performance measurement data should include annual targets that describe specific outputs that can be measured, audited, and substantiated by the detailed technical milestones contained in documentation such as campaign implementation plans.

With respect to QMU, NNSA has established an overall annual performance target to measure the cumulative percentage of progress toward the development and implementation of the QMU methodology. Specifically, in its fiscal year 2006 budget request to the Congress, NNSA stated that it expects to complete the development and implementation of QMU by 2010 as follows: 25 percent complete by the end of fiscal year 2005, 40 percent complete by the end of fiscal year 2006, 55 percent complete by the end of fiscal year 2007, 70 percent complete by the end of fiscal year 2008, 85 percent complete by the end of fiscal year 2009, and 100 percent complete by the end of fiscal year 2010.
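For reference, the short sketch below tabulates the cumulative completion targets quoted above from the fiscal year 2006 budget request and the annual increments they imply. The arithmetic is purely illustrative, since, as discussed next, NNSA did not document criteria for measuring progress against these percentages.

```python
# Cumulative QMU completion targets as stated in NNSA's fiscal year 2006
# budget request (quoted above), and the year-to-year increments they imply.
# The tabulation is illustrative only; the report notes that NNSA did not
# document how progress against these percentages would be measured.

targets = {2005: 25, 2006: 40, 2007: 55, 2008: 70, 2009: 85, 2010: 100}

previous = 0
for year in sorted(targets):
    increment = targets[year] - previous
    print(f"FY{year}: {targets[year]}% complete (+{increment} points)")
    previous = targets[year]
```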
According to NNSA, it had progressed 10 percent toward its target of completing QMU by the end of fiscal year 2004. However, NNSA officials could not document how they can measure progress toward the performance target for developing and implementing QMU. Moreover, NNSA officials could not explain how the 2010 overall performance target for the completion and implementation of QMU is related to the level-1 milestones for QMU in the Science campaign program plan, which describes a two-stage process to identify and reduce key uncertainties in nuclear weapon performance using QMU by 2014. According to one NNSA official responsible for overseeing the Primary campaign, NNSA created this annual performance target because the Office of Management and Budget requires agencies to express some of their annual performance targets in percentage terms. However, this official said the actual percentages are not very meaningful, and he does not have any specific criteria for how to measure progress to justify the use of the percentages in the budget request.

NNSA has also established broad performance measures to evaluate the performance of LANL and LLNL. Specifically, in its performance evaluation plans for LANL and LLNL for fiscal year 2006, NNSA has established the following three performance measures:

• Use progress toward quantifying margins and uncertainty, and experience in application, to further refine and document the QMU methodology.
• Demonstrate application of a common assessment methodology (i.e., QMU) in major warhead assessments and the certification of Life Extension Program warheads.
• Complete the annual assessment of the safety, reliability, and performance of all warhead types in the stockpile, including reaching conclusions on whether nuclear testing is required to resolve any issues.

However, the plan that NNSA uses to evaluate the performance of SNL does not contain any performance measures or targets specifically related to QMU, and the performance evaluation plans for LANL and LLNL do not contain any annual targets that can be measured and linked to the specific performance measures related to QMU. Instead, the plans state that NNSA will rely on LLNL and LANL officials to develop the relevant targets and related dates for each performance measure, as well as to correlate the level-1 and level-2 milestones with these measures. When asked why these plans do not meet NNSA’s own requirements, NNSA officials said that they have not included specific annual performance targets in the plans because to do so would make it harder for them to finalize the plans and adjust to changes in NNSA’s budget. However, they said that NNSA is planning to implement more stringent plans that will include annual performance targets when the next contract for LANL and LLNL is developed. In addition, NNSA officials told us that they recognize the need to develop performance measures related to QMU for SNL and anticipate implementing these changes in the fiscal year 2007 performance evaluation plan.

NNSA officials told us that they have used more specific measures, such as the completion of level-2 milestones, in their assessment of the weapons laboratories’ performance since fiscal year 2004. However, we also found problems with the way NNSA has assessed the performance of the weapons laboratories in implementing QMU.
For example, in NNSA’s annual performance appraisal of LANL for fiscal year 2004, NNSA states that LANL had completed 75 percent of the work required to develop “QMU logic” for the W76 life extension by the end of fiscal year 2004. However, NNSA officials could not document how they are able to measure progress toward the development and implementation of QMU logic for the W76 life extension. Again, an NNSA official responsible for overseeing the Primary campaign told us that the actual percentages are not very meaningful, and that he did not have any specific criteria for how to measure progress to justify the use of the percentage in the appraisal.

In a recent report, we recognized the difficulties of developing useful results-oriented performance measures for programs such as those geared toward research and development. For programs whose results can take years to observe, it can be difficult to identify performance measures that provide information on the annual progress being made toward achieving those results. However, we also recognize that such efforts have the potential to provide important information to decision makers. NNSA officials told us that they recognize the need for developing appropriate measures to ensure that adequate progress is being maintained toward achieving the goals and milestones of the campaigns. However, according to NNSA, very few products of the scientific campaigns involve the repetition of specific operations whose costs can be monitored effectively as a measure of performance. As a result, the best measure of progress for the scientific campaigns is through scientific review by qualified technical peers at appropriate points in the program. However, NNSA has not established any performance measures or targets for implementing QMU that require periodic scientific peer reviews or define what is meant by “appropriate” points in the program.

Faced with an aging nuclear stockpile, as well as an aging workforce, NNSA needs a methodologically rigorous, transparent, and explainable approach for how it will continue to assess and certify the safety and reliability of the nation’s nuclear weapons stockpile, now and into the foreseeable future, without underground testing. After over a decade of conducting stockpile stewardship, NNSA’s selection of QMU as its methodology for assessment and certification represents a positive step toward a methodologically rigorous, transparent, and explainable approach that can be carried out by a new cadre of weapons designers. However, important technical and management details must be resolved before NNSA can say with certainty that it has a sound and agreed-upon approach.

First, NNSA must take steps to ensure that all three nuclear weapons laboratories—not just LANL and LLNL—are in agreement about how QMU is to be defined and applied. While we recognize that there will be methodological differences between LANL and LLNL in the detailed application of QMU to specific weapon systems, we believe that it is fundamentally important that these differences be understood and, if need be, reconciled, to ensure that QMU achieves the goal of a common methodology with rigorous, quantitative, and explicit criteria, as envisioned by the original 2003 white paper on QMU.
More importantly, we believe that SNL has an important role in the development and application of QMU to the entire warhead, and we find the continuing disagreement over the application of QMU to areas outside of the nuclear explosive package to be disconcerting. There have been several recommendations calling for a new technical paper defining QMU, as well as the establishment of regular forums to further develop the QMU methodology and reconcile any differences in approach. We believe that NNSA needs to fully implement these recommendations.

Second, NNSA has not made effective use of its current planning and program management structure to ensure that all of the research needed to support QMU is integrated and that scarce scientific resources are being used efficiently. We believe that NNSA must establish an integrated management approach involving planning, oversight, and evaluation methods that are all clearly linked to the overall goal of the development and application of QMU. In particular, we believe that NNSA needs clear, consistent, and realistic milestones and regular, technical reviews of the development of QMU in order to ensure sound progress.

Finally, while we support the development of QMU and believe it must be effectively managed, we also believe it is important to recognize and acknowledge that the development and application of QMU, especially the complexities involved in analyzing and combining uncertainties related to potential failure modes and performance margins, represents a daunting research challenge that may not be achievable in the time constraints created by an aging nuclear stockpile.

To ensure that the weapons laboratories will have the proper tools in place to support the continued assessment of the existing stockpile or the certification of redesigned nuclear components under the RRW program, we recommend that the Administrator of NNSA take the following two actions:

• Require the three weapons laboratories to formally document an agreed-upon technical description of the QMU methodology that clearly recognizes and reconciles any methodological differences.
• Establish a formal requirement for periodic collaboration among the three weapons laboratories to increase their mutual understanding of the development and implementation of QMU.

To ensure that NNSA can more effectively manage the development and implementation of QMU, we recommend that the Administrator of NNSA take the following three actions:

• Develop an integrated plan for implementing QMU that contains (1) clear, consistent, and realistic milestones for the development and implementation of QMU across the weapons complex and (2) formal requirements for certifying the completion of these milestones.
• Establish a formal requirement for conducting annual, technical reviews of the scientific research conducted by the weapons laboratories that supports the development and implementation of QMU.
• Revise the performance evaluation plans for the three weapons laboratories so that they contain annual performance targets that can be measured and linked to specific milestones related to QMU.

We provided NNSA with a draft of this report for their review and comment. Overall, NNSA agreed that there was a need for an agreed-upon technical approach for implementing QMU and that NNSA needed to improve the management of QMU through clearer, long-term milestones and better integration across the program.
However, NNSA stated that QMU had already been effectively implemented and that we had not given NNSA sufficient credit for its success. In addition, NNSA raised several issues about our conclusions and recommendations regarding its management of the QMU effort. The complete text of NNSA’s comments on our draft report is presented in appendix I. NNSA also made technical clarifications, which we incorporated in this report as appropriate. With respect to whether QMU has already been effectively implemented, during the course of our work, LANL and LLNL officials showed us examples in which they used the QMU methodology to examine specific issues associated with the stockpile. At the same time, during our discussions with laboratory officials, as well as with the Chairs of the JASON panel on QMU, the Office of Defense Programs Science Council, and the Strategic Advisory Group Stockpile Assessment Team of the U.S. Strategic Command, there was general agreement that the application of the QMU methodology was still in the early stages of development. As NNSA pointed out in its letter commenting on our report, to implement QMU, the weapons laboratories need to make a number of improvements, including developing techniques for combining different kinds of uncertainties, as well as better models for a variety of complex processes that occur during a nuclear weapon explosion. In addition, the successful implementation of QMU will continue to rely on expert judgment and the successful completion of major scientific facilities such as the National Ignition Facility. We have modified our report to more fully recognize that QMU is being used by the laboratories to address stockpile issues and to more completely characterize its current state of development. At the same time, however, because QMU is still under development, we continue to believe that NNSA needs to make more effective use of its current planning and program management structure. NNSA raised several specific concerns about our conclusions and recommendations. First, NNSA disagreed with our conclusion and associated recommendations that NNSA take steps to ensure that all three nuclear weapons laboratories are in agreement about how QMU is to be defined and applied. NNSA stated that we overemphasized the differences between LANL and LLNL in implementing QMU and that the two laboratories have a “common enough” agreement on QMU to go forward with its implementation. Moreover, NNSA stated that our recommendations blur very clear distinctions between SNL and the two nuclear design labs. According to NNSA, QMU is applied to issues regarding the nuclear explosive package, which is the mission of LANL and LLNL. While we believe that some of the technical differences between the laboratories remain significant, we have revised our report to more accurately reflect the nature of the differences between LANL and LLNL. With respect to SNL, we would again point out that SNL officials are still required to certify the performance of nuclear weapon components under the conditions of a nuclear explosion and, thus, use similar elements of the QMU methodology. Therefore, we continue to believe that all three laboratories, as well as NNSA, would benefit from efforts to more formally document the QMU methodology and regularly meet to increase their mutual understanding. 
As evidence of the benefits of this approach, we would note that LLNL and LANL are currently developing a revised “white paper” on QMU, and that in discussions with one of the two authors, he agreed that inclusion of SNL in the development of the draft white paper could be beneficial. Second, NNSA made several comments with respect to our recommendation that NNSA develop an integrated plan for implementing QMU that contains clear, consistent, and realistic milestones. For example, NNSA stated that it expects to demonstrate the success of the implementation of QMU and the scientific campaigns through the performance of a scientifically defensible QMU analysis for each required certification problem. In addition, NNSA stated that the 2010 budget target and the 2014 milestone were developed for different purposes and measure progress at different times. According to NNSA, the 2010 target describes developing QMU to the point that it can be applied to certification of a system (e.g., the W88) without underground testing, while the 2014 milestone is intended to be for the entire Science campaign effort. However, as we state in our report, and as acknowledged by NNSA officials responsible for the Primary and Secondary campaigns, there continue to be problems with the milestones that NNSA has established for implementing QMU. Among these problems is the fact that these milestones are not well defined and conflict with other performance measures that NNSA has established for QMU. Moreover, in its comments on our report, NNSA agreed that better integration and connectivity of milestones between various program elements would improve the communication of the importance of program goals and improve the formality of coordination of program activities, “which is currently accomplished in an informal and less visible manner.” Given this acknowledgment by NNSA, we continue to believe that an integrated plan for implementing QMU, rather than NNSA’s current ad hoc approach, is warranted. Third, NNSA made several comments regarding our recommendation that NNSA establish a formal requirement for conducting annual, technical reviews of the scientific research conducted by the weapons laboratories that supports the development and implementation of QMU. NNSA stated that it believes the ad hoc reviews it conducts, such as the JASON review, provide sufficient information on scientific achievements, difficulties, and required redirection to manage these programs effectively. As a result, NNSA stated that it has not selected a single review process to look at overall success in the implementation of QMU but expects to continue to rely on ad hoc reviews. We agree that reviews, such as the JASON review, are helpful, and we relied heavily on the JASON review, as well as other reviews, as part of our analysis. However, as we point out in the report, the issue is that the campaign managers for the Primary and Secondary campaigns do not meet all of NNSA’s own requirements for providing effective oversight, which include the establishment of formal requirements for conducting technical reviews of campaign activities. Therefore, we believe that NNSA needs to take steps to implement its own policies. In addition, we believe that the ASC campaign provides a good model for how the Primary and Secondary campaigns should be managed. 
Finally, NNSA made several comments with respect to our recommendation for NNSA to revise the performance evaluation plans for the laboratories so that they contain annual performance targets that can be measured and linked to specific milestones related to QMU. Specifically, NNSA stated that the implementation of QMU is an area where it is difficult to establish a meaningful metric. According to NNSA, since QMU is implicitly evaluated in every review of the components of the science campaign, NNSA does not believe it is necessary to formally state an annual QMU requirement. However, as we point out in the report, the current performance evaluation plans for LANL and LLNL do not meet NNSA’s own requirements for the inclusion of annual performance targets that can be measured and linked to the specific performance measures related to QMU. More fundamentally, since NNSA has placed such emphasis on the development and implementation of QMU in the years ahead, we continue to believe that NNSA needs to develop more meaningful criteria for assessing the laboratories’ progress in developing and implementing QMU. We are sending copies of this report to the Administrator, NNSA; the Director of the Office of Management and Budget; and appropriate congressional committees. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations or Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the individual named above, James Noel, Assistant Director; Jason Holliday; Keith Rhodes; Peter Ruedel; and Carol Herrnstadt Shulman made key contributions to this report.
In 1992, the United States began a unilateral moratorium on the testing of nuclear weapons. To compensate for the lack of testing, the Department of Energy's National Nuclear Security Administration (NNSA) developed the Stockpile Stewardship Program to assess and certify the safety and reliability of the nation's nuclear stockpile without nuclear testing. In 2001, NNSA's weapons laboratories began developing what is intended to be a common framework for a new methodology for assessing and certifying the safety and reliability of the nuclear stockpile without nuclear testing. GAO was asked to evaluate (1) the new methodology NNSA is developing and (2) NNSA's management of the implementation of this new methodology. NNSA has endorsed the use of the "quantification of margins and uncertainties" (QMU) methodology as its principal method for assessing and certifying the safety and reliability of the nuclear stockpile. Starting in 2001, Los Alamos National Laboratory (LANL) and Lawrence Livermore National Laboratory (LLNL) officials began developing QMU, which focuses on creating a common "watch list" of factors that are the most critical to the operation and performance of a nuclear weapon. QMU seeks to quantify (1) how close each critical factor is to the point at which it would fail to perform as designed (i.e., the margin to failure) and (2) the uncertainty that exists in calculating the margin, in order to ensure that the margin is sufficiently larger than the uncertainty. According to NNSA and laboratory officials, they intend to use their calculations of margins and uncertainties to more effectively target their resources, as well as to certify any redesigned weapons envisioned by the Reliable Replacement Warhead program. According to NNSA and weapons laboratory officials, they have made progress in applying the principles of QMU to the assessment and certification of nuclear warheads in the stockpile. NNSA has commissioned two technical reviews of the implementation of QMU. While strongly supporting QMU, the reviews found that the development and implementation of QMU was still in its early stages and recommended that NNSA further define the technical details supporting the implementation of QMU and integrate the activities of the three weapons laboratories in implementing QMU. GAO also found important differences in the understanding and application of QMU among the weapons laboratories. For example, while LLNL and LANL both agree on the fundamental tenets of QMU at a high level, they are pursuing different approaches to calculating and combining uncertainties. NNSA uses a planning structure that it calls "campaigns" to organize and fund its scientific research. According to NNSA policies, campaign managers at NNSA headquarters are responsible for developing plans and high-level milestones, overseeing the execution of these plans, and providing input to the evaluation of the performance of the weapons laboratories. However, NNSA's management of these processes is deficient in four key areas. First, NNSA's existing plans do not adequately integrate the scientific research currently conducted across the weapon complex to support the development and implementation of QMU. Second, NNSA has not developed a clear, consistent set of milestones to guide the development and implementation of QMU. 
Third, NNSA has not established formal requirements for conducting annual, technical reviews of the implementation of QMU at the three laboratories or for certifying the completion of QMU-related milestones. Finally, NNSA has not established adequate performance measures to determine the progress of the three laboratories in developing and implementing QMU.
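To make the margin-versus-uncertainty comparison described in the summary above concrete, the following minimal sketch walks through the basic QMU bookkeeping for a hypothetical "watch list." All factor names, margin values, and uncertainty values are invented for illustration, and the quadrature rule used to combine uncertainties is only one possible assumption; how uncertainties should be calculated and combined is precisely one of the methodological questions discussed in this report.

```python
# Illustrative sketch only: hypothetical watch-list factors with invented margins and
# uncertainties. This is not the laboratories' actual (and classified) QMU implementation.
from dataclasses import dataclass
from math import sqrt


@dataclass
class WatchListFactor:
    name: str
    margin: float               # M: distance between the design point and the failure point
    uncertainty_sources: tuple  # individual uncertainty contributions to that margin

    def combined_uncertainty(self) -> float:
        # Assumption: independent contributions combined in quadrature (root-sum-square).
        return sqrt(sum(u ** 2 for u in self.uncertainty_sources))

    def confidence_ratio(self) -> float:
        # QMU asks whether the margin is sufficiently larger than the uncertainty,
        # often summarized as the ratio M/U; values well above 1 indicate confidence.
        return self.margin / self.combined_uncertainty()


watch_list = [
    WatchListFactor("hypothetical factor A", margin=4.0, uncertainty_sources=(1.0, 0.8)),
    WatchListFactor("hypothetical factor B", margin=1.0, uncertainty_sources=(0.7, 0.9)),
]

for factor in watch_list:
    ratio = factor.confidence_ratio()
    flag = "margin exceeds uncertainty" if ratio > 1.0 else "needs attention"
    print(f"{factor.name}: M = {factor.margin:.1f}, "
          f"U = {factor.combined_uncertainty():.2f}, M/U = {ratio:.2f} ({flag})")
```

In this toy example, the second factor's margin does not clearly exceed its combined uncertainty; under QMU, that is the kind of result that would direct laboratory resources toward reducing the uncertainty or increasing the margin for that factor.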
This section describes (1) IAEA’s structure and budget, (2) IAEA safeguards, (3) the nuclear fuel cycle, and (4) Iran’s nuclear program. IAEA is structured into six major programs, including Nuclear Verification, which carries out the agency’s safeguards activities. Other IAEA programs are generally intended to help promote safe and secure uses and applications of nuclear energy for civilian purposes. For example, IAEA’s Technical Cooperation program helps member states achieve their sustainable development priorities by providing relevant nuclear technologies and expertise. IAEA funds its programs primarily through (1) its regular budget, for which all member countries are assessed, and (2) voluntary extra-budgetary contributions from certain member countries and other donors to meet critical needs. IAEA’s operational budget requirements for 2016 totaled $436.6 million, including $155.3 million for the nuclear verification program (i.e., safeguards). See table 1 for IAEA’s projected budget requirements for 2016 by program. IAEA has a Board of Governors, which provides overall policy direction and oversight for the agency. A Secretariat, headed by the Director General, is responsible for implementing the policies and programs of the IAEA General Conference and the Board of Governors. The State Department coordinates the United States’ financial and policy relationship with IAEA. IAEA safeguards are a set of technical measures and activities by which IAEA seeks to verify that nuclear material subject to safeguards is not diverted to nuclear weapons or other proscribed purposes. To carry out its safeguards activities, inspectors and analysts in IAEA’s Safeguards Department collaborate to verify that the quantities of nuclear material that non-nuclear weapon states have formally declared to the agency are correct and complete. Most countries have concluded a CSA with IAEA that covers all nuclear material in all peaceful nuclear activities and serves as the basis for the agency’s safeguards activities. Most countries with a CSA have also brought into force an Additional Protocol to their CSAs, which requires those countries to provide IAEA with a broader range of information on their nuclear and nuclear-related activities. IAEA developed the Additional Protocol to obtain additional information about and access to countries’ nuclear and nuclear-related activities as part of its response to the discovery in 1991 of a clandestine nuclear weapons program in Iraq. The Additional Protocol gives the agency’s inspectors access to an expanded range of locations, including those where the agency seeks to assure the absence of undeclared nuclear material and activities. Undeclared nuclear material and activities are those that a state is required to declare and place under safeguards pursuant to its CSA or Additional Protocol but has not. Iran’s CSA entered into force in May 1974. According to IAEA officials, Iran applied the Additional Protocol beginning in December 2003, ceased to do so in February 2006, and has been provisionally applying it since January 16, 2016. IAEA regards Iran’s provisional application as if the Additional Protocol were in force. IAEA implements safeguards through a range of activities and techniques to help ensure that all nuclear material is where it was declared to be and to verify that there has been no misuse of facilities, no diversion of declared nuclear material, and no undeclared nuclear material or activities. 
Safeguards activities include on-site inspections, environmental sampling, and remote monitoring. For example, to verify nondiversion of nuclear material, IAEA inspectors count items (e.g., containers of uranium or plutonium), measure attributes of these items (e.g., isotopic composition), and compare their findings with records and declared amounts. Inspectors typically verify the nuclear material inventory by reviewing a facility’s nuclear material accounting documentation (e.g., reports and records) and through, for example, visual observation, radiation detection and measurement, and application of seals and other identifying and tamper-indicating devices, according to IAEA documents. Visual observation allows inspectors to observe the processes within a location and the equipment it contains, and to check the consistency of observations with declarations. Inspection activities are supported by off-site safeguards activities, such as analysis of the environmental samples collected during inspections, remote monitoring through installed equipment, analysis of commercial satellite imagery, and analysis of open source documents, such as technical journals. IAEA may conduct three types of inspections pursuant to comprehensive safeguards agreements: ad hoc, routine, and special inspections. For example, IAEA may conduct ad hoc inspections to verify a state’s initial declaration under the CSA and any changes to these declarations. Routine inspections give IAEA access to strategic points at a location to verify, among other things, the location, identity, quantity, and composition of all nuclear material subject to safeguards under the CSA. Notification of inspections can be transmitted as much as 1 week or as little as 24 hours—or less—in advance. Routine inspections may also be unannounced. IAEA may also conduct special inspections in certain circumstances, either in addition to the routine or ad hoc inspection effort or involving access to locations or information beyond those subject to a routine or ad hoc inspection. The Additional Protocol also authorizes “complementary access” for IAEA, which is access to nuclear sites and other locations related to a state’s nuclear fuel cycle—beyond declared nuclear facilities that are routinely subject to inspections under the CSA—including locations at which nuclear fuel-cycle research and development not involving nuclear material is carried out; manufacturing and import locations; and all buildings on a nuclear site, including undeclared locations. IAEA may also negotiate “managed access” with a state to prevent the dissemination of proliferation-sensitive information, meet safety or physical protection requirements, or protect proprietary or commercially sensitive information. According to an IAEA document, an example of managed access is the designation by the operator, based on arrangements made with IAEA, of the routes to be followed on a site to prevent the exposure of inspectors to high levels of radiation or to protect proprietary sensitive information associated with certain equipment. Furthermore, managed access should not hinder IAEA inspectors or prevent them from fulfilling the purposes of the complementary access—that is, the arrangements shall not preclude the agency from conducting activities necessary to provide credible assurance of the absence of undeclared nuclear material and activities at the location in question. IAEA plans inspections according to its reporting requirements and its goals for timely detection. 
The safeguards agreements with a given country, its nuclear materials, and the nature of its fuel cycle and facilities to be safeguarded inform the frequency of inspections and other in-field activities. For example, according to IAEA documents, in countries without an Additional Protocol in force or where IAEA has not drawn a broader conclusion, IAEA’s inspections would be timed to detect the diversion of unirradiated direct use material—nuclear material that can be used for the manufacture of nuclear explosive devices in its present form—within 1 month. The goal for irradiated direct use material, such as spent fuel, which would require more time and effort to be converted to components of nuclear explosive devices, would be to detect any diversion in 3 months. The goal for all other nuclear material, such as depleted, natural, and low-enriched uranium, as well as thorium, would be to detect any diversion in a year. IAEA plans its supporting safeguards activities—such as analysis of satellite imagery before inspections, of Additional Protocol declarations, and of information obtained during inspections (such as environmental samples)—in proportion to the frequency of inspections. The fuel cycle—the series of processes used to make fuel for and manage spent fuel from nuclear reactors—may also be used to produce special nuclear material for weapons. The uranium nuclear fuel cycle consists of three stages: (1) the front end, in which uranium is mined, milled, enriched, and fabricated into fuel; (2) reactor operation; and (3) the back end, in which spent fuel is either disposed of (open fuel cycles) or processed to produce new fuel (closed or partially closed fuel cycles). IAEA verifies that nuclear material subject to safeguards is not diverted. Under a CSA, the starting point of safeguards is when nuclear material reaches the stage in the nuclear fuel cycle where it is suitable, by composition and purity, for enrichment or fuel fabrication and leaves the plant or the process stage by which it has been produced, or when material that has not yet reached such a stage is imported into the state or exported to a non-nuclear weapon state. See figure 1 for an illustration of the nuclear fuel cycle. Iran’s nuclear program includes two uranium mines and two mills—the Gchine uranium mine and mill, the Saghand mine, and the Ardakan mill. Iran operates a conversion facility and fuel fabrication plant in Esfahan, the Tehran Research Reactor, and the Bushehr Nuclear Power Plant. Iran’s nuclear program also includes the Arak/IR-40 heavy water reactor, enrichment facilities at Natanz and Fordow, and a heavy water production plant in Arak. See figure 2 for a map of major facilities in Iran’s nuclear program. Iran had previously failed to declare some of these facilities to IAEA. For example, in 2002, IAEA was informed by member states of previously undeclared nuclear facilities—a uranium enrichment plant in Natanz and a heavy water production plant in Arak. In the same year, IAEA started to become increasingly concerned about the possible existence of undisclosed nuclear-related activities in Iran involving military-related organizations and, in 2011, reported to the Board of Governors on outstanding issues related to possible military dimensions (PMD) to Iran’s nuclear program. 
The information indicated that Iran had carried out activities relevant to the development of a nuclear explosive device, such as studies in high explosives and exploding bridgewire detonators, and work to manufacture neutron initiators. IAEA has also previously found instances where Iran was in noncompliance with its obligations under its CSA. For example, in June 2003, IAEA’s Director General reported that Iran had failed to meet its obligations under its CSA with respect to the reporting of nuclear material imported into Iran, among other things. In November 2003, the Director General concluded that Iran had failed to report uranium conversion experiments and the separation of plutonium from material irradiated in its Tehran Research Reactor, and had failed to provide IAEA design information for various nuclear facilities. In 2009, the Board of Governors noted that Iran’s failure to notify the agency of the construction of the Fordow uranium enrichment plant until September of that year was inconsistent with its obligations under the subsidiary arrangements to its CSA. In July 2015, Iran made commitments under the JCPOA related to its nuclear facilities, equipment, materials, and activities, among other things, and the United Nations Security Council endorsed the JCPOA and requested that IAEA verify and monitor these commitments. IAEA has been requested by the United Nations Security Council, and authorized by the Board of Governors, to verify and monitor Iran’s implementation of a range of nuclear-related commitments. To do so, IAEA is using its safeguards authorities and conducting additional verification and monitoring activities agreed to by Iran. The JCPOA commitments IAEA has been asked to verify include limits on Iran’s nuclear program, including those on numbers of centrifuges (for example, no more than 5,060 specified centrifuges at Natanz for 10 years); uranium enrichment levels (no more than 3.67 percent for 15 years); stocks of enriched uranium (no more than 300 kilograms for 15 years); heavy water inventories; and centrifuge manufacturing. Iran also committed not to engage in certain activities—such as uranium or plutonium metallurgy—that could contribute to the design and development of a nuclear explosive device. The duration of certain commitments ranges from 8 years for certain centrifuge restrictions to 25 years for monitoring of uranium ore concentrate. The JCPOA does not contain any provisions relating specifically to Iran’s Bushehr Nuclear Power Plant, so according to IAEA, the agency will not carry out verification or monitoring activities in relation to the JCPOA at Bushehr beyond its standard safeguards under Iran’s CSA and Additional Protocol. Iran also agreed to fully implement the “Roadmap for Clarification of Past and Present Outstanding Issues.” The roadmap sets out a process for IAEA to address issues relating to the possible military dimensions of Iran’s nuclear program. IAEA issued a report on the results of its PMD investigation in December 2015, and the Board of Governors subsequently adopted a resolution closing its consideration of the “past and present outstanding issues.” The resolution noted the board’s decision to transition IAEA’s work in Iran from under previous Board of Governors and United Nations Security Council resolutions to JCPOA implementation and verification, in light of United Nations Security Council Resolution 2231. 
State Department officials noted that the board, in its resolution, stated that it will be watching closely to verify that Iran fully implements its commitments under the JCPOA and will remain focused going forward on the full implementation of the JCPOA to ensure the exclusively peaceful nature of Iran’s nuclear program. According to officials in IAEA’s Office of Legal Affairs, the agency draws on its safeguards authorities to verify and monitor Iran’s implementation of its nuclear-related commitments. For example, using its safeguards authorities, including the CSA, IAEA will verify and monitor Iran’s implementation of most of its nuclear-related commitments largely through a range of traditional safeguards approaches and techniques that it has used in the past, such as inspecting nuclear facilities and conducting nuclear material accountancy to verify quantities of nuclear material declared to the agency and any changes in the quantities over time. Under the JCPOA, Iran agreed to provisionally apply, and seek ratification of the Additional Protocol, which gives the agency’s inspectors access to an expanded range of locations, including those where the agency seeks assurance regarding the absence of undeclared nuclear materials and activities. According to IAEA officials, Iran previously applied the Additional Protocol beginning in December 2003 but ceased to do so in February 2006, and has been provisionally applying it since Implementation Day (January 16, 2016). IAEA regards this as if the Additional Protocol were in force. Under the JCPOA, IAEA is also conducting certain additional verification and monitoring activities agreed to by Iran, such as containment and surveillance measures for monitoring Iran’s uranium mines and mills, according to IAEA officials. Containment and surveillance measures include the use of video cameras to detect any movement of nuclear material and any tampering with agency equipment as well as seals that indicate whether the state has tampered with installed IAEA safeguards systems. Material in mining or ore processing activities (e.g., uranium at mines and mills) is not yet suitable for enrichment and so is not subject to the agency’s safeguards under a CSA, though the Additional Protocol does require states to declare the location and status, among other things, of uranium mines and uranium and thorium mills. Iran also committed under the JCPOA to cooperate with IAEA and facilitate its safeguards activities. For example, Iran agreed to make arrangements to allow for the long-term presence of IAEA inspectors by issuing long-term visas, among other things. Iran also agreed to permit the use of modern technologies, such as online enrichment monitors, to increase the efficiency of monitoring activities. The JCPOA includes a mechanism in which its participants commit to resolve access issues with the agency regarding an undeclared location within 24 days after the request is made. The JCPOA also describes a dispute resolution mechanism through which a participant in the agreement can bring a complaint if it believes that commitments are not being met and which allows the participant to cease performance of its commitments in certain cases if dispute resolution fails to resolve the participant’s concerns. Iran also agreed, under the JCPOA, to fully implement Modified Code 3.1 of the subsidiary arrangements to its CSA. 
According to IAEA, the text of the Modified Code 3.1 in Iran’s subsidiary arrangements is based on model language under which a country is required to provide preliminary design information for new nuclear facilities “as soon as the decision to construct, or to authorize construction, of such a facility has been taken, whichever is earlier.” Furthermore, Iran has agreed to import any enumerated nuclear-related and nuclear-related dual-use materials and equipment exclusively through a new “procurement channel” established under the JCPOA and United Nations Security Council Resolution 2231. The JCPOA details the establishment of a Joint Commission comprising representatives of participants in the agreement, under which a procurement working group will review and make recommendations on proposed imports. Furthermore, pursuant to United Nations guidance, the exporting state will provide information to IAEA on these proposed imports. Under the JCPOA, IAEA may access the locations of intended use of specified nuclear-related imports. IAEA officials told us that they expect the information provided through the procurement channel to support the agency’s efforts to detect undeclared activity. IAEA has estimated the financial, human, and technical resources necessary to verify and monitor Iran’s implementation of its nuclear-related commitments in the JCPOA. IAEA’s process for estimating resource needs is based on the frequency of its verification and monitoring activities, which, as previously noted, is driven by timely detection goals and reporting requirements. IAEA reports to the Board of Governors on its work under the JCPOA quarterly. IAEA has estimated that it needs approximately $10 million per year for 15 years in additional funding above its current safeguards budget to fund additional inspections, among other things, under the JCPOA. Of this amount, IAEA estimates that it will need about $3.3 million for costs associated with implementing the Additional Protocol; about $2.4 million for other inspector and direct staff costs; and about $4.4 million in other costs, such as travel, equipment, and support services beyond those associated with Additional Protocol implementation (see table 2). IAEA officials said that, consistent with its Statute, the Director General intends to propose to the Board of Governors that the approximately $5.7 million in costs associated with Additional Protocol activities and inspector and other direct staff costs attributable to the JCPOA be funded through IAEA’s regular budget after 2016. These officials said that the remaining $4.4 million in estimated funding needs in each of the following 15 years will remain unfunded in the regular budget and therefore be supported through extra-budgetary funding. Under its Statute, IAEA is to apportion the costs of implementing safeguards, which would include inspector salaries and the cost of implementing the Additional Protocol, through assessments on member countries. As previously noted, such assessments form IAEA’s regular budget. IAEA’s Statute also states that any voluntary contributions may be used as the Board of Governors, with the approval of the General Conference, may determine. The JCPOA was not finalized in time for the agency to include these costs for 2016 in its assessments. Consequently, according to a 2015 IAEA report, all of IAEA’s JCPOA work through 2016 will be funded through extra-budgetary contributions. 
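As a quick arithmetic cross-check of the cost figures reported above (all amounts are IAEA's estimates in millions of U.S. dollars; the grouping simply restates the proposed regular-budget versus extra-budgetary split, and small differences from the rounded $10 million total reflect rounding):

```python
# Cross-check of IAEA's estimated annual JCPOA-related costs, as reported above
# (millions of U.S. dollars; the sum differs slightly from the rounded $10 million figure).
estimated_costs = {
    "Additional Protocol implementation": 3.3,        # proposed for the regular budget after 2016
    "Other inspector and direct staff costs": 2.4,    # proposed for the regular budget after 2016
    "Travel, equipment, and support services": 4.4,   # expected to remain extra-budgetary
}

regular_budget_share = (estimated_costs["Additional Protocol implementation"]
                        + estimated_costs["Other inspector and direct staff costs"])
extra_budgetary_share = estimated_costs["Travel, equipment, and support services"]

print(f"Proposed regular-budget share: ${regular_budget_share:.1f} million per year")    # ~5.7
print(f"Remaining extra-budgetary share: ${extra_budgetary_share:.1f} million per year")  # ~4.4
print(f"Total annual estimate: ${sum(estimated_costs.values()):.1f} million")             # ~10.1, reported as ~10
```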
According to IAEA officials, how quickly the $5.7 million in JCPOA costs is incorporated into the regular budget depends on member state support. These officials told us that IAEA hopes to resolve the questions about funding the JCPOA through the regular budget by the June 2016 Board of Governors meeting. IAEA’s estimate of $10 million per year for funding requirements related to JCPOA activities is approximately 6 percent of the agency’s $155.3 million operational safeguards requirements for 2016. These requirements are consistent with historical operational safeguards expenditures from 2006 to 2014—the latest year for which expenditure data are available. During this time frame, IAEA’s operational safeguards expenditures ranged from $127.1 million to $202.6 million, with amounts from $10.8 million to $46.9 million coming from extra-budgetary contributions (see table 3). As we previously noted, funding for IAEA’s safeguards activities—including those related to the JCPOA—comes from member state contributions to IAEA’s regular budget and from member state extra-budgetary voluntary contributions to IAEA. According to State Department officials, the balance of JCPOA-related costs not covered by IAEA’s 2017 regular budget will require extra-budgetary contributions from member states. The total level of voluntary contributions needed in fiscal year 2017 to cover JCPOA-related requirements remains unclear pending resolution of IAEA’s 2017 budget process. IAEA officials told us that many member states have pledged financial support for JCPOA implementation. Regarding funding from the United States, the State Department and DOE have requested approximately $190 million for fiscal year 2017 to support IAEA generally and JCPOA-related IAEA activities specifically, in the form of both regular contributions to the IAEA budget and extra-budgetary funding: The State Department has requested approximately $101.1 million for fiscal year 2017 to be contributed to IAEA’s regular budget. State Department officials told us that they expect some of the regular budget contribution to support IAEA safeguards, but that the budget request does not designate a specific amount within this total for IAEA’s safeguards program or JCPOA verification and monitoring activities. The State Department has requested $89.8 million for fiscal year 2017 for its extra-budgetary contribution to IAEA. State Department officials told us that some of this funding may be used to support JCPOA verification and monitoring activities, but that none of this funding is specifically designated for these activities. State officials said the final amount of U.S. voluntary contributions to JCPOA-related funding requirements will depend on the amount of international donor support made available to IAEA, but that the United States plans to provide ongoing support to IAEA to meet these requirements. Furthermore, because the United States pays its assessed (regular budgetary) contribution on a largely deferred basis, the funds requested for fiscal year 2017 will be largely used to pay 2016 calendar year assessments, which include no JCPOA-related costs. The United States’ extra-budgetary contribution to IAEA generally includes funding for the U.S. 
Support Program to IAEA Safeguards, which was established in 1977 to augment IAEA’s regular budget for safeguards activities with U.S.-sponsored expertise, equipment, and techniques. The program supports IAEA’s overall safeguards mission, and some of what it funds may benefit JCPOA implementation. The program may fund, among other things, equipment (for example, cameras or seals), research and development of safeguards technologies, subsidies for the analysis of environmental samples at IAEA’s Network of Analytical Laboratories, and training for IAEA inspectors at DOE laboratories. This training covers, among other things, the use and analysis of safeguards tools and equipment, as well as concealment scenarios—for example, where material being measured may have been altered to mislead the detectors. DOE has requested $1 million for fiscal year 2017 for IAEA verification and monitoring related to the JCPOA, as part of $13 million to support JCPOA implementation. This $1 million includes funding for any DOE staff loaned to IAEA to assist with the agency’s JCPOA-related activities or contractors who are made available on a short-term basis to IAEA. IAEA’s annual $10 million funding estimate includes approximately $7.5 million in funding to cover estimated human resource costs associated with additional inspectors and support services under the JCPOA. IAEA officials told us that the agency identified the need for 18 experienced inspectors and nearly twice that number of other staff for its Iran Task Force—now the Office of Safeguards Verification in Iran. The agency plans to transfer these inspectors from divisions within its Safeguards Department that cover countries and regions beyond Iran. According to IAEA officials, the other Safeguards divisions would backfill the vacancies created by the transfer of inspectors to verification and monitoring related to Iran by hiring and training new inspectors. State Department officials noted that IAEA may draw on U.S. and other member-state support to temporarily fill vacated positions until new staff can be permanently hired and brought into place. According to IAEA officials, IAEA’s existing technical resources are sufficient to implement its verification and monitoring activities under the JCPOA. IAEA officials generally did not specify the technical measures that IAEA will use to verify and monitor each of Iran’s nuclear-related commitments under the JCPOA. However, some technical measures that IAEA generally uses to supplement visual observations and examination of records include the following: Portable radiation detectors and gamma spectrometers to take measurements—for example, on pipework and equipment—to verify that nuclear material is as declared (for example, at the declared level of enrichment). Tamper-indicating seals and cameras for containment and surveillance over previously verified material and equipment. Such measures increase the efficiency of safeguards by reducing inspection costs and allowing IAEA to focus inspection efforts where most needed. See figure 3 for an image of an inspector replacing a seal. Mass spectrometers to analyze environmental samples for traces of undeclared material and activity. IAEA conducts bulk and particle analysis of environmental samples. Particle analysis provides information on the history of a facility’s operation—for example, whether enrichment had occurred beyond the level declared. 
Bulk analysis provides information on the average enrichment level in a facility and the presence of trace elements that may provide information about where material is from or what processes it had been subject to. Under the JCPOA, Iran agreed to allow IAEA to use online enrichment measurement. IAEA’s online enrichment monitor (OLEM) continuously monitors enrichment levels, allowing for more efficient enrichment monitoring. IAEA is using the OLEM in the Natanz Fuel Enrichment Plant to confirm that enrichment levels are at or below 3.67 percent, as per Iran’s commitment under the agreement. IAEA has previously used continuous enrichment monitors, but the OLEM is a newer technology that improves upon older monitoring systems. IAEA may face challenges in verifying and monitoring Iran’s implementation of certain nuclear-related commitments in the JCPOA. These potential challenges include (1) detecting undeclared nuclear materials and activities, (2) accessing sites in Iran, and (3) managing safeguards budgetary, human, and technical resources. IAEA has identified mitigating actions for some of these challenges. Detection of undeclared nuclear materials and activities is an inherent challenge for IAEA; IAEA and member states have taken some steps to improve the agency’s ability to detect undeclared activities. According to IAEA, the agency can draw a broader conclusion that all nuclear material in Iran remains in peaceful activities only after the agency has completed its evaluations and found no indications of diversion of declared nuclear material from peaceful nuclear activities and no indications of undeclared nuclear material or activities in the state as a whole. According to current U.S. officials, a former U.S. official, some former IAEA officials, and officials from several expert organizations we interviewed, detection of undeclared nuclear material and activities is an inherent challenge for IAEA. Iran has previously failed to declare activity to IAEA. For example, according to IAEA documents and officials, prior to 2003, Iran failed to provide IAEA with information on a number of nuclear fuel cycle-related activities and nuclear material. In addition, according to IAEA documents, Iran also failed to notify the agency at the time of its decision to construct the Fordow enrichment facility, as required under Modified Code 3.1 of the subsidiary arrangements to Iran’s CSA. To detect undeclared material and activities, IAEA looks for indicators of such activities, including equipment and infrastructure necessary for the activities, as well as nuclear and nonnuclear material or traces of such material in the environment, according to an IAEA document. According to current U.S. and IAEA officials, some former U.S. officials, some former IAEA officials, and officials from several expert organizations, IAEA faces inherent challenges and limitations in identifying indicators of undeclared activity. For instance: Some activities may not be visible to IAEA—for example, through satellite imagery—or do not involve nuclear material, and may not leave traces in the environment, such as centrifuge manufacturing and some weapons development activities. According to a former U.S. official, some former IAEA officials, and officials from several expert organizations, this poses a challenge for IAEA in detecting undeclared activity. 
The Board of Governors’ decision to close its consideration of the PMD issue without a complete accounting of Iran’s past nuclear program could reduce indicators of potential undeclared activity, according to one expert organization. Officials from this organization said that without a complete accounting, only part of Iran’s nuclear program is visible to IAEA, and IAEA is missing information that could inform future safeguards planning. The procurement channel established under the JCPOA may serve as a source of indicators for IAEA on potential undeclared activities in Iran, according to current and two former U.S. officials as well as officials from two expert organizations. However, IAEA officials told us that there is additional work to be done in informing exporting countries of their obligations and standardizing the data that the countries would report to IAEA so that they are usable to the agency. These officials told us that ensuring that countries report the data as required is particularly challenging for countries that do not have a robust export control system. Current IAEA and U.S. officials as well as a former IAEA official said that IAEA has taken steps to improve its ability to detect undeclared nuclear activities and materials and told us that there are other mitigating factors to the challenges IAEA faces in this area. First, according to a current IAEA official, current U.S. officials, and a former IAEA official, IAEA has improved its capabilities in detecting undeclared activity. For example, according to U.S. officials, IAEA has adapted its inspector training program to focus on potential indicators of undeclared activity beyond the agency’s traditional safeguards focus on nuclear materials accountancy. IAEA also has analytical tools at its disposal, some of which IAEA officials demonstrated to us, to detect undeclared activities worldwide. IAEA also receives member-state support in detecting undeclared activities. For example, member states provided some of the information that formed the basis of IAEA’s PMD investigation. In addition, State Department officials said that they have conducted outreach to exporters and exporting countries about the procurement channel so that the suppliers know their responsibilities and requirements. According to State Department officials, this outreach included sending cables to all posts with instructions to inform host countries and their industries of procurement channel requirements. The United Nations has provided information regarding the procurement channel on its website. Further, according to IAEA and U.S. officials, Iran’s application of the Additional Protocol improves IAEA’s ability to investigate indicators of undeclared activities in Iran. For instance, on the PMD issue, DOE officials noted that under the JCPOA, IAEA will have the authorities of the Additional Protocol and enhanced transparency measures of the JCPOA with which to investigate any indication of undeclared activities, including those activities suspected of having possible military dimensions or potential weaponization activities not involving nuclear materials. Furthermore, State Department officials noted that the JCPOA puts IAEA in a better position to detect such activities in Iran, as inspectors will have increased access to information and locations to clarify and resolve inconsistencies or other indicators of noncompliance and will have an increased scope of materials accountancy at various sites, such as mining and milling processes. 
IAEA officials told us that any uncertainties regarding the peaceful nature of Iran’s nuclear program that may arise would have to be resolved for the agency to reach a broader conclusion that all nuclear material in Iran remains in peaceful activities. As noted above, broader conclusion refers to IAEA’s determination for a country that, for a given year, all nuclear material remained in peaceful activities. The JCPOA states that the United States and European Union will take further steps to eliminate nuclear-related sanctions on Iran either on October 18, 2023, or before if IAEA reaches a broader conclusion. According to State Department officials, on October 18, 2023, under the JCPOA, the United States and European Union would take these steps to eliminate these sanctions on Iran regardless of whether IAEA has reached a broader conclusion. IAEA officials told us that the agency does not draw a broader conclusion lightly, for any state, and that it has taken on average 3 to 5 years for states with CSAs and Additional Protocols. The estimates for the amount of time needed for IAEA to reach a broader conclusion for Iran varied among the former IAEA and U.S. officials and the expert organization officials we interviewed. Some former U.S. officials, two former IAEA officials, and officials from two expert organizations stated that it is possible for IAEA to reach a broader conclusion before October 18, 2023. The former U.S. officials stated that this could be possible if Iran cooperates with IAEA and provides the access and information needed. Others we interviewed did not believe IAEA would be able to reach a broader conclusion in that time frame, citing examples of countries in which IAEA took considerable time to reach broader conclusions. For instance, a former U.S. official and an official from one expert organization told us that it took IAEA 10 years to reach a broader conclusion for Turkey even with the country’s relatively basic fuel cycle. Both also stated that Turkey’s former involvement in the illicit procurement network and black market contributed to the length of time to reach a broader conclusion in that country. The former U.S. official also said that the broader conclusion for Taiwan took from 6 to 8 years, noting that Taiwan, which had a weapons program, made a strategic decision to shut down the program and fully cooperate with IAEA. An official from one expert organization stated that the broader conclusion process is very technical and complex, and that even in compliant countries such as Australia and Canada, arriving at a broader conclusion is an “incredibly difficult feat” for IAEA. State Department officials told us that in their view it would not be an impediment to the JCPOA if IAEA does not reach a broader conclusion regarding Iran’s nuclear program by October 18, 2023. These officials said that they believed it is more important for IAEA to draw a broader conclusion in an appropriate manner and time frame, and less important that a broader conclusion be reached before the United States and European Union take further steps to eliminate sanctions in October 2023. These officials added that Iran’s nuclear-related commitments under the JCPOA extend beyond this date, as well as IAEA’s authorities and capabilities to continue to verify the peaceful nature of Iran’s nuclear program. IAEA may face challenges in gaining access to sites in Iran, according to officials and expert organizations we interviewed. 
IAEA officials stated that access depends on the cooperation of the member state and the operators of its facilities under safeguards. However, two former U.S. officials, a former IAEA official, and officials from one expert organization we interviewed told us that Iran has a history of denying access to IAEA inspectors. For example, IAEA requested access in February 2012 to the Iranian military complex at Parchin—where high explosive experiments were believed to have been conducted—and Iran did not allow access until the fall of 2015 as part of IAEA’s PMD investigation. In addition, earlier IAEA reports stated that Iran did not cooperate with IAEA on access to the Heavy Water Production Plant, although Iran eventually granted the agency managed access in December 2013. Prior IAEA reports also stated that Iran had denied the agency’s requests for access to locations related to the manufacturing of centrifuges, research and development on uranium enrichment, and uranium mining and milling, among other things. IAEA and U.S. officials said that IAEA is taking action to facilitate access and cooperation. For instance, IAEA officials stated that they plan to work to train operators in Iran who are less experienced in working with IAEA and who may be less experienced in keeping records that facilitate the agency’s safeguards activities, such as the operators of the Heavy Water Production Plant, which IAEA officials stated was the only type of facility subject to verification and monitoring under the JCPOA that is new to the agency. Iran’s agreement to provisionally apply the Additional Protocol will facilitate the agency’s access to sites in Iran, according to IAEA officials. Specifically, they told us that, under the Additional Protocol, the agency has authority to access any part of a site that it is inspecting within 2 hours’ notice and any other location within 24 hours. Furthermore, IAEA officials disputed the view of one expert organization that Iran’s limited cooperation during the PMD investigation may have set a precedent for limiting IAEA access going forward. IAEA officials told us that the closure of the PMD investigation would not preclude future IAEA access requests to the sites that were part of the investigation, should IAEA determine that such access is warranted. These officials added that IAEA’s PMD investigation was conducted without the Additional Protocol, and that any future investigations into potential undeclared activity would be conducted under the expanded legal authority of the Additional Protocol. In addition, as we noted earlier, the JCPOA includes a mechanism that limits the time for resolution of access issues between the participants to 24 days for matters related to JCPOA implementation. IAEA officials told us that the 24-day period under the mechanism would begin once the agency raises a given access issue. Appendix II discusses this mechanism in detail. Officials and expert organizations we interviewed discussed two potential challenges regarding the mechanism. First, they noted that the mechanism is untested and may not facilitate access. Second, they differed on whether the mechanism’s 24-day limit would help IAEA gain timely access before Iran could hide certain activities. First, a former IAEA official and an official from one expert organization told us that the mechanism is untested in that an access dispute has not yet arisen under the JCPOA; therefore, it is too soon to tell how it will work and whether it will improve access. 
Specifically, according to the official from the expert organization, the agreement is not clear about how the Joint Commission will coordinate with the Board of Governors or IAEA as a whole if the mechanism is invoked. Furthermore, a former U.S. official expressed concern that the politics among members of the Joint Commission may mean that access disputes raised by IAEA may not be resolved in IAEA’s favor. Nonetheless, State Department officials said that the JCPOA access mechanism provides additional recourse to IAEA and is supplementary to the Additional Protocol, and that it does not take authority away from the Board of Governors. IAEA officials also said that if Iran were to deny an IAEA request for access at any site, the agency has various options for resolving the matter, including referral to IAEA’s Board of Governors, the Joint Commission, or both. Second, officials and expert organizations differed on whether the mechanism’s 24-day time frame could allow Iran to hide activities from IAEA before the agency gained access. These officials and expert organizations differed on the activities that Iran could hide within 24 days and the proliferation significance of such activities. For example, two former IAEA officials and officials from two expert organizations specifically stated that small-scale enrichment activities and activities not including nuclear material could be hidden within the 24-day time frame. A former U.S. official also said that Iran could clean up traces of nuclear materials within 24 days and, even if it were unable to hide all evidence, it could create enough ambiguity to preclude further investigation or action. However, other officials and expert organizations said that the potential for a 24-day access delay in the mechanism was generally not a concern. Specifically, several former IAEA officials, some former U.S. officials, and officials from some expert organizations noted that nuclear material, even in trace amounts, cannot be cleaned up within that time. A former IAEA official also stated that much of what IAEA is looking for is not easily disguised, cleaned, or removed in 24 days. U.S. officials stated that they believed that it is unlikely that Iran would risk reinstatement of sanctions by denying IAEA access. For instance, State Department officials told us that refusal by Iran to comply with the access provisions of the Additional Protocol or JCPOA could lead to the reinstatement of sanctions. Additionally, DOE officials said that the JCPOA’s provisions for the reinstatement of sanctions will encourage Iranian cooperation with and access for IAEA. IAEA faces potential resource management challenges stemming from the monitoring and verification workload in Iran, and is taking actions to mitigate them. These challenges include (1) integrating the additional JCPOA-related funding needs that IAEA has identified into the agency’s regular budget, (2) managing human resources within the safeguards program, which could affect IAEA’s safeguards efforts internationally, and (3) addressing potential challenges with technical resources. State Department officials told us that they are confident that IAEA will obtain the funding it needs for JCPOA activities in the near term, but IAEA officials expressed concerns about the reliability of long-term funding. State Department officials told us that the United States and other member states would provide extra-budgetary contributions to support IAEA’s JCPOA activities. 
However, IAEA officials expressed concerns, which State Department officials acknowledged, about possible donor fatigue with regard to extra-budgetary contributions in the long run, as IAEA will be conducting certain JCPOA verification activities for 10 or more years. We have previously concluded that IAEA cannot necessarily assume that donors will continue to make extra-budgetary contributions at the same levels as in the past. IAEA and State Department officials, as well as a former IAEA official and an official from one expert organization, stated that funding the JCPOA from the IAEA regular budget—rather than through extra-budgetary contributions—would give the safeguards program a more stable and predictable funding base for its verification and monitoring activities. As we previously noted, IAEA proposes to integrate approximately $5.7 million in JCPOA costs into IAEA’s regular budget after 2016. However, IAEA may face challenges in integrating some JCPOA funding needs into its regular budget. IAEA officials, as well as a former IAEA official, two former U.S. officials, and an official from one expert organization, stated that the proposal to move funding for verification and monitoring efforts under the JCPOA into IAEA’s safeguards regular budget could face resistance from some member states without corresponding budget increases for other IAEA programs, such as the Technical Cooperation program, which supports nuclear power development and other civilian nuclear applications. State Department officials said that delay or failure to incorporate costs into the regular budget would increase IAEA’s reliance on extra-budgetary contributions but would not prevent IAEA from carrying out JCPOA-related activities as long as those contributions are forthcoming. State Department officials also told us that no member state has opposed integration of certain JCPOA costs into the regular budget or proposed corresponding increases for other programs. These officials added that they recognize that long-term reliance on extra-budgetary contributions risks donor fatigue, and that they will plan for providing support with a view toward filling any future funding gaps that arise. IAEA also faces a potential human resource management challenge in its safeguards program as it implements actions to verify and monitor the JCPOA, which could affect its broader international safeguards mission. Specifically, IAEA’s strategy of transferring inspectors to its Office of Safeguards Verification in Iran from other safeguards divisions may pose a challenge to IAEA and its safeguards work in other countries because of the extensive time it takes IAEA to hire and train new inspectors for those divisions. According to current IAEA and U.S. officials, as well as two former IAEA officials and officials from two expert organizations we interviewed, hiring and training qualified inspectors can take years. A former IAEA official and current officials said that recruiting inspectors is difficult because their skills are highly specialized—typically requiring a combination of nuclear engineering knowledge and analytical abilities. These officials also said that IAEA’s hiring process requires multiple interviews and examinations. Furthermore, current IAEA officials and two former IAEA officials, as well as an official from one expert organization, said that training new inspectors to be proficient in executing their safeguards responsibilities can be a time-consuming process. 
As a result, IAEA faces a potential challenge as it prioritizes JCPOA activities in meeting the need for additional experienced inspectors to work on Iran-related safeguards while ensuring that safeguards efforts in other countries are not understaffed. IAEA officials have said that the agency’s work in Iran is its priority. IAEA officials, as well as a former IAEA official, some former U.S. officials, and officials from several expert organizations told us that IAEA could mitigate human resources challenges in the short term through remote monitoring and the use of cost-free experts in its headquarters. According to State Department officials, the United States, as well as other IAEA member states, has provided a list of qualified candidates to IAEA to backfill positions of IAEA staff transferred within the agency for JCPOA work to avoid gaps while full-time staff are hired and trained. Many of these candidates have previously worked as IAEA inspectors and are already trained. As we previously noted, IAEA officials told us that the agency’s existing technical resources are sufficient for JCPOA verification and monitoring. However, IAEA officials also noted that they expect an increase in environmental sampling as a result of the JCPOA. IAEA laboratories handle approximately 500 environmental samples a year at IAEA’s Environmental Sample Laboratory in Seibersdorf, Austria, and other laboratories within the Network of Analytical Laboratories. The IAEA laboratory at Seibersdorf handles about 20 percent of the overall environmental sample analysis, with the other network facilities processing the remainder. According to IAEA officials, particle analysis is time-consuming and expensive. IAEA uses a spectrometer called the Large Geometry Secondary Ion Mass Spectrometer (LG-SIMS) for particle analysis at its Seibersdorf Analytical Laboratory. IAEA officials at the laboratory told us that the LG-SIMS is expensive and is operating at capacity, raising concerns about IAEA’s ability to meet any future environmental sampling demands at the Seibersdorf laboratory alone. These officials told us that a second LG-SIMS would cost approximately $5 million, plus additional personnel costs to operate and maintain the equipment. Other IAEA officials and some U.S. officials told us, however, that other laboratories in the network could accommodate increases in environmental sampling analysis workload, and that there was no need at this time for IAEA to procure a second LG-SIMS in light of other critical funding priorities for technical needs. We are not making any recommendations in this report. We provided a draft of this report to the Departments of State and Energy and the International Atomic Energy Agency for their review and comment. State, DOE, and IAEA provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to appropriate congressional committees, the Secretaries of State and Energy, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix III. This report examines (1) the Joint Comprehensive Plan of Action (JCPOA) commitments that the International Atomic Energy Agency (IAEA) has been asked to verify and monitor and its authorities to do so, (2) the resources IAEA has identified as necessary to verify and monitor Iran’s nuclear-related commitments under the JCPOA, and (3) potential challenges and mitigating actions, if any, IAEA and others have identified with regard to verifying Iran’s nuclear-related commitments under the JCPOA. To identify the nuclear-related commitments in the JCPOA that IAEA has been asked to verify and monitor and IAEA’s authorities for verifying and monitoring these commitments, we analyzed the JCPOA, in close coordination with IAEA and the Department of State. We also analyzed IAEA documentation concerning the safeguards legal framework, including the Statute of the IAEA, which authorizes the agency to apply safeguards, at the request of parties, to any bilateral or multilateral arrangement; “The Structure and Content of Agreements Between the Agency and States Required in Connection with the Treaty on the Non-Proliferation of Nuclear Weapons” (information circular (INFCIRC)/153), which provides the basis for the comprehensive safeguards agreement (CSA) that most countries have concluded with IAEA and that covers all of the countries’ nuclear material in peaceful activities; Iran’s Comprehensive Safeguards Agreement (INFCIRC/214); the Model Additional Protocol (INFCIRC/540), which provides the basis for an Additional Protocol that most countries with a CSA have concluded with IAEA to provide additional information about countries’ nuclear and nuclear-related activities; the November 2011 IAEA Safeguards Report, which details items concerning “possible military dimensions” of Iran’s nuclear program; IAEA’s report on its investigation of the possible military dimensions; and the related Board of Governors’ resolution. We also analyzed the Treaty on the Non-Proliferation of Nuclear Weapons and United Nations Security Council Resolution 2231, which requests IAEA to undertake the necessary verification and monitoring of Iran’s commitments. To examine the resources IAEA has identified as necessary to verify and monitor Iran’s nuclear-related commitments under the JCPOA, we reviewed IAEA planning and budget documents, such as “The Agency’s Programme and Budget 2016–2017,” the Director General’s report titled “Verification and Monitoring in the Islamic Republic of Iran in light of United Nations Security Council Resolution 2231 (2015),” and pertinent Director General’s statements to the Board of Governors. In addition, to further understand IAEA authorities and resource needs, and to examine potential challenges and mitigating actions IAEA and others have identified with regard to verifying and monitoring Iran’s nuclear-related commitments under the JCPOA, we interviewed officials of IAEA, the Department of State, and the Department of Energy’s (DOE) National Nuclear Security Administration (NNSA), as well as representatives of Oak Ridge National Laboratory, Los Alamos National Laboratory, Sandia National Laboratories, and Brookhaven National Laboratory. We also held classified interviews with officials in the Office of the Director of National Intelligence and representatives of Lawrence Livermore National Laboratory. The information from these interviews is not reflected in this report. 
We also interviewed 8 former IAEA officials, 10 former U.S. government and national laboratory officials, and officials from 10 expert organizations—research institutions and nongovernmental organizations with knowledge in the areas of nuclear verification, monitoring, and safeguards. We selected these expert organizations by first identifying organizations that had previously served as sources of IAEA subject matter experts for GAO. To ensure a wide range of viewpoints, we supplemented our initial selection with individuals and organizations identified through a literature search and by recommendations from our initial set of expert organizations. We requested interviews with officials from all of the identified expert organizations and suggested contacts and interviewed all who agreed to participate (officials from 2 expert organizations provided written responses in lieu of in-person interviews). We analyzed their responses and grouped them into overall themes related to different elements of the objective. When referring to these categories of interviewees throughout the report, we use “some” to refer to three members of a group, “several” to refer to four or five members of a group, and “many” to refer to more than five members of a group. We conducted this performance audit from July 2015 to June 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Section Q of Annex I of the Joint Comprehensive Plan of Action (JCPOA) details procedures for International Atomic Energy Agency (IAEA) access to sites in Iran. These procedures together total no more than 24 days, as follows:
1. If IAEA has concerns about undeclared materials or activities—or activities otherwise inconsistent with the JCPOA—at locations that have not been declared under the comprehensive safeguards agreement or additional protocol, the agency may first seek clarification from Iran and, if Iran’s explanations do not resolve IAEA’s concerns, then request access to the sites in question.
2. Iran may propose means other than access to the site for resolving IAEA’s concerns. However, if IAEA cannot verify the absence of undeclared nuclear materials and activities, or of activities inconsistent with the JCPOA, after implementation of the alternative means, or if the two sides cannot agree on alternative means within 14 days of the agency’s original request for access, then Iran, in consultation with the Joint Commission, would resolve IAEA’s concerns through necessary means agreed upon with IAEA.
3. If there is no agreement between Iran and IAEA, the Joint Commission would, by consensus or a vote of 5 or more of its 8 members, advise on the necessary means to resolve IAEA’s concerns. This process would not exceed 7 days. Iran would then have 3 days to implement the necessary means.
According to IAEA, these procedures are for the purpose of JCPOA implementation and are without prejudice to the Comprehensive Safeguards Agreement (CSA) and Additional Protocol. 
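As a simple cross-check, the phase limits set out in Section Q sum to the 24-day ceiling cited above; the labels below are shorthand for the three steps and are not terms used in the JCPOA itself:

$$
\underbrace{14\ \text{days}}_{\text{clarification and alternative means}}
+ \underbrace{7\ \text{days}}_{\text{Joint Commission advice}}
+ \underbrace{3\ \text{days}}_{\text{Iranian implementation}}
= 24\ \text{days}
$$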
Generally, IAEA notifies the state of a request for access (e.g., inspections and complementary access), specifying the location, date and time, purpose, and activities to be carried out, as required in the CSA and Additional Protocol. The state is to confirm the receipt of the notification and facilitate IAEA’s access. If there were issues related to the implementation by the member state of its obligations under the CSA or the Additional Protocol, the Director General would inform the Board of Governors. In the case of Iran, if there were issues affecting the fulfillment of JCPOA commitments, the Director General would inform the Board and, in parallel, the Security Council, as appropriate. Should IAEA’s concerns regarding undeclared nuclear materials or activities, or activities inconsistent with the JCPOA, continue to be unresolved after engaging with Iran, the access procedures set out above, from Section Q of Annex I of the JCPOA, may be used. In addition to the contact named above, William Hoehn (Assistant Director), Alisa Beyninson, Antoinette Capaccio, R. Scott Fletcher, Bridget Grimes, Joseph Kirschbaum, Grace Lui, Thomas Melito, Alison O’Neill, Sophia Payind, Timothy M. Persons, Steven Putansu, Vasiliki Theodoropoulos, and Pierre Toureille made key contributions to this report.
In July 2015, multilateral talks with Iran culminated in an agreement called the JCPOA, through which Iran committed to limits on its nuclear program in exchange for relief from sanctions put in place by the United States and other nations. IAEA, an independent international organization that administers safeguards designed to detect and deter the diversion of nuclear material for nonpeaceful purposes, was requested to verify and monitor Iran's implementation of these commitments. The U.S. Department of State coordinates the United States' financial and policy relationship with IAEA. GAO was asked to review the authorities and resources IAEA has to carry out its activities regarding the JCPOA. This report, which updates the preliminary findings from an interim report released in February 2016 (GAO-16-417), examines (1) the JCPOA commitments that IAEA has been asked to verify and monitor and its authorities to do so, (2) the resources IAEA has identified as necessary to verify and monitor those JCPOA commitments, and (3) potential challenges and mitigating actions IAEA and others have identified with regard to verifying and monitoring the JCPOA. GAO analyzed the JCPOA and key IAEA documents and interviewed current and former IAEA officials, U.S. government officials, national laboratory representatives, and experts from research institutions. As outlined in the Joint Comprehensive Plan of Action (JCPOA), the International Atomic Energy Agency (IAEA) was asked to verify and monitor Iran's implementation of a range of nuclear-related commitments. IAEA is using its safeguards authorities and conducting additional activities agreed to by Iran under the JCPOA to do so. Iran's commitments include limits on uranium enrichment levels and on enriched uranium inventories. IAEA is verifying and monitoring Iran's implementation of these commitments through a range of activities conducted by its Safeguards Department, such as inspecting Iran's nuclear facilities, analyzing environmental samples, and monitoring Iran's uranium mines and mills. Under the JCPOA, Iran agreed to provisionally apply the Additional Protocol, an agreement that will give IAEA's inspectors access to an expanded range of locations, including locations where the agency seeks assurance regarding the absence of undeclared nuclear material and activities. The JCPOA also includes a mechanism in which participants to the agreement commit to resolve an access request from the agency within 24 days after the request is made. IAEA has identified the financial, human, and technical resources necessary to verify and monitor Iran's nuclear-related commitments in the JCPOA. IAEA has estimated that it needs approximately $10 million per year for 15 years in additional funding above its current safeguards budget for JCPOA verification. According to IAEA documents, this $10 million will be entirely funded through extra-budgetary contributions through 2016. IAEA officials said that the agency intends to propose that, of the $10 million, approximately $5.7 million, covering all Additional Protocol activities and inspector costs attributable to the JCPOA, be funded through IAEA's regular budget after 2016; the remaining approximately $4.4 million would be supported through extra-budgetary contributions from member states, such as the United States. IAEA also plans to transfer 18 experienced inspectors to its Office of Safeguards Verification in Iran from other safeguards divisions and to hire and train additional inspectors. 
According to IAEA officials, existing safeguards technical resources are sufficient to implement the JCPOA. IAEA may face potential challenges in verifying and monitoring Iran's implementation of certain nuclear-related commitments in the JCPOA. According to current and former IAEA and U.S. officials and expert organizations, these potential challenges include (1) integrating JCPOA-related funding into IAEA's regular budget and managing human resources in the safeguards program, (2) access challenges depending on Iran's cooperation and the untested JCPOA mechanism to resolve access issues, and (3) the inherent challenge of detecting undeclared nuclear materials and activities. IAEA has identified mitigating actions, such as utilizing remote monitoring and cost-free experts to address potential understaffing of IAEA safeguards activities in other countries as additional experienced inspectors are transferred to work on Iran-related safeguards. In addition, according to IAEA and U.S. officials as well as a former IAEA official GAO interviewed, IAEA has improved its capabilities in detecting undeclared activity. For example, according to U.S. officials, IAEA has adapted its inspector training program to focus on potential indicators of undeclared activities. GAO is not making any recommendations.
Foot and mouth disease (FMD) is one of the most devastating viral animal diseases affecting cloven-hoofed animals such as cattle and swine, and has occurred in most countries of the world at some point during the last century. Although the disease has no human-health implications, it can have enormous economic and social consequences, as recent outbreaks in the United Kingdom and Taiwan have demonstrated. These consequences occur because the international community values products from countries that are FMD-free and generally restricts international trade in FMD- susceptible products from countries affected by an outbreak. Most FMD- affected countries, therefore, take whatever measures necessary to regain their FMD-free status as quickly as possible. In the United States, the U.S. Department of Agriculture (USDA) has primary responsibility for protecting domestic livestock from animal diseases such as FMD. The U.S. Customs Service supports USDA in these efforts. FMD—a highly contagious viral disease affecting primarily cloven-hoofed animals, such as cattle, sheep, swine, and goats—has 7 types and over 80 subtypes. Immunity to, or vaccination for, one type of the virus does not protect animals against infection from the other types. FMD-infected animals usually develop blister-like lesions in the mouth, on the tongue and lips, on the teats, or between the hooves, which causes them to salivate excessively or become lame. Other symptoms include fever, reduced feed consumption, and abortions. Cattle and pigs are very sensitive to the virus and show symptoms of the disease after a short incubation period of 3 to 5 days. The incubation period in sheep is considerably longer, about 10 to 14 days, and the clinical signs of the disease are usually mild and may be masked by other conventional conditions, thereby allowing the disease to go unnoticed. The mortality rate for nonadult animals infected with FMD varies and depends on the species and strain of the virus; in contrast, adult animals usually recover once the disease has run its course. However, because the disease leaves them severely debilitated, meat-producing animals do not normally regain their lost weight for many months, and dairy cows seldom produce milk at their former rate. The disease therefore can cause severe losses in the production of meat and milk. The FMD virus is easily transmitted and spreads rapidly. Prior to and during the appearance of clinical signs, infected animals release the virus into the environment through respiration, milk, semen, blood, saliva, and feces. The virus may become airborne and spread quickly if pigs become infected because pigs prolifically produce and excrete large amounts of the virus into the air. Animals, people, or materials that are exposed to the virus can also spread FMD by bringing it into contact with susceptible animals. For example, the virus can spread when susceptible animals come in contact with contaminated animal products, such as meat, milk, hides, skins, and manure; transport vehicles and equipment; clothes or shoes worn by people; and hay, feedstuffs, or veterinary biologics. The FMD virus has a remarkable capability for remaining viable for long periods of time in a variety of animate and inanimate objects. For example, the virus can persist in the human nasal passages for up to 36 hours, manure for 1 to 24 weeks, fodder for 1 month, and on shoes for 9 to 14 weeks. 
The ability of the virus to persist in the environment and in other products depends on temperature and potential of hydrogen (pH) conditions. Generally, the virus can survive freezing but cannot survive at temperatures above 50° Celsius (122° Fahrenheit) or at pH levels of less than 6 or greater than 9. Table 1 shows the various lengths of time that the FMD virus can survive in some selected products. FMD can be confused with several similar but less harmful animal diseases that also produce blisters and cause animals to salivate, such as vesicular stomatitis, bovine viral diarrhea, and foot rot. Two foreign swine diseases are also clinically identical to FMD—swine vesicular disease and vesicular exanthema of swine. The only way to distinguish between FMD and these other diseases is through laboratory analyses of fluid or tissue samples. FMD is also sometimes confused with mad cow disease or bovine spongiform encephalopathy (BSE). BSE is a fatal, neurodegenerative disease found in cattle in 23 countries around the world. Cattle contract the disease through animal feed that contains protein derived from the remains of diseased animals. Scientists generally believe that an equally fatal disease in humans—known as variant Creutzfeldt-Jakob disease—is linked to eating beef from cattle with BSE. However, unlike mad cow disease, FMD has no known human health implications. FMD is present in about 60 percent of the countries in the world and endemic in many countries in Africa, the Middle East, Asia, and South America. The relatively few areas that are considered free of FMD include North and Central America, Australia, New Zealand, and the Caribbean. Figure 1 shows the presence of FMD worldwide for the period 1992 through 2002. In 2000 and 2001, over 40 countries reported outbreaks of FMD, and during the first 5 months of 2002, five countries reported outbreaks. The spread of certain strains of the virus also demonstrates how quickly it can move throughout the world. For example, the FMD virus serotype O, known as the Pan-Asia strain, was first identified in northern India in 1990 and was subsequently found in Nepal in 1993. It then spread westward into Saudi Arabia during 1994 and, subsequently, throughout the Near East and into Europe (Thrace region of Turkey, Bulgaria, and Greece) in 1996. The Pan-Asia strain was also found in Bangladesh in 1996 and in Bhutan in 1998. In 1999 it was reported in mainland China and then detected in Taiwan. By late 1999 and in 2000, it had reached most of Southeast Asia. Most recently, the Pan-Asia strain was found in the Republic of Korea, Japan, the Primorsky Territory of the Russian Federation, and Mongolia (areas free from FMD since 1934, 1908, 1964, and 1973, respectively). The Pan-Asia strain is also responsible for the 2001 outbreak of FMD in the United Kingdom that subsequently spread to France, Ireland, and the Netherlands. Figure 2 shows the incidence of four types of FMD virus worldwide, including the type O Pan-Asia strain. In North America, the last outbreaks of FMD for the United States, Canada, and Mexico occurred in 1929, 1952, and 1953, respectively. The United States has worked closely with both Canada and Mexico to eradicate FMD from North America. The Office International des Epizooties (OIE)—an intergovernmental organization created in January 1924 by an international agreement signed by 28 countries—was established to guarantee the transparency of information on the animal disease status of member countries. 
In addition, OIE collects and analyzes veterinary scientific information and disseminates it to member countries, provides expertise and promotes international solidarity for the control of animal diseases, and guarantees the sanitary safety of world trade by developing rules for international trade in animals and animal products. In May 2001, OIE had 158 member countries. OIE classifies member countries (or certain zones within these countries) as being FMD-free with or without vaccination if they meet certain criteria detailed in the OIE International Animal Health Code. For example, to obtain FMD-free status without vaccination, a member country should (1) have a record of prompt animal disease reporting; (2) send a declaration that it has been FMD-free and has not used vaccination for 1 year; (3) present evidence that it has an effective system of surveillance; (4) implement regulatory measures for the prevention and control of FMD; and (5) provide evidence that no vaccinated animals have been imported into the country, since such animals can become the source of future infections. Similarly, to obtain FMD-free-with-vaccination status, a country should (1) have a record for prompt animal disease reporting, (2) send a declaration that there have been no outbreaks of FMD for 2 years, (3) provide evidence that the country has effective surveillance systems and has implemented necessary regulatory measures to prevent and control FMD, (4) provide proof that routine vaccinations are carried out and that the vaccines comply with OIE standards, and (5) have an intensive and frequent system to detect any viral activity. When FMD occurs in an FMD-free country or zone where vaccination is not practiced, the affected country must reapply after the outbreak to regain its FMD-free-without-vaccination status from OIE. OIE standards require a country to wait until 3 months after the last reported case of FMD when a “stamping out approach” (immediate slaughter of diseased and exposed animals with no vaccination) is used to eradicate the disease before the country can apply for reinstatement of its FMD-free status. As part of this process, surveillance results of laboratory-screening tests (serological surveillance results) must be provided to OIE to prove that the disease has been eradicated. If vaccination was used to control the outbreak, then the country must wait until 3 months after the last vaccinated animal is slaughtered and serological surveillance results prove that the disease has been eradicated before reapplying for FMD-free status. The international community generally places a high value on products from countries that are FMD-free without vaccination. Such countries can export both live animals and animal products easily to other FMD-free countries. In contrast, countries that have an FMD-free-with-vaccination status are restricted to trading animal products that can be treated to ensure that the virus is inactivated. As a result, most countries that are FMD-free without vaccination resort to a stamping out process to eradicate the disease if an outbreak occurs. The United Kingdom and Taiwan followed this process in 2001 and 1997, respectively. Similarly, if an outbreak were to occur in the United States, the current U.S. policy requires all infected and exposed animals to be immediately slaughtered and disposed of by incineration, burial, or rendering. In recent years, the international community has been encouraging the acceptance of regionalization policies for international trade. 
Regionalization involves declaring one or more areas of a country FMD-free while other areas are responding to an outbreak. Under a regionalization policy, in the event of an FMD outbreak in the United States, even if one state or area were infected, the nation as a whole might not lose its FMD-free status, and trade restrictions might not be enforced on all of our FMD-susceptible products. An FMD outbreak could cost the U.S. economy billions of dollars in both direct and indirect costs. Direct costs to the government would include the costs of disease control and eradication, such as the maintenance of animal movement controls, control areas, and intensified border inspections; the destruction and disposal of infected animals; vaccines; and compensation to producers for the costs of disease containment. However, government compensation programs may not cover 100 percent of producers’ costs. As a result, direct costs would also be incurred for disinfection and for the value of any slaughtered animals not covered by government compensation. According to recent U.K. government estimates, the direct costs for control and eradication of the 2001 outbreak were about $4 billion. According to several estimates, the direct costs of controlling and eradicating a U.S. outbreak of FMD could range up to $24 billion in current dollars, depending, among other things, on the extent of the outbreak and the control strategy employed. The value of lost export sales in the event of an FMD outbreak would represent a significant portion of the total direct costs to the U.S. economy. According to USDA officials, a single case of FMD in the United States would cause our trading partners to prohibit U.S. exports of live animals and animal products. This ban could result in losses of between $6 billion and $10 billion a year while the United States eradicated the disease and until it regained disease-free status. These losses may be mitigated to some extent by increased domestic sales of meat from disease-free portions of the United States that may otherwise have been exported. However, USDA officials believe that many people in the United States would refuse to eat meat during an FMD outbreak; thus, the offset probably would be minimal. Indirect costs of an FMD outbreak would include those costs affecting consumers, ancillary agricultural industries, and other sectors of the economy. For example, if large numbers of animals are destroyed as part of a control and eradication effort, then ancillary industries such as meat-processing facilities and feed suppliers are likely to lose revenue. Furthermore, an FMD outbreak would result in adverse effects such as unemployment, loss of income (to the extent that government compensation does not fully reimburse producers), and decreased economic activity, which could ripple through other sectors of the economy as well. For example, the loss of agricultural income could result in reduced sales of consumer goods. In the United Kingdom, according to government estimates, the 2001 outbreak resulted in losses to the tourism industry of over $5 billion that were comparable to the losses sustained by the food and agriculture sector. In addition, not only may consumers have to pay higher prices for the remaining supply of animal products affected by an FMD outbreak, but the price of substitutes is likely to rise, as well. 
For example, as the price of FMD-free meat increases, some consumers are likely to buy poultry or other meat substitutes, thus causing the prices of these substitute products to rise. However, the higher prices that consumers pay for substitutes do not result in a net cost to the economy because these higher prices result in increased revenues for poultry producers and others. An FMD outbreak can have significant social impacts, such as enormous psychological damage, especially on families and localities directly affected by the outbreak, as the U.K. experience in 2001 illustrates. For example, in May 2001, the Welsh Institute of Rural Health found that individuals affected by the FMD outbreak experienced a range of symptoms, including tearfulness, lack of sleep, loss of appetite, increased anger, irritability, and general depression. An increase in marital discord was also noted. One general practitioner reported that 50 percent of his patients affected by the FMD outbreak required antidepressant drugs. Some farming families even sent their children away from home during the outbreak rather than have them witness the slaughter and disposal of the family’s livestock. Consumer confidence in the safety of the U.K. food supply was also adversely affected by the outbreak. A survey by the United Kingdom’s Institute of Grocery Distribution determined that because of the FMD and mad cow disease outbreaks, many consumers in the United Kingdom now consider meat and dairy products to be unsafe. As a result, these consumers have changed their grocery-buying habits. An outbreak also significantly disrupts daily life. During the U.K. outbreak, normally busy livestock producers suddenly had almost nothing to do because their animals had been destroyed and their properties were quarantined. According to one study of the effects of FMD on farm life in the Cumbria area of the United Kingdom, most farming households had to curb their usual daily activities, and only the most essential movements on and off the farms were permitted. Lost income caused stress to families because they had to cut back on their household expenditures, and some had to renegotiate loans. The study notes that the enforced isolation caused by the quarantines added to the tensions and stresses already being experienced by both adults and children. Within USDA, the Animal and Plant Health Inspection Service (APHIS) has the lead responsibility for protecting the nation’s livestock from foreign animal diseases, which are diseases not native to the United States as well as those thought to have been eradicated. Assisting APHIS in this endeavor are USDA’s Foreign Agricultural Service, the Food Safety and Inspection Service, and the Homeland Security Council. Within APHIS, several groups share responsibility for protecting U.S. livestock from the incursion of foreign animal diseases such as FMD: International Services. Working with its counterpart organizations in foreign countries, this group seeks to reduce the international spread of animal and poultry diseases. Its goal is to protect U.S. livestock and poultry by reducing risk abroad through disease-management strategies provided to exporting countries before they send their animals and products to the United States. Veterinary Services. To protect and improve the health, quality, and marketability of our nation's animals, animal products, and veterinary biologics, this group seeks to prevent, control, and/or eliminate animal diseases, and monitor and promote animal health and productivity. 
This group administers laws and regulations on importing animals and animal products, including embryos and semen, to ensure that imports are free from certain disease agents. In addition, Veterinary Services provides training for state and private veterinarians on foreign animal diseases of concern and provides animal disease diagnostic and surveillance testing. Veterinary Services has primary responsibility for inspecting and ensuring the safety of live animal and animal product imports to the United States. Within Veterinary Services, the Emergency Programs unit coordinates efforts to prepare for and respond to animal disease outbreaks, including FMD, and in the fall of 2001, published a draft plan for responding to an FMD outbreak. It employs veterinarians trained to detect and respond to an FMD outbreak. Emergency Programs also provides federal and state veterinarians and others with training on foreign animal diseases. Plant Protection and Quarantine. Inspectors in this group are USDA’s primary presence at 144 of the 301 ports of entry in the United States, as well as 8 foreign ports. According to USDA, inspectors are present at all major ports of entry, and staffing is based on risk assessments and supplemented with tools such as detector dogs, X-rays, and hand-held remote-sensing equipment. USDA inspectors screen and physically inspect animal products and other cargo arriving by air, sea, or land, as well as international passengers and their luggage arriving via air, sea, or land border crossings. Most notably, the beagles in Plant Protection and Quarantine’s Beagle Brigade sniff travelers' luggage for prohibited fruits, plants, and meat that could harbor harmful plant and animal pests and diseases. According to USDA, by the end of this year, it will have increased the number of dog teams to 123, which is double the level available 2 years ago. In commenting on a draft of this report, USDA stated that by the end of this year, APHIS will also have increased the number of its safeguarding personnel to approximately 3,870—an increase of 50 percent over its fiscal year 2000 staffing levels. Moreover, USDA told us that it has hired 18 additional veterinarians who are conducting port-of-entry reviews, working with state counterparts, and providing technical guidance and training on working with and handling animal products and byproducts and international garbage that could pose a threat of foreign animal diseases. In addition, USDA’s Foreign Agricultural Service (FAS) operates programs designed to build new markets and improve the competitive position of U.S. agriculture in the global marketplace. FAS is responsible for USDA’s overseas activities, such as market development, international trade agreements and negotiations, and the collection and analysis of statistics and market information. FAS supports U.S. agricultural interests through its network of agricultural counselors, attachés, and trade officers stationed in many foreign countries. FAS officials primarily deal with agricultural trade issues and meet with host government and industry officials to discuss and facilitate agricultural trade. USDA’s Food Safety and Inspection Service (FSIS) has primary responsibility for ensuring the safety of imported and domestic meat and meat products meant for human consumption. FSIS stations inspectors at ports of entry to conduct sampling and inspection functions on imported meat products. 
FSIS also has responsibility for approving countries that are eligible to export meat products to the United States. In fulfilling this responsibility, FSIS conducts periodic reviews of eligible countries. According to USDA, FSIS’s inspection of livestock before slaughter is an important surveillance tool for detecting the presence of FMD in the United States. Finally, USDA’s Homeland Security Council is responsible for leading and coordinating USDA’s activities to plan for and manage agriculture-related crises as well as emergency programs. This council serves as USDA’s primary contact with the Federal Emergency Management Agency and facilitates coordination with other federal agencies, state and local governments, and private-sector organizations. The U.S. Customs Service (Customs) is the nation’s primary enforcement agency for preventing the entry of a number of potentially harmful products into the United States, including FMD-contaminated products. In addition to their Customs responsibilities to ensure that proper duties or tariffs are paid on imported products, Customs inspectors work to enforce the regulations of about 40 federal agencies, such as those of USDA. Customs inspectors review paperwork, such as manifests and bills of lading, and physically inspect cargo and international passengers and their luggage. Customs has inspectors stationed at all 301 ports of entry throughout the United States, including international airports and seaports and land border crossings along the Canadian and Mexican borders. Customs also has inspectors at some foreign locations, such as the international airport in Toronto, Canada, where they perform preclearance inspections of passengers and their luggage prior to entry into the United States. Customs inspectors also examine international mail and packages arriving in the United States at the 14 facilities handling mail of foreign origin. Senator Tom Daschle asked us to determine whether (1) U.S. processes to obtain and disseminate information on foreign FMD outbreaks are adequate and timely, (2) U.S. measures for preventing FMD from entering the country are effective and comparable with those of other selected countries, and (3) the United States could respond quickly and effectively to an outbreak of FMD if it were to occur. To address the first question, we obtained and reviewed relevant documents, and we interviewed USDA and Customs officials. In particular, we reviewed the adequacy and timeliness of the information obtained and disseminated by USDA after the 2001 FMD outbreak in the United Kingdom. For the second question, we reviewed relevant legislation, regulations, and other USDA documents. We also interviewed USDA, Customs, and state officials. To observe the preventive measures for international cargo, we visited three large seaports in Elizabeth, New Jersey; Baltimore, Maryland; and Houston, Texas. To observe the preventive measures for international mail, we visited international mail-processing facilities in New Jersey and Virginia and one international express package carrier in Kentucky. To observe the preventive measures for live animals imported through U.S.-Canadian and U.S.-Mexican land ports of entry, we visited the Sarnia, Ontario, and Nuevo Laredo, Mexico, border crossings. 
To observe the preventive measures for international passengers, we visited Dulles International Airport, and obtained information on the preclearance procedures used to process international passengers entering the United States via Canada at the international airport in Toronto, Canada. We also gathered information on how garbage from international carriers is handled at airports and seaports. In addition, we visited two nearby county and two state fairs in Maryland and Virginia to observe how USDA’s guidance for biosecurity measures to prevent the spread of disease at U.S. livestock and agricultural shows was implemented. As a result of the heightened level of security at airports after September 11, 2001, and because our review was largely conducted after the U.K. outbreak had ended, we were unable to implement a portion of the review as originally planned. In particular, we were unable to survey passengers who were returning to the United States from the United Kingdom, during the outbreak, at the airport after they left the passenger-processing area. Instead, we surveyed by telephone 60 passengers who visited the United Kingdom during the time of the outbreak (Mar. through Sept. 2001). We asked them to recall various aspects of their return trip and the processing they underwent at U.S. airports. These results cannot be generalized and represent only the experiences of the people whom we surveyed. In addition, because we asked people to recall events after the passage of 4 to 6 months, their recollections of certain events and processes might not have been as clear as they would have been immediately upon arrival. Furthermore, to respond to our second question, we compared the preventive measures used by Canada, Mexico, and the United Kingdom with those used by the United States. We selected Canada and Mexico for our review because the ability of the United States to protect its livestock from FMD also depends on the ability of our neighbors to prevent the disease; according to USDA officials, if any of the three countries has an FMD outbreak, the other two are also likely to have an outbreak. We included the United Kingdom in our analysis because it is a major U.S. trading partner and because of its recent experience with FMD. To obtain information on the preventive measures used by Canada and Mexico, we visited these two countries, met with federal officials, and obtained and reviewed relevant documents. While in Canada and Mexico, we visited airports, seaports, international mail-processing facilities, and border crossings to observe the preventive measures used by these countries. To obtain information on the United Kingdom’s preventive measures, we reviewed and summarized legislation and regulations for the European Union and the United Kingdom, as well as other publicly available documents. To ensure the accuracy and completeness of our information, we shared the summaries that we prepared on the preventive measures used by the three countries with officials in these countries and asked for their review and comments. The information on these foreign countries’ preventive measures does not reflect our independent legal analysis. Finally, for our third question, we reviewed federal and state emergency response plans as well as other key documents and federal legislation and regulations. We interviewed USDA officials, industry representatives, and state officials. We also interviewed a group of selected veterinarians and animal health technicians who were part of the U.S. 
contingent that supported the United Kingdom’s response efforts in 2001 to obtain their perspectives on U.S. preparedness and observations on lessons learned from the U.K. outbreak. We interviewed the state veterinarians in six states that are major U.S. livestock producers to obtain their perspectives on their states’ preparedness efforts as well as the overall U.S. ability to respond to an outbreak if it were to occur. We also attended a USDA training session and a conference organized by the Western States Livestock Health Association that included information on U.S. preparedness and response to an FMD outbreak. We provided USDA and Customs with a draft of this report for review and comment. The written comments we received from USDA are presented in appendix V, and those we received from Customs, in appendix VI. In addition, we received technical comments from USDA that we have incorporated throughout the report as appropriate. We conducted our work from August 2001 through May 2002 in accordance with generally accepted government auditing standards. USDA relies on a wide variety of sources to obtain information on outbreaks of FMD overseas. Its sources include APHIS and FAS staff stationed abroad, official notifications from international trade or animal health organizations, and notifications from affected countries. But USDA’s dissemination of this information is more problematic because it has no formal process—detailed procedures and protocols—for sharing information on foreign FMD outbreaks with Customs, which provides the first line of defense against potentially contaminated products entering U.S. ports. USDA does, however, share the information it develops with agencies within the department, states, public and private veterinarians, industry groups, and the public through various methods, including e-mails, postings to USDA’s Internet site, telephone calls, and media alerts. U.S. measures to prevent an FMD outbreak—control and eradication overseas and the port-of-entry screening of livestock, animal products, and passengers—have been successful since 1929. Nevertheless, the United States remains vulnerable to an FMD outbreak because of the nature of the virus, the many pathways by which it can come into the country, and the growing volume of both legal and illegal passengers and cargo entering the country. Other countries face similar challenges in protecting their livestock from FMD and use preventive measures that are comparable to those the United States uses. However, the United States could also build on the experiences of other countries to improve its preventive measures. As a first line of defense in safeguarding U.S. animal resources against the introduction of pests and diseases, USDA helps prevent, control, and eradicate agricultural health threats where they originate outside the United States. By helping other nations eradicate or control these outbreaks, USDA reduces the risk of agricultural pests and diseases reaching U.S. borders. In North America, U.S. efforts to eradicate and control FMD have largely focused on Mexico, because of our shared border and the possible threat of the FMD virus’s moving overland from South America, where the disease is endemic in some countries. USDA has staff located in Mexico working with the Mexico-United States Commission for the Prevention of Foot and Mouth Disease and Other Exotic Animal Diseases. 
The commission, formed in 1947 as a combined U.S.-Mexican effort to eradicate FMD from Mexico, built Mexico’s animal health infrastructure and successfully eradicated FMD from Mexico in 1954. Today, USDA and Mexican veterinarians work together, through the activities of the commission, to provide disease surveillance, diagnostic testing, and training for Mexico to ensure that the country remains FMD-free. According to USDA officials in Mexico, the United States initially covered about 80 percent of the costs for the joint program; however, as the Mexican government assumed greater responsibility for the program, the U.S. share has decreased to about 20 percent. In fiscal year 2001, USDA provided about $160,000 in funding for the commission’s activities. According to USDA officials, this funding supports the commission’s high-security laboratory in Mexico City by providing training, supplies, and equipment. In addition, for over 30 years, the United States has held regular meetings on animal health issues with the governments of Canada and Mexico to harmonize North America’s import requirements and, more recently, to coordinate preventive actions and emergency response activities in the event of an FMD outbreak. For example, in 2000, the three countries held joint exercises—known as the Tripartite Exercise 2000—to test their FMD communication and response plans, and to assess their response systems. As a result of this exercise, the three governments signed a memorandum of understanding to formally establish the North American Animal Health Committee. This committee represents animal health issues for the North American Free Trade Agreement and seeks to harmonize live animal and animal product import requirements for North America. The committee will also plan emergency response activities and perform joint test exercises to ensure that all three countries remain prepared to respond to an FMD outbreak. The United States also supports efforts to establish FMD-control zones in Central and South America. For example, to help alert countries in Central and North America about the potential incursion of FMD from South America, USDA has established cooperative programs with Panama and Colombia. In Panama, USDA supports the U.S.-Panama Cooperative Program for the Prevention of Foot and Mouth Disease, which maintains the Darien Gap area of Panama free from FMD and other foreign animal diseases. This program conducts field surveillance at high-risk border points and annual training, analyzes technical data, and improves the infrastructure. The program also provides support for the Investigative Laboratory for Vesicular Disease, which provides bio-containment, diagnostic, and detection capabilities for vesicular and other foreign animal diseases in Central America. Through the Colombian program, USDA helps maintain an FMD-free barrier along the Colombia-Panama border. This barrier serves as the "first line of defense" for preventing the spread of FMD northward into Central America, Mexico, and the United States, which are all FMD-free. Until FMD is eradicated from South America, USDA believes that maintaining this barrier will prevent the disease’s northward spread. USDA provides technical assistance and half of the funding for the program. 
As part of its disease exclusion activities for the region, USDA also has cooperative agreements with all the other Central American countries to support joint monitoring and surveillance activities, including field investigations and the collection of laboratory samples for FMD and other foreign animal diseases. Through these agreements, USDA helps transfer surveillance and detection technologies to these countries. When FMD strikes other nations—as it did recently, for example, in Argentina and the United Kingdom—the United States may assist in controlling and eradicating the disease. For example, a total of 327 U.S. animal health professionals, including over 300 veterinarians, helped eradicate the 2001 outbreak in the United Kingdom. The Americans came from USDA, other federal agencies, and state governments. Beginning in March 2001, they traveled to the United Kingdom, generally in groups that averaged about 10 per week, and assisted with the response for about a month. At the peak of the outbreak during March and April, about 100 U.S. animal health professionals were assisting in the U.K. response. The U.S. responders with whom we spoke participated in surveillance activities, such as collecting blood samples, and epidemiology tasks, such as tracking and predicting the path of new disease outbreaks. They also issued permits and licenses to move animals and products such as silage. By providing such assistance, the United States not only helps ensure that the disease is eradicated quickly, but also helps reduce the potential for FMD-infected products to arrive at U.S. ports of entry. Preventive measures at U.S. borders provide the second line of defense against the incursion of FMD into the United States. USDA has identified several key pathways by which the FMD-virus could enter the United States. To respond to the risk posed by these pathways, USDA implemented measures designed to ensure that animals, products, passengers, and equipment arriving at U.S. borders are free of the virus and do not pose a risk to U.S. livestock. However, some level of risk is inherent in international trade and travel, and no set of measures can ever completely eliminate the possibility that FMD will enter the country. Moreover, because FMD is a hardy virus and the level of inspection resources cannot keep pace with the increasing volume and magnitude of cargo and passengers, both legal and illegal, that continue to enter the country, the United States remains vulnerable to an outbreak. The FMD virus could enter the United States through a number of key pathways: live animal imports, imports of animal and other products, international passengers and their luggage, garbage from international carriers, international mail, and military personnel and equipment returning from overseas. For each of these pathways, USDA has developed and implemented specific preventive measures described below. Live animal imports. The United States allows imported livestock, such as swine, cattle, and sheep, only from preapproved countries that USDA judges to be free of FMD and other diseases of concern. For example, in April 2002, USDA recognized 49 countries or geographical regions as free of FMD. (See app. I.) Generally, live animals can be imported only through designated ports of entry, the majority of which are located along U.S. borders shared with Canada and Mexico, and three others located on the east and west coasts. 
Most live cattle imports into the United States originate from Canada and Mexico; live hog imports, from Canada; and live lamb imports, from Australia and New Zealand. Livestock exported to the United States must be accompanied by a U.S. import permit and a health certificate from an official government veterinarian in the country of origin. The health certificate states that the animals have been in the exporting country for at least 60 days prior to shipment and are free of other diseases of concern. Generally, animals arriving from countries other than Canada and Mexico may be quarantined. Zoological ruminants and swine from FMD-affected countries are permitted into the United States but must be processed through USDA's New York Animal Import Center. Animal and other product imports. Thousands of animal and other products that could be contaminated with the FMD virus could potentially enter the United States during the course of normal international trade. These products include animal products meant for human consumption, such as meat and dairy products; nonfood animal products, such as hides, skins, casings, and animal extracts; and nonanimal products, such as farm equipment, hay, and straw. USDA regulates the importation of this diverse range of products to help minimize the risk of introducing FMD into the United States. USDA implements different import rules for FMD-free and FMD-affected countries. Generally, for countries free of FMD and other diseases of concern, USDA imposes few restrictions on animal product imports. For FMD-affected countries, USDA prohibits the importation of all susceptible products shipped on or after a date 3 weeks prior to the official notification of the outbreak. This prohibition remains in effect until USDA reassesses the disease status of the affected country and determines the level of trade that can resume. USDA allows imports of animal and other products from FMD-affected countries only if they meet certain requirements. These requirements vary for different kinds of products, as follows: Animal products meant for human consumption. Generally, fresh, chilled, or frozen meat from cattle, sheep, and pigs, as well as fresh milk, is prohibited from FMD-affected countries. However, processed meat and dairy products are allowed from FMD-affected countries if they meet certain requirements. For example, meat products can be imported from FMD-affected countries only if (1) the country and its meat processing plants have been deemed eligible by FSIS to export meat products to the United States and (2) the processing plants also meet APHIS's meat-processing standards. The APHIS standards ensure that meat products from these countries are not contaminated with the FMD virus and require that the products be processed in a manner that will inactivate the virus; for example, they must be fully cooked, dry cured, or canned and shelf-stable, with all bones removed. Moreover, a U.S. import permit and an official veterinary health certificate from the country of origin must accompany certain meat shipments. Similarly, most dairy products from FMD-affected countries must meet APHIS's requirements to ensure that they do not pose a risk of introducing FMD. For example, milk products that are in a concentrated liquid form and are shelf-stable without refrigeration are allowed from FMD-affected countries. Some dairy products, such as condensed milk, require a U.S. import permit, while others, such as yogurt and butter, are unrestricted and do not require a permit. 
Nonfood animal products. A variety of nonfood animal products are allowed from FMD-affected countries if they have been properly treated to inactivate the virus; however, a U.S. import permit may be required. For example, tanned hides, leather, and fully finished mounted animal trophies can be imported into the United States from FMD-affected countries. Other products. USDA does not allow imports of grass, hay, or straw used for feeding, bedding, or other purposes from FMD-affected countries. However, used farm equipment is allowed with a certificate from the exporting country stating that the equipment has been steam cleaned. APHIS officials inspect farm equipment at U.S. ports of entry to ensure that it is free of dirt and soil. If dirt or soil is found, inspectors determine whether the equipment can be adequately washed with detergent and disinfected at an appropriate location before granting it approval for entry into the United States. All animal and other products arriving at U.S. ports of entry, whether from FMD-free or FMD-affected countries, are subject to inspection by U.S. federal inspectors. Customs officials, who review the documents accompanying the shipments either electronically or on paper, provide the first level of inspection for these shipments. On the basis of this review, Customs is authorized either to release the shipments into commerce or to hold them for USDA inspection. USDA provides Customs with a list of products to be flagged for inspection by APHIS. APHIS inspectors ensure that all the necessary documents accompanying the shipment, such as import permits and official health certificates, are complete and that the shipments match their manifests. In some instances, APHIS inspectors will inspect the shipping containers to check their contents. After APHIS completes its inspection, the shipment may proceed to FSIS and/or the Food and Drug Administration for further inspection, depending on which agency regulates these products for human health and safety, or may proceed to Customs for release into commerce. According to USDA, FSIS inspectors at ports of entry visually examine all shipments of products under FSIS's jurisdiction and randomly select some for more in-depth examination. In commenting on a draft of this report, USDA noted that it has primary inspection responsibility for agricultural cargo and manifests at those ports staffed with USDA inspectors. To ensure that these shipments continue to be referred to USDA for inspection, the department said that it is working with Customs and other federal agencies to develop an automated targeting system, which will serve as an electronic interface among federal agencies to identify and automatically segregate high-risk plant cargo and track imported animals and animal products. International passengers. International passengers who may have been in contact with the FMD virus, either through contact with infected animals or with materials such as soil and manure, or who bring potentially contaminated products into the country may also transmit the virus to the United States. To reduce the risk associated with this pathway, USDA provides the following FMD-prevention information for, and applies the following types of scrutiny to, international passengers: USDA asks airlines to make in-flight announcements on international flights and, at ports of entry, posts warning signs and plays prerecorded announcements about how international passengers can help keep FMD out of the United States. 
International passengers must fill out a U.S. Customs declaration form that asks whether they are bringing any animal or plant products into the country and whether, while traveling abroad, they visited a farm or were in contact with animals. Passengers responding affirmatively to these questions are sent by Customs officials to a USDA inspection area at the port of entry for further processing. USDA officials may x-ray and inspect the contents of the passengers' baggage; ask them additional questions; confiscate any prohibited items, such as meat and dairy products; and clean and disinfect their shoes. USDA's Beagle Brigade and inspectors generally rove the baggage claim areas at major ports of entry to help identify passengers and luggage that may be carrying prohibited food items. USDA inspectors not only look for suspicious packages, such as bulky, misshapen, or leaking containers, but also question passengers about their travels to determine whether they present a greater risk of disease transmission. If the dogs or the inspectors identify such passengers, the passengers are referred to the USDA inspection area for further processing. According to the international passengers we surveyed, however, some of these measures were not consistently implemented after the 2001 FMD outbreak in the United Kingdom. For example, some passengers told us that the airlines they traveled on did not make any in-flight announcements about FMD. Other passengers told us that even though their Customs declaration forms indicated that they had been in contact with animals or had visited a farm while in the United Kingdom, they were not referred to the USDA inspection area at the airport for further processing, or they had to ask USDA personnel at the airport to examine and disinfect the shoes they had worn in FMD-affected areas of the United Kingdom. Garbage from international carriers. Garbage from international carriers, such as airplanes and ships, can also introduce the FMD virus into the United States if the garbage contains food items contaminated with the virus. Therefore, USDA has developed guidelines to ensure that garbage from international carriers is properly handled and disposed of so that it does not present a risk to U.S. livestock. For example, USDA inspectors supervise the removal of all international garbage from airplanes and ships. This garbage must be transported in leak-proof containers and must be disposed of properly, such as by incineration or by sterilization and subsequent burial at a landfill. USDA has compliance agreements with catering firms and cleaners that outline the proper handling and approved disposal methods for international garbage. Before a compliance agreement is signed, APHIS officials will, among other things, review the application; visit the handling, processing, or disposal facilities; observe the operation of any equipment to determine its adequacy for handling garbage; and certify and approve the garbage cookers and sterilizers to be used to process international garbage. USDA also monitors firms operating under these compliance agreements to ensure that they abide by the conditions stated in the agreements. International mail. Prohibited animal products that could transmit the FMD virus may also be sent through international mail and courier services to U.S. residents. As a result, international mail packages entering the United States are subject to inspection by Customs and USDA officials. 
Customs generally reviews the declaration form on the packages and either visually inspects or x-rays them as part of its responsibility to screen international mail for illegal and prohibited items, such as contraband and drugs. At USDA's request, Customs can also screen international packages for prohibited animal products, such as meat and dairy products from FMD-affected countries. Customs sets aside packages that appear to contain such items for USDA's inspection. USDA officials will review the declaration forms and may x-ray or open these packages for physical inspection. If the item in the package is a permissible product, the officials will reseal the package and release it for delivery; otherwise, the item will be confiscated and destroyed. In commenting on a draft of this report, USDA noted that mail from high-risk countries is more thoroughly scrutinized on the basis of pathway analysis. Military personnel and equipment. Because U.S. military forces are deployed throughout the world, troops and military equipment returning to the United States could introduce FMD and other diseases into the country. As a result, USDA provides support for the military and helps oversee the reentry of military cargo, personnel, equipment, and personal property to reduce the risk of introducing diseases into the United States. For example, military personnel must declare all agricultural items they are bringing back to the United States and identify whether they have been on farms or in contact with animals while abroad. Their clothing and gear should also be cleaned and washed before reentering the country. Similarly, all military rolling stock, such as Humvees, trucks, weapons systems, and tanks, as well as other used military gear, such as canvas tents, must be thoroughly cleaned before reentry. Pallets, wooden crates, and other military equipment must be free of soil, manure, and debris. Military equipment used overseas to eradicate animal diseases such as FMD is not allowed reentry. For small-scale operations, the military must notify USDA at least 7 days in advance of arrival at a U.S. port of entry. USDA will determine whether appropriate cleaning facilities are available at the first port of entry, and all items will be held at this port for inspection. If approved cleaning facilities are not available or if the equipment is contaminated to an extent that prevents cleaning, USDA will refuse to allow reentry. Large-scale operations require a 30-day notification. The United States has not had an outbreak of FMD since 1929, and some USDA officials and animal health experts believe that the continued health of U.S. livestock is directly related to the effectiveness of U.S. measures to prevent the incursion of the disease. However, these and other experts agree that the nation remains vulnerable to an FMD outbreak for the following reasons: FMD is a highly contagious and hardy virus that remains viable for long periods of time. FMD can be carried and transmitted by a variety of animate and inanimate items. Although the key pathways described earlier pose varying levels of risk to U.S. livestock, according to USDA, it could take only one contaminated product to come into contact with one susceptible U.S. animal to start a nationwide outbreak. Finally, the sheer volume of international passengers, mail, and products entering the United States creates an enormous challenge for USDA and other federal inspection agencies. 
As a result, most inspections at ports of entry are restricted to paper reviews of manifests, supplemented by physical inspection of a limited number of judgmentally selected samples. For example, in fiscal year 2001, over 470 million international passengers and pedestrians arrived at U.S. ports of entry; of these, USDA inspected about 102 million. According to APHIS officials, about 30 percent of the items seized from passengers at airports were prohibited animal products or by-products. Table 3 provides information on the volume of passengers, vehicles, and cargo entering the United States and the level of APHIS's inspections for fiscal year 2001. Similarly, the volume of international mail entering the United States makes it difficult for APHIS and Customs to adequately screen incoming parcels for FMD-susceptible products. For example, APHIS inspectors at the international mail facility in Elizabeth, New Jersey, told us that about 30,000 international parcels pass through their checkpoint every day. This volume of mail leaves the inspectors approximately 3 seconds per parcel to judge whether a package might contain FMD-susceptible products. Moreover, mail is processed at the facility during the day and night to keep up with the volume of international mail arriving daily. However, APHIS inspectors are present only during the day shifts, and detector dogs are available only 1 to 2 days per week. Although Customs inspectors screen packages for FMD-susceptible products during the times when APHIS inspectors are not available, both APHIS and Customs inspectors told us that this process is less effective than having an APHIS inspector on site. Nonetheless, according to APHIS's Assistant Director for Port Operations, even doubling or tripling the agency's inspection resources would not significantly reduce the FMD risk from overseas entries because the percentage of passengers, vehicles, and cargo receiving a physical inspection is likely to remain relatively low. Moreover, most U.S. preventive measures are not designed to intercept illegal entries of products or passengers that may harbor the FMD virus. According to USDA, the volume of illegal agricultural products entering the United States is growing, and the entry of contraband meat products is the single most important risk for the introduction of FMD. In addition, shipments of products illegally routed through countries other than their stated point of origin, as well as illegal immigrants, also pose significant risks. USDA and Customs annually confiscate thousands of contraband and prohibited products at U.S. ports of entry. For example, in fiscal year 2001, USDA seized 313,231 shipments of prohibited meat, poultry, and animal by-products. According to USDA officials, these seizures represent only a small portion of the contraband entering the United States. To respond to the growing threat from illegal entries, USDA recently created the Smuggling Interdiction and Trade Compliance program. Program officials collaborate with several federal, state, and private organizations to ensure compliance with U.S. agricultural import laws at ports of entry. U.S. preventive measures for FMD are comparable to the measures used by Canada, Mexico, and the United Kingdom for four key pathways included in this review: livestock imports, animal product imports, international mail, and garbage from international carriers. 
The most significant area of difference concerned the measures used to process international passengers entering these countries. (Detailed information on the preventive measures used by Canada, Mexico, and the United Kingdom is provided in appendixes II through IV of this report.) Generally, U.S. preventive measures were similar to those used by the other three countries for the following four pathways: Imported livestock. The three countries allow imports of livestock only from approved countries that are FMD-free. Generally, these live animals must be imported through predetermined inspection ports that have adequate facilities available to quarantine the animals, if necessary. In addition, the countries require import permits and health certificates to accompany the livestock shipments unless the animals are imported directly for slaughter. Of the three countries, Mexico requires an official government veterinarian to (1) preinspect animals imported from countries other than the United States in their country of origin before they are loaded for transport to Mexico and (2) accompany the shipment and monitor the health status of the animals while they are in transit. Imported animal products. The countries generally allow animal product imports only from countries that they consider FMD-free and that meet their specific animal health and food safety standards. The countries also allow certain animal product imports from FMD-affected countries if the products originate from a preapproved establishment and are processed in a manner that would inactivate the virus. For example, meat products that are fully cooked, canned, and shelf-stable can be imported from FMD-affected countries, but unprocessed products, such as fresh, chilled, or frozen meat and untreated milk, are not allowed. In addition, all imported animal product shipments are subject to review and may be selected for physical inspection when they arrive at the port of entry in each of the countries. International mail. The countries handle international mail in a similar manner, which includes a review of the documentation detailing the sender, country of origin, and contents of the package. Only packages considered suspect, for example, because they do not include required information, are from high-risk countries, or have been sent by repeat offenders, are selected and opened for further inspection. Canada uses x-ray technology to help identify packages containing prohibited items, and Mexican officials told us that all international packages arriving from FMD-affected countries are opened and inspected for prohibited items. Garbage from international carriers. The countries' federal agencies responsible for protecting animal health supervise the containment, transportation, and processing of garbage from international carriers. They generally dispose of international garbage by incineration or, under certain conditions, by burial at federally approved sites. For example, in Canada, international garbage can be buried at approved sites located at least half a kilometer from any premises with livestock or poultry and must be immediately covered by 1.8 meters (about 6 feet) of local refuse or other standard covering material. At the time of our review, none of the countries allowed domestic animals to be fed international garbage from airlines or ships. 
In commenting on a draft of this report, USDA noted that the United Kingdom faces greater risk than the United States because it is a member of the European Union, which includes, and provides for trade among, countries that are FMD-free as well as some that are not. The United States differed from Canada and Mexico in the measures used to prevent FMD from entering the country via international passengers. Specifically, we noted the following three areas of difference: Use of signs at ports of entry. While Canada, Mexico, and the United States all posted special signs at ports of entry to alert international passengers to the dangers of FMD, the U.S. signs were smaller and less visible than the signs used by the other two countries. For example, the Canadian signs were over 6 feet tall and warned passengers about FMD in large, bold letters in both English and French. Similarly, the signs in Mexico were over 6 feet tall and included pictures and colored text in English or Spanish. In contrast, the initial U.S. signs were 1 foot by 1 foot and used relatively small text on a white background that was difficult to read and did not easily convey the importance of the message. According to USDA officials, these signs were subsequently replaced with larger signs (3 feet by 3 feet) that included a colored graphic and larger text. Although the new signs are larger, we observed at one U.S. international airport that they were placed at a considerable distance from arriving passengers: they were set on easels on top of the baggage carousels and therefore were several feet above eye level. In contrast, we observed that the signs in Canada and Mexico were placed in more visible locations closer to the passengers. According to agriculture officials in all three countries, they are limited in their ability to place signs at ports of entry because they must negotiate the size and placement of the signs with the port authorities. As a result, they are not always able to use the most effective signs or locations. Figures 3 and 4 show the signs used in the United States and in Canada and Mexico. Modified declaration forms. In 2001, both Canada and Mexico made changes to the declaration forms they use to process international passengers upon arrival. For example, after the U.K. outbreak in 2001, Canada reworded its declaration form to provide examples of food products of concern, such as dairy products. Similarly, Mexico developed a separate form, which passengers coming from FMD-affected countries must complete, that asks clear, detailed agriculture-related questions. In contrast, the United States did not make any changes to its declaration form in 2001, and some of the international passengers we contacted considered the agriculture-related question on the form ineffective and unclear. A senior APHIS official told us that USDA was aware that the question on the form was confusing and ambiguous to travelers. This official said that most of the confusion arises because the question consolidates three questions into one. In commenting on a draft of this report, USDA stated that it has recently worked with Customs to revise the agricultural question on the Customs declaration form. The form now includes two agriculture-related questions that USDA believes will be more easily understood by travelers and will yield better information to help the department focus its inspection efforts. 
The new form is currently being distributed throughout the country. Because USDA’s actions address our concerns, we have deleted our recommendation on this issue from this report. (See table 4 for a comparison of the agriculture-related questions on the prior and revised U.S. declaration forms.) Use of disinfectant mats. As a precaution, both Canada and Mexico developed guidelines requiring all international passengers arriving at airports and seaports to walk over disinfectant mats when entering the country. However, according to USDA officials, the United States chose not to use disinfectant mats because USDA research found that the disinfectant in the mat would become ineffective after a certain number of uses and may begin to harbor the virus, thus contaminating shoes that were otherwise clean. The United States has had significant success in keeping the nation’s livestock FMD-free since 1929. To some extent, the success of this effort is directly related to the effectiveness of U.S. preventive measures both abroad and at the nation’s borders. However, because of the extensive presence of FMD worldwide and because the magnitude and volume of international cargo and travel continue to expand, the nation’s vulnerability to an introduction of FMD remains high. The steps that other nations have taken to reduce the risk of FMD—such as signs to alert international passengers—could help improve USDA’s efforts to protect U.S. livestock. While we recognize that there is an additional cost to preparing new, larger, and more noticeable signs, we believe that, given the significant economic costs of an FMD outbreak to the nation, these costs are justified if they can help improve our preventive measures. To help improve the effectiveness of U.S. measures to prevent the introduction of FMD by international passengers, we recommend that the Secretary of Agriculture direct the Administrator, APHIS, to develop more effective signage about FMD for ports of entry. In its comments on a draft of this report, USDA stated that it is in the process of developing new signage for ports of entry that will be larger and more mobile than the ones that we observed during the course of our work. If FMD enters the United States despite USDA’s preventive measures, the nation’s ability to identify, control, contain, and eradicate the disease quickly and effectively becomes paramount. Recognizing the importance of an effective response and the necessity to prepare before an outbreak occurs, USDA and most states have developed emergency response plans that establish a framework for the key elements necessary for a rapid and successful U.S. response and eradication program. Many of these plans have, to some extent, been tested by federal and state agencies to determine their effectiveness. However, planning and testing exercises have also identified several challenges that could ultimately impede an effective and timely U.S. response if they are not resolved before an FMD outbreak occurs. Planning for a coordinated response to emergencies, including outbreaks of animal disease, is occurring at both the federal and state levels. Furthermore, both the federal government and many states have tested and revised their plans in response to the results of these tests. At the federal level, 26 federal agencies and the American Red Cross signed the federal response plan in April 1999, which is intended to guide the federal response to national emergencies and augment state response efforts. 
Under this plan, the Federal Emergency Management Agency (FEMA) is designated as the coordinating agency and is responsible for providing expertise in emergency communications, command and control, and public affairs. In the event of an FMD outbreak, FEMA would designate USDA as the lead agency and work closely with the department to coordinate the support of other federal agencies in responding to the outbreak. For example, under the plan, Customs would "lock down" ports of entry; the Department of Defense would provide personnel, equipment, and transport; the Environmental Protection Agency would provide technical support on the disposal of animal carcasses; the National Park Service would guide the response if wildlife became infected; and other agencies would provide additional support. To supplement the federal response plan and provide specific guidelines for an animal disease emergency, such as implementing quarantines of infected premises and disposing of animal carcasses, APHIS, USDA's Homeland Security Council, and FEMA are taking the lead in developing a federal plan specifically for responding to an outbreak of FMD or another highly contagious animal disease. The draft plan calls for the involvement of more than 20 agencies and describes the authorities, policies, situations, planning assumptions, concept of operations, and federal agency resources that will provide the framework for an integrated local, state, and federal response. At the state level, many states have developed an animal disease component for their emergency management plans. According to the National Animal Health Emergency Management System (NAHEMS), in January 2000, only about half the states and U.S. territories had developed animal health emergency response plans. At that time, NAHEMS recommended that each state develop a plan for responding to animal health emergencies that links to the state's emergency management plan and includes information on the following key elements:
- Animal health surveillance and detection systems.
- Control and eradication procedures.
- Communication between key partners.
- Involvement of emergency management officials.
- Collaboration between state and federal emergency responders.
- Involvement of state and federal animal health officials in responding to natural disasters.
According to NAHEMS's 2001 annual report, dated March 2002, the number of states and U.S. territories with animal disease emergency plans had increased to 46; of these, 45 had included the plan as part of their state's emergency management plan, and 30 indicated that their plan included all of the elements listed above. To ensure the efficacy and completeness of their plans, the federal government and many of the states have conducted "tabletop" and functional exercises. Tabletop exercises bring together key decision makers in a relatively stress-free setting to discuss the contingencies and logistics of a hypothetical disease outbreak; evaluate plans, policies, and procedures; and resolve questions of coordination and responsibility. The setting is relatively stress-free because there is no time limit for resolving the hypothetical outbreak. In contrast, functional exercises simulate an emergency in the most realistic way possible without moving people or equipment. They are stressful, real-time exercises in which people apply emergency response functions to a hypothetical scenario. According to one APHIS official, functional exercises are best described as "dress rehearsals" for actual emergencies. 
The federal government has held both tabletop and functional exercises, as described below: To ensure that the federal FMD emergency response plan is comprehensive and well coordinated, USDA conducted a tabletop exercise in 2001. In this exercise, USDA developed a scenario involving a modest, limited FMD outbreak in the United States and obtained the views of 21 federal agencies and the American Red Cross on how they could support the federal response to an FMD outbreak. USDA used this information to revise its draft national FMD response plan. The federal government held a functional exercise in 2000—the Tripartite Exercise 2000—to test the plans, policies, and procedures that would guide the emergency response to a multifocal FMD outbreak in North America. The test focused on communication between the various entities involved in an outbreak and the use of vaccines by Canada, Mexico, and the United States. The test resulted in many recommendations to improve the three countries’ abilities to (1) communicate effectively, (2) provide program support, and (3) use vaccines. According to the final report, the recommendations, if implemented, will improve North America’s overall response capacity. The three countries have established working groups tasked with responding to these recommendations. Similarly, as of 2001, about 26 states had periodically conducted various kinds of exercises to test state responses to an FMD or other animal disease outbreak, according to NAHEMS. For example, in June 2001, the Texas Animal Health Commission, in conjunction with the Texas Division of Emergency Management within the Texas Department of Public Safety, conducted a 4-day modified functional exercise of the state’s draft FMD response plan and engaged 23 federal, state, academic, and private entities in the exercise. The exercise was designed to test participants’ abilities to control the simulated outbreak, find and deliver indemnity funds, and streamline the decision-making processes. Overall, the exercise determined that better communication and coordination could improve the speed and effectiveness of the state’s response. It also identified areas of ambiguity in the plan that left participants without clear directions at crucial times during the exercise. According to state officials, the plan was revised as a result of the exercise, and according to the Executive Director of the Texas Animal Health Commission, more exercises are necessary to continuously improve the plan. However, the state veterinarian also said that he does not believe that adequate resources are available either at the federal or state level for such activities. As the U.K. experience has demonstrated, responding to an FMD outbreak can tax a nation’s fiscal, scientific, and human resources. If a similar outbreak were to occur in the United States, the nation would face a wide spectrum of challenges that can hamper an effective and rapid response: (1) the need for rapid disease identification and reporting; (2) effective communication, coordination, and cooperation between federal, state, and local responders; (3) an adequate response infrastructure, including equipment, personnel, and laboratory capacity; and (4) clear animal identification, indemnification, and disposal policies. While USDA has made some progress in addressing some of these issues, significant work remains. The rapid identification and reporting of an FMD incident is key to mounting a timely response. 
However, a timely response depends on livestock producers and private veterinarians quickly identifying and reporting suspicious symptoms to state and federal officials. If they do not do so, FMD could spread out of control before the federal and state governments could initiate any action. For example, within the first few days of the outbreak in the United Kingdom, before the first reports of FMD reached British officials, infected animals were criss-crossing the country in hundreds of separate movements, putting other livestock at risk. The main geographical spread of the disease occurred before there was any suspicion that the disease was present in the country. In contrast, French officials quickly identified diseased animals that had come from the United Kingdom and slaughtered them before a large-scale outbreak could develop. As a result, France sustained minimal animal losses and was declared FMD-free within months, while it took the United Kingdom almost a year to eradicate the disease and regain its FMD-free status. Several federal and state animal health officials with whom we spoke were concerned about how quickly disease identification and reporting would actually occur in the United States. They told us that livestock producers or veterinarians may not readily identify FMD because (1) the disease presents symptoms similar to those of other, less serious diseases, (2) FMD and other foreign animal diseases are not usually included in veterinary school curricula, and (3) many veterinarians may never have seen FMD-infected animals. Furthermore, livestock producers and veterinarians may not report the disease because they are not aware of the reporting process or may not realize the critical importance of prompt reporting. According to USDA officials, the U.K. outbreak helped raise general awareness among state officials, private veterinarians, and livestock producers about the risk and potential for an FMD outbreak in the United States. An indication of this increased awareness is the doubling of foreign animal disease investigations, from about 400 in 2000 to more than 800 in 2001. In addition, federal and state officials told us that the U.K. outbreak led to greater awareness of the need to have trained diagnosticians for foreign animal diseases in the field. In recent years, more field veterinarians have attended foreign animal disease training at USDA's Plum Island facility. Moreover, as described in chapter 2, USDA intensified its efforts to increase public and industry awareness about FMD after the U.K. outbreak in 2001. As part of these efforts, USDA also addressed industry and animal health associations and sponsored workshops, conferences, and informational telecasts for federal, state, and local officials and others. In addition, the state governments supported and supplemented USDA's informational efforts. Although USDA and the states saturated the livestock industry with information about the risks of FMD during 2001, the challenge for USDA will be to maintain this heightened awareness now that the immediate risk from the U.K. outbreak has subsided. Cooperation, coordination, and communication among federal, state, and local agencies, private veterinarians, and livestock producers are essential for an effective FMD response. Recent planning efforts and test exercises have helped start the process of establishing greater coordination and improving cooperation and communication at all levels. 
For example, according to a USDA official, USDA's recent efforts to develop a national FMD response plan brought together officials from a variety of federal agencies to consider the implications of an FMD outbreak for their areas of responsibility and helped them develop ways in which they could support a federal response. Moreover, efforts to improve communication, cooperation, and coordination are beginning to transcend state boundaries. In 2001, 26 U.S. states and territories reported to NAHEMS that they were part of a group of states that had agreed to support each other in preparing for and responding to animal health emergencies. For example, Midwestern state officials told us that they are now beginning to address regional coordination and cooperation issues. In May 2002, seven Midwestern states met in Iowa for a planning conference to discuss a coordinated response plan for the region. While these planning and testing efforts have improved the level of communication, coordination, and cooperation, they have also identified areas that need considerable attention. For example, although the Tripartite Exercise 2000 identified generally good communication and cooperation between government and industry participants, it also identified the need for the following actions:
- Improve the technology used to ensure an uninterrupted flow of information.
- Develop written agreements between national animal health and industry officials to ensure a continued high level of communication even when players change.
- Have federal and state counterparts work together to develop collaborative relationships that will improve communications during an actual outbreak.
We also found that cooperation and communication between federal and state officials varied by state. For example, while some state officials indicated that they had excellent working relationships with their federal counterpart located in the state, others told us that cooperation and communication were limited. According to one APHIS field veterinarian, the level of cooperation and communication depends to a large extent on the personalities of the people involved, and therefore such variation is to be expected. While the development of written agreements, as suggested by the Tripartite Exercise report and NAHEMS, could help alleviate this problem, as of 2001 only about 32 U.S. states and territories had such agreements or other documents detailing the respective roles of federal and state officials. To help improve cooperation, coordination, and communication, USDA officials told us that they are working with organizations such as the National Emergency Management Association to help states with their animal emergency planning efforts. In addition, USDA awarded 38 grants totaling $1.8 million in 2001 to state agencies, tribal nations, and emergency management organizations. According to USDA, this funding was to be used for training, equipment, and emergency-preparedness exercises. In commenting on a draft of this report, USDA stated that in late May 2002 it announced that it would make more than $43 million available as grants to the states for strengthening homeland security preparedness. Of this $43 million, $14 million is to help states meet the national standards of emergency preparedness established by NAHEMS. 
The plan will help better ensure the timely dissemination of information to critical audiences, including federal agencies, states, and industries. An effective response to an FMD outbreak requires an effective infrastructure, including a national emergency management control and command center, technical and other personnel, transportation and disposal equipment, and laboratory facilities and testing capacity. To ensure that a U.S. response to an FMD outbreak is properly coordinated and adequately controlled, USDA has established an Emergency Management Operations Center at its Riverdale, Maryland, location. In the event of an outbreak, USDA will activate this center to coordinate day-to- day activities during an FMD response and notify U.S. trading partners of the status of the outbreak. According to USDA’s draft FMD response plan, APHIS will set up the Joint Information Center—collocated with the Emergency Management Operations Center—to serve as the primary source of public information about the response and will coordinate with other federal and state information centers. In addition, as the U.K. outbreak illustrated, responding to an FMD outbreak requires extensive personnel resources. These include persons who can provide (1) specialized animal disease support for testing and diagnosis, epidemiology, vaccination, slaughter, and carcass disposal; (2) biohazard response support for controlling animals’ movement and decontaminating infected and exposed premises, equipment, and personnel; and (3) general logistics support for sheltering and feeding responders; the transportation, movement, and positioning of equipment and supplies; and general law enforcement. During the 2001 outbreak, the U.K. government had to request specialized animal disease support from several countries, including the United States, Canada, Australia, and New Zealand; hire thousands of private contractors to provide slaughter and decontamination support; and use military personnel to provide general logistical support. According to a U.K. government working paper issued in March 2002, during the peak of the outbreak, more than 7,000 civil servants, 2,000 veterinarians, and 2,000 armed forces personnel were involved in the response—making it a bigger and more complex logistical exercise than the United Kingdom’s involvement in the Gulf War. A recent test exercise in Iowa indicates that the personnel requirements to respond to an FMD outbreak in the United States would also be enormous—approaching 50,000 people to support a response. More specifically, according to APHIS estimates, the United States would be at least 1,200 veterinarians short of the required 2,000 to 3,000 specially trained veterinarians needed to respond to an animal health emergency. APHIS officials told us that while state and private veterinarians could help make up some of this difference, without appropriate training, their help would be of limited use. To address the personnel challenges posed by an FMD outbreak, USDA has undertaken several efforts. By partnering with FEMA and other emergency management organizations, USDA will be able to leverage these agencies’ resources to help provide many of the general logistical support activities. Similarly, USDA has established a memorandum of understanding with the Department of Defense to provide military personnel and equipment to support a response effort. 
In addition, APHIS has implemented an Emergency Veterinarian Officer Program to increase the number of veterinarians available to assist in an animal health emergency. The program trains federal, state, and private veterinarians to handle emergency situations. As of December 2001, APHIS had trained 276 emergency veterinarian officers, 145 of whom participated in responding to the U.K. outbreak. Moreover, USDA has trained 520 veterinarians across the country as foreign animal disease diagnosticians, and they may be called upon to provide specialized animal health support in the event of an outbreak. Finally, according to APHIS officials, USDA has informal arrangements with the United Kingdom and other countries to provide the United States with veterinary support. More formally, Australia, Canada, New Zealand, the United States, and the United Kingdom are currently drafting a memorandum of understanding that would allow the five countries to share veterinary resources in the event of an animal health emergency. In commenting on a draft of this report, USDA also indicated that it has created a National Animal Health Reserve Corps, composed of private veterinarians from around the country who would be willing to assist APHIS veterinarians in field and laboratory operations during a foreign animal disease situation. According to USDA, to date, more than 275 private veterinarians have signed on to this corps and the department is continuing its efforts to recruit more members. This corps will supplement the personnel drawn from states, and other federal agencies and organizations. A response infrastructure also requires a diagnostic laboratory system that is capable of handling the volume of testing and analysis necessary in the event of an outbreak. For example, from February through December 2001, the United Kingdom’s Pirbright Laboratory, that country’s primary reference laboratory, tested 15,000 samples for the presence of the FMD virus and performed 1 million monitoring tests to ensure that the disease had been eradicated. Nationwide, a total of 2.75 million samples were tested as part of the response to the outbreak. Despite this level of testing, according to U.S. veterinarians returning from the United Kingdom, the United Kingdom had unmet needs for laboratory assistance. In the United States, USDA’s Plum Island facility—the primary laboratory in the United States that is authorized to test suspected FMD samples—would be quickly overwhelmed in the event of an FMD outbreak, according to many federal and state officials with whom we spoke. Recognizing this potential problem, the National Association of State Departments of Agriculture recently recommended that the United States develop a national strategy for animal health diagnostic laboratory services that would include USDA’s Plum Island facility and its National Veterinary Services Laboratories at Ames, Iowa, as well as state and university laboratories. Currently, state diagnostic laboratories have no formal role in a foreign animal disease response. In addition, the Director of the Plum Island facility stated that the nation needs to look beyond Plum Island for laboratory support in the event of a large-scale FMD outbreak. He suggested that off-site noncentralized testing, using noninfectious material (tests that do not use the live virus), should be considered with backup testing support provided by Plum Island. 
APHIS officials told us that while the idea of a regional laboratory structure has merit, several issues would have to be addressed before such a structure could be implemented. For example, laboratory personnel would have to undergo continuous training and certification, and facilities would have to be renovated and maintained to provide state-of-the-art capabilities. This would require a significant commitment of resources. In commenting on a draft of this report, USDA stated that as part of its efforts to strengthen homeland security preparedness, it is providing state and university cooperators with $20.6 million to establish a network of diagnostic laboratories dispersed strategically throughout the country. This network will permit the rapid and accurate diagnosis of animal disease threats. Moreover, USDA stated that earlier this year it allocated $177 million to make improvements at key locations, including its diagnostic and research facilities in Ames, Iowa, and at Plum Island, and that $15.3 million was allocated to USDA's Agricultural Research Service to improve rapid detection technology for FMD as well as other animal diseases. An effective U.S. response to an FMD outbreak will also require an animal identification and tracking system that allows responders to identify, control, and slaughter infected and exposed animals, as well as clear animal disposal and indemnification policies. The 2002 farm bill addresses animal disposal and indemnification issues by providing the Secretary of Agriculture with broad authority to hold, seize, treat, or destroy any animal, as well as to limit interstate livestock movement, as part of USDA's efforts to prevent the spread of any livestock disease or pest. The Secretary may also take measures to detect, control, or eradicate any pest or disease of livestock, as needed. In addition, the farm bill requires the Secretary to compensate owners on the basis of the fair market value of destroyed animals and related materials. USDA is currently developing specific guidance on how these authorities will be implemented. Many epidemiologists believe that in the event of an FMD outbreak, successfully tracing affected animal movements within 24 hours is essential if the response is to be effective. However, the United States generally does not require animal identification, nor does it have a system for tracking animal movements. As a result, according to a USDA official, in the event of an FMD outbreak, USDA would likely have to rely on sales records to track animal movements, which could take days or weeks, depending on the accuracy of record keeping and the cooperation of producers and sellers. The longer it takes to identify animals and track their movements from premises to premises, the more difficult it becomes to contain an outbreak. USDA officials told us that, depending on where the outbreak is first identified, it may be relatively easy or extremely difficult to trace. For example, if only one farm were infected and animals had not recently been moved on or off the premises, no tracing of live animals would be necessary. However, if the outbreak first appeared in a major market or feedlot, where hundreds of animals move in and out on almost a daily basis, tracing would be very difficult and time-consuming. Recognizing the importance of an animal identification and tracking system, USDA began planning such a system 3 years ago, according to the Director of the National Animal Identification initiative. 
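To illustrate the kind of capability such a system would provide, the sketch below shows, in highly simplified form, how responders could trace exposures forward from an infected premises if complete, machine-readable movement records were available. This is only an illustration of the general technique; the record layout, premises names, animal identifiers, and dates are hypothetical examples of ours, not elements of USDA's planned system, and an actual national system would operate at far greater scale and complexity.

    from collections import defaultdict, deque
    from datetime import date

    # Hypothetical movement records: (animal_id, from_premises, to_premises, move_date).
    # The fields and data are illustrative only and do not reflect USDA's design.
    movements = [
        ("cow-001", "farm-A", "market-X", date(2002, 3, 1)),
        ("cow-001", "market-X", "feedlot-Y", date(2002, 3, 3)),
        ("cow-017", "farm-B", "market-X", date(2002, 3, 2)),
        ("cow-017", "market-X", "farm-C", date(2002, 3, 4)),
    ]

    def exposed_premises(movements, index_premises, window_start):
        """Return every premises reachable from the index premises through
        animal movements on or after window_start (a simple forward trace)."""
        graph = defaultdict(set)
        for _animal, origin, destination, moved_on in movements:
            if moved_on >= window_start:
                graph[origin].add(destination)
        seen, queue = {index_premises}, deque([index_premises])
        while queue:  # breadth-first traversal of the movement graph
            current = queue.popleft()
            for nxt in graph[current]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen - {index_premises}

    # Example: FMD is confirmed at market-X; trace all contacts since March 1, 2002.
    print(exposed_premises(movements, "market-X", date(2002, 3, 1)))
    # Expected output (set order may vary): {'feedlot-Y', 'farm-C'}

A backward trace to locate the likely source premises works the same way with the direction of each movement reversed. The point of the illustration is that, with electronic records, such a trace can be run in minutes, whereas reconstructing the same information from paper sales records is what, as noted above, could take days or weeks.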
The livestock industry, however, initially resisted such a national identification system because of the costs involved and the potential for unauthorized disclosure of proprietary information. The Director noted that the events of September 11, 2001, as well as technological advances, appear to be reducing industry opposition to a national animal identification system. For example, this official told us that the National Cattlemen's Beef Association recently indicated some support for such a system. However, the following issues will need to be resolved before a national system can be developed and implemented:
- The responsibility for funding the system.
- The type of technology that should be employed (strictly visual, electronic, or some combination).
- The amount of information that should be included on each animal's identification tag or electronic tracking device.
- The persons who should be able to access this information.
- The information that should be shared with other federal departments and agencies.
- The point on the farm-to-table continuum at which identification should end.
In addition, during an FMD outbreak in the United States, the disposal of carcasses could become a significant challenge because of the potential number of animals that might have to be slaughtered. For example, during the U.K. outbreak, over 4 million animals, primarily sheep, were slaughtered to control the disease. According to USDA estimates, if the United States had an outbreak of comparable magnitude (affecting about 8 percent of the livestock population), over 13 million animals would be affected, most of them cattle and hogs. Generally, disposal can occur by burial, incineration, or rendering. In the United States, according to USDA's draft FMD response plan, burial would be the preferred method of disposal when conditions make it practical; the plan states that burial is the fastest, easiest, and most economical method of disposal. When burial is not feasible, the plan recommends incineration as the alternative means of disposal, even though USDA recognizes that incineration is both difficult and expensive. According to a USDA veterinarian who helped during the U.K. outbreak, a 200-meter funeral pyre was used to incinerate 400 cows, 1,200 sheep, or 1,600 pigs. Such a pyre required 1,000 railway ties, 8 tons of kindling, 400 wooden pallets, 4 tons of straw, 200 tons of coal, and 1,000 liters of diesel fuel. In addition, heavy equipment, such as bulldozers, and a team of about 18 to 20 people were needed to construct the pyre. Figures 5 and 6 show burial pits and incineration pyres used in the United Kingdom to dispose of slaughtered animals. According to the federal and state officials we spoke with, each of these disposal methods presents significant implementation challenges that have not yet been fully considered. For example, burial poses such challenges as the potential to contaminate groundwater, the need to identify burial sites and obtain appropriate federal and state permits and clearances in advance, and the potential to spread the disease if animals have to be transported to an off-farm burial site. For incineration, the site has to be accessible to large equipment and yet far enough from public view to minimize negative public reaction to the sight of large burning pyres. In addition, incineration not only could affect air quality but also may be ineffective because, if pyres are not constructed properly, they may not generate temperatures high enough to completely incinerate the carcasses. 
According to a USDA veterinarian, the pyres in the United Kingdom generally burned for about 9 to 10 days before all of the carcasses were incinerated. Similarly, rendering poses challenges because transporting carcasses to rendering plants increases the risk of spreading the disease, and additional cleaning and disinfecting procedures would be needed at the rendering facility. Some U.S. veterinarians returning from the United Kingdom told us that the United Kingdom faced many of these disposal challenges during the outbreak, and they were concerned that the United States might not have devoted enough attention to deciding how it would address these or similar disposal issues. According to APHIS officials, USDA is currently creating digital maps of the whole country to help identify appropriate burial and incineration locations. In addition, USDA is exploring alternative uses for carcasses, such as safely converting the meat into food, and is considering the use of vaccination to limit the number of animals that would have to be slaughtered and disposed of. Finally, clear indemnification and compensation criteria are needed to ensure producers' cooperation in slaughtering and disposing of infected and exposed livestock during an outbreak. During the U.K. outbreak, the government agency responsible for responding to the outbreak experienced delays in slaughtering animals because of farmers' resistance and legal challenges. According to state and livestock association officials, indemnification would be a significant issue in the United States as well, one that could hamper a rapid response. USDA published a proposed rule on May 1, 2002, amending the indemnity provisions of its FMD-related regulations. This proposed rule clarifies how USDA will determine the value of animals and materials affected by an FMD outbreak and how indemnity payments will be made to claimants. USDA developed the proposed rule because it was concerned that an FMD eradication program in the United States might be delayed by producers' perceptions that they might not be adequately compensated for the fair market value of destroyed animals, products, and materials, as well as for cleaning and disinfecting costs. According to USDA, under the proposed rule, the federal government would pay 100 percent of the costs for the purchase, destruction, and disposition of animals that become infected with FMD, as well as for materials contaminated with FMD and for the cleaning and disinfection of affected premises. In commenting on a draft of this report, USDA agreed that animal identification, carcass disposal, and indemnity are all absolutely vital areas that must be addressed before any major outbreak of disease. In this regard, USDA stated that it is working closely with the agricultural industries to provide forums for a national dialogue on a national identification plan for American livestock. The ultimate objective is to establish a national identification plan that provides the essential elements to improve emergency response and meet future needs. USDA further stated that it is investing in other options for disposing of carcasses on a large scale. Finally, USDA stated that it has extended the comment period, from July 1 to July 31, 2002, for its proposed regulations addressing how decisions regarding indemnity payments will be made in the event of an FMD outbreak. 
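The disposal figures cited above suggest the scale of the logistical challenge. The calculation below is purely an illustrative back-of-envelope estimate based on the numbers reported in this chapter; it assumes, for simplicity, that all of the roughly 13 million affected animals were cattle and that every carcass was incinerated on a pyre, neither of which would be true in practice, since burial and rendering would also be used.

    # Illustrative estimate only; assumes all affected animals are cattle and all
    # carcasses are incinerated on pyres, which would not be the case in practice.
    affected_animals = 13_000_000  # USDA estimate for an outbreak affecting about 8 percent of U.S. livestock
    cattle_per_pyre = 400          # capacity of one 200-meter pyre (U.K. figure cited above)
    coal_per_pyre_tons = 200       # coal needed to build one such pyre (U.K. figure cited above)

    pyres_needed = affected_animals / cattle_per_pyre      # 32,500 pyres
    coal_needed_tons = pyres_needed * coal_per_pyre_tons   # 6,500,000 tons of coal

    print(f"{pyres_needed:,.0f} pyres, {coal_needed_tons:,.0f} tons of coal")

Even as a rough illustration, the result, tens of thousands of pyres and millions of tons of coal, underscores why the officials we interviewed considered disposal planning a critical unresolved issue.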
If an outbreak of FMD in the United States were to rage out of control, it could ultimately cost tens of billions of dollars and require the destruction of millions of animals. To avoid such catastrophic consequences, the disease must be stamped out quickly. Although the federal and state governments have made significant progress in developing and testing emergency response plans for an animal disease outbreak such as FMD, significant issues remain unresolved. These unresolved issues could present major impediments to an effective and timely response if not addressed before an outbreak occurs. While USDA currently has several ongoing efforts to resolve many of these issues, the department has not established specific time frames for the completion of these efforts. We believe it is critical that adequate management attention and resources be made available to ensure that these issues are resolved expeditiously. To ensure that the United States is well positioned to respond effectively to an animal disease outbreak such as FMD, we recommend that the Secretary of Agriculture direct the Administrator of APHIS to develop a plan, which should include interim milestones and completion dates, for addressing the various unresolved issues that could challenge an effective U.S. response.
The 2001 outbreak of foot and mouth disease (FMD) in the United Kingdom vividly illustrated the devastation that this highly contagious animal disease can cause to a nation's economy. By the time the disease was eradicated, the United Kingdom had slaughtered more than 4 million animals and sustained losses of $5 billion in the food and agricultural sectors, as well as comparable losses to its tourism industry. Before 2001, the United Kingdom had been FMD-free for almost 34 years. Following the outbreak, the country was generally barred from participating in the international trade of live animals and animal products that could transmit the virus. The United States has adequate processes for obtaining information on foreign FMD outbreaks and providing the Department of Agriculture (USDA) and others with this information, but it lacks adequate processes for sharing this information with the Customs Service. The United States receives information on FMD outbreaks from USDA officials stationed abroad, international agricultural and animal health organizations, and foreign governments. USDA officials stationed abroad collect a wide array of agricultural and animal health information about the countries and regions in which they are stationed, which ensures that the United States has timely access to information on foreign FMD outbreaks. However, USDA's processes for disseminating information on foreign FMD outbreaks are uneven. U.S. measures to prevent the introduction of FMD are comparable to those used by other countries and have kept the United States free of the disease for 75 years. Nevertheless, because of the nature of the disease and the risk inherent in the ever-increasing volume of international travel and trade, U.S. livestock remains vulnerable to the disease. USDA has a two-pronged approach to prevent FMD from reaching U.S. livestock. First, USDA tries to keep FMD as far as possible from U.S. borders by helping other countries control and eradicate the disease. Second, USDA has developed and implemented specific preventive measures at ports of entry to ensure that international cargo, animals, passengers, and mail do not bring the disease into the United States. In the event of an FMD outbreak, the United States would face several challenges in mounting an effective and quick response, although USDA and many states have developed and tested emergency animal disease response plans.
Yellowstone was created by an act of Congress in 1872 as a public park for the benefit and enjoyment of the people and for the preservation and retention of its resources in their natural condition. Yellowstone’s mandate, creating a dual mission to preserve natural resources while providing for the public’s enjoyment of them, has served as a model for the rest of the park system and for parks around the world. Yellowstone is at the center of approximately 20 million acres of land, commonly called the Greater Yellowstone Area or ecosystem. These lands are managed by four different federal agencies—the National Park Service, the Forest Service, the Fish and Wildlife Service, and the Bureau of Land Management (BLM); three different states—Idaho, Montana, and Wyoming; and numerous private landholders. The Park Service manages bison and elk only within Yellowstone. Outside the park, the neighboring states of Idaho, Montana, and Wyoming manage wildlife not only on their own lands but also on BLM and Forest Service lands. Although the Forest Service manages wildlife habitat on its lands, the states manage the wildlife. For example, in Gallatin National Forest, the Forest Service manages wildlife habitat, while the Montana Department of Fish, Wildlife, and Parks manages wildlife within the forest’s borders. The Fish and Wildlife Service manages wildlife refuges, such as the National Elk Refuge south of Yellowstone, and the Bureau of Land Management manages land used by both wildlife and cattle in the Greater Yellowstone Area. This past winter, park officials estimated the size of the northern elk herd at about 17,000 in Yellowstone and the total number of elk in the Greater Yellowstone Area at about 120,000. The population of Yellowstone’s northern range elk herd has ranged between 16,000 and 20,000 since 1991. At the beginning of this past winter, about 3,500 bison lived within the park, 900 of which occupied the northern range. Subsequently, about 1,100 bison left the park and were shot or shipped to slaughter because of concerns about brucellosis. About 700 other bison were killed by the severe winter, leaving approximately 1,700 bison in the park this spring, including about 300 in the northern range. For thousands of years, various animal species have routinely migrated in and out of what is now Yellowstone National Park. Bison and elk herds seasonally migrate out of the park to seek forage, especially in severe winters like that of 1996-97. While elk have traditionally migrated widely in the Greater Yellowstone Area, bison have more recently left the park, primarily through its northern and western borders, to seek available winter range. Appendix I illustrates the Greater Yellowstone Area elk herds’ winter ranges and migration routes. Appendix II illustrates the Greater Yellowstone Area bison herds’ winter ranges and migration routes. Because bison that migrate outside Yellowstone may be infected with brucellosis and may interact or share rangeland with domestic cattle, the U.S. Department of Agriculture’s Animal and Plant Health Inspection Service (APHIS) and its state counterparts also have a strong interest in the management of Yellowstone’s wildlife. APHIS is responsible for eradicating brucellosis from cattle in the United States. According to APHIS, it also has statutory authority to eradicate brucellosis in all animals—including bison.
Since a national brucellosis control program was first instituted in 1934, more than $3.5 billion in federal, state, and industry funds have been spent trying to eradicate the disease. According to APHIS, nationwide, only 22 herds of domestic cattle and bison are now known to be infected. The states also play a major role in the effort to eradicate brucellosis. Because federal statutes on controlling disease in livestock pre-empt the states’ authority only when cattle and bison are moving in interstate commerce, most states have enacted their own statutes to supplement federal regulatory efforts. The Brucella abortus organism, a bacterium, is transmitted among animals primarily through exposure to infected reproductive material, such as aborted fetuses. APHIS tests cattle and bison for antibodies to the Brucella abortus organism. Antibodies in blood samples may indicate either past exposure to the disease or current infection. Positive tissue cultures for Brucella abortus confirm the presence of live bacteria and the potential for animals to be infectious. However, according to APHIS, negative tissue cultures do not prove the absence of bacteria because the organism cannot always be isolated even when it is present. After surveillance tests and procedures are conducted to ensure that cattle and bison herds are free of the disease, APHIS may certify states as brucellosis-free. This certification allows the states to ship their cattle and bison in interstate commerce without having to perform expensive testing to assure importing states that the cattle or bison do not pose a threat of the disease to their livestock industry. As of June 1997, Idaho, Montana, Wyoming, and 34 other states were certified as brucellosis-free. The economic consequences of infection with brucellosis could be significant. Under the requirements of APHIS’ eradication program, if a single herd of cattle or bison in a state that is designated brucellosis-free becomes infected, the infected animals must be slaughtered, the herd quarantined, and the herds in the surrounding area tested to ensure that the disease has not spread. If the herd is slaughtered and no additional infection is found, the state can remain classified as brucellosis-free. If the herd is not slaughtered or additional infection is found, the state’s classification will be lowered and additional interstate testing requirements implemented. Montana estimates that it saves between $1 million and $2 million annually because it does not have to test cattle for brucellosis. A state with infected cattle or bison may also be subject to restrictions imposed by other states. For example, because of the increased movement of brucellosis-infected and -exposed bison out of the Greater Yellowstone Area, the state of Oregon decided in March 1997 to protect the interests of its cattle industry by immediately requiring the testing of any cattle entering Oregon from Montana or Wyoming. Other states have imposed, or threatened to impose, similar restrictions. The management of Yellowstone’s wildlife, especially of bison and elk, has gone through many phases as wildlife managers have gained experience and scientific knowledge has grown. When the park was founded in 1872, there were numerous elk, estimated at 25,000 in 1891, and, according to park officials, bison were also very common. However, no estimates of the bison population exist for that period. 
After almost two decades of slaughter by market hunters, the bison population in Yellowstone dwindled to about 44 in 1901-02. Yellowstone officials saved the bison from extinction by aggressively protecting the remnant population and supplementing it with bison imported from Montana and Texas. For several decades, Yellowstone also aggressively reduced the populations of wolves and other predators. As a result, the park’s bison population gradually increased, growing to more than 1,000 in 1930. However, from about 1935 to 1968, park rangers controlled the elk and bison populations by shooting or by trapping and removing animals. This “culling program” reflected the then-prevailing view that wildlife populations had to be controlled to meet an area’s carrying capacity—a determination of how many animals can live in an area without degrading the range. In the early 1960s, however, elk kills initiated by park officials to reduce the size of a herd that was considered too large led to a public outcry, studies, and U.S. Senate hearings on Yellowstone’s wildlife management policy. As a result, in the late 1960s, Yellowstone’s wildlife management policy changed significantly. According to park staff, although little information was available on how functioning elk and bison populations might respond in a natural environment, park managers thought that Yellowstone might be a place to develop this knowledge and resolve the controversy over the size of the herds by letting natural forces regulate the populations. Therefore, in Yellowstone, natural regulation replaced the capture and culling of elk and bison herds. The park’s master plan, written in 1974, reflects the shift to natural regulation, stating that “Yellowstone should be a place where all the resources in a wild land environment are subject to minimal management.” For wildlife, the plan proposes to reduce or eliminate disruptive human influences, relying, whenever possible, upon natural controls to regulate animal numbers. For the past 30 years, the Park Service has been implementing natural regulation in Yellowstone, in essence, following the park’s master plan. However, the Park Service recognizes that because of the pervasiveness of human influences in today’s world, true natural process management is seldom feasible. In the lower 48 states, the Park Service believes that Yellowstone is the only park large enough to test the effects of natural regulation. At Yellowstone today, the Park Service relies on natural forces within the park—mainly animal behavior, climate, food supply, and predation—to regulate bison and elk populations. In addition, elk have always been hunted in the surrounding states. More recently, bison have been killed when they have migrated out of the park, and some public hunting of bison has occurred in both Wyoming and Montana. However, in 1991, Montana discontinued public hunting of bison. According to park officials, once humans stopped controlling the size of the herds and Yellowstone adopted the natural regulation policy, the bison and elk populations increased considerably. For example, from 1967 to 1988, the bison population rose from 397 to more than 2,500 and then peaked at about 4,200 in the summer of 1994. Yellowstone’s elk population grew about sixfold, from 3,200 in 1968 to about 19,000 in 1994. Park officials point out that without human intervention, the low bison and elk populations of the 1960s would not have occurred.
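Those reported counts can also be expressed as average annual growth rates. The short sketch below is an illustrative back-calculation from the figures cited above, not a Park Service estimate, and it ignores year-to-year variation.

```python
# Compound annual growth rates implied by the population counts cited above.
# This is a simple back-calculation for illustration, not a Park Service estimate.

def annual_growth_rate(start_count: float, end_count: float, years: int) -> float:
    """Return the compound annual growth rate implied by two counts."""
    return (end_count / start_count) ** (1.0 / years) - 1.0

cases = [
    ("Bison, 1967-1988", 397, 2_500, 1988 - 1967),
    ("Bison, 1988-1994", 2_500, 4_200, 1994 - 1988),
    ("Elk, 1968-1994", 3_200, 19_000, 1994 - 1968),
]

for label, start, end, years in cases:
    print(f"{label}: about {annual_growth_rate(start, end, years):.1%} per year")
# Roughly 9 percent per year for bison in both periods and about 7 percent
# per year for elk over the longer span.
```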
Park officials stated that these low numbers were achieved only by large-scale reductions involving the slaughter of thousands of animals each year. In addition, park officials noted that a key predator, the wolf, was missing during this period. Wolves were reintroduced to Yellowstone in 1995, and park officials believe time is needed to determine their impact on the elk population. Current laws and regulations give park managers broad discretion in how to manage wildlife in the park. While an overall mandate of the Park Service is to conserve wildlife, wildlife management policies can vary from park to park, depending on the history of the park, the enabling legislation, the neighboring land, and the local geography. For example, Grand Teton National Park (330,000 acres), just south of Yellowstone (2.2 million acres), has a different mandate, history, neighbors, and geography and has adopted a different policy for managing bison and elk. Grand Teton National Park’s legislation provides for hunting elk within portions of the park and for grazing cattle—two uses that are not allowed in Yellowstone. Hunting gives the park some direct control of elk populations, and the presence of cattle adds management challenges and requires closer working relationships with ranchers. The National Elk Refuge, which is adjacent to Grand Teton, provides winter range and feed for both bison and elk, as do 22 feedgrounds operated by the state of Wyoming. However, feeding these animals further complicates issues by concentrating their populations and increasing the risk of disease transmission. In Grand Teton National Park, the bison herd grew from 16 in 1969 to about 320 this past winter. Park officials said that at the conclusion of this year’s calving season, the bison herd will number nearly 380. The growth of the herd has raised a number of management concerns, including questions about the need to set specific objectives for the herd’s size. Grand Teton’s draft management plan states that the park could maintain a free-roaming herd of about 200-250 bison without jeopardizing the genetic viability of the herd. However, park officials say they are considering public comments on the draft suggesting that the herd should be maintained at 400 animals. To sustain the herd at the levels suggested, the park has considered alternative management measures, which we discuss at the end of this statement. The condition of Yellowstone’s northern range has concerned the public, land managers, and scientists for more than 70 years. Critics of the Park Service’s wildlife management policies—including some scientists, state officials, and representatives of livestock interests—believe large populations of elk and bison have overgrazed Yellowstone’s available grasses and, in some cases, destroyed grasses that were once natural to the northern range. They contend that many of the natural grasses have been replaced by nonnative agricultural grasses that better withstand heavy use by wildlife. In addition, critics say that the large elk and bison herds have damaged riparian areas. For example, the critics often cite declines in woody vegetation, especially willows, aspens, and several species of sagebrush in the Lamar Valley of the northern range, as indications of the herds’ negative impact on riparian areas. The critics contend that the destruction of the willows and aspens has reduced beaver populations and accelerated soil erosion in streambeds.
Finally, the critics maintain that the bison herds have grown so large that they are naturally migrating out of Yellowstone in search of forage that is no longer available in the park because of overgrazing. According to the Park Service’s recently published compilation of 28 reports on research studies of the northern range, Yellowstone’s grasslands do not appear to be overgrazed by any definition of overgrazing. The studies were conducted during a 6-year period that began in 1986 and concluded in 1991. The studies were researched and written by a variety of scientists from several universities and agencies. The researchers found that the production of grasses either was not reduced or was enhanced by the grazing of ungulates (hoofed animals) in all but drought years. The research shows that the decline in the range and riparian areas’ vegetation was due to a number of factors, including changing climatic conditions as well as grazing by elk. According to park staff, the riparian plants are smaller in size but in no danger of disappearing. Furthermore, the park report states the supposed declines in beaver and white-tailed deer populations were based on inaccurate historical interpretations. Park officials point out that beaver populations persist in low levels on the northern range, while larger colonies live in suitable habitat elsewhere in the park. Park officials do not attribute the migration of bison out of the park to overpopulation but to a combination of factors. First, bison migrate because they are nomadic. Second, severe winter conditions can make forage inaccessible beneath deep snow and ice, forcing bison to search for forage elsewhere. Finally, park officials point out that except in the northern range, Yellowstone has “groomed” or packed the snow on roads for snowmobiling in the park since the early 1970s. These trails facilitate the migration of bison out of the park and enable the animals to conserve a great deal of energy by avoiding travel through deep snow. Park officials said that access to more winter range for bison outside the park would enhance their chances of survival in severe winters, but opponents think that the herds should be reduced to numbers that can be supported within the park. The park is currently reevaluating its policies on the use of snowmobiles because of their effects on the environment and wildlife. Both supporters and critics of the Park Service’s policies have scientific evidence that supports their points of view. For example, the 6-year study of the northern range addressed the population dynamics and ecological effects of elk, bison, moose, deer, and other ungulates on the soil, vegetation, and watersheds of the northern range. The research found that the bunchgrass, swale, and sagebrush grasslands of the northern range did not appear to be overgrazed. In riparian areas, willows were much taller in some parts of the northern range in the late 1800s than currently, and virtually no aspen have reached tree height since the 1930s. A study of historical aspen growth found that there was only one period, between about 1870 and 1895, when young aspen were not eaten by ungulates and grew as tall as trees on the northern range. According to the park’s summary report, the discovery that aspen reached full height during only one period in the park’s history suggests that the failure of aspen to grow into trees should not be regarded as proof that elk are overabundant. 
Rather, the summary continued, several factors are involved in aspen growth, including the number of elk, changes in climate, dry or wet weather, fires, and the number of predators feeding on elk. Park officials have called for more research on woody vegetation. Critics of Yellowstone’s wildlife management policy disagree that factors other than wildlife grazing are to any significant degree responsible for the lack of robust woody vegetation on the northern range. They contend the research program undertaken by the Park Service did not look for evidence of overgrazing and was incomplete. They maintain, for example, that park scientists have not documented a cause-and-effect relationship between climate and the decline of willows. In addition, some critics assert that independent research on range and riparian areas in the park has been restricted by the park, which controls funding for research and access to the park. For example, in February 1997, a researcher with the Biological Resources Division of the U.S. Geological Survey testified before the House Subcommittee on National Parks and Public Lands that the park would not approve or fund his proposed research on woody vegetation in the northern range or grant him a permit to work in the park. Park officials said they denied the work assignment because of concerns over the research design and the relevancy of the proposal to the work priorities of both the park and the Biological Resources Division—then known as the National Biological Service. To support their position, critics often cite a 1990 dissertation by a Utah State University researcher that linked the decline of riparian vegetation directly to growth in the elk population. Park officials, however, state that this study was based on a number of key assumptions about conditions in the park during pre-European times. Park officials say they disagree with the researcher on issues such as the number of Native Americans that lived in Yellowstone and the impact they had on wildlife. Park officials added, however, that there is no scientific evidence available on either issue. Critics familiar with the principles of commercial range management for the production of livestock believe that the number of grazing animals in Yellowstone should be reduced to balance the available forage. They cite a 1963 survey of Yellowstone’s northern range conducted by what was then the U.S. Department of Agriculture’s Soil Conservation Service. This survey concluded that the range could support no more than 5,000 elk and 350 bison. According to the survey, populations of bison and elk in excess of these numbers would cause severe damage to the range and riparian areas. However, park officials said that the 1963 survey used commercial standards for domestic livestock to assess the park’s carrying capacity. According to park officials, they and other leading wildland ecologists believe these standards should not be applied to wildlife. A Forest Service official at Gallatin National Forest, which borders Yellowstone on the north and west sides of the park, also believes that a commercial carrying capacity cannot be set for wildlife. According to this official, Gallatin National Forest does not develop carrying capacity limits for wildlife because the Forest Service cannot control when wildlife come or go on the land. Gallatin National Forest does develop carrying capacity limits for cattle because the Forest Service can control where and when cattle graze on its land. 
The official noted that cattle use only that portion of the forage that is not required to support wildlife. To help resolve the rangeland controversy, the House Committee on Appropriations, in its July 1997 Committee Report on Interior’s 1998 appropriation, directs the Park Service to initiate a review by the National Academy of Sciences of all available science related to the management of ungulates and their ecological effects on the rangeland of Yellowstone. The extent to which domestic cattle risk infection through exposure to diseased bison and elk—either from mingling directly with infected wild animals or from using rangeland where infected wild animals have previously grazed—is the subject of intense controversy among the Park Service, wildlife management agencies, wildlife conservation groups, livestock interests, Native Americans, and others. Yellowstone National Park, under its interpretation of natural regulation, allows natural processes to control wildlife populations and opposes efforts to manage wildlife in a way that conflicts with natural regulation or restricts wild animals’ free-roaming nature. APHIS, however, is committed to eradicating brucellosis in the United States and believes that wildlife should be tested and, if infected, slaughtered to prevent the disease from spreading further. APHIS maintains that the techniques developed through its 63-year-old eradication program for domestic livestock can be applied to eliminate brucellosis in wildlife. In Yellowstone, blood tests indicate that 40 to 54 percent of the bison and about 1.5 percent of the elk from the northern range carry antibodies to Brucella abortus. Some of the elk from the northern range migrate to Montana for the winter. Other elk migrate to Wyoming for the winter and use the federal National Elk Refuge or the state’s 22 feedgrounds to supplement their food base. On average, about 38 percent of the mature cow elk using the National Elk Refuge’s feedground have had positive blood tests for brucellosis antibodies. Positive blood tests indicate that an animal is infected with or has been exposed to brucellosis. On the one hand, a positive test does not necessarily indicate that an animal is infectious; on the other hand, a negative test does not exclude the possibility of infection, because the blood of some animals that are infected does not react positively to the test. In addition to blood tests, tissue cultures are performed to detect the presence of brucellosis. Although tissue cultures are a much more reliable method of identifying active infection, they also will not identify all infected animals. The rate of current infection as determined by tissue cultures is always lower than the rate of positive blood tests because Brucella abortus cannot always be cultured from infected animals. For example, an ongoing analysis of samples from 41 bison killed during the winter of 1996-97 showed that the blood tests for 30 females were positive. For 18 of these 30, tissue cultures have been completed, and the results were positive for only 7. According to Wyoming officials, research with elk has suggested a higher correlation between positive blood tests and positive tissue cultures. According to Park Service officials, in the scientific literature, there is no documentation of brucellosis transmission from elk or bison to cattle in a wild, uncontrolled setting. Furthermore, although the risk of such transmission has never been quantified, the Park Service maintains that it is likely to be very low.
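The 1996-97 bison sampling figures cited above show how the apparent infection rate falls as one moves from blood tests to tissue cultures. The sketch below simply restates that arithmetic; because culturing was still ongoing, the resulting percentages are a snapshot for illustration rather than a final estimate.

```python
# Arithmetic behind the 1996-97 bison sampling figures cited above.
# The counts come from the text; culturing was still ongoing, so the
# culture-positive rate is a snapshot, not a final infection estimate.

bison_sampled = 41
blood_positive_females = 30
cultures_completed = 18
cultures_positive = 7

blood_positive_rate = blood_positive_females / bison_sampled
culture_positive_rate = cultures_positive / cultures_completed

print(f"Blood-test positive: {blood_positive_females}/{bison_sampled} = {blood_positive_rate:.0%}")
print(f"Culture positive (completed cultures): {cultures_positive}/{cultures_completed} = {culture_positive_rate:.0%}")
# As the text notes, culture-positive rates run lower than blood-test rates
# because Brucella abortus cannot always be cultured from infected animals.
```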
Because they consider the risk of transmission to be very low, park officials believe that testing and slaughtering infected wildlife to eradicate a potential source of infection for cattle is not necessary in Yellowstone and could result in the unnecessary slaughter of bison and negatively affect the genetic viability of the herd. Park officials also object to the use of vaccines that were developed and tested for cattle but have not been proven effective for bison. They contend that the untested vaccines may be ineffective and/or unsafe for the herds and other wildlife that may come into contact with them. Park officials also question whether the disease can be eliminated from wildlife. For example, they note that the disease may be impossible to eliminate from bison because elk and other mammals can carry brucellosis, which could then find its way back into bison. Unless brucellosis is eliminated from all of these mammals, park officials and others have stated, some chance remains that the disease will be transmitted back to the bison. According to APHIS officials, in several cases of brucellosis, wild elk or bison have been identified as the source of transmission. These officials believe that any risk is unacceptable in an eradication program. In addition, they refer to several other parks where the disease has been eliminated from bison and elk. However, APHIS officials agree that vaccines need to be tested and proven to be safe and effective before being used on elk and bison. During our review, we visited two of the three states that surround the park—Montana and Wyoming. Both states are concerned about the potential for the transmission of brucellosis between wildlife and cattle. However, each state approaches this problem differently. For example, the state veterinarian in Montana believes that no risk is acceptable because transmission would threaten the state’s brucellosis-free certification from APHIS. In December 1994, APHIS wrote a letter to Montana setting forth its intention to downgrade the brucellosis-free classification of the state if the state failed to take action against bison within its borders that were known to be infected with or had been exposed to the disease. As a result, Montana officials believe that they have no alternative but to slaughter bison that move into the state. Montana officials stated that they are not addressing the disease in elk because the rate of infection in elk is low. In the long term, Montana officials said, they plan to take action to eradicate the disease in elk. Wyoming, which has fewer bison than Montana but a much higher incidence of brucellosis in elk, has tried to manage the risks of exposure to the disease while implementing a long-term program to eradicate it. For example, in the Jackson area, Wyoming has worked with federal agencies and private landowners to develop policies for separating cattle from bison and elk to minimize the risk of transmission. Also, many of the ranchers in the Jackson area voluntarily vaccinate their cattle. Both Montana and Wyoming officials believe that the vaccines they have used successfully with domestic cattle could be applied to the park’s bison and elk herds. They and APHIS noted that the vaccine, combined with efforts to test and slaughter infected animals, has been used successfully on bison herds on private and other public lands.
Finally, some experts believe that even if brucellosis remains in “other mammals,” the disease would naturally decline and be eliminated from other wildlife because the carriers would not be able to transmit it to other animals. Scientific data on both sides of the brucellosis debate are limited. According to the Park Service, neither it nor APHIS has performed or sponsored many scientific studies on the transmission of brucellosis among elk and bison or on the development of vaccines against the disease. Recently, however, the park, APHIS, and others have initiated an ambitious series of studies on brucellosis in bison to obtain answers needed for making future management decisions. Critics of the park’s position on brucellosis derive support for their views from the biological similarities between bison and cattle and data developed through APHIS’ program for eradicating the disease in domestic livestock, including bison. Some critics do not believe that they are responsible for conducting additional research on brucellosis in wild bison. However, since the late 1970s, Wyoming, with technical and financial assistance from APHIS, has sponsored a number of studies on the disease in elk. For example, the state sponsored research to determine the effectiveness of a reduced dosage of one type of cattle vaccine in elk and is testing the effectiveness of injecting the vaccine through the use of a “biobullet” shot from an air gun. Various federal, state, and private groups are conducting many research studies and planning efforts to control or eradicate brucellosis in Yellowstone wildlife. In discussing the controversy surrounding this issue, one official described it as a war. Another official stated that the federal and state representatives are so entrenched in their positions that no one wants to be the first to compromise. He added that meetings on this issue have become so heated that a fight once broke out between participants. Recognizing the need to coordinate the work on brucellosis in the region, in July 1995, the states and responsible federal agencies established the Greater Yellowstone Interagency Brucellosis Committee. This interagency committee includes representatives of the states surrounding the park, the four federal land management agencies, and APHIS. The committee has agreed on the objective of planning for the elimination of brucellosis by the year 2010. However, the states and agencies disagree on the current feasibility of eliminating the disease, the actions needed to eliminate it, and the effect of the disease on wildlife or on the livestock industry if it is not eliminated. Although members are generally very supportive of the committee’s efforts, they agree that achieving results has been difficult even when issues are generally agreed upon. For example, a paper summarizing generally accepted information on brucellosis underwent 12 revisions over 22 months before it received final approval. Despite these difficulties, members of the interagency committee believe they are slowly making strides towards coordinating policies and addressing scientific data needs. For example, the committee has completed a policy on elk feedgrounds, produced an informational report on the potential for brucellosis transmission by bull bison, developed a bison quarantine protocol, and conducted a national symposium on brucellosis in the Greater Yellowstone Area. 
Among its current activities, the committee is coordinating a joint effort by the park, the state of Montana, APHIS, and the Forest Service, as well as three cooperative efforts in Wyoming. Since 1989, Montana and the Park Service have been meeting to develop a long-term plan for managing the brucellosis-exposed, free-roaming bison that move primarily during the winter from the park to public and private lands in Montana along the northern and western boundaries of the park. The first goal of this effort was to issue a long-term plan and an environmental impact statement (EIS) by December 1991. In a May 1992 Memorandum of Understanding, the Forest Service and APHIS joined this effort. However, as negotiations have continued on ways to better manage brucellosis in bison, many deadlines for completing this effort have come and gone. In the interim, Montana filed a complaint in January 1995 in federal district court contending that the conflicting policies of APHIS and the Park Service threaten Montana’s brucellosis-free certification. To settle the lawsuit, Montana, the park, and APHIS agreed to develop interim bison management procedures to prevent the potential spread of brucellosis from bison to domestic cattle. The August 1996 interim plan was implemented over the last winter and remains in effect. Where cattle graze in Montana, the interim plan has no tolerance for bison. As a result, about 1,100 bison were shot or captured and slaughtered last winter. The procedures do allow bison to use adjacent federal lands where cattle either do not graze or are not present when bison are in the area. Early this year, to move forward on the long-term plan, the Park Service committed staff from its field area office to assist in preparing both documents. The park and the state are committed to issuing a draft management plan and an EIS for public comment in July 1997 and to completing final products by March 1998. In June 1997, the state, APHIS, the Forest Service, and the Park Service agreed upon a preferred alternative for managing brucellosis and Yellowstone’s bison population. Generally, the alternative provides for the capture and shipment to quarantine of animals testing negative for brucellosis. These animals would then be made available to Native American tribes to help establish herds. The alternative also provides for the capture of bison to control their movement onto private lands; the hunting of bison in certain situations; the vaccination of bison when a vaccine is developed for them; and the acquisition of additional winter range outside the park when such range becomes available for purchase from willing sellers. Three separate ongoing cooperative efforts are addressing brucellosis issues in the area south of Yellowstone Park. First, Wyoming has been working with APHIS, the Park Service, the Fish and Wildlife Service, and the Forest Service since December 1995 to develop an interim brucellosis plan for elk and bison. The goal is to design a plan that will maintain the state’s brucellosis-free classification, reduce damage to private property, and sustain the free-roaming bison and elk herds. Last November, the agencies received public comments on a draft plan, which they are now analyzing. A second effort is being conducted by Grand Teton National Park and the National Elk Refuge, in cooperation with the Wyoming Game and Fish Department and Bridger-Teton National Forest, to develop a long-term management plan for the Jackson bison herd. 
The plan’s goal, in part, is to minimize the potential for transmitting brucellosis among bison, elk, and domestic livestock. A draft plan and environmental assessment were published in September 1996, public comments were received, and a final plan is expected in August 1997. To reduce the risk of transmission among bison, elk and cattle, the draft plan proposes measures such as baiting or feeding the bison for a limited time to keep them from migrating onto the National Elk Refuge, separating bison from elk and cattle when the potential for transmission is greatest, vaccinating cattle, using a vaccine on bison when one is developed for them, and developing disease transmission risk assessments to use as the basis for wildlife management programs. The plan would also allow small public bison hunts outside the park and make some bison available to Native Americans. A third effort, led by the Wyoming Game and Fish Department, is to develop brucellosis management action plans for each of the state elk herds and the surrounding range used by cattle. The objective is to develop plans that minimize the potential for transmitting brucellosis among elk, and from elk to cattle, by reducing the animals’ overlapping use of rangeland and conducting other actions designed to ultimately eliminate the disease. Finally, at the request of the Secretary of the Interior late last winter, the National Academy of Sciences’ Commission on Life Sciences agreed to review the scientific data on brucellosis contained in published studies in the fields of wildlife ecology, epidemiology, zoonotic diseases, infectious disease control, animal physiology and health, and veterinary science. The review is to examine the scientific issues surrounding the transmission of brucellosis among wild and domestic animals, especially among bison and cattle; determine the extent of infection in wild herds; and identify the additional research that is needed on these subjects. Specific questions include, among others, the relationship between blood tests and the ability of animals to transmit the disease, the effectiveness and safety of vaccines, and the impact of various risk reduction measures. The study is due to be published by October 1997. The impact of Yellowstone’s bison and elk herds on the park’s range and riparian areas and the potential for these animals to transmit brucellosis to cattle are highly controversial, sensitive, and emotional issues for the affected parties. Scientific and historic data on some aspects of these issues are limited, and when agreement does exist, the data are often interpreted differently, reflecting differences in people’s values and in agencies’ mandates and missions. Many questions will need to be answered before these concerns can finally be resolved. For example, how will the reintroduction of the wolf in Yellowstone affect the size of the elk herd and, subsequently, the park’s woody vegetation? This past winter, the slaughter of bison that migrated out of the park, combined with the winter kill, reduced the bison herd to about half of its size the previous year. In the short term, this reduction may limit the migration of bison from the park, relieve some of the immediate pressure on the Park Service to take management actions, and create an opportunity for the Park Service and its critics to complete and assess the results of studies such as the National Academy of Sciences’ review of brucellosis issues. The results of these studies are needed to make informed management decisions. 
This concludes my statement, Mr. Chairman. I would be happy to respond to any questions you or other Members of the Subcommittee may have.
Pursuant to a congressional request, GAO discussed wildlife management issues at Yellowstone National Park, focusing on the: (1) National Park Service's (NPS) current policy for managing free-roaming bison and elk in Yellowstone; (2) controversy surrounding the impact of these herds on the park's rangeland and riparian areas; and (3) controversy surrounding the risks to domestic livestock posed by exposure to diseased bison and elk. GAO noted that: (1) current laws and regulations provide park managers with broad discretion on how to manage their park's resources; (2) as a result, parks with similar wildlife resources, such as Yellowstone and neighboring Grand Teton National Park, can apply different approaches to managing these resources; (3) while Yellowstone uses "natural regulation," a policy that allows natural forces to regulate the size of its bison and elk herds, Grand Teton has established specific goals and objectives to control the size of its bison herd; (4) critics of Yellowstone's policy believe that the policy's implementation has produced bison and elk herds that are too large and damage the park; (5) in their view, the park's rangelands are being overgrazed, the riparian areas are being damaged, and because these lands are being depleted, bison and elk are migrating from the park in search of forage on private lands and public grazing areas; (6) according to NPS' recently published studies, however, researchers have found that Yellowstone's grasslands are not overgrazed, and several factors have contributed to the decline of the range and of the riparian areas' woody vegetation; (7) park officials believe that bison are leaving the park for a combination of reasons: these animals are nomadic by nature, they do not have access to sufficient forage during hard winters, and they can follow snowmobile trails out of the park; (8) the health of Yellowstone's bison and elk herds is a major concern for livestock owners and public officials in the states bordering the park; (9) because many Yellowstone bison and elk are infected with brucellosis, a disease that can cause cattle to abort during pregnancy, these parties fear that the wild animals may transmit the disease to domestic cattle; (10) a state with infected livestock may lose its federal brucellosis-free classification, jeopardizing its right to freely transport cattle across state lines; (11) as a result, these parties believe that the risk of transmitting brucellosis from bison to domestic cattle must be eliminated by containing bison within the park, by using vaccines, or by shooting or capturing bison that leave the park; (12) according to NPS, the risk that brucellosis will be transmitted from either elk or bison to cattle is likely to be very low; (13) this past winter, the Yellowstone bison herd was reduced to about half of its size the previous year; and (14) in the short term, this reduction may provide an opportunity for NPS and its critics to complete and assess the results of studies which could go a long way toward resolving this controversy.
Established in 1972, WIC is designed to improve the health and nutritional well-being of participants by providing nutritious supplemental foods, nutrition education, and referrals to health care services. The program is available in each state, the District of Columbia, 33 Indian tribal organizations, Puerto Rico, the U.S. Virgin Islands, American Samoa, and Guam. FCS administers the program in cooperation with state and local health departments and related agencies. The supplemental foods that WIC provides include milk, cheese, fruit and vegetable juices, iron-fortified adult and infant cereals, dried beans or peas, peanut butter, eggs, and infant formula. Special infant formulas are also available to meet unusual dietary or health-related conditions. Each state designates the types and amounts of foods that local WIC agencies can prescribe to meet each participant’s nutritional needs. The WIC food benefit (referred to as a food package) can be provided through local WIC health clinics or home delivery. More commonly, participants receive their food benefits in the form of a check or a voucher that is used to purchase the specific foods at authorized retail vendors. These vendors have been selected by the state to participate in the program for a period of time. FCS requires the states to operate a rebate program for infant formula. By negotiating rebates with infant formula manufacturers for each can of formula purchased through WIC, the states greatly reduce their average per person food costs so that more people can be served. In fiscal year 1996, infant formula rebates to all states totaled about $1.2 billion. Federal WIC appropriations totaled $3.47 billion in fiscal year 1995 and $3.73 billion annually in fiscal years 1996 and 1997. The program is primarily funded by federal appropriations; some states supplement the federal grant with their own funds. In fiscal years 1995 and 1996, the average monthly WIC participation nationwide was about 6.9 million and 7.2 million, respectively, and in fiscal year 1997, the average monthly participation was about 7.4 million through February 1997. Grants to the states are divided into food grants and nutrition services and administration grants. Food grants cover the costs of supplemental foods and are allocated to the states through a formula that is based on the number of individuals in each state who are eligible for WIC benefits because of their income. The nutrition services and administration grants are allocated to the states through a formula that is based on factors such as the state’s number of projected program participants and WIC salary costs. Nutrition services and administration grants cover costs for program administration, start-up, monitoring, auditing, the development of and accountability for food delivery systems, nutrition education, breast-feeding promotion and support, outreach, certification, and developing and printing food vouchers. State WIC agencies establish program eligibility criteria that are based on federal guidelines. To qualify for the program, WIC applicants must show evidence of nutritional risk that is medically verified by a health professional. In addition, participants may not have incomes that exceed 185 percent of the poverty guidelines that are established annually by the U.S. Department of Health and Human Services. In 1997, for example, the annual WIC income limit for a family of four is $29,693 in the 48 contiguous states and the District of Columbia. 
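The $29,693 figure follows directly from the 185-percent rule. The sketch below reproduces it using the published 1997 Department of Health and Human Services poverty guideline for a family of four in the 48 contiguous states and the District of Columbia ($16,050); that guideline value is supplied here for illustration and is not stated in the report text.

```python
# Reproducing the 1997 WIC income limit for a family of four in the
# 48 contiguous states and the District of Columbia from the
# 185-percent-of-poverty rule described above. The $16,050 poverty guideline
# is the published 1997 HHS figure, supplied here for illustration.
import math

POVERTY_GUIDELINE_1997_FAMILY_OF_FOUR = 16_050   # dollars
WIC_INCOME_FACTOR = 1.85                         # 185 percent of the guideline

income_limit = math.ceil(POVERTY_GUIDELINE_1997_FAMILY_OF_FOUR * WIC_INCOME_FACTOR)
print(f"1997 WIC annual income limit, family of four: ${income_limit:,}")  # $29,693
```

Rounding the product ($29,692.50) up to the next whole dollar yields the $29,693 limit cited above.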
Federal regulations allow the states to individually determine their income documentation requirements for applicants seeking to participate in WIC. The states are also required by federal regulations to automatically certify as income eligible those individuals who document their participation in the Food Stamp Program, Medicaid, and the Temporary Assistance for Needy Families Program. Approximately two-thirds of all WIC participants were enrolled in one or more of these programs in fiscal year 1994, the last year for which complete data were available at the time of our review. In addition, WIC participants are required by federal regulations to reside in the jurisdiction of the state where they receive benefits, and the states are required to check the identification of all participants when they seek certification for program participation and when they receive their vouchers. Although the applicants who meet the program’s health and nutrition, income, and residency requirements may be certified as eligible to participate in WIC, the number of participants who are actually served each year primarily depends on the total amount of funds available to the states. According to FCS’ estimates, about 75 percent of the eligible women, infants, and children actually participated in WIC during fiscal year 1995. States’ initiatives to control food costs by limiting the types and package sizes of WIC foods and by more carefully selecting and regulating vendors have reduced the program’s costs by millions of dollars. These practices could be expanded in the states that have already implemented them and could be adopted by other states. However, the National Association of WIC Directors and some WIC directors we spoke with are concerned that, among other things, the program’s regulations can constrain a state’s ability to effectively use the additional funds that become available as a result of cost containment initiatives. Two practices that some states are using to contain food costs are reported by state WIC directors to be saving millions of dollars. These practices are (1) contracting with manufacturers to obtain rebates on WIC foods in addition to infant formula and (2) limiting authorized food selections by, for example, requiring participants to select brands of foods that have the lowest cost. In fiscal year 1996, nine state agencies received rebates for two WIC-approved foods—infant cereal and/or infant fruit juices. Table 1 shows the states that were receiving rebates on these foods during fiscal year 1996 through individual or multistate contracts. The cost savings resulting from these infant cereal and juice rebates are relatively small in comparison with the savings resulting from infant formula rebates. As shown by the figures in table 1, the rebates for infant cereal and juices represented about 1 percent of the total rebates received in these nine states in fiscal year 1996. The $6.2 million in infant cereal and juice rebates reduced total food costs in these states by about six-tenths of a percent in fiscal year 1996. Eleven states—Alaska, California, Delaware, the District of Columbia, Hawaii, Missouri, New Jersey, New York, Oklahoma, Rhode Island, and West Virginia—reported that their agencies were considering, or were in the process of, expanding their use of rebates to foods other than infant formula. In May 1997, Delaware joined the District of Columbia, Maryland, and West Virginia in a multistate rebate contract for infant cereal and juices.
California was the first state to expand its rebate program to include adult juices, adding this option in March 1997. California currently spends about $65 million annually on adult juice purchases. California’s WIC director told us that the state expects to collect about $12 million in annual rebates on the adult juices, thereby allowing approximately 30,000 additional persons to participate in the program each month. According to FCS officials, there is the potential to reduce WIC costs further through the expanded use of rebates. They said that FCS has encouraged the states to examine and aggressively pursue rebates to stretch food dollars to serve a maximum number of eligible participants. In May 1997, FCS sent its regional directors a memorandum outlining a strategy to “manage, contain, and control” food costs using rebates on products such as special infant formula and other WIC foods in addition to infant formula. The officials told us that if federal funding for WIC remains constant or declines, more states may consider expanding the use of rebate contracts to provide funds for their WIC programs. FCS officials also told us that some states’ WIC agencies may not be expanding the use of rebates because other cost containment practices have proven effective in reducing food costs. For example, the states that have elected to use only store brand foods may be incurring lower costs than the states that receive rebates on national brand products. While these rebates reduce costs, the procurement process requires additional administrative effort by the states. The California WIC director and FCS officials told us that the process of entering into and monitoring rebate contracts can be complicated and time-consuming. In addition, FCS officials told us that bid protests filed by the manufacturers that are not awarded contracts impose additional administrative burdens on the states. The administrative burden associated with procuring and monitoring rebate contracts can be exacerbated if a state contracts with more than one manufacturer for rebates. For example, when California expanded its rebate program to include adult juices, the state requested bids on rebate contracts from juice companies for frozen and ready-to-drink apple, grape, orange, and pineapple juices that were available in all parts of the state and had to negotiate five separate contracts. The states also need to use additional resources to manage the rebate contracts. FCS officials told us that disagreements between the states and manufacturers occur over the rebate billings that the manufacturers are obligated to pay the states. They said that the states must therefore develop billing systems that track the amount of the manufacturers’ products selected by WIC clients using their vouchers. For example, the California WIC director told us that before the state implemented its adult juice rebate contracts, the state agency had to develop a system for determining the amount and quantity of each type of juice selected by WIC participants and a system for rebate billing that was acceptable to the juice manufacturers. FCS officials told us that the states could become increasingly dependent on the funds provided by their rebate contracts. Historically, the annual funds received by the states from their infant formula rebate contracts have continued to increase, but this source of funding may not always be reliable. 
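The California figures above also imply a rough per-participant food cost. The sketch below is an illustrative back-calculation that assumes, for simplicity, that the additional participants would be supported entirely by the expected juice rebates; it is not a figure reported by the state.

```python
# Back-of-the-envelope check of the California adult-juice rebate figures
# cited above. For illustration, it assumes the additional participants are
# funded entirely by the expected rebates; California did not report these
# derived numbers.

annual_adult_juice_spending = 65_000_000   # dollars, reported by the state
expected_annual_rebates = 12_000_000       # dollars, state WIC director's estimate
additional_monthly_participants = 30_000

rebate_share = expected_annual_rebates / annual_adult_juice_spending
implied_monthly_cost_per_participant = expected_annual_rebates / (additional_monthly_participants * 12)

print(f"Rebates as a share of adult-juice spending: {rebate_share:.0%}")                              # about 18%
print(f"Implied monthly food cost per added participant: ${implied_monthly_cost_per_participant:.2f}")  # about $33
```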
If manufacturers begin offering lower rebates, the states could have insufficient funds to provide program benefits to their current level of WIC participants. According to FCS officials, such a decrease in rebate funds would be similar to an increase in food prices because of inflation, something which the program has experienced before. In such instances, the states would need to make adjustments to the foods they offer to contain the escalating costs and/or remove people from the program. According to FCS officials, the prices of the food items provided by WIC can vary dramatically, depending, for example, on the brand of the item or how it is packaged. Individually wrapped sliced cheese can cost substantially more than the same cheese in block form, and a national brand of juice could cost substantially more than a vendor’s own brand. All state WIC directors responding to our survey reported that their agencies imposed limits on one or more of the WIC food items. The states may specify certain brands; limit certain types of foods, such as sliced cheese; restrict container sizes; and require the selection of only the lowest-cost brands. Figure 1 shows the number of WIC directors who reported that their states use various types of limits for one or more food items. As the figure indicates, some types of restrictions are more widely used than others. Forty-seven of the 48 WIC directors reported that their states’ participants are allowed to choose only certain container or package sizes of one or more food items. For example, 27 of the 48 directors who responded to our questionnaire reported that their states limit the container or package size of infant juice. In addition, 8 states limit allowable types of infant juice, and 18 do not offer infant juice. Some states have also extended limits to non-infant foods. For example, Texas participants can select only cheese that is not packaged as individually wrapped slices or shredded, and milk must be in 1-gallon or half-gallon sizes and must be the least expensive brand. In Pennsylvania, dry beans or peas must be in 1-pound packages, ready-to-drink juices must be in 46-ounce cans, and the price of a dozen eggs must not exceed $1.75. While all states have one or more food selection restrictions, 17 of the 48 WIC directors responding to our questionnaire reported that their states are considering the use of additional food selection limits to contain or reduce costs in the WIC program. Most of the 48 WIC directors reported that placing selection limits on WIC foods has at least moderately decreased their food costs. Twelve of these directors reported that selection limits have greatly or very greatly reduced their WIC food costs. Figure 2 shows the range of food cost reductions that the directors reported from implementing these restrictions. Texas, for example, which reported that the restrictions had a very great impact, uses a combination of food selection limits, including a least-cost brand policy. The policy requires participants to buy the cheapest brand of milk, evaporated milk, and cheese available in the store—usually the store’s own brand. Texas also requires participants to buy the lowest-cost 46-ounce fluid or 12-ounce frozen fruit juices from an approved list of types (orange, grapefruit, orange/grapefruit, purple grape, pineapple, orange/pineapple, and apple) and/or specific brands. 
According to Texas WIC officials, the least-cost brand policy has had a “tremendous” impact on lowering the dollar amount that the state pays for WIC food products. For example, in fiscal year 1989 (the first full fiscal year that the policy was in effect), the cost of milk was reduced by about $3 million. In fiscal year 1996, Texas had a lower than average food cost per person among the 50 states and the District of Columbia even before rebates were factored in. (See app. I.) FCS headquarters officials told us that the selection by state agencies of the foods available to participants is one of the states’ most powerful cost containment tools. FCS encourages the states to approve WIC foods that are low in price. However, the officials said that while cost efficiencies are important, the states must maintain the nutritional integrity of the program’s food package. The practice of limiting food items can have a negative impact if participants do not select the food products or do not eat them. For example, Texas WIC officials told us that they discontinued the least-cost brand requirement for peanut butter when they discovered that participants were not selecting the product. In addition, FCS officials said that the restrictions may make food selections more confusing for participants and burdensome for vendors. For example, Texas WIC officials told us that participants and cashiers often have difficulty determining which products have the lowest price. A 1995 study of participants’ selections of lowest-cost WIC foods performed by a Texas WIC food chain found that 95 percent of the participants were selecting one or more nonapproved food items that had to be exchanged for the correct item. In response, the food chain, among other things, upgraded the quality, location, and clarity of WIC labels and signs in all of its stores, adding color displays and descriptions of approved WIC items. The Texas WIC agency has also published and displayed a color brochure of approved items that has helped participants to select the approved foods. According to an official of the supermarket chain, these actions have reduced exchanges of food items between 19 and 50 percent. Separately or in conjunction with measures to contain food costs, some state agencies have placed restrictions on vendors to hold down costs. Some states are also selecting alternatives to vendor distribution for certain food products. Thirty-nine of the 48 states responding to our questionnaire reported that they use special selection requirements or limits to contain the number of authorized vendors. Twenty-nine WIC directors reported that they considered it extremely or very important to contain the number of vendors in order to control the program’s costs, and 9 reported that it is moderately important. Of the 39 states, 34 reported using competitive food costs as one of their criteria for selecting vendors. In addition, 27 states have established price limits that vendors cannot exceed for allowable foods, and 5 states require vendors to bid competitively for vendor slots. The food prices of WIC vendors in Texas must not exceed by more than 8 percent the average prices charged by vendors doing a comparable dollar volume in the same area. Once selected, vendors must maintain competitive prices. According to Texas WIC officials, the state does not limit the number of vendors that can participate in the WIC program. However, Texas’ selection criteria for approving vendors exclude many stores from the program. 
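The price screen embedded in Texas' vendor selection criteria can be expressed as a simple comparison. The sketch below uses hypothetical store prices and function names of our own choosing to show how the 8-percent threshold described above might operate; it is an illustration, not the state's actual procedure.

```python
# Illustrative screen based on the Texas rule described above: a vendor's WIC food
# prices may not exceed the average prices charged by vendors doing a comparable
# dollar volume in the same area by more than 8 percent. Data are hypothetical.

def passes_price_screen(vendor_basket_price, peer_average_price, tolerance=0.08):
    """Return True if the vendor's price is within the allowed 8-percent margin."""
    return vendor_basket_price <= peer_average_price * (1 + tolerance)

peer_average = 42.50  # average WIC basket price among comparable-volume vendors ($)
applicants = {"Store A": 44.10, "Store B": 47.25, "Store C": 41.80}

for store, price in applicants.items():
    status = "eligible" if passes_price_screen(price, peer_average) else "excluded"
    print(f"{store}: ${price:.2f} -> {status}")
# Store B exceeds 42.50 * 1.08 = 45.90 and would be excluded under this screen.
```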
By approving only stores with competitive prices, Texas officials said that they save WIC food dollars by paying competitive prices for WIC products. Similarly, Delaware’s Project SAVE (Selecting Authorized Vendors Efficiently) requires vendors to bid competitively for all authorized WIC food items. Vendors that meet the minimum qualification requirements and bid the lowest prices are selected to fill the available retail outlet slots. Delaware selects vendors every 2 years. Delaware’s WIC director said that while SAVE maintains the clients’ access to vendors, administrative savings have been achieved by training and monitoring vendors, and the number of potentially high-risk vendors has declined. The director noted that SAVE enables the state to control unexpected price increases because the prices are locked in for 2 years through agreements with vendors, thereby allowing grant funds to be more effectively and efficiently managed. Between fiscal years 1991 and 1996, the director estimated, the agreements saved the program about $1.8 million in food costs. Eighteen WIC directors reported that their states use ratios of participants to vendors to restrict the number of vendors allowed to participate in the program. By limiting the number of vendors, the states can more frequently monitor stores and conduct compliance investigations, according to FCS and state WIC officials. For example, Delaware uses a ratio of 200 participants per store to determine the total number of vendors that can participate in the program in each WIC service area. Of the 39 states reporting that they contain the number of vendors, 31 states reported that as a result, their programs’ costs have decreased somewhat or greatly. Figure 3 presents the WIC directors’ estimates of the cost reductions resulting from limits on vendors and selection policies. The WIC directors in 7 of the 39 states (Maine, Massachusetts, Nebraska, New Mexico, Rhode Island, South Carolina, and Wisconsin) that currently contain the number of vendors allowed to participate in the program reported that they are planning to introduce additional initiatives, such as requiring competitive food pricing by currently authorized vendors, to contain the program’s costs. In addition, the directors in two other states (Connecticut and North Carolina) also reported that they plan to select vendors on the basis of competitive pricing. FCS headquarters officials told us that limiting the number of vendors and selecting vendors with competitive prices are important aspects of containing WIC costs. However, they told us that the retail community does not favor limits on the number of approved vendors. Instead, vendors have pressured state WIC agencies and FCS officials to allow all vendors that qualify to participate. According to the FCS officials, the amount that the WIC program spends for food would be substantially higher if stores with higher prices were authorized for the program. Upon a physician’s instructions, WIC infants with special dietary needs or medical conditions may receive special infant formula. While only a small percentage of the WIC infants nationwide require these formulas, the monthly retail costs for them can be high—ranging in one state we surveyed from $540 to $900 for each infant. Twenty-one states avoid paying retail prices by purchasing the special formula at much lower wholesale prices and distributing it to participants. Opportunities exist to substantially lower the cost of special infant formula. 
Cost savings may be achieved if the states purchase special infant formula at wholesale instead of retail prices. Additional savings may also be possible if these states are able to reduce or eliminate the cost of authorizing and monitoring the retail vendors and pharmacies that distribute only special infant formula to WIC participants. Pennsylvania, for example, turned to direct purchasing to make special infant formula more available and to avoid the high cost of vendor-provided formulas. It established a central distribution warehouse for special formulas in August 1996 to serve the less than 1 percent of WIC infants in the state—about 400—who needed special formulas in fiscal year 1996. Pennsylvania purchases the special formulas directly from the manufacturers at wholesale prices, usually between $300 and $500 for a 1-month supply. The warehouse ships the special formulas, at the participant’s option, either directly to the participant or to the WIC clinic. According to the state WIC director, in many instances, the WIC warehouse delivers the formula faster than pharmacies do. The program is expected to save about $100,000 annually. In addition, by relying on its warehouse, the state can remove over 200 pharmacies from the program, resulting in significant and measurable administrative cost savings, according to the WIC director. Appendix II provides information on the states’ use of cost containment practices that affect the program’s costs. According to the National Association of WIC Directors and some WIC directors we spoke with, the program’s funding structure can constrain a state’s ability to make effective use of the additional funds that become available as a result of cost containment initiatives. FCS policy requires that during the grant year, any savings from cost containment accrue to the food portion of the WIC grant, thereby allowing the states to provide food benefits to additional WIC applicants. None of the cost containment savings are automatically available to the states for support services, such as staffing, clinic facilities, voucher issuance sites, outreach, and other activities that are needed to increase participation in the program. As a result, the states may not be able to serve more eligible persons or they may have to carry a substantial portion of the program’s support costs until the federal nutrition services and administration grant is adjusted for the increased participation level—a process that can take up to 2 years—according to the National Association of WIC Directors. FCS officials pointed out that provisions in the federal regulations allow the states where participation increases to use a limited amount of their food grant funds for program support activities. However, some states may be reluctant to use the option. For example, according to a Texas WIC official, states may not want to redirect food funds to support services because doing so may be perceived as taking food away from babies. Although California implemented cost containment initiatives during both the current and the previous year, the WIC director told us that the state received less funding for support services this year compared with last year. As a result, she said, California has a large, multimillion-dollar imbalance between food money and program support funds that is likely to get worse. She told us that the California program has been hampered by the lack of adequate support funds to sustain its caseload.
Some WIC directors told us that such shortfalls in funding for support services may discourage state agencies from expanding the use of cost containment initiatives. FCS officials stated that while the WIC funding process does not immediately adjust the amount of funds for support services to reflect cost containment savings, such adjustments are generally made in the following year’s funding allocation. FCS officials also noted that a major reason for the lack of adequate funding for program support activities is an insufficient appropriation level overall—a factor that affected California as well as all WIC state agencies. Federal regulations allow the states to establish their own documentation requirements for applicants who do not automatically meet the income requirements for participation in WIC. Thirty-two of the 48 WIC directors reported that their state agencies generally require documentation of income eligibility for these applicants. Fourteen directors reported that their states do not require documentation. These states allow applicants to declare their income without providing supporting documentation. Finally, two directors reported that income documentation procedures are determined individually by the local WIC agencies. In addition, 20 state WIC directors reported that their states do not require applicants to provide proof of residency, and 12 reported that their states do not require applicants to provide proof of identity when they seek certification for program participation. Thirty of the 32 states that generally require applicants to document their income will waive this requirement under certain conditions. The responses to our questionnaire and our review of state policies indicate that waiving this requirement can be routine. For example, in some instances when individuals report that they are homeless or lack any income, the documentation requirement can be waived. We found that some states also allow individuals to self-declare their income if they do not bring income documents to their certification meeting. While these states will waive their documentation requirements, 27 of the 32 state directors reported that 75 percent or more of the participants who were not automatically income eligible provided documentation, such as pay stubs and letters, to establish eligibility in fiscal year 1996. Appendix III provides information on the states’ income documentation requirements and the percentage of participants who were not automatically income eligible and provided income documentation during fiscal year 1996. In addition to meeting income requirements, WIC applicants must reside within the jurisdiction of the state where they expect to establish eligibility to receive benefits. FCS allows the states to accept an applicant’s declaration of state residency without documentation. While 20 of the 48 WIC directors reported that their states do not require applicants to provide any proof of state residency, 28 states do require applicants to provide proof of state residency. The types of residency documentation accepted by these states include utility bills, rent receipts, driver’s licenses, voter registration cards, and bank statements. To prevent duplicate payments, the program’s regulations require the local WIC agency to check the identification of each participant at certification and when issuing food or food vouchers. 
The types of identification accepted by states include driver’s licenses, birth certificates, hospital records, pay stubs, voter registration cards, or recent correspondence. Twelve of the 48 WIC directors reported that their states do not require such proof of identification at certification. There has not been a study of the incidence and magnitude of errors in determining income eligibility for the WIC program since 1988. The 1988 study found that 5.7 percent of the participants were not eligible. According to FCS officials, there is potential for error in making income eligibility decisions, and income documentation requirements may need to be tightened. FCS has begun a nationwide study, scheduled to be completed in 1999, that will develop a national estimate of the number of people participating in the program who are not income eligible. The study will also assess the extent to which various income documentation procedures reduce the level of participation by individuals who are ineligible. The information from this study will assist FCS in determining what changes are needed in income documentation to ensure that the states provide benefits only to applicants who are eligible. FCS officials told us they strongly encourage the states to obtain income documentation. However, they said that imposing stricter documentation requirements could result in increased administrative costs for state and local agencies and might discourage some eligible individuals from applying for benefits. They also noted that certain subgroups of the WIC population, such as aliens, may find stricter documentation requirements a barrier to participation because individuals may be intimidated by the paperwork. FCS officials also expressed concern that the states not requiring proof of personal identification may not be able to ensure that they are complying with the federal requirement that they check the identification of participants when they are certified and when they receive vouchers. Also, FCS officials expressed concern that the states not obtaining evidence of participants’ residency may not be able to ensure that the participants are residents of their states as required by federal regulations. A number of the states are making effective use of a variety of practices to contain the WIC program’s costs and to extend coverage to more women and children. However, these states have had to overcome various obstacles to implement cost containment. These obstacles include incurring the increased administrative burden associated with procuring and monitoring rebate contracts, ensuring that cost reduction does not result in a food package that is unacceptable to participants, and overcoming resistance from the retail community when attempting to establish special selection requirements or limits on vendors authorized to participate in the program. Given such obstacles, and the states’ concern with how the program allocates the additional funds made available through cost containment initiatives, some states may be discouraged from adopting or expanding the use of cost containment practices. As they seek to expand cost containment practices, FCS and the states can benefit from the experiences of those states that have implemented such practices effectively. Expanding cost containment depends, in part, on reducing or eliminating the obstacles that can discourage the states from initiating such practices. 
The expansion of these practices can have a substantial impact on the WIC program because for every 1-percent reduction in food costs that may result from these initiatives, the federal food expenditure of about $2.7 billion could be reduced by about $27 million annually. Cost savings could be used to provide benefits to additional participants, improve the quality of WIC services, and/or reduce the cost of the program to the federal government. The states that base income-eligibility decisions on WIC applicants’ declarations of income without documentation may be allowing applicants who are not eligible to participate in the WIC program. This policy may result in unintentional or deliberate misreporting of income information. However, the extent of the problem is unknown because there has not been a recent study of the number of participants in the program who are not income eligible. Information from the new study FCS has begun should enable the agency to determine what changes are needed in the program’s income documentation requirements. Similarly, WIC participants must reside in the jurisdiction of the state where they receive benefits and provide identification at the time they are certified to participate in the program and when they receive their vouchers. However, some states are not requiring proof of residency or identity. Without such proof, the states cannot ensure that these requirements are being met. To encourage further implementation of WIC cost containment initiatives, the Secretary of Agriculture should direct the Administrator of FCS to work with the states to identify and implement strategies, including policy, regulatory, and legislative revisions, to reduce or eliminate the obstacles that may discourage such initiatives. These strategies could include modifying policies and procedures to allow the states to use cost containment savings for the program’s support services and establishing regulatory guidelines for selecting vendors to participate in the program. The Secretary should also direct the Administrator to take the necessary steps to ensure that the state agencies are requiring participants to provide evidence that they reside in the states where they receive WIC benefits and to provide identification when their eligibility is certified and when they receive food or food vouchers. We provided the Food and Consumer Service with copies of a draft of this report for review and comment. We met with agency officials, including the Administrator, the Acting Deputy Administrator for Special Nutrition Programs, and the Director of the Supplemental Food Program Division. FCS generally agreed with the report’s findings and recommendations but suggested revising the presentation of our first recommendation that FCS work with the states to reduce or eliminate the obstacles that may discourage the use of cost containment initiatives. FCS believed that the clarity of our recommendation could be improved by emphasizing that a variety of additional approaches could be taken by the agency to reduce or eliminate cost containment obstacles or provide additional incentives to encourage more cost containment. In response to these concerns, we revised the wording of the recommendation. FCS also provided us with a number of technical comments that we incorporated into the report as appropriate. In developing information for this report, we spoke with and obtained documents from officials at FCS headquarters.
We also spoke with officials at all seven of FCS’ regional offices. We interviewed state WIC officials in California, Delaware, Pennsylvania, and Texas. In addition, we collected pertinent information from the National Association of WIC Directors. We reviewed federal laws and regulations applicable to the establishment and operation of the WIC program. We also mailed questionnaires to the WIC agency directors in the 50 states and the District of Columbia. We received responses to our questionnaire from 48 directors (94 percent). We conducted our work from December 1996 through August 1997 in accordance with generally accepted government auditing standards. We did not, however, independently verify the accuracy of the state WIC agency directors’ responses to our questionnaire. We are sending copies of this report to the appropriate congressional committees, interested Members of Congress, the Secretary of Agriculture, and other interested parties. We will also make copies available upon request. If you have any questions, please call me at (202) 512-5138. Major contributors to this report are listed in appendix IV: Thomas Slomba, Assistant Director; Peter Bramble; Leigh McCaskill White; Carolyn Boyce; and Carol Herrnstadt Shulman.
Pursuant to a congressional request, GAO reviewed cost containment initiatives states are using to control the cost of the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), focusing on the practices that the states use to: (1) contain costs by controlling the foods approved for use in the WIC program and by more closely selecting and regulating participating vendors; and (2) ensure that WIC applicants' incomes meet the program's eligibility requirements. GAO noted that: (1) the states are using a variety of cost containment initiatives to control the WIC program's costs; (2) for example, 10 states have contracted with manufacturers to obtain rebates on WIC foods in addition to infant formula, and some states have placed greater limits on WIC participants' food choices than other states; (3) separately, or in conjunction with efforts to contain food costs, 39 states use various practices to restrict the number of vendors or ensure that the prices vendors charge for WIC food items are competitive; (4) these and other practices to contain food costs have saved millions of dollars annually and enabled more individuals to enroll in the program, according to WIC directors; (5) while the use of cost containment practices could be expanded, certain obstacles, including the states' concern with how the program allocates the additional funds made available through cost containment initiatives, may discourage the states from adopting or expanding their use; (6) federal regulations provide that WIC program applicants who participate in the Food Stamp Program, Medicaid, and the Temporary Assistance for Needy Families Program automatically meet the income eligibility requirements of the WIC program; (7) the states use a variety of procedures to certify the income eligibility of the applicants who do not participate in these programs; (8) thirty-two of the 48 state WIC directors responding to GAO's questionnaire reported that their states generally require these applicants to provide documents, such as pay stubs and letters, to verify their income; (9) of the remaining 16 WIC directors, 14 reported that their states do not require documentation; (10) these states allow applicants to declare their income without providing supporting documentation; and (11) the other two directors reported that income documentation procedures are determined individually by the local WIC agencies.
ERISA, among other requirements, establishes the responsibilities of employee benefit plan decision makers (fiduciaries) and the requirements for disclosing and reporting plan fees. ERISA is designed to protect the rights and interests of participants and beneficiaries of employee benefit plans and to outline the responsibilities of the employers and administrators who sponsor and manage these plans. Under Titles I and IV of ERISA and the Internal Revenue Code (IRC), pension and other employee benefit plan administrators are required to file information annually on the financial condition and operations of the plan. The requirements for completing the Form 5500 vary according to the type of plan. If a company sponsors more than one plan, it must file a Form 5500 for each plan. Additionally, ERISA and the IRC provide for the assessment or imposition of penalties by Labor and the Internal Revenue Service (IRS) for plan sponsors not submitting the required information when due. There are various types of Form 5500 filers. Filers are classified as either single-employer plans, multiemployer plans, multiple-employer plans, or direct filing entities (DFE). In general, a separate Form 5500 must be filed for each plan or DFE. Single-employer plans are maintained by one employer or employee organization. Multiemployer plans are established pursuant to collectively bargained pension agreements negotiated between labor unions representing employees and two or more employers and are generally jointly administered by trustees from both labor and management. Multiple-employer plans are maintained by more than one employer and are typically established without collective bargaining agreements. DFEs are trusts, accounts, and other investment or insurance arrangements in which plans participate and that are required to or allowed to file the Form 5500. The Form 5500 was intended, in part, to measure employers’ compliance with ERISA’s fiduciary and funding provisions, among other requirements. It provides information about the financial condition of the plan, annual amounts contributed by participants, and the plan’s investment income. The form also provides information on plan characteristics, such as plan type (defined benefit or defined contribution), method of funding, and numbers of employees and participants as well as the number of employees who are excluded from the plan for various reasons. The Form 5500 is the principal source of information about employer- sponsored pension and welfare benefit plans that is available to Labor, IRS, and the Pension Benefit Guaranty Corporation (PBGC), and is the reporting vehicle for about 730,000 such plans. Accordingly, the Form 5500 constitutes an integral part of each agency’s enforcement, research, and policy formulation programs. It is also a source of information and data for use by other federal agencies, Congress, and the private sector in assessing employee benefit, tax, and economic trends and policies. The form also serves as a primary means by which plan operations can be monitored by participants, beneficiaries, and the general public. Labor, IRS, and PBGC jointly developed the Form 5500 so that employee benefit plans could satisfy (1) the provisions of the IRC that apply to tax- qualified pension plans and (2) the annual reporting requirements under ERISA. 
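Because a separate Form 5500 must be filed for each plan or DFE, the filer categories described above map onto a simple data structure. The sketch below is our own illustration of that classification and does not reflect Labor's actual filing schema or field names.

```python
# Hypothetical representation of the Form 5500 filer categories described above.
# This is an illustrative data structure, not Labor's actual filing schema.

from dataclasses import dataclass
from enum import Enum

class FilerType(Enum):
    SINGLE_EMPLOYER = "single-employer plan"
    MULTIEMPLOYER = "multiemployer plan"
    MULTIPLE_EMPLOYER = "multiple-employer plan"
    DFE = "direct filing entity"

@dataclass
class Form5500Filing:
    plan_name: str
    filer_type: FilerType
    plan_year: int
    participants: int
    defined_benefit: bool  # False indicates a defined contribution plan

# A separate Form 5500 is filed for each plan, so a sponsor with two plans files twice.
filings = [
    Form5500Filing("Acme 401(k) Plan", FilerType.SINGLE_EMPLOYER, 2009, 2500, False),
    Form5500Filing("Acme Pension Plan", FilerType.SINGLE_EMPLOYER, 2009, 1800, True),
]
for f in filings:
    print(f"{f.plan_name}: {f.filer_type.value}, {f.participants} participants")
```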
Labor enforces ERISA’s reporting and disclosure provisions and fiduciary responsibility standards, which, among other things, concern the type and extent of information provided to the federal government and plan participants and ensure that employee benefit plans are operated solely in the interests of plan participants. IRS enforces standards that relate to such matters as how employees become eligible to participate in benefit plans; how they become eligible to earn rights to benefits; and how much, at a minimum, employers must contribute. PBGC insures the benefits of participants in defined benefit private pension plans. Labor’s regulatory initiatives to expand disclosure requirements cover the following three distinct areas: (1) disclosures by plan sponsors to assist participants in making informed investment decisions; (2) disclosures by service providers to assist plan fiduciaries in assessing the reasonableness of provider compensation and potential conflicts of interest; and (3) more efficient, expanded fee and compensation disclosures to the government and the public through a substantially revised, electronically filed Form 5500 Annual Return/Report. Labor implemented the third initiative on expanding fee and compensation disclosures on the Form 5500—issuing regulations revising the Form 5500 in November 2007—in an effort to facilitate the transition to an electronic filing system; reduce and streamline annual reporting burdens; and update the annual reporting forms to reflect current issues, agency priorities, and new requirements under the Pension Protection Act of 2006. According to officials at Labor, these changes were made to increase transparency regarding the fees and expenses paid by employee benefit plans. Labor also wanted to ensure that plan officials obtain the information they need to assess the compensation paid for services rendered to the plan, taking into consideration revenue-sharing arrangements among plan service providers and potential conflicts of interest. For the 2009 plan year Form 5500, the new Schedule C requires plan sponsors to classify the fees they pay service providers as either “direct” or “indirect” compensation. As shown in table 1, fees are separated into those paid directly by the plan to a service provider and those received by a service provider indirectly from another service provider. Plan sponsors must also determine whether any indirect compensation is reportable (i.e., “ineligible” or “eligible” for exemption from Labor’s reporting requirements, as shown in table 2). Most indirect compensation starts out as having to be reported on the Schedule C. However, indirect compensation can readily become “eligible” indirect compensation (EIC). For indirect compensation to be EIC, and thus not reported on the Schedule C, the plan sponsor must receive written materials from the service provider that describe and disclose the following information: 1. the existence of indirect compensation, 2. services provided for this compensation, 3. formulas used to calculate the value of this compensation, 4. who received the compensation, and 5. who paid the compensation. When indirect compensation does qualify as “eligible,” sponsors have the option of using an alternative reporting format that, according to Labor, is simpler than the format that must be used to report ineligible indirect compensation. 
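One way to make the Schedule C treatment concrete is to encode the five written-disclosure conditions as a simple check, as in the sketch below; the field names are our own shorthand and have no regulatory standing.

```python
# Illustrative decision logic for the Schedule C treatment described above. An item
# of indirect compensation may be reported under the simpler alternative format only
# if the sponsor received written materials covering all five points; otherwise it is
# reported as other (ineligible) indirect compensation. Names are our own shorthand.

REQUIRED_DISCLOSURES = {
    "existence_of_compensation",
    "services_provided",
    "calculation_formula",
    "recipient",
    "payer",
}

def schedule_c_treatment(written_disclosures: set) -> str:
    if REQUIRED_DISCLOSURES <= written_disclosures:
        # The sponsor may elect the simpler alternative reporting format.
        return "eligible indirect compensation (alternative reporting available)"
    return "other indirect compensation (report amount or formula on Schedule C)"

print(schedule_c_treatment({"existence_of_compensation", "services_provided"}))
print(schedule_c_treatment(REQUIRED_DISCLOSURES))
```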
With the alternative reporting format, plan sponsors only have to disclose the name, address, and employer identification number of these service providers. Whether a plan sponsor receives the required written disclosures is the key to whether indirect compensation is reportable on the Schedule C. According to Labor, reporting indirect compensation as EIC is an option that the sponsor may choose instead of reporting under the rules applicable to other indirect compensation. Indirect compensation does not qualify as EIC if a service provider does not provide the required disclosures to the plan sponsor. In this case, the plan sponsor is required to report the available information from the service provider on the Schedule C, such as the identity of the service provider and the nature of the services provided. The plan sponsor is also required to list the service provider as having failed or refused to provide the necessary information. However, if the plan sponsor does receive information from the service provider upon request, the plan sponsor has the option of reporting the indirect compensation as EIC (i.e., reporting only that indirect compensation was paid and who provided the disclosure). Many of the fees and expenses associated with mutual fund investments are not explicitly reported on the Form 5500. According to a 2004 report by Labor’s ERISA Advisory Council Working Group on Plan Fees and Reporting on Form 5500, many 401(k) and 403(b) plans have moved toward using mutual funds as an investment option. With mutual funds, the plan service provider takes the investment management fees and expenses of managing the mutual fund directly from the mutual fund earnings, and these fees are not explicitly reported to plan sponsors. Without data on mutual funds, the largest component of most 401(k) retirement plans, Labor is unable to fully assess the impact of service provider fees on investment returns. In our November 2006 report, we recommended that Congress consider a statutory change with explicit disclosure requirements for service providers. Without such a change, we concluded, Labor will continue to lack comprehensive information on all fees being charged directly or indirectly to 401(k) plans. Figure 1 illustrates the disclosure of plan fee information from service providers to plan sponsors, which then report the fees to the federal government. The figure also shows that some fees are reported to the Securities and Exchange Commission (SEC), not to Labor. Additionally, many plan fiduciaries enter into bundled arrangements with other plan service providers for recordkeeping or other administrative services that typically do not entail explicit charges to the plan. In a “bundled arrangement,” plan service providers such as recordkeepers and trustees are often compensated for their services to the plan (1) through subtransfer agent fees, 12b-1 fees, or other administrative fees or (2) through what are called “revenue-sharing arrangements.” As a result, fees and expenses are not paid from plan assets, but rather from the expenses of one of the plan’s investments (e.g., a mutual fund’s operating expense, which is shared with the plan’s service provider). Even though Labor has provided guidance on its recent changes to the Form 5500 Schedule C, plan sponsors and service providers reported that they were unclear about Labor’s new reporting requirements.
Specifically, plan sponsors and experts told us that they have questions regarding the distinction between eligible and ineligible indirect compensation, and several said that they were unclear about what types of compensation qualified as EIC. A recent survey of service providers also reports confusion regarding compliance. An industry association representing service providers surveyed its membership, asking if sponsors and service providers understand Labor’s new Schedule C requirements enough to effectively comply. Although only a small number of members (19) responded to the survey, 74 percent of the respondents (14) reported that Labor has not provided sufficient guidance for providers to accurately determine what elements of compensation qualify as EIC. Plan sponsors and experts were also concerned about how much compensation should be disclosed. Figure 2 illustrates the potential difficulty. For example, a plan pays a recordkeeper direct compensation to administer the plan, which includes sending new participants a welcome packet about the plan. Part of the compensation that the recordkeeper receives goes toward paying a fulfillment vendor to make the welcome packets and send them to participants. The fulfillment vendor, in turn, pays a printer to print and collate the packets. As a result, there are multiple layers of payments involved, and sponsors and experts were unsure of how much of the indirect compensation they should be required to disclose. They were also concerned that the compensation would be reported multiple times on the Schedule C. For example, amounts paid by the recordkeeper to vendors would already be included in the overall amount paid by the plan sponsor to the recordkeeper. Concerns have also been raised about how to report noncash compensation. Sponsors and service providers said that they were uncertain about the new Schedule C requirement to report noncash compensation, which is also a type of indirect compensation. Sixty-three percent (12 of 19) of members who responded to the industry survey reported insufficient guidance on this issue. For example, one plan sponsor explained that he was not sure how he would handle, or whether he would even report, the noncash compensation benefit (food, entertainment, and making contacts) of attending a marketing event designed to facilitate future sales of ERISA plans. Respondents (service providers) in the industry survey were asked to imagine that their organization sponsored a similar event for customers and potential customers. Respondents were evenly split on how they would communicate the value of the benefit to attendees for purposes of Schedule C reporting. Some respondents believed the event would not be reportable, while others said they would provide its full value to all attendees and leave it to them to decide whether the event is reportable. Because service providers may have difficulty determining what elements of compensation qualify as EIC, different interpretations and reporting practices may ensue and could result in inconsistent and incomplete data being reported to Labor. For example, some sponsors may interpret certain compensation as reportable, while others may not, leaving Labor with incomplete information from some plans. In addition, since amounts categorized as EIC will not be reported, Labor will have no way of using these data to determine whether the amounts being paid by plans are reasonable and will be unable to compare these types of compensation across plans. 
According to Labor, not having to report amounts categorized as EIC is intended to simplify the annual reporting process and reduce the burden for plans and service providers for the types of indirect compensation that commenters said would be difficult and potentially expensive to allocate to individual plans. Industry experts and plan sponsors with whom we spoke said additional guidance on reporting indirect compensation may make it easier for plan sponsors to comply. Labor has posted a set of FAQs on its Web site regarding the changes that are specific to Schedule C reporting. However, industry officials with whom we spoke said that although these FAQs answered many questions, reading them has raised additional questions that have not yet been addressed. Labor officials told us that they were reviewing and prioritizing the additional questions they have received to develop further guidance. As filing deadlines for the 2009 plan year draw closer, sponsors and service providers have told us that they still have questions about the new Schedule C requirements and may need more time to comply. Labor has already noted on its Web site that there is flexibility regarding reporting for the 2009 plan year, stating that as long as sponsors receive a statement from their service providers that, despite a good-faith effort, they were unable to provide the newly required information, sponsors will not be required to report those service providers on the Schedule C. According to industry experts and plan sponsors with whom we spoke, the coordination of one of Labor’s other initiatives on fee disclosure with the current Form 5500 requirements could make it easier for plan sponsors to comply. Specifically, sponsors and service providers stressed the importance of Labor coordinating the new Form 5500 requirements, which govern reporting at the end of a plan year, with the finalization of its proposed rule on “up-front” service provider disclosure to plan sponsors. Labor has requirements that govern entering into a service agreement between a sponsor and a service provider, referred to as the 408(b)(2) requirements, and has proposed changes to them that have not yet been finalized. Consequently, the new Schedule C requirements, or “after-the-fact” disclosures, were finalized before the 408(b)(2) regulation, which governs “up-front” disclosures. Since plan sponsors report on the Schedule C an after-the-fact summary of the fees and expenses paid by their plans during the plan year, the information provided on this form is directly related to the information about fees and expenses that service providers will be required to disclose to plan sponsors under the 408(b)(2) requirements “up front,” at the beginning of a service relationship. In our discussions with Labor officials, they noted that the better scenario would have been publishing the finalized 408(b)(2) regulation before the new Form 5500 requirements and acknowledged the importance of coordinating the finalization of the proposed regulation with the Form 5500 requirements. The officials told us that the move to require electronic filing for the 2009 plan year led them to finalize the Schedule C requirements first. However, it is unclear when the 408(b)(2) regulation will be finalized, which is important given that the first Form 5500s will be filed under the new requirements in July 2010.
If the Schedule C requirements are not coordinated with the finalization of the proposed rule to change up-front disclosure, there could be competing sets of disclosure requirements for sponsors and service providers. Service providers had anticipated that the 408(b)(2) regulations would have already been finalized to coordinate with the changes to the Form 5500 to ensure they comply with both sets of rules at once. Without the coordination, service providers are in the position of potentially having to make expensive investments to update their data systems two separate times. In addition, coordinating the 408(b)(2) requirements with the Form 5500 requirements could help ensure that plan sponsors are meeting their fiduciary responsibilities when selecting or renewing a contract with a service provider. Labor officials told us that they do not have specific plans for using the data received as a result of the new requirements, and we found that even with the changes, the Form 5500 may not be useful to Labor, sponsors, or others. Labor officials told us that they will wait to see how the newly required information is reported before determining its use. In addition, because plan sponsors will be required to list any service provider who fails to or refuses to provide the necessary information on the Schedule C, Labor could potentially pursue listed service providers for enforcement action. However, it is unclear whether Labor has any plans to devote additional resources to follow up on service providers. In fact, for the 2009 plan year, plan sponsors will not be required to list service providers who fail to provide information if the service provider provides a statement that they made a good-faith effort to make any necessary recordkeeping and information system changes in a timely fashion. It is also unclear whether Labor has a plan to follow up with plan sponsors to ensure that they have received the newly required disclosures. Labor officials told us that the new requirements are meant to reinforce a fiduciary’s obligation of monitoring service providers and plan fees and also are intended as an “exercise in discipline” for the sponsor, since the sponsor will have to create a financial record to submit it to Labor. According to these officials, this will ensure that sponsors receive the information they need about indirect compensation paid to service providers. Labor’s efforts are also meant to effect a behavioral change in plan sponsors and service providers. According to Labor officials, the intent is for plan sponsors to understand and then obtain the information they need to determine the reasonableness of the fees they are paying. Since service providers often prepare the Form 5500 on behalf of plan sponsors, the regulatory changes are also designed to notify service providers that compensation information should be provided to sponsors. Still, Labor and others are concerned that some service providers, who may not feel bound by Labor’s reporting requirements, may neglect to list the indirect compensation they may have received. Finally, the Form 5500 continues to have limited use for Labor, sponsors, and participants. Labor. Despite Labor’s efforts with the new Form 5500 requirements, information on asset-based fees is still not explicitly required to be reported on the form. As we have previously reported, the form does not explicitly list all of the fees paid from plan assets. 
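To show why asset-based fees that are netted from a fund's returns never appear as an explicit plan expense, the sketch below converts an expense ratio into the dollar amount deducted from participants' balances; the plan size, gross return, and expense ratio are hypothetical.

```python
# Illustrative conversion of a mutual fund expense ratio into the dollar fees
# deducted from plan assets. All figures are hypothetical.

plan_assets_in_fund = 20_000_000  # 401(k) assets invested in the fund ($)
gross_return = 0.070              # fund return before expenses
expense_ratio = 0.0075            # 0.75 percent, taken out of fund earnings

net_return = gross_return - expense_ratio
fees_paid = plan_assets_in_fund * expense_ratio
print(f"Net return reported to participants: {net_return:.2%}")
print(f"Dollar fees netted from performance: ${fees_paid:,.0f}")
# The $150,000 deduction reduces returns but appears nowhere as an explicit,
# line-item expense of the plan.
```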
For example, plan sponsors were not required to report mutual fund investment fees to Labor, even though they received this information for each of the mutual funds they offered in the 401(k) plan in the form of a prospectus. While prospectuses are provided to SEC on a fund-by-fund basis, neither SEC nor Labor has readily available information to link individual fund information to the various 401(k) plans to which the funds may be offered as investment options. Furthermore, prospectuses provide fees as expense ratios, which are used as an intermediate step in calculating net rates of return, and, as such, the dollar amount of deductions from plan assets is not explicitly stated. Labor officials told us that asset-based fees are now required to be reported on the Schedule C. However, even with the changes made to the reporting of indirect compensation, plan sponsors may end up reporting only the presence of such fees on the Schedule C along with the identity of the service provider. Because these fees are already reported to SEC, the service provider must either (1) provide the plan sponsor with a document that identifies the documents already sent to SEC, with references to the pages or sections that contain the required information, or (2) determine whether the amount of the fund’s investment management fee that is allocable to the specific 401(k) plan is enough to be reportable, and then provide the dollar amount or a description of the formula used to calculate the amount of the compensation to the plan sponsor. The plan sponsor can then treat the asset-based fee or compensation as EIC and only report the identity of the service provider. Without information on all of the fees charged directly or indirectly to 401(k) plans, Labor is limited in its ability to identify fees that may be questionable. Labor officials told us that the changes to the Form 5500 were not meant to result in a comprehensive database of plan fees, because Labor did not want to put an undue burden on plan sponsors to comply with the new Form 5500 requirements. Labor asserts that the expansion of the Schedule C is already significant. In addition, Labor officials told us that because the Schedule C is only filled out by larger plans with more than 100 participants, the schedule is not a complete picture of the universe of plan fees. Sponsors. The Form 5500 may also not be the best vehicle for sponsors to assess service provider fee reasonableness or to understand business arrangements between service providers, since the form is filed long after the plan sponsor has already engaged the provider and selected the investment options. For the most part, the form is filled out for plan sponsors by service providers, who know the compensation arrangements and how to calculate the fees charged. According to industry experts, determining the dollar amounts to attach to ineligible indirect compensation is a source of confusion, because although some indirect compensation is straightforward, the calculation of other compensation is left to the best judgment of the service provider. Service providers may also choose to disclose only the formulas they use to determine ineligible indirect compensation, making it difficult for sponsors to assess fees or understand business arrangements. Participants.
Participant groups told us that plan sponsors who use the Form 5500 to inform participants are likely to overwhelm participants with the volume and detail required as a result of Labor’s new regulations. Both Labor and industry experts told us that the Schedule C is not designed to be used by participants because it does not provide them with an easy comparison of available investment options. Labor has recently made some effort to improve the Form 5500, specifically the Schedule C. However, the changes made seem unlikely to resolve the issues surrounding service provider disclosure to plan sponsors. Absent detailed guidance aimed at clarifying the indirect compensation reporting requirements, Labor is at risk of receiving inconsistent and incomparable information on the Schedule C. In addition, the new requirements currently give plan sponsors the option of not disclosing EIC on the Schedule C. If Labor allows certain indirect compensation to be deemed EIC, and therefore not to be reported, Labor will continue to have incomplete information on compensation received by service providers, and will be no better informed. Similarly, as long as asset-based fees netted from an investment fund’s performance are not required to be reported on the Form 5500, sponsors, participants, and Labor will not know the true costs of a plan. Requiring plan sponsors or administrators to report more complete information to Labor on fees—that is, those paid out of plan assets or by participants— puts the agency in a better position to effectively oversee defined contribution plans. Despite Labor’s intentions in changing the Form 5500 Schedule C, the way the regulations are currently written may not result in an increase in the amount of meaningful service provider compensation information reported to Labor. In addition, it is unclear whether plan sponsors will actually receive the information on service provider compensation that Labor believes is important for them to have. Because of the option to distinguish indirect compensation as either eligible or ineligible, service providers may choose to qualify their compensation as EIC and not provide their disclosures to plan sponsors. Meanwhile, Labor has also proposed regulatory changes that could eliminate some of the confusion surrounding 408(b)(2) disclosure requirements. However, it is unclear whether the final regulations will be coordinated with the existing changes to the Form 5500 reporting requirements. Coordinating these initiatives may reduce the burden and the cost to service providers and clarify for plan sponsors the information they need for the service provider selection and renewal processes. Finally, as we suggested in our November 2006 report, absent a statutory change with explicit requirements for service providers, Labor will continue to lack comprehensive information on all fees being charged directly or indirectly to 401(k) plans. To minimize the possibility that inconsistent and incomparable information will be reported on the Schedule C and to ensure that the data collected results in meaningful information for Labor, sponsors, and participants, we recommend that the Secretary of Labor take the following action: Provide additional guidance regarding the reporting of indirect compensation and require that all indirect compensation be disclosed on the Schedule C. 
Furthermore, consistent with our previous recommendation, to ensure comparable disclosure among all types of service providers and ensure that all investment products’ fees are fairly disclosed, we recommend that the Secretary of Labor take the following action: Require asset-based fees that are netted from an investment fund’s performance (and, as such, are not paid with plan assets) be explicitly reported on the Form 5500. To reduce the potential for additional costs and burden being placed on service providers, we recommend that the Secretary of Labor take the following action: Coordinate the implementation of the Form 5500 revisions with the publication of its final 408(b)(2) regulations, since the two initiatives are closely related. We provided a draft of this report to the Department of Labor (Labor). We received written comments from the Assistant Secretary for Employee Benefits Security Administration, which we reproduced in appendix I. Labor generally agreed with our recommendations. Labor also provided technical comments, which we have incorporated in this report where appropriate. Labor stated that it is committed to making the shift to the expanded Schedule C reporting requirements as smooth as possible, and that it has already engaged in substantial outreach on the new reporting requirements. Specifically, regarding our recommendation that Labor provide additional guidance on the reporting of indirect compensation, Labor stated that it has plans to continue its outreach efforts, including publishing additional Schedule C guidance. Regarding our recommendation that all indirect compensation be disclosed on the Schedule C and that asset-based fees be explicitly reported on the Form 5500, Labor explained that it had originally proposed that all indirect compensation charged against a plan’s investments be required to be reported on the Schedule C, without providing an alternative reporting option. However, Labor provided an alternative reporting option for eligible indirect compensation to plan fiduciaries on the basis of comments received on the proposed rule. Labor stated that the alternative reporting option would provide the department with enough information to engage in effective oversight activities. Labor also stated that once Schedule C reporting begins (for most plans, July 2010 and later), it will be able to evaluate the data it receives, taking into consideration our recommendation. Although Labor stated in its comments that its eligible indirect compensation reporting requirements are intended to help ensure fiduciaries are collecting information and evaluating service provider indirect compensation, we believe that it is also important for the indirect compensation information to be reported to Labor. As we stated in our report, we continue to believe that Labor will not receive enough information to engage in effective oversight activities. Finally, Labor stated that it agreed with our recommendation to coordinate the implementation of the Form 5500 regulations with the publication of its final 408(b)(2) regulation, and that it will continue to coordinate the two initiatives. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Labor, and other interested parties. 
In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff has any questions concerning this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. The following team members made key contributions to this report: Tamara Cross, Assistant Director; Monika Gomez, Analyst-in-Charge; Christopher Langford; James Bennett; Jessica Orr; Walter Vance; and Roger Thomas. Private Pensions: Conflicts of Interest Can Affect Defined Benefit and Defined Contribution Plans. GAO-09-503T. Washington, D.C.: March 24, 2009. Private Pensions: Fulfilling Fiduciary Obligations Can Present Challenges for 401(k) Plan Sponsors. GAO-08-774. Washington, D.C.: July 16, 2008. Private Pensions: GAO Survey of 401(k) Plan Sponsor Practices (GAO-08-870SP, July 2008), an E-supplement to GAO-08-774. GAO-08-870SP. Washington, D.C.: July 16, 2008. Private Pensions: Information That Sponsors and Participants Need to Understand 401(k) Plan Fees. GAO-08-222T. Washington, D.C.: October 30, 2007. Private Pensions: 401(k) Plan Participants and Sponsors Need Better Information on Fees. GAO-08-95T. Washington, D.C.: October 24, 2007. Defined Benefit Pensions: Conflicts of Interest Involving High Risk or Terminated Plans Pose Enforcement Challenges. GAO-07-703. Washington, D.C.: June 28, 2007. Private Pensions: Increased Reliance on 401(k) Plans Calls for Better Information on Fees. GAO-07-530T. Washington, D.C.: March 6, 2007. Private Pensions: Changes Needed to Provide 401(k) Plan Participants and the Department of Labor Better Information on Fees. GAO-07-21. Washington, D.C.: November 16, 2006. Private Pensions: Government Actions Could Improve the Timeliness and Content of Form 5500 Pension Information. GAO-05-491. Washington, D.C.: June 3, 2005.
The Department of Labor (Labor) collects information on fees charged to 401(k) plans primarily through its Form 5500. Labor issued final regulations in November 2007, making changes to, among other things, Schedule C of the Form 5500. Labor put emphasis on reporting the indirect compensation paid to service providers and between service providers, in an effort to capture all of the costs that plan sponsors incur. Congress and others are concerned that Labor's rules could result in duplicative and confusing reporting. Given these concerns, the Government Accountability Office (GAO) was asked to examine the new requirements and determine whether they will provide (1) clear and understandable guidance to plan sponsors and (2) useful information to Labor and others. GAO analyzed Labor's regulations and interviewed Labor and other officials about disclosure and reporting practices. Sponsors and service providers report confusion over Labor's new reporting requirements for the Form 5500 Schedule C and over how plan expenses are defined. Specifically, they have questions regarding the distinction between eligible and ineligible indirect compensation, that is, which types of indirect compensation must be reported on the Form 5500 (compensation that qualifies as "eligible" does not have to be reported). Labor's guidance on its Web site thus far has been limited and, according to sponsors and service providers GAO spoke with, has raised additional questions that remain unanswered. Specifically, Labor has not provided sufficient guidance for sponsors and providers to accurately determine what elements of compensation qualify as eligible indirect compensation (fees or expense reimbursements charged to investment funds and reflected in the value of the investment). Therefore, interpretations have been left up to sponsors and providers and may result in a range of reporting practices, causing Labor to receive inconsistent and incomplete data. In addition to the new Form 5500 requirements, Labor has proposed another regulation on service provider fee disclosure (its 408(b)(2) regulation), but it has not yet been finalized. Sponsors and service providers GAO talked with stressed the importance of coordinating this initiative with the new Form 5500 requirements. Doing so may reduce the burden and the cost to service providers of making changes to their data gathering and reporting systems and clarify for plan sponsors the information they need to understand and compare the fees charged by various service providers. In GAO's discussions with Labor officials, they agreed that there was a need to coordinate the two regulations, and said that although they are working to finalize the proposed 408(b)(2) regulation, it is uncertain when it will be published. Labor officials told GAO that they do not have specific plans for using the data received as a result of the new Form 5500 requirements and will wait to see what information is reported before deciding what to do with the data. Although Labor's new requirements are meant to ensure that plan sponsors obtain the information they need to assess the compensation paid to service providers for services rendered to the plan, the Form 5500 may not provide useful information to Labor and others. Because plan sponsors are likely to report indirect compensation in varying formats, it is unclear how Labor will be able to compare such data across plans.
In addition, GAO previously reported that the information provided to Labor on the Form 5500 has limited use for effectively overseeing fees paid by 401(k) plans because it does not explicitly list all of the fees paid from plan assets, yet these types of fees comprise the majority of fees in 401(k) plans. For example, plan sponsors are not required to explicitly report asset-based fees that are netted from an investment fund's performance, even though they receive this information for each of the mutual funds they offer in the 401(k) plan. Thus, despite the changes to the Form 5500, the new information provided may not be very useful to Labor, plan sponsors, and others.
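To make concrete why fees netted from fund performance escape explicit reporting, the short sketch below works through the arithmetic for a single hypothetical fund. The balance, expense ratio, and return are illustrative assumptions, not figures drawn from the report or from Labor's data.

# Illustrative sketch only: the balance, expense ratio, and return below are
# hypothetical assumptions, not figures from the report. The point is that a
# fee netted from an investment fund's performance never appears as an
# explicit dollar amount paid from plan assets.

def netted_fee_dollars(average_balance: float, expense_ratio: float) -> float:
    """Approximate annual asset-based fee deducted inside the fund."""
    return average_balance * expense_ratio

def net_return(gross_return: float, expense_ratio: float) -> float:
    """Participants and sponsors see only this net figure."""
    return gross_return - expense_ratio

if __name__ == "__main__":
    balance = 1_000_000.00  # hypothetical plan assets held in one fund
    ratio = 0.0075          # hypothetical 0.75 percent expense ratio
    gross = 0.07            # hypothetical 7 percent gross return

    print(f"Fee deducted inside the fund: ${netted_fee_dollars(balance, ratio):,.0f}")
    print(f"Return reported to the plan:  {net_return(gross, ratio):.2%}")
    # Roughly $7,500 is borne by participants through a lower return (6.25
    # percent rather than 7.00 percent), yet no explicit fee entry for that
    # amount appears on the Form 5500.

Reporting this implicit dollar amount explicitly, as recommended above, would let Labor and plan sponsors compare it across plans in the same way they compare fees that are billed directly.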
The 104th Congress is moving to make major changes in AFDC, the nation's largest cash assistance program for needy families with children. Under consideration are limiting the number of years that cash assistance may be received, capping benefit increases for mothers on welfare who have additional children, denying cash assistance to unwed mothers under 18 years old, and transforming AFDC from entitlement status to a block grant administered by states. Also under consideration are the type and extent of work requirements to be established. In 1988, the Congress created the JOBS program to transform AFDC into a transitional program geared toward helping parents become employed and avoid long-term welfare dependence. Under JOBS, states are to assess the needs and skills of AFDC recipients, prepare them for employment through education and training as needed, and place them in jobs. We reported earlier that while states have made progress in implementing JOBS, only a small percentage of the almost 4.6 million adults on AFDC participated in work-preparation activities in fiscal year 1993. Moreover, little is known about the JOBS program's progress in moving parents into employment and reducing their dependence on welfare. To help the Congress as it considers welfare reform, the Ranking Minority Member of the Senate Committee on Finance requested us to provide information on (1) examples of county or local programs that stressed job placement, subsidized employment, or work-experience positions for welfare recipients; (2) the extent to which county JOBS programs nationwide emphasized these employment-focused activities; and (3) factors that hinder program administrators' efforts to move welfare recipients into jobs. Through AFDC, the federal government and the states provide cash assistance to needy families with children who lack support from one or both parents because of death, absence, incapacity, or unemployment. As shown in figure 1.1, since 1970, the number of female-headed families, including those headed by women who have never married, has more than doubled, as has the number of families receiving AFDC. According to a Congressional Budget Office study, the growth in female-headed families, especially those headed by females who had never been married, accounted for about one-half of the sharp increase of 1.2 million in the number of AFDC families between 1989 and 1993 (Forecasting AFDC Caseloads, With an Emphasis on Economic Factors, Congressional Budget Office Staff Memorandum, Washington, D.C.: 1993, pp. 1, 3). In addition, the proportion of mothers receiving AFDC who had never been married has doubled, from 21 percent in 1976 to 52 percent in 1992. About 70 percent of families receiving benefits have 1 or 2 children. While most AFDC recipients are single mothers, these women are a diverse group, making use of the program in different ways. For example, one study has estimated the total time that those receiving AFDC at a point in time can be expected to receive benefits, as shown in figure 1.2. This analysis indicates that 9 percent of these recipients are using AFDC for only a short time—2 years or less. About 76 percent, however, are receiving AFDC benefits for a total of 5 years or more, when all moves on and off welfare are considered. According to these data, under a 5-year limit on receipt of cash assistance—a measure included in the House welfare reform bill—about three-fourths of those on AFDC may be expected to hit the time limit and need to support themselves through employment or other means.
This prospect poses a formidable challenge for many AFDC recipients who have limited education, job skills, and work experience. About 45 percent of all AFDC recipients, for example, have less than a high school diploma. Surveys of several thousand AFDC recipients expected to participate in JOBS in selected sites showed that at least one-third had extremely low literacy skills and between one-fourth and more than one-half lacked prior work experience. Over one-fourth thought they could not prepare for work because they or their family members had health or emotional problems. Such recipients are at risk of long-term welfare dependence. We reported previously that states have made some progress in working with some of these recipients but that many remain unserved. The JOBS program, begun in 1989, was designed to improve upon the performance of previous welfare-to-work programs and help combat long-term welfare dependence. Research studies conducted up to then showed that employment training programs for welfare recipients could have a positive but generally modest effect on increased earnings and reduced welfare costs. They also showed that programs that emphasized low-cost services, such as job search, generally did not help welfare recipients get higher-paying jobs than they would have without the programs or help the more disadvantaged. It was hoped that JOBS could improve upon previous programs' performance by reaching further into the AFDC caseload and providing more comprehensive services, including education and training, to help parents find jobs that would end their dependence on welfare. To this end, under JOBS, states are to (1) provide a broad range of education, training, and employment-related activities; (2) increase the number of AFDC recipients participating in these activities; (3) target resources to the hard-to-serve; and (4) provide support services, including child care, transportation, and other work-related expenses, as well as other support services, such as mental health counseling, if deemed necessary. To encourage states to work towards the federal goal of reducing welfare dependency, the Congress created minimum participation and targeting requirements that states must meet to receive their full share of federal funding. The minimum participation requirements rose from 7 percent of nonexempt AFDC recipients in fiscal year 1991 to 20 percent in fiscal year 1995. Under the targeting requirements, states must spend 55 percent of their JOBS funds on designated target groups. The Congress also expected that performance standards based on outcomes, such as increased employment and earnings and reduced welfare dependency, would be established after the initial implementation of the program. Outcome-related performance standards have not yet been established. (For more information on the current status of these performance standards, see p. 42.) While most states have met the minimum participation requirements, the number of AFDC recipients participating in JOBS remains limited. About one-half of the adults receiving AFDC have been exempted from JOBS, most often because they are caring for a young child. Of those considered nonexempt, states decide how many to serve in JOBS based on the availability of state resources. As shown in figure 1.3, the number participating in JOBS each month, while increasing, has remained limited for fiscal years 1991 through 1993. In 1993, about 11 percent of the 4.6 million adults receiving AFDC were active in JOBS activities each month.
Although some individual programs have succeeded in serving most of their nonexempt AFDC recipients, JOBS programs overall served only about one-fourth of the nonexempt population. Federal law requires JOBS programs to make an assessment of employability based on a participant’s educational, child care, and other support services needs; skills and work experiences; and family circumstances. The types of assessments used can range from 5-page surveys filled out by participants to comprehensive career-oriented assessments. If participants are considered job-ready when they enter the program, they may be required to look for work immediately without further employment preparation. Programs have varying criteria on when a participant is considered job-ready. While some local programs encourage all of their participants to look for work before being placed in education, training, or work-related activities, most require some minimum level of education, skills, or work experience before participants are expected to look for work. Within the federal JOBS guidelines, states and localities assess the needs of their JOBS participants, determine the type and intensity of services provided, and set the criteria by which participants are deemed job-ready. They also have discretion to establish the wage level and benefits associated with the employment goal established in the employability development plan. Some programs set wage goals as high as $8 per hour, while others believe that a job at any wage level is an appropriate goal. To help AFDC recipients move towards self-sufficiency, states rely on two federal funding sources. First, about $1 billion of federal JOBS funds has been made available annually in recent years for allocation to the states. States must then commit their own funds to JOBS to match these federal funds. In fiscal year 1993, states used about 70 percent of the federal JOBS funds available to them. Second, the federal government has provided an uncapped source of funds to share with states the costs of providing child care assistance to AFDC recipients in education or training programs or who are employed. In fiscal year 1993, the federal government provided about $1.2 billion of the almost $2 billion spent by states on JOBS and AFDC child care. HHS oversees the JOBS program at the federal level and state AFDC agencies supervise it. At the local level, JOBS is administered either by the state AFDC office or by county officials. Before using JOBS funds to purchase services for participants, programs must make full use of the services and resources available in their communities without charge to AFDC recipients. Programs may also contract with other organizations for services. As a result, programs rely heavily on a variety of community resources, such as Job Training Partnership Act (JTPA) agencies, adult basic education programs, high schools, the state employment service, Head Start, and community colleges. To identify welfare-to-work programs that strongly emphasize employment or work for their welfare recipients, through job-placement activities, subsidized employment, or work-experience positions, we reviewed welfare-to-work evaluations and HHS program data and contacted HHS officials and welfare experts. We then visited selected programs in Riverside County and San Jose, California; Athens, Ohio; and New York, New York. Also, in Charleston, West Virginia, we spoke with six JOBS officials representing 11 West Virginia counties. 
Where results from impact evaluations are available, they are included in the text; however, only two of the five programs have been rigorously evaluated to measure program effects. We also note that the program cost data cited may not be comparable among the different programs described. To determine the extent to which county JOBS programs nationwide used these employment-focused elements and to identify factors that hinder administrators' efforts to move AFDC recipients into employment, we collected and analyzed data from a range of sources. To obtain nationally representative data, we randomly sampled 453 of the nation's 3,141 counties and mailed questionnaires to their JOBS administrators in May 1994. The sample was stratified to ensure representation of the nation's central-city, suburban, and rural counties. It included the nation's 10 largest central-city counties, based on the number of female-headed families with children receiving public assistance in 1990. Our analysis of the questionnaire data generally showed few material differences among the responses of the counties comprising the 10 largest central cities, other central-city counties, suburban counties, or rural counties. Consequently, we present the results using combined data from all the strata. See appendix II for more information about our sample. The questionnaire and summaries of the responses are in appendix III. For more information on JOBS program implementation, we spoke with program administrators at HHS and the Department of Labor; representatives of the National Alliance of Business and the American Federation of State, County and Municipal Employees; and welfare experts. We also reviewed HHS and congressional welfare reform proposals and analyzed economic data provided by the Bureau of the Census and the Department of Commerce. In addition, we visited JOBS programs in Alameda, Napa, Santa Clara, and Sonoma Counties in California and in Franklin County, Ohio, and we gathered additional information at a meeting with JOBS administrators from 12 counties in the San Francisco area. We conducted our work between September 1993 and April 1995 in accordance with generally accepted government auditing standards. Some local welfare-to-work programs are well-focused on employment, working closely with employers to help participants find jobs or using subsidized employment or work experience to promote work for welfare recipients. We saw this in programs at five locations we visited: Riverside County and San Jose, California; New York, New York; Athens, Ohio; and West Virginia. The programs in these places vary in their costs per participant and other features. Yet they all focus on work as the ultimate goal, with three of the programs—in Riverside County, San Jose, and New York—working closely with employers to move participants into paid employment, and the Athens and West Virginia programs supporting work-experience positions when regular employment was not available. Table 2.1 summarizes selected program features and highlights important differences among the programs. For example, the Riverside County program is administered by a welfare agency and involves all of the county's JOBS participants. While welfare agencies also operate the Athens and West Virginia programs, their work-experience programs involve only a portion of their JOBS participants. The other programs are not JOBS programs and are not operated by welfare agencies.
A nonprofit organization operates the San Jose program, which serves welfare recipients among other individuals in the community. And a for-profit firm runs the New York City program under contract to the state welfare agency; it serves but a small fraction of the JOBS participants in the city. We also note that the Riverside County and San Jose programs have research-documented success in getting more AFDC recipients employed than would have occurred without the programs. A more detailed discussion of these programs follows. The Riverside County JOBS program stresses that its purpose is to place participants in jobs quickly. Researchers believe that this strong employment message may have been one of the key factors in producing results. Using an experimental design to evaluate JOBS programs in six California counties, researchers found that the Riverside County program increased the earnings of single AFDC parents by 49 percent and decreased welfare costs by 15 percent over 3 years. Results in the other five counties were about one-half that level. As shown in figure 2.1, the Riverside County program produced greater net gains than the other counties for both welfare recipients and government budgets, saving almost $3 for every $1 spent by the federal, state, and local governments. Moreover, long-term AFDC recipients, those with little education, and those more job-ready have benefited under Riverside’s approach. Researchers who studied the six California counties believe that Riverside County’s greater positive impacts may be due to a combination of program features. For example, the program had sufficient resources to make efforts to enroll all the AFDC recipients deemed mandatory for JOBS. In addition, it used the threat of reduced AFDC benefits for uncooperative participants to secure their participation in JOBS. In contrast with the other counties evaluated, Riverside also articulated a simple goal: participants are there to get a job and leave welfare as soon as possible. They are, therefore, encouraged to take any job offered, including low-wage jobs, part-time jobs, or jobs without benefits. To help participants get jobs, five full-time job developers provide direct access to employers and support the five JOBS offices that served about 2,000 active JOBS participants each month in 1993. The Riverside program also uses placement standards for its JOBS workers; case managers are expected to place at least 12 participants in employment each month. Stressing the importance of job search along with education in the basic skills of reading, writing, and math also appears to benefit Riverside. In Riverside and the other counties, new participants whose test results indicate that they need basic education have the option of entering the classroom immediately or attending 3 weeks of job search. However, Riverside’s orientation results in proportionately more of its participants being in job search than is the case in most other California counties studied. Also, Riverside encourages those participants in education and training to find jobs quickly. Staff closely monitor these participants and expect those not making progress to look for work. Riverside’s emphasis on short-term job search along with longer-term education may also account for its relatively low average cost of $3,000 per participant in 1993 dollars, compared with other California counties studied. 
Consistent with its emphasis on moving participants quickly into the work force, Riverside makes less use of basic, vocational, and postsecondary education than some other counties. For example, in Alameda County, which used education extensively, the 1993 average participant cost was $6,600. While the Riverside County results indicate that an emphasis on job placement, among other factors, is important, questions remain about what works best to help welfare recipients get jobs and earn enough to support their families. HHS has contracted with researchers to conduct experimental design studies to provide additional information on the cost effectiveness of higher-cost education and training programs compared with lower-cost programs that emphasize quick entry into jobs. While the Riverside program produced greater earnings increases and welfare savings than in the other counties, about 40 percent of its participants were still on AFDC 3 years after the study began, and many of those who did leave AFDC remained in poverty and possibly at risk of returning to welfare. Some of those that left AFDC may also have continued to receive other forms of public assistance, including Food Stamps and housing subsidies. The researchers also noted that it was not clear that Riverside’s program could be replicated or, if replicated, could produce similar results in other localities nationwide—for example, in inner cities where AFDC recipients may face greater barriers. In addition, while they concluded that the Riverside County results appear not to be fully explained by its local labor market conditions, they cautioned that similar results may not be possible in areas with very poor economic conditions, such as rural areas with high unemployment rates. The Center for Employment Training (CET), a nonprofit organization founded in 1967 and based in San Jose, California, represents another approach to promoting employment that has demonstrated positive results. CET contracts with job training and local welfare programs to provide job skills training, combined with remedial basic education, when needed. Using an experimental design, researchers found that this program increased employment and earnings for minority female single parents on or at risk of becoming dependent on AFDC who volunteered for training. The research study noted that at the end of 12 months, 46 percent of CET participants were working compared with 36 percent of a control group and participants earned 47 percent more on average than the control group. These results were also greater than those for other sites in the study. To help its participants get higher-wage jobs with a potential for upward mobility, CET offers job skills training in a range of occupations for which employers have demonstrated consistent demand. About 28 courses are offered, including child care provider, automated office skills, home health aide, commercial food service, and electronic assembly. Remedial education is integrated into the job skills training curriculum for participants who have basic skill deficiencies, rather than being offered separately. Researchers who have studied the CET program believe that its strong focus on employment and integrated training design are important features. The employment focus is evident in several CET activities. CET’s full-time job developers make contact with employers in the community and meet with participants who are nearing completion of training to help them find appropriate work. 
The job developers are assisted in their placement efforts by CET’s vocational instructors, who maintain close contacts with local employers. CET also has an industrial advisory board, composed of employers, that meets monthly to provide advice on the types of training equipment to be used and other issues to ensure that the training offered meets the needs of employers. Board members also conduct mock job interviews with participants. One employer we spoke with, manager of a local sheet metal fabrication company, emphasized that his company relies heavily on CET graduates. He believes that this saves him advertising and other hiring costs and guarantees him well-prepared workers. At the time of our visit, he was planning to open a company cafeteria to be staffed with CET graduates. Another key feature, integrated training, provides basic education in a practical context. Participants lacking basic educational skills are entered into job skills training immediately to help maintain their motivation and focus on work. Because basic education is provided within the skills training class itself, participants appear more likely to accept the remedial help and to succeed. Participants attend classes during the normal work week in a setting designed to simulate the workplace, using the tools of their trade under the guidance of instructors with recent industry experience. Individualized instruction allows new participants to enter class on the first day of any week of the year, to proceed at their own pace, and to leave as soon as they have demonstrated the necessary competencies. Training courses average 6 months in length and cost about $6,000 to $7,000 per participant. Another example of a work-focused program is seen in New York City. There, the welfare agency, as part of its work-supplementation program, contracts with a private for-profit firm called America Works. America Works quickly prepares JOBS participants for employment, places them in jobs, and provides counseling and support to ease their transition to work. Staff and resources are devoted to working with employers and supporting clients after job placement to help alleviate any personal problems that may arise and threaten their ability to continue to work. America Works emphasizes the development of good work habits and skills required for entry-level jobs during the short training period it provides participants. Specifically, participants are urged to demonstrate punctuality, reliability, appropriate professional dress and demeanor, a constructive and cooperative attitude, and an ability to get along with others in a work environment. Participants attend a week-long pre-employment class and 6 weeks of business laboratory where they use self-paced computer-assisted office skills programs. Tardiness and absences may result in suspension from the program. Participants who complete the business laboratory are placed by the firm’s job developers with private employers for 4 months of supported work, during which time they are on the payroll of America Works. The America Works payroll in New York City is supported by AFDC grant funds and funds from employers. Upon placement, participants are provided a support system, whereby America Works staff help participants with personal problems, such as creditor or landlord disputes, that interfere with their ability to work. America Works staff believe that their support system for participants who have been newly placed in jobs is key to keeping many of their participants employed. 
According to data compiled by the New York State welfare agency, about 65 percent of participants in supported work are ultimately hired by the private employers with whom they have been placed. America Works receives about $5,300 from the state's welfare agency when an AFDC recipient enrolled in America Works remains employed and off AFDC for at least 7 months. Unlike the Riverside and CET programs, the America Works program has not been evaluated against a comparison or control group to determine whether its outcomes were due to the program. Some of the America Works participants might have found jobs on their own, especially because many of them were motivated volunteers. While the program's design screens out those not motivated, the program does work with many long-term welfare recipients with low levels of education. The typical participant is an adult female head of household who has been on AFDC for an average of 5 years. Also, the typical participant in America Works has volunteered for the program, has a sporadic history of minimum-wage jobs, and can read and write well enough to complete a brief application. Applicants who need remedial basic education or English language training are referred to other community providers. About one-half of the participants have not completed high school. America Works officials believe that reaching out to employers and responding to their needs is a prime program goal. They noted that employers who take on America Works participants save on placement agency fees as well as costs of advertising for and screening job applicants. In addition, they obtain workers at reduced wage and benefit costs initially, and with lower turnover and related costs. America Works guarantees that employers will be satisfied with participants placed with them or replacements will be found. About 60 percent of the jobs that America Works staff develop are the result of repeat business with satisfied employers. JOBS programs in Athens, Ohio, and West Virginia reveal a different kind of work focus, typified by placing participants in community work-experience positions with public and nonprofit agencies. Welfare officials at the sites we visited indicated that having AFDC recipients perform community service can benefit their communities, in addition to developing participants' work habits and providing work experience that may lead to paid employment. The JOBS program in Athens County, Ohio, uses work-experience positions to increase the confidence and competency of participants, and in some cases these positions lead to permanent employment. The county's welfare agency is the largest user of work-experience participants, many of whom are subsequently transferred to the county's payroll and leave welfare. One office unit within the welfare agency is staffed primarily by work-experience participants, and an estimated three-fourths of the welfare agency's personnel consist of former welfare recipients. West Virginia, where unemployment rates are among the highest in the nation, uses community work experience extensively to develop and maintain work habits among its JOBS participants. This involves work for various public or nonprofit organizations. Since the 1980s, West Virginia's welfare-to-work program has promoted the idea that AFDC recipients should contribute to their communities in exchange for their benefits, and work for such organizations has been used to promote work among AFDC recipients, especially men.
The state has made greater use of community work experience than most other states, with about 2,500 AFDC recipients enrolled in June 1994, mostly at government agencies but also at nonprofit agencies. Participants often work for an average of 62 hours a month, putting in full 40-hour weeks for some part of the month or part-time hours throughout the month. Single parents with young school-age children, for example, may work during the 6 hours of a normal school day and care for their children at home the remainder of the day, thus saving on child care expenses. West Virginia administrators we spoke with noted that much time, effort, and resources must be devoted to operate a work-experience program. Major work-experience program expenses involve intensive use of JOBS staff to arrange for jobs with employers, screen and match participants to available jobs, and provide follow-up support. JOBS case managers check monthly timesheets and ask to be called if problems arise at the workplace. They rarely visit worksites, however, because they average caseloads of 300 to 400 participants. Based on experimental design studies of the use of work experience in several sites in the 1980s, including some in West Virginia, researchers have concluded that unpaid work experience alone does not increase paid employment, earnings, or welfare savings. However, they also found that these programs could produce benefits for taxpayers through the work performed by welfare recipients. In addition, program administrators and welfare recipients involved generally thought that they had performed meaningful work, although the participants said that they would have preferred to work in paid positions. Based on their review, the researchers estimate that the annual cost of a work-experience position in 1993 dollars would range from $2,000 to $4,000, excluding the AFDC benefit and child care costs. While some county and local organizations have forged links with employers to promote work for welfare recipients, these programs are more the exception than the rule across the nation. A majority of county JOBS programs do not work closely with employers to help their participants find work. Administrators and researchers cited many factors that hinder efforts to find or create employment for welfare recipients, including insufficient staff and resources and poor labor market conditions. In addition, we found that the federal JOBS participation requirements emphasizing the enrollment of eligible persons into JOBS programs without an emphasis on the graduation of enrollees into employment provide programs little incentive to redirect their resources to job-placement efforts. Most programs do not fully use the tools available to help move participants quickly into work. This is demonstrated by the limited emphasis on job development, work incentives, and work activities, including subsidized employment or work experience. Although job development is a potentially important tool for moving JOBS participants into employment, about one-half of the nation’s county JOBS administrators believe that they are not doing enough job development to help JOBS participants find work. In addition to preparing AFDC recipients for employment through education and training, JOBS programs are required to engage in job development to help participants secure jobs. Program officials may also work with employers to identify the types of education and training needed for participants to meet employers’ needs. 
These job-development activities can play an important role in making JOBS programs more responsive to their local labor markets. While almost all county JOBS programs perform some job-development activities, in most, their job-development resources are limited. We found that JOBS programs rely on a variety of local agencies and organizations, such as JTPA, the Employment Service, and education providers, to perform job-development activities for JOBS participants. While other organizations are involved in helping JOBS participants find work, in most counties, the welfare agency itself takes the lead in job development. However, about one-third of the nation's programs have no full- or part-time staff dedicated to job-development activities. And while caseworkers may also perform job-development activities, we found that they devote little time to working with employers. More than three-fourths of all JOBS administrators report that caseworkers devote 20 percent or less of their time to job development. In many programs, the extent of job development performed on behalf of JOBS participants is limited and may not meet the needs of the job-ready looking for work. For example, about 60 percent of the nation's JOBS programs or their contractors arranged job interviews for or marketed to employers only some or few of their job-ready participants. Moreover, at least 46 percent reported that the program or its contractors worked with each of the following only sometimes or rarely: public employers, private-sector employers, the Chamber of Commerce, or other employer associations. Local administrators themselves also believe that job development is underutilized in JOBS programs. A majority of administrators believe that they did not conduct enough job-development and job-placement activities to meet the needs of their JOBS participants, as illustrated in figure 3.1. Furthermore, a 1994 study of JOBS implementation in 30 localities in 10 states also noted that job-development and job-placement activities are underutilized in JOBS programs. Many JOBS programs nationwide do not make all participants aware of some important incentives to seek employment. To encourage work, the AFDC program provides some assistance to recipients who become employed by temporarily disregarding part of their earnings, including some of those expended for child care, in calculating their AFDC benefits. These income and child care disregards allow AFDC recipients who go to work to avoid the cutback in benefits that would ordinarily result from an increase in earnings. In addition, to further ease the transition to employment, AFDC recipients who earn enough to leave the welfare rolls are eligible for 1 year of child care subsidies if needed and continued Medicaid coverage. Other assistance may be available after AFDC recipients leave the welfare rolls. When the 1 year of transitional Medicaid coverage is exhausted, the children of AFDC recipients may still be covered due to recent changes in Medicaid coverage for all children in families below the poverty line. And the recently expanded Earned Income Tax Credit (EITC) will increase some low-wage workers' incomes by up to 40 percent. These federal supports can increase the attractiveness of low-wage work. However, many JOBS programs do not inform all their participants of the work incentives that may be available to them.
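As a rough illustration of the incentive the EITC can create, the sketch below applies the 40 percent figure cited above to a hypothetical level of annual earnings. It is deliberately simplified: it ignores the credit's phase-out range, the AFDC earnings disregards, and interactions with other benefits.

# Simplified, illustrative arithmetic for the EITC work incentive discussed
# above. The 40 percent rate is the maximum increase the report cites; the
# earnings figure is hypothetical, and phase-outs, AFDC earnings disregards,
# and other benefit interactions are ignored.

EITC_MAX_RATE = 0.40  # maximum boost to earnings cited in the report

def income_with_eitc(annual_earnings: float) -> float:
    """Earnings plus a credit at the full rate (no phase-out modeled)."""
    return annual_earnings * (1 + EITC_MAX_RATE)

if __name__ == "__main__":
    earnings = 8_000.00  # hypothetical annual earnings from a low-wage job
    print(f"Earnings alone:     ${earnings:,.0f}")
    print(f"Earnings plus EITC: ${income_with_eitc(earnings):,.0f}")  # $11,200
    # A sample budget worked through with a participant along these lines
    # shows concretely how much the credit raises the return to taking a job.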
Based on our survey, from 67 to 84 percent of county JOBS programs inform all or almost all of their participants about each of the following: the availability of transitional child care, transitional Medicaid, AFDC income disregards, and child care disregards. However, only about one-half of the nation's JOBS programs inform all or almost all their participants of the EITC. While we identified about 18 percent of the programs that worked with all or almost all their participants to develop a sample budget demonstrating the benefits available to them when working, about 60 percent of the nation's JOBS programs reported that they do so for one-half or fewer of their participants. These findings are consistent with other studies showing that those on welfare, as well as welfare and JOBS caseworkers, may not be aware of or understand work incentives. One study of a sample of 30 women in Chicago concluded that the EITC may not provide an incentive to work because few recipients have a clear understanding of how it operates. Another study of welfare administrators found that many did not know that Medicaid coverage was available for certain children in families with incomes up to or, in some cases, beyond the federal poverty line. Almost all JOBS programs encourage participants to engage in job search activities at some point in their enrollment in JOBS, but many job-ready participants do not become employed for a variety of reasons. For JOBS participants who cannot find regular employment, local JOBS programs have the option of using cash wage subsidies to encourage employers to hire them into on-the-job training or work-supplementation programs. Another option is to place participants in work-experience programs. For example, as discussed in chapter 2, West Virginia has used its community work-experience program to promote work among its welfare recipients when jobs were not available. Yet the use of work activities is limited, even though about 70 percent of the administrators reported that one-half or fewer of their job-ready participants became employed during their most recent program year. The distribution of counties according to their placement rates is shown in figure 3.2. The limited extent of work activities is seen in the following numbers: nationwide in mid-1994, of about 586,600 JOBS participants each month, about 59,000 were in work-experience programs, 3,000 were in on-the-job training, and 1,000 were in work-supplementation programs. As shown in figure 3.3, these work activities were little used compared with other JOBS activities. Moreover, more than 80 percent of the nation's counties have no experience operating work-supplementation programs and almost 50 percent have no experience in on-the-job training. This demonstrates that counties will face a major challenge in supporting the work programs called for in some welfare reform proposals. For example, H.R. 4 requires states to provide work activities for an increasing percentage of those receiving cash assistance or face penalties of up to 5 percent of the state's block grant. In 1996, states would have to involve 10 percent of all families in work activities, with the requirement rising to 50 percent by 2003. And the administration's proposal before the 103rd Congress called for those young mothers who do not find unsubsidized employment after 24 months of receiving AFDC to be placed in subsidized minimum-wage jobs.
The House bill and the administration’s proposal place a much greater emphasis on work activities than current law. Under both of these proposals, welfare agencies will need to work with many welfare recipients who cannot find jobs on their own. Attention will have to be paid to preparing these recipients for the workplace, because administrators we spoke with emphasized the importance of screening and selecting able and motivated participants to place with employers to maintain employer interest in participating in the programs. This is consistent with our survey results showing that in most counties the typical JOBS participant enrolled in on-the-job training or work supplementation has at least 1 year of previous work experience and high levels of motivation. Also, in most counties, participants in these work activities tended to be more educated than JOBS participants in general. While work activities are little used in JOBS, most administrators believe that they are effective tools that warrant expansion. Of the relatively small number of JOBS administrators currently using work supplementation, 70 percent rated it moderately or highly effective in moving AFDC recipients off welfare and 83 percent wanted to expand their use of it. Of those using on-the-job training, 72 percent thought it at least moderately effective in moving individuals off welfare and 88 percent expressed interest in expanding its use. Almost all counties used work experience, with 76 percent rating it as effective and 84 percent wishing to expand its use. In sites we visited, JOBS participants had been placed with a range of employers and other community organizations. They performed community service work with a county planning office, the Indian Health Service, and a community food bank. In addition, through the work-supplementation program, participants had found jobs at a car dealership, a large health care provider, and a small doctor’s office. In one site, the work-supplementation program helped refugees receiving AFDC gain employment at worksites where they could improve their English-language skills. According to the program supervisor, some of the refugees had been in English as a Second Language classes for several years but had not progressed to employment. Of those program administrators not currently using on-the-job training, about 32 percent believed it to be moderately or highly effective in moving recipients off AFDC and about three-fourths supported expansion. At least one-half of the administrators without work-supplementation programs also wanted to develop or expand these programs, although they were less sure about the effectiveness of such programs. Evaluations of on-the-job training and work-supplementation programs have shown positive results in terms of increased employment and earnings for welfare recipients, but did not conclude that the programs produced welfare savings. As discussed in chapter 2, evaluations of work-experience programs have shown that they offer productive work for participants and benefits to taxpayers, but do not generally produce increased earnings, employment rates, or welfare savings. While JOBS administrators acknowledged that they did not work enough with employers to help participants find jobs, they identified several administrative and programmatic factors that hindered their efforts. Further, administrators and researchers identified certain labor market conditions that hinder efforts to place AFDC recipients in jobs. 
Most administrators reported that insufficient staff hindered their efforts to work with employers to place JOBS participants in unsubsidized jobs or work activities. Local program administrators, researchers, and HHS officials have noted that working with employers to find job openings or to create and maintain work-activity positions requires a lot of time and effort on the part of JOBS workers. For example, to operate work-supplementation programs, AFDC grant dollars must be diverted to employers to subsidize wages. Many administrators believe that it is difficult to develop and administer a tracking system to operate such a program. In addition, staff must market their programs to employers and sometimes visit worksites to maintain contact or monitor operations. Economies may be achieved if many participants are placed at a single worksite, but we found that generally only one or two participants are placed with each employer. Administrators believe that they need more staff to work with employers because current JOBS staff and resources are mainly devoted to participant intake and management of often heavy caseloads. According to HHS, JOBS caseloads range from 30 to 400 participants per worker. Administrators we met explained that expansion of job-development and work activities would necessitate shifting current staff from intake and case management functions. They also noted that hiring additional staff is not an option where budgets are constrained. While in some cases resource constraints may limit the number of JOBS staff, they may also affect administrators' and caseworkers' decisions about the activities in which they enroll participants. The study of JOBS programs in 10 states referred to earlier noted that the availability of education, training, and employment-related activities tends to drive the placement of participants. For example, as a result of resource constraints, programs would often place participants in activities that were readily available or free of charge rather than create or purchase services that were deemed needed by participants. We also found that funding constraints limited the use of on-the-job training. About one-half of the JOBS administrators cited insufficient funds and one-third cited the high costs of on-the-job training compared with other JOBS activities as a major or moderate hindrance to its expansion. On-the-job training is sometimes more costly to a JOBS program than other activities because many of the educational or other activities in which participants are placed are funded by other providers or programs and do not require expenditures of JOBS funds. For example, a JOBS program may not pay for adult basic education or college courses funded by federal, state, or county providers. Funding constraints also hinder the use of work supplementation, even though this form of employer subsidy is funded by AFDC grants instead of JOBS funds. An official in Texas told us that in states with low AFDC grants, the amount of money that can be diverted to the employer is not sufficient for a wage subsidy. For example, the average AFDC grant in Texas equals $159 a month, providing few dollars to subsidize wages. In 1994, 31 state welfare agencies decided not to include work supplementation in the state JOBS plans they must submit to HHS for approval. As a result, the local programs in these states were barred from operating work-supplementation programs.
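The arithmetic behind the Texas example can be sketched as follows. The $159 monthly grant comes from the passage above; the $4.25 hourly minimum wage (the federal rate then in effect) and the assumption of roughly 170 hours in a full-time month are illustrative inputs, not figures from the report.

# Rough illustration of why a low monthly AFDC grant yields little wage
# subsidy under work supplementation. The $159 grant is the Texas figure
# cited above; the $4.25 minimum wage and 170-hour full-time month are
# assumptions used only for scale.

monthly_grant = 159.00   # average monthly AFDC grant in Texas (from the report)
minimum_wage = 4.25      # assumed federal minimum wage at the time, per hour
full_time_hours = 170    # assumed hours in a full-time work month

full_time_wages = minimum_wage * full_time_hours    # about $722.50
subsidy_share = monthly_grant / full_time_wages     # about 22 percent
hours_covered = monthly_grant / minimum_wage         # about 37 hours

print(f"Full-time minimum-wage payroll:        ${full_time_wages:,.2f}")
print(f"Share the diverted grant could cover:  {subsidy_share:.0%}")
print(f"Hours of wages the grant alone covers: {hours_covered:.0f}")

Even before administrative costs, the diverted grant in this sketch would offset only about a fifth of a single full-time minimum-wage position, which is consistent with the administrators' view that low-grant states have little to offer employers as a subsidy.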
The current federal JOBS participation and targeting requirements provide little incentive for states to redirect scarce resources to increase their focus on moving AFDC recipients into employment. The JOBS performance measurement system is process-oriented, based on the numbers and types of participants enrolled in activities, and does not include outcome measures, such as the portion of participants who become employed and leave welfare. While the participation requirements have played an important role in encouraging states to serve more participants, including the hard-to-serve, the ultimate goal of JOBS is to increase employment and reduce welfare dependence. Yet states are not required by HHS to report the total number of JOBS participants who find jobs each year and are not held accountable for the number of JOBS participants who become employed. Some program administrators and researchers have noted that programs can meet federal participation requirements by placing participants in readily available JOBS activities more easily and with less cost to their programs than finding them unsubsidized jobs or creating subsidized employment. Because program administrators can meet federal requirements without redirecting scarce resources to focus more on employment, they have little incentive to do so. JOBS programs may, therefore, emphasize getting clients into program activities without also focusing on establishing links with employers to realize the ultimate goal of employment. For example, at one site we visited, a woman had successfully completed several different training programs. Under the current performance system, this individual helps the program meet the federal requirements to receive its full share of federal funding. Yet she remained unemployed and on AFDC. Labor market realities also pose a range of problems for JOBS administrators as they attempt to move AFDC recipients into the workplace. Several factors are important in this regard. Administrators and research studies cite high unemployment and low job growth as hindering programs’ efforts to get jobs for participants. Nearly three-fourths of local JOBS administrators identify current labor market conditions, which are outside their control, as a hindrance to their job-development efforts. Many counties operate JOBS in areas of high unemployment or negligible job growth. For example, in 1993, unemployment rates reached 8 percent or more in 30 percent of the nation’s counties; job growth was 1.5 percent or less in one-half the nation’s counties and negative in about one-third of the counties. While some research has shown that the outlook for job growth nationwide over the next few years is encouraging, in specific locations the number of job openings may not meet local needs. For example, a May 1993 survey of Milwaukee area employers identified about 12,000 full-time job openings, which represented only 20 percent of the jobs needed for the estimated 63,000 welfare recipients and unemployed persons seeking or expected to work. When part-time jobs were included, the number of available jobs represented 35 percent of the total jobs needed. Likewise, JOBS officials in Silicon Valley in California, where many once-booming high-tech computer companies are located, and other areas in California believe that their JOBS participants and staff acting on their behalf operate at a distinct disadvantage because of the increase in competition for positions in general. 
They noted that they must operate their programs in areas where employers are often faced with a surplus of job applicants, especially for relatively unskilled, entry-level positions. Administrators we surveyed and spoke with emphasized that lack of employer interest also hindered the expansion of work activities. Administrators cited as one contributing factor a federal displacement restriction. Under work-supplementation and work-experience programs, participants may only be placed in positions newly created by employers—not positions that become vacant due to turnover. This prohibition is intended to protect workers from being displaced through layoffs and replaced by federally subsidized JOBS participants. About three-fourths of the administrators operating work-supplementation programs reported that this restriction hindered expansion of their programs and about 46 percent of all administrators said that they probably or definitely would like to use work supplementation for existing positions also. In addition, work-experience positions are restricted to sponsors who serve a public purpose, another restriction that about 72 percent of administrators would like to see changed so that they would have a larger number and a greater variety of employers with which to place participants to help them gain work experience. Like work-supplementation positions, work-experience positions are also subject to displacement restrictions. While most administrators did not believe that the displacement restriction was currently a factor hindering expansion, about one-half supported placing work-experience participants in existing positions. Administrators we spoke with thought that other workers and individuals could be protected without restricting work programs to new positions only. Local administrators also cited other reasons. For example, for on-the-job training, the JOBS program and employers must generally enter into contracts covering the employment of participants, maintain timekeeping and payroll records subject to audit, develop individual training plans, establish qualitative measures of success, and assess the progress of participants in acquiring job skills. Some employers may feel that the wage subsidy they receive—up to one-half of participants' wages when training is completed—does not adequately compensate them for any extra work they must do. In a work-experience program, while a participating employer gets an unpaid worker, the employer is not compensated for any supervision costs involved. Administrators we met cautioned that the number of available supervisors among employers places an upper limit on the expansion of work experience. Administrators also cited employer concerns about welfare recipients being unprepared for work. Employers' perceptions may be skewed by unfavorable stereotypes or unsuccessful prior experiences. One study of a welfare-to-work program in an inner-city neighborhood noted that many of the participants who found jobs had problems keeping them for various reasons, including chronic lateness and misunderstandings with supervisors. To overcome these perceptions and problems, program administrators told us that they often select their most capable participants for work activities. While the lack of jobs is a problem in many areas, the low-wage work that is available to many AFDC recipients discourages their movement off AFDC. Our work in 1991 demonstrated that many single mothers will remain near or below the poverty line even if they work at full-time jobs.
More recently, we found that in 1993 the typical single mother with a low-wage job had more income than a comparable mother and family on AFDC, but was nevertheless still in poverty. Moreover, a low-wage worker may incur significant job-related costs, such as child care, which could make her family worse off financially than some AFDC families. In addition, employment or increased earnings may affect her receipt of other forms of assistance. For example, the previously cited survey of several thousand AFDC recipients found that 60 percent of the respondents in Atlanta lived in public housing projects or other subsidized housing. As a result, their incentive to find jobs may be affected because increased earnings may cause them to incur significantly increased housing costs. The belief and often the reality that a poor single mother can better provide for her family by being on welfare than by working at a low-wage job plays a critical role in discouraging AFDC recipients from looking for and accepting employment. As figure 3.4 shows, about three-fourths of the JOBS administrators cited the lack of jobs with sufficient wages and benefits as a moderate or major reason that their job-ready clients did not become employed. About 70 percent of administrators also noted that their participants did not become employed because of concerns about losing their AFDC benefits, Medicaid, or housing subsidies. By comparison, about one-half of administrators cited the lack of jobs as a major or moderate reason. One study found that 55 of 69 randomly selected current and former AFDC recipients interviewed in Tennessee and North Carolina said that they were not likely to accept a minimum-wage job that did not provide health insurance for them and their children. Most of the 55 thought that health insurance was a necessity and others said that they could not support their families with a minimum-wage job. Concerns about participants’ abilities to support their families may affect the attitudes of administrators and staff in promoting employment as the ultimate program goal. For example, we found that while about 60 percent of local administrators said that they would definitely encourage a 30-year-old JOBS participant with one child to accept a minimum-wage job with health insurance benefits, only 26 percent would definitely encourage her to accept such a job without health benefits. Recent studies of labor market conditions and the characteristics of welfare recipients indicate that employment training strategies to improve the earnings capacities of welfare recipients through education and training may not lead to earnings increases great enough to allow single parents to support themselves with their own earnings. These studies demonstrate that the supports available to low-wage workers, for example, the EITC, expanded Medicaid coverage, child support payments, and child care subsidies, play an important role in helping families get jobs and remain employed. Our recent work on child care subsidies indicates that assistance with child care has a large effect on the likelihood that poor women will work. Thus, subsidies may help welfare recipients become employed and remain off the welfare rolls. The 104th Congress proposes to fundamentally change AFDC—the nation’s largest cash assistance program for poor families with children. 
While there is general agreement that reforms should promote work, the Congress is considering the type and extent of work requirements to be linked to the receipt of cash assistance. Whether AFDC continues as an entitlement program or is converted into a block grant, program administrators at the county and local levels will be concerned with moving large numbers of welfare recipients into employment. Our work highlights examples of programs that are well-focused on the ultimate goal of employment—stressing the importance of work for their participants and forging links with employers to identify jobs or create work opportunities where none is available. However, these programs appear more the exception than the rule. Most programs appear to emphasize preparing participants for employment without also making strong efforts to help place their participants in jobs. While we acknowledge that some administrators face factors beyond their control that may limit program choices, including budget constraints and a lack of jobs, other programs facing similar constraints have taken steps that promote work more strongly for their participants. These steps include focusing staff and participants on the importance of employment, working more closely with employers to identify job openings, determining employers’ needs, and helping match recipients’ education and training activities to labor market demands. Even programs that are well-focused on moving AFDC recipients into employment have faced challenges, however. For example, the Riverside County program strongly emphasized moving recipients quickly into jobs; yet after 3 years, about 40 percent of its participants remained on AFDC. Many who became employed remained on AFDC or, if off AFDC, continued to receive other forms of public aid, including Food Stamps or federal housing assistance. And some of those who left AFDC remained in poverty and at risk of returning to AFDC. In those cases where unsubsidized employment is not available or the characteristics of participants do not make them readily employable, strategies like work supplementation or on-the-job training may help welfare recipients become employed. And where regular jobs or subsidized employment are not feasible, work-experience programs may serve as an alternative that promotes work for welfare recipients. Administrators generally supported the use of these work activities. However, they believe that they need more flexibility to design work activities to meet the needs of their participants and local labor markets. In commenting on a draft of this report (see app. IV), HHS’ Administration for Children and Families (ACF) disagreed with our conclusion that JOBS programs do not have a strong employment focus. ACF stated that we did not sufficiently recognize programs’ use of job search or the extent of their job-development activities in evaluating their employment focus. It also stated that we did not acknowledge the many ways that programs could focus on employment and, instead, relied too much on programs’ low use of subsidized employment and work experience to indicate a weak employment focus. We continue to believe, based on all the evidence we gathered, that many JOBS programs nationwide do not have a strong employment focus. More specifically, ACF commented that the report does not recognize job search as an employment-focused activity and its extensive use in JOBS, thus, underrepresenting the employment efforts of JOBS programs. 
We acknowledge that programs can emphasize employment through their use of job-search activities for participants. As we had shown in figure 3.3, the participants enrolled in job search nationwide numbered 75,000 out of 586,600. In addition, we note that all programs use job search as an integral part of their programs and have added this information to the report. We also found, however, that while job search plays a role in all programs, its use varies considerably. Only about one-third of programs employ an early job search strategy that encourages participants to look for work upon enrollment in JOBS, in effect letting the local labor market decide who is job-ready and employable. Those who fail to find work initially are then placed in job search again after participating in education and training. On the other hand, most programs do not expect all participants to look for work upon enrollment, instead limiting job-search activities until participants have received the education and training that the program determines they need to become employed. We also note that the programs we highlighted for their strong job-placement efforts took steps beyond enrolling participants in job-search activities. These programs facilitate job-search activities by working closely with employers, through job-development efforts, to help participants find work. In addition, it is important that programs encourage their participants to accept employment by, for example, helping all participants understand the work incentives available to them. We found, however, that most programs did not strongly emphasize job-development efforts or inform all participants of important work incentives. In addition, ACF believes that we did not adequately recognize the extent of programs’ connections with employers through their job development efforts. Our data show and the report acknowledges that almost all JOBS programs include some job-development activities, performed either by a program’s own staff or through other organizations. We also found, though, that the extent of job development performed on behalf of JOBS participants, whether by the welfare agency itself or other organizations, is limited. For example, about 60 percent of program administrators reported that their program or its contractors arranged interviews for or marketed to employers only some or few of their job-ready participants. In addition, over one-half of the nation’s program administrators believe that their program or its contractors did not do enough job development to meet their participants’ needs. ACF also noted that JOBS programs can take many approaches to help their participants become employed. In addition, ACF stated that the relatively low use of subsidized employment and work experience does not necessarily indicate a lack of employment focus. We agree that there are many ways that programs can focus on employment, as we demonstrated with the examples of different approaches in chapter 2. We also agree that programs do not have to use subsidized employment or work experience to be considered employment-focused. The Riverside County program, for example, does not emphasize these options. However, we found that most programs reported placement rates for their job-ready participants of 50 percent or less. Yet programs were not widely using existing subsidized employment or work-experience options to foster work among the many participants unable or unwilling to find work. 
In addition to these issues, ACF expressed concern that our draft report promoted holding states accountable for the employment outcomes of their JOBS programs without noting the problems involved in such an approach. We acknowledge the challenges inherent in holding JOBS programs accountable for results. We maintain, however, that strong congressional interest in AFDC becoming more focused on helping recipients become employed, as well as requirements in the Government Performance and Results Act that performance monitoring become more outcome-oriented governmentwide, indicates that more attention to outcome measures and goals is appropriate. ACF also suggested certain technical revisions to the draft, which we incorporated as appropriate.
Pursuant to a congressional request, GAO provided information on employment-focused welfare-to-work programs, focusing on: (1) the extent to which county and local Job Opportunities and Basic Skills Training (JOBS) programs focus on employment; and (2) factors that hinder administrators' efforts to move Aid to Families With Dependent Children (AFDC) recipients into jobs. GAO found that: (1) some welfare-to-work programs stress employment and work closely with employers in promoting work among welfare recipients; (2) although the programs reviewed keep participants focused on the importance of work and help program participants find jobs or work-experience positions, they vary in their approach; (3) many county JOBS programs do not have a strong employment focus and many county administrators do not work with employers to find jobs for participants or use work-experience programs; (4) many local program administrators believe that insufficient staffing and resources hinder their work with employers and more flexibility in federal rules governing work-experience programs would facilitate their use; (5) the low-wage work available to many AFDC recipients discourages their movement into the work force; and (6) AFDC programs may emphasize preparing participants for employment without also making strong efforts to help them get jobs, since states are not required to track the number of AFDC recipients who get jobs or earn their way off AFDC.
The Smithsonian Institution is a unique and complex organization. Congress created the Smithsonian in 1846 as an independent trust establishment of the United States. Congress did not establish the Smithsonian within any branch of the federal government, and it is not a federal agency unless Congress designates it as such for purposes of a particular law or a federal court determines it to be. The Smithsonian Act provided that the business of the Smithsonian be conducted by a board of regents, and that the Board be drawn from all three branches of government and the private sector. The Act delineates the structure and composition of the Board. Table 1 provides more information on the structure and current membership of the Board. The Board traditionally has elected the Chief Justice of the United States to be the Chancellor of the Board. The Chancellor presides over Board meetings and selected ceremonial occasions, and until recent changes to the role of the Chancellor, had several other leadership responsibilities. The Board is vested with governing authorities over the Smithsonian and considers matters such as the Smithsonian’s budgets and planning documents, new programs and construction proposals, appointments to Smithsonian advisory boards, and a variety of other issues facing the Smithsonian. The Board also has stewardship responsibilities, including ensuring that the Smithsonian’s facilities and collections are maintained, and ensuring that the Smithsonian has a funding strategy that provides sufficient funds to support these activities. In recent years, the National Academy of Public Administration, the Smithsonian, and we have reported on the deterioration of the Smithsonian’s facilities and the threat posed by this deterioration. For example, in September 2007, we reported that funding challenges at the Smithsonian were affecting facilities’ conditions and security, and endangering collections, and that the Smithsonian’s cost estimate for facilities projects had increased to $2.5 billion from $2.3 billion in April 2005. In that report, we recommended that the Board perform a comprehensive analysis of alternative funding strategies beyond principally using federal funds to support facilities and submit a report to Congress and the Office of Management and Budget describing a funding strategy for current and future facilities needs. The Smithsonian manages two different types of funds—federal appropriations and trust funds—which include revenue from donations, grants, revenue-generating activities, interest on investments, and other sources. In some cases, the Smithsonian’s policies related to activities involving the expenditure of federal funds have differed from its policies related to the expenditure of trust funds. For example, in 2006, we reported that the Smithsonian had elected to follow the Federal Acquisition Regulation (FAR) provisions for contracts involving the expenditure of federal funds, while for business contracts that involved neither the expenditure nor receipt of federal funds and for which the FAR is inapplicable, the Smithsonian had elected to follow commercial business practices. About two-thirds of Smithsonian employees, including executives, are paid with federal funds, while other employees are paid with trust funds. Federal employees are subject to the laws, regulations, and policies for federal employment, while trust employees are covered by Smithsonian policies for trust fund employment. 
Adding to the complexity of the organization, in 1998, the Board authorized the Secretary of the Smithsonian to reorganize the various business activities within the Smithsonian into a centralized business entity, Smithsonian Business Ventures (SBV). SBV’s purpose is to generate revenue from business activities to support the Smithsonian’s mission. The Smithsonian’s business activities include a magazine and museum retail operations. The governance reform currently under way at the Smithsonian is not unique among nonprofit organizations or among other major arts and educational institutions. In light of the Sarbanes-Oxley Act of 2002 and governance reform in the corporate world, awareness has grown regarding the need for improving governance practices in the nonprofit sector. Several major nonprofit organizations, such as the American Red Cross, the J. Paul Getty Trust, and the United Way of America, have recently undergone major governance reform. These efforts were initiated in response to problems at those institutions similar to those identified at the Smithsonian, including weak controls over executive compensation and expenses and insufficient oversight and leadership by their boards. For example, the J. Paul Getty Trust initiated its governance reform in response to criticism regarding executive compensation and expenses, limited board oversight of those expenses, and weaknesses in conflicts-of-interest policies. We have found that governance and accountability breakdowns can result in a lack of trust from donors, grantors, and appropriators, which could ultimately put funding and the organization’s credibility at risk. Both the Smithsonian’s Governance Committee report and the IRC’s report highlighted several issues and areas of concern related to governance of the Smithsonian and oversight of Smithsonian management. While the Board accepted the IRC’s report, it did not concur with all of the report’s findings. Both reports laid out recommendations to improve governance, which the Board adopted in June 2007. 
Specifically, these reports highlighted three broad areas of concern. The first was inadequate policies and insufficient oversight of and knowledge among regents concerning executive compensation, trust and federal pay systems, leave for senior executives, conflicts of interest regarding service on for-profit boards, travel expenses, event expenses, activities of SBV, and internal financial controls. The second was a lack of the critical information and relationships necessary to bring forward important issues and concerns and to support vigorous deliberation and well-reasoned decision making on the part of the Board, such as a lack of access for senior management to the Board, too much control within the Secretary’s office over the information available to the Board and over the Board’s agenda, and a lack of transparency and connection to stakeholders within the Smithsonian (such as museum directors and advisory boards) and to the public at large. The third was the IRC’s finding that the Board took insufficient action to demand the critical information needed to conduct adequate oversight of the Smithsonian, which was linked to a number of issues, such as unclear roles and expectations for citizen, congressional, and ex-officio regents; a lack of engagement and participation by some regents; unclear responsibilities of Board committees and a lack of critical committees; a lack of the diversity of skills and expertise needed to conduct adequate oversight; the inflexible size and structure of the Board; and a lack of accountability of regents with regard to their performance and to fulfilling their fiduciary duties. The Board has implemented several executive reforms to address concerns about executive compensation, benefits, and expenses, and has established an overarching code of ethics applicable to everyone associated with the Smithsonian, but development of policies and internal controls for broader operational matters such as travel, event expenses, and contracting is still under way. Training and establishing accountability remain challenges. The recommendations covered in this section are shown in figure 1. Appendix II provides information on the status of all of the Governance Committee’s report recommendations. The Board has implemented reforms related to the Secretary’s compensation, executives’ compensation, executive leave policies, and conflict-of-interest policies. Regarding the Secretary’s compensation, the Board revised the process it used to establish compensation for the Secretary-elect (scheduled to assume office in July 2008). This reform was implemented to address concerns about the former Secretary’s compensation. According to the IRC, the former Secretary’s total compensation equaled $915,698 in 2007 and was troubling for two reasons. First, it included a housing allowance that reached $193,000 in 2007, which the IRC concluded did not in fact serve as a housing allowance but as a “packaging device” to provide the Secretary with additional salary. Second, the former Secretary’s starting base salary of $330,000 in 2000 was increased significantly in 2001—and reached $617,672 in 2007—based in part on compensation studies that the IRC concluded were not objective and were used primarily as a method of justifying substantial compensation increases. On March 15, 2008, the Board announced that it had selected a new Secretary of the Smithsonian. According to Smithsonian officials, the Secretary-elect’s base salary will be $490,000 as compared to the former Secretary’s 2007 base salary of $617,672. 
The Secretary-elect’s total compensation, including base salary and employer pension contributions, will be $524,000, considerably lower than the former Secretary’s 2007 total compensation of $915,698. According to a Smithsonian official, the new Secretary will not receive a housing allowance or any additional benefits beyond those provided to all senior trust Smithsonian employees. In setting this salary, the Board was guided by a compensation study that used a methodology that addressed many of the concerns the IRC had had about the previous compensation studies used to set the former Secretary’s salary. The Board also has reformed the Smithsonian’s executive compensation system by implementing what it refers to as a unified compensation philosophy. This effort involved establishing clear criteria to determine whether the compensation of each trust executive is market based or equivalent to the federal system. This reform grew out of concerns that the Smithsonian had separate employment systems governing its federal, trust, and SBV employees, and that the trust salaries for some positions—in areas such as finance and administration—were unnecessarily higher than those paid in much larger federal agencies. The Board determined that the Smithsonian should follow the federal guidelines for pay for senior-level positions except for limited instances where the job functions do not exist in the federal sector or there is not a candidate pool in the federal sector—in which case, compensation for the position should be market based. The Board also approved a plan to address the effects of applying the criteria. After applying the criteria, the Smithsonian determined that 38 of its executives were in positions that should follow a federal compensation approach. Of those 38, 19 had salaries above the federal pay cap. According to the plan, the salaries for these 19 positions will be brought in line with the federal pay cap within 5 years. A Smithsonian official stated that the implementation of this recommendation was extensively debated. According to a Smithsonian official, the 5-year transition plan was chosen to avoid the risk of having all of the affected executives leave immediately, which could hurt the Smithsonian; however, a Smithsonian official predicted that some of the affected executives may leave the Smithsonian due to the change in their salaries. The Board also implemented a recommendation to place all Smithsonian executives on the Smithsonian’s existing leave policy. This effectively addressed a concern that some Smithsonian executives were exempt from the generally applicable leave accrual system, and that as a result, the former Secretary and former Deputy Secretary were absent from the Smithsonian for about 400 and 550 workdays, respectively, between 2000 and 2006. On September 30, 2007, the Smithsonian transitioned all these executives to the existing trust leave policy that is equivalent to the federal leave policy for senior executives. The affected executives were provided with opening leave balances for annual and sick leave according to a formula. According to National Finance Center data, since the leave policy went into effect on September 30, 2007, 33 of the 34 executives had recorded some leave by February 16, 2008. The Board strengthened its policies related to conflicts of interest by prohibiting for-profit board service by its senior executives and requiring prior approval for service on nonprofit boards by any senior executives. 
The previous policy resulted in two senior executives, including the former Secretary, serving on for-profit boards, one of which had a contract with the Smithsonian; some regents were unaware of this board service. The Board took action on this issue by creating a conflicts-of-interest policy applicable to senior executives. In addition, the General Counsel now reports to the Board’s Audit and Review Committee on senior executives’ outside activities. Also, although the Smithsonian has individual codes of ethics for employees, volunteers, regents, and advisory board members, until the governance reform, it had no overarching code for the entire institution. The Board approved a Statement of Values and Code of Ethics in January 2008, which applies to everyone in the Smithsonian community. A few additional efforts to improve the ethics policy are planned, but not yet complete—for example, the creation of a conflicts database. However, after reviewing the new policies and ethics code and comparing them to current industry practices, we believe that the new policies and ethics code follow current industry practices. The Board has established some new policies related to travel and event expenses, and has initiated reviews to consider how to strengthen and improve the Smithsonian’s travel and event expense policies and contracting policies, including for SBV. The Board has also initiated a separate review of its internal controls in these areas and on an overall basis. In some cases, new policies have been established in the interim while the final policy is under development. The Smithsonian’s CFO stated that the teams addressing the Governance Committee’s recommendations in these areas have generally expanded the scope of work beyond what would be required to implement the recommendation to think about what else needs to be done, and the Board has been supportive of this approach. As a result, the implementation of some of these reforms has not yet been completed. Successfully implementing these reforms is likely to include challenges related to training and establishing accountability. The ongoing effort to reform the Smithsonian’s travel policies is in response to concerns about the former Secretary’s travel practices and expenses, including a provision in his employment agreement that authorized first-class air travel—and which he and the Board interpreted to include premium travel in other regards as well. In addition, the IRC pointed out that the former Secretary’s expenses were not reviewed for reasonableness and raised concerns about his chartering of a private jet, at a cost of $14,000, to travel to San Antonio, Texas, to receive an award and return to Washington, D.C., the next day to attend a Board committee meeting. The Board adopted interim policies on travel expenses in April 2007 and adopted them as standing policies in June 2007. Among other things, the standing policies subject all Smithsonian officials and employees to a single Smithsonian-wide policy that is compliant with federal travel regulations—whether the travel is paid for with trust or federal funds— and established that the Smithsonian should not pay or reimburse the cost of travel on chartered aircraft in the absence of demonstrated business necessity. In addition, according to Smithsonian officials, the Acting Secretary’s travel expenses are now approved by the CFO’s office prior to his travel. 
A team also is reviewing the Smithsonian’s travel policies, with a focus on clarifying the policies and procedures to make it easier for Smithsonian employees to understand and follow them. The effort also is focusing on establishing review and audit processes for travel and expense reimbursement to strengthen internal controls in this area. The Board also adopted some new policies on event expenses, while a team is undertaking a broad review of event expense policies. The Board undertook these actions in response to findings in the Inspector General’s report related to the former Secretary’s entertainment expenses. The issue of event expenses also was raised in late 2007 and early 2008, when the media and some members of Congress expressed concern about the reasonableness of the event expenses surrounding the departure of the former Director of the National Museum of the American Indian. The Board adopted policies that attempted to clarify this issue by stating that Smithsonian funds, including for trust events, should only be used for reasonable expenses in accordance with Smithsonian policies. In its ongoing review, a Smithsonian team has developed an approach to event expense policies that will divide Smithsonian events into three categories, from the most formal to the least formal, with clearly identified requirements for approvals and authorizations. The new policy will cover both regent events and other Smithsonian events. The team also decided that given the variation in size and mission of different Smithsonian museums, research centers, and programs, each may supplement the policy with additional expenditure and approval guidelines specific to their organization. The details of this framework, scheduled to be reviewed by the Audit and Review Committee in May 2008, have yet to be finalized. Under the leadership of the Board, the Smithsonian also is reviewing its contracting policies. This effort stemmed from concerns that the Smithsonian had entered into confidential business contracts that appeared to have been awarded in a manner not consistent with contracting standards generally applicable in the public sector. For the contracting review, a team has worked to formally document Smithsonian contracting policies and procedures, and is clarifying any exceptions to following the FAR. The team has drafted a contracting policy and is working to develop a set of handbooks to further explicate specific areas of contracting. Regarding SBV’s policies, a team has reviewed Smithsonian and SBV policies to determine which Smithsonian-wide policies are applicable to SBV and under which circumstances SBV should deviate from Smithsonian policies. This effort came out of concerns about the propriety of SBV policies and activities. When SBV was created in 1999 to consolidate and improve Smithsonian business activities, it was not subject to all Smithsonian policies. According to the CFO, the team that examined SBV’s policies found only three SBV policies that are not consistent with relevant Smithsonian-wide policies and they all involve financial or payroll systems SBV uses that differ from Smithsonian systems. The team has drafted a new process by which SBV can request an exception to Smithsonian policies that was scheduled to be implemented in April 2008. In addition, the Acting Secretary created a task force that recommended significant changes to SBV, such as restructuring and the creation of a uniform revenue-sharing formula. The Board approved these recommendations in January 2008. 
Another team, at the direction of the Board, is examining the appropriateness of Smithsonian internal controls across the Smithsonian’s operations. This reform effort relates to concerns about whether the Smithsonian had appropriate policies and an effective process for enforcing and monitoring compliance with its policies. Furthermore, a management letter from the Smithsonian’s external auditor based on the Smithsonian’s fiscal year 2006 financial statement audit cited inadequate accounting resources and staff as a “reportable condition,” an internal control weakness. The Smithsonian has defined internal control as a process designed to provide reasonable assurance of the Smithsonian’s ability to achieve and sustain effective and efficient operations, reliable financial reporting, and compliance with applicable laws, regulations, and policies. To implement this review of internal controls, the team identified a core set of 16 critical processes where there is financial risk—including processes related to travel and event expenses, contracting, SBV, and other areas such as financial reporting and federal and trust funds control—and developed a framework with which to analyze each process. The internal control framework for these reviews is closely aligned with the framework established by our Standards for Internal Control in the Federal Government. In response to the outside auditor’s concerns about the CFO’s resources, the Smithsonian is in the process of a phased hiring of 10 additional accountants. According to the CFO, funding has been obtained for 8 of the 10 positions and has been requested for the last 2 positions for fiscal year 2009. Effectively implementing the new policies and procedures developed during these review efforts is likely to depend on training Smithsonian staff at all levels and on establishing accountability, both of which may be challenging for the Smithsonian. The implementation of some of these reforms is scheduled to stretch beyond 2008. Smithsonian officials estimated that reforms related to travel expenses and SBV policies would be fully implemented in the third quarter of 2008, which will provide an important basis for taking further action to strengthen the Smithsonian’s internal controls. According to Smithsonian officials, the event expense policy and contracting reforms are scheduled to be fully implemented by the end of 2008. The internal control recommendations are scheduled to be approved by the end of 2008, with the implementation of the approved recommendations scheduled to take place in 2009. Several of these efforts are likely to lead to a level of standardization or requirements that did not exist before, which must be implemented by staff at many levels and throughout many of the Smithsonian’s museums, research centers, programs, and central offices. Smithsonian officials acknowledged that effectively training staff on some of the new policies is important and likely to be challenging. For example, according to Smithsonian officials, new requirements and standardization related to the changes in the Smithsonian’s events policy may cause confusion and resistance from some Smithsonian staff, which could make training challenging. According to a Smithsonian official, training is also important to the success of the travel policy reforms. 
This official stated that while the vast majority of Smithsonian staff are trying to follow the rules regarding travel, the rules are complicated, and Smithsonian staff who are creating travel requests at museums, research centers, or programs may not understand the complexity of the rules as they apply to specific situations. Establishing accountability mechanisms to monitor whether new policies are followed is also important for the success of these efforts—and likely to be an ongoing challenge. Accountability represents the processes, mechanisms, and other means by which an entity’s management carries out its stewardship and responsibility for resources and performance. Smithsonian officials emphasized that the teams undertaking the reviews of Smithsonian policies are considering how to best establish accountability processes and mechanisms to ensure compliance with new policies. However, the Smithsonian’s ongoing efforts to establish accountability for travel expenses illustrate challenges related to determining how to monitor the compliance of top executives and how to effectively establish audit processes given limited resources. To monitor the compliance of top executives, the CFO now approves the Secretary’s travel spending in advance, and the undersecretaries’ offices review the appropriate directors’ travel in advance, as was always the case. Before the reforms, the Secretary’s anticipated travel expenses were not reviewed prior to travel. Under this change, the Secretary’s travel is being approved by a subordinate—the CFO. According to Smithsonian officials, the Smithsonian decided on this policy because the CFO has direct access to the Board, and through informal benchmarking it found that some other federal agencies either provided the agency head with blanket travel authorizations, which the Smithsonian did not want to do, or had the agency head’s travel approved by the executive secretary, a position that does not exist at the Smithsonian. The CFO stated that to supplement this process, she is planning to hire someone to support posttravel audits of the Secretary, executives, and the entire Smithsonian. The Smithsonian has not yet determined the extent to which the Board will review the travel expenses of the Secretary and other executives. Although the Governance Committee’s recommendation in this area calls for the Audit and Review Committee to review the Secretary’s travel expenses and to report to the full Board at least annually on these expenses, the process by which this will occur on an ongoing basis has not yet been established. According to Smithsonian officials, the Board did review the Secretary’s travel in January 2008, and a team is considering a posttravel audit approach that would include 100 percent travel voucher audits for all senior executives with vouchers greater than $2,500. The team working on this issue also will recommend how much information from travel reviews or audits is to be provided to the Audit and Review Committee. The new processes are scheduled to be implemented by the end of the third quarter of 2008. Officials at the J. Paul Getty Trust, which faced some similar governance issues over the expenses of a previous chief executive officer, told us that the Chair of the Trust’s Board now reviews the chief executive officer’s travel expenses on a quarterly basis. 
In addition, its audit committee receives quarterly reports on the travel expenses of the chief executive officer and nine senior directors. Another ongoing challenge for the Smithsonian is implementing effective accountability measures within limited resources. For example, the CFO stated that while the Smithsonian had a long-standing practice of quarterly posttravel compliance reviews of a sample of travel activity, the Smithsonian has not been able to complete these reviews on a quarterly basis since the second quarter of 2004—well prior to the governance problems identified by the Governance Committee. While the Smithsonian has completed assorted travel-related reviews, including checks for duplicative payments and senior executive travel reviews, it has not renewed its post-travel voucher audits. As described above, the outside auditor’s management letter that accompanied the Smithsonian’s fiscal year 2006 financial statement audit cited inadequate accounting resources and staff as a reportable condition, and a team is working to address this through hiring more accountants. Establishing accountability is especially important because among the governance problems identified by the IRC were that the Smithsonian did not comply with its existing policies and procedures with respect to accounting for expenses, and the former Smithsonian Secretary interpreted a Smithsonian policy authorizing his first-class air travel to qualify him for first class accommodations as well— despite Smithsonian policies to the contrary. The fact that the Inspector General found that the former Director of the Smithsonian’s Latino Center had violated Smithsonian policies related to contracting, acceptance of gifts from outside sources, and travel expenses, among other things, also illustrates the importance of establishing effective accountability mechanisms in these and other areas. The Board has completed the actions it proposed for improving its access to information and making its operations more transparent, but actions to improve communication and relationships with stakeholders are not fully implemented. Going forward, a challenge for the Board will be to ensure that all actions taken to improve the engagement of advisory boards and other stakeholders include a mechanism by which the Board can—on a regular basis—receive and consider unfiltered, needed information; take appropriate actions in response; and report its progress. The recommendations covered in this section are shown in figure 2. Appendix II provides the status of all of the Governance Committee’s report recommendations. The Board formally amended its bylaws to require attendance of the General Counsel and the CFO, or their designees, at Board meetings. Also, the reporting relationship for the Inspector General was changed from the Secretary to the Board, and the Audit and Review Committee’s charter was amended to reflect this change. These changes were in response to concerns that the Board was not developing relationships necessary to allow Smithsonian senior officials or staff to bring forward information of concern to them. Our review of Board meeting minutes shows that the General Counsel, CFO, and Inspector General have attended all full Board meetings since the changes were made and have provided information and raised concerns to the Board. 
Based on our interviews with the General Counsel, CFO, and Inspector General, as well as regents and other officials both before and after the changes were initiated, the changes have generally improved access and communication and strengthened reporting relationships and governance reform efforts. For example, the General Counsel and CFO have been working with the Board’s committees in carrying out governance reforms related to ethics and contracting policies, respectively. The Board also created an independent Office of the Regents with staff dedicated and reporting to the Board in order to better control the agenda of the Board and the level of information available to the Board. The regents now have a greater role in determining which matters warrant the Board’s time and attention, decisions that were previously largely made within the office of the former Secretary. This is an important change in the Board’s practice because it will serve to help prevent any future Secretary from attempting to control the level of information provided to the Board as the Governance Committee’s and IRC’s reports indicated the previous Secretary had done. The Acting Secretary designated the General Counsel as corporate secretary who, among other duties, officially records the minutes of all Board meetings—similar to practices of other nonprofit organizations. Since November 2007, minutes of most Board meetings, including committees, have generally been recorded by the corporate secretary or his staff and are more thorough than they were previously and include more context about information presented to the Board during the meeting. For example, the reports of Board committees are more detailed than previously. We also found that Board meeting minutes have been improved to capture more deliberation than they did previously, which will likely serve to help improve transparency of the Board’s actions. Although the Smithsonian is not subject to the Freedom of Information Act (FOIA), Smithsonian officials told us they follow the principles of FOIA when fielding the public’s requests for access to records. The Smithsonian created a FOIA policy in November 2007 to formalize and clarify its records access policy. This change came about in response to criticism that there was confusion as to which FOIA principles applied to the Smithsonian. According to the General Counsel, the Board adopted a FOIA policy to formally articulate and clarify the Smithsonian’s policy on public access to Smithsonian records. We compared the Smithsonian’s FOIA policy with FOIA itself and found that the Smithsonian’s policy is consistent. The Smithsonian articulates its public records’ policy and establishes exceptions or cases in which documentation would likely not be provided, for example, certain commercial or personal data. According to the General Counsel, contracts negotiated since this policy was created no longer contain blanket confidentiality provisions. Furthermore, the Smithsonian intends to remove its FOIA policy provision authorizing the Secretary to carve out additional exceptions. The formulation of this policy should result in less confusion regarding the public’s access to records and provide greater transparency. Finally, the Board created a Web page dedicated to governance to increase the public’s access to information and improve transparency. Information posted by the Board relates to its structure, membership, and functions. 
For example, the Board has posted minutes, committee assignments, reports to the Board, and a document tracking the governance reform efforts, which is updated monthly. These changes were made in response to criticisms about the lack of transparency regarding governance at the Smithsonian following the recent governance problems. The Smithsonian has improved the amount of governance information available to the public, and by comparison, generally has more governance content available to the public than some other nonprofit organizations’ Web sites. The Board also is planning to post additional information in an effort to improve the transparency of the Board’s policies and actions. For example, according to a Smithsonian official, the executive compensation study which describes the benchmarking used to determine the salary range for the new Secretary, committee meeting minutes, and information regarding criteria for regent nomination, eventually will be posted to the Web page. In one effort to improve governance through strengthened stakeholder relationships, the Board is studying how to improve the link between the Board and the Smithsonian’s 30 advisory boards, which include a national advisory board as well as advisory boards that focus on individual museums, research centers, or programs. As the Board’s study and recommendations on this issue have not yet occurred, it is too soon to fully evaluate the Board’s efforts or conclusions. Past efforts to enhance this relationship have been limited, and some directors and advisory board chairs expressed ongoing concerns about the relationship. Most advisory boards (except for those with mandated statutory authority) have no independent governance function, and all are subject to the authority of the Board. There is considerable variability among advisory boards, reflecting the differences in how the boards came into being, their missions, their bylaws, and the entities they serve. Their primary purpose is to provide advice, support, and expertise to the directors of museums, research centers, and programs, as well as to the Board and Secretary. For example, some advisory board chairs we spoke with assist their museums, research centers, or programs with acquisitions, program development, strategic planning, or fundraising, among other things. According to the Governance Committee’s report, the advisory boards provide a key link between the Board and the public and a direct connection to the museums. Some advisory boards include a regent, while most do not. The Board has not always fully implemented past recommendations to enhance the relationship between the Board and advisory boards. A 2002 internal Smithsonian study found that the advisory boards were an underutilized asset and recommended, among other things, that each advisory board should submit a report to the regents every 3 years on its museum or facility on a rotating basis, and the Board should respond in writing to these reports. In response to this recommendation, the Smithsonian implemented the practice of having a subset of advisory boards submit a paper to the Board for its meetings starting in September 2003, with the idea that each advisory board would report every 3 years. However, the second half of the internal recommendation, that the regents should respond in writing to the papers, was never implemented. The practice was discontinued after January 2007. Our work suggests a continued broad level of concern over the lack of engagement of the regents with advisory boards. 
Nine of the 10 advisory board chairs we spoke with stated that they had had little to no direct contact with the Board. Although 2 of the 10 advisory board chairs we spoke with were generally satisfied with the current level of interaction between the Board of Regents and the advisory boards, 6 of the advisory board chairs we spoke with were not. Some of the museum directors with whom we spoke also raised concerns about the current relationship between the advisory boards and the Board, and the majority of the museum directors see additional value in enhancing the relationship between the Board and the advisory boards. Negative effects of the current relationship described by several advisory board chairs or museum directors included concerns that the Board’s lack of understanding of individual museums, research centers, and programs reduces the Board’s ability to effectively oversee the prioritizing of the Smithsonian’s budget, limits the effectiveness of the Smithsonian’s fundraising efforts, and limits the Board’s ability to articulate a unifying vision for the Smithsonian to strengthen its mission and better leverage its future potential. According to Smithsonian officials, the Chair of Smithsonian’s National Board, which is a Smithsonian-wide advisory board, attends all Board meetings and to the extent that advisory boards interact with the National Board, is therefore an existing vehicle for communication between advisory boards and regents. However, most of the advisory board chairs we spoke with did not think that the National Board Chair served as a liaison between the advisory boards and the Board of Regents. The Smithsonian has taken some steps to rethink this relationship. In January 2008, the National Board changed the agenda of an annual meeting to which it invites all advisory board chairs to include roundtable discussions, a luncheon with the Acting Secretary, and a presentation by a regent. According to Smithsonian officials and the two advisory board chairs we spoke with about this meeting, the meeting was positive. However, the two advisory board chairs we spoke to stated that the meeting focused on enhancing the relationship among advisory boards rather than on what they saw as a remaining need to enhance the engagement of the regents with the advisory boards and the museums, research centers, or programs they represent. Going forward, according to a Smithsonian official, Office of the Regents’ staff will attend business meetings of the advisory boards to serve as direct liaisons to those boards and the regents. Four organizations we spoke with that have undergone governance reforms and that, like the Smithsonian, have a structure that includes a number of units, described to us various models they have developed to engage their advisory boards and obtain their input. For example, the United Way of America, which in reforming its governance in 2000 moved away from having members of a number of local affiliates as board members, now has a council of its chapter boards that provides advice to the chief executive officer and whose concerns are communicated to the board at an annual meeting. The American Red Cross holds an annual convention at which its chapter boards may interact with its governing board. American University has representatives from different constituents provide input to board committees. These representatives also may attend board meetings. After its governance reforms, the J. 
Paul Getty Trust, a much smaller organization, increased communication between its board and the four unit heads by inviting them to attend and participate in board meetings. A notable similarity in the way these four organizations engage and obtain input from their constituent parts is that each has a mechanism for regular, personal interaction between the governing board and constituent representatives. Directors and advisory board chairs we spoke with had a variety of suggestions for improving the relationship between the advisory boards and the Board. Some suggestions sought to help regents thoroughly understand each museum, research center, or program—for example, by having a regent serve on each advisory board or having a regent hold regular discussions with the director or advisory board chair. Other suggestions included inviting advisory board chairs to the Board’s meetings or having advisory board representation on the Board. Any actions taken to improve the relationship between the Board and the advisory boards will involve balancing the regents’ time demands with ensuring that the Board has a strong enough understanding of and relationship with the museums, research facilities, and programs it governs to effectively prioritize the Smithsonian’s budget, encourage fundraising efforts, and strengthen the Smithsonian’s mission. The Board is currently developing a plan for an annual public meeting to enable interested stakeholders to provide information and make inquiries directly to the Board. This change is being considered in response to criticisms about the lack of transparency regarding governance at the Smithsonian following the recent incidents of governance and accountability breakdowns. According to the Governance Committee’s report, this annual meeting should be an opportunity for the Board not only to disseminate information but also to receive information and input from interested stakeholders, including the public, Smithsonian staff, and others, to help inform the Board’s decision making. The meeting was originally planned for June 2008, but the Board’s Chief of Staff told us that, as of May 2008, plans had not yet been fully developed and that, because the new Secretary’s term begins on July 1, the Board plans to hold the public meeting after the new Secretary has begun his service, at either the Board’s September 2008 or November 2008 meeting. The Board has enlisted a governance consultant to provide technical assistance in developing the meeting agenda and is planning for the meeting to focus on specific topics well-suited for stakeholder input, for example, strategic planning or the future of the Arts and Industries Building. Also in response to criticisms regarding a lack of communication and transparency at the Smithsonian, the Smithsonian’s Director of Communication and Public Affairs led a team to develop a strategy to increase the information available to identified stakeholders, such as senior management, staff, and the public, about the Board’s and the Smithsonian’s activities and operations. The Governance Committee’s report directed that the communication strategy be modeled after best practices in the federal government, nonprofit organizations, and universities. The Governance Committee’s recommendations for the communication strategy appear to be in line with practices we have identified for federal agencies undergoing major transformations. 
For example, the Governance Committee recommended that the strategy include (1) mechanisms to foster communication between and among senior management and regents, staff, and other stakeholders; (2) a framework to ensure effective congressional outreach and information; and (3) a plan to ensure all stakeholder constituencies are routinely informed of important decisions and have opportunity to provide comments or information to the Board and management. Our work similarly suggests that an effective communication plan should reach out to all employees, customers, and stakeholders and seek to genuinely engage them; communication should facilitate a two-way exchange of information; and it is important for feedback from stakeholders to be considered, and appropriate actions made in response and progress reported. Originally scheduled to be complete in December 2007, the Smithsonian’s communication strategy was finalized and approved by the Board on May 5, 2008. Because this effort is not yet fully implemented, we were not able to assess the communication strategy in its entirety, and it remains to be seen how effective the communication strategy will be. However, the team has taken a number of steps thus far. For example, the team has defined stakeholders, analyzed existing communication processes, performed broad stakeholder surveys and analyses, and developed a communication plan. Officials told us that among issues the team is still considering is what information stakeholders most need. The team also is seeking to avoid duplication of efforts; for example, where there are already multiple avenues for receiving information, the team will consider whether tools such as a hotline or a public ombudsman are necessary. A challenge going forward will be to ensure that actions taken to improve communication with stakeholders include a mechanism by which the Board can—on a regular basis—receive and consider unfiltered, needed information; take appropriate actions in response; and report its progress. The Board has largely completed the actions it proposed for clarifying regents’ responsibilities and studying possible changes in its size and structure, but actions for assessing Board performance are still being developed. The Board has not yet developed a process for assuring transparency and accountability in selecting nonregents and using them to enhance governance. The Board is developing a self-assessment process to examine its performance, but it is not clear how the Board will hold regents accountable in the event of performance problems. The Board does not currently have plans to conduct a broader evaluation of its governance reform efforts after a suitable period of implementation to determine if they have been effective in addressing the governance and accountability breakdowns that occurred. The recommendations covered in this section are shown in figure 3. Appendix II provides the status of all of the Governance Committee’s report recommendations. As part of its governance reform efforts, the Board adopted a specific set of duties and expectations for all regents, which is a key step in governance reform. Previously, the roles and responsibilities of regents were not clearly and explicitly defined, which, in part, led to the lack of oversight and awareness evidenced in recent problems at the institution. 
Elements of the regents’ written duties and responsibilities, such as engaging in forthright discussions about the Smithsonian’s strategic and operational issues, are in accordance with current board duties of comparable organizations with established practices. While we did not evaluate the effectiveness of governance reforms at comparable organizations, many of these organizations have highlighted the importance of clearly delineating roles and expectations of regents and of holding regents accountable for their individual performance. For instance, the American Red Cross emphasized how this process of clarification has helped to create a culture of accountability among its board members, resulting in greater vigilance in their oversight activities. Based on our interviews with regents and senior Smithsonian officials, these clarifications appear to have contributed to a greater awareness of the regents’ roles and responsibilities in overseeing the institution. While the initial set of written roles and responsibilities applied to all regents, in practice there have been differing expectations regarding the level of participation and involvement of citizen, congressional, and ex-officio regents. Citizen regents have been expected to chair committees, have historically had greater involvement on committees, and have often participated on multiple committees. Congressional regents have not been expected to chair committees, and some have not served on committees in the past. Furthermore, the Vice President, an ex-officio regent, has not participated in any Board meetings. However, according to the General Counsel, the regents are charged with a sole trust responsibility: to increase and diffuse knowledge among mankind. In discharging that responsibility, all regents are subject to the same fiduciary duties as other trustees: loyalty, prudence, and due care. The IRC recommended that congressional and citizen regents accept a fiduciary relationship with the institution and devote the requisite time to carry out those duties. The IRC also recommended that the ex-officio regents continue to serve on the Board, but in an advisory role, without fiduciary duties to the institution. The Board has recently decided to expand the expectations regarding the duties and responsibilities of congressional regents—including expectations that they serve on at least two committees, serve as committee chairs when needed, and carry a workload equal to that of citizen regents. Given congressional regents’ other duties as members of Congress, the Board envisions the regent liaisons playing a larger role in assisting them in fulfilling their expanded roles and responsibilities. In creating this expanded role for liaisons in the functioning of the Board, it will be important for the Board to clearly delineate what is expected of regent liaisons. The Board has not changed the role or expectations of the Vice President as a member of the Board, but sees value in retaining representation from the executive branch. Once the Board finalizes the expectations regarding the duties and responsibilities of its regents, a revised set of written roles and responsibilities will be included in the orientation program that is still under development. According to the Chief of Staff to the Board, implementation of the orientation program was not completed in time for the new regent, who began his term in March 2008. 
The Board also has clarified the duties of the Chancellor, who by tradition has been the Chief Justice, and created a Chair position to recognize that the Chair of the Board has more leadership responsibilities. Previously, the Smithsonian bylaws established the Chancellor as the Chair of the Board, although the Chair of the Executive Committee in practice performed many duties that would otherwise be expected of the Chair of the Board. The lack of clarity and specificity with regard to leadership on the Board and the limited engagement in Board activities that can reasonably be expected from the Chief Justice were factors that both the Governance Committee and IRC reported were associated with the Board’s lack of oversight. To further strengthen leadership, the Board also has recently decided to create a Vice Chair position. Although it is not yet clear how these changes will affect Board leadership, regents we interviewed expressed the belief that this structure would retain the value they perceived in having the Chief Justice’s involvement and also address the limitations on his involvement. The Chair, who was recently elected by the Board, is now expected to play a greater leadership and oversight role in the governance of the Board, including being the chief spokesperson for the Board, working with senior Smithsonian officials to communicate and oversee implementation of policies adopted and approved by the Board, and leading the Board in its annual evaluation of the Secretary’s performance and compensation. The Chancellor’s role includes presiding over Board meetings and selected official ceremonies. According to several regents we interviewed, the skill with which the current Chief Justice has led Board meetings was perceived to add tremendous value to the proceedings. According to some governance experts, this structure retains the potential for the magnitude of the Chief Justice’s office to suppress and discourage honest and candid deliberation due to the deference individuals may give the Chief Justice, which can hinder Board performance; although according to regents we interviewed, there is no indication that this has occurred. The Board has taken a number of steps to strengthen its committee structure, such as creating new committees, appointing new committee chairs, and directing all committees to review their charters. Previously, committees were not consistently examining their roles, responsibilities, and jurisdiction, and the June 2007 Governance Committee report raised concerns that some committees were not effectively carrying out their proper oversight functions as a result. Most committees have now completed this review. However, the review of the Executive Committee’s charter has not yet been finalized. The IRC’s report criticized the Executive Committee for taking certain actions independently of the Board as a whole, such as approving the compensation of the former Secretary, and then seeking approval of the full Board after the fact. While it is not unique for an executive committee to take certain actions or make decisions outside of the full board, experts we spoke with cited concerns when too much authority to make decisions is placed in the hands of just a few board members. Moreover, a January 2008 BoardSource report cited similar concerns and provided several options for reforming the committee, including increasing the size to avoid the risk of delegating decision-making powers to too small a group. 
Nonetheless, the Board has decided to maintain the current size and scope of responsibilities of the Executive Committee, but may recommend reconsideration at another time. The Board also has decided that the newly created Vice Chair of the Board will be a member of the committee, and the Board has made it a responsibility of the Governance and Nominating Committee to closely monitor the activities of the Executive Committee. It remains to be seen whether these steps will be sufficient to ensure that the Executive Committee does not begin to act in lieu of the full Board when it should and could be engaged. At the onset of its governance reform effort, the Board created new standing committees on governance and facilities that previously did not exist; however, the Board noted in the Governance Committee’s report that significant areas of Smithsonian activities were not within the oversight of any of the Board’s committees, including fundraising and development, most programmatic activities, and strategic planning. The BoardSource report also noted the need and potential for adding additional committees, and suggested enhancing the current committee structure by creating a committee on resource development to demonstrate the Board’s commitment to obtaining appropriate resources beyond federal allocations. The report also suggested establishing a strategic planning committee to lead the Smithsonian through the strategic planning process that would identify programmatic and fundraising goals. In addition, the report from the SBV task force that was issued in January 2008 also recommended creating a committee or subcommittee to focus on increased oversight of SBV programs. As a result of these reports, the Board has decided to create new committees on strategic planning and programs, and advancement. Since these committees have just been created, it remains to be seen how they will improve the Board’s engagement in the Smithsonian’s strategic planning and budget formulation processes and change its role in raising funds for advancement and development. The Board has addressed issues regarding the need for additional expertise by increasing the use of nonregents on committees, though some issues are still under consideration regarding the appropriate selection and use of nonregents. The IRC’s report noted concern that the Board may have needed expertise in some areas—such as financial or accounting expertise—and that this contributed to the lack of oversight of executive compensation and expenses. According to some governance experts and the BoardSource report, bringing in nonregents with specific expertise to be part of Board committees can be effective in providing an independent and different perspective and in “doing the work” of the committee. Comparable organizations also recruit nonboard committee members to provide advice and different perspectives. For instance, officials from the United Way of America stated that half of its board’s committees have nonboard members in addition to board members. Table 2 shows the increase in the use of nonregents on the Board of Regents’ committees as well as the relative workload changes for citizen and congressional regents. While using nonregents permits the Board to increase its knowledge base without changing its size or structure, no formal process exists for the identification and selection of nonregents, and certain policies regarding the appropriate role of nonregents are still under development. 
Nonregents are currently recruited and vetted through informal channels. For example, regents may reach out within their particular field or professional network for recommendations or referrals for experts in a particular discipline. The full Board then approves the appointment of nonregents. According to the Chief of Staff to the Board, the Governance and Nominating Committee is considering policies and processes regarding the use of nonregents. For example, one option under consideration is to look to first recruit nonregent expertise from the advisory boards within the Smithsonian before looking outside the institution. In addition, the Governance and Nominating Committee is still considering whether and how nonregents should take on leadership roles in the committees. The Board has decided that no major structural changes should be made to the Board’s composition. Many have suggested that the Board’s composition contributed to governance breakdowns because of concerns about a lack of engagement of some regents and concerns of whether the Board is large enough to house the diversity of skills and expertise needed to effectively carry out its oversight activities. The BoardSource report reinforced these concerns and found that the statutorily required structure and composition of the Board can, in fact, create limitations for the Board with regard to evenly spreading its workload, and adding a diversity of skills and expertise. Furthermore, the Board has no authority in recruiting and appointing congressional and ex-officio regents, which can limit its ability to add needed expertise within its current structure. Our analysis of the criteria developed for citizen regents finds that the criteria are consistent with current accepted practices, and include such things as whether individuals can serve in leadership roles, whether they have well- rounded experience in multiple fields, and whether they have specific expertise needed on the Board and are appropriate for the Smithsonian’s specific needs. However, in selecting congressional regents, it is unclear whether the President Pro Tempore of the Senate and the Speaker of the House—who appoint congressional members—consider such criteria when selecting congressional regents. The BoardSource report suggested four options for the Board to consider that included making no major structural changes to the Board and a range of increases in the size of the Board either through adding more citizen regents, decreasing the number of congressional regents, or both. The report noted that any changes to the Board’s composition regarding its size or representation would require legislative action. All four options would maintain representation from all three branches of government, and thus retain different expectations for citizen, congressional, and ex-officio regents. Literature we reviewed and governance experts we spoke with indicated that it is not necessarily the particular size or structure of the board, but rather a board’s high level of engagement and participation, and a broad diversity of skills and expertise that seem to drive effective governance. Experts we spoke with pointed to examples of successful boards of many sizes and structures. In theory, boards with ex-officio and other governmental members can still be effective, but their effectiveness depends on having clear and specific expectations of the level of commitment and contribution board members can realistically make. 
Moreover, some experts we interviewed held the view that major structural changes may not be the most effective approach when considering governance reform, and that there is some evidence—though not conclusive—that reforms aimed at changing the “culture” of a board to be more participatory and accountable are ultimately more effective at improving governance. The J. Paul Getty Trust, which, according to officials, faced many of the same issues regarding inadequate oversight of executive compensation and expenses, did not make significant structural changes to its board, but rather focused on instilling a culture of accountability. Although we did not assess the effectiveness of their reforms, officials involved with the board indicated that these changes resulted in major improvements from their perspective with regard to their board’s engagement and accountability. According to the Chief of Staff to the Board, the Board expects that further clarification of duties and expectations of all regents—such as increasing the participation level of congressional regents in committees and distributing the workload equally among the citizen and congressional regents—and increased use of nonregents to address the need for diverse expertise and skills in committees will adequately address the major governance issues regarding size and structure facing the Board. Furthermore, as previously mentioned in this report, staff liaisons for congressional regents—who, according to the BoardSource report and to regents we interviewed, have been viewed as an important conduit for the congressional regents to keep them abreast of the activities of the Board— also are envisioned to play a greater role in assisting congressional members in taking on more responsibilities. However, these changes have just been instituted, and thus it remains to be seen whether the Board’s actions will be sufficient to ensure that all regents are fully engaged and held accountable for fulfilling their roles and expectations. The Board is in the process of developing a self-assessment process to examine its performance, which is not expected to be finalized and implemented until June 2008. Previously, the Board lacked a formal and regular assessment of its performance to determine its effectiveness in governance and oversight of Smithsonian management. Our review of standards of practice for nonprofit boards indicates that implementing a regular and consistent self-assessment process is critical to improving individual performance and contribution to a governing board. If a board does not assess its performance, it is missing a key opportunity for input from its own members for improving its operations and governance policies. A self-assessment of the board, committees, and individual members enables a board to identify areas for improvement in the board’s operating procedures, its committee structure, and its governance practices. The Board has drafted a self-assessment questionnaire to facilitate the evaluation process and help the Board understand what it does well and what needs to improve. However, the draft questionnaire does not fully address evaluation of performance against the full range of regent expectations, such as the Board’s role in the Smithsonian’s strategic planning process. The self-assessment process will need to be further developed to reflect changes to expectations for congressional regents and their liaisons. 
According to the Chief of Staff to the Board, the Board is currently considering changes to the draft assessment tool to reflect these concerns. Other organizations go beyond self-assessments and can remove board members whose performance is inadequate or not up to expectations. For instance, American University told us that its Board of Trustees conducts an individual and peer assessment every 3 years (board members have 3-year terms), at the conclusion of a member’s term, before reappointment. Board members whose terms are up for renewal complete a self-assessment form, and their colleagues on the board also assess the board members’ performance, using a different assessment form. If a board member has not performed up to expectations, he or she will not be nominated for re-election. A similar practice was instituted at the United Way of America in its governance reform, where officials told us there have been cases in which a board member was removed for not attending meetings and fulfilling board duties and obligations. In other organizations, boards retain the authority to remove board members for cause. For example, the board of the Legal Services Corporation, which is a federally chartered nonprofit organization consisting of 11 bipartisan board members, has the authority to remove its members for cause, such as persistent neglect of duties, by a vote of at least 7 members. While board members are appointed by the President with the advice and consent of the Senate, neither the President nor the Senate has the power to remove the Legal Services Corporation board members. Although the Smithsonian Governance and Nominating Committee can examine and determine the level of interest of a regent in continuing his or her service and weigh the interest of other regents before calling for a vote on the regent’s renomination, the Board has no authority to remove regents for cause, such as persistent neglect of duties during a regent’s term. The Smithsonian’s General Counsel told us that the Board itself cannot remove regents—only Congress can take action to remove a regent. Based on our interviews with Smithsonian officials, it is unclear what actions the Board could or would take in the event of persistent neglect of duties by any of its regents. Removal of a regent has occurred at least once in the history of the Smithsonian, but not in recent memory. It remains to be seen how the Board will proceed with its self-assessment process, and a key challenge for the Board will be how to hold all regents accountable for individual performance. Moreover, no mechanism is planned to evaluate the implementation and effectiveness of the governance reforms after enough time has passed for the Smithsonian to operate in a fully reformed environment. Literature we reviewed indicated that while self-assessments are valuable and important, they are limited to the views of those being surveyed, and a governing board should occasionally solicit the viewpoints of stakeholders and others close to the governance process to continuously search for ways to improve. Another approach to performance assessment is bringing in outside consultants to interview board members and present results. Current practices include appointing an independent task force or commission to interview board officers and others in an institution every few years concerning the board’s strengths and weaknesses, and what the board could do to empower staff and advisory boards to fulfill their responsibilities more effectively. 
One governance expert we interviewed commented that boards undergoing governance reform should undertake a thorough, independent, postgovernance reform review to evaluate the effectiveness of governance changes. Although most of the organizations we spoke with that have recently undergone governance reform have not yet performed a comprehensive, postgovernance reform assessment, American University officials told us that the university is planning to conduct a review of its governance reform in a 5-year period to assess whether its governance practices and structure are effective. Given the extent and pace of changes occurring at the Smithsonian, as well as other transformations of the institution, including the hiring of a new Secretary and significant reforms to SBV, among other things, it is likely that many governance reforms will not be completely implemented for some time, and the effectiveness of some reforms will only be evident over a longer period of implementation. The Smithsonian Institution relies significantly on funding from taxpayers and donors, and as such, effective governance and accountability are key to maintaining trust and credibility. Governance and accountability breakdowns can result in a lack of trust from donors, grantors, and appropriators, which could ultimately put funding and the organization’s credibility at risk. The Smithsonian’s Board of Regents has taken many steps to implement governance reforms since June 2007 to address several of the problems that led to the incidents that caused concern about governance and accountability. The Board has reformed executive compensation and benefits, initiated reviews of the Smithsonian’s policies and internal controls, enhanced the access of key stakeholders to the Board, increased transparency, and clarified the roles and responsibilities of regents. We acknowledge and recognize the efforts that have been made by the Board and Smithsonian staff in confronting these governance issues. However, we also note that there are some areas where reforms have yet to be developed, or where improvements to the transparency and accountability mechanisms of the Board could be taken further. For example, reforms related to enhancing the Board’s relationships with important stakeholders, including museum advisory boards and the public, remain under consideration, and current efforts have yet to fully develop mechanisms by which the Board can receive and consider unfiltered, needed information from these stakeholders on a regular basis. Without such mechanisms, these efforts may not have the desired impact of creating an environment for governance that is inclusive of the broad diversity of activities and viewpoints of stakeholders within and outside of the Smithsonian. Furthermore, while the Board is still considering how it can best recruit and use nonregent experts—potentially using advisory board members—there is currently little transparency as to how nonregent experts on committees are to be selected, used, or evaluated. Moreover, while the Board has made strides in defining and refining the roles and responsibilities of regents, it remains unclear what actions the Board can take to hold its regents and their liaisons accountable in the event that they do not fulfill their roles and expectations. 
Finally, while the Board and the Smithsonian as a whole are currently focused on extensive governance reforms, and the Board is tracking the implementation of these reforms through a scorecard that is posted on the Smithsonian’s Web site and updated monthly, no mechanism is planned to evaluate the implementation and effectiveness of these reforms after enough time has passed for the Smithsonian to operate in a fully reformed environment. Over the long term, focusing on regular and continuous improvements in these areas could help to enhance the credibility of and the public’s trust in the Board, as well as its current governance reform efforts. We are making the following four recommendations to the Board of Regents to strengthen its governance reform efforts: In implementing the Governance Committee’s recommendations related to stakeholder relationships, the Board of Regents should develop a regularly occurring mechanism that ensures an understanding of and meaningful consideration by the Board of the key concerns of advisory boards and other stakeholders, and a formal means by which the Board follows up on those concerns. To provide more transparency into the use of nonregents, the Board should clarify and make public the process used to select nonregents for service on its committees, the roles and expectations for nonregents, and how nonregents’ performance will be evaluated. To improve the accountability of regents in fulfilling their newly clarified roles and responsibilities, the Board should evaluate what actions it can take in the event of persistent neglect of duties by any regents or their liaisons. To ensure that the multiple governance and management reform efforts underway are effective in addressing the issues that led to governance and accountability breakdowns, and also to ensure that the Board is focused on continuous improvement of the governance and management practices at the Smithsonian, the Board should plan to have an evaluation of its comprehensive governance reform efforts conducted after a suitable period of operation in the reformed environment. We provided a draft of this report to the Smithsonian Board of Regents and the Smithsonian Institution for their official review and comment. Both the Board and the Smithsonian concurred with all our recommendations. In addition, they provided technical clarifications to the draft which we incorporated into the final report, where appropriate. Regarding our recommendations, both the Board and the Smithsonian stressed the importance of enhancing relationships with stakeholders and agreed that these relationships and related communication can be improved. The Board and the Smithsonian underscored that they are working together to enhance these relationships and communication— particularly with advisory boards. In addition, the Smithsonian provided examples of recent efforts to enhance communication with stakeholders. Although all recommendations are directed at the Board, the Smithsonian noted its role in helping the Board implement the recommendations related to improving stakeholder relationships and ensuring that sound governance is a priority in Smithsonian operations. The Board highlighted the importance of making the selection process for nonregents transparent and holding all regents accountable for their performance. The Board’s comments to our report can be found in appendix III and the Smithsonian’s comments to our report can be found in appendix IV. 
We are sending copies of this report to the appropriate congressional committee. We are also sending this report to the Chair of the Smithsonian Institution’s Board of Regents and the Acting Secretary of the Smithsonian Institution. We will make copies available upon request. In addition, this report will be available at no cost on the GAO Web site at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staff have any questions about this report, you may contact me at (202) 512-2834 or at [email protected]. Major contributors to this report are listed in appendix V. To determine the extent to which the Smithsonian Institution’s Board of Regents’ (Board) actions have addressed issues related to executive compensation, travel, and internal control and other policies, we interviewed senior Smithsonian Institution (Smithsonian) management officials and regents and reviewed Smithsonian and other documents. Specifically, we met with regents, the Acting Secretary, the Chief Financial Officer, Director of Government Relations, Chief of Staff to the Board of Regents, Director of Human Resources, and other senior officials and staff to obtain the implementation status of the recommendations relating to these areas. We also reviewed Smithsonian and other supporting documents, including the report from the Smithsonian Business Ventures (SBV) Task Force, the PricewaterhouseCoopers unified executive compensation study, and the Mercer Human Resource Consulting executive compensation study that was used in the search process for the new Secretary. We reviewed the Smithsonian worksheets used to create leave balances for all executives transitioning to a leave policy and National Finance Center records to validate that all these executives are now recording leave. We reviewed internal Smithsonian documents describing ongoing efforts related to reviewing travel and event expenses, contracting policies, SBV’s policies, and its internal control review, and we reviewed Smithsonian directives relevant to each of these areas. We met with the Inspector General to discuss ongoing efforts of this office to review the travel expenses of Smithsonian executives and reviewed Inspector General reports in this area. Furthermore, we reviewed GAO’s prior work on these issues, including travel expenses, contracting, and internal controls, and we reviewed the policies of the comparable organizations described below in relation to these activities. To determine whether the Board has addressed issues related to improving information available to the Board, and transparency and communication concerning the Board and Smithsonian operations, we reviewed Smithsonian laws, policy directives, and other documentation. We interviewed senior Smithsonian management officials, including three undersecretaries or acting undersecretaries, the directors of all of the Smithsonian’s museums, including the Director of the National Zoological Park, and one research center director, to discuss issues related to their understanding of Board functions and their level of communication with the Board. To assess how the Board has improved transparency, we reviewed the Board’s Web page and policies and procedures related to providing information to the public, including its Freedom of Information Act policy, and compared that policy to the Act. 
We interviewed key Smithsonian officials, including the Chief of Staff to the Board of Regents, Acting Secretary, the Director of External Affairs, and the Director of Communications and Public Affairs to learn the status of the Board’s plans to improve transparency. We also met with the General Counsel, the Chief Financial Officer, and the Inspector General to assess the level of communication and access between the Board and key Smithsonian officials, before and after governance changes. We reviewed the Board’s meeting minutes and supporting documents, including revised bylaws and position descriptions to determine the extent to which information regarding the Board’s decisions was made available to the public. We met with chairs of selected advisory boards to obtain their perspective of the Board’s interaction and communication with them. We interviewed 10 science, arts, history and culture, and research-oriented advisory boards to obtain a nongeneralizable representation. To assess the Board’s efforts at improving communication with other stakeholders, we reviewed GAO’s work on federal agency communication strategies. To determine the extent to which the Board has addressed issues related to roles and responsibilities, size and structure, and performance of the Board, we reviewed Smithsonian and other documents to assess current board practices and the Board’s progress towards implementing governance reform efforts. To examine the roles and responsibilities of the Board, we looked at relevant laws pertaining to the creation and operation of the Smithsonian Board of Regents and reviewed Smithsonian documents, including Board bylaws and written set of the Board’s and its committees’ duties and responsibilities. To assess the size, composition, and structure of the Board, we reviewed a recent BoardSource report that was commissioned by the Board to examine possible options for the Board’s size and structure. To assess the extent the Board has addressed issues related to its performance assessment, we examined relevant Smithsonian documents, including a draft board assessment instrument, compiled current governance practices from several sources, and interviewed key Smithsonian officials. We interviewed all of the current citizen regents, except the most recently appointed regent, three former citizen regents, and four congressional regents and the primary staff liaisons to the remaining two congressional regents. We also obtained a written response from the Chief Justice to questions we provided. We interviewed senior Smithsonian officials, including the Chief of Staff to the Board of Regents and General Counsel to obtain their perspective and information on the status of the governance reform efforts. To identify current nonprofit governance practices, we reviewed literature on corporate and nonprofit governance, including literature from organizations such as the American Association of Museums, BoardSource, Council on Foundations, Independent Sector, the Museum Trustee Association, and The Conference Board. We also reviewed GAO’s work on governance of several organizations, including the John F. Kennedy Center for the Performing Arts, Legal Services Corporation, and Federal Deposit Insurance Corporation. We identified and interviewed governance experts on nonprofit governance, including academics, to obtain independent views on the Smithsonian’s governance problems and whether recent governance changes will address those problems. 
The governance experts we interviewed included four governance or museum experts who advised the Smithsonian during its governance review, as well as six whom we identified through a literature search or who were referred to us by other experts in the field. To perform all of our work on all of the above objectives, we also met with officials from several institutions that had some similarity to the Smithsonian and that had recently undergone governance reforms. We focused on organizations that had had similar governance problems, conducted a governance review, and changed their practices or structure; organizations that had a structure that consisted of a central or national governing body with multiple programming units; and organizations with similar missions and stewardship challenges. We met with officials from American University, American National Red Cross, J. Paul Getty Trust, and United Way of America, who had in-depth knowledge and contributed to governance reform efforts at the respective organizations. We also met with officials from the National Trust for Historic Preservation, an organization that, according to officials, initiated governance reform without having experienced similar governance challenges. We reviewed recently enacted legislation relating to governance for one of the organizations. While we did not evaluate the effectiveness of the governance reform efforts at these comparable organizations, we reviewed and analyzed documents from these organizations relevant to their governance reform efforts. We conducted this performance audit from May 2007 to May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO’s Assessment of the Status of the Governance Committee’s Recommendations (as of May 2008). Status categories: Completed (the recommendation has been implemented); Under study (implementation of the recommendation is under further study); Steps taken (steps have been taken to implement the recommendation, but more work is needed). We confirmed that the executive compensation process that was followed to set the salary range for the next Secretary’s compensation package was refined to address many of the IRC’s concerns about the previous compensation process, but we did not validate the extent to which this process follows best practices. In addition, Andrew Von Ah, Assistant Director; Shane Bechtold; Seth Dykes; Delwen Jones; Jennifer Kim; Margaret McDavid; Kim McGatlin; Susan Michal-Smith; Amanda Miller; Sara Ann Moessbauer; Stanley Stenersen; and Alwynne Wilbur made key contributions to this report.
The Smithsonian Institution's governing body, the Board of Regents (Board), has developed a set of actions to address governance and accountability breakdowns that came to light in 2007. These actions were aimed at problems in three main areas: (1) executive compensation, benefits, ethics, and operational policies and controls; (2) flow of information to the Board, transparency of Board operations, and relationship with stakeholders; and (3) the Board's responsibilities, structure, and performance. GAO was asked to assess the extent of the Board's progress in each of these areas. GAO obtained information and data from the Board's regents, Smithsonian executives, and other stakeholders and also analyzed other organizations whose boards had faced similar governance challenges. The Board has implemented several reforms related to executive compensation and benefits, but the development of policies for broader operational matters such as travel, event expenses, and contracting is still under way. Actions implemented include a revised salary range for the Smithsonian's Secretary-elect and a unified compensation policy for other executives. The Smithsonian is reviewing policies related to travel and other matters, including internal controls. Effectively implementing the new policies and procedures developed during these reviews is likely to depend on effectively training staff and establishing accountability, both of which may be challenging due to a level of standardization and requirements that did not exist before. The Board has completed the actions it proposed for improving its access to information and making its operations more transparent, but actions to improve communication and relationships with stakeholders are less far along. The Board now has avenues for obtaining information directly from senior officials rather than through the Office of the Secretary, and it has taken such steps as creating a Web page to better publicize its operations and decisions. The Board is studying how to improve links with its 30 advisory boards and has developed an overall strategy for communicating with the larger network of stakeholders, but neither action is far enough along to assess its potential for addressing past problems. The Board has largely completed the actions it proposed for clarifying regents' responsibilities and studying possible changes in its size and structure, but actions for assessing Board performance are still being developed. The Board altered its committee structure but decided that more fundamental changes in its size and composition were unnecessary. To provide further expertise where necessary under the existing structure, the Board is encouraging the addition of nonregents to committees. Thus far, however, the Board has not developed a process for assuring transparency and accountability in selecting nonregents and using them to enhance governance. The Board is also developing a self-assessment process, but it remains to be seen how the Board will hold regents accountable if they neglect their duties. Given the extensiveness of actions taken and still under way, it is likely that the effectiveness of some changes will only be evident over a longer time. The Board does not currently have plans to conduct a broader evaluation of its governance reform actions after such time has passed to determine if the actions taken have addressed governance and accountability problems which led to its reform actions.
Both government and private entities increasingly depend on computerized information systems to carry out operations and to process, maintain, and report essential information. Public and private organizations rely on computer systems to transmit sensitive and proprietary information, develop and maintain intellectual capital, conduct operations, process business transactions, transfer funds, and deliver services. In addition, the Internet serves as a medium for hundreds of billions of dollars of commerce each year. Cyberspace—where much business activity and the development of new ideas often take place—amplifies potential threats by making it possible for malicious actors to quickly steal and transfer massive quantities of data while remaining anonymous and difficult to detect. Threat actors may target businesses, among other targets, resulting in the compromise of proprietary information or intellectual property. In addition, the rapid growth of Internet use has significantly contributed to the development of technologies that enable the unauthorized distribution of copyrighted works and is widely recognized as leading to an increase in piracy. Digital products are not physical or tangible, can be reproduced at very low cost, and have the potential for immediate delivery through the Internet across virtually unlimited geographic markets. Sectors facing threats from digital piracy include the music, motion picture, television, publishing, and software industries. Piracy of these products over the Internet can occur through methods including peer-to-peer networks, streaming sites, and one-click hosting services. As we reported in April 2010, IP is an important component of the U.S. economy, and IP-related industries pay higher wages and contribute a significant percentage to the U.S. gross domestic product. However, the U.S. economy as a whole may grow at a slower pace than it otherwise would because of counterfeiting and piracy’s effect on U.S. industries, government, and consumers. The importance of patents and other mechanisms to enable inventors to capture some of the benefits of their innovations has long been recognized in the United States as a tool to encourage innovation, dating back to Article 1 of the U.S. Constitution and the 1790 patent law. Ensuring the protection of IP rights encourages the introduction of innovative products and creative works to the public. Protection is granted by guaranteeing proprietors limited exclusive rights to whatever economic reward the market may provide for their creations and products. As we reported in April 2010, intellectual property is an important component of the U.S. economy, and the United States is an acknowledged global leader in the creation of intellectual property. According to the United States Trade Representative, “Americans are the world’s leading innovators, and our ideas and intellectual property are a key ingredient to our competitiveness and prosperity.” The United States has generally been very active in advocating strong IP protection and encouraging other nations to improve these systems for two key reasons. First, the U.S. has been the source of a large share of technological improvements for many years and, therefore, stands to lose if the associated IP rights are not respected in other nations. Second, a prominent economist noted that IP protection appears to be one of the factors that has helped to generate the enormous growth in the world economy and in the standard of living that has occurred in the last 150 years. 
This economist pointed out that the last two centuries have created an unprecedented surge in growth compared to prior periods. Among the factors attributed to creating the conditions for this explosion in economic growth are the rule of law, including property rights and the enforceability of contracts. The U.S. economy as a whole may grow at a slower pace than it otherwise would because of counterfeiting and piracy’s effect on U.S. industries, government, and consumers. As we reported in April 2010, according to officials we interviewed and a 2008 OECD study, to the extent that companies experience a loss of revenues or incentives to invest in research and development for new products, slower economic growth could occur. IP-related industries play an important role in the growth of the U.S. economy and contribute a significant percentage to the U.S. gross domestic product. IP-related industries also pay significantly higher wages than other industries and contribute to a higher standard of living in the United States. To the extent that counterfeiting and piracy reduce investments in research and development, these companies may hire fewer workers and may contribute less to U.S. economic growth, overall. The U.S. economy may also experience slower growth due to a decline in trade with countries where widespread counterfeiting hinders the activities of U.S. companies operating overseas. The U.S. economy, as a whole, also may experience effects of losses by consumers and government. An economy’s gross domestic product could be measured as either the total expenditures by households (consumers), or as the total wages paid by the private sector (industry). Hence, the effect of counterfeiting and piracy on industry would affect consumers by reducing their wages, which could reduce consumption of goods and services and the gross domestic product. Finally, the government is also affected by the reduction of economic activity, since fewer taxes are collected. In addition to the U.S. economy-wide effects, as we reported in April 2010, counterfeit or pirated products that act as substitutes for genuine goods can have a wide range of negative effects on industries, according to experts we spoke with and literature we reviewed. These sources further noted that the economic effects vary widely among industries and among companies within an industry. The most commonly identified effect cited was lost sales, which leads to decreased revenues and/or market share. Lost revenues can also occur when lower-priced counterfeit and pirated goods pressure producers or IP owners to reduce prices of genuine goods. In some industries, such as the audiovisual sector, marketing strategies must be adjusted to minimize the impact of counterfeiting on lost revenues. Movie studios that use time-related marketing strategies— introducing different formats of a movie after certain periods of time— have reduced the time periods or “windows” for each format as a countermeasure, reducing the overall revenue acquired in each window. Experts stated that companies may also experience losses due to the dilution of brand value or damage to reputation and public image, as counterfeiting and piracy may reduce consumers’ confidence in the brand’s quality. Companies are affected in additional ways. For example, to avoid losing sales and liability issues, companies may increase spending on IP protection efforts. 
In addition, experts we spoke with stated that companies could experience a decline in innovation and production of new goods if counterfeiting leads to reductions in corporate investments in research and development. Another variation in the nature of the effects of counterfeiting and piracy is that some effects are experienced immediately, while others are more long-term, according to the OECD. The OECD’s 2008 report cited loss of sales volume and lower prices as short-term effects, while the medium- and long-term effects include loss of brand value and reputation, lost investment, increased costs of countermeasures, potentially reduced scope of operations, and reduced innovation. Finally, one expert emphasized to us that the loss of IP rights is much more important than the loss of revenue. He stated that the danger for the United States is in the accelerated “learning effects”— companies learn how to produce and will improve upon patented goods. They will no longer need to illegally copy a given brand—they will create their own aftermarket product. He suggested that companies should work to ensure their competitive advantage in the future by inhibiting undesired knowledge transfer. In addition, private sector organizations have experienced a wide range of incidents involving data loss or theft, economic loss, computer intrusions, and privacy breaches, underscoring the need for improved security practices. The following examples from news media and other public sources illustrate types of cyber crimes. In February 2011, media reports stated that computer hackers had broken into and stolen proprietary information worth millions of dollars from the networks of six U.S. and European energy companies. In mid-2009 a research chemist with DuPont Corporation reportedly downloaded proprietary information to a personal e- mail account and thumb drive with the intention of transferring this information to Peking University in China and also sought Chinese government funding to commercialize research related to the information he had stolen. Between 2008 and 2009, a chemist with Valspar Corporation reportedly used access to an internal computer network to download secret formulas for paints and coatings, reportedly intending to take this proprietary information to a new job with a paint company in Shanghai, China. In December 2006, a product engineer with Ford Motor Company reportedly copied approximately 4,000 Ford documents onto an external hard drive in order to acquire a job with a Chinese automotive company. Generally, as we reported in April 2010, the illicit nature of counterfeiting and piracy makes estimating the economic impact of IP infringements extremely difficult, so assumptions must be used to offset the lack of data. Efforts to estimate losses involve assumptions such as the rate at which consumers would substitute counterfeit for legitimate products, which can have enormous impacts on the resulting estimates. Because of the significant differences in types of counterfeited and pirated goods and industries involved, no single method can be used to develop estimates. Each method has limitations, and most experts observed that it is difficult, if not impossible, to quantify the economy-wide impacts. Nonetheless, research in specific industries suggests that the problem is sizeable. As we reported in April 2010, quantifying the economic impact of counterfeit and pirated goods on the U.S. 
economy is challenging primarily because of the lack of available data on the extent and value of counterfeit trade. Counterfeiting and piracy are illicit activities, which makes data on them inherently difficult to obtain. In discussing their own effort to develop a global estimate on the scale of counterfeit trade, OECD officials told us that obtaining reliable data is the most important and difficult part of any attempt to quantify the economic impact of counterfeiting and piracy. OECD’s 2008 report stated that available information on the scope and magnitude of counterfeiting and piracy provides only a crude indication of how widespread they may be, and that neither governments nor industry were able to provide solid assessments of their respective situations. The report stated that one of the key problems is that data have not been systematically collected or evaluated and, in many cases, assessments “rely excessively on fragmentary and anecdotal information; where data are lacking, unsubstantiated opinions are often treated as facts.” Because of the lack of data on illicit trade, methods for calculating estimates of economic losses must involve certain assumptions, and the resulting economic loss estimates are highly sensitive to the assumptions used. Two experts told us that the selection and weighting of these assumptions and variables are critical to the results of counterfeit estimates, and the assumptions should, therefore, be identified and evaluated. Transparency in how these estimates are developed is essential for assessing the usefulness of an estimate. However, according to experts and government officials, industry associations do not always disclose their proprietary data sources and methods, making it difficult to verify their estimates. Industries collect this information to address counterfeiting problems associated with their products and may be reluctant to discuss instances of counterfeiting because consumers might lose confidence. OECD officials, for example, told us that one reason some industry representatives were hesitant to participate in their study was that they did not want information to be widely released about the scale of the counterfeiting problem in their sectors. As we reported in April 2010, there is no single methodology to collect and analyze data that can be applied across industries to estimate the effects of counterfeiting and piracy on the U.S. economy or industry sectors. The nature of data collection, the substitution rate, value of goods, and level of deception are not the same across industries. Due to these challenges and the lack of data, researchers have developed different methodologies. In addition, some experts we interviewed noted the methodological and data challenges they face when the nature of the problem has changed substantially over time. Some commented that they have not updated earlier estimates or were required to change methodologies for these reasons. A commonly used method to collect and analyze data, based on our literature review and interviews with experts, is the use of economic multipliers to estimate effects on the U.S. economy. Economic multipliers show how capital changes in one industry affect output and employment of associated industries. Commerce’s Bureau of Economic Analysis makes regional multipliers available through its Regional Input-Output Modeling System (RIMS II). These multipliers estimate the extent to which a one-time or sustained change in economic activity will be attributed to specific industries in a region. Multipliers can provide an illustration of the possible “induced” effects from a one-time change in final demand. For example, if a new facility is to be created with a determined investment amount, one can estimate how many new jobs can be created, as well as the benefit to the region in terms of output (e.g., extra construction, manufacturing, supplies, and other products needed). It must be noted that RIMS II multipliers assume no job immigration or substitution effect. That is, if new jobs are created as a result of investing more capital, those jobs would not be filled by the labor force from another industry. 
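To make the arithmetic behind this approach concrete, the sketch below is a minimal, purely illustrative example; the substitution rate, price, output multiplier, and employment figures are invented for clarity and are not actual RIMS II values, industry data, or figures from our reports. It shows how the assumed substitution rate drives the direct lost-sales estimate and how a final-demand multiplier of the kind described above would scale that direct loss into a broader output and employment effect.

```python
# Illustrative sketch only: every number below is hypothetical and is NOT an
# actual RIMS II multiplier or industry statistic. The sketch shows (1) how the
# assumed substitution rate drives a lost-sales estimate and (2) how a final-
# demand multiplier would translate that direct loss into a broader regional
# output and employment effect.

def lost_sales(pirated_units, substitution_rate, retail_price):
    """Direct lost sales: only the share of pirated units that consumers would
    otherwise have bought at full price counts as displaced revenue."""
    return pirated_units * substitution_rate * retail_price

def regional_effects(change_in_final_demand, output_multiplier, jobs_per_million):
    """Apply RIMS II-style final-demand multipliers to a one-time change in
    final demand (dollars) to estimate total output and employment effects."""
    total_output_effect = change_in_final_demand * output_multiplier
    employment_effect = (change_in_final_demand / 1_000_000) * jobs_per_million
    return total_output_effect, employment_effect

# Hypothetical inputs for a single product line in one region.
pirated_units = 500_000        # estimated pirated copies (hypothetical)
retail_price = 20.0            # retail price of the genuine good (hypothetical)
output_multiplier = 1.8        # hypothetical final-demand output multiplier
jobs_per_million = 9.0         # hypothetical jobs per $1 million of final demand

# The substitution-rate assumption dominates the result: the same piracy volume
# yields a fourfold difference in estimated losses across these assumptions.
for rate in (0.10, 0.20, 0.40):
    direct = lost_sales(pirated_units, rate, retail_price)
    total_output, jobs = regional_effects(direct, output_multiplier, jobs_per_million)
    print(f"substitution rate {rate:.0%}: direct loss ${direct:,.0f}, "
          f"implied total output effect ${total_output:,.0f}, ~{jobs:,.0f} jobs")
```

Because every downstream figure scales linearly with the substitution-rate assumption, moving that assumption from 10 percent to 40 percent quadruples the estimated losses in this sketch, which illustrates why experts stress that such assumptions must be identified and evaluated.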
Most of the experts we interviewed were reluctant to use economic multipliers to calculate losses from counterfeiting because this methodology was developed to look at a one-time change in output and employment. Nonetheless, the use of this methodology corroborates that the effect of counterfeiting and piracy goes beyond the infringed industry. For example, when pirated movies are sold, it damages not only the motion picture industry but also all other industries linked to those sales. While experts and literature we reviewed in our April 2010 report provided different examples of effects on the U.S. economy, most observed that despite significant efforts, it is difficult, if not impossible, to quantify the net effect of counterfeiting and piracy on the economy as a whole. For example, the 2008 OECD study attempted to develop an estimate of the economic impact of counterfeiting and concluded that an acceptable overall estimate of counterfeit goods could not be developed. OECD further stated that information that can be obtained, such as data on enforcement and information developed through surveys, “has significant limitations, however, and falls far short of what is needed to develop a robust overall estimate.” Nonetheless, the studies and experts we spoke with suggested that counterfeiting and piracy is a sizeable problem, which affects consumer behavior and firms’ incentives to innovate. Chairman Murphy, Ranking Member DeGette, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff have any questions about this testimony, please contact me at 202-512-3763 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony include Christine Broderick, Assistant Director; Pedro Almoguera; Karen Deans; and Rachel Girshick. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The United States is an acknowledged global leader in the creation of intellectual property. According to the Federal Bureau of Investigation, IP theft is a growing threat which is heightened by the rise of the use of digital technologies. IP is any innovation, commercial or artistic, or any unique name, symbol, logo, or design used commercially. IP rights protect the economic interests of the creators of these works by giving them property rights over their creations. Cyber attacks are one way that threat actors--whether nations, companies, or criminals--can target IP and other sensitive information of federal agencies and American businesses. While bringing significant benefits, increasing computer interconnectivity can create vulnerabilities to cyber-based threats. GAO was asked to testify on efforts to estimate the economic impacts of theft of intellectual property. Accordingly, this statement discusses (1) the economic significance of intellectual property protection and theft on the U.S. economy and (2) insights from efforts to quantify the economic impacts of counterfeiting and piracy on the U.S. economy. This statement is based on products GAO issued from April 2010 through June 2012 on the economic impacts of theft of intellectual property and on cyber threats and economic espionage. In April 2010, GAO reported that intellectual property (IP) is an important component of the U.S. economy and IP-related industries contribute a significant percentage to the U.S. gross domestic product. IP-related industries also pay significantly higher wages than other industries and contribute to a higher standard of living in the United States. Ensuring the protection of IP rights encourages the introduction of innovative products and creative works to the public. According to experts and literature GAO reviewed, counterfeiting and piracy have produced a wide range of effects on consumers, industry, government, and the economy as a whole. The U.S. economy as a whole may grow more slowly because of reduced innovation and loss of trade revenue. To the extent that counterfeiting and piracy reduce investments in research and development, companies may hire fewer workers and may contribute less to U.S. economic growth, overall. Furthermore, as GAO reported in June 2012, private sector organizations have experienced data loss or theft, economic loss, computer intrusions, and privacy breaches. For example, in February 2011, media reports stated that computer hackers had broken into and stolen proprietary information worth millions of dollars from the networks of six U.S. and European energy companies. Generally, as GAO reported in April 2010, the illicit nature of counterfeiting and piracy makes estimating the economic impact of IP infringements extremely difficult. Nonetheless, research in specific industries suggests that the problem is sizeable, which is of particular concern as many U.S. industries are leaders in the creation of intellectual property. Because of the difficulty in estimating the economic impact of IP infringements, assumptions must be used to offset the lack of data. Efforts to estimate losses involve assumptions such as the rate at which consumers would substitute counterfeit for legitimate products, which can have enormous impacts on the resulting estimates. Because of the significant differences in types of counterfeited and pirated goods and industries involved, no single method can be used to develop estimates. 
Each method has limitations, and most experts observed that it is difficult, if not impossible, to quantify the economy-wide impacts. GAO is not making any new recommendations in this statement.
Congress has given the president the authority to issue executive orders designating and terminating areas as combat zones. Since 1950, U.S. presidents have designated combat zones in Korea, Vietnam, the Persian Gulf area, the Kosovo area, and Afghanistan. U.S. presidents terminated combat zone designations for Korea in 1955 and Vietnam in 1996. See appendix II for information on the current combat zones. In addition, Congress has established qualified hazardous duty areas where servicemembers serving in such areas receive the same tax treatment as members serving in presidentially designated combat zones. Congress has specified in this legislation, however, that a servicemember on duty in a qualified hazardous duty area is only entitled to combat zone tax relief benefits if that member would also be entitled to hostile fire or imminent danger pay. Congress has designated seven qualified hazardous duty areas since 1995 (see app. II). DOD is responsible for prescribing regulations for imminent danger pay and for certifying locations where military service is in direct support of combat zone operations. Servicemembers who perform duty in certain foreign areas may be eligible for imminent danger pay as well as income tax relief, in the form of combat zone tax relief benefits, as summarized in table 1. See appendixes I, II, and III for information about the locations designated for imminent danger pay or combat zone tax relief benefits. The Under Secretary of Defense (Comptroller) in coordination with the director of the Defense Finance and Accounting Service is responsible for maintaining and updating the Financial Management Regulation, Volume 7A, that provides financial management policy and procedures for DOD, including imminent danger pay and combat zone tax relief benefits. In DOD’s imminent danger pay guidance, the Under Secretary of Defense (Personnel and Readiness) requires unified combatant commanders to continually appraise the conditions within areas designated for imminent danger pay to ensure that the designation is warranted, and to forward written recommendations, at least annually, to the Chairman, Joint Chiefs of Staff, who evaluates these designations to determine whether conditions in these areas continue to present the threat of physical harm or imminent danger from civil war, civil insurrection, terrorism, or wartime conditions. The Chairman forwards area designations that he recommends for approval to the Under Secretary of Defense (Personnel and Readiness) with support for these recommendations. The Secretary of Defense has delegated the responsibility for designating imminent danger pay areas and certifying military service as in direct support of combat zone operations to the Under Secretary of Defense (Personnel and Readiness) who is responsible for DOD personnel policy, including oversight of military compensation, and as such, serves as the focal point for imminent danger pay. DOD’s imminent danger pay guidance requires the unified combatant commanders to continuously appraise conditions within designated areas to ensure that these areas continue to present the threat of physical harm or imminent danger from civil insurrection, civil war, terrorism, or wartime conditions, and to submit their recommendations to continue or terminate area designations, at least annually, to the Chairman, Joint Chiefs of Staff, for review. 
At any time, unified combatant commanders may recommend that additional areas be designated for imminent danger pay or that existing area designations be terminated. The Chairman, in turn, evaluates designation recommendations and, if the Chairman recommends them for approval, forwards the unified combatant commanders' recommendations to the Under Secretary of Defense (Personnel and Readiness). In practice, however, officials in the Office of the Under Secretary of Defense (Personnel and Readiness) have assumed responsibility for initiating and directly managing these reviews.
DOD's processes for reviewing existing imminent danger pay areas and combat zones can be improved. While combatant commanders have taken the initiative periodically to make recommendations to designate or terminate imminent danger pay areas, DOD has not conducted annual reviews of existing imminent danger pay areas in accordance with its guidance to ensure that conditions in these areas continue to warrant such designations. Also, DOD has not updated its guidance to reflect that the Office of the Under Secretary of Defense (Personnel and Readiness) has assumed responsibility for initiating and managing annual reviews. In addition, DOD has not incorporated into its guidance the factors contained in questionnaires completed by combatant commanders that the Office of the Under Secretary of Defense (Personnel and Readiness) uses to evaluate when conditions in foreign areas pose the threat of physical harm or imminent danger to servicemembers performing duty in designated locations. By conducting annual reviews of existing designations in accordance with its guidance, DOD could strengthen its oversight of imminent danger pay designations to ensure that conditions in designated areas continue to pose the threat of physical harm or imminent danger to servicemembers and that these areas should continue to be designated. Furthermore, updating DOD's guidance to include the current processes and factors used for reviewing and designating imminent danger pay areas would clarify responsibilities and establish standard criteria for use in meeting the program's objectives.
Imminent danger pay was first authorized in October 1983. Between 1992—when DOD revised its guidance to require annual reviews, as opposed to semiannual reviews, of imminent danger pay area designations—and 2006, the number of designated areas increased from 34 to 54. (See appendix I for a list of areas currently designated for imminent danger pay.) While DOD guidance requires the combatant commanders to continuously appraise conditions in designated areas and to forward recommendations to the Chairman, Joint Chiefs of Staff, to continue or terminate these designations at least annually, in practice, the Office of the Under Secretary of Defense has assumed responsibility for initiating and managing these reviews. The combatant commanders have not forwarded their recommendations on existing designations to the Chairman, Joint Chiefs of Staff, at least annually, nor has the Under Secretary of Defense (Personnel and Readiness) requested the combatant commanders to do so. Between 1992 and 2006, the Under Secretary of Defense (Personnel and Readiness) requested the unified combatant commanders to conduct six reviews of designated imminent danger pay areas. Reviews were conducted in 1992, 1993, 1995, 1997, and 2002; the 2006 review is ongoing, and officials in the Office of the Under Secretary of Defense (Personnel and Readiness) expect to complete it by October.
In the absence of annual reviews, the combatant commanders have taken the initiative to periodically recommend the designation or termination of imminent danger pay areas. According to DOD officials, the Under Secretary of Defense (Personnel and Readiness) deferred several reviews because of anticipated policy revisions and concerns about creating additional work for combatant commanders while they are engaged in ongoing operations. For instance, DOD deferred the 1998 review pending the anticipated revision of its imminent danger pay guidance to incorporate criteria for determining whether conditions in designated areas continue to pose the threat of physical harm or imminent danger from civil war, civil insurrection, terrorism, or wartime conditions to servicemembers. However, this revision did not occur. The department initiated the ongoing 2006 review shortly after we began our review. According to officials from the Office of the Under Secretary of Defense (Personnel and Readiness), the department deferred reviews between 2002 and 2006 because it did not want to create additional work for combatant commanders during ongoing operations, and department officials also believe that conditions in most designated areas had not changed sufficiently since the terrorist attacks of September 11, 2001, and the start of the Global War on Terrorism to result in termination of designated areas. Although DOD guidance calls for the combatant commanders to forward recommendations to continue or to terminate imminent danger pay areas at least annually, this has occurred at longer intervals ranging from 2 to 5 years. As a result of combatant commanders' recommendations as well as reviews of imminent danger pay areas conducted by the combatant commanders at the request of the Under Secretary of Defense (Personnel and Readiness), the department has terminated area designations where conditions no longer posed the threat of physical harm or imminent danger from civil war, civil insurrection, terrorism, or wartime conditions. For instance, as a result of the 1993 review, DOD terminated the imminent danger pay designation for areas including Oman, the United Arab Emirates, Bahrain, Qatar, and the Gulfs of Oman and Aden as well as the Arabian Sea because the department determined that conditions no longer posed the threat of physical harm or imminent danger to servicemembers. The department has also denied requests from unified combatant commanders to designate new areas for imminent danger pay when there was not sufficient information to demonstrate that conditions pose a threat of physical harm or imminent danger to servicemembers on duty in the location, based on criteria used in a questionnaire and accompanying threat assessments. By conducting annual reviews of existing designations in accordance with its guidance, DOD could strengthen its oversight of imminent danger pay designations to ensure that conditions in designated areas continue to pose the threat of physical harm or imminent danger to servicemembers and that these areas should continue to be designated. DOD has not updated its imminent danger pay guidance to (1) reflect the responsibility of the Office of the Under Secretary of Defense (Personnel and Readiness) for initiating and managing annual reviews of imminent danger pay areas or (2) incorporate factors that DOD uses to evaluate and define when servicemembers face imminent danger.
Updating guidance to include the current processes and factors used for reviewing and designating imminent danger pay areas would clarify responsibilities and establish standard criteria for use in meeting the objectives of the imminent danger pay program. First, while DOD's imminent danger pay area guidance requires the unified combatant commanders and Joint Chiefs of Staff to conduct reviews of areas designated for imminent danger pay at least annually, in practice, the Under Secretary of Defense (Personnel and Readiness) has directly managed these reviews since the mid-1990s. However, as previously discussed, the combatant commanders have not forwarded recommendations on existing designations to the Chairman, Joint Chiefs of Staff, on an annual basis in accordance with DOD's guidance, nor has the Under Secretary of Defense (Personnel and Readiness) requested them to do so. In addition, DOD has not updated its imminent danger pay guidance to show that the Under Secretary of Defense (Personnel and Readiness) has been assigned the functions, relationships, and authorities previously assigned to the position of the Assistant Secretary of Defense (Force Management and Personnel). Second, the Federal Managers' Financial Integrity Act of 1982 states that agencies must establish internal administrative controls in accordance with the standards prescribed by the Comptroller General. The Comptroller General published these standards in Standards for Internal Control in the Federal Government, which sets out management control standards for all aspects of an agency's operations. These standards are intended to provide reasonable assurance of meeting agency objectives. Two of the standards of internal control—risk assessment and control activities—state that an agency should establish clear objectives as well as appropriate policies, procedures, and plans with respect to its activities to ensure effective and efficient use of resources to meet organizational objectives. Neither the statute that authorizes imminent danger pay nor DOD guidance defines what constitutes the threat of physical harm or imminent danger or contains criteria for determining this. However, in 1997, DOD developed a questionnaire that contains specific factors that it continues to use to evaluate requests from combatant commands to designate new imminent danger pay areas as well as to review existing imminent danger pay designations. These factors help DOD to gather consistent information that supports its goal to limit imminent danger pay to those members placed in direct or imminent danger. The factors consider acts of violence against U.S. personnel, such as assassinations, homicides, sabotage, kidnapping, aggravated battery, property damage, terrorizing, extortion, rioting, and commandeering vessels or hijacking aircraft; insurrection, war, or wartime conditions, including fighting that occurs sufficiently close to servicemembers that it creates a substantial probability of death or bodily injury to servicemembers, causes servicemembers to fear for their safety, or creates danger to human life or property; terrorism conditions, including the existence of terrorist organizations that have the intent or ability to harm servicemembers or terrorist threat levels indicative of an imminent threat; and a security environment that places U.S. forces at risk.
DOD also considers existing security measures including threat condition levels and operating tempo levels, as well as restrictions on leave and off-duty travel and measures taken by the host government to protect servicemembers; travel restrictions, including restriction of servicemembers to duty stations, installations, or defined sections of the area; and presence of dependents in the area, including school-age dependents, whether dependents are targets, and the security measures in place to protect them. We reviewed 54 completed questionnaires for imminent danger pay—some for the same countries over a period of time—that were submitted by combatant commands to the Office of the Under Secretary of Defense (Personnel and Readiness) between 1998 and 2005. We found that DOD is using information supplied on these questionnaires, in conjunction with threat assessments, to inform decisions to designate new imminent danger pay areas and to review existing designations to ensure that they continue to be appropriately designated. For instance, DOD denied requests from the U.S. Central Command to designate Turkmenistan and Kazakhstan for imminent danger pay in 2001 because the support provided in the questionnaires did not demonstrate an imminent threat of physical harm or danger to servicemembers based on civil war, civil insurrection, terrorism, or wartime conditions in those areas. DOD also denied a request from the U.S. European Command in 2003 to designate a number of areas, including Bulgaria, Romania, Hungary, and Cyprus, for imminent danger pay based on the probability of increased danger to military personnel due to their involvement in combat operations related to Iraq as well as general terrorist threats to DOD personnel. The Office of the Under Secretary of Defense (Personnel and Readiness) determined that there was not sufficient support of an imminent threat of personal harm to servicemembers on official duty in these areas. In contrast, DOD has designated areas, such as Ethiopia, for imminent danger pay based on facts and circumstances that indicate the likelihood that U.S. personnel might be harmed as a result of civil unrest. Beginning in 2006, Congress authorized the Secretary of Defense to retroactively designate an area for imminent danger pay, subject to the availability of appropriated funds. Despite the demonstrated usefulness of these questionnaires to decision makers, DOD has not incorporated factors contained in the questionnaire into its imminent danger pay guidance. For example, DOD does not include minimum terrorist threat levels, one such potential factor identified in the questionnaire, in its imminent danger pay guidance. While terrorist threat levels for a country may change over time and are raised and lowered on the basis of new information and analysis, in the past DOD officials responsible for overseeing imminent danger pay have considered establishing minimum terrorist threat levels of “high” or “significant” to help assess the imminence of threats and dangers from terrorism. Although DOD has yet to do so, we believe that establishing a minimum terrorist threat level threshold needed for an area to be designated for imminent danger pay could help to limit imminent danger pay to those servicemembers who truly face an imminent threat or danger to their lives and would likely reduce the number of imminent danger pay area designations. Currently, DOD identifies terrorist threat levels using the Defense Intelligence Agency’s four step scale (see fig. 
1) that describes the severity of a threat. For example, using Defense Intelligence Agency terrorism threat levels, if current threat levels for designated areas remain constant and DOD were to establish a minimum terrorist threat level of "significant" for areas to be designated for imminent danger pay, then designations for 19 currently designated areas with threat levels below "significant" would be terminated. These areas include Angola, Burundi, Croatia, and Haiti. However, 29 areas with "high" or "significant" terrorist threat levels, such as Indonesia, the Philippines, Colombia, Iraq, Kuwait, Malaysia, Qatar, and Syria, would continue to qualify for imminent danger pay. Due to changing world circumstances since July 2006, the terrorist threat levels may have changed. Internal controls over servicemembers' temporary duty travel to areas designated for imminent danger pay or combat zone tax relief benefits need to be strengthened. While two DOD components have instituted policies to regulate and monitor cross-month travel to these areas, there is no similar departmentwide policy to ensure that travel to areas designated for imminent danger pay or combat zone tax relief benefits needs to cross calendar months. Data limitations prevented us from determining the full extent of temporary duty travel to areas designated for imminent danger pay and combat zone tax relief benefits, as well as how much of this travel crossed calendar months. However, the U.S. Central Command and U.S. Army, Europe—which collectively account for 62 percent of imminent danger pay areas and 86 percent of areas designated for combat zone tax relief benefits—have developed policies and controls to monitor and regulate cross-month travel to areas designated for imminent danger pay and combat zone tax relief benefits to preclude, in their view, the appearance of abuse of these benefits. Our review of data from DOD's Defense Travel System as well as data for ship port visits to the U.S. European Command's area of responsibility shows that cross-month travel does occur. By establishing internal controls such as a departmentwide policy and periodic audits to monitor cross-month travel, DOD could ensure that all areas are covered and further strengthen its management of imminent danger pay and combat zone tax relief benefits. Internal controls over servicemembers' temporary duty travel to areas designated for imminent danger pay or combat zone tax relief benefits need to be strengthened. It is DOD's policy to minimize the number of visits and visitors to overseas areas as well as demands on equipment, facilities, time, installation services, and personnel. In addition, DOD's temporary duty travel policy and its Foreign Clearance Guide, which implements this policy, provide mechanisms for the unified combatant commands to regulate travel to foreign locations. Before any servicemember makes an official visit to a foreign country, including areas where servicemembers qualify for imminent danger pay or combat zone tax relief benefits, the visit must be reviewed and approved by both the U.S. embassy in the host country (country clearance) and the sponsoring unified commander (theater clearance). DOD's Foreign Clearance Guide permits the unified combatant command to grant theater clearance or to delegate this authority to component commands, subordinate commands, special agencies, or units to be visited.
The Standards for Internal Control in the Federal Government recommend that agencies establish internal controls to monitor and review operations and programs to provide reasonable assurance that these meet their goals. Moreover, we consider establishing such controls, including policies and oversight mechanisms, a best practice that can help ensure that cross-month travel needs to cross calendar months, especially travel to areas designated for imminent danger pay or combat zone tax relief benefits. With the exception of U.S. Army, Europe, DOD does not conduct periodic audits of cross-month travel. Although there is no requirement to do so, the U.S. Central Command and U.S. Army, Europe—which collectively account for 62 percent of imminent danger pay areas and 86 percent of areas designated for combat zone tax relief benefits—have developed policies and internal controls to monitor and regulate cross-month travel to areas for which they provide theater clearance. According to U.S. Central Command officials, they instituted this policy to preclude, in their view, the appearance of abuse of imminent danger pay and combat zone tax relief benefits. Specifically, the U.S. Central Command—which consists of 29 (approximately 55 percent) of the areas currently designated for imminent danger pay and 20 (approximately 71 percent) of the areas currently designated for combat zone tax relief benefits—precludes cross-month travel to its area of responsibility. According to U.S. Central Command officials, the command instituted this policy in 2003 as a result of the high pace of operations and to preclude, in their view, the abuse of imminent danger pay and combat zone tax relief benefits. All five sea areas and all but 3 of the 27 countries in the U.S. Central Command's area of responsibility are designated for imminent danger pay, and 16 of these countries as well as all sea areas are designated for combat zone tax relief benefits (see app. I). To enforce its policy, the U.S. Central Command has established an electronic database with internal controls that automatically alert officials to any proposed temporary duty travel that crosses calendar months. Officials in the U.S. Central Command's Travel Clearance Office, which is responsible for granting theater clearance, review all requests from servicemembers not assigned to the U.S. Central Command to travel to countries in the command's area of responsibility. If the proposed travel dates cross calendar months, the Travel Clearance Office requests that the traveler change the dates of the trip or provide justification for why the trip needs to occur at that time. According to a command official, appropriate justification may include mission requirements or the limited availability of flights. However, according to an official, most travelers choose to change the dates of travel to avoid crossing months. In 2005, U.S. Army, Europe, which was delegated authority by the U.S. European Command to approve travel to locations in the former Federal Republic of Yugoslavia, instituted a policy precluding cross-month travel—although this was minimal—to Bosnia-Herzegovina; Croatia; Serbia and Montenegro, including Kosovo; Macedonia; and Slovenia in order to prevent, in their view, the perceived abuse of combat zone tax relief benefits.
These areas, excluding Slovenia, are designated for imminent danger pay and combat zone tax relief benefits and constitute approximately 8 percent of the areas currently designated for imminent danger pay and approximately 14 percent of the areas currently designated for combat zone tax relief benefits. In addition, U.S. Army, Europe, requires that a colonel or general officer approve all cross-month travel, and its Internal Review and Audit Compliance Office periodically reviews travel to specific areas to monitor whether cross-month travel occurs. According to an official in the U.S. Army, Europe, Internal Review and Audit Compliance Office, cross-month travel accounted for 1.6 percent of all travel to countries for which U.S. Army, Europe, approves theater clearance requests. Although no similar policy precludes cross-month travel to Georgia, a country for which the U.S. Marine Corps Forces, Europe, has been delegated authority by the U.S. European Command to approve travel, an official who is responsible for approving travel to this location told us that since January 2005 the U.S. Marine Corps Forces, Europe, has monitored travel requests for potential cross-month travel. Neither U.S. Naval Forces, Europe, nor U.S. Air Forces in Europe has policies to monitor temporary duty travel that crosses calendar months. We did not determine the amount of cross-month travel conducted by servicemembers under these commands. In contrast, the U.S. European Command (excluding U.S. Army, Europe), the U.S. Pacific Command, and the U.S. Southern Command—which collectively account for 20 (about 38 percent) of the areas currently designated for imminent danger pay and 4 (about 14 percent) of the areas currently designated for combat zone tax relief benefits—do not have policies that address temporary duty travel that crosses calendar months. Although two commands have instituted policies and internal controls to preclude the perceived abuse of imminent danger pay and combat zone tax relief by regulating and monitoring cross-month travel, no requirement exists for DOD to monitor cross-month travel to 38 percent of areas currently designated for imminent danger pay and 14 percent of areas currently designated for combat zone tax relief benefits. By establishing internal controls, such as a departmentwide policy and periodic audits to monitor cross-month travel, DOD could ensure that all areas are covered and further strengthen its management of imminent danger pay and combat zone tax relief benefits. Our analysis of data from the Defense Travel System indicates that cross-month travel does occur during temporary duty travel. We reviewed 28,404 vouchers for temporary duty travel that were processed using the Defense Travel System for travel that occurred between fiscal years 2003 and 2005. During about 18 percent (5,152) of these trips, a servicemember traveled to an area that is designated for imminent danger pay or combat zone tax relief benefits at some point during the trip, regardless of whether the trip crossed months. About 93 percent (1,469) of the 1,576 trips that crossed calendar months were to countries within the Central Command's and European Command's areas of responsibility. We found that 1,576 (about 6 percent) of the 28,404 trips processed using the Defense Travel System from fiscal year 2003 through fiscal year 2005 involved cross-month travel to an area designated for imminent danger pay or combat zone tax relief benefits, including trips during which servicemembers were eligible for imminent danger pay or combat zone tax relief benefits during 1 or 2 months.
For instance, servicemembers traveling in areas designated for imminent danger pay or combat zone tax relief benefits qualified for 2 full months of imminent danger pay and combat zone tax relief benefits for 745 (about 3 percent) of the 28,404 trips. Also, during 831 (about 3 percent) of the 28,404 trips, individual servicemembers traveled to areas designated for imminent danger pay or combat zone tax relief benefits at some point during a single month, although their travel crossed calendar months. As a result, these servicemembers qualified for 1 month of imminent danger pay or combat zone tax relief benefits. As depicted in table 2, we also found that the 1,576 cross-month trips ranged in duration from 2 days to more than 15 days. The six trips that involved cross-month travel lasting 2 days or less were to Afghanistan, Haiti, Iraq, and the Philippines for training, meetings, special missions, and site visits. Most of these 1,576 cross-month trips were to the U.S. Central Command's area of responsibility, and the majority were longer than 7 days in duration, as shown in table 3. The Air Force and the Army account for the majority of cross-month travel. In addition, we found that officers at the O-3 and O-4 pay grades accounted for the greatest number of cross-month trips by active duty officers to areas designated for imminent danger pay and combat zone tax relief benefits during fiscal years 2003 through 2005. In contrast, officers at the O-7, O-8, O-9, and O-10 pay grades made the fewest cross-month trips to areas designated for imminent danger pay or combat zone tax relief benefits. Enlisted servicemembers at the E-3, E-5, and E-6 pay grades made the majority of cross-month trips by active duty enlisted servicemembers to areas designated for this special pay or benefit during fiscal years 2003 through 2005. In comparison, we found that 23,083 (about 81 percent) of the 28,404 trips processed using the Defense Travel System from fiscal year 2003 through fiscal year 2005 involved travel to areas that were not designated for either imminent danger pay or combat zone tax relief during fiscal years 2003 to 2005. Further, we found that 4,414 trips (about 16 percent) of the 28,404 trips crossed calendar months but did not include travel to areas designated for imminent danger pay or combat zone tax relief benefits. Of these 4,414 trips, 3,058 (about 69 percent) were 7 or more days in duration and 1,356 (about 31 percent) were less than a week in duration, as shown in table 4. However, due to data limitations, we were unable to determine whether this travel was representative of temporary duty travel departmentwide. Data provided by U.S. Naval Forces, Europe, indicate that some ships make port visits to areas designated for imminent danger pay or combat zone tax relief benefits, and that some of these port visits cross calendar months, thus qualifying servicemembers for 2 months of imminent danger pay and, more significantly, combat zone tax relief benefits. Although ship commanders determine ship movements within an operating area, U.S. Naval Forces, Europe, determines the location of port visits based on security concerns and joint exercises and operations, among other considerations. The U.S. Naval Forces, Europe, provided data on port visits made by Navy ships for fiscal years 2003 through 2005 to areas designated for imminent danger pay or combat zone tax relief benefits within the U.S. European Command's area of responsibility.
According to these data, the accuracy of which we did not validate, 56 ships made port visits to areas designated for imminent danger pay or combat zone tax relief benefits within the U.S. European Command's area of responsibility during this time. Seven of these ships made port calls to areas designated for imminent danger pay or combat zone tax relief that crossed calendar months (see table 5). According to U.S. Naval Forces, Europe, the purpose of these port visits was generally for quality of life reasons or as part of training or exercises. DOD paid approximately $7.5 million to the 26,849 servicemembers who qualified for imminent danger pay for 1 month and the 3,284 servicemembers who qualified for 2 months of imminent danger pay because the port visit crossed calendar months. We did not determine how much compensation servicemembers qualified to exclude from federal taxes as a result of these port visits. We focused on port visits made by ships and did not obtain individual ships' logs to determine whether ships crossed into sea areas designated for imminent danger pay or combat zone tax relief benefits while en route to other destinations or while performing operations. We also did not obtain similar data for the U.S. Central Command because most ports and all sea areas in its area of responsibility are designated for imminent danger pay or combat zone tax relief benefits. We also did not obtain this information from the U.S. Southern Command or U.S. Pacific Command because few countries and no sea areas in their areas of responsibility are designated for imminent danger pay or combat zone tax relief benefits. DOD tracks the cost of imminent danger pay and servicemembers' compensation that qualifies for combat zone tax relief benefits. While DOD tracks and reports the cost of imminent danger pay to Congress as part of its budget request, the department does not report the amount of servicemembers' compensation that qualifies for combat zone tax relief benefits. Combat zone tax relief benefits could allow servicemembers to exclude a significant portion of their compensation from federal taxes. For example, enlisted personnel and warrant officers may exclude all military compensation earned during one month. For commissioned officers, compensation is free of federal income tax up to the maximum amount of enlisted pay plus any imminent danger or hostile fire pay received. In 2006, the maximum amount of compensation for commissioned officers that is eligible for combat zone tax relief is $6,499.50 plus $225 imminent danger pay, or $6,724.50 per month. The Defense Finance and Accounting Service reports servicemembers' compensation that qualifies for combat zone tax relief benefits to the services' financial management offices. However, there is no requirement for DOD to report this information to Congress. The reporting of combat zone tax relief benefit data to Congress could provide information on the extent of this benefit and aid Congress in its oversight role. With concerns about the long-term sustainability of rising costs associated with military compensation, it is important that DOD effectively manage its imminent danger pay program to ensure that only those servicemembers who are subject to the threat of physical harm or imminent danger on the basis of civil war, civil insurrection, terrorism, or wartime conditions while on duty in a designated foreign area receive imminent danger pay.
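As an illustration of the exclusion arithmetic described above, the following minimal sketch applies the 2006 figures cited in this report ($6,499.50 in maximum enlisted pay plus $225 in imminent danger pay for the officer cap). The individual compensation amounts in the example are hypothetical and are used only to show how the cap operates differently for enlisted members and commissioned officers.

```python
# Minimal sketch of the 2006 combat zone tax relief computation described above.
# The monthly compensation figures for the two servicemembers are invented for
# illustration; the $6,499.50 cap and $225 imminent danger pay are the 2006
# amounts cited in this report.

IDP_MONTHLY = 225.00                    # imminent danger pay per qualifying month
OFFICER_CAP = 6_499.50 + IDP_MONTHLY    # $6,724.50 per month in 2006

def monthly_exclusion(monthly_compensation, is_commissioned_officer):
    """Return the compensation excludable from federal income tax for one
    qualifying month of combat zone service."""
    if is_commissioned_officer:
        return min(monthly_compensation, OFFICER_CAP)
    return monthly_compensation          # enlisted and warrant officers: no cap

# Hypothetical servicemembers, each with one qualifying month in 2006:
print(monthly_exclusion(3_800.00, is_commissioned_officer=False))  # 3800.0
print(monthly_exclusion(8_200.00, is_commissioned_officer=True))   # 6724.5
```

Because a trip that crosses calendar months can yield two qualifying months, the excludable amount in this sketch would simply be applied once for each qualifying month.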
While DOD has significantly increased the number of areas designated for imminent danger pay over the past 14 years, it has conducted reviews of these areas only six times between 1992 and 2006, and the current review is ongoing. In addition, we believe that updating guidance to reflect the responsibility of the Office of the Under Secretary of Defense (Personnel and Readiness) for reviewing area designations and incorporating into its guidance factors to determine what specifically constitutes imminent danger would clarify responsibilities and establish standard criteria for use in meeting the objectives of the imminent danger pay program. Clearly, it is important that servicemembers are appropriately compensated for the duties they perform, particularly when there are risks associated with those duties. It is also important that DOD ensure the need for cross-month travel, as both the U.S. Central Command and U.S. Army, Europe, are doing. Although these two commands have instituted policies and internal controls to preclude the perceived abuse of imminent danger pay and combat zone tax relief by regulating and monitoring cross-month travel, no requirement exists for DOD to monitor cross-month travel to 38 percent of areas currently designated for imminent danger pay and 14 percent of areas currently designated for combat zone tax relief benefits. Managing these programs effectively is important, as the nation's growing fiscal concerns will require DOD and the federal government to consider difficult trade-offs in the years ahead. In addition, internal controls to monitor cross-month travel could ensure all designated areas are covered and further strengthen DOD's management of imminent danger pay and combat zone tax relief benefits. Finally, combat zone tax relief benefits can be substantial. Congress may benefit from obtaining information on servicemembers' total compensation that qualifies for combat zone tax relief benefits as a means to assist in providing oversight of this benefit.
To strengthen DOD's internal controls and management of imminent danger pay and to monitor cross-month travel, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Personnel and Readiness) to take the following three actions:
Update DOD's imminent danger pay guidance to (1) reflect the office responsible for reviewing and designating imminent danger pay areas and (2) incorporate factors that clearly define what constitutes the presence of imminent threat or dangers to servicemembers on duty in foreign areas.
Request the designated organizations to conduct reviews of areas designated for imminent danger pay in accordance with DOD's guidance and, if necessary, update its guidance to reflect the appropriate time period for conducting these reviews.
Establish departmentwide policies and internal controls that include periodic audits to monitor cross-month travel to ensure that the travel needs to cross calendar months.
In light of the substantial tax benefits that servicemembers receive when performing duty in combat zones, Congress should consider the following action: Require DOD to periodically report to Congress the amount of servicemembers' total compensation that qualifies for combat zone tax relief benefits.
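To illustrate the kind of periodic cross-month screening contemplated in the recommendation above, the following minimal sketch flags temporary duty trips of 30 days or fewer that begin in one month and conclude in the next, which is the definition of cross-month travel used in this report. The voucher fields and sample records are assumptions for illustration only and do not reflect the Defense Travel System's actual data structures.

```python
# Minimal sketch of a periodic cross-month travel screen. The voucher layout
# and sample data are hypothetical; only the date test reflects the report's
# definition of cross-month travel (30 days or fewer, spanning two months).

from datetime import date

def is_cross_month(start, end, max_days=30):
    """Temporary duty of 30 days or fewer that begins in one month and
    concludes in the following month."""
    duration = (end - start).days + 1
    return duration <= max_days and (start.year, start.month) != (end.year, end.month)

vouchers = [
    {"traveler": "A", "start": date(2005, 3, 28), "end": date(2005, 4, 2),  "designated_area": True},
    {"traveler": "B", "start": date(2005, 5, 10), "end": date(2005, 5, 20), "designated_area": True},
    {"traveler": "C", "start": date(2005, 6, 29), "end": date(2005, 7, 1),  "designated_area": False},
]

# Flag cross-month trips to areas designated for imminent danger pay or
# combat zone tax relief benefits for further review or justification.
flagged = [v for v in vouchers
           if v["designated_area"] and is_cross_month(v["start"], v["end"])]
print([v["traveler"] for v in flagged])   # ['A']
```

A periodic audit of the kind recommended could run a test like this against processed vouchers and ask approving officials to document why any flagged trips needed to cross calendar months.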
In written comments on a draft of this report, DOD generally concurred with two of our three recommendations, but did not concur with our recommendation to establish a departmentwide policy and internal controls to monitor cross-month travel to ensure that the travel needs to cross calendar months. DOD also provided separate technical comments which we have incorporated in the report, as appropriate. DOD concurred with our recommendation to update DOD’s imminent danger pay guidance. DOD acknowledged that there are informal processes and additional guidance and concurs that its informal policy should be updated to reflect the office currently responsible for the review and to provide additional clarification and guidance as to what factors are considered in determining whether imminent danger pay is warranted. DOD partially concurred with our recommendation to conduct comprehensive reviews of imminent danger pay area designations, noting it had conducted reviews six times since 1992 and expects to complete its ongoing review by October 31, 2006. DOD stated that the department believes that comprehensive reviews of imminent danger pay should be conducted at least biennially. DOD’s current imminent danger pay guidance requires at least annual reviews of areas designated for imminent danger pay. We continue to believe that DOD should conduct reviews of areas designated for imminent danger pay in accordance with its guidance, which is reflected in our recommendation. If DOD believes that the appropriate time period for conducting these reviews is biennially, as reflected in its comments, then DOD should update its imminent danger pay guidance accordingly. DOD did not concur with our recommendation to establish departmentwide policies and internal controls that include periodic audits to monitor cross-month travel to ensure that the travel needs to cross calendar months. DOD stated in written comments that doing so would add unnecessary bureaucracy and cost to its travel system and believes that our recommended action is already implemented in existing travel procedures that require thorough review and management of the necessity and cost-effectiveness of travel by DOD personnel. Further, DOD commented that our review found no apparent abuse of cross-month travel and the two commands that collectively account for the majority of imminent danger pay and combat zone tax relief benefits have developed theater-specific policies and controls to monitor and regulate cross-month travel. As stated in our scope and methodology and the report, data limitations prevented us from determining the full extent of temporary duty travel to areas designated for imminent danger pay and combat zone tax relief benefits, as well as how much of this travel crossed calendar months. Further, we reviewed only temporary duty travel vouchers processed using DOD’s Defense Travel System. We did not review travel that was processed in DOD’s numerous legacy travel systems, and therefore did not reach any conclusions as to whether servicemembers have scheduled travel in a fiscally responsible manner or have attempted to maximize this pay and benefit by scheduling cross-month travel. We recognize DOD has procedures in place to review and authorize travel. Our recommendation addresses the need to have greater monitoring of travel trends. 
Moreover, although two commands have instituted policies and internal controls to preclude the perceived abuse of imminent danger pay and combat zone tax relief by regulating and monitoring cross-month travel, no requirement exists for DOD to monitor cross-month travel to the 38 percent of areas currently designated for imminent danger pay and the 14 percent of areas currently designated for combat zone tax relief benefits not covered by these two commands. Also, while we recognize that these commands currently reflect the majority of areas designated for imminent danger pay and combat zone tax relief benefits, security conditions are subject to change over time. Therefore, we continue to believe our recommendation has merit and that internal controls to monitor cross-month travel departmentwide are needed to ensure all designated areas are covered and further strengthen DOD's management of imminent danger pay and combat zone tax relief benefits. In response to the other written comments provided by DOD, we modified the report to acknowledge that the global security environment has played a role in the increased number of areas designated for imminent danger pay since 1992. In addition, we modified the report to reflect that danger pay was first authorized on October 1, 1983, by Public Law 98-94, section 905. We are sending copies of this report to congressional committees; the Secretary of Defense; the secretaries of the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. We will also make copies available to other interested parties upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-9619 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The GAO contact and key contributors are listed in appendix VI. Table 6 identifies the areas designated for imminent danger pay or combat zone tax relief benefits as of July 2006. A combat zone is any area that the president designates by executive order as an area in which U.S. Armed Forces are or have engaged in combat after June 24, 1950. An area ceases to be a combat zone on the dates the president designates by executive order. Since 1950, U.S. presidents have designated combat zones in Korea, Vietnam, the Persian Gulf, the Kosovo area, and Afghanistan. U.S. presidents terminated combat zone designations for Korea in 1955 and Vietnam in 1996. Enlisted personnel and warrant officers may exclude from taxes all military compensation earned during the month that they serve in a combat zone. For commissioned officers, compensation is free of federal income tax up to the maximum amount of enlisted pay plus any imminent danger or hostile fire pay received. In 2006, the maximum amount of compensation for commissioned officers that is eligible for combat zone tax relief is $6,499.50 plus $225 imminent danger pay, or $6,724.50 per month. Table 7 depicts the current combat zones. Servicemembers on duty in qualified hazardous duty areas designated by Congress are entitled to the same benefits afforded those who serve in a presidentially designated combat zone if the Secretary of Defense also designates that qualified hazardous duty area for imminent danger pay.
If Congress designates a qualified hazardous duty area, but the Secretary of Defense has not designated the area for imminent danger pay, servicemembers deployed to the area are not eligible to receive combat zone tax relief unless the qualified hazardous duty area becomes eligible for imminent danger pay. If the Secretary of Defense terminates imminent danger pay for a qualified hazardous duty area, members no longer receive combat zone tax relief benefits. Table 8 depicts the current qualified hazardous duty areas. DOD has certified military service as in direct support of military operations in combat zones, thus qualifying servicemembers for combat zone tax relief benefits (see table 9). Unlike the other areas identified in the table, Jordan is the only area where DOD has certified military service as in direct support of combat zone operations as part of both Operation Enduring Freedom and Operation Iraqi Freedom. To evaluate DOD's processes for reviewing imminent danger pay areas and to understand how servicemembers become eligible for combat zone tax relief, we analyzed legislation and DOD regulations and guidance. We compared these to the objectives and fundamental concepts of internal control defined in Standards for Internal Control in the Federal Government and Internal Control Management and Evaluation Tool. We also held discussions with knowledgeable officials from the following offices, geographic commands, and services:
Commander, Navy Installations, Personnel Support Activity, Washington, D.C.
Defense Finance and Accounting Service, Indianapolis, Indiana
Defense Manpower and Data Center, Monterey, California, and Defense Travel System Office, Arlington, Virginia
Department of Defense Inspector General, Inspector General/Hotline Cases and Senior Officials Investigation Offices, Arlington, Virginia
Joint Chiefs of Staff, Personnel and Policy, Arlington, Virginia
Office of Management and Budget, Washington, D.C.
Under Secretary of Defense (Personnel and Readiness), Arlington,
United States Air Forces in Europe, Ramstein Air Force Base, Germany
United States Army, Europe, Heidelberg, Germany
United States Central Command, MacDill Air Force Base, Tampa,
United States Department of State, Office of Allowances, Washington, D.C.
United States European Command, Patch Barracks, Stuttgart, Germany
United States Marine Corps Forces, Europe, Boeblingen, Germany
United States Naval Forces, Europe, Commander, U.S. Sixth Fleet,
United States Pacific Command, Camp H.M. Smith, Honolulu, Hawaii
United States Southern Command, Miami, Florida
To gain an understanding of DOD's rationale for designating certain areas and to determine whether DOD followed its policy for designating and reviewing imminent danger pay areas and areas certified as in direct support of combat zones, we reviewed 54 imminent danger pay area questionnaires completed by combatant commanders and used by DOD to evaluate area designations and the threat of imminent danger. This nonprobability sample of areas was selected to ensure that each geographic combatant command was represented. In addition, we sought to include imminent danger pay areas where designation was limited to either land, airspace, or water areas, or where the designation was for the entire area including land, air, or water areas. For purposes of our analysis, we considered an area to be designated for imminent danger pay if DOD had designated any portion of the area (land, airspace, or water) for imminent danger pay.
We also reviewed other relevant DOD documents related to DOD's review of imminent danger pay and combat zone tax relief benefits from 1992 through 2006. We compared these against the standards for oversight envisioned in Standards for Internal Control in the Federal Government and the Internal Control Management and Evaluation Tool. To understand how DOD regulates and monitors temporary duty travel, and cross-month travel in particular, we reviewed DOD's temporary duty travel policy and its Foreign Clearance Guide that establishes DOD policy for military personnel traveling overseas. We also reviewed travel policies from the U.S. Central Command and U.S. Army, Europe. We discussed these policies with officials responsible for approving travel in accordance with the Foreign Clearance Guide, and compared these with the standards for internal control activities contained in the Standards for Internal Control in the Federal Government. In addition, we interviewed personnel in the U.S. Pacific Command and U.S. Southern Command who are knowledgeable about these commands' travel policies and requirements. Because the U.S. European Command does not have a travel clearance office, we spoke with service officials at the U.S. European Command who are responsible for authorizing travel to certain locations. To determine the extent to which servicemembers were making cross-month trips to areas that would make them eligible to receive imminent danger pay and combat zone tax relief benefits, we also analyzed 28,404 travel vouchers obtained from the Defense Travel System for servicemembers traveling on temporary duty between October 1, 2002, and September 30, 2005. In addition, we obtained data from the U.S. Naval Forces, Europe, on ship port visits to areas within the U.S. European Command's area of responsibility that are designated for imminent danger pay or for combat zone tax relief benefits. We did not obtain similar data for the U.S. Central Command because most ports and all sea areas in its area of responsibility are designated for imminent danger pay or combat zone tax relief benefits. We also did not obtain this information from the U.S. Southern Command or U.S. Pacific Command because few countries and no sea areas in their areas of responsibility are designated for imminent danger pay or combat zone tax relief benefits. Under DOD's Financial Management Regulations, authorizing officials are responsible for determining the necessity of trips and funds availability for temporary duty travel. Further, DOD guidance states that DOD airlift authorizing officials shall ensure that an official purpose is served by air travel. Therefore, we did not evaluate whether the purposes of the trips were valid. We analyzed travel vouchers from the Defense Travel System because it is the only system that captures temporary duty travel from all four services. We also reviewed data from the Defense Finance and Accounting Service's Operational Data System that captures the balance of travel data for the Army that is not processed using the Defense Travel System. Data from the Operational Data System frequently were missing information about trip destinations. As a result, we were unable to use these data to analyze cross-month travel to areas designated for imminent danger pay or combat zone tax relief benefits. We also did not review data processed in the 43 legacy systems used by the services to process travel vouchers because of data reliability concerns.
Due to the large number of countries and sea areas within the U.S. Central Command's area of responsibility that are designated for imminent danger pay or combat zone tax relief and because most travel vouchers for servicemembers assigned to the U.S. Central Command's headquarters during the time period of our review were not processed using the Defense Travel System, we sought to obtain travel data directly from several combatant commands. We tested this approach with the U.S. Central Command. First, we attempted to obtain data from the U.S. Central Command's finance office that processes travel vouchers. Due to data limitations and concerns about the accuracy of data in the automated business service system, a document processing system used to approve travel orders, we were unable to use this approach. Second, we sought to use theater clearance requests and approvals to determine the extent of cross-month travel to the U.S. Central Command's area of responsibility. However, database limitations prevented us from using this approach. For instance, if more than one servicemember was listed on a travel clearance request submitted to the U.S. Central Command's Travel Clearance Office, only the name of the most senior official was listed in the database, and information on the pay grades of junior servicemembers would not be available for analysis. In addition, we could not determine the accuracy of travel dates contained in the Travel Clearance Office's database because itinerary changes are not always captured in the database. The scope of this review excluded servicemembers deployed or assigned to foreign areas. The scope also excluded servicemembers who served as part of crews on aircraft and, with the exception of the data provided by U.S. Naval Forces, Europe, on ships, because aircraft and ship movements are dictated by operational needs and mission requirements and generally do not have the same degree of flexibility associated with scheduling temporary duty travel. To assess the reliability of data obtained from the Defense Travel System, we (1) reviewed existing documentation related to the data sources, (2) electronically tested the data to identify problems with completeness or accuracy, and (3) interviewed knowledgeable officials about the data. We found these data to be sufficiently reliable for the purposes of this report. For purposes of our analysis, we defined cross-month travel as any temporary duty travel of 30 days or fewer that begins during one month and concludes during the following month. This cross-month travel may encompass travel to a single location or multiple locations during a 30-day period. During this time, a servicemember may travel to areas designated for imminent danger pay or combat zone tax relief, areas not designated for these special pays and benefits, or to both areas designated and not designated for such benefits. However, we do not know the portion of overall travel that these trips represent because some DOD locations were not using the Defense Travel System to process travel vouchers during the time of our review. As a result, we cannot extrapolate travel patterns based on the data we obtained and analyzed from the Defense Travel System. We also requested and received information on GAO's FraudNet hotline as well as the DOD Inspector General hotline on inquiries related to servicemembers scheduling cross-month travel to potentially maximize combat zone tax relief benefits.
GAO's FraudNet did not receive any calls concerning servicemembers scheduling travel to maximize imminent danger pay or combat zone tax relief benefits. Two calls received by the DOD Inspector General's hotline were not substantiated. Additionally, we obtained data from the Defense Finance and Accounting Service to determine the compensation excluded from federal taxable wages for servicemembers for calendar years 2003 through 2005 as a result of performing military service that the Office of the Under Secretary of Defense (Personnel and Readiness) certified as in direct support of combat zone operations and for the combat zone tax exclusion overall. We also obtained data from the Defense Manpower and Data Center to estimate the total dollar amount of imminent danger pay for servicemembers for fiscal years 2003 through 2005. However, we did not use these data because sufficient information was not available from the Defense Manpower and Data Center to validate their accuracy for this report. As stated earlier in this report, we previously reported on the lack of reliability of DOD's reported costs related to the Global War on Terrorism, including military pay. We conducted our review from October 2005 through July 2006 in accordance with generally accepted government auditing standards. In addition to the person named above, Ann Borseth, Assistant Director; Krislin Bolling; Alissa Czyz; James Driggins; Ron La Due Lake; Katherine Lenane; Grant Mallie; Oscar Mardis; David Mayfield; Ken Patton; Vanessa Taylor; and John Van Schaik also made major contributions to this report.
Servicemembers who are assigned, deployed, or travel on temporary duty to certain foreign areas are eligible for special pays and benefits including (1) imminent danger pay (IDP) when the Department of Defense (DOD) determines that members are subject to the threat of physical harm or imminent danger and (2) combat zone tax relief (CZTR) benefits, which allow members to exclude earned income from federal taxes. If travel to IDP- or CZTR-designated areas begins during one month and concludes during another (known as cross-month travel), members could receive 2 full months of benefits. GAO conducted this review under the Comptroller General's authority to initiate such reviews. GAO evaluated DOD's (1) process for reviewing IDP areas and (2) internal controls over servicemembers' temporary duty travel to areas designated for IDP and CZTR benefits. GAO is also providing information on the reporting of IDP and CZTR data. GAO analyzed legislation, guidance, travel vouchers, and internal control standards and interviewed appropriate officials. DOD's processes for reviewing existing IDP areas could be improved. While combatant commanders have taken the initiative periodically to make recommendations to designate or terminate IDP areas, DOD has not conducted annual reviews of existing IDP designations in accordance with its guidance to ensure that conditions in these areas continue to warrant such designation. Also, DOD has not updated its guidance to reflect current responsibilities for initiating annual reviews or to include factors used to determine when conditions in foreign areas pose the threat of physical harm or imminent danger to servicemembers on duty in these locations. DOD conducted 6 annual reviews between 1992 and 2006. When conducting reviews, DOD has queried combatant commanders using a set of factors to determine the nature of threats to servicemembers. However, DOD has not incorporated these factors into its guidance. By conducting annual reviews in accordance with its guidance, DOD could strengthen its oversight of IDP designations to ensure that conditions in designated areas continue to pose the threat of physical harm or imminent danger to servicemembers and that these areas should continue to be designated. Internal controls over servicemembers' temporary duty travel to areas designated for IDP or CZTR benefits need to be strengthened. While two DOD components have instituted policies to regulate and monitor cross-month travel to these areas, there is no similar departmentwide policy to ensure that travel to areas designated for IDP or CZTR benefits needs to cross calendar months. Data limitations prevented GAO from determining the full extent of temporary duty travel to areas designated for IDP and CZTR benefits, as well as how much of this travel crosses calendar months. The U.S. Central Command and U.S. Army, Europe--which collectively account for 62 percent of IDP areas and 86 percent of CZTR benefit areas--have developed policies and controls to monitor and regulate cross-month travel to areas designated for IDP and CZTR benefits to preclude, in their view, the appearance of abuse of these benefits. By establishing internal controls such as a departmentwide policy and periodic audits to monitor cross-month travel, DOD could ensure all areas are covered and further strengthen its management of IDP and CZTR benefits. DOD tracks IDP costs and servicemembers' compensation that qualifies for CZTR benefits. 
While DOD reports the cost of IDP to Congress as part of its budget request, the department does not report servicemembers' compensation that qualifies for CZTR benefits. Combat zone tax relief benefits could allow servicemembers to exclude a significant portion of their income from federal taxes. Reporting data on CZTR benefits to Congress could provide information on the extent of this benefit and aid Congress in its oversight role.
Historically, the Congress has limited VA’s authority to provide medical care to veterans, expanding it in a careful and deliberate manner. Although VA’s authority has increased significantly over the years, VA has not recognized important limitations in establishing and operating new access points. At the access points we visited, many veterans receive primary care contrary to applicable statutory limitations and priorities on their eligibility for such services. As authority for operating contract access points, VA relies on a statute (38 U.S.C. 8153) that permits it to enter into agreements “for the mutual use, or exchange of use, of specialized medical resources when such an agreement will obviate the need for a similar resource to be provided” in a VA facility. Specialized medical resources are equipment, space, or personnel that—because of cost, limited availability, or unusual nature—are unique in the medical community. VA officials assert that primary care provided at access points is a specialized medical resource because its limited availability to veterans in areas where VA facilities are geographically inaccessible (or inconvenient) makes it unique. One significant aspect of VA’s reliance on this authority is that it effectively broadens the eligibility criteria for contract outpatient care, thus allowing some veterans who would otherwise be ineligible to receive treatment. In our view, this statute does not authorize VA to provide primary care through its access points. Nothing in the statute suggests that the absence of a VA facility close to veterans in a particular area makes primary care physicians unique in the medical community. The purpose of allowing VA to contract for services under the specialized medical resources authority is not to expand the geographic reach of its health care system, but to make available to eligible veterans services that are not feasibly available at a VA facility that presently serves them. Furthermore, contracting for the provision of primary care at access points does not obviate the need for primary care physicians at the parent VA facility. VA has specific statutory authority (38 U.S.C. 1703) to contract for medical care when its facilities cannot provide necessary services because they are geographically inaccessible. This authority, however, carries eligibility limitations that are more restrictive than those under 38 U.S.C. 8153, upon which VA relies. For example, under 38 U.S.C. 8153, a veteran who has income above a certain level and no service-connected disability is eligible for pre- and post-hospitalization medical services and for services that obviate the need for hospitalization. But under 38 U.S.C. 1703, that same veteran is not eligible for pre-hospitalization medical services or for services that obviate the need for hospitalization. If access points are established in conformance with 38 U.S.C. 1703, VA would need to limit the types of services provided to all veterans except those with service-connected disabilities rated at 50 percent or higher (who are eligible to receive treatment of any condition). All other veterans are generally eligible for VA care based on statutory limitations (and to the extent that VA has sufficient funds). For example, veterans with service-connected conditions are eligible for all care needed to treat those conditions. Those with disabilities rated at 30 or 40 percent are eligible for care of non-service-connected conditions at contract access points to complete treatment incident to hospital care.
Furthermore, veterans with disabilities rated at 20 percent or less, as well as those with no service-connected disability, may be eligible only for limited diagnostic services and follow-up care after hospitalization. Most veterans currently receiving care at access points do not have service-connected conditions and, therefore, do not appear to be eligible for all care provided. VA is to assess each veteran’s eligibility for care on the merits of his or her unique situation each time that the veteran seeks care for a new medical condition. We found no indication that VA requires access point contractors to establish veterans’ eligibility or priority for primary care or that contractors were making such determinations for each new condition. Last year, VA proposed ways to expand its statutory authority and veterans’ eligibility for VA health care. Several bills have been introduced that, if enacted, would authorize VA hospitals to establish contract access points and provide more primary care services to veterans in the same manner as the new access points are now doing. VA hospital directors are likely to face an evolving series of financial challenges as they establish new access points. In the short term, hospitals must finance new access points within their existing budgets; this will generally require a reallocation of resources among hospitals’ activities. Over the longer term, VA hospitals may incur unexpected, significant cost increases to provide care to veterans who would otherwise not have used VA’s facilities. These costs may, however, be offset somewhat if access points allow hospitals to serve current users more efficiently. So far, VA hospitals have successfully financed access points by implementing local management initiatives, unrelated to the access points, which allow the hospitals to operate more efficiently. For example, one hospital director estimated that he had generated resources for new access points by consolidating underused medical wards at a cost savings of $250,000. To date, most directors have concluded that it was more cost-effective to contract for care in the target locations than to operate new access points themselves. Essentially, they have found that it is not cost-effective to operate their own access points for a relatively small number of veterans. For example, one hospital that targeted 173 veterans for an access point concluded that this number could be most efficiently served by contracting for care. By contrast, private providers seem willing to serve small numbers of veterans on a contractual, capitated basis because they already have a non-VA patient base and sufficient excess capacity to meet VA’s needs. The longer-term effects of new access points on VA’s budget are less certain. This is because VA has not clearly delineated its goals and objectives; nor has it developed a plan that specifies the total number of potential access points, time frames for beginning operations, estimates of current and potential new veterans to be served, and related costs. Of these, key cost factors appear to be the number of new users and their willingness to be referred to VA hospitals for specialty and inpatient care. Costs could potentially vary greatly depending on whether VA hospitals’ primary objective is to improve convenience for current users or to expand their market share by attracting new users. For example, the primary care teams at one hospital we visited each spend, on average, about $300,000 a year to provide primary care to about 1,500 veterans.
This hospital can reduce the number of teams to 4 once it enrolls 1,500 veterans at new access points closer to their homes. These newly established access points could be cost-effective if their total costs are the same as or lower than the VA hospital’s costs—$300,000 or less in this case. VA hospitals, however, could experience significant budget pressures if new access points modestly increase VA’s market share. For example, VA currently serves about 2.6 million of our nation’s 26 million veterans. To date, 40 percent of the 5,000 veterans enrolled at VA’s 12 new access points had not received VA care within the last 3 years. Most of the new users we interviewed had learned about the access points through conversations with other veterans, friends, and relatives or from television, newspapers, and radio. VA’s access points may prove more attractive to veterans in part because they overcome barriers such as geographic inaccessibility and concerns about quality of care. About half of the veterans who have used VA health care in the past, and a larger portion of the new users, said that it matters little whether they receive care in a VA-operated facility. In fact, almost two-thirds of the new users indicated that if hospitalization is needed, they would choose their local hospital rather than a distant VA facility. Veterans will also generally benefit financially by enrolling in new VA access points. For example, prior VA users will save expenses incurred traveling to distant VA facilities as well as out-of-pocket costs for any primary care received from non-VA providers; most said that they use both VA and non-VA providers. New VA users will also save out-of-pocket costs, with low-income veterans receiving free care and high-income veterans incurring relatively nominal charges. For veterans with health insurance, VA pays the contract provider a capitated rate and then bills the insurer to recover its costs on a fee-for-service basis. The combination of these factors could lead to VA attracting several hundred thousand new users through its access points. This may force VA to turn veterans away if sufficient resources are not available, or it may cause VA to seek additional appropriations to accommodate the potential increased demand. Currently, VA is to provide outpatient care to the extent resources are available. When resources are insufficient to care for all eligible veterans, VA is to care for veterans with service-connected disabilities before providing care to those without such disabilities. Furthermore, when VA provides care to veterans without service-connected disabilities, it is to provide care for those with low incomes before those with high incomes. Presently, most of the nine hospitals encourage current and new users to enroll in their new access points. For example, the 3 hospitals we visited had enrolled 1,250 veterans in new access points. Of the 1,250, about 20 percent had service-connected disabilities, including about 4 percent rated at 50 percent or higher. Of the remaining 80 percent, most had low incomes, including about 10 percent who were receiving VA pensions or aid and attendance benefits. Inequities in veterans’ access to VA care have been a long-standing concern. For example, about three-fourths of veterans (both those with service-connected conditions and others) using VA clinics live over 5 miles away, including about one-third who live over 25 miles away. Establishing new access points gives VA the opportunity to reduce some of these veterans’ travel distances.
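As a rough illustration of the break-even comparison described above (the hospital whose primary care teams each cost about $300,000 a year and serve about 1,500 veterans), the following sketch shows the arithmetic a hospital might use to judge whether a contracted access point is cost-effective; the variable names are hypothetical and the figures are the rounded estimates cited in this statement.

    # Illustrative break-even arithmetic (hypothetical variable names; rounded figures).
    team_cost_per_year = 300_000   # approximate annual cost of one VA primary care team
    veterans_per_team = 1_500      # approximate number of veterans served by one team

    # A contracted access point serving the same 1,500 veterans is cost-effective
    # only if its total annual cost does not exceed the cost of the team it replaces.
    breakeven_total_cost = team_cost_per_year
    breakeven_rate_per_veteran = team_cost_per_year / veterans_per_team  # about $200 per veteran per year

    print(f"Break-even total cost: ${breakeven_total_cost:,}")
    print(f"Break-even rate: ${breakeven_rate_per_veteran:,.0f} per veteran per year")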
Although VA provided general guidance, it left the development of specific criteria for targeting new locations and populations to be served to network and hospital directors. Directors have several options when targeting new locations and populations to be served. For example, they could target those current users or potential new users living the greatest distances from VA facilities. Most of the access points established so far have improved convenience for existing users and attracted new users as well. However, two new access points have served only current VA users, while another one has served only new users. VA’s plans to establish access points could represent a defining moment for its health care system as it prepares to move into the 21st century. On one hand, VA hospitals could use a relatively small amount of resources to improve access for a modest number of current or new users, such as those living the greatest distances from VA facilities or in the most underserved areas. On the other hand, VA hospitals could, over the next several years, open hundreds of access points and greatly expand market share. There are over 26 million veterans and 550,000 private physicians who could contract to provide care at VA expense. VA’s growth potential appears to be limited only by the availability of resources and statutory authority, new veteran users’ willingness to be referred to VA hospitals, and other health care providers’ willingness to contract with VA hospitals. Although VA should be commended for encouraging hospital directors to serve veterans using their facilities in the most convenient way possible, VA has not established access points in conformance with existing statutory authority. In our view, under current statutes, new access points should be VA-operated or provide contract care for only those services or classes of veterans specifically designated by VA’s geographic inaccessibility authority. While legislative changes are needed to authorize VA hospitals to provide primary care to veterans in the same manner as the new access points are now doing, such changes carry with them several financial and equity-of-access implications. In addition, VA has not developed a plan to ensure that hospitals establish access points in an affordable manner. If developed, such a plan could articulate the number of new access points to be established, target populations to be served, time frames to begin operations, and related costs and funding sources. It could also articulate specific travel times or distances that represent reasonable veteran travel goals that hospitals could use in locating access points. Such a plan could also help ensure that hospitals establish and operate access points in accordance with the statutory service priorities. If sufficient resources are not available to serve all eligible veterans expected to seek care, new access points that are established would serve, first, veterans with service-connected disabilities and then, second, other categories of veterans, with higher-income veterans served last. Finally, this approach could provide for more equitable access to VA care than VA’s current strategy of allowing local hospitals to establish access points that serve veterans on a first-come, first-served basis and then rationing services when resources run out. Mr. Chairman, this concludes my statement. I will be happy to answer any questions that you or other Members may have. For more information, please call Paul Reynolds, Assistant Director, at (202) 512-7109.
Michael O’Dell, Patrick Gallagher, Abigail Ohl, Robert Crystal, Sylvia Shanks, Linda Diggs, Larry Moore, and Joan Vogel also contributed to the preparation of this statement.
GAO discussed the Department of Veterans Affairs’ (VA) plan to improve veterans’ access to primary health care. GAO noted that: (1) by creating new access points, VA may be able to cost-effectively improve users’ access to health care and reduce the inequities in veterans’ access caused by geographic inaccessibility; (2) creating new access points may increase costs dramatically, since VA’s failure to adhere to statutory eligibility limitations has resulted in an increase in the amount of services provided and in the number of veterans receiving benefits; (3) the lack of a VA facility in a particular area does not necessarily justify the establishment of a new primary care access point in that area; (4) VA hospitals need to find ways to finance new access points through reorganization of resources rather than with additional funds; (5) in some underserved areas, it has been more cost-effective to contract for health care services than to establish a new VA access point; and (6) new access points could cause financial difficulties for VA, because these new facilities will make VA-funded care more accessible to veterans who would otherwise not have used VA facilities.
DHS’s interest in better integrating its legacy agencies has been long-standing, and on several occasions since 2004 the department has identified an approach or vision for establishing a more unified field structure to enhance mission coordination among its component agencies but has not implemented such a structure. As we reported in September 2012, according to senior DHS officials, the fragmentation associated with each operational component having different boundaries for its area of responsibility prompted some in DHS and other stakeholders to promote the idea of a single unified DHS field structure, sometimes referred to as regionalization. Proponents believed that a more unified structure of DHS regional offices could foster better collaboration and integration of multiple components’ operations, making DHS as a whole more responsive and better prepared to counter man-made or natural threats. In addition to improving operational effectiveness, proponents of a single DHS field structure envisioned opportunities for long-term cost savings through the sharing of assets, including office space. For example, in 2005 and 2006, DHS considered implementing an overarching plan to unify its components under a single unified field structure, but then opted not to pursue this vision because of component resistance to the concept and significant up-front costs associated with colocating components. Again, in 2010, DHS chose not to realign its component regional configurations into a single DHS regional structure, as recommended in its BUR. In 2012, DHS and component officials stated that transforming the existing structure into a single unified DHS system would be a huge undertaking, and those not in favor of large-scale regionalization cited numerous challenges and drawbacks to such a plan, including budgetary constraints. As we reported in September 2012, while DHS’s intention of improving collaboration among its agencies is a sound goal—whether through regionalization or other means—its approach has lacked the systematic analyses and documentation needed to support its proposals for change. The department agreed with our findings and acknowledged that its efforts could have been better documented. In lieu of a single unified field structure, DHS has proposed other alternatives for enhancing collaboration among its components in the field, but has not implemented these proposals. For example, the department reported plans to harmonize operations and intelligence—utilizing concepts and structures modeled after JIATF-South. In 2012, DHS identified a new approach for enhancing regional collaboration among its components through virtual integration—that is, by improving component agencies’ coordination of mission activities and communication through the use of technology. According to a senior DHS official, virtual integration would allow for coordination of component functions without actually consolidating or merging the functions. DHS’s intention toward virtual integration was communicated in its fiscal years 2012-2016 strategic plan and reported to us in September 2012. However, in 2013, DHS officials stated that the department was not specifically pursuing virtual approaches to regional coordination. DHS officials also reported that although DHS no longer planned to pursue virtual collaboration on a larger scale, it was occurring on a more limited scale, within the department and components, for certain efforts.
In the absence of a unified field structure, DHS’s operational components have established and utilized collaborative mechanisms, including virtual approaches, to better integrate their field operations. Specifically, DHS field components have employed collaborative mechanisms to coordinate their missions and share information among multiple stakeholders in order to increase their mission effectiveness and efficiencies. These mechanisms have both similarities and differences in how they are structured, which missions or threats they focus on, and which agencies participate in them, among other things. All of the mechanisms identified in this report involve multiple DHS components, as well as other federal, state, and local agency participants, and their purpose is to improve operational integration, coordination, and efficiency among DHS components. These mechanisms focus on a range of missions and are located throughout the United States. Figure 2 shows the states and territories that contain 1 or more of the 13 collaborative field mechanism types (including the 4 mechanism types we selected for further study) involving DHS’s key operational components that we identified in conjunction with the department and these components. All 4 mechanism types we selected for a more in-depth review (ReCoM, BEST, JTT, and RISC) had been established through formal organizing documents (e.g., a charter or memorandum of understanding); involve stakeholders from various federal, state, and local agencies; and have an established lead agency to provide oversight and guidance to participants. In addition, all 4 of these mechanism types are funded by the participating agencies—no funding has been allocated or budgeted specifically for these collaborative mechanisms. ReCoMs were officially established in 2011 through the Maritime Operations Coordination Plan, which was signed by the Executive Team of the Senior Guidance Team, composed of the Director of ICE Homeland Security Investigations (HSI), the Commissioner of CBP, and the Commandant of the Coast Guard. The Maritime Operations Coordination Plan directs these agencies to utilize the fusion of intelligence, planning, and operations to target the threat of transnational terrorist and criminal acts along the coastal border. USCG serves as the lead agency responsible for planning and coordinating among components, and as of June 2013, 32 ReCoMs had been established, aligned with the USCG sectors’ geographic areas of responsibility. In 2005, the first BEST unit was organized and led by ICE HSI, in partnership with CBP, in Laredo, Texas, and as of June 2013, 35 BESTs had been established throughout the United States. The BESTs have a mission to identify, disrupt, and dismantle existing and emerging threats at U.S. land, sea, and air borders. The first JTT was organized in November 2011 as a CBP-led partnership among the U.S. Border Patrol, CBP Office of Field Operations, ICE HSI, and the government of Mexico to support the South Texas Campaign (STC). The purpose of the STC is to integrate intelligence, pursue enhanced coordination with the government of Mexico, and conduct targeted operations to disrupt and dismantle transnational criminal organizations. As of June 2013, JTTs had been established across four geographic boundary areas in Del Rio, Laredo, McAllen, and Houston, Texas.
The first RISC was organized in 2003 to provide a forum for senior DHS officials to enhance emergency management and homeland security for all hazards through a collaborative, regional approach involving federal, state, local, tribal, nongovernmental organization and private sector partners. As of June 2013, a RISC had been established in each of the 10 FEMA regions. The purpose of all of these selected mechanisms includes increasing operational effectiveness through greater collaboration and leveraging of resources and expertise. Our review identified commonalities within the same type of mechanism across multiple locations, as well as commonalities across the 4 types of mechanisms that we focused on for our review. Table 1 provides a summary description of the 4 selected collaborative field mechanism types that we reviewed and the locations we visited for each of them. DHS, at the departmental level, has limited awareness of the universe of component field collaborative mechanisms and of the types and quality of collaborative practices they employ to better coordinate and integrate mission operations. As a result of its limited visibility over these mechanisms, DHS headquarters is not well positioned to routinely identify valuable information obtained from the mechanisms that could inform decisions about DHS field structures or further enhance collaboration across components. According to senior DHS Office of Operations Coordination and Planning (OPS) officials, DHS headquarters does not actively catalog or routinely monitor the universe of collaborative field mechanisms because they are organized and monitored by their respective lead operational components or participants. However, although the collaborative mechanisms may be monitored by individual components, the components do not have the same high-level perspective—or accountability—as the department as a whole, to look across all components and assess the state of collaboration occurring in the field. Moreover, according to senior OPS officials, their departmental-level office is focused on the specific outcomes of operational activities and not whether the activities are carried out by a certain collaborative mechanism, as the collaborative mechanisms employed to accomplish tasks are not as important as the end results. Therefore, these DHS headquarters officials believe they have visibility—primarily through the components—over activities carried out by collaborative mechanisms, but stated that they have little or no visibility over the nature of the collaboration itself, since they do not collect this type of information. OPS officials also noted that the Program Analysis & Evaluation Division (PA&E), within the Office of the Chief Financial Officer, which is involved with performance measurement, monitors whether the operational components are meeting their performance requirements or goals, but does not track performance or other information on cross-component field mechanisms. PA&E officials stated that their division is responsible for strategic-level management and oversight, not operational-level, and per various laws and policy frameworks, their division measures performance of higher-level DHS programs. DHS’s limited visibility at the departmental level of the number and type of existing collaborative field mechanisms was demonstrated in part by the challenges DHS headquarters experienced in providing us with a list of mechanisms. 
Specifically, when we asked for a list of formalized mechanisms that DHS headquarters considered to be successful examples of field collaboration, this information was not readily available at the departmental level, according to senior DHS officials. After consulting with operational component officials, DHS headquarters provided 6 of the 13 examples that constituted our final list of mechanisms. We acknowledge that identifying a universe of successful collaborative field mechanisms can be difficult, in part because of the relatively large size of DHS and the breadth of activities involving component agencies. However, systematically collecting information (e.g., related to operational mission, capabilities, and performance) about the mechanisms from the component agencies that sponsor them would yield important information about which mechanisms are effective. Senior DHS headquarters officials stated that although they have limited visibility over the universe of collaborative mechanisms and the specific collaborative practices utilized by the groups, the department does obtain regular knowledge of component operational activities and results. For example, senior OPS officials said that they receive situation reports about daily operational actions broken out by lead component, regardless of whether the operation is affiliated with a particular collaborative mechanism. They also noted the DHS Common Operational Picture (COP), which provides an unclassified, consolidated information hub for homeland security partners to ensure that critical terrorism- and disaster-related information is available. However, senior DHS officials agreed that having increased visibility and additional mechanism information at the headquarters level could benefit departmental and component efforts to improve collaboration in the field and better integrate operations. For example, obtaining and analyzing this information, which DHS has lacked in recent deliberations about revamping its field structure, could provide DHS with a stronger basis for decision making regarding the establishment of new mechanisms, the effective allocation of scarce resources, or other changes to its field structure. Access to this information is all the more important because the gains in mission effectiveness and efficiency that earlier, unadopted regionalization plans might have produced could instead be achieved through other means—including collaborative field mechanisms. DHS’s limited ability to monitor the collaborative mechanisms operating under the DHS umbrella is inconsistent with its own departmental-level strategic goals. Specifically, several key DHS initiatives and documents, including the Quadrennial Homeland Security Review (QHSR), BUR, and the DHS fiscal years 2012-2016 strategic plan, contain strategic goals aimed at greater unification and integration of efforts across individual DHS components.
In particular, DHS’s strategic plan specifically outlines objectives related to the goals of “improving cross-departmental management, policy, and functional integration,” as well as “enhancing intelligence, information sharing, and integrated operations.” DHS’s limited departmental visibility over these mechanisms is also inconsistent with elements of the Standards for Internal Control in the Federal Government, which calls for the establishment of control activities, such as a mechanism to identify and monitor the activities of components within an organization, to help ensure achievement of the organization’s objectives. The internal control objective pertaining to effectiveness and efficiency of operations is of particular relevance to DHS’s oversight of the collaborative mechanisms and relates to the assessment of the efficiency of the mechanisms themselves, in addition to the operational-level visibility that we discuss above. Collecting information on the existing collaborative mechanisms will enable the department to better monitor these mechanisms. Doing so could also better position DHS to judge at a more strategic level which mechanisms offer potential for replication in other geographic or mission areas. Given current budget constraints, it is important for DHS to identify and promote the most effective and efficient collaborative field mechanisms possible. During the course of our review, participants from the four selected DHS collaborative mechanism types provided information on successful practices that enhanced their collaboration, information that could be useful for DHS to collect and disseminate on a broader scale. At each of our 10 site visits, we asked cognizant participants to identify and provide examples of collaborative practices or other factors that they considered particularly important to the success of their group’s collaboration and operations. We evaluated their responses and summarized them into seven broad categories, as shown in figure 3, based upon the practices that were reported most frequently. This summary information provides valuable insights on approaches for enhancing collaboration among the DHS component agencies—information that could also be beneficial for DHS to collect from a larger group of component mechanisms. Among participants that we interviewed, there was consensus that certain practices facilitated more effective collaboration, which, according to participants, contributed to the groups’ overall successes. In many cases, the same or similar successful collaborative practices were reported by participants of different mechanism types as well as by participants of the same mechanism type in different geographic regions. For example, despite having a different mission focus or operating in different geographic regions, participants we interviewed from all three ReCoMs, all four BESTs, the JTT, and both RISCs—the total sample population that we met with—identified three of the seven categories of practices as keys to success: (1) positive working relationships/communication, (2) sharing resources, and (3) sharing information. Furthermore, participants from most mechanisms also drew connections among the successful collaborative factors. For example, participants from all 10 mechanisms stated that forming positive working relationships was tied to better information sharing among them.
Specifically, in our interviews, BEST mechanism officials stated that developing trust and building relationships helps participants respond quickly to a crisis, and communicating frequently helps participants eliminate duplication of efforts. Participants from the ReCoMs, BESTs, and JTT also reported that having positive working relationships built on strong trust among participants was a key factor in their law enforcement partnerships because of the sensitive nature of law enforcement information, and the risks posed if it is not protected appropriately. In turn, building positive working relationships was facilitated by another collaborative factor identified as important by 6 of the 10 mechanisms: physical colocation of participants. Specifically, participants from the mechanisms focused on law enforcement investigations, such as the BESTs and JTT, reported that being physically colocated with members from other agencies was important for increasing the groups’ effectiveness. Participants from one of the three ReCoMs we visited also stated that colocation enables operations planning and database/information sharing. It also helps build trust and overcome cultural barriers among agency participants. Successful collaboration practices can help the participating components mitigate a variety of challenges, and they are generally consistent with the seven key issues to consider when implementing collaborative mechanisms that we identified in our 2012 report on interagency collaboration. DHS leadership could benefit from engaging the mechanisms—soliciting promising collaboration practices information and organizing it through the lens of our seven key collaborative issues. As noted earlier in this report, DHS does not collect this type of information at the departmental level primarily because the mechanisms operate under the components. However, collecting promising practices information from the collaborative mechanisms at the departmental level and disseminating it to components throughout DHS would inform components about specific practices from which they could also benefit. Senior DHS officials agreed with this assessment. In addition, it may be more efficient for a single DHS departmental-level office to collect and disseminate this type of information than all the components individually, especially given DHS’s higher-level, strategic perspective across the department. Also, given that our fieldwork indicated similar collaboration issues are relevant to multiple components, a more centralized DHS clearinghouse of collaborative promising practices information could be more easily accessed by a wide range of DHS component stakeholders than under the current structure, where such information is now stovepiped and may not be readily shared outside of individual components or mechanisms. Key features of interagency collaboration include agencies establishing or clarifying guidelines, agreements, or procedures for sharing information. Among other things, these guidelines, agreements, and procedures should identify and disseminate practices to facilitate more effective communication and collaboration among federal, state, and local agencies. The benefit of sharing promising practices includes the ability to replicate positive program outcomes by leveraging the experiences of different stakeholders with the same or similar goals. 
Key features of interagency collaboration also identify the sharing of promising practices as an example of government agencies building capacity for improved efficiency. Accordingly, DHS component agencies could benefit from better access to collaborative promising practices, as this would help them in their own efforts to leverage the experiences of many collaborative mechanisms. Participants from the 10 collaborative field mechanisms we visited also identified challenges or barriers that made their collaboration across components more difficult. Using the same approach as that for eliciting successful collaborative practices, at each of the 10 locations we visited, we asked cognizant participants to identify challenges to collaboration that they believed had impeded their groups’ operations or effectiveness. We evaluated their responses and created three broad categories, as shown in figure 4, based on the challenges that they reported most frequently. Our discussions with participants representing the 10 mechanisms identified three barriers that participants most frequently believed hindered effective collaboration within their mechanisms: (1) resource constraints, (2) rotation of key personnel, and (3) lack of leadership buy-in. For example, when discussing resource issues, participants from 9 of the 10 collaborative mechanisms said that funding for their group’s operation was critical and identified resource constraints as a challenge to sustaining their collaborative efforts. These participants also reported that since none of the mechanisms receive dedicated funding, the participating federal agencies provide support for their respective representatives assigned to the selected mechanisms. This support included funding for employee salaries, office space, and law enforcement equipment (e.g., night vision capability and surveillance vehicles), among other things. A lack of resources also affected state and local law enforcement participation in some of these collaborative mechanisms, and mechanism participants explained that ensuring state and local participation has been challenging because of resource constraints, which, in some cases, have led to a mechanism missing key participants. For example, participating agencies fund ReCoM positions out of their respective operating budgets—no dedicated ReCoM funding has been provided. As a result, some agencies (such as state and local law enforcement) are not able to participate because of resource constraints. Also, there was a majority opinion among mechanism participants we visited that rotation of key personnel and lack of leadership buy-in hindered effective collaboration within their mechanisms. For example, JTT participants stated that the rotation of key personnel hinders the JTT’s ability to develop and retain more seasoned personnel with expertise in investigations and surveillance techniques. In addition to collecting promising practices information from the collaborative mechanisms and disseminating it to components throughout DHS, collecting and disseminating information on any ways to address identified challenges or barriers to collaboration would similarly help leverage the experiences of other collaborative mechanisms. Collaborative field mechanism participants could also benefit from DHS sharing information related to performance measurement.
While sharing such information is not a challenge to collaboration itself, officials from all mechanisms agreed that metrics that could measure the impact of their collaboration, including whether the benefits of the collaborative mechanisms outweigh the costs, were difficult to establish or did not yet exist. Nonetheless, officials reported that the ReCoMs, BESTs, and JTT have all undertaken efforts to develop output or outcome performance measures to track the accomplishments of their collaborative mechanisms. For example, the fiscal years 2012-2016 BEST Strategic Plan states that BESTs are to be evaluated annually on their overall performance, which is quantified by output enforcement metrics (e.g., number of arrests, indictments, convictions, and seizures). The JTT’s efforts to develop performance measures include identifying emerging threats, risks, and vulnerabilities in the South Texas corridor where it operates. Developing output and outcome measures can provide insight into the performance of each mechanism; however, ReCoM, BEST, JTT, and RISC officials all stated that it is very difficult to develop a metric that isolates the benefits of their collaboration from the benefits that they may have achieved operating separately under their respective agencies. Despite these measurement challenges, ReCoM, BEST, JTT, and RISC officials were able to provide anecdotal examples of the positive benefits of their collaboration and coordination. For example, ReCoM officials in one location told us that they were able to make significant progress toward meeting their goal of “persistent presence” along a coastal ship channel because they had coordinated the schedules of the USCG and CBP Office of Air and Marine resources that conducted these patrols. BEST and JTT officials stressed the value of leveraging their participating agencies’ legal authorities to develop more robust cases, which increased the likelihood that their cases would be successfully prosecuted and that convicted criminals would receive longer sentences. RISC participants in both locations cited their collaborative mechanisms as important to identifying emergency response capability gaps across different levels of government and integrating courses of action to take in response to disasters. Leveraging mechanism participants’ experiences and insights in developing performance metrics to quantify their accomplishments and the impacts of their collaboration, and disseminating the promising practices they have identified, could benefit other DHS collaborative efforts. Effective collaboration within and among federal agencies is important for improving operational success, especially in a resource-constrained environment. DHS component agencies have made progress in developing and evolving collaborative field mechanisms that have allowed them to better coordinate mission activities in the field, and these collaborative efforts are even more important in light of DHS’s decision in 2012 not to pursue a single unified field structure to integrate component field operations. Given the overlapping geographic areas of responsibility and authorities, and the many operational activities that DHS components are conducting, component efforts to collaborate are important.
However, DHS’s limited visibility over the universe of collaborative field mechanisms operating under its purview reduces its ability to maximize the effectiveness and efficiency of these mechanisms to enhance cross-departmental management and integrated operations. DHS senior officials believe the components, not the department, are responsible for the mechanisms’ oversight because the department is more focused on strategic rather than operational-level management activities. We agree that the components are capable of operating and monitoring their collaborative field mechanisms. However, consistent with its own departmental-level strategic goals, we believe that DHS could benefit from greater awareness of the mechanisms themselves and the collaborative practices that they employ. Not only is the department ultimately accountable for the resources that support these mechanisms, but it is also responsible for making important decisions about the overall field structure of its components, and for moving the department closer to its goal of greater component integration. By collecting information about the universe of collaborative mechanisms and developing a fuller understanding of them and the promising practices they employ, DHS could be in a better position to utilize these practices across components to help move the department toward its strategic goal of increased operational integration. To help ensure that any future efforts to analyze or implement changes to DHS’s regional field office structure, including the establishment of collaborative field mechanisms, are informed by current collaborative practices, we recommend that the Secretary of Homeland Security direct the appropriate department official to take the following two actions: (1) collect information on the existing collaborative mechanisms to have better visibility of them, and (2) collect information on promising practices, including such things as potential ways to address any identified challenges or barriers to collaboration as well as any identified performance metrics, from the collaborative mechanisms and disseminate them to components. We provided a draft copy of this report to the Secretary of Homeland Security for review and comments. DHS provided official written comments, which are reprinted in appendix V. In response to our first recommendation, DHS concurred and stated that the Office of Operations Coordination and Planning (OPS), in coordination with other DHS components, as appropriate, will develop a method to enhance the collection of information on collaborative field coordination and integration mechanisms. OPS will schedule appropriate data calls to collect the information and leverage the Homeland Security Information Network as a means for sharing information among the components. DHS also concurred with our second recommendation and stated that OPS, in coordination with other DHS components, as appropriate, will develop and implement a method of collecting and disseminating information to the components regarding promising practices, including challenges or barriers to collaboration, from various field coordination and integration mechanisms. DHS estimated completion of actions related to both recommendations by September 30, 2014. DHS provided technical comments, which we incorporated as appropriate. We also changed some specific descriptions of DHS component operations and removed others because DHS identified them as sensitive.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. We are sending copies of this report to the Secretary of Homeland Security and interested congressional committees as appropriate. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9971 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. The Maritime Operations Coordination Plan was signed in June 2011 by the senior leadership of U.S. Immigration and Customs Enforcement (ICE), U.S. Customs and Border Protection (CBP), and the U.S. Coast Guard (USCG), directing these agencies to form ReCoMs for maritime homeland security enforcement and intelligence integration. USCG, ICE HSI, and CBP serve as the lead agencies responsible for planning and coordinating among stakeholders, and as of June 2013, 32 ReCoMs have been established, aligned with the USCG sectors’ geographic areas of responsibility. In addition to ICE HSI, CBP, and USCG, ReCoM stakeholders include other federal, state, local, tribal, and international agencies including, but not limited to, the Federal Bureau of Investigation (FBI), Drug Enforcement Administration (DEA), U.S. Attorney’s Office (USAO), state agencies, local police departments, and foreign law enforcement partners. According to the Maritime Operations Coordination Plan, a ReCoM was established for each region to coordinate component maritime operational activities. All ReCoM members are responsible for participating in integrated planning efforts with a goal to maintain active patrol and targeted monitoring. The Department of Homeland Security (DHS) Senior Guidance Teams (composed of senior USCG, ICE HSI, and CBP officials) are to assign a working group to monitor ReCoM operational performance, coordination efforts, and information sharing. Accordingly, the components must measure the performance of the ReCoMs to ensure the most effective use of resources. In 2005, the first BEST unit was organized and led by ICE HSI, in partnership with CBP, in Laredo, Texas, and as of June 2013, 35 BESTs have been established throughout the United States. BEST stakeholders include CBP; DEA; the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF); FBI; USCG; USAO; and other key state, local, tribal, and international law enforcement and intelligence resources, which partner with one another to identify, disrupt, and dismantle existing and emerging threats at U.S. land, sea, and air borders. The BEST concept is built upon the guiding principles of colocation and cross-designation. According to the 2012-2016 BEST Strategic Plan, ICE HSI serves as the “executive agent” for BEST and provides a standardized platform of policy and procedure for BEST units, as well as primary resourcing. The BEST units use qualitative and quantitative risk assessment methods to efficiently allocate resources. The National BEST unit (NBU) serves as the programmatic lead on establishment, deployment, and oversight of the BEST units nationwide, including overseeing policy and implementation of the BEST program.
As stated in the 2012-2016 BEST Strategic Plan, BEST units are also evaluated annually on their overall performance, including their effectiveness and efficiency of operations based on established criteria. The overall success of the BEST program is measured by its impact on border-related criminal activity levels. The 2012-2016 BEST Strategic Plan also states that the NBU is required to provide a written evaluation to the ICE HSI Executive Associate Director and Executive Steering Committee within 90 days following the end of each fiscal year. These reports, which are to continue through fiscal year 2016, are to include an evaluation of the field metrics, as well as the implementation of the strategic plan by headquarters, the field offices, and their BEST units. The JTT originated in November 2011 as a CBP-led partnership among the Del Rio area U.S. Border Patrol, CBP Office of Field Operations, and ICE HSI, and was expanded to support the South Texas Campaign (STC). The purpose of the STC is to disrupt and dismantle transnational criminal organizations (TCO). As of August 2013, JTTs had been established across four geographic boundary areas in Del Rio, Laredo, McAllen, and Houston, Texas. In addition to ICE HSI, JTT stakeholders include DEA; FBI; ATF; USAO; U.S. Marshals Service; and state, local, tribal, and international law enforcement agencies. To the greatest extent practicable, each participating agency within the JTT is structured to be colocated and function as a unified team to ensure deconfliction of intelligence information while focusing on STC targets. According to Federal Emergency Management Agency (FEMA) officials, FEMA established the first RISC in 2003 to provide a forum for senior DHS officials to enhance emergency management and homeland security for all hazards through a collaborative, regional approach involving federal, state, local, tribal, nongovernmental organization, and private sector partners. According to FEMA officials, the majority of RISCs include representatives of the following DHS components: FEMA, CBP, ICE, Federal Protective Service, Science and Technology Directorate, National Protection and Programs Directorate, Transportation Security Administration, and USCG, among others. In addition, other federal agencies representing the emergency support community participate in RISCs, such as the Department of Transportation, Department of Defense, and the Department of Agriculture. The authority to establish RISCs derives from the Post-Katrina Emergency Management Reform Act of 2006, which provides the FEMA regional administrators with the responsibility to ensure effective, coordinated, and integrated regional preparedness, protection, response, recovery, and mitigation activities and programs for natural disasters, acts of terrorism, and other man-made disasters (including planning, training, exercises, and professional development) and perform such other duties relating to such responsibilities as the Administrator may require. According to FEMA officials, representatives from the FEMA regions developed a draft RISC charter in 2010, and several regions subsequently used this document as a basis for developing their own charters. For example, a FEMA RISC charter may contain (1) objectives and scope of RISC activities, (2) membership requirements, (3) annual operating costs, and (4) executive committee governance.
According to FEMA officials, in general, most FEMA regions convene RISC meetings on a quarterly or semiannual basis to discuss various topics, such as making preparedness, protection, response, recovery, or mitigation more easily accomplished and increasing regional capability. RISC meetings typically include presentations, workgroups, training workshops, and panel discussions led by DHS components. During these RISC meetings, DHS components are able to share lessons learned and best practices with members, as well as develop plans that identify resource capabilities and integrated courses of action to take in response to disasters. One FEMA official explained that FEMA has not developed any specific performance-reporting requirements regarding the content or output from RISC meetings; however, the value of the coordination and communication gained through these meetings supports various planning, response, and recovery activities.

CTCEU/TTPG: Established in 2003. Lead agency: ICE HSI. Stakeholders: CBP; DOD; the Office of Biometric Identity Management (OBIM); National Counterterrorism Center (NCTC); and DHS Office of Intelligence and Analysis (I&A). Purpose: The CTCEU/TTPG leverages ICE HSI expertise across partnering agencies dedicated to promoting national security. The group leads the DHS National Security Overstay Initiative in cooperation with CBP, OBIM, and NCTC to identify and apprehend nonimmigrants who have overstayed or violated the terms of their admission and pose a potential risk to the national security of the United States, as well as to prevent terrorists and other criminals from exploiting the nation’s immigration system.

HSTF-SE: Established in 2003. Purpose: HSTF-SE is a joint task force that provides the framework for coordination of a unified response command and control organization for all DHS agencies. HSTF-SE is responsible for the development and execution of Operations Plan Vigilant Sentry (OVS), which includes interdiction, detention, protection screening, processing, and repatriation during a mass migration from a Caribbean nation. HSTF-SE is a standing task force that is in effect at all times, although full activation of the task force does not occur until a mass migration plan is implemented.

CBIG: Established in 2006. Lead agency: None. Stakeholders: CBP; ICE; USAO; USCG; and the Puerto Rican Police Department’s Forces United for Rapid Action (FURA). Purpose: CBIG serves to coordinate the operations of USCG, CBP, ICE, and USAO, targeting illegal migration and narcotics trafficking near Puerto Rico and the U.S. Virgin Islands.

OIC: Purpose: OIC provides a centralized location for CBP, along with federal, state, local, and international partners, to gather, analyze, and disseminate operational and strategic data in the Great Lakes region of the northern border for use by frontline agents and officers.

NSARC: Established in 1973. Location: Washington, D.C. Purpose: The purpose of NSARC is to coordinate interagency search and rescue matters. NSARC works with other state and local search and rescue authorities to coordinate implementation of the national search and rescue system.

STBIC: Established in 2012. Purpose: STBIC is a facility designed to intensify and integrate intelligence gathering and sharing activity among law enforcement agencies across South Texas.

ACTT: Established in 2009. Purpose: ACTT was established to counter the threats posed by transnational criminal organizations operating in the Arizona corridor. Specifically, ACTT leverages the capabilities and resources of more than 60 federal, state, local, and tribal agencies in Arizona and the government of Mexico to combat individuals and criminal organizations that pose a threat to communities on both sides of the border.

IOC: Established in 2006. Purpose: Mandated by the Security and Accountability for Every Port Act of 2006 (SAFE Port Act), IOCs were established to improve multiagency maritime security operations and enhance cooperation among partner agencies at 35 U.S. ports. Specifically, IOCs transformed the Coast Guard sector command centers by upgrading their information management tools. IOCs also help port agencies to collaborate on first response, law enforcement, and homeland security operations.

SEWG: Established in 2004. Location: Washington, D.C. Purpose: SEWG is an interagency forum that ensures comprehensive, coordinated interagency awareness of, and federal support to, special events.

We were asked to continue our work on DHS’s efforts to improve collaboration and integrate its field operations. This report (1) assesses the extent to which DHS has identified the collaborative field mechanisms—that is, multiagency groups such as task forces, committees, and teams that enhance stakeholder collaboration across the participating agencies in order to more effectively and efficiently achieve their mission—of its key operational components, and (2) describes factors that participants of selected mechanisms identified that enhance or are challenges to their collaboration, and assesses the extent to which DHS has collected and disseminated successful collaborative practices. To address the first objective, we contacted DHS officials to identify a list of collaborative mechanisms that they deemed to be successful examples of field component collaboration while we also independently identified such mechanisms. Upon receiving DHS’s list of mechanisms, we combined this list with our own and sought input from DHS and the seven key operational component agencies to create a master list of agreed-upon collaborative field mechanisms. Our final list included 13 collaborative mechanisms that DHS and operational component officials agreed were models of collaboration among component agencies in the field. The 13 collaborative mechanisms we identified are used by federal agencies to implement interagency collaborative efforts, such as agencies colocating within one facility or establishing interagency task forces. In addition, all of the identified mechanisms involved multiple DHS component agencies, as well as other federal, state, and local agency participants, and their purpose was to improve operational integration, coordination, and efficiencies among DHS agencies. Of those we identified, 11 of the 13 mechanisms focused on law enforcement activities. For these 13 mechanisms, we examined organizational documents related to mechanism mission, objectives, stakeholder composition, locations, and date organized, among other things. We also interviewed officials from DHS’s Office of Operations Coordination and Planning about the establishment and operation of these 13 mechanism types.
We also interviewed DHS officials and analyzed documentation (i.e., components’ daily activity reports) obtained from responsible senior DHS headquarters officials to identify the extent to which the department has visibility over the collaborative field mechanisms’ activities—including any plans to increase visibility over the mechanisms in the future. We compared DHS’s efforts to identify and collect information on the collaborative field mechanisms with criteria in Standards for Internal Control in the Federal Government, which call for the establishment of control activities, such as a mechanism to identify and monitor the activities of components within an organization, to help achieve the organization’s objectives. To address the second objective, we selected 4 types of collaborative mechanisms in 10 locations from the list of 13—ReCoM, BEST, JTT, and RISC. We based our selection of these 4 types of mechanisms on the following factors: geographic location, continuity of the mechanism (established for at least 16 months), participation of multiple DHS component agencies, and variation in the lead component agency. Except for the JTT, we selected mechanism types that existed in more than one location to allow for geographic comparisons—such as BEST, with 35 locations throughout the United States. To describe the factors that participants of the selected collaborative mechanisms identified that enhance or are challenges to collaboration, we conducted site visits to interview operational component officials directly participating in each of the 4 types of mechanisms, and in total we met with over 55 participants from 10 mechanisms—including officials from three ReCoMs, four BESTs, one JTT, and two RISCs. The BEST, ReCoM, and JTT are law enforcement–focused, while the RISC focuses on emergency management activities and exercises. For each of the selected mechanisms, we also interviewed senior headquarters officials to discuss their views on the successes and challenges experienced with collaboration, including how the successes are replicated and communicated across the mechanisms and how challenges are addressed. At each of the 10 mechanisms we visited, we gathered information from participants on what they believed to be promising practices that helped them to succeed as collaborative mechanisms, as well as the factors they viewed as challenges to their collaboration. We also discussed their efforts to establish performance measures to assess mechanism effectiveness. While we cannot generalize our work from visits to these collaborative mechanisms, we chose these locations to provide examples of the way in which the mechanisms identify, communicate with others, and address the successes and challenges experienced with collaboration. We also reviewed planning, operations, and management integration documents such as strategic plans, annual performance reports, and memorandums of understanding or agreements among the participating agencies. We compared these documents and their responses with the information-sharing and collaboration practices identified in our past work on this subject. Our past work on interagency collaboration has highlighted the importance of agencies establishing or clarifying guidelines, agreements, or procedures for sharing information. These guidelines, agreements, and procedures should identify and disseminate practices to facilitate more effective communication and collaboration among federal, state, and local agencies.
In addition, our prior work has demonstrated the benefits of sharing promising practices as a means to replicate positive program outcomes by leveraging the experiences of different stakeholders with the same or similar goals. We have also identified the sharing of promising practices as an example of government agencies building capacity for improved efficiency. At each of the 10 mechanisms we visited, we noted any alignment or divergence between the mechanisms’ reported successes and challenges and the key features identified in our 2012 report on interagency collaboration. These key features include seven categories: (1) outcomes and accountability, (2) bridging organizational cultures, (3) leadership, (4) clarity of roles and responsibilities, (5) participants, (6) resources, and (7) written guidance and agreements. We interviewed component officials responsible for managing the selected mechanisms and determined that our work and past recommendations related to information sharing and collaborative practices are still valid and were deemed reasonable by the respective officials. We then assessed the extent to which the mechanism participants’ responses regarding integration, coordination, and collaboration practices utilized by their mechanisms aligned with those identified in our 2012 report. See appendix IV for a list of key issues to consider when implementing interagency collaborative mechanisms that were identified in our 2012 report. We interviewed responsible senior DHS headquarters officials to determine the extent to which the department has collected and reported on the collaborative practices identified by the mechanisms. We also interviewed component officials at the selected 4 mechanisms to identify the extent to which information, such as information sharing and collaborative practices, is provided to DHS headquarters officials who are responsible for oversight of the collaborative field mechanisms. We conducted this performance audit from October 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In September 2012, we identified 12 mechanisms that the federal government uses to lead and implement interagency collaboration. Although these mechanisms differ in complexity and scope, our September 2012 report notes that these mechanisms all benefit from the seven features below. Key issues to consider when implementing collaborative mechanisms are listed under each feature.

1. Outcomes and accountability
Have short-term and long-term outcomes been clearly defined? Is there a way to track and monitor progress toward the short-term and long-term outcomes? Do participating agencies have collaboration-related competencies or performance standards against which individual performance can be evaluated? Do participating agencies have the means to recognize and reward accomplishments related to collaboration?

2. Bridging organizational cultures
What are the missions and organizational cultures of the participating agencies? What are the commonalities between the participating agencies’ missions and cultures and what are some potential challenges?
Have participating agencies developed ways for operating across agency boundaries? How did they develop these ways? Have participating agencies agreed on common terminology and definitions?

3. Leadership
Has a lead agency or individual been identified? If leadership will be shared between two or more agencies, have roles and responsibilities been clearly identified and agreed upon? How will leadership be sustained over the short term? How will it be sustained over the long term?

4. Clarity of roles and responsibilities
Have participating agencies clarified the roles and responsibilities of the participants? Have participating agencies articulated and agreed to a process for making and enforcing decisions?

5. Participants
Have all relevant participants been included? Do the participants have full knowledge of the relevant resources in their agency; the ability to commit these resources; the ability to regularly attend activities of the collaborative mechanism; and the appropriate knowledge, skills, and abilities to contribute?

6. Resources
How will the collaborative mechanism be funded? If interagency funding is needed, is it permitted? If interagency funding is needed and permitted, is there a means to track funds in a standardized manner? How will the collaborative mechanism be staffed? Are there incentives available to encourage staff or agencies to participate? If relevant, do agencies have compatible technological systems? Have participating agencies developed online tools or other resources that facilitate joint interactions?

7. Written guidance and agreements
If appropriate, have the participating agencies documented their agreement regarding how they will be collaborating? A written document can incorporate agreements reached in any or all of the following areas: accountability, roles and responsibilities, and resources. Have participating agencies developed ways to continually update or monitor written agreements?

In addition to the contact named above, Stephen L. Caldwell, Director, Dawn Hoff, Assistant Director, and Frederick Lyles, Jr., Analyst-in-Charge, managed this engagement. Chuck Bausell, Eric Hauswirth, Tracey King, David Lutter, Jess Orr, Janay Sam, and Cynthia Saunders made significant contributions to the report.
DHS is the third-largest department in the federal government, with an annual budget of about $60 billion, 200,000 staff, and a broad range of missions. In 2002, DHS was created from 22 legacy agencies. The geographic overlap of these agencies' legacy field office structures was extensive, underscoring the importance of collaboration among them when conducting missions that crossed component boundaries. As a follow-on to GAO's September 2012 report on DHS's efforts to integrate field operations, GAO was asked to review DHS and key operational components' use of collaborative mechanisms. This report (1) assesses DHS's visibility over collaborative field mechanisms established by component agencies, and (2) describes factors that enhance or impede collaboration within these mechanisms, and the extent to which DHS has collected and disseminated successful collaborative practices. GAO analyzed selected mechanisms' guidance; conducted 10 mechanism site visits based on their geographic diversity, among other factors; and compared their practices with collaboration practices identified in previous GAO work. GAO also interviewed DHS and component officials.

Opportunities exist for the Department of Homeland Security (DHS) to enhance its visibility over collaborative field mechanisms (i.e., multiagency groups such as task forces, committees, and teams that enhance stakeholder collaboration to more effectively and efficiently achieve their missions) established by component agencies. DHS, at the departmental level, has limited visibility over the universe and operation of these mechanisms and does not identify information from them that could further enhance collaboration across DHS and inform future DHS decisions. In the absence of a single DHS regional/field structure, DHS components have created collaborative mechanisms to better integrate field operations by better coordinating their missions and sharing information. However, when GAO sought to identify these mechanisms, in conjunction with DHS, senior DHS officials stated that while they maintain regular visibility over component activities--which may involve these collaborative mechanisms--DHS does not collect information on the types of mechanisms and collaborative practices these mechanisms employ because the mechanisms operate under the components, and thus this information was not readily available at the departmental level. DHS officials stated that primary oversight over the mechanisms is the responsibility of the operational components or mechanism participants. However, DHS's own strategic goals emphasize the importance of cross-departmental integration and coordination to enhance DHS's mission, and DHS could benefit--on a strategic level--from greater awareness of these mechanisms and the collaborative practices they employ. DHS is ultimately accountable for the resources that support these mechanisms, and is responsible for decision making about its overall field structure and for moving the department closer to its goal of greater component unification and integration. By collecting additional information on collaborative mechanisms, DHS could achieve better visibility over the universe of existing mechanisms, and thus be better positioned to analyze or implement any future changes to DHS's regional/field structure. Participants from each of the collaborative mechanisms GAO reviewed identified several common factors that enhanced their collaboration, which DHS could benefit from collecting and disseminating on a broader scale.
For example, participants cited sharing resources and information and recognizing successful missions as examples of successful collaboration practices they employed. Officials also cited collaboration challenges, including resource constraints, rotation of key personnel, and lack of leadership commitment. As GAO's fieldwork indicated, similar collaboration issues are relevant to multiple components; thus, DHS leadership could benefit from undertaking a review of collaborative mechanisms to solicit and identify promising practices, and then sharing this information among all components. In addition, given DHS's more strategic perspective, a more centralized DHS clearinghouse could allow collaborative practices information to be collected more efficiently and accessed more easily by a wider range of DHS components than under the current structure, where such information may not be readily shared outside of individual components or mechanisms. Collecting and disseminating information on collaborative practices would allow DHS to inform components about promising practices and lessons learned from which they could benefit. GAO recommends that DHS (1) collect information on existing collaborative mechanisms for better visibility over them, and (2) collect promising practices from the mechanisms and distribute them to components. DHS concurred with the recommendations and identified planned actions to address them.
SSA’s mission is to deliver Social Security services that meet the changing needs of the public. The Social Security Act and amendments established the programs that SSA administers, which include the Old Age, Survivors, and Disability Insurance program: Commonly referred to simply as “Social Security,” this program is one of the nation’s largest entitlement programs and provides monthly benefits to retired and disabled workers, their spouses and children, and the survivors of insured workers who have died; and the Supplemental Security Income program: This is a needs-based program financed from general tax revenues that provides benefits to aged adults, blind or disabled adults, and children with limited income and resources. According to SSA, in fiscal year 2011, about 54 million people received benefits from the Old Age, Survivors, and Disability program, and over 8 million people received benefits from the Supplemental Security Income program. Collectively, about 155 million people work and pay Social Security taxes. The agency’s fiscal year 2011 expenses totaled about $12.4 billion to support its programs. SSA relies extensively on IT to administer its programs and support related activities. Specifically, its systems are used to, among other things, handle millions of transactions on SSA’s toll-free telephone number, maintain records for the millions of beneficiaries and recipients of SSA’s programs, evaluate evidence and make determinations of eligibility for benefits, issue new and replacement Social Security cards, and process earnings items for crediting to workers’ earnings records. However, as the agency’s systems have aged, SSA has faced challenges in carrying out its increasing workload. Specifically, many of SSA’s existing software systems were developed in the 1960s and 1970s and are written in older computer programming languages or are past their designed life cycle. While the agency has made technical and functional upgrades throughout the years, it continues to face challenges because of the need to store, process, and share increasing amounts of data and to transition to Web-based, online access for SSA data and services, among other factors. Accordingly, in its most recent Agency Strategic Plan, SSA has identified IT as a key foundational element to achieving success in meeting its goals. Recognizing the challenges facing its IT environment, the agency has stated that it plans to, among other things, develop and implement a common system for processing disability cases, increase its use of online services for access to benefits and information, and automate its processes for reporting information. SSA’s Office of Systems is responsible for developing, overseeing, and maintaining the agency’s IT systems. Composed of eight component offices and approximately 3,300 staff, the Office of Systems has responsibility for the agency’s IT. SSA uses its capital planning and investment control process to manage its software development projects. This process is intended to meet the objectives of the Clinger-Cohen Act of 1996 by providing a framework for selecting, controlling, and evaluating investments in IT to help ensure that they meet the strategic and business objectives of the agency.
This process requires a series of reviews by executive oversight bodies, including the agency’s Strategic Information Technology Assessment and Review board, to ensure that IT projects are selected that best meet the agency’s goals; that, once selected, they are performing within expected schedule and cost parameters; and finally, that once implemented, these projects are delivering results. In June 2011, in an effort to increase efficiency, the Commissioner of Social Security announced the realignment of Chief Information Officer (CIO) functions and associated personnel. As part of this realignment, the Office of the CIO was eliminated, and most of its responsibilities for managing IT, along with the IT budget, were reassigned to the Office of Systems. Previously, key duties of the CIO were to select and prioritize IT investments and oversee the review and approval of the annual IT budget, while the Office of Systems was responsible for managing the acquisition, development, and maintenance of IT projects. Under the realignment, the Deputy Commissioner for Systems—who heads the Office of Systems—assumed the major responsibilities of the CIO. Since 2001, SSA has reported spending more than $5 billion on the development, modernization, and enhancement of its IT systems and capabilities. SSA officials identified 120 initiatives undertaken from 2001 to 2011 that the agency considered to be key investments in modernization. These comprise a subset of the hundreds of projects and modernization activities SSA undertakes yearly, which vary greatly in level of effort, scope, and cost. These initiatives affected all of the agency’s main program areas: According to managers within SSA’s Office of Disability Systems, in an effort to reduce backlogs of disability hearings, the agency implemented a process for creating electronic “folders” for each applicant, to replace the existing paper-based process. This initiative included capabilities for electronically viewing an applicant’s folder, electronic screening for faster disability determinations, and Internet access to information on disability hearings and determinations. The Office of Retirement and Survivors Insurance Systems took steps to improve outdated legacy systems and respond to legislation or other mandates requiring new system functionality. These efforts included integrating stand-alone “post-entitlement” processes, facilitating online application for benefits, and converting a key database to a more modern, industry-standard one. Managers from the Office of Applications and Supplemental Security Income described initiatives to modernize large legacy databases and facilitate data sharing to streamline the claims process. These included enhancements to the electronic death registration process and the development of a Web application enabling access to data from multiple systems. SSA officials described initiatives in the area of electronically exchanging data with external partners, including states and private-sector partners such as banks and credit bureaus. SSA also noted efforts to streamline the process for administering Social Security cards, such as introducing safeguards against counterfeiting and replacing its legacy printers. In addition to these initiatives, SSA undertook a project to establish a disaster recovery capability at a secondary computing site. This project provided for continuity of operations, continuous processing of SSA’s workload, and backup of the agency’s IT assets, among other capabilities.
While these improvements have yielded benefits, SSA still has a number of other major efforts under way to continue the modernization of its IT environment. These efforts involve completing the conversion of the agency’s legacy Master Data Access Method database system (used to support the storage and retrieval of SSA’s major program master files) to a modern, industry-standard database system; transitioning from its legacy system for processing retirement and survivors’ claims to a single, unified system that integrates initial and post-entitlement actions; streamlining operations and reducing duplication in disability databases and transitioning from multiple and fragmented applications to a single, unified case processing system; enhancing and refreshing telecommunications equipment and improving connectivity and bandwidth for data, voice, and video communications; and supporting enhancements to SSA’s Medicare initiatives, including changes required by the Patient Protection and Affordable Care Act, which are intended to improve the process for verifying the name, Social Security number, and other data on Medicare earnings reports. SSA officials noted that the agency faces several challenges in successfully carrying out these modernization efforts. These include planning for system changes within a single fiscal year budget cycle, a practice that limits the agency’s ability to make long-term modernization plans; devoting significant resources to the maintenance of existing legacy systems because of large quantities of legacy code; and diverting resources from long-term projects to shorter-term immediate requirements, such as those arising from legislative changes. Compounding these challenges, we found that SSA has not fully established performance measures or a post-implementation review process that would allow it to determine the progress it is making in its modernization efforts. Federal law requires agencies to identify performance measures for their IT investments, and we have previously reported that comprehensive measures are essential for gauging the progress and benefits of IT investments. However, while SSA developed performance measures for most of its 17 major modernization investments for fiscal year 2010, for 3 of these investments it did not identify any measures in one of the four management areas identified by the Office of Management and Budget (OMB). Moreover, the measures SSA developed did not always allow for assessments of each project’s effectiveness in meeting agency goals. For example, these measures did not always (1) identify how each project is to contribute to expected benefits; (2) include measures of investments’ effectiveness in meeting goals, requirements, or mission results; or (3) provide the means for measuring progress toward specific modernization goals. In addition, SSA has not conducted post-implementation reviews of its IT projects or systems, as called for by OMB guidance. Such a review should confirm the extent to which planned benefits were achieved, determine the cost-effectiveness of the project, and identify lessons learned and opportunities for improvement. While SSA conducts assessments of completed initiatives, these assessments lack key elements called for by OMB that would provide assurance that modernization and other IT projects are delivering expected benefits at acceptable costs and that SSA is making progress in meeting its goals.
Comprehensive strategic planning is essential for successfully carrying out large-scale efforts such as SSA’s IT modernizations. Key elements of such planning include developing an IT strategic plan and an enterprise architecture that, together, outline modernization goals, measures, and timelines. An IT strategic plan serves as an agency’s vision and helps align its information resources with its business strategies and investment decisions. As such, it provides a high-level perspective of the agency’s goals and objectives, enabling the agency to prioritize how it allocates resources; proactively respond to changes; and communicate its vision and goals to management, oversight bodies, and external parties. The enterprise architecture helps to implement the strategic vision by providing a focused “blueprint” of the organization’s business processes and technology that supports them. It includes descriptions of how the organization operates today, how it intends to operate in the future, and a plan for transitioning to the target state. It further helps coordinate the concurrent development of IT systems to limit unnecessary duplication and increase the likelihood that these systems will inter-operate. SSA developed an IT strategic plan in 2007 to guide its modernization efforts; however, the plan is outdated and may not be aligned with the agency’s overall strategic plan. Specifically, because it has not been updated since 2007, the plan contains elements that are no longer relevant to SSA’s ongoing modernization efforts. For example, the plan discusses projects that have largely been completed, does not reference current information security requirements, and does not reflect current staffing needs. Further, it does not reflect the way in which modernization decisions are driven by the agency’s Strategic Information Technology Assessment and Review board. The currency of the IT strategic plan is further called into question by the fact that the agency updated its overall Agency Strategic Plan in 2008 and again in 2012. Thus, the IT strategic plan may no longer be aligned with the agency’s broader goals. In the absence of an updated IT strategic plan, SSA has relied on a number of program activities to guide its modernization efforts, such as identifying and prioritizing IT modernization investments during its annual investment review process and developing high-level descriptions of projects in each of the agency’s portfolios. However, these activities are based on short-term budget cycles and do not provide a long-term strategic vision with detailed steps and milestones. SSA officials stated that they are updating the IT strategic plan; however, it has yet to be finalized or approved. In addition, SSA has developed an enterprise architecture, but it is missing key components. Specifically, the architecture captures certain foundational information about the current and target states of the organization, such as current business processes and business outcomes, to assist in evolving existing information systems and developing new ones. Nevertheless, the architecture lacks important content called for by federal CIO Council and OMB guidance that would allow the agency to more effectively plan its investments and achieve its vision of modernized systems and operations. Specifically, the architecture lacks key elements that would establish the specific steps and direction to reach its vision of modernized systems by 2016. 
In particular, the agency has not developed a service-oriented architecture road map that would, among other things, articulate the changes and growth in IT capabilities over time and provide a conceptual plan that establishes a basis for developing more detailed project plans. Further, SSA has not conducted an enterprise gap analysis to identify the differences between its current and target states to enable the development of a plan for transitioning from the current to the target state. SSA also has not developed quantitative performance expectations for the target state or analyzed the flows of information among the agency’s business processes. Without a long-term strategic vision and an enterprise architecture that provides details on how this vision is to be executed, SSA lacks assurance that its modernization initiatives will effectively and efficiently support its goals and mission. As mentioned earlier, in 2011, SSA realigned the functions of its Office of the CIO, consolidating major responsibilities for the management and oversight of IT in its Office of Systems. Federal law, specifically the Clinger-Cohen Act of 1996, requires the heads of executive branch agencies to designate a CIO with key responsibilities for managing an agency’s IT resources. As we have previously reported, to carry out these responsibilities effectively, CIOs require sufficient control over IT investments, including control over the IT budget and workforce. Under the realignment, key responsibilities of the CIO and Deputy Commissioner for Systems were merged into the Office of Systems. Specifically, this arrangement gave the Office of Systems responsibility for, among other things, oversight and management of IT budget formulation; systems acquisition, development, and integration; the IT capital planning and investment control process; workforce planning and allocation of resources to IT projects; IT strategic planning; enterprise architecture; IT security; and IT operations. If implemented appropriately, this organizational structure should allow for effective oversight and management of the agency’s systems and modernization initiatives. However, we found in our review that the realignment was undertaken without the benefit of an analysis of the impact of this significant organizational change. Specifically, SSA did not develop a management plan that would describe the challenges associated with the realignment or strategies for addressing them, along with time frames, resources, performance measures, and accountability structures. Further, SSA did not analyze the roles and responsibilities needed to support the allocation of functions under the realignment. Without such an analysis, it cannot be determined whether the reassignment of staff that occurred as a result of the realignment represents an optimal allocation of resources. In addition, SSA has not updated its capital planning and investment control guidance to reflect the realignment. This guidance sets forth the process and responsibilities for managing the selection, control, and evaluation of SSA’s IT investments. However, under the realignment, certain elements of the existing guidance are obsolete, such as the requirement for independent CIO reviews of IT investment proposals. SSA officials stated that the guidance was being updated and would be reviewed internally; however, they could not provide a time frame for the approval and implementation of the revised guidance.
Having updated guidance is critical to ensuring that responsibilities for management and oversight of the agency’s IT investments are being carried out effectively under the realigned organizational structure. In our report, we made a number of recommendations to SSA to address the challenges it faces in carrying out its IT modernization efforts. Specifically, we recommended that SSA: Ensure that performance measures are established for IT investments in each of OMB’s four management areas and that they allow for measurement of progress in meeting modernization goals. In updating the agency’s IT strategic plan, ensure that it includes key elements, such as results-oriented goals, strategies, milestones, performance measures, and an analysis of interdependencies among projects and activities, and is used to guide and coordinate modernization efforts. Establish an enterprise architecture that includes key elements, such as a service-oriented architecture road map, a gap analysis, performance targets, and descriptions of information flows and relationships. Define roles and responsibilities of realigned IT staff and develop and clearly document updated investment review guidance. In commenting on a draft of our report, SSA neither agreed nor disagreed with our recommendations. However, the agency provided responses to each of the recommendations, as well as more general comments on our report’s findings. SSA described steps it is taking that would address elements of the recommendations related to planning, enterprise architecture, and IT oversight, while it took issue with other elements of the recommendations, including the level of detail that an IT strategic plan should contain and the need for more comprehensive measures. We continue to believe these recommendations are warranted. (Please see the “Agency Comments and Our Evaluation” section of our report for more details on SSA’s comments and our response.) In summary, while SSA has undertaken important initiatives that have resulted in improvements to its processes, significant efforts remain for it to fully meet its goals for modernizing its IT environment. Ensuring that it is successful in meeting these goals will be difficult without the agency establishing effective tools for measuring progress and performance and without comprehensive strategic planning. SSA’s realignment of the CIO responsibilities provides an opportunity for effective management and oversight of the agency’s systems modernization efforts; however, this effectiveness may well be hindered without appropriate implementation of the realignment, including defined roles and responsibilities and updated oversight guidance. Chairman Johnson, Ranking Member Becerra, and Members of the Subcommittee, this concludes my statement. I would be pleased to answer any questions that you may have at this time. If you have any questions regarding this statement, please contact Valerie C. Melvin, Director, Information Management and Technology Resources Issues, at (202) 512-6304 or [email protected]. Other individuals who made key contributions include Christie Motley, Assistant Director; Michael Alexander; David Hong; Alina Johnson; Lee McCracken; and Scott Pettis. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This hearing is on the Social Security Administration’s (SSA) efforts to modernize its information technology (IT) systems and environment. As you know, SSA is responsible for delivering services that touch the lives of virtually every American, and the agency relies heavily on IT to do so. Its computerized information systems support a range of activities, from the processing of Disability Insurance and Supplemental Security Income payments to the calculation and withholding of Medicare premiums, and the issuance of Social Security numbers and cards. Last fiscal year, the agency spent nearly $1.6 billion on IT. As SSA’s systems have aged and its workload has increased, the agency has committed to investing in the capacity and modern technologies needed to update its strained IT infrastructure. In addition, the agency has recently undertaken a realignment of its IT governance structure, including the responsibilities of its Chief Information Officer (CIO). At your request, over the past year, we have been examining SSA’s modernization efforts. The specific objectives of our study were to (1) determine SSA’s progress in modernizing its IT systems and capabilities; (2) evaluate the effectiveness of SSA’s plans and strategy for modernizing its systems and capabilities; and (3) assess whether the realignment of the agency’s CIO responsibilities allows for effective oversight and management of the systems modernization efforts. Since 2001, SSA has reported spending more than $5 billion on the development, modernization, and enhancement of its IT systems and capabilities. SSA officials identified 120 initiatives undertaken from 2001 to 2011 that the agency considered to be key investments in modernization. These comprise a subset of the hundreds of projects and modernization activities SSA undertakes yearly, which vary greatly in level of effort, scope, and cost. These initiatives affected all of the agency’s main program areas: According to managers within SSA’s Office of Disability Systems, in an effort to reduce backlogs of disability hearings, the agency implemented a process for creating electronic “folders” for each applicant, to replace the existing paper-based process. This initiative included capabilities for electronically viewing an applicant’s folder, electronic screening for faster disability determinations, and Internet access to information on disability hearings and determinations. The Office of Retirement and Survivors Insurance Systems took steps to improve outdated legacy systems and respond to legislation or other mandates requiring new system functionality. These efforts included integrating stand-alone “post-entitlement” processes, facilitating online application for benefits, and conversion of a key database to a more modern, industry-standard one. Managers from the Office of Applications and Supplemental Security Income described initiatives to modernize large legacy databases and facilitate data sharing to streamline the claims process. These included enhancements to the electronic death registration process and the development of a Web application enabling access to data from multiple systems. SSA officials described initiatives in the area of electronically exchanging data with external partners, including states and private-sector partners such as banks and credit bureaus. SSA also noted efforts to streamline the process for administering Social Security cards, such as introducing safeguards against counterfeiting and replacing its legacy printers. 
Comprehensive strategic planning is essential for successfully carrying out large-scale efforts such as SSA’s IT modernizations. Key elements of such planning include developing an IT strategic plan and an enterprise architecture that, together, outline modernization goals, measures, and timelines. An IT strategic plan serves as an agency’s vision and helps align its information resources with its business strategies and investment decisions. As such, it provides a high-level perspective of the agency’s goals and objectives, enabling the agency to prioritize how it allocates resources; proactively respond to changes; and communicate its vision and goals to management, oversight bodies, and external parties. The enterprise architecture helps to implement the strategic vision by providing a focused “blueprint” of the organization’s business processes and technology that supports them. It includes descriptions of how the organization operates today, how it intends to operate in the future, and a plan for transitioning to the target state. It further helps coordinate the concurrent development of IT systems to limit unnecessary duplication and increase the likelihood that these systems will inter-operate. SSA developed an IT strategic plan in 2007 to guide its modernization efforts; however, the plan is outdated and may not be aligned with the agency’s overall strategic plan. Specifically, because it has not been updated since 2007, the plan contains elements that are no longer relevant to SSA’s ongoing modernization efforts. As mentioned earlier, in 2011, SSA realigned the functions of its Office of the CIO, consolidating major responsibilities for the management and oversight of IT in its Office of Systems. Federal law, specifically the Clinger-Cohen Act of 1996, requires the heads of executive branch agencies to designate a CIO with key responsibilities for managing an agency’s IT resources. As we have previously reported, to carry out these responsibilities effectively, CIOs require sufficient control over IT investments, including control over the IT budget and workforce. Under the realignment, key responsibilities of the CIO and Deputy Commissioner for Systems were merged into the Office of Systems. Specifically, this arrangement gave the Office of Systems responsibility for, among other things, oversight and management of IT budget formulation; systems acquisition, development, and integration; the IT capital planning and investment control process; workforce planning and allocation of resources to IT projects; IT strategic planning; enterprise architecture; IT security; and IT operations.
ERISA was enacted to protect participants in employer-based pension and welfare benefit plans. After several highly visible pension plan failures and abuses in the 1960s and early 1970s, the Congress enacted ERISA in 1974. Although it was established primarily as a pension law, ERISA also regulates welfare benefits—including health care benefits—that participants and their beneficiaries receive through employers. ERISA lays out the framework within which employer-based health benefit plans must operate. ERISA requires plans to (1) designate a named fiduciary to administer a plan in the sole interest of the participants, (2) provide pertinent documents and plan-related information to participants, and (3) file annual reports with DOL—the federal agency responsible for overseeing ERISA’s implementation. However, ERISA’s regulatory requirements for employer-based health care plans are not nearly as comprehensive as are those for pension plans. ERISA’s requirements do not focus solely on health care but apply to a variety of employer-based welfare benefit plans. Also, ERISA provides no financial or solvency standards to which employer-based health benefit plans must adhere. By enacting ERISA, the Congress created a federally administered regulatory scheme that applies, with some exceptions, to all employer-based pension and welfare benefit plans, including employer-based health plans. The Congress included a section in ERISA that states that ERISA supersedes, or preempts, all state laws that “relate to” an employee benefit plan. According to employer groups, this “preemption” provision helps eliminate problems for multistate employers who could face conflicting and burdensome state statutes and regulations when they provide health benefits in more than one state. Because the states have traditionally been granted the power to regulate the business of insurance, the Congress “saved” insurance laws from ERISA’s preemption provision. That is, ERISA cannot supersede a state insurance law. However, an employer-based health plan cannot be deemed to be a health insurer for the purposes of being regulated by state insurance laws under this “savings” clause. Employers who choose to provide health care benefits can do so in different ways. An employer can self-fund the coverage—that is, assume the financial risk associated with health insurance. Because self-funded plans are not considered to be an insurance product, they are exempt from state insurance laws and are therefore regulated solely through ERISA. Alternatively, an employer can purchase a health insurance product directly from an established health insurer. State laws regulating health insurers are saved from preemption. Consequently, the states can regulate the insurance product that the ERISA plan purchases. For example, an insurance product purchased by an employer would have to cover all state-mandated health care benefits, but all those benefits would not have to be covered in a self-funded plan. Similarly, such a product would be subject to state law requirements for grievance and appeal procedures. The distinction between self-funded and purchased health plans does not apply to the remedies available under ERISA for benefit claims disputes under employee benefit plans. With respect to these remedies, ERISA does not distinguish between the disputed benefit’s being provided through an employer-purchased plan or a self-funded plan. 
As the Supreme Court has found, ERISA preempts state law claims for compensatory damages that result from benefit denials because the state laws governing such claims “relate to” employee benefit plans. Furthermore, under its preemption clause, ERISA may or may not supersede state tort laws that govern the kinds and amounts of compensation that may be available for health-care-related injuries. Most workers and their families in the private sector receive their health insurance coverage through employers. According to DOL statistics, about 125 million people receive health insurance through about 2.5 million employer-based health care plans covered by ERISA. While the exact number of people who receive health care through self-funded employer-based plans is unknown, several studies have estimated that approximately 40 percent of insured people are enrolled in self-funded plans, plans that are free from state insurance regulation. During the past decade, the number of people enrolled in employer-based managed health care plans has continued to increase. About 29 percent of those who received health insurance through an employer-based plan were enrolled in some type of managed care in 1988, according to KPMG Peat Marwick. Figure 1 shows that a far greater percentage of people who received health insurance through an employer in 1995 were enrolled in managed care plans than in fee-for-service, even compared with just 2 years earlier. Furthermore, four times as many employers offered no fee-for-service option in 1997 as in 1988. Paying for health care in the United States has become increasingly expensive. In the 1980s, employers in the private sector, who shoulder much of the health care costs, were becoming particularly attuned to the alarming speed at which these costs were rising. As cost growth continued at double-digit rates, employers began to search for more economical ways of providing health care benefits while maintaining or improving the quality of care. In their search, many employers began to turn away from traditional fee-for-service health care and look to managed care—which, among other things, selectively contracts with providers and manages the use of services. This transition to managed health care has made benefit determinations more critical for participants in employer-based plans. For many years, private-sector employers relied on traditional fee-for-service health insurance to provide their employees with health care benefits. Health insurance became increasingly expensive, partly because participants had virtually an unlimited choice of health care providers with few controls on service use or costs. Traditional fee-for-service insurers paid the bills for covered services and typically did not become involved in medical treatment decisions. These insurers played primarily a financial role and left medical decision-making to physicians and hospitals. Under a traditional fee-for-service health insurance plan, (1) the attending physician usually decided when medical services were necessary, (2) the patient received the services, and (3) the insurer either paid or did not pay, depending on an independent retrospective review of the claim. Therefore, benefit disputes usually focused on whether the insurer would pay, not on whether services would be provided. Generally, managed health care attempts to contain costs by addressing both the price and quantity of health care services. 
Managed care plans attempt to ensure that services provided to plan enrollees are necessary, delivered efficiently, and priced appropriately. Managed care covers a broad spectrum of health care delivery arrangements and financing. Types of managed care plans include health maintenance organizations (HMO), preferred provider organizations (PPO), and point of service (POS) plans. HMOs—the oldest form of managed care—operate under several different models. For example, staff model HMOs employ health care providers directly and often serve only enrolled HMO patients at facilities owned by the HMOs. Independent practice association (IPA) model HMOs contract with providers who serve other patients as well as HMO enrollees in the providers’ own offices. Managed care plans may use different methods to control access to care, but prospective utilization review (UR)—used to determine in advance the medical necessity or appropriateness of more costly, nonroutine health care services—is distinctive. Prospective UR adds a layer of review to the decision-making between attending physicians and their patients. In managing patient care, most managed care plans have adopted prospective UR procedures. Prospective UR determines whether the attending physician’s proposed course of medical treatment and proposed service location are necessary based on clinical criteria. Managed care plans often assume the administrative function of making benefit determinations—that is, determining whether a specific treatment or procedure is covered by the employer’s plan. Plans may assume this function when employers contract with HMOs and other types of managed care to provide health care services for ERISA participants or when the plans administer employer-based ERISA self-funded plans. Prospective UR procedures vary among managed care plans. Plans differ in the services requiring prior authorization, the type of personnel making decisions, and the criteria used to determine medical necessity. Plan-based medical personnel such as physicians or nurses may be involved at different stages of the process and may exercise independent clinical judgment rather than relying exclusively on plan-specified criteria. However, they exercise this judgment for the purpose of determining plan coverage for the service in question. Such decisions are made independently from the decisions made by a patient’s attending physician. (The attending physician and plan officials may discuss the plan’s benefit coverage decision.) Figure 2 shows that compared with traditional fee-for-service care, the coverage decisions at managed care plans for some services are often made through a prospective UR process—before the patient receives health care services. Ultimately, the health plan decides whether a particular service is covered based on the review conducted by its agents. A health plan’s decision to deny coverage does not preclude the attending physician from providing treatment—if the patient can obtain other funding to pay for it. Nonetheless, for financial or other reasons, the patient may not obtain the recommended service. Consequently, benefit disputes involving managed care plans often focus on whether the plan should compensate the patient for any injuries or damages that may have occurred because the disputed service was not provided. Plan participants who believe they have been denied a plan benefit can seek to reverse the denial through the plan’s claims appeal process. If participants are unsuccessful, they can sue the plan as a last resort.
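The timing difference described above—coverage decided before services are rendered under prospective UR, versus claims reviewed after services are rendered under fee-for-service—can be illustrated with a minimal sketch. The service names, plan criteria, and settings below are hypothetical and are not drawn from any actual plan; the sketch simply contrasts the order in which the coverage decision and the delivery of care occur.

```python
# Minimal illustrative sketch of the timing difference described above.
# The service names, plan criteria, and settings are hypothetical.

# A hypothetical plan's clinical criteria for nonroutine services.
PLAN_CRITERIA = {
    "imaging_study": {"prior_authorization_required": True,
                      "approved_settings": {"outpatient imaging center"}},
    "routine_office_visit": {"prior_authorization_required": False},
}

def fee_for_service(service):
    """Traditional fee-for-service: the attending physician decides, the
    patient receives the service, and the insurer reviews the claim
    retrospectively to decide whether to pay."""
    service_received = True                    # care is delivered first
    claim_paid = service in PLAN_CRITERIA      # retrospective claim review
    return service_received, claim_paid

def prospective_utilization_review(service, proposed_setting):
    """Managed care prospective UR: the plan decides coverage before the
    service is provided; if coverage is denied, the patient may forgo care
    unless other funding is available."""
    criteria = PLAN_CRITERIA.get(service)
    if criteria is None:
        covered = False
    elif criteria["prior_authorization_required"]:
        covered = proposed_setting in criteria["approved_settings"]
    else:
        covered = True
    service_received = covered                 # assumes no other funding source
    return service_received, covered

print(fee_for_service("imaging_study"))                                   # (True, True)
print(prospective_utilization_review("imaging_study", "hospital"))        # (False, False)
print(prospective_utilization_review("imaging_study",
                                     "outpatient imaging center"))        # (True, True)
```

In the fee-for-service path the patient receives the service regardless of how the later claim review turns out, whereas in the prospective UR path a denial can mean the service is never obtained, which is why disputes under managed care tend to concern access to care rather than payment after the fact.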
If participants are successful in court, the only remedy under ERISA allows them to receive the denied benefit and, at the court’s discretion, attorney fees. Groups representing consumers are concerned that ERISA’s remedy does not sufficiently deter inappropriate benefit denials and that it is difficult to pursue claims in court. Conversely, employer groups believe that ERISA already provides the appropriate mechanism for protecting plan participants. ERISA requires that employer-based health benefit plans provide participants with information on covered benefits in a summary plan description (SPD). The SPD gives details on the plan and describes the rights, benefits, and responsibilities under the plan. Plans that meet federal standards for HMOs are permitted to omit certain information from their SPDs and are deemed to satisfy ERISA’s requirements for resolving benefit disputes. ERISA requires plans to have a system for resolving benefit disputes. The minimum procedures that an employer-based health benefit plan must follow when denying a benefit and for resolving any dispute that arises from the denial are specified in ERISA and its implementing regulations, which were published in 1977. Generally, as shown in table 1, it can take up to 1 year to complete the benefit denial and resolution process: (1) plans have 90 days in which to deny a benefit, although they can request an extension of up to 90 days; (2) a participant has up to 60 days after the denial to appeal, that is, to request a review of the decision; and (3) the plan must resolve the appeal promptly, within 60 days, although an additional 60 days can be requested under special circumstances. ERISA’s requirements for handling benefit appeals are limited in some respects. For example, the regulations contain no provisions for expedited appeals. Furthermore, there is no provision for an independent review of the plan participant’s appeal. The same entity that initially denied the benefit can make the appeal decision. If not satisfied with the results of the appeal process, a plan participant has the final recourse of suing the plan. ERISA provides that when a participant is not satisfied with the results obtained through a plan’s appeal process, the only way to resolve the benefit denial is to sue the plan to obtain the benefit. This remedy applies to participants in all employer-based health plans, both those self-funded and those purchased from an insurer, except those sponsored by governments and churches. While this type of civil suit can be filed initially in either a federal or state court, frequently the cases in state courts are removed to a federal court on a defendant’s motion. If successful in court, however, the participant is entitled only to receive the denied benefit and, at the court’s discretion, attorney fees. If a participant establishes in a lawsuit that a plan wrongfully denied a benefit, ERISA authorizes the court to order that the benefit be provided. ERISA does not contain any provisions for compensating for damages that may occur because of the benefit denial, such as lost wages, additional health care costs, or pain and suffering. Nor does ERISA provide for any punitive damages. Generally, when a benefit is refused or a claim is denied, a participant’s ultimate recourse is to sue under ERISA. (The participant may be required first to go through an administrative appeal process.) In such a case, the participant asserts that a service promised through the employer-based health care plan was not provided.
Under ERISA, the plan’s fiduciary is responsible for protecting the interests of the participants. ERISA states that fiduciaries have a duty to act “solely in the interest of the participants and beneficiaries and for the exclusive purpose of providing benefits to participants and their beneficiaries. . . .” In this role, a fiduciary must be prudent and act according to the plan documents. Under ERISA, the failure to pay a valid claim may constitute a breach of a fiduciary’s responsibilities. However, courts ordinarily uphold the fiduciary’s decision unless the participant can show that the decision was “arbitrary and capricious.” Arbitrary and capricious behavior may include such activities as (1) using undisclosed medical criteria that are more restrictive than those used by other insurers, (2) basing a denial on an ambiguous provision of the benefit agreement, or (3) failing to comply with ERISA’s notification and reconsideration procedures if that failure prevented a request for reconsideration of an adverse determination. Benefit coverage determinations have taken on a more critical role as employers have changed from traditional fee-for-service health care to managed care. Benefit denials can now more easily restrict access to care because benefit determinations are often made before services are provided, especially for nonroutine, high-cost services or treatments. This is a different situation from traditional fee-for-service coverage. A participant may not be financially able to obtain more costly services if the plan does not pay. According to representatives of consumer groups, ERISA’s remedy does not deter inappropriate benefit denials. ERISA provides no penalty when benefits are denied inappropriately. If found to be in the wrong, the employer-based plan must then only provide the benefit that had been denied. In addition, consumer groups believe that saving money gives employer-based health plans an incentive to deny benefits. Plan participants may face other problems pertaining to benefit denials, according to consumer group representatives. For example, participants may not be aware of or understand the information contained in the SPDs that explains the benefit appeal process or ERISA’s remedy. A recent study found that according to state regulators, most plan participants neither read nor understood plan documents. Furthermore, it may be difficult for a plan participant to pursue a benefit denial in court because the participant may not be able to find or afford legal representation. Attorneys often accept cases on a contingency fee basis—they receive a percentage of the final settlement or award made by the court. However, the most a participant can win in an ERISA case is the denied benefit and, at the court’s discretion, reasonable attorney fees. There is no chance for a large monetary award based on damages. Therefore, attorneys have little financial incentive to take these cases. According to groups that represent employers, the Congress sought to strike a balance in the remedy it provided for benefit denials when it enacted ERISA. Representatives of employer groups believe that the Congress chose to include a remedy that was not overly burdensome on employers because they provide health care benefits voluntarily. These groups believe that ERISA’s remedy must consider the employers who voluntarily provide health care coverage and the plan participants who expect to receive services they believe are covered by the plan.
Furthermore, the ERISA Industry Committee—which represents major private employers regarding public policy and related matters affecting employee benefit plans—believes that despite employers’ shift since 1974 from predominantly fee-for-service health benefit plans to mostly managed care plans, ERISA continues to provide an adequate framework for protecting plan participants. According to this employer group, ERISA protects participants by requiring that plan fiduciaries act in their sole interest and that denied claims receive fair hearings. Employers have no incentive to deny benefits that are rightly due to plan participants, according to several employer groups. Employer groups say that employers provide health benefits to keep their employees healthy so that the employees can be productive. In addition, representatives of the American Association of Health Plans believe that significant counterbalances prevent inappropriate benefit denials. According to the association, health plans are also concerned about making improper benefit denials. For example, health plans could face a loss of business and reputation and could risk incurring unfavorable publicity. Furthermore, representatives of employer groups said that plan fiduciaries can be assessed penalties if they are found to make arbitrary and capricious benefit determinations. For example, they said that ERISA provides the authority to ban individuals or entities from acting as plan fiduciaries, effectively putting them out of business. In principle this sanction is available, but in practice it is rarely invoked. ERISA’s preemption clause generally prevents plan participants from holding managed care plans directly liable under state laws for the damages that result from their negligent acts or omissions, and it complicates plan participants’ ability to hold managed care plans indirectly liable for the negligence—medical malpractice—of their providers. This situation has caused much debate. Federal courts are addressing the scope of ERISA preemption as it relates to negligence in an employer-based managed care arrangement. Physicians and other health care providers can be held directly liable for their own negligent acts—generally called medical malpractice. Medical malpractice is defined as acts of omission or commission, usually based on negligence, that result in injuries. Plan participants may attempt to hold health care providers directly liable for injury by filing medical malpractice claims seeking compensation for monetary and nonmonetary losses. In addition to monetary and nonmonetary losses, plaintiffs can seek to obtain punitive damages. Medical malpractice claims are generally governed by state tort law. State tort laws may differ with respect to the kinds and amounts of compensation that are available. A determination of liability for medical malpractice is based upon four elements: (1) the existence of a duty of care to the patient, (2) an applicable standard of care and its violation, (3) a compensable injury to the patient resulting from that breach, and (4) a causal connection between the violation of the standard of care and the harm complained of. The medical malpractice liability system is generally thought to have three primary goals: (1) provide compensation to people who are injured through negligent medical care, (2) create an incentive for physicians to provide careful treatment, and (3) provide accountability in dispute resolution.
In addition, health care organizations, including hospitals and managed care plans, can be held directly liable for their own negligent acts or omissions. That is, a managed care plan can be held directly liable for its own failure to fulfill a duty owed to its patients in non-ERISA-based plans and can be sued for damages under state laws. For example, a negligence claim brought against a managed care plan can be based on a benefit denial or some other aspect of employee benefit plan administration, including UR and preauthorization of services, alleging that the actions of the plan constitute direct negligence. An injured person’s right to sue a managed care plan—for example, an HMO—under state law for the medical malpractice of its health care providers is evolving in much the same way that the right to sue a hospital for the negligence of its providers evolved. Initially, hospitals were viewed as only providing a place where patients could receive services from independent health care providers. However, courts eventually began to address whether hospitals could be held “vicariously” or indirectly liable for the actions of health care providers—who were either employees or independent contractors—in addition to being held directly liable for their own actions. Plan participants are increasingly attempting to hold HMOs vicariously liable when health-care-related injuries occur. For example, an employer can be held indirectly responsible for the actions of its employees under the legal theory of “respondeat superior.” This theory of law applies most clearly to staff model HMOs in which health care providers are directly employed by the HMO. A staff model HMO’s employed health care providers are typically physicians, nurses, and others, including those who make UR decisions. Consequently, plan participants’ attorneys can try to hold staff model HMOs liable for the negligent actions of their health care providers. Most HMOs, however, treat physicians as independent contractors rather than retaining them as direct, salaried employees. When an HMO contracts with physician associations to provide health care services—the IPA model—the HMO also may be held liable for negligent medical care if the plan participant perceives the HMO to be providing the health care. This is the “ostensible agency” theory of law. Courts must determine whether the HMO represented the physician to be its employee and whether the patient looked to the HMO, rather than the physician, as the health care provider. If the patient has no choice when selecting a treating physician, the patient could more reasonably look to the HMO as a provider. ERISA’s preemption clause complicates the ability of employer-based health care plan participants to sue managed care plans when injuries occur. In fact, ERISA has become a major source of confusion as to whether a plan participant may recover damages from a managed care organization for the negligence of its health care providers. Because ERISA preempts state laws that “relate to” employee benefit plans, managed care plans have argued that ERISA preempts the ability to sue them under state tort laws either directly for their own negligence or indirectly for the negligence of health care providers. When sued by an ERISA plan participant in a state court, the managed care plan can seek to move the case to federal court because of ERISA and, even in state court, can assert ERISA preemption as a defense.
As ERISA provides more limited remedies than state tort laws—only the benefit denied and no compensatory or punitive damages—there are incentives for managed care organizations to claim ERISA preemption. Managed care entities increasingly perform several functions simultaneously for employer-based plans—UR, plan administration, arranging or providing medical treatment—and consequently the distinction between administering a plan and providing health care may be less clear. Federal courts have found that the UR entity may make medical decisions in the context of making a benefit determination under an employer’s ERISA plan. DOL has intervened as amicus curiae—“friend of the court”—in eight lawsuits addressing ERISA’s preemption of state tort laws. In these cases, DOL argued that ERISA does not preempt negligence or medical malpractice claims against HMOs when the plan participant is part of an employer-based health plan. Federal appellate courts have concluded that when the action of a managed care plan involves benefit administration, ERISA preempts damage claims under state law, even though the plan’s action may have been a wrongful denial of benefits. Whether a plan action is benefit administration or a medical decision is not always clear; the distinction often depends on the facts of each case. Courts have generally concluded that benefit determinations by plans—for example, that a particular medical service is not covered—fall in the category of benefit administration. Once the action of the plan is characterized as benefit administration, ERISA preempts state causes of action. The federal courts are divided on the effect of ERISA when the managed care plan acts not merely as a benefit administrator but as a provider of medical care. Some courts have held that ERISA does not preempt malpractice suits against managed care plans under state law when the complaint is based on medical advice or care by the plan provided through an employee or an agent of the plan. Other courts have held that ERISA preempts suits against managed care plans under state law based on malpractice by plan employees or agents. The appendix summarizes some of the decisions on ERISA by the federal courts addressing these issues. When ERISA’s preemption of state tort law applies, plan participants/consumers, employers, and managed care plans are affected differently. As a result, each of these groups has its own distinct reaction to the role that ERISA’s preemption clause plays in the ability to file claims for medical malpractice and benefit denials under state law. Consumer groups object to ERISA’s preemption clause because it denies plan participants the ability to pursue damage claims against plans for benefit decisions under state law. As federal court cases have shown, ERISA plan participants have been left without any legal right to sue for damages when injuries occur as a result of such benefit coverage decisions. There are no data to show how often participants are left without the right to sue for damages. Moreover, the high cost of treatments may effectively keep plan participants from paying themselves for care that employer-based plans determine to be medically unnecessary or not covered. In addition, ERISA’s remedy for injuries occurring because of benefit denials is insufficient, according to consumer advocates.
They believe that providing more remedies would also improve the quality of health care that consumers receive by holding managed care plans accountable for unfair denials, limitations, or reductions in care. Employers provide health care benefits voluntarily to help attract and retain workers, especially in a competitive environment, and to help keep trained employees healthy and productive. Many employees consider these benefits to be important. When employers choose to provide health care benefits, employer groups believe that ERISA’s preemption gives them the ability to design innovative health plans that can be consistent across state borders. Employer groups say that if plan fiduciaries were subject to compensatory or punitive damages, employers would be less likely to provide health benefits because of the higher costs associated with them. Managed care plans believe that when they administer employer-based health care benefits, they are protected from state law remedies by ERISA’s preemption clause. They view making benefit determinations—including whether a particular service is medically necessary—as an administrative function associated with benefit plans covered by ERISA. They believe that subjecting such decisions to state law remedies would raise costs as participants/consumers and trial lawyers seek damages from plans because of their perceived “deep pockets.” According to the American Association of Health Plans, the very methods that have made health plans successful at arranging for affordable, high-quality health care would be undermined. Also, the association maintains that the need to defend against tort claims for denial of benefits would cause health plans to take defensive measures. For example, the association notes that plans may authorize coverage for more services, whether or not they are medically necessary, to avoid possible litigation. However, the scope of defensive measures may be limited. One survey showed that the final benefit denial rate was no more than 3 percent, although higher denial rates may be associated with certain services or specialties. In effect for more than two decades, the remedies ERISA provides are facing increased scrutiny. While they were considered to be sufficiently fair when enacted by the Congress in 1974, much has changed since then. Now, as more federal court cases challenge ERISA preemption, many members of the Congress believe that the time has come to revisit either ERISA’s civil enforcement scheme—its remedies section—or its preemption clause. In addition to the legislative proposals that have been introduced, alternative solutions may merit further exploration. Nonetheless, any changes made to ERISA will evoke positive reactions by some of those who are affected and negative reactions by others. In the first session of the 105th Congress, two different types of amendments to ERISA were introduced to address the question of compensating plan participants who are injured as a result of improper medical decisions made by managed care entities. Generally, these proposals would either (1) provide for compensation in ERISA’s civil enforcement section or (2) change ERISA’s preemption clause so that it does not supersede state tort laws. The two proposal types differ with respect to how injured plan participants would be able to seek relief. Under the first type, ERISA itself would provide more remedies for alleged injuries.
Under the second, ERISA would make it easier for participants to pursue state-provided remedies when seeking compensation from managed care plans. The courts are the arena within which participants pursue remedies when injuries occur under employer-based managed care plans, but more could be done to try to avoid litigation. Several studies and groups have supported this preventive or “upstream” strategy. Rather than amending ERISA to provide for the ability to pursue increased remedies for damages that result from medical decisions made by managed care entities, more attention could be placed on resolving disputes earlier. The National Association of Insurance Commissioners took the position that ERISA should be changed to ensure more government oversight and authority over ERISA health plans’ claim and coverage determinations. Also, according to the association, ERISA needs to provide participants with more meaningful internal and external appeal mechanisms in addition to the appeals to courts that are permitted. It also suggested that ERISA could be revised to require each ERISA health plan to provide independent alternative dispute resolution mechanisms to mediate or adjudicate disputes with participants that cannot be resolved through internal review. The association believed that ERISA should be amended to give participants in ERISA health plans access to state tort law remedies, subject to reasonable limits, only if its other suggested changes were not implemented. A recent study on managed care plan liability stated that having a procedure through which participants could appeal a managed care plan’s benefit denials to an outside reviewer could resolve disputes before harm occurred and could prevent the need for lawsuits later on. Analysts sponsored by the National Institute for Health Care Management reported that an ideal system for resolving benefit disputes should “Recognize the inevitability of conflict between emotionally vulnerable patients and any economically rational health system by anticipating common disputes, tempering expectations with clear rules and implementing timely, efficient dispute resolution mechanisms.” Also, several groups we spoke with representing consumers, health care plans, and employers told us that more emphasis needs to be placed on strengthening the grievance and appeal procedure within managed care plans. Furthermore, the President’s Advisory Commission on Consumer Protection and Quality in the Health Care Industry stated that a timely appeal process can help reduce the incidence of injury. As a result, the commission recommended that the internal and external appeal process be enhanced. It also recommended the establishment of external review systems that would be limited to reviewing (1) decisions relating to services that are “experimental” or “investigational,” (2) decisions in which a service is determined not to be medically necessary and the cost exceeds a “significant threshold,” and (3) decisions in which the denial, based on medical necessity, could jeopardize the life or health of a patient. While DOL supports the need to strengthen ERISA’s claim resolution procedure, it also believes that stronger remedies are needed.
According to DOL, stronger remedies are needed at the end of the process to ensure compliance with the process “upstream.” That is, DOL believes that it could develop a regulation to implement a better claim resolution procedure for plan participants but could not ensure compliance if there was no cost imposed for failure to comply. Consumer groups also concur with DOL’s concern that current ERISA remedies do not adequately protect plan participants or deter plans from noncompliance. Any effort to amend ERISA or make other changes would likely involve tradeoffs among the divergent interests of consumers, managed care plans, and employers. For example, if the Congress amended either ERISA’s preemption clause or remedies section as discussed previously, plan participants would have access to a broader array of remedies for adverse outcomes under employer-based managed care plans. Patients commonly expect that the medical care they receive is of reasonable quality. ERISA’s preemption of the liability of health plans may remove a powerful incentive to provide high-quality service. An amendment to make it easier to hold managed care plans liable for injuries could cause plans to take more actions to avoid injuries, and the number of adverse outcomes could decrease. Furthermore, the denial of benefits can, if it affects the course of treatment, cause physical injury, which in turn may result in the loss of income or one’s job. Such losses cannot be recovered by reimbursing the patient for the cost of the denied benefit. Amending ERISA could change this. If more participants were able to pursue benefit disputes, however, the courts could potentially have to handle more cases. The limited data available indicate that in recent years more medical malpractice claims are being filed. The number could increase if ERISA were amended. But an adverse outcome does not necessarily mean that a claim will be filed. The findings of a study conducted at New York hospitals and reported in The New England Journal of Medicine in 1991 showed that the number of negligent adverse outcomes was eight times the number of tort claims filed. Thus, as shown by this study, many individuals who are injured by medical negligence may not file a claim. Some contend that even when patients bring malpractice suits, the current liability system for resolving such claims is inefficient and ineffective. Many agree that in the current system, claims take a long time to be resolved, legal costs are high, and settlements and awards are unpredictable. Further, there is concern about whether the system deters the negligent practice of medicine. In contrast, according to managed care and employer groups, health care costs could increase if ERISA were amended to provide compensation in its civil enforcement section or to change its preemption clause so that it does not supersede state tort laws. Some have expressed concern that the increases would be significant. According to the Corporate Health Care Coalition, expanding ERISA’s remedies to encourage more litigation without improving the quality of decision-making would greatly increase plan liabilities and have a “chilling effect” on the use of managed care techniques. As a result, health plans might find it more difficult to deny even inappropriate claims. However, data to accurately estimate the likely extent of such potential increases are lacking. 
If managed care plans can be sued under state tort law, plan representatives say that they will pass the costs associated with these suits on to employers—the payers of health care. Employers say that faced with such cost increases, they might in response (1) reduce health care benefits, perhaps by excluding from coverage particular treatments and procedures; (2) provide a “defined contribution,” a fixed amount earmarked for health care, toward employees’ health care costs; (3) shift more of the cost to employees by making them pay a higher percentage of the premiums; or (4) eliminate health care benefits completely. Consequently, plan participants could end up with expanded remedies for adverse outcomes but potentially fewer or more expensive health care benefits. Employer groups also suggest that the increased ability of plan participants to sue managed care plans and UR firms would lead to what ERISA preemption was intended to prevent. That is, benefit determination decisions would be considered treatment decisions (which would elicit the full array of malpractice remedies), and plans operating in multiple states would be subject to various laws. In addition, employers are concerned that they would be more likely to be sued for damages resulting from benefit denials or medical negligence because of their perceived “deep pockets.” To date, however, no employers have been held liable for such damages. When the Congress enacted ERISA in 1974, it would have been nearly impossible to predict the state of the U.S. health care delivery system in the late 1990s. Managed care has grown rapidly only within the past decade. Benefit coverage decisions now are more often made in advance of treatment, which creates a new kind of potential legal liability not faced by traditional fee-for-service health insurers—and not envisioned a quarter of a century ago. ERISA’s remedies section and preemption clause and their effect on compensation for injured plan participants have posed challenging questions for the courts. Recent case law displays a trend toward expanding liability beyond health care providers to include managed health care organizations. Generally, an HMO will be held liable when it directly provides medical services, when the provider is acting as its agent, or when it leads a beneficiary to reasonably believe that the provider is its agent. However, plan participants can be left without a remedy for injuries when they occur because of benefit denials. As managed care enrollment continues to grow, HMO exposure to liability will undoubtedly increase. Proposed changes to ERISA’s remedies section or its preemption clause seek to provide for fair and appropriate remedies for participants in managed care plans. However, analysts’ efforts to assess the merits of such changes have been far from definitive, in part because the contending parties’ interests and views differ sharply and in part because strong evidence on the effects of amending ERISA on cost and quality is absent. That is, there is no research on how or how much plans’, employers’, and consumers’ costs would change if ERISA were amended. To date, much of the debate surrounding ERISA’s current remedies has focused on a proposed “downstream” approach—which seeks to change the remedies available through the courts. Many have suggested that an “upstream” approach—which seeks to prevent court suits and protracted litigation—may warrant consideration as well.
DOL reviewed a draft of this report and provided technical comments, which we incorporated as appropriate. We also furnished a draft of this report for review to the American Association of Health Plans, the Association of Private Pension and Welfare Plans, the ERISA Industry Committee, two ERISA experts—one of whom specifically represented the consumer viewpoint—and one expert in medical malpractice. In commenting on the draft, the American Association of Health Plans focused on several areas that it believed needed to be clarified and revised. These included (1) distinguishing better between benefit coverage determinations and treatment decisions and clarifying the roles of physicians and plans in those decisions; (2) clarifying the discussion of plans’ direct liability and vicarious liability for physicians’ medical malpractice; and (3) providing additional information on state requirements for managed care plans’ grievance and appeal processes, the incidence of service denials, and existing “counterbalances” to plans’ inappropriate denial of benefits. The association also suggested the need both to discuss more fully the concern of some about proposed expansions of state tort law damages for health care liability and to make more prominent the discussion of “upstream” solutions such as improved grievance and appeal procedures for participants. The final report contains revisions to reflect these clarifications and additions. In its comments, the ERISA Industry Committee emphasized that ERISA is an adaptable and flexible law that is as relevant now as when it was enacted in 1974. Furthermore, the committee believed that the nature of benefit determinations has not fundamentally changed because of the transition from fee-for-service to managed health care. The committee stated that because in most cases health plans are making payment decisions and not treatment decisions, participants are not prohibited from obtaining treatment at their own expense and health care providers can still treat participants. Therefore, the committee believed that the draft report overstated the significance of changes in the health care delivery system as they relate to the legal issues associated with ERISA’s appeal procedures and remedies. While we acknowledge the committee’s position on the role that ERISA’s standards play in the current health care environment, we believe the evidence suggests that prospective benefit coverage decisions can, in fact, affect participants’ ability to obtain needed treatments, especially if other financial resources are not available. The ERISA Industry Committee suggested, as did the American Association of Health Plans, that we distinguish better between benefit coverage determinations and treatment decisions and that we elaborate on concerns about the effectiveness of the tort system as a remedy. The committee also commented that the report could better reflect the role of an employer-sponsored health benefit plan’s fiduciary in safeguarding participants’ interests and the potential that increased liability could cause benefit plan administrators to take defensive and other measures, with resulting increased costs and decreased coverage. We revised the final report to reflect these clarifications and perspectives. In response to additional comments from these and other reviewers, we clarified certain distinctions and made technical changes as appropriate. We did not receive comments from the Association of Private Pension and Welfare Plans.
As we arranged with your offices, unless you announce the report’s contents earlier, we plan no further distribution of it until 30 days after the date of this letter. We will then send copies to the Secretary of Labor. We will make copies available to others on request. If you or your staff have any questions, please call me at (202) 512-7114. This report was prepared initially under the direction of the late Michael Gutowski; his role was later assumed by Jonathan Ratner. Major contributors to this report include Joseph Petko, Roger Thomas, Susan Poling, and Barry Bedrick. The preemption clause of the Employee Retirement Income Security Act of 1974 (ERISA) provides that ERISA supersedes any and all state laws “insofar as they . . . relate to any employee benefit plan” covered by the act. In many instances, the courts have concluded that ERISA preempts participant claims against managed care providers. Although plan participants may have limited remedies under ERISA, the typically more generous remedies available under state law, such as compensatory, punitive, or extracontractual damages, are barred by ERISA preemption. As managed care has become more widespread, plan participants or beneficiaries have sought to sue ERISA plans, employers, and organizations conducting utilization review (UR) under state law for malpractice, wrongful death, or benefit denials because of their actions or determinations under the plans. The plans, employers, and UR providers have argued that under the preemption clause, these state remedies are not available. (They have also argued that suits of this kind filed in state court must be removed to federal court.) In this appendix, we discuss these issues and describe some of the cases. In preparing this appendix, we reviewed federal case law and law review articles. The federal courts have found that ERISA may or may not preempt state law governing negligence or malpractice in suits against a managed care plan by a plan participant, depending on the circumstances of the case. It seems clear that if a court concludes that a claim is based on the wrongful denial of a benefit offered by the health plan, ERISA will preempt any claim for relief under state law. Under ERISA, the only remedy available to plan participants may be a court order requiring that the benefit be provided; ERISA preempts remedies authorized under state law such as compensatory, punitive, or extracontractual damages for malpractice. Some federal courts have permitted malpractice suits under state law against managed care plans to go forward where the provider is considered to be an agent of the plan or where the plan directly provides medical services. A health maintenance organization (HMO) provides medical services to members for a flat or fixed fee. HMO subscribers or members are usually required to use an HMO-employed or HMO-contracted physician in order to qualify for coverage. A preferred provider organization (PPO) typically arranges for independent physicians, specialists, servicing hospitals, and other providers to provide medical care to subscribing members based on a fixed, usually discounted, fee. Both PPOs and HMOs typically use UR in some form. Under UR, the organization or a third party under contract with it evaluates proposed procedures or treatments to determine on the basis of clinical criteria whether they are medically necessary. Many employers who provide health care benefits through group plans regulated under ERISA have turned to managed care to control health care costs.
As managed care has emerged as a principal cost-control measure, ERISA’s federal preemption has affected the ability of participants to seek compensatory or punitive damages from managed care plans, based on a plan’s role in providing medical care. Critics of ERISA preemption believe that it is fundamentally unfair that ERISA supersedes state control over medical malpractice or negligent care cases. They also object to the limited remedies available under ERISA for the denial of a claim, which contrasts with state extracontractual, punitive, or compensatory damages for the same denial outside ERISA: When a UR organization determines that a particular treatment is not medically necessary, it is arguably making a medical decision. Yet ERISA leaves plan participants without a remedy for negligence or medical malpractice by a UR organization. ERISA was enacted to protect the interests of employees and their beneficiaries through comprehensive federal requirements and protections for employee pension and welfare benefit plans. It sets standards for pension plans, including standards for vesting, accrual, and funding, and provides fiduciary standards for both pension and welfare benefit plans. ERISA’s federal preemption provision is intended to avoid conflicting state rules on the administration of these federally regulated plans. The ERISA preemption provision—section 514(a) of ERISA—says that with certain exceptions, ERISA “shall supersede any and all State laws insofar as they may now or hereafter relate to any employee benefit plan” covered by ERISA. If a plan participant or beneficiary claims to have been the victim of negligence or medical malpractice, or wrongful denial of benefits by the plan, and injury or other damage resulted, then preemption is significant: Although state law typically authorizes punitive, compensatory, or extracontractual damages, such as for emotional distress, loss of consortium, or injury, ERISA does not. The federal courts are increasingly being called upon to define more precisely the scope of ERISA preemption. Courts of appeals for the various federal circuits have interpreted the preemption provision differently. Conflicts among the circuits can be resolved by a definitive ruling by the Supreme Court or by legislation. Under section 502 of ERISA, a participant may bring claims “to recover benefits due to him under the terms of his plan, to enforce his rights under the terms of the plan, or to clarify his rights to future benefits under the terms of the plan . . . .” Complete preemption under section 502 is a jurisdictional concept; if the defendant successfully argues that complete preemption applies, the case is removed to federal court. In general, if a plaintiff chooses to file an action in state court, the defendant cannot force the removal of the action to federal court unless the plaintiff’s complaint specifically raises issues of federal law. Suits filed in state court alleging malpractice or breach of contract in violation of state law would therefore ordinarily not be subject to removal on the defendant’s motion. Presumably, in ERISA cases, defendants seek removal to federal court because they believe their federal defense—that section 514 preempts state law—will receive a more sympathetic hearing there than in state court. However, merely raising a federal issue as a defense does not ordinarily justify removal. Complete preemption is an exception to the general rule that removal is permitted only when the plaintiff’s complaint raises a federal issue. The Supreme Court decided in Metropolitan Life Ins. Co.
v. Taylor that a defendant may have a case removed from state to federal court even though the plaintiff’s complaint does not raise federal issues if federal legislation has “so completely pre-empt[ed] a particular area that any civil complaint raising this select group of claims is necessarily federal in character.” Complete preemption applies to ERISA claims under section 502. In Metropolitan Life, the plaintiff’s complaint was based on common law contract and tort claims under state law only. However, the court found that any complaint brought against a plan under section 502, regardless of whether it is based on state causes of action, will be viewed as arising under federal law. Alternatively, a plaintiff may argue that his or her claim is not one that “relates to any employee benefit plan” and therefore that section 514 does not preempt his or her state cause of action. Deciphering the meaning and outer limits of “relates to” has been contentious and difficult. One federal appellate court described the law in this area as “a veritable Sargasso Sea of obfuscation.” The Supreme Court in Shaw v. Delta Air Lines Inc. provided a basic definition: a state law “relates to” an employee benefit plan, within the meaning of the preemption clause of ERISA, if the law has “a connection with or reference to such a plan.” The Court noted that it is not enough that a state law affects employee benefit plans; the effect may occur “in too tenuous, remote or peripheral a manner” to warrant concluding that the law “relates to” the plan for purposes of the preemption clause. The Supreme Court refined its interpretation of what it means for a state law to “relate to” an employee benefit plan in New York State Conference of Blue Cross and Blue Shield Plans v. Travelers Insurance Co. In that case, the Court concluded that ERISA did not preempt a state law that mandated surcharges on the hospital bills of patients insured by commercial insurers (and certain HMOs) but not on the bills of patients insured by Blue Cross and Blue Shield. The Court concluded that these laws were not “related to” ERISA plans, even though the surcharges would have an indirect economic effect on ERISA plans: “if ‘relate to’ were taken to extend to the furthest stretch of its indeterminacy, then for all practical purposes pre-emption would never run its course, for ‘[r]eally, universally, relations stop nowhere.’” The Court in Travelers relied on the fact that the surcharge under the state statute applied whether or not the health services were furnished through an ERISA-covered plan. The Court noted that “while Congress’s extension of pre-emption to all ‘state laws relating to benefit plans’ was meant to sweep more broadly than ‘state laws dealing with the subject matters covered by ERISA, reporting, disclosure, fiduciary responsibility, and the like,’ . . . nothing in the language of the Act or the context of its passage indicates that Congress chose to displace general health care regulation, which historically has been a matter of local concern . . . .” In Travelers, the Supreme Court made clear that state laws that have only an indirect influence on ERISA plans are not preempted. State laws that indirectly affect “the relative costs of various health insurance packages in a given State,” or that do not preclude plan administrators from adopting uniform administrative practice or uniform interstate benefit packages, do not implicate those “conflicting directives” from which the Congress meant to insulate ERISA plans and are therefore not preempted.
Many state laws may have some effect on the administration of ERISA plans, but that does not mean that every such state law is preempted by ERISA. ERISA’s enforcement provisions prescribe the causes of action and remedies available under this federal law. Managed care organizations involved in medical malpractice lawsuits have asserted that ERISA preempts these state law claims. A managed care arrangement that successfully asserts such a defense may effectively avoid state tort remedies of extracontractual, compensatory, punitive, or exemplary damages. ERISA provides that a participant or beneficiary may bring a civil action “to recover benefits due to him under the terms of the plan, to enforce his rights under the terms of the plan, or to clarify his rights to future benefits under the terms of the plan.” Only the benefits to which a plan participant or patient is contractually entitled under the terms of the plan are available under ERISA. Thus, if a benefit is denied, the remedy is to obtain the benefit. Under ERISA, no civil action can be brought for malpractice, emotional distress, wrongful death, or negligence. A review of the cases that can be categorized as a denial of a benefit as a result of a UR decision indicates that ERISA preempts state law claims for negligence, wrongful death, medical malpractice, and the like. Corcoran v. United Healthcare, Inc. illustrates the difficult issues relating to jurisdiction, remedies, and public policy faced by a court applying ERISA preemption in the managed care context. Because of the plaintiff’s high-risk pregnancy, her physician asked that the expectant mother be hospitalized for close monitoring. Under the plaintiff’s employer’s health plan, precertification and UR were necessary for the hospital admission and the length of the hospital stay. The firm performing UR for the employer’s plan determined that hospitalization was not necessary, but authorized up to 10 hours a day of home nursing care. When a nurse was not on duty, the fetus went into distress and died. The parents filed a wrongful death action in state court, alleging, in part, that Mrs. Corcoran’s unborn child had died as a result of the negligence and denial of hospital care by both the health plan and the UR firm. The defendants removed the action to federal court and moved for summary judgment. They characterized the plaintiffs’ wrongful death action as, in reality, an action for mishandling a claim by firms retained merely to administer benefits under an ERISA-covered plan. Their relationship to the plaintiffs, they contended, was wholly defined by the terms of the employer plan; as a result, plaintiffs’ claims “related to” an ERISA plan and were therefore preempted. Plaintiffs answered that preemption would “contravene the purposes of ERISA by leaving them without a remedy.” The court noted that ERISA preempts state laws not only when they deal with the subject matters covered by ERISA, such as reporting, disclosure, and fiduciary obligations, but also, in a much broader sense, whenever the state laws have “a connection with or reference to” an employee benefit plan. The UR firm argued that it did not make medical decisions or provide medical advice; all it did was determine whether Mrs. Corcoran qualified for the benefits provided by the plan by applying previously established eligibility criteria. The court disagreed, concluding that the UR firm did make medical decisions and give medical advice but did so in the context of making determinations about the availability of benefits under the health plan. In the court’s view, this was enough of a relationship to an employee benefit plan to require ERISA preemption. In Tolton v.
American Biodyne, Inc., the Court of Appeals for the Sixth Circuit used the same analysis to conclude that ERISA preempted state causes of action based on UR. In that case, the covered employee, Mr. Tolton, was drug-dependent and suicidal. His employer’s managed care health plan included a UR requirement. On several occasions, Mr. Tolton met with or talked to a psychologist who performed UR and, on that basis, denied him inpatient care. After attempting to obtain treatment from a variety of health care providers on a number of occasions, Mr. Tolton committed suicide. Mr. Tolton’s estate brought an action in state court against the employer’s health plan, the plan administrator, and each of the health care providers who had treated him, including the psychologist who had performed UR. The claims included wrongful death and medical malpractice. On the motion of the plan, the case was removed to federal court based upon ERISA preemption. Summary judgment was subsequently granted, in part on this same basis. The court of appeals concluded that the state cause of action based on the psychologist’s UR decision and the denial of Mr. Tolton’s claim was also preempted. Similar facts resulted in the same outcome in a district court decision in the First Circuit, while also generating a strongly worded opinion by a judge who believed that the result was an injustice and that the Congress should amend the law. The beneficiary was denied 30-day inpatient care by a UR provider. After the beneficiary committed suicide, plaintiff brought claims for breach of contract, medical malpractice, wrongful death, loss of parental and spousal consortium, intentional and negligent infliction of emotional distress, and specific violations of the Massachusetts consumer protection laws. ERISA was found to preempt these claims, but the court commented that “ERISA has evolved into a shield of immunity that protects health insurers, UR providers, and other managed care entities from potential liability for the consequences of their wrongful denial of health benefits.” Other federal appellate courts have followed the approach taken in Corcoran and Tolton. The Eighth Circuit Court of Appeals found in Kuhl v. Lincoln National Health Plan of Kansas City, Inc., that failure of the managed care entity to preapprove heart surgery constituted a denial of benefits and thus that the state cause of action was preempted by ERISA. The failure to preapprove, in the court’s view, did not constitute the provision of medical advice. The same result was reached in the Ninth Circuit in Spain v. Aetna Life Insurance Co. Under vicarious liability, the wrongful acts of an agent are attributed to the principal, who did nothing wrong. Direct liability means that while the agent’s wrongful act caused the harm, the principal also acted negligently or wrongfully—for example, in selecting or retaining its agents or monitoring their activities. In Pacificare of Oklahoma, Inc. v. Burrage, the basis of the claims against the HMO was malpractice. The case was removed in part from state court, where it had originally been filed. But the federal district court did not agree that preemption applied and permitted the state court to hear the plaintiff’s claims that (1) the HMO primary care physician was the agent of the HMO and (2) the HMO was liable, both vicariously and directly, for the physician’s actions. The Tenth Circuit sustained the decision of the district court that ERISA did not preempt these claims against the HMO.
While noting that there was no simple formulation, the court identified four categories of state laws that might “relate to” a plan, as that term is used in ERISA: “(1) laws that regulate the type of benefits or terms of ERISA plans; (2) laws that create reporting, disclosure, funding or vesting requirements for ERISA plans; (3) laws that provide rules for the calculation of the amounts of benefits to be paid under ERISA plans; and (4) laws and common law rules that provide remedies for misconduct growing out of the administration of the ERISA plan.” The court concluded that a malpractice claim based on the actions of a physician who is an agent of the HMO is not preempted. The court therefore directed that the case be returned to state court. In Dukes v. U.S. Healthcare System, Inc., the court addressed several theories of negligence or medical malpractice involving an HMO. Mr. Dukes’ primary care physician ordered a blood test from a hospital; for unknown reasons the test was not performed, although it allegedly would have disclosed extremely high blood sugar. Mr. Dukes died after additional medical treatment. His wife sued the physicians, hospitals, and HMO, U.S. Healthcare System, in state court. In seeking removal of the case to federal district court, the HMO argued that (1) Mr. Dukes had obtained medical care as a benefit from a welfare benefit plan governed by ERISA, (2) removal was required by the “complete preemption” theory, and (3) the plaintiff’s claims were preempted by section 514(a) of ERISA. The district court dismissed plaintiff’s claims against the HMO on the basis of ERISA preemption. “[A]ny ostensible agency claim,” the district court concluded, “must be made on the basis of what the benefit plan provides and is therefore ‘related’ to it.” The court also held that “the treatment received must be measured against the benefit plan and is therefore also ‘related’ to it.” The Third Circuit Court of Appeals reversed the district court and remanded the malpractice claims to the state court. The appellate court concluded that complete preemption did not apply because the plaintiff’s claims focused only on the quality of benefits received; the plaintiff was not alleging that benefits were withheld nor seeking either to enforce rights under the terms of the plan or to have the right to future benefits clarified. The court found a significant distinction between this case and Corcoran, the case discussed previously in which ERISA was held to preempt a malpractice claim based on the UR provider’s determination—contrary to the opinion of the plaintiff’s physician—that the plaintiff did not need hospitalization during her pregnancy. This court said that the UR provider in Corcoran, “unlike the HMOs here, did not provide, arrange for, or supervise the doctors who provided the actual medical treatment for plan participants.” Another recent appellate decision, this one from the Seventh Circuit, concluded that ERISA does not preempt a claim that the administrator of an employee health benefits plan is liable under state law for medical malpractice by a physician who is an agent of the plan. The plaintiff in Rice v. Panchal was treated by a preferred provider furnished by his health plan. He brought suit in state court against the doctor for malpractice and against the health plan on an agency theory: The health plan, the plaintiff claimed, was responsible for the medical malpractice by its preferred provider. On a motion by the health plan, the case was removed to federal district court under the doctrine of complete preemption.
The Seventh Circuit Court of Appeals found in Panchal that there was no complete preemption of the claim against the health plan. The court acknowledged that complete preemption would be required if the plaintiff’s state law claim could not be resolved without an interpretation of the contract—the ERISA plan—governed by federal law. In Panchal, the court said that resolving the question of whether the health plan is liable for the medical malpractice of the provider under state agency law does not require construing the ERISA plan; the issues, in this view, were whether the doctor was in fact an agent, whether he was authorized to act for the principal, and whether the injury would not have occurred but for the victim’s reliance on the agency. Answering these questions does not involve the interpretation of the ERISA plan. (In the state proceeding, the plan would be free to raise ERISA preemption under section 514 as a defense.) ERISA has generally been found to preempt medical negligence claims where a managed care provider has acted merely as a payer for claims with respect to a health plan. In Butler v. Wu, the plaintiff brought a medical negligence claim against a physician and an HMO. The physician was neither an agent nor an employee of the HMO; he provided services to HMO members as an independent contractor. The HMO did not provide medical treatment itself. The district court granted the HMO’s motion to dismiss the case against the HMO, based on ERISA preemption of state law claims. The court held that ERISA preempts state-law negligence claims against HMOs “where, as in this case, the HMO is fulfilling a role closer to that of a traditional insurer than that of a direct provider of health care services.” The court, examining the evolution of the health care industry, noted that the distinction between arranging and paying for health care services and providing such services directly may not always be so clear, and it reserved judgment concerning whether preemption would apply if it found that an HMO was directly providing medical care. Angelo, Susan. “Choice Curbs, Doctor Links Fuel Managed Care Risks.” National Underwriter (Property/Casualty/Employee Benefits), Vol. 98, No. 44 (1994), pp. 10 and 36. Atkins, G. Lawrence, and Kristin Bass. ERISA Preemption: The Key to Market Innovation in Health Care. Washington, D.C.: Corporate Health Care Coalition, 1995. Azevedo, David. “Courts Let UR Firms Off the Hook—and Leave Doctors On.” Medical Economics, Vol. 70, No. 2 (1993), pp. 30-33, 37, 40, 42, and 44. Barnes, Cliff. “Malpractice Liability in Managed Care Systems: It’s More Complex.” The Internist, Vol. 28, No. 7 (1987), pp. 36-37. Barnett, Daniel R. “Arbitration Panels Are the Key for Tort Reform.” The Internist, Vol. 35, No. 8 (1994), pp. 25-26. Benesch, Katherine. “Risk Management: Emerging Theories of Liability for Negligent Credentialing in HMOs, Integrated Delivery and Managed Care Systems.” Trends in Health Care, Law & Ethics, Vol. 9, No. 4 (1994), pp. 28 and 41-44. Benson, Barbara. “The Dark Side of Managed Care.” Chief Executive, No. 110 (1996), p. 60. Bergthold, Linda A., and William M. Sage. Medical Necessity, Experimental Treatment and Coverage Determinations: Lessons From National Health Care Reform. Washington, D.C.: National Institute for Health Care Management, 1994. Black, Henry Campbell. Black’s Law Dictionary, 6th ed., St. Paul, Minn.: West Publishing Co., 1990. Bovbjerg, Randall R. 
Pursuant to a congressional request, GAO reviewed how people enrolled in employer-based managed care plans are compensated when they are improperly denied health care benefits or when they experience negligent medical care and the role that the Employee Retirement Income Security Act (ERISA) plays, focusing on: (1) whether the transition from traditional employer-based fee-for-service health plans to managed care has changed the process of benefit determination; (2) the remedies that ERISA provides to participants in employer-based managed care plans who are improperly denied benefits; (3) whether ERISA affects the ability of participants to be compensated for injuries that result from either medical malpractice or improper benefit denials at employer-based managed care plans; and (4) the consequences of changing ERISA's remedies. GAO noted that: (1) managed care plans attempt to ensure that enrollees receive services that are necessary, efficiently provided, and appropriately priced; (2) since the 1980s, employers have shifted to offering managed care plans that use such techniques as prospective utilization review (UR); (3) as a result, benefit coverage decisions have increasingly shifted from being made after services are provided to before; (4) ERISA effectively limits the remedies available when employees of private-sector firms claim to have been harmed by plans' decisions to deny coverage of a particular service; (5) under ERISA, plans must have an appeal process for participants who are dissatisfied with a benefit denial; (6) if participants are not successful, they can file a civil lawsuit; (7) ERISA's exclusive remedy for improper benefit denials is to require the plan to provide the denied service and, at the court's discretion, pay attorney fees; (8) groups representing consumers believe that ERISA's limited remedy neither provides sufficient compensation for injuries that benefit denials contribute to nor effectively deters unjustified benefit denials; (9) ERISA can affect participants' ability to be compensated for injuries sustained while in an employer-based health care plan; (10) ERISA's preemption of state laws that relate to employee health benefit plans enables managed care plans and UR firms to avoid liability under state law for medical malpractice; (11) compelling evidence is lacking on the likely effects of amending ERISA to provide either expanded remedies for losses due to disputed benefit denials or the ability to sue managed care plans for medical malpractice under state tort laws; (12) consumer groups and others assert that additional remedies could: (a) improve health care quality by holding plans accountable for the consequences of their benefit coverage decisions; and (b) provide participants with a course of remedies more comparable to state tort laws; (13) managed care plan and employer groups maintain that these additional provisions would result in increased costs or benefit reductions; and (14) to date, data are not available to accurately estimate the extent to which the quality of health care would improve or the amount by which the costs of plans, employers, and employees might change if ERISA's remedies or preemption of state laws were amended.
Historically, DOD’s programs for acquiring major weapon systems have taken longer, cost more, and often delivered smaller quantities and fewer capabilities than planned. GAO has documented these problems for decades. In 1970, GAO reported that considerable cost growth had been and was continuing to occur on many current development programs. Since that report was issued, numerous changes have been made to DOD’s acquisition process and environment to try to improve acquisition outcomes. Those changes include numerous executive branch initiatives and legislative actions as well as roughly 11 revisions to DOD’s acquisition policy between 1971 and 2005. Despite these efforts, defense acquisition programs in the past 3 decades continued to routinely experience cost overruns, schedule slips, and performance shortfalls. Figure 1 illustrates the continued problem of development cost overruns. The figure depicts the combined cost overruns for large development programs (programs totaling more than $1 billion for research, development, testing and evaluation in fiscal year 2005 dollars) in each of the past 3 decades. The figure also identifies some of the major studies and improvement efforts initiated during this time frame. As the figure illustrates, efforts to improve acquisition outcomes have not been successful in curbing acquisition cost problems. Programs initiated in the 1970s exceeded DOD’s initial investment estimate by 30 percent, or $13 billion (in fiscal year 2005 dollars), and similar outcomes continued during the subsequent decades despite numerous reform efforts and policy revisions.

Since the mid-1990s, we have studied the best practices of leading commercial companies. Taking into account the differences between commercial product development and weapons acquisitions, we articulated a best practices product development model that relies on increasing knowledge when developing new products, separating technology development from product development, and following an evolutionary or incremental product development approach. This knowledge-based approach requires developers to make investment decisions on the basis of specific, measurable levels of knowledge at critical junctures before investing more money and before advancing to the next phase of acquisition. An evolutionary product development process defines the individual increments on the basis of mature technologies and a feasible design that are matched with firm requirements. Each increment should be managed as a separate and distinct acquisition effort with its own cost, schedule, and performance baseline. An increment that excludes one of these key elements puts an extra burden on decision makers and provides a weak foundation for making development cost and schedule estimates. The knowledge-based, evolutionary approach in our model is intended to help reduce development risks and to achieve better program outcomes on a more consistent basis.

Hoping to improve acquisition outcomes, DOD leaders initiated significant revisions to the department’s acquisition policy again in October 2000 by adopting the knowledge-based, evolutionary system development approach. We reported in November 2003 that much of the revised policy agrees with GAO’s extensive body of work and that of successful commercial firms. DOD’s revised policy emphasizes the importance of and provides a good framework for capturing knowledge about critical technologies, product design, and manufacturing processes.
If properly implemented and enforced, this approach could help DOD’s decision makers gain the confidence they need to make significant and sound investment decisions for major weapon systems. Furthermore, the policy’s emphasis on evolutionary system development sets up a more manageable environment for achieving knowledge. We also noted that DOD’s policy strongly suggests the separation of technology development from system development, a best practice that helps reduce technological risk at the start of a program and makes cost and delivery estimates much more predictable. Figure 2 depicts in general how DOD’s revised policy adopts key aspects of the best practices model. Although DOD took significant steps in the right direction, its policy does not include controls that require program officials to meet the key criteria that we believe are necessary for ensuring that acceptable levels of knowledge are actually captured before making additional significant investments. We previously recommended that DOD design and implement necessary controls to ensure that appropriate knowledge is captured and used to make decisions about moving a program forward and investing more money at critical junctures. DOD officials acknowledged the advantages of using knowledge-based controls, but stated that they believed the policy already included enough controls to achieve effective program results. The officials agreed to monitor the acquisition process to assess the effectiveness of those controls and to determine whether additional ones are necessary.

The cost and schedule outcomes being achieved by development programs initiated since DOD first issued its revised policy have not improved over those achieved by programs managed under prior versions of the policy. Of the 23 major programs we assessed, 10 have already reported estimated development cost growth greater than 30 percent or expected delays of at least 1 year in delivery of an initial operational capability to the warfighter. These programs combined represent a cost increase of $23 billion (in fiscal year 2005 dollars) and an average delay in delivery of initial capability of around 2 years. Most of the other programs were still in the early stages as of December 2005, with over half of system development remaining, and had not yet reported enough cost or schedule data to effectively analyze their progress. Table 1 contains the cost and schedule increases for the 23 programs we assessed, expressed as a percentage of each program’s development estimate.

The Army’s Future Combat System is a case in point. Less than 3 years after program initiation and with $4.6 billion invested, the Army has already increased its development cost estimate by $8.9 billion, or 48 percent, and delayed delivery of initial capability by 4 years over the original business case. Similarly, just over 1 year after initiating development of the Aerial Common Sensor aircraft, the Army has reported that severe weight and design problems discovered during development have stopped work on the program. As a result, program officials are anticipating at least a 45 percent cost increase and a delay of 2 years in delivering an initial capability to the warfighter. These two Army programs are not the only ones experiencing problems. Table 2 contains cost and schedule data for 6 of the 10 largest development programs initiated under the revised policy, including the Future Combat System and Aerial Common Sensor.
As the table illustrates, there are several programs experiencing large cost increases and schedule delays. A good measure of acquisition performance is return on investment, expressed as acquisition program unit cost, because unit cost represents the value DOD receives for the acquisition dollars invested in a given program. The programs listed in table 2 will not achieve the return on investment that DOD anticipated when they began development. In the case of Joint Strike Fighter, for example, DOD initially intended to purchase 2,866 aircraft at an acquisition program unit cost of about $66 million. The Navy has reduced the number of Joint Strike Fighter aircraft it plans to buy, and technology and design problems encountered during development have led to significant cost growth. As a result, the acquisition program unit cost is now about $84 million, an increase of 27 percent. We recently reported that the risk of even greater increases is high because flight testing has not yet started and the acquisition strategy involves substantial overlap of development and production. Similar problems have led to increases in the Future Combat System program. At program initiation, the Army anticipated that each of 15 units would cost about $5.5 billion to develop and deliver. Since that time, instability in the program’s technologies and requirements has driven costs up, resulting in a 54 percent increase in acquisition program unit cost, now estimated to be $8.5 billion.

Regarding all 23 development programs, DOD leaders originally planned to invest a total of about $83 billion (fiscal year 2005 dollars) for system development and anticipated delivering an initial operational capability to the warfighter in 77 months on average. However, development costs have grown and delivery schedules have been delayed significantly. DOD now expects to invest over $106 billion in those same programs, an increase of over $23 billion, or 28 percent. The delivery of initial capability to the warfighter is expected to take an average of 88 months, nearly 1 year longer than originally planned. Figure 3 shows changes in these business case elements for these programs in the short time since their initiation. (The arithmetic behind these growth percentages is recapped briefly below.)

DOD is not effectively implementing the knowledge-based process and evolutionary approach emphasized in its acquisition policy. While the policy outlines a specific knowledge-based process of concept refinement and technology development to help ensure a sound business case is developed before committing to a new development program, almost 80 percent of the programs we reviewed were permitted to bypass this process. Furthermore, the policy emphasizes the need to mature all critical technologies before starting system development and to demonstrate that the product’s design is mature before beginning system demonstration. However, nearly three-fourths of the programs reported having immature critical technologies when they received approval to start development, and at least half of the programs had not achieved design maturity before holding their design review and gaining approval to enter the system demonstration phase of development. The policy also emphasizes the use of an evolutionary product development approach, yet program officials continue to structure major acquisition programs to achieve large advances in capability within a single-step development program. This strategy has historically resulted in poor cost and schedule outcomes.
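As a minimal arithmetic check of the growth percentages cited above—using only the rounded dollar figures reported in this section and the standard percent-change formula, so small rounding differences are expected—the Joint Strike Fighter unit cost, the Future Combat System unit cost, and the 23-program development investment grew by roughly

\[
\frac{84-66}{66}\approx 27\%, \qquad \frac{8.5-5.5}{5.5}\approx 54\%, \qquad \frac{106-83}{83}\approx 28\%,
\]

consistent with the 27, 54, and 28 percent figures reported above (Joint Strike Fighter unit costs in millions of dollars; Future Combat System unit costs and the 23-program totals in billions).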
DOD decision makers continue to approve programs for system development that have not followed key elements of the policy’s suggested knowledge-based process. The policy requires program managers to provide senior decision makers with knowledge about key aspects of a system at critical investment points in the acquisition process. Our prior reviews have identified those critical points as the start of system development or program start (referred to as Milestone B in the DOD acquisition policy), the design readiness review separating system integration and system demonstration, and production commitment (Milestone C in the DOD acquisition policy). The most important point occurs at program start, when system development begins. DOD acquisition guidance emphasizes the importance of the acquisition phases preceding program start, noting that the decisions made during those phases—concept refinement and technology development—generally define the nature of an entire acquisition program.

Acquisition officials continue to begin system development without following early processes for developing executable business cases. A business case should provide demonstrated evidence that (1) the warfighter’s needs are real and necessary and that they can best be met with the chosen concept and (2) the chosen concept can be developed and produced within existing resources—including technologies, design knowledge, funding, and the time to deliver the product when it is needed. Establishing a business case calls for a realistic assessment of risks and costs; doing otherwise undermines the intent of the business case and invites failure. This process requires the user and developer to negotiate whatever trade-offs are needed to achieve a match between the user’s requirements and the developer’s resources before system development begins. The revised policy and associated guidance emphasize the importance of following a sound process of systems engineering and decision making prior to initiating a system development program. The process established in the policy consists of two phases, concept refinement and technology development, and a major decision review called Milestone A, which, if rigorously followed, would provide acquisition officials with an opportunity to assess whether program officials had the knowledge needed to develop an executable business case. However, almost 80 percent of the programs we reviewed began system development without holding any prior decision review. Senior officials with the Office of the Secretary of Defense confirmed that this is a common practice among defense acquisition programs. This practice eliminates a key opportunity for decision makers to assess early product knowledge needed to establish a business case that is based on realistic cost, schedule, and performance expectations. Although program officials conduct analysis before starting a development program, they do not consistently follow a process to capture the critical knowledge needed to produce executable business cases, as evidenced by the poor outcomes current programs are experiencing. Officials with the Office of the Secretary of Defense recognized this lack of rigor and discipline in the acquisition process, and in February 2004, the Under Secretary of Defense (Acquisition, Technology and Logistics) issued a department-wide policy memorandum directing acquisition officials to place greater emphasis on systems engineering when planning and managing acquisition programs.
The policy requires programs to develop a systems engineering plan that describes the program’s overall technical approach, including processes, resources, metrics, and applicable performance incentives. Although DOD’s systems engineering initiative has the potential to improve program performance, officials have found that the preliminary results are mixed. Early analysis shows that implementation is inconsistent while program officials learn to develop and implement systems engineering plans.

DOD decision makers continue to permit programs to enter system development before critical technologies are mature. Our review of technology readiness assessments and acquisition decision memorandums for our nine case study programs found that seven of the nine programs were approved to begin development even though program officials reported levels of knowledge below the criteria suggested in the policy and associated guidance, specifically in the area of technology maturity. Those seven programs are not isolated cases. As illustrated in figure 4, 13 of the programs (nearly three-fourths) that received approval to enter system development under the revised policy did so with less than 100 percent of their critical technologies mature to the level specified by DOD. Only 2 of those programs had more than 75 percent of their technologies mature when they began (see appendix III for technology maturity data for each program). Even though acquisition policy states that technologies shall be mature before beginning system development, the practice of accepting high levels of technology risk at program start continues to be the norm and not the exception. An official with the Office of the Secretary of Defense responsible for reviewing and validating program assessments of technology maturity informed us that the office generally views immature critical technologies at the beginning of development as an acceptable risk as long as program officials can show that they have a plan to mature the technologies by the time the program reaches its design readiness review—the point at which additional investments are required to move a program from system integration into system demonstration. Therefore, risk management plans are consistently viewed as acceptable substitutes for demonstrated knowledge.

In addition to emphasizing the importance of capturing technology knowledge before starting system development, DOD’s policy also highlights the importance of demonstrating design maturity before moving from the integration phase of system development into system demonstration and initial manufacturing. The policy establishes a design readiness review between the two phases to determine whether a product’s design is mature and stable and whether the product is ready to move ahead. While DOD’s policy does not require programs to demonstrate any specific level of design maturity, our past work has found that a key indicator of design maturity is the completion of 90 percent of the system’s engineering drawings. We found that defense programs that moved forward with lower levels of design maturity, as indicated by drawing completion, encountered costly design changes and parts shortages that, in turn, caused labor inefficiencies, schedule delays, and quality problems. Consequently, those programs required significant increases in resources—time and money—over what was estimated at the point each program entered the system demonstration phase.
We analyzed engineering drawing completion data for 8 programs initiated under the revised policy that have held a design review and found that more than half of those programs had not completed 90 percent of their design drawings before they received approval to enter the system demonstration phase of development. We also analyzed drawing-release data for 3 programs that have not yet held their design review but have projected the number of drawings officials anticipate will be completed when their reviews are held. Based on projections provided by program officials, 2 of those 3 programs are expected to have less than 55 percent of their drawings complete before they seek approval to begin system demonstration and initial manufacturing.

Despite the revised policy’s guidance that capabilities should be developed and delivered in individually defined and separately managed increments, a majority of the major weapon acquisition programs we assessed continue to be structured to achieve revolutionary increases in capability within one development program. According to the policy, the objective of an evolutionary approach is to balance needs and available capability with resources and put capability into the hands of the user quickly. The policy states that the success of the strategy depends on consistent and continuous definition of requirements and the maturation of technologies that lead to disciplined development and production of systems that provide increasing capability. In this approach, requirements that cannot be satisfied within these limits and within available financial resources must wait for future generations of the product and be managed as separate system development programs with separate milestones, costs, and schedules. In our case studies of nine acquisition programs initiated under the revised policy, we found only one program—the Small Diameter Bomb—that satisfied all of the criteria of an evolutionary approach. In five case studies, we found that program officials had claimed that their programs were evolutionary, yet our evidence shows they were not evolutionary in practice; and in three cases, program officials chose not to use evolutionary acquisition from the outset. Table 3 summarizes our assessment of the nine case studies.

The revised acquisition policy does not contain effective controls that require the demonstration of product knowledge measured against specific criteria to ensure that acquisition officials make disciplined, transparent, and knowledge-based investment decisions. The lack of specific required criteria creates an environment in which unknowns about technology, design, and manufacturing processes are acceptable. Decision makers and program officials are left with no objective measures against which to gauge a program’s level of knowledge, making accountability difficult. In the absence of criteria, transparency in acquisition decisions is essential to ensuring accountability, but key decision documents do not provide sufficient information about major decisions. DOD believes that acquisition decision memorandums, used to document program decisions, provide adequate transparency. However, the decision memorandums we reviewed did not contain an explanation of the decision maker’s rationale and rarely identified remaining risks, especially as they relate to the key knowledge standards emphasized in the policy.
Further, the timeliness, accessibility, and depth of the data contained in the Selected Acquisition Reports, DOD’s primary means of providing Congress with a status report of program performance, inhibit the reports’ usefulness as a management and oversight tool.

In November 2003, we reported that the revised acquisition policy lacked many of the controls that leading commercial companies rely on to attain an acceptable level of knowledge before making additional significant investments. Controls are considered effective if they are backed by specific criteria and if decision makers are required to consider the resulting data before deciding to advance a program to the next level. Controls used by leading companies help decision makers gauge progress in meeting cost, schedule, and performance goals and hold program managers accountable for capturing relevant product knowledge to inform key investment decisions. The controls we have articulated as best practices used by successful commercial product developers are listed below in table 5. Some senior officials with the Office of the Secretary of Defense believe that the effective use of controls in DOD’s policy and the establishment of more specific criteria for decision making would improve program outcomes. They note that specific criteria need to be established and that programs need to be held accountable to those criteria before being permitted to proceed into the next phase. They also note that the criteria for moving an acquisition effort from one phase of the process to the next, primarily documented in acquisition decision memorandums as exit criteria, are not typically specific and often do not relate to the key knowledge-based criteria suggested in the policy. We found this to be true for our nine case study programs. We reviewed acquisition decision memorandums in our case studies and determined that they were not useful in explaining the decision maker’s rationale and that, in almost all of the cases, they did not address the key knowledge criteria suggested in the acquisition policy. In most instances, the decision maker simply noted that the program being assessed was ready to proceed into system development, but did not provide an explanation of the rationale for the decision. Senior officials with the Office of the Secretary of Defense told us that they agree that a better explanation of the decision maker’s rationale, specifically in instances where the knowledge criteria are not fully met, would provide transparency and ultimately allow for a more accountable decision-making process. The following two examples illustrate how decision documentation is lacking:

The Future Combat System program received approval to enter system development and demonstration in 2003, with 19 percent of its critical technologies mature, well below the policy’s standard. The acquisition decision memorandum supporting this decision did not provide the rationale for approving the system with such a large number of immature critical technologies. The memo did direct an updated review of the decision 18 months later and direct that the program “remain flexible and open to accommodate trades in the system architecture and in the individual systems’ designs.”

The Joint Strike Fighter program was approved to enter system development in 2001. The acquisition decision memorandum did not address the fact that 75 percent of the program’s critical technologies were not mature to the policy’s standard.
The memorandum did acknowledge that the program’s requirements could be changed or modified, noting that further refinements in the requirements should be explored as a potential way to reduce program costs. However, the memorandum did not explain why the decision maker determined that the program should enter development without achieving the technology and requirements knowledge emphasized in the policy. The acquisition decision memorandums for most of the other programs we reviewed did not specifically address critical gaps in knowledge, nor did they effectively explain the decision makers’ rationale for deeming those programs ready to begin system development. In memos where we found a reference to key knowledge principles, such as technology maturity, the decision makers acknowledged that more effort was needed to meet the policy’s suggested criteria but considered the risk acceptable to begin development. These memos did not explain why risks were considered acceptable. For example, the Navy’s Multi-Mission Maritime Aircraft program had none of its critical technologies mature at program initiation. The decision maker acknowledged the need to further mature the critical technologies but approved the program to enter development. Instead of holding the program to the policy’s criteria for entering development, the decision maker simply directed the Navy to work with the Office of the Secretary of Defense to implement risk mitigation and technology maturation plans during the integration phase of system development.

In addition to the lack of transparency provided through acquisition decision memorandums, we also found that the data presented to Congress in DOD’s Selected Acquisition Reports (SARs) were of only limited usefulness as an oversight tool. Since 1969, SARs have been the primary means by which DOD reports the status of major weapon system acquisitions to Congress. SARs are expected to contain information on the cost, schedule, and performance of major weapon systems in comparison with baseline values established at program start, full-scale development, and production decision points. Our analysis of current and historical SAR data, as well as a previous GAO review, found that the timeliness, accessibility, and depth of the data contained in the reports limit their usefulness as an oversight tool. Our prior review noted that a number of opportunities exist for DOD to give Congress more complete information on the performance of major defense acquisition programs. DOD agreed that SAR data could be improved to make the reports more useful to Congress.

Failing to consistently implement the knowledge-based process and evolutionary principles emphasized in the revised acquisition policy—coupled with a lack of specific criteria for making key investment decisions—is keeping DOD on its historical path of poor cost and schedule outcomes. Most programs are incurring the same scope of cost overruns and schedule delays as programs managed under prior DOD policies. More consistent use of the early acquisition processes would improve the quality and viability of program business cases by ensuring they are founded on knowledge obtained from rigorous and disciplined analysis. The initiative by the Office of the Secretary of Defense to reinstitute the use of systems engineering is a step in the right direction. However, in order for this initiative to be effective, DOD must establish and enforce specific criteria at key decision points.
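To make the idea of specific, measurable criteria concrete, the following sketch expresses the kinds of knowledge thresholds discussed in this report (for example, fully mature critical technologies at program start and 90 percent of engineering drawings at the design readiness review) as a simple, checkable list. It is purely illustrative: the class, function, and field names and the exact thresholds are assumptions for illustration only, not GAO's recommendation, DOD policy, or any existing system.

from dataclasses import dataclass

@dataclass
class ProgramKnowledge:
    critical_tech_mature_pct: float         # share of critical technologies demonstrated to high readiness
    drawings_released_pct: float            # share of engineering drawings completed at design review
    prototype_meets_requirements: bool      # design demonstrated with a prototype
    processes_in_statistical_control: bool  # critical manufacturing processes capable and in control

def ready_for_program_start(p: ProgramKnowledge) -> bool:
    # Milestone B: all critical technologies demonstrated to high readiness levels.
    return p.critical_tech_mature_pct >= 100.0

def ready_for_system_demonstration(p: ProgramKnowledge) -> bool:
    # Design readiness review: at least 90 percent of drawings complete and a
    # prototype demonstrating that the design meets requirements.
    return p.drawings_released_pct >= 90.0 and p.prototype_meets_requirements

def ready_for_production(p: ProgramKnowledge) -> bool:
    # Milestone C: critical manufacturing processes demonstrated to be capable
    # and in statistical control.
    return p.processes_in_statistical_control

# Example: a program entering development with only 19 percent of its critical
# technologies mature (as reported for the Future Combat System) would fail the
# program-start check.
example = ProgramKnowledge(19.0, 0.0, False, False)
print(ready_for_program_start(example))  # False

A checklist of this kind only sketches the principle; the substantive point of the report is that such criteria be specific, demonstrated with evidence, and enforced at each decision review.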
Our past work has identified and recommended criteria and controls that should be consistently applied at major decision points. The enforcement of these criteria is critical to ensuring that programs have the knowledge necessary to successfully move forward through the acquisition process. DOD officials have acknowledged the advantages of using knowledge-based criteria and controls, but believe the policy already includes enough controls to achieve effective program results. However, without enforceable criteria, defense officials are challenged to determine whether adequate knowledge has been obtained for investing taxpayer dollars. The lack of enforceable criteria also makes it difficult to hold defense officials accountable for their decisions. DOD must ensure that appropriate knowledge is captured and used at critical junctures to make decisions about moving a program forward and investing more money.

We recommend that the Secretary of Defense require program officials to demonstrate that they have captured appropriate knowledge at three key points—program start, design review for transitioning from system integration to system demonstration, and production commitment—as a condition for investing resources. At a minimum, those controls should require program officials to demonstrate that they have achieved a level of knowledge that meets or exceeds the following criteria at each respective decision point:

Program start (Milestone B): Start of product development
- Demonstrate technologies to high readiness levels
- Ensure that requirements for the product are informed by the systems engineering process
- Establish cost and schedule estimates for the product on the basis of knowledge from preliminary design using system engineering tools
- Conduct decision review for program start

Design readiness review: Beginning of system demonstration
- Complete 90 percent of design drawings
- Complete subsystem and system design reviews
- Demonstrate with a prototype that the design meets requirements
- Obtain stakeholders’ concurrence that drawings are complete and producible
- Complete the failure modes and effects analysis
- Identify key system characteristics
- Identify critical manufacturing processes
- Establish reliability targets and a growth plan on the basis of demonstrated reliability rates of components and subsystems
- Conduct decision review to enter system demonstration

Production commitment (Milestone C): Initiation of low-rate production
- Demonstrate manufacturing processes
- Build production-representative prototypes
- Test production-representative prototypes to achieve reliability goal
- Test production-representative prototypes to demonstrate the product in its intended environment
- Collect statistical process control data
- Demonstrate that critical processes are capable and in statistical control
- Conduct decision review to begin production

Furthermore, to ensure that major decisions are transparent and that program officials and decision makers are held accountable, we recommend that the Secretary of Defense require decision makers to include written rationale for each major decision in acquisition decision documentation. The rationale should address the key knowledge-based criteria appropriate for milestone decisions, explain why a program’s level of knowledge in each area was deemed acceptable if criteria have not been met, and provide a plan for achieving the knowledge necessary to meet criteria within a given time frame.

DOD provided written comments on a draft of this report. The comments appear in appendix II.
DOD partially concurred with our recommendation that the Secretary of Defense should establish specific controls to ensure that program officials demonstrate that they have captured a level of knowledge that meets or exceeds specific criteria at three key points in the acquisition process: program start, design readiness review, and production commitment. DOD agreed that knowledge-based decision making is consistent with sound business practice and stated that it would continue to develop policy that reflects a knowledge-based approach and improves acquisition outcomes. DOD noted that it would consider our recommendations as it reassesses the DOD acquisition business model and the knowledge required at each decision point. We believe that DOD’s plan to reassess its business model provides a good opportunity to establish the controls and specific criteria recommended in this report. Therefore, we are retaining our recommendation that the Secretary of Defense should establish controls to ensure that program officials demonstrate that they have captured a level of knowledge that meets or exceeds specific criteria at three key points in the acquisition process.

DOD also partially concurred with our recommendation that the Secretary of Defense require decision makers to provide written rationale in acquisition decision documentation for each major decision. DOD agreed that acquisition decisions should be documented, that decision makers should be held accountable, and that they should provide the rationale for their decisions. DOD believes that the implementation of Section 801 of the National Defense Authorization Act for FY 2006 reinforces these processes. The act calls for the decision maker to certify that the program meets certain requirements, such as technology maturity, prior to starting a new development program at Milestone B. However, the act is focused on the decision to start a development program and does not identify specific criteria for programs to be measured against at design readiness review or production commitment. We believe our recommendation adds transparency and accountability to the process because it requires the decision maker to provide the rationale for a decision to allow a program to move forward, not only at Milestone B but at other key decision points as well. Therefore, we are retaining our recommendation that the Secretary of Defense require decision makers to provide written rationale for each major decision in acquisition decision documentation.

We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; and the Director of the Office of Management and Budget. We will provide copies to others on request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please call me at (202) 512-4841 ([email protected]). Contact points for the offices of Congressional Relations and Public Affairs are located on the last page of this report. Key contributors to this report were Michael Hazard, Assistant Director; Lily Chin; Ryan Consaul; Christopher DePerro; Travis Masters; and Adam Vodraska.

To assess the impact of DOD’s revised acquisition policy, we analyzed cost and schedule data for 23 major defense acquisition programs that were approved to begin system development under the revised policy. We did not assess space, missile defense, or ship programs.
We collected our data from Selected Acquisition Reports, presidential budget documents, ongoing GAO work, and pertinent program officials. We utilized previous GAO reports related to defense acquisition policies and worked with knowledgeable GAO staff to ensure the use of current, accurate data. We also analyzed more than 150 annual Selected Acquisition Reports covering a 36-year period from 1969 to 2005 to determine historical trends related to outcomes of acquisition policy implementation. We assessed whether the revised policy’s knowledge-based, evolutionary acquisition principles were being effectively implemented by conducting 9 case study reviews and analyzing design maturity data for 11 programs that have made engineering-drawing data available to GAO. Our case study programs were the Aerial Common Sensor, Multi-Platform Radar Technology Insertion Program, Global Hawk Unmanned Aerial Vehicle, Small Diameter Bomb, Future Combat System, Joint Strike Fighter, Expeditionary Fighting Vehicle, Multi-Mission Maritime Aircraft, and the E-2 Advanced Hawkeye. We interacted directly with numerous program officials to seek input on current developments with their programs. We studied program documents to assess how well programs understand and are implementing the revised acquisition policy. We also analyzed drawing release data for those programs that have either passed their design review or have provided GAO with estimated drawing release data for a future design review to assess design maturity. In several cases, we asked that program offices verify information in these various documents. We also reviewed Department of Defense (DOD) Directive 5000.1, DOD Instruction 5000.2, and the Defense Acquisition Guidebook. In addition, we examined each of the military services’ policy directives and guidance; DOD memorandums, including those conveying policy intent and DOD expectations regarding policy implementation; and Joint Capabilities Integration and Development System documents. We interviewed relevant officials in Washington, D.C., from the Office of the Director, Defense Research and Engineering, the Joint Staff, the Office of the Secretary of Defense, and Army, Navy, and Air Force acquisition policy staff in order to better understand the content of these documents and the intent of DOD’s policy. We conducted our review from May 2005 to February 2006 in accordance with generally accepted government auditing standards.

Appendix III: Program Data for 23 Programs Initiated under DOD’s Revised Acquisition Policy (as of December 2005)

DOD Acquisition Outcomes: A Case for Change. GAO-06-257T. Washington, D.C.: November 15, 2005.

Defense Acquisitions: Stronger Management Practices Are Needed to Improve DOD’s Software-Intensive Weapon Acquisitions. GAO-04-393. Washington, D.C.: March 1, 2004.

Best Practices: Setting Requirements Differently Could Reduce Weapon Systems’ Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003.

Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002.

Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002.

Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001.

Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes.
Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000.
Best Practices: DOD Training Can Do More to Help Weapon System Programs Implement Best Practices. GAO/NSIAD-99-206. Washington, D.C.: August 16, 1999.
Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999.
Defense Acquisitions: Best Commercial Practices Can Improve Program Outcomes. GAO/T-NSIAD-99-116. Washington, D.C.: March 17, 1999.
Defense Acquisition: Improved Program Outcomes Are Possible. GAO/T-NSIAD-98-123. Washington, D.C.: March 17, 1998.
Best Practices: DOD Can Help Suppliers Contribute More to Weapon System Programs. GAO/NSIAD-98-87. Washington, D.C.: March 17, 1998.
Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD's Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998.
Best Practices: Commercial Quality Assurance Practices Offer Improvements for DOD. GAO/NSIAD-96-162. Washington, D.C.: August 26, 1996.
The Department of Defense (DOD) is planning to invest $1.3 trillion between 2005 and 2009 in researching, developing, and procuring major weapon systems. How DOD manages this investment has been a matter of congressional concern for years. Numerous programs have been marked by cost overruns, schedule delays, and reduced performance. Over the past 3 decades, DOD's acquisition environment has undergone many changes aimed at curbing cost, schedule, and other problems. In order to determine if the policy DOD put in place is achieving its intended goals, we assessed the outcomes of major weapons development programs initiated under the revised policy. Additionally, we assessed whether the policy's knowledge-based, evolutionary principles are being effectively implemented, and whether effective controls and specific criteria are in place and being used to make sound investment decisions. Changes made in DOD's acquisition policy over the past 5 years have not eliminated cost and schedule problems for major weapons development programs. Of the 23 major programs we assessed, 10 are already expecting development cost overruns greater than 30 percent or have delayed the delivery of initial operational capability to the warfighter by at least 1 year. The overall impact of these costly conditions is a reduction in the value of DOD's defense dollars and a lower return on investment. Poor execution of the revised acquisition policy is a major cause of DOD's continued problems. DOD frequently bypasses key steps of the knowledge-based process outlined in the policy, falls short of attaining key knowledge, and continues to pursue revolutionary--rather than evolutionary or incremental--advances in capability. Nearly 80 percent of the programs GAO reviewed did not fully follow the knowledge-based process to develop a sound business case before committing to system development. Most of the programs we reviewed started system development with immature technologies, and half of the programs that have held design reviews did so before achieving a high level of design maturity. These practices increase the likelihood that problems will be discovered late in development when they are more costly to address. Furthermore, DOD's continued pursuit of revolutionary leaps in capability also runs counter to the policy's guidance. DOD has not closed all of the gaps in the policy that GAO identified nearly 3 years ago, particularly with regard to adding controls and criteria. Effective controls require decision makers to measure progress against specific criteria and ensure that managers capture key knowledge before moving to the next acquisition phase. However, DOD's policy continues to allow managers to approach major investment decisions with many unknowns. Without effective controls that require program officials to satisfy specific criteria, it is difficult to hold decision makers or program managers accountable to cost and schedule targets. In this environment, decision-making transparency is crucial, but DOD is lacking in this area as well.
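The screening of program outcomes and design maturity described above reduces to a few simple calculations. The sketch below is illustrative only: the program data are hypothetical, and the 30 percent cost-growth threshold, the 1-year delay threshold, and the drawing-release measure of design maturity are taken from the discussion above rather than from any official DOD metric.

```python
from dataclasses import dataclass

@dataclass
class Program:
    name: str
    baseline_dev_cost: float   # original development cost estimate, in millions
    current_dev_cost: float    # latest development cost estimate, in millions
    ioc_delay_years: float     # slip in initial operational capability, in years
    drawings_released: int     # engineering drawings releasable at the design review
    drawings_expected: int     # total engineering drawings expected

def cost_growth(p: Program) -> float:
    """Development cost growth relative to the original estimate."""
    return (p.current_dev_cost - p.baseline_dev_cost) / p.baseline_dev_cost

def flagged(p: Program) -> bool:
    """Screen used in this illustration: development cost growth above
    30 percent or an IOC delay of at least 1 year."""
    return cost_growth(p) > 0.30 or p.ioc_delay_years >= 1.0

def design_maturity(p: Program) -> float:
    """Share of expected engineering drawings releasable at the design review."""
    return p.drawings_released / p.drawings_expected

# Hypothetical program data, for illustration only.
example = Program("Program X", baseline_dev_cost=1_000.0, current_dev_cost=1_400.0,
                  ioc_delay_years=1.5, drawings_released=550, drawings_expected=1_000)
print(f"cost growth: {cost_growth(example):.0%}, flagged: {flagged(example)}, "
      f"design maturity at review: {design_maturity(example):.0%}")
```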
In 1986, the United States entered into its original Compact of Free Association with the FSM. The compact comprised a framework for the United States to work toward achieving three main goals: (1) to secure self-government for the FSM, (2) to ensure certain national security rights for all parties, and (3) to assist the FSM in its efforts to advance economic development and self-sufficiency. Under the original compact, the FSM also benefited from numerous U.S. federal programs, and its citizens were allowed to live and work in the United States as nonimmigrants and to stay for long periods of time. Although the original compact's first and second goals were met, the FSM did not achieve economic self-sufficiency. The FSM gained independence in 1978, and key defense rights were established. However, the compact's third goal was to be accomplished primarily through U.S. direct financial assistance totaling about $1.5 billion from 1987 through 2003. Although U.S. financial assistance maintained higher income levels than the FSM could have achieved without support, the FSM's estimated per capita GDP at the compact's close did not differ substantially, in real terms, from its per capita GDP in the early 1990s. In addition, we found that the U.S. and FSM governments provided little accountability over compact expenditures and that many compact-funded projects encountered problems related to poor planning and management, inadequate construction and maintenance, or misuse of funds. In 2003, the United States approved an amended compact with the FSM that (1) continues the defense relationship; (2) strengthens immigration provisions; and (3) provides an estimated $2.3 billion to the FSM for 2004 through 2023 (see attachment II). The amended compact, which took effect in June 2004, identifies the additional 20 years of grant assistance as intended to assist the FSM in its efforts to promote the economic advancement and budgetary self-reliance of its people. Financial assistance is provided in the form of annual sector grants and contributions to the trust fund. The amended compact and its subsidiary agreements, along with the FSM's development plan, target the grant assistance to six sectors—education, health, public infrastructure, the environment, public sector capacity building, and private sector development—prioritizing two sectors, education and health. To provide increasing U.S. contributions to the FSM's trust fund, grant funding decreases annually and will likely result in falling per capita grant assistance over the funding period and relative to the original compact (see attachment III). For example, in 2004 U.S. dollar terms, FSM per capita grant assistance will likely fall from around $1,352 in 1987 to around $562 in 2023. Under the amended compact, annual grant assistance is to be provided according to an implementation framework with several components (see attachment IV). For example, prior to the annual awarding of compact funds, the FSM must submit a development plan that identifies goals and performance objectives for each sector. The FSM government is also required to monitor day-to-day operations of sector grants and activities, submit periodic financial statements and performance reports for the tracking of progress against goals and objectives, and ensure annual financial and compliance audits. In addition, the United States and the joint economic management committee (JEMCO) are to approve annual sector grants and evaluate the FSM's management of the grants and its progress toward program and economic goals.
The amended compact and subsidiary trust fund agreement also provide for the formation of an FSM trust fund committee to, among its other duties, hire a money manager, oversee the fund’s operation and investment, and provide annual reports on the fund’s profitability. The FSM economy shows limited potential for developing sustainable income sources other than foreign assistance to offset the annual decline in U.S. compact grant assistance. Moreover, the FSM has not enacted economic policy reforms needed to improve its growth prospects. The FSM’s economy shows continued dependence on government spending of foreign assistance and limited potential for expanded private sector and remittance income. Total government expenditures in 2006, over half of which were funded by external grants, accounted for about 65 percent of GDP. The FSM’s government budget is characterized by limited tax revenue paired with growing government payrolls. For example, FSM taxes have consistently provided less than 25 percent of total government revenue; however, payroll expenditures have increased as a percentage of total government spending, from 38 percent in 2000 to 45 percent in 2006. The FSM development plan identifies fishing and tourism as key potential private sector growth industries. However, the two industries together provide only about 6 percent of employment. Further, according to economic experts, growth in these industries is limited by factors such as the FSM’s geographic isolation, lack of tourism infrastructure, inadequate interisland shipping, limited pool of skilled labor, and growing danger of overfishing. Although remittances from emigrants could provide increasing monetary support to the FSM, evidence suggests that FSM emigrants are currently limited in their income-earning opportunities abroad, owing to inadequate education and vocational skills. Although the FSM has undertaken some efforts aimed at economic policy reform, it has made limited progress in implementing key tax, public sector, land, and foreign investment reforms that are needed to improve its growth prospects. For example: Tax reform. After several years of national policy dialogue to address a tax system that economic experts describe as inequitable and inefficient, the FSM established a tax reform executive steering committee in December 2005. The committee endorsed key elements of tax reform recommended by experts and the FSM’s Tax Reform Task Force, such as a value-added tax (VAT), a net profit tax, and a unified tax authority. In April 2007, the committee endorsed a 3-year implementation plan. However, as of April 2008, legislation required for implementing these measures had not yet been passed. Public sector reform. Although the FSM has endorsed public sector reform aimed at reducing wage and subsidy expenditures, limited progress has been made in addressing annual fiscal deficits, which amounted to about 5 percent of GDP in 2005 and 2006. Slow progress in implementing public sector reforms, combined with a lower level of grant assistance, precipitated fiscal crises in Kosrae and Chuuk. Fiscal adjustment programs were subsequently created for the two states based on, among other things, reductions-in-force wage savings and increased state tax rates. Kosrae completed its adjustment program in 2007, but Chuuk’s implementation of its program began only recently. Moreover, all FSM governments continue to conduct a wide array of commercial enterprises that require subsidies. Land reform. 
In attempts to modernize a complex land tenure system, the FSM has established land registration offices. However, these offices have lacked a systematic method for registering parcels, instead waiting for landowners to voluntarily initiate the process. Continued uncertainties over land ownership and land values create costly disputes, disincentives for investment, and problems regarding the use of land as an asset. Foreign investment reform. Economic experts and private sector representatives describe the overall climate for foreign investment in the FSM as complex and nontransparent. Despite attempts to streamline the process, foreign investment regulations remain relatively burdensome, with reported administrative delays and difficulties in obtaining permits for foreign workers. Some FSM states also require a certain percentage of local ownership in foreign investment. Although the FSM development plan includes objectives for economic reform, JEMCO did not begin to address the country's slow progress in implementing these reforms until August 2006, 2 years into the amended compact. Further, while JEMCO recently approved some funding to support FSM efforts at public sector reform, key challenges to improving private sector growth remain. Although the FSM has allocated compact grants to the sectors targeted by the compact, immediate problems in some sectors persist, and several factors have hindered the FSM's use of the funds to meet long-term development goals. In addition, administrative deficiencies have limited the FSM's ability to account for its use of the grants for these long-term goals. Further, although OIA has monitored early compact activities, program implementation challenges have hampered its oversight. From 2004 through 2008, the FSM targeted compact grants largely according to compact priorities, allocating 35 percent of the funds for education, 27 percent for infrastructure, and 22 percent for health (see attachment V). However, the FSM has completed only three infrastructure projects, and more than $67 million of the $82.5 million (approximately 82 percent) allocated for infrastructure grants in 2004 through 2007 remains unspent. Lack of progress in this sector is attributable to national and state disagreements over infrastructure priorities, problems associated with the project management unit, and Chuuk's inability to secure land leases. Unspent funds for other sector grants from 2004 to 2007 amount to an additional $14.9 million, or around 7 percent of funds allocated (see attachment VI). Additionally, numerous factors have limited the government's use of compact funds to meet long-term development needs. For example: Lack of government consensus. Interior and State officials reported that the FSM's weak federal structure inhibits compact grant implementation. Because each state has its own constitution and authority over budgetary policies, the FSM central government, which is represented on JEMCO, does not control the majority of compact funds and has been unable to secure agreement from the state governments regarding the use of compact funds. Lack of needs analysis. The allocation of FSM grants among its four states is not needs-based and has resulted in significant differences in per capita funding, creating varying levels of government services across the states. For example, in 2006, Yap state received approximately $1,963 in education funding per student, while Chuuk state received $626 per student.
More recently, in 2007, the national government's share of grant funding increased from 8.65 percent to 10 percent, and the allocation of compact funds to the four states decreased. Lack of planning for declining U.S. assistance. A lack of viable plans to address the annual decrement in compact funding and the elimination of nonconforming uses of the public-sector capacity building (PSCB) grant could limit the FSM's ability to sustain current levels of government services. JEMCO required the FSM in 2004 to develop a plan to eliminate funding for the nonconforming uses of the PSCB by 2009. While FSM officials indicated that they plan to replace the PSCB funds with local monies, recent tax revenues have largely stagnated and, in 2006, the FSM requested that the deadline for its elimination of nonconforming funding be extended to 2011. OIA indicated that the steps the FSM takes toward overall public sector reform will affect whether it recommends that JEMCO approve this request. Lack of accountability over compact funds. The FSM's accountability for its use of compact funds has been limited. Although the timeliness of the FSM's single audits has improved—in 2006, only Chuuk and the national government submitted audit reports after the deadlines—auditors have continued to find weaknesses with financial statements and lack of compliance with requirements of major federal programs. For example, the lack of audited financial statements for several subgrantees led the auditors to render qualified and disclaimed opinions. The FSM has failed to consistently monitor day-to-day sector grant operations or report on progress. Inadequate authority. The FSM's first effort to monitor and report on compact progress was through the Office of Compact Management (OCM), which lacked the authority and resources to carry out its function. In 2007, the FSM created a Statistics, Budget and Overseas Development Assistance and Compact Management (SBOC) office. According to OIA, the SBOC may have a role in conducting compact coordination, ensuring sector-by-sector compliance, and providing technical assistance to the states. Nonetheless, as of April 2008, SBOC had not addressed performance problems, such as missing reports and data, and had failed to hold the FSM governments accountable for not meeting JEMCO resolutions and grant requirements. Data deficiencies. Although the FSM established performance measurement indicators, a lack of complete and reliable data prevents the use of these indicators to assess progress. For example, the FSM provided the first complete set of education indicators in 2007. However, OIA found that the data were not consistently reliable for monitoring scholastic improvements, owing to problems in establishing baselines and collecting data for all of the indicators. Likewise, determining performance in the health sector was difficult due to a lack of standardized data collection. Report problems. The FSM continues to have difficulty in submitting its required annual report to the U.S. President on time. As of April 2008, the FSM had not begun work on the 2007 annual report to the U.S. President, which was due in February 2008, and it submitted the 2006 annual report 10 months late. The quarterly reports have also been regularly incomplete or inconsistent, preventing their use for monitoring progress. Most recently, OIA rejected the FSM's 2007 fourth quarter reports, stating that most of the submitted forms were completely blank or missing data. Capacity constraints.
The FSM has not allocated available compact resources to develop the capacity for, and to provide, regular monitoring of sector grants. As a result, the skills necessary to improve financial and programmatic reporting are lacking. For example, the FSM's single audit reports for 2005 and 2006 showed that the FSM's ability to account for the use of compact funds was limited, as shown by weaknesses in its financial statements and lack of compliance with requirements of major federal programs. The FSM's Compact Fiscal Adjustment and Transition Plan, in August 2006, reiterated that capacity weaknesses continue, especially in the areas of financial management, economic planning, and statistics. OIA has carried out various duties as administrator of the amended compact grants but has not addressed the FSM's worsening compliance with compact reporting requirements, and several challenges continue to hamper its compact oversight. For example, in monitoring the sector grants, OIA determined that Chuuk, in 2006, and Kosrae, in 2007, had each misused approximately $1 million in compact funds through the commingling of compact and general funds. OIA required both states to repay the misused funds, a requirement met in 2007. However, OIA has generally failed to hold the FSM accountable for not submitting required reports, including 2006 and 2007 quarterly performance reports and the annual report to the U.S. President, and for not meeting requirements imposed as grant conditions by JEMCO. Additionally, OIA's oversight continues to be constrained by time-consuming demands associated with poor compact implementation. For example, because the FSM state and national government budgets are not presented in a unified format or linked to performance measures, OIA reports that it has continued to spend an inordinate amount of time reviewing them for the JEMCO meetings. FSM trust fund balances in 2023 could vary widely owing to market volatility and choice of investment strategy, potentially preventing trust fund disbursements in some years. Moreover, the FSM's ability to supplement its trust fund balance with additional contributions or other sources of income is uncertain and entails risks. Further, the FSM's trust fund committee has faced challenges in managing the fund's investment and has not evaluated the fund's adequacy as a source of future revenue. Market volatility and investment strategy could have a considerable impact on projected trust fund balances in 2023 (see attachment VII). Our analysis indicates that, under various scenarios, the FSM's trust fund could fall short of the maximum allowed disbursement level—an amount equal to the inflation-adjusted compact grants in 2023—after compact grants end, with the probability of shortfalls increasing over time (see attachment VIII). For example, under a moderate investment strategy, there is about a 30 percent probability that the fund's income will fall short of the maximum distribution by 2031; however, this probability rises to almost 70 percent by 2050. Additionally, our analysis indicates a positive probability that the fund will yield no disbursement in some years; under a moderate investment strategy, the probability is around 19 percent by 2050. FSM trust fund income could be supplemented by sources such as other donors, increased taxes, and securitization. However, this potential is uncertain. Other donors. The trust fund agreement allows the FSM to seek funding from other donors; however, the FSM has not yet received other contributions. Increased taxes.
The FSM’s limited development prospects constrain its ability to raise tax revenues to supplement the fund’s income. Securitization. Securitization—issuing bonds against future U.S. contributions—could increase the fund’s earning potential by raising its balances through bond sales. However, securitization could also lead to lower balances and reduced fund income if interest owed on the bonds exceeds investment returns. In October, 2007, the committee contracted for a study of securitization. The FSM trust fund committee has experienced management challenges in establishing the trust fund to maximize earnings and has not yet evaluated the fund’s adequacy as a source of future revenue. Contributions to the trust fund were initially placed in a low-interest savings account and were not invested until 22 months after the initial contribution. The months when the fund remained in a low-interest account prior to investment likely reduced its potential investment earnings significantly; we estimate this loss at $720,000 per month, after taking into account stock market investment fees. As we reported in June 2007, contractual delays and committee processes for reaching consensus and obtaining administrative support contributed to the time taken to establish and invest funds. The committee has since hired an Executive Administrator in September 2007, and some steps were taken to improve committee processes; however, the Administrator reports that communication and administrative delays remain. Also, despite the likely impact of market volatility and investment strategy, the trust fund committee’s reports have not yet assessed the fund’s potential adequacy as a source of revenue for meeting the FSM’s long-term economic goals. Since enactment of the amended compact, the U.S. and FSM governments have made efforts to meet new requirements for implementation, performance measurement, and oversight. However, after 5 years—one quarter of the amended compact’s duration—the FSM faces significant challenges in working toward the compact goals of economic advancement and budgetary self-reliance. The FSM economy shows continued dependence on government spending of foreign assistance. However, despite the budgetary impact of declining annual grant assistance, the FSM has made little progress in implementing key reforms needed to improve tax income or increase private sector investment opportunities. The FSM has also been unable to utilize more than $67 million in infrastructure and almost $15 million in other sector grant monies. Moreover, persistent deficiencies in needs assessment, long-term planning, and financial accountability continue to hinder the U.S. and FSM governments and JEMCO from ensuring effective implementation of those grants that have been spent. Although OIA has carried out various duties as administrator of compact grants, U.S. and FSM monitoring of grant operations remains deficient owing to continued problems with oversight authority in the FSM, consistently poor data and reporting, and unaddressed capacity constraints. Further, the FSM trust fund committee has yet to assess the potential status of the trust fund as an ongoing source of revenue after compact grants end in 2023. Because the trust fund’s earnings are intended as a main source of U.S. assistance to the FSM after compact grants end, the fund’s potential inadequacy as a source of sustainable income in some years could impact the FSM’s ability to provide future government services. 
To maximize the benefits of compact assistance, our prior reports include recommendations that the Secretary of the Interior direct the Deputy Assistant Secretary for Insular Affairs, as chair of the FSM management and trust fund committees, to take a number of actions, including the following: ensure that JEMCO address the lack of FSM progress in implementing reforms to increase investment and tax income; coordinate with other U.S. agencies on JEMCO to work with the FSM to establish plans to minimize the impact of declining assistance; coordinate with other U.S. agencies on JEMCO to work with the FSM to fully develop a reliable mechanism for measuring progress toward compact goals; and ensure the FSM trust fund committee's assessment and timely reporting of the fund's likely status as a source of revenue after 2023. Interior generally concurred with our recommendations and has taken actions in response to several of them. However, unless the challenges we identified are addressed, the U.S. and FSM are unlikely to meet compact goals of the FSM's economic advancement and budgetary self-reliance. Mr. Chairman and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For future contacts regarding this testimony, please call David Gootnick at (202) 512-3149 or [email protected]. Individuals making key contributions to this testimony included Emil Friberg, Jr. (Assistant Director), Ming Chen, Julie Hirshen, Reid Lowe, Mary Moutsos, Kendall Schaefer, and Eddie Uyekawa.
Related GAO Products
Compacts of Free Association: Trust Funds for Micronesia and the Marshall Islands May Not Provide Sustainable Income, GAO-07-513 (Washington, D.C.: July 15, 2007).
Compact of Free Association: Micronesia and the Marshall Islands' Use of Sector Grants, GAO-07-514R (Washington, D.C.: May 25, 2007).
Compacts of Free Association: Micronesia and the Marshall Islands Face Challenges in Planning for Sustainability, Measuring Progress, and Ensuring Accountability, GAO-07-163 (Washington, D.C.: Dec. 15, 2006).
Compacts of Free Association: Development Prospects Remain Limited for Micronesia and the Marshall Islands, GAO-06-590 (Washington, D.C.: June 27, 2006).
Compacts of Free Association: Implementation of New Funding and Accountability Requirements Is Well Under Way, but Planning Challenges Remain, GAO-05-633 (Washington, D.C.: July 11, 2005).
Compact of Free Association: Single Audits Demonstrate Accountability Problems over Compact Funds, GAO-04-7 (Washington, D.C.: Oct. 7, 2003).
Compact of Free Association: An Assessment of the Amended Compacts and Related Agreements, GAO-03-988T (Washington, D.C.: June 18, 2003).
Foreign Assistance: Effectiveness and Accountability Problems Common in U.S. Programs to Assist Two Micronesian Nations, GAO-02-70 (Washington, D.C.: Jan. 22, 2002).
Foreign Relations: Migration From Micronesian Nations Has Had Significant Impact on Guam, Hawaii, and the Commonwealth of the Northern Mariana Islands, GAO-02-40 (Washington, D.C.: Oct. 5, 2001).
Foreign Assistance: U.S. Funds to Two Micronesian Nations Had Little Impact on Economic Development, GAO/NSIAD-00-216 (Washington, D.C.: Sept. 22, 2000).
[Attachment II (dollars in millions): annual FSM grant amounts (Section 211) and FSM trust fund contributions (Section 215); table data not reproduced.] The annual grant amounts include $200,000 to be provided directly by the Secretary of the Interior to the Department of Homeland Security, Federal Emergency Management Agency, for disaster and emergency assistance purposes.
The grant amounts do not include the annual audit grant, capped at $500,000. These dollar amounts shall be adjusted each fiscal year for inflation by the percentage that equals two-thirds of the percentage change in the U.S. gross domestic product implicit price deflator, or 5 percent, whichever is less in any one year, using the beginning of 2004 as a base. Grant funding can be fully adjusted for inflation after 2014, under certain U.S. inflation conditions.
[Attachment IV: compact implementation framework. The FSM proposes grant budgets for each sector that include expenditures, performance goals, and specific performance indicators; a breakdown of personnel expenditures and other costs; and information on U.S. federal programs and other donors. The United States evaluates the proposed sector grant budgets for consistency with funding requirements in the compact and related agreements. Reports are used to identify positive events that accelerate performance outcomes and problems encountered, along with their impact on grant activities and performance measures; to monitor general operations to ensure compliance with grant conditions; and to prepare the annual report to the U.S. President.]
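The partial inflation adjustment described in the note above can be expressed as a short formula: each year's grant is increased by two-thirds of the percentage change in the GDP implicit price deflator, capped at 5 percent. The sketch below illustrates the arithmetic with a hypothetical base grant and hypothetical deflator changes; it does not reproduce the actual compact funding schedule.

```python
def adjusted_grant(base_grant, deflator_changes, cap=0.05, share=2/3):
    """Apply the compact's partial inflation adjustment year by year:
    each year's increase equals two-thirds of the percentage change in the
    U.S. GDP implicit price deflator, or 5 percent, whichever is less."""
    grant = base_grant
    for change in deflator_changes:
        grant *= 1 + min(share * change, cap)
    return grant

# Hypothetical base grant of $100 million and deflator changes of 3%, 2%, and 9%.
# Two-thirds of the 9% change would be 6%, so the 5% cap applies in that year.
print(round(adjusted_grant(100.0, [0.03, 0.02, 0.09]), 2))
```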
From 1987 through 2003, the Federated States of Micronesia (FSM) received more than $1.5 billion in economic assistance under the original Compact of Free Association with the United States. In 2003, the U.S. government approved an amended compact with the FSM that provides an additional $2.3 billion from 2004 through 2023. The Department of the Interior's Office of Insular Affairs (OIA) is responsible for administering and monitoring this assistance. The amended compact identifies the additional 20 years of grant assistance as intended to assist the FSM in its efforts to promote the economic advancement and budgetary self-reliance of its people. The assistance is provided in the form of annually decreasing grants that prioritize health and education, paired with annually increasing contributions to a trust fund intended as a source of revenue for the country after the grants end in 2023. The amended compact also contains several new funding and accountability provisions intended to strengthen reporting and bilateral interaction. Among these provisions is a requirement for the establishment of a joint economic management committee and a trust fund committee to, respectively, among other duties, review the FSM's progress toward compact objectives and assess the trust fund's effectiveness in contributing to the country's economic advancement and long-term budgetary self-reliance. In 2003, we testified that these provisions could improve accountability over the assistance provided but that successful implementation of these provisions would require appropriate resources and sustained commitment from both the United States and the FSM. Drawing on several more recent reports as well as updated information, this report will discuss the FSM's economic prospects, implementation of the amended compact to meet its long-term goals, and potential trust fund earnings. The FSM has limited prospects for achieving budgetary self-reliance and long-term economic advancement, and the FSM government has not yet implemented policy reforms needed to enable economic growth. The FSM economy depends on public sector spending of foreign assistance; government expenditures, over half of which are funded by external grants, account for about 65 percent of the FSM's gross domestic product (GDP). The FSM government's budget is characterized by limited tax revenue and a growing wage bill, and the two private sector industries identified as having growth potential--fisheries and tourism--face significant barriers to expansion because of the FSM's remote geographic location, inadequate infrastructure, and poor business environment. Moreover, progress in implementing key tax, public sector, land, and foreign investment policy reforms necessary to improve growth has been slow. For example, although the FSM has agreed on principles of reform to address its tax system that has been characterized by experts as inefficient and inequitable, the FSM government has made limited progress in implementing fundamental tax reform. Also, the FSM's failure to implement key public sector reforms to reduce wage and subsidy expenditures resulted in fiscal crisis in Chuuk and Kosrae. In August 2006, nearly 2 years after the amended compact entered into force, the FSM Joint Economic Management Committee (JEMCO) began discussions of economic policy reform and has since approved some funding to support FSM reform efforts; however, challenges to private sector growth remain. 
Numerous factors have negatively affected the use of the compact grants for FSM development goals. The FSM's grant allocations have reflected compact priorities by targeting education, health, and infrastructure. However, as of April 2008, the FSM had completed only three infrastructure projects and approximately 82 percent of the $82.5 million in infrastructure funds remained unexpended. Lack of progress in this sector is attributable to national and state disagreements over infrastructure priorities, problems associated with the project management unit, and Chuuk's inability to secure land leases. Additionally, the FSM has almost $15 million in unspent funds for other sectors, or around 7 percent of funds allocated from 2004 to 2007. Furthermore, the FSM's distribution of grants among its four states has not been based on need, leading to significant differences in per capita funding, while the FSM's long-term planning has not taken into account the likely effects of the annual funding decrement and other budgetary changes. The FSM has also lacked accountability for the use of compact funds, as demonstrated by weaknesses in its yearly financial statements and lack of compliance with requirements of major federal programs. Moreover, the FSM has not consistently monitored day-to-day grant operations or reported on progress toward program and economic goals, owing to inadequate data, a lack of required reporting, and an unwillingness to dedicate the necessary resources. OIA has conducted administrative oversight of the sector grants, but its oversight has been constrained by the need to assist the FSM with its compact implementation activities, such as preparing budgets and addressing financial management problems, including the misuse of compact funds by Chuuk and Kosrae in 2006 and 2007, respectively.
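The allocation and unspent-balance figures cited above reduce to straightforward percentages. The sketch below does that arithmetic for illustration; the exact unspent infrastructure amount is described only as "more than $67 million," so $67.5 million is assumed here, and the per-student figures are those reported for 2006.

```python
def unspent_share(unspent, allocated):
    """Share of allocated grant funds that remain unexpended."""
    return unspent / allocated

# Infrastructure grants, 2004-2007, in $ millions (unspent amount assumed at 67.5).
print(f"infrastructure funds unspent: {unspent_share(67.5, 82.5):.0%}")

# Per-student education funding cited for 2006, in dollars.
yap, chuuk = 1_963, 626
print(f"Yap received {yap / chuuk:.1f} times Chuuk's per-student funding")
```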
Uterine Fibroids and Treatment
Uterine fibroids are noncancerous growths that develop from the muscular tissue of the uterus. Most women will develop uterine fibroids at some point in their lives, although most fibroids cause no symptoms. In some cases, however, uterine fibroids can cause symptoms, including heavy or prolonged menstrual bleeding, pelvic pressure or pain, or frequent urination, requiring medical or surgical therapy. Treatment for uterine fibroids includes surgical procedures to remove the uterus (hysterectomy) or to remove the fibroids (myomectomy). These surgical procedures can be done via minimally invasive laparoscopic procedures or through traditional surgical procedures, such as an abdominal hysterectomy. Other treatments for uterine fibroids include, for example, high-intensity focused ultrasound and drug therapy.
Power morcellators are medical devices used during laparoscopic (minimally invasive) surgeries. Morcellation refers to the cutting of tissue into smaller fragments for removal from the body. In laparoscopic surgical procedures, morcellation facilitates the extraction of large pieces of tissue through small incisions. Over time, laparoscopic surgeons have applied different manual methods of morcellation using scalpels, forceps, and other tools that require repetitive manual motions, such as twisting. Power morcellators generally use an electromechanical motor to spin a cylindrical blade within a tube for cutting and removing tissue. Power morcellators can be used during different types of laparoscopic surgeries, including general surgical procedures, such as spleen and liver surgeries; urological surgical procedures, such as kidney removal surgeries; and gynecological surgical procedures. These laparoscopic gynecological procedures include two types of surgeries used to treat uterine fibroids: (1) the removal of the uterus, known as hysterectomy; and (2) the removal of individual fibroids, known as myomectomy. Some women may prefer laparoscopic hysterectomies and myomectomies, because these procedures are associated with such benefits as a shorter post-operative recovery time and, for laparoscopic hysterectomies, a reduced risk of infection compared to open procedures.
Medical devices, including power morcellators, are regulated by FDA. The agency classified most power morcellators as class II devices, meaning that FDA generally considers them to be higher-risk than class I devices and lower-risk than class III devices. For most class II devices, FDA determines whether they should be legally marketed in the United States through the agency's 510(k) premarket notification process. Specifically, the device manufacturer through a 510(k) submission must notify FDA at least 90 days before it intends to market a new device and establish that such device is substantially equivalent to a predicate device. To be substantially equivalent, a device must (1) have the same intended use as the predicate device; and (2) have the same technological characteristics as the predicate device, or have different technological characteristics but be supported by submitted information demonstrating that the device is as safe and effective as the predicate device and does not raise different questions of safety or effectiveness. Figure 1 shows FDA's decision-making flowchart for its 510(k) premarket notification process in effect when FDA cleared the 510(k) submissions for power morcellators prior to July 2014.
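The substantial-equivalence test described above amounts to a short decision sequence, which figure 1 depicts as a flowchart. The sketch below paraphrases that logic for illustration only; the field names are invented for this example, and it simplifies FDA's actual review, which involves additional steps and judgment.

```python
from dataclasses import dataclass

@dataclass
class Submission:
    same_intended_use: bool
    same_technological_characteristics: bool
    performance_data_shows_equivalent_safety: bool  # relevant only when characteristics differ
    raises_new_safety_questions: bool

def substantially_equivalent(s: Submission) -> bool:
    """Simplified paraphrase of the 510(k) substantial-equivalence test:
    same intended use as the predicate, and either the same technological
    characteristics or different ones supported by performance data showing
    the device is as safe and effective, without new questions of safety
    or effectiveness."""
    if not s.same_intended_use:
        return False
    if s.same_technological_characteristics:
        return True
    return (s.performance_data_shows_equivalent_safety
            and not s.raises_new_safety_questions)

# Example: different characteristics (e.g., forceps instead of a vacuum) but
# performance data supports equivalence, so the device would be found equivalent.
print(substantially_equivalent(Submission(True, False, True, False)))
```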
Once a new medical device is on the market, medical device user facilities, manufacturers, and importers must comply with medical device reporting requirements. Under these requirements, these parties must report device-related adverse events, including events that reasonably suggest a device has or may have caused or contributed to a death or serious injury, in a timely manner. For example, user facilities must report such deaths and serious injuries within 10 work days of becoming aware of information reasonably suggesting the device may have caused or contributed to the death or serious injury. Within this time frame, deaths must be reported to both FDA and the manufacturer, if known, and serious injuries must be reported to the manufacturer, or, if the manufacturer is unknown, to FDA. Consumers and other parties may voluntarily report adverse events directly to FDA. The agency maintains databases that house both mandatory and voluntary reports of device-related adverse events. While adverse event reports may provide the first signal that a problem exists with a device or its use, or both, FDA and others have reported that information from these reports can be limited. Examples of identified limitations include the following: Incomplete or erroneous reporting. Adverse event reports can include incomplete reporting, where key data are not reported, or erroneous reporting, where the information provided is not accurate. Reports that are not timely. Adverse event reporting does not always reflect real-time reporting, as some reports document events that occurred years earlier. Underreporting. Adverse events may not always be reported. (See app. I for additional information on medical device reporting requirements.) In addition to adverse event reporting, FDA conducts other postmarket surveillance activities to obtain information about devices after they are on the market. For example, FDA may order a manufacturer to conduct a postmarket surveillance study if failure of a class II or class III device would be reasonably likely to have serious adverse health consequences. FDA documentation shows the agency cleared 25 510(k) submissions for power morcellators to be marketed in the United States between 1991 and 2014. In clearing the first of the 25 power morcellators in 1991, FDA determined the new device was substantially equivalent to an electromechanical system for cutting tissue during minimally invasive surgeries performed on joints, known as an arthroscopic surgical system. (For more information on this predicate device, see app. II.) FDA determined the other 24 power morcellators—the most recent of which was cleared in May 2014—were substantially equivalent to at least one previously cleared power morcellator. We also found that for most power morcellators the documentation we reviewed referenced more than one predicate device. As shown in table 1, the additional devices referenced by manufacturers included other previously marketed devices, such as manual morcellators, forceps, and various accessories used for laparoscopic surgeries. FDA officials stated that the additional devices referenced likely informed FDA's decision-making for all 25 power morcellators. However, FDA's determinations of substantial equivalence were based on only one predicate device, the arthroscopic surgical system for the first power morcellator cleared and a previously cleared power morcellator for the other 24 devices, according to agency officials.
(For more information on each of the 25 power morcellators cleared by FDA, see app. III.) Among the 25 cleared morcellators, we found that FDA determined that all had the same intended use and 19 had the same technological characteristics as their predicate devices; the agency also reviewed performance data for 11 of them. (See fig. 2.) In our review of the FDA documentation for power morcellators, we found that the agency determined that all 25 devices had the same intended use as their predicate devices. In making this determination, FDA also determined that 4 power morcellators had different indication statements compared to the predicate devices, but the differences did not alter the intended use of each device. In general, the indication statements for the 4 power morcellators identified new or fewer procedures during which the devices were to be used compared to the predicates. For example, the indication statement for a power morcellator FDA cleared in 2000 specifically identified use in hysterectomies where the predicate’s indication statement only identified myomectomies. In another example, the indication statement of a power morcellator cleared in 2011 only identified use in gynecological procedures where the predicate identified general surgical and urological procedures, in addition to gynecological. For all 4 devices, however, FDA determined that the differences in indication statements did not alter the intended effect of the devices or raise new questions of safety or effectiveness, and determined, overall, that the power morcellators had the same intended use as their predicates. We also found that FDA determined that 19 of the 25 power morcellators had the same technological characteristics as their predicate devices, while 6 devices did not. According to FDA officials, the technological characteristics of these 6 power morcellators that were different included the change from the use of a vacuum to suction tissue into the morcellator to the use of forceps to grasp tissue for this purpose; the change from single use, disposable body or blade to ones that are reusable; the change from a rotary cutting action to one that is reciprocating; and the addition of the ability to control suction with a foot switch. In addition, for 11 power morcellators, we found that FDA reviewed performance data. These included 3 power morcellators for which FDA determined that different technological characteristics could affect safety and effectiveness, and 8 other power morcellators for which the device description was not sufficient to determine whether the devices were substantially equivalent to predicate devices. For these 11 devices, FDA reviewed performance data—which, according to agency officials, included data such as those from testing the wear of components, electrical safety, and electromagnetic compatibility—and determined that the devices were substantially equivalent to predicates. Based on our review of FDA documentation, we also found nearly all of the 25 power morcellators were indicated for use in gynecological surgical procedures. We found the indications for use for 14 power morcellators specifically identified laparoscopic gynecological procedures, such as myomectomies and hysterectomies: the indications for use of 4 devices identified gynecological procedures only, the indications for use of 2 devices identified general surgery and gynecological procedures, and the indications for use of 8 devices included general surgery, gynecological, and urological procedures. 
Of the 11 other devices, 9 power morcellators had indications for use for general surgical procedures, which could include gynecological procedures. (See table 2.) FDA was aware of the potential for spreading tissue when using a power morcellator prior to receiving the first adverse event reports; however, the general understanding was that the risk of an unsuspected cancer that could be spread when using the device was low. In response to adverse event reports, FDA has taken several actions, including estimating cancer risk, warning against certain uses of power morcellators, and recommending new labeling. However, questions remain regarding the use of power morcellators to treat uterine fibroids, and FDA continues to monitor available information. FDA officials were aware of the potential for spreading tissue during procedures that involved the use of power morcellators before receiving the first adverse event reports describing the spread of cancerous tissue after the use of a power morcellator to treat uterine fibroids. Specifically, according to FDA officials, the potential for spreading tissue—cancerous or noncancerous—following the use of a power morcellator has been known since the agency cleared the first device in 1991. We found that this awareness was reflected in the labeling for 12 of the 25 devices cleared by FDA. The labeling for these power morcellators recommended the use of a bag when cutting cancerous (diagnosed or suspected) tissue and any other tissue that may be considered harmful if spread. FDA officials noted that articles reporting the risk of spreading tissue following the use of a power morcellator to treat uterine fibroids were published prior to the agency receiving the first adverse event reports in December 2013. Agency officials, however, noted that at the time, there was no consensus within the clinical community regarding the risk of this occurring, particularly for cancerous tissue. We identified 30 such articles published between 1980 and 2012 that mentioned or concluded that there was a risk of tissue dissemination following the use of a power morcellator, or that noted the need for a physician to remove all fragments of tissue following a surgery. Most of these articles involved case studies or were limited in scope. For example, one case study published in 2010 looked at a single patient who, after undergoing a hysterectomy to treat a uterine fibroid, was found to have a previously unsuspected sarcoma (a type of cancer), and concluded that there is a potential risk of spreading the unsuspected cancer following morcellation. None of the articles that we identified estimated the risk of spreading tissue, cancerous or noncancerous, during power morcellation.
Uterine Sarcoma
Uterine sarcoma is a cancer of the muscle and supportive tissues of the uterus. Uterine sarcoma is one of two types of uterine cancer (endometrial carcinoma is the other, more common type of uterine cancer). The American Cancer Society estimates that less than 4 percent of uterine cancers are uterine sarcoma. Of the two types of uterine cancer, uterine sarcoma tends to be more aggressive, to be more difficult to diagnose before surgery, and to have worse prognoses. Leiomyosarcoma is a type of uterine sarcoma that, similar to fibroids, develops in the muscular tissue of the uterus. Leiomyosarcoma can resemble a fibroid and, as a result, can be difficult to diagnose before surgery.
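The prevalence estimates discussed in the following paragraphs range from roughly 1 in 10,000 to FDA's later estimate of about 1 in 350, and the practical difference is easier to see when translated into expected cases for a given number of procedures. The sketch below does that arithmetic; the surgical volume is hypothetical and used only for scale.

```python
def expected_cases(one_in_n, procedures):
    """Expected number of unsuspected uterine sarcomas among a given number of
    fibroid surgeries, for a prevalence estimate expressed as '1 in N'."""
    return procedures / one_in_n

procedures = 100_000  # hypothetical surgical volume, used only for scale
for one_in in (10_000, 1_000, 350):
    print(f"1 in {one_in:>6,}: about {expected_cases(one_in, procedures):.0f} expected cases")
```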
Though the risk of spreading tissue during power morcellation was known, FDA officials stated that prior to December 2013, the general understanding was that the risk of a woman undergoing treatment for fibroids having unsuspected cancer—specifically, a difficult-to-diagnose cancer called uterine sarcoma—was low. Therefore, the risk of a power morcellator spreading a uterine sarcoma would be expected to be low, as it could be no higher than the risk of having a uterine sarcoma. In addition, FDA officials were not aware of any definitive scientific publications regarding the actual risk of cancer in uterine fibroids (by definition presumed to be noncancerous), which is generally consistent with statements by two professional societies. FDA officials noted that published estimates for an unsuspected cancer (specifically uterine sarcoma) in a woman with a presumed uterine fibroid varied from about 1 in 1,000 women to 1 in about 10,000 women. These estimates of the risk of cancer depended on several factors, including the cancer diagnosis (e.g., uterine sarcoma or a category of uterine sarcoma called leiomyosarcoma), the type of treatment for uterine fibroids (e.g., hysterectomy or myomectomy), or the patient population included in the estimate (e.g., women of reproductive age or women who are older). One 2012 study that examined 1,091 instances of uterine morcellation at one hospital, however, reported that the rate of unsuspected cancer (uterine sarcoma) after laparoscopic morcellation was 9 times higher than the rate quoted to patients at the time (1 in 10,000), and concluded that uterine morcellation carries a risk of spreading unsuspected cancer. FDA took several actions after receiving the first adverse event reports in December 2013 describing the spread of cancerous tissue after using a power morcellator to treat uterine fibroids. (See fig. 3.) See appendix IV for a more detailed timeline of FDA actions and other events related to power morcellators. FDA's actions included the following: Convening a signal review team. In December 2013, FDA began forming a signal review team to coordinate and lead the agency's evaluation and response to the potential safety issue related to power morcellators. According to FDA officials, the team started meeting weekly and collecting information on the devices, adverse event reports, and scientific literature in January 2014. Estimating the prevalence of cancer in women undergoing surgical treatment for uterine fibroids. In April 2014, FDA published the results of a review of scientific literature to estimate the prevalence of cancer (specifically sarcoma and leiomyosarcoma) in women undergoing surgical treatment for uterine fibroids. Based on this review, FDA estimated that about 1 in 350 women undergoing the surgical procedures of hysterectomy or myomectomy to treat uterine fibroids was at risk for having an unsuspected uterine sarcoma. FDA also estimated that about 1 in 500 such women was at risk for having one certain type of uterine sarcoma, leiomyosarcoma. FDA officials told us that these estimates were significantly higher than what had been traditionally quoted (1 in 1,000 to 1 in 10,000). Issuing an initial safety communication. In April 2014, FDA issued a safety communication discouraging the use of power morcellators in surgical procedures (hysterectomies and myomectomies) to treat uterine fibroids.
In discouraging this use, FDA cited the lack of a reliable method for predicting whether a woman with uterine fibroids may have an unsuspected cancer; specifically, a uterine sarcoma. The agency also noted that if a power morcellator is used on women with an unsuspected uterine sarcoma, the procedure may spread cancerous tissue within the abdomen and pelvis, significantly worsening the patient’s likelihood of long-term survival. The safety communication also recommended that health care providers carefully consider all the available treatment options for women with symptomatic uterine fibroids and thoroughly discuss the benefits and risks of all treatments with patients. FDA also noted that it had instructed manufacturers that produced power morcellators used to treat uterine fibroids to review their device labeling for accurate risk information for patients and providers. Convening a meeting of the Obstetrics and Gynecology Devices Panel of FDA’s Medical Devices Advisory Committee. In July 2014, FDA convened an expert panel and guest speakers to present their views and available data related to the potential power morcellator safety issue. The panel discussed patient populations in which power morcellators should not be used, specifically mentioning patients with known or suspected cancer. The panel also discussed mitigation strategies, including the possibility of adding a warning to power morcellator labeling related to the risk of spreading an unsuspected cancer. Issuing guidance. FDA issued an “immediately in effect” guidance document in November 2014. The guidance noted that recent discussions with the patient and clinical communities, as well as the peer-reviewed medical literature, had raised awareness of the risk of spreading unsuspected cancerous tissue beyond the uterus when power morcellators are used during surgeries intended to treat uterine fibroids. For power morcellators with a general or gynecologic indication for use, the guidance recommended the addition of specific safety statements to the product labeling for laparoscopic power morcellators, including two contraindications and a boxed warning that the use of power morcellators during fibroid surgery may spread cancer. (See fig. 4.) FDA also recommended that manufacturers submit their revised labeling language to FDA, as well as to the hospitals and other facilities that had previously purchased power morcellators. We found that the manufacturers of the 10 power morcellators with indications for use for general surgical or gynecological procedures marketed as of November 2016 followed the recommendation, providing FDA with updated labeling. Information provided by FDA indicated that manufacturers also contacted hospitals and other user facilities that purchased their power morcellators, providing the updated labeling and instructing them to switch out any old labeling. Half of the manufacturers also instructed the user facilities to mail back a receipt of acknowledgement regarding the safety alert to the manufacturer. Issuing an updated safety communication. At the same time it issued guidance in November 2014, FDA issued an updated safety communication warning against the use of power morcellators in the majority of women undergoing surgery (hysterectomy or myomectomy) to treat uterine fibroids. This safety communication recommended that doctors thoroughly discuss the benefits and risks of all treatments with their patients. 
The updated safety communication also specified that FDA considers the spread of unsuspected cancer when using a power morcellator for hysterectomy or myomectomy to treat uterine fibroids as a serious injury, which is a reportable adverse event under the agency's medical device reporting requirements. Inspecting selected user facilities for compliance with adverse event reporting. In December 2015, FDA initiated inspections at selected hospitals to review their compliance with medical device reporting requirements, which specify that hospitals and other user facilities must report certain device-related events to FDA and to manufacturers when the manufacturer is known. These inspections included five hospitals that, according to FDA officials, were chosen because there were reports of adverse events at these facilities related to the spread of uterine cancer from the use of power morcellators. FDA identified significant deviations from medical device reporting requirements at these hospitals based on its review of the inspection evidence. FDA investigators' observations included user facilities' failure to report adverse events within required time frames or to establish and maintain files for medical device reporting—that is, adverse event reports. The agency determined that corrective action plans presented by two of the five hospitals were adequate, and according to FDA officials, the agency worked with the three other hospitals to help ensure appropriate corrective actions were taken. Questions remain regarding the use of power morcellators in the treatment of uterine fibroids, including varying stakeholder opinions regarding the associated risks. For example, FDA officials noted there was limited information available to assess how the risk of spreading cancerous tissue is affected when the morcellation is performed using a power morcellator or through manual morcellation (e.g., using a scalpel). Similarly, officials from one professional society also stated that they were not aware of any reliable data showing that power morcellation spreads tissue any worse than other morcellation techniques. In addition, professional societies have questioned or noted concerns with FDA's estimate of the risk of cancer (uterine sarcoma) in women who undergo surgical treatment of uterine fibroids, citing limitations related to FDA's methodology. One professional society's open letter to FDA included concerns regarding the keywords FDA officials used to find the studies included in their estimate, stating that those keywords may have limited the number of studies used to develop the agency's estimate. The letter also asserted that FDA's estimate was higher than a more appropriate estimated risk of uterine cancer of about 1 in 1,500 to 1 in 2,000. FDA officials have acknowledged limitations, such as the small number of studies, in their estimate, but stated that estimates in more recently published studies have generally been consistent with the agency's estimate. Continuing questions also include the long-term effects of FDA's guidance on patients, according to the stakeholders we interviewed. Two professional societies we contacted have expressed concern that FDA's decision to discourage the use of power morcellators in laparoscopic surgeries (hysterectomies and myomectomies) to treat uterine fibroids limits women's health options.
According to officials from the two societies, the reduction or elimination of laparoscopic surgery using a power morcellator to treat uterine fibroids—in response to FDA’s safety communication and guidance—may lead to an increased use of abdominal hysterectomies, a surgical procedure that typically does not involve the use of power morcellators, but is associated with other risks. One professional society noted that abdominal hysterectomies require larger incisions, slower recovery time, and present the patient with higher mortality rates and complications than laparoscopic hysterectomies. However, FDA officials noted that one 2016 study reported a decline in the use of power morcellators in hysterectomies since the agency issued its November 2014 guidance, and found no increase in complications from abdominal hysterectomies. While these questions remain, FDA officials stated that the agency continues to review scientific literature regarding the use of power morcellators to treat uterine fibroids as new studies have been conducted since 2014. We found more than 50 articles on the risk of uterine cancer in women or the use of morcellation in women undergoing gynecologic surgeries like hysterectomy and myomectomy—including peer-reviewed articles, case studies, and opinion pieces—that have been published since December 2013. FDA also continues to monitor available adverse event information regarding the use of power morcellators, while acknowledging the limitations of the available information. FDA reported that, as of September 2016, the agency had identified 285 adverse event reports about the spread of an unsuspected cancer following the use of a power morcellator. According to FDA officials, the majority (over 88 percent) of these reports were mandatory reports submitted by manufacturers. The remainder were voluntary reports from patients and their families, as well as physicians (about 10 percent) and mandatory reports from hospitals and other user facilities (less than 2 percent). According to FDA officials, of the 285 adverse event reports regarding power morcellators and the spread of unsuspected cancer that the agency received through September 2016, 5 were related to events occurring after FDA issued its guidance and updated safety communication in November 2014. FDA officials noted, however, the limitations in the current, passive, medical device reporting system, which relies on people to identify that a harm occurred or a risk is present, recognize that the harm or risk is associated with the use of a particular device, and take the time to report it. For power morcellators, officials from three health care providers (two hospitals and one physician group) that we spoke to stated that prior to November 2014, physicians would likely not have considered the spreading of an unsuspected cancer following the use of a power morcellator as a reportable adverse event, because the device would have performed as intended (e.g., cutting and extracting tissue). FDA’s inspections of manufacturers of power morcellators and hospitals that use them have also identified issues related to medical device reporting of adverse events. (See app. I for more information on FDA inspections related to medical device reporting.) Recognizing the limitations in its current postmarket surveillance activities, the agency reported plans to generate better information in the future. 
For example, in October 2016, the agency reported plans to work with hospitals to identify a system that quickly identifies life-threatening problems caused by medical devices. FDA officials also noted they will continue to review new technologies, such as morcellation containment systems, and work on a national registry to collect data on the treatment of fibroids. In addition, FDA is working to establish a National Evaluation System for health Technology to more efficiently generate better evidence for medical device evaluation and regulatory decision-making.
The professional societies we contacted did not have any professional standards or training requirements for physicians specifically regarding the use of power morcellators, but some societies issued guidance to physicians related to procedures that could involve the use of power morcellators. The training requirements for physicians performing procedures like hysterectomies are typically determined at the hospital level. All power morcellator manufacturers provided instructions for use, and some offered technical training. Officials from three professional societies we contacted—AAGL (formerly the American Association of Gynecologic Laparoscopists), the American Board of Obstetrics and Gynecology (ABOG), and the American Congress of Obstetricians and Gynecologists (ACOG)—stated that there are no professional standards issued by their societies that apply to member physicians specifically regarding the use of power morcellators. ABOG, which certifies obstetricians and gynecologists in the United States, does not deal directly with training recommendations or requirements related to the use of power morcellators. Officials we contacted from AAGL and ACOG, which are professional societies representing member physicians; The Joint Commission, which accredits hospitals; and three health care providers, which included two hospitals and a physician group, stated that training requirements for physicians performing specific procedures, such as procedures to treat uterine fibroids, are generally governed by hospital credentialing and privileging. While the professional societies that we contacted did not set standards or requirements for using power morcellators, some provided guidance and educational resources for their members on the procedures that could involve the use of power morcellators. For example, in May 2014, the American College of Obstetricians and Gynecologists (ACOG’s companion organization) published a special report on clinical recommendations and scientific issues related to hysterectomies or myomectomies. This special report touched on topics related to proper diagnosis and evaluation before a hysterectomy or myomectomy, the use of a bag during morcellation in gynecologic surgery, and patient counseling and informed consent information that should be discussed with a patient if a power morcellator is being considered for use during the procedure. Officials from the three health care providers that we interviewed indicated that physicians may receive training in using power morcellators during their medical residency (for example, if their attending physician used the device). The officials also noted that, after completing their medical residency, physicians who want to use power morcellators for laparoscopic surgery would likely seek out training, such as individual training from another physician with experience using the device.
According to health care provider officials, physicians’ privileges to perform laparoscopic hysterectomies and myomectomies could be part of broader privileges—for example, they said that some hospitals may grant permission for a physician to use a power morcellator as part of a general list of procedures for gynecologists, or a hospital could require specific permission for use of the device. All of the 25 power morcellators cleared by FDA included instructions from the manufacturers for using the device, and some of the manufacturers offered technical training for physicians. FDA regulations require that the labeling for a prescription device like a power morcellator, which is not safe for use except under the supervision of a licensed practitioner, must provide information on the device’s use, including precautions under which practitioners can use the device safely and the purpose for which the device is intended. We found the labeling for the 25 power morcellators included instructions for use (submitted by the manufacturers to FDA as part of the agency’s premarket review of the devices), which provided information such as device assembly, use, disassembly, and safety information. One power morcellator manufacturer that responded to our request for information stated that it has a standard procedure to review the instructions for use with new users of its power morcellator. In addition to providing instructions for use, two manufacturers that provided us with information also offered technical training to physicians on their power morcellators, such as demonstrating how to set-up or operate their devices. FDA does not require manufacturers to provide clinical training for power morcellators, that is, training on the actual morcellation of tissue during a surgical procedure. One manufacturer we spoke to stated that clinical training is typically part of a surgeon’s accredited residency and fellowship program. We provided a draft of this report to the Secretary of Health and Human Services. HHS provided technical comments that were incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. The Food and Drug Administration (FDA) uses information gathered through adverse event reporting to monitor and track potential safety issues associated with medical devices after they are marketed in the United States. According to FDA, adverse event reports are best used for two purposes. First, they are used to capture qualitative snapshots of adverse events for a particular device or device type, such as the types of malfunctions or clinical events, or both, associated with the device. Second, they are used to detect safety signals, such as identifying unexpected events associated with a particular device or device type. Adverse event reports are submitted to FDA through mandatory and voluntary sources. 
Mandatory adverse event reporting by medical device importers, manufacturers, and user facilities enables FDA to obtain specific safety data related to medical devices from these reports. FDA regulations require medical device importers, manufacturers, and user facilities that become aware of information suggesting that a device may have caused or contributed to a death or serious injury to provide information to FDA. Manufacturers and importers also must report certain device malfunctions—manufacturers must report the information to FDA and importers must report the information to the manufacturer. (See table 3 for summaries of these reporting requirements.) FDA also encourages healthcare professionals, patients, caregivers, and consumers to submit voluntary reports of adverse events or problems with medical devices. According to FDA officials, while the agency has enforcement authority over mandatory adverse event reporting by user facilities, the agency has generally focused its enforcement resources on manufacturers—which are required to investigate each reportable event. According to FDA officials, of the 2,185 device inspections conducted in fiscal year 2015, 875 included a review of medical device reporting. Of these inspections, FDA reported that the agency found 284 to have inspection observations related to medical device reporting requirements. Of the 12 manufacturers of power morcellator medical devices, FDA reported inspecting 11 of them in the past 5 years (including inspections for devices other than power morcellators). As a result of these inspections, FDA reported identifying problems related to medical device reporting such as manufacturers not reporting adverse events within required time frames or not implementing medical device reporting procedures. For three manufacturers, the inspections resulted in FDA issuing warning letters that cited, among other things, violations of medical device reporting requirements. FDA may also inspect user facilities’ compliance with medical device reporting requirements, for example, in situations where the user facility perspective is essential to understanding the public health issue. Recently, in light of several high-profile device safety issues occurring in hospitals, the agency initiated inspections at 17 hospitals in December 2015. According to the director of FDA’s Center for Devices and Radiological Health, these hospitals were chosen because there were reports of events at these facilities related to the spread of uterine cancer from the use of power morcellators or the spread of infections associated with another device called a duodenoscope. The director noted that while these events appeared to be the kind that would have fallen under the agency’s medical device reporting requirements, the agency did not see corresponding adverse event reports submitted to FDA’s adverse event report database. He further reported that from these inspections, the agency learned several things, including the following:
Some hospitals did not submit required reports for deaths or serious injuries related to devices used at their facilities; and in some cases, they did not have adequate procedures in place for reporting device-related deaths or serious injuries to FDA or to the manufacturers.
Based on the number of user facilities in the United States and the number of reports FDA receives, the agency believes that these hospitals are not unique, in that there is limited to no reporting to FDA or to the manufacturers at some hospitals.
Hospital staff often were not aware of nor trained to comply with all of FDA’s medical reporting requirements. The director also noted that FDA wants to work with hospitals to address issues of limited or nonreporting, and to work with hospitals to get the real-world information FDA needs. For example, following the inspections, FDA held regulatory meetings with certain hospitals to help identify corrective actions. In addition, FDA hosted a public workshop in December 2016 to discuss how to improve hospitals’ role in monitoring medical device safety. In clearing the first laparoscopic power morcellator in 1991 through the 510(k) premarket notification process, the Food and Drug Administration (FDA) determined the device was substantially equivalent to an electromechanical system for cutting tissue during minimally invasive surgeries performed on knees and other joints. According to FDA officials, this device, known as an arthroscopic surgical system, was one of a number of devices cited by the manufacturer in the 510(k) submission; however, the agency based its determination of substantial equivalence primarily on the arthroscopic surgical system. As shown in table 4, FDA documentation shows that the device can be traced back to a surgical system also used for cutting during knee surgeries that FDA determined in 1978 to be substantially equivalent to a predicate device that was marketed prior to the enactment of the Medical Device Amendments of 1976 (May 28, 1976). Between 1991 and 2014, the Food and Drug Administration cleared 25 laparoscopic power morcellators to be marketed in the United States. Figure 5 shows the device type, medical specialty, and indications for use for 11 power morcellators still being marketed in the United States in November 2016. Figure 6 provides the same information for the other 14 devices that were no longer being marketed in November 2016. Table 5 shows key events related to laparoscopic power morcellators and the actions the Food and Drug Administration has taken in relation to safety concerns of the spread of unsuspected uterine cancer following the use of power morcellators in the treatment of uterine fibroids. In addition to the contact named above, Kim Yamane (Assistant Director), Aaron Holling (Analyst-in-Charge), Jazzmin Cooper, and Kate Tussey made key contributions to this report. Also contributing were Leia Dickerson, Sandra George, Drew Long, and Vikki Porter.
In December 2013, media reports raised concerns regarding the use of power morcellators in the surgical treatment of women with uterine fibroids. These concerns focused on the spread of an unsuspected uterine cancer after such use of the devices. GAO was asked to review power morcellator medical devices. This report examines (1) the number of 510(k) submissions for power morcellators FDA cleared, and the extent to which the agency determined the devices had new intended uses or new technological characteristics; (2) FDA's understanding of any concerns with the use of power morcellators to treat uterine fibroids prior to receiving adverse event reports, and the actions FDA has taken in response to these reports; and (3) the professional standards and guidance for physicians regarding the use of power morcellators, and the information device manufacturers provided. GAO reviewed documentation of FDA's decision-making and guidance and manufacturers' device labeling, and interviewed FDA officials. In addition, GAO reviewed documents and contacted officials from 10 professional societies and other organizations that have a potential interest in the use of power morcellators, and three health care providers that performed gynecological procedures that could involve the use of the devices. GAO also contacted all 12 manufacturers for the power morcellators FDA cleared for the U.S. market. The Department of Health and Human Services provided technical comments on a draft of this report, which were incorporated as appropriate. Between 1991 and 2014, the Food and Drug Administration (FDA)—the federal agency responsible for the oversight of medical devices—cleared 25 submissions for laparoscopic power morcellators for the U.S. market. FDA cleared the submissions for these devices, which cut tissue into small pieces to facilitate removal through small incision sites of gynecological and other types of minimally invasive surgeries, through its premarket notification process. Under this process, established under section 510(k) of the Federal Food, Drug, and Cosmetic Act, FDA reviews information submitted by a device manufacturer and determines whether the new device is substantially equivalent to another legally marketed device, known as a predicate device. In making this determination, FDA assesses whether a device has (1) the same intended use; and (2) the same technological characteristics as a predicate device, or has different technological characteristics but submitted information demonstrates the device is as safe and effective as the predicate device, and does not raise different questions of safety or effectiveness. A device determined to be substantially equivalent is cleared to be marketed. For power morcellators, FDA determined the devices in all 25 of the 510(k) submissions had the same intended use as their predicates, while 6 had new technological characteristics. Prior to receiving adverse event reports, FDA understood the risk of having an unsuspected cancer that could be spread using a power morcellator as low; in response to such reports, the agency has taken several actions. According to FDA officials, the agency was aware of the potential for power morcellators to spread tissue (cancerous and noncancerous) when the agency cleared the first device in 1991. FDA officials noted that, at the time, the risk of having a type of uterine cancer that can resemble noncancerous uterine tumors, called fibroids, was thought to be low based on available information. 
After receiving reports in December 2013 about the spread of an unsuspected cancer following the use of power morcellators in surgeries to treat fibroids, FDA estimated the cancer risk to women undergoing these surgeries to be about 1 in 350 for one type of cancer. FDA issued a safety communication in November 2014 warning against certain uses of power morcellators—specifically in treating uterine fibroids. The agency also issued guidance recommending that manufacturers add a boxed warning to their device labeling, which all current manufacturers followed, and conducted inspections to review hospitals' compliance with medical device reporting requirements. As questions remain related to the use of power morcellators, FDA has continued to monitor adverse event reports, among other actions. Professional societies provided some guidance to physicians regarding the use of power morcellators, while manufacturers of the devices provided instructions and some technical training. According to officials at professional societies that GAO contacted, there are no professional standards specific to the use of power morcellators, but some guidance and educational resources are available for surgical procedures to treat uterine fibroids in which the devices may be used. Training requirements for physicians using power morcellators generally occur at hospitals as part of the processes to ensure that physicians have suitable experience and abilities. Manufacturers provide instructions for use, and some offer technical training that demonstrates device set-up, operation, and cleaning.
In March 2009, $26.7 billion of Recovery Act funding was apportioned to all 50 states and the District for activities allowed under the Federal-Aid Highway Surface Transportation Program, including restoration, repair, and construction of highways, and for other eligible surface transportation projects. The act requires that 30 percent of these funds be suballocated for projects in metropolitan and other areas of the state. Highway funds are apportioned to the states through federal-aid highway program mechanisms, and states must follow the requirements of the existing program. Under the Recovery Act, the maximum federal fund share of highway infrastructure investment projects is 100 percent, whereas the federal share under the existing federal-aid highway program is generally 80 percent. As of July 17, 2009, $16.8 billion of the apportioned funds had been obligated for over 5,700 projects nationwide, including $9.8 billion that had been obligated for over 2,900 projects in the 16 states and the District that are the focus of our review. About half of Recovery Act highway obligations nationwide have been for pavement improvements. Specifically, $8.2 billion is being used for projects such as reconstructing or rehabilitating deteriorated roads. Many state officials told us they selected a large percentage of resurfacing and other pavement improvement projects because they did not require extensive environmental clearances, were quick to design, could be quickly obligated and bid, could employ people quickly, and could be completed within 3 years. In addition, about $2.8 billion, or about 17 percent of Recovery Act funds nationally, has been obligated for pavement-widening projects, and around 12 percent has been obligated for the replacement and improvement of existing bridges, and the construction of new bridges. Figure 1 shows obligations by the types of road and bridge improvements being made. As of July 17, 2009, $401.4 million had been reimbursed nationwide by the Federal Highway Administration (FHWA), including $140.8 million that had been reimbursed for projects in the 16 states and the District. DOT officials told us that although funding has been obligated for more than 5,000 projects, it may be months before contractors mobilize and begin work. States make payments to these contractors for completed work and then can request reimbursement from FHWA. Nevertheless, this is a notable increase in reimbursements since we issued our report on July 8, 2009. At that time we reported that, according to June 25 data, FHWA had reimbursed $233 million nationwide, including $96.4 million that had been reimbursed to the 16 states and the District. This is an increase of about 72 percent and 46 percent respectively over a period of about three weeks, compared with increases in obligations in the 6 percent range. We will continue to monitor these trends in the weeks ahead. According to state officials, because an increasing number of contractors are looking for work, bids for Recovery Act contracts have come in under estimates. State officials told us that bids for the first Recovery Act contracts were ranging from around 5 percent to 30 percent below the estimated cost. Several state officials told us they expect this trend to continue until the economy substantially improves and contractors begin taking on enough other work. Funds appropriated for highway infrastructure spending must be used as required by the Recovery Act. 
States are required to do the following:
Ensure that 50 percent of apportioned Recovery Act funds are obligated within 120 days of apportionment (before June 30, 2009) and that the remaining apportioned funds are obligated within 1 year. The 50 percent rule applied only to funds apportioned to the state and not to the 30 percent of funds required by the Recovery Act to be suballocated, primarily based on population, for metropolitan, regional, and local use. The Secretary of Transportation is to withdraw and redistribute to other states any amount that is not obligated within these time frames.
Give priority to projects that can be completed within 3 years and to projects located in economically distressed areas, as defined by the Public Works and Economic Development Act of 1965, as amended. According to this act, to qualify as an economically distressed area, an area must meet one or more of three criteria, two of which relate to income and unemployment based on the most recent federal or state data, and the third of which is based on a Department of Commerce determination of special need.
Certify that the state will maintain the level of spending for the types of transportation projects funded by the Recovery Act that it had planned to spend as of the day the Recovery Act was enacted. As part of this certification, the governor of each state is required to identify the amount of funds the state plans to expend from state sources from February 17, 2009, through September 30, 2010.
All states have met the first Recovery Act requirement that 50 percent of their apportioned funds are obligated within 120 days. Of the $18.7 billion nationally that is subject to this provision, 69 percent was obligated as of June 25, 2009. The percentage of funds obligated nationwide and in each of the states included in our review is shown in figure 2. The second Recovery Act requirement is to give priority to projects that can be completed within 3 years and to projects located in economically distressed areas. While officials from almost all of the states said that they considered project readiness, including the 3-year completion requirement, when making project selections, there was substantial variation in the extent to which states prioritized projects in economically distressed areas and how they identified these areas. Due to the need to select projects and obligate funds quickly, many states first prioritized projects based on other factors and only later identified whether these projects fulfilled the requirement to give priority to projects in economically distressed areas. According to the American Association of State Highway and Transportation Officials, in December 2008, states had already identified more than 5,000 “ready-to-go” projects as possible selections for federal stimulus funding, 2 months prior to enactment of the Recovery Act. Officials from several states also told us they had selected projects prior to the enactment of the Recovery Act and that they only gave consideration to economically distressed areas after they received guidance from DOT. States also based project selection on other priorities, such as geographic distribution, the potential for job creation or other economic benefits, and state planning criteria or funding formulas. DOT and FHWA have yet to provide clear guidance regarding how states are to implement the requirement that priority be given to economically distressed areas.
In February 2009, FHWA published replies to questions from state transportation departments on its Recovery Act Web site stating that because states have the authority to prioritize and select federal-aid projects, it did not intend to develop or prescribe a uniform procedure for applying the Recovery Act’s priority rules. Nonetheless, FHWA provided a tool to help states identify whether projects were located in economically distressed areas. Further, in March 2009, FHWA provided guidance to its division offices stating that FHWA would support the use of “whatever current, defensible, and reliable information is available to make the case that [the state] has made a good faith effort to consider economically distressed areas” and directed its division offices to take appropriate action to ensure that the states gave adequate consideration to economically distressed areas. We also found some instances of states developing their own eligibility requirements for economically distressed areas using data or criteria not specified in the Public Works and Economic Development Act. According to the act, to qualify for this designation, an area generally must (1) have a per capita income of 80 percent or less of the national average or (2) have an unemployment rate that is, for the most recent 24-month period for which data are available, at least 1 percent greater than the national average unemployment rate. For areas that do not meet one of these two criteria, the Secretary of Commerce has the authority to determine that an area has experienced or is about to experience a special need arising from actual or threatened severe unemployment or economic adjustment problems resulting from severe short-term or long-term changes in economic conditions. In each of the cases we identified, the states informed us that FHWA approved the state's use of alternative criteria. However, FHWA did not consult with or seek the approval of the Department of Commerce, and it is not clear under what authority FHWA approved these criteria. For example:
Arizona based the identification of economically distressed areas on home foreclosure rates and disadvantaged business enterprises—data not specified in the Public Works Act. Arizona officials said they used alternative criteria because the initial determination of economic distress based on the act’s criteria excluded three of Arizona’s largest and most populous counties, which also contain substantial areas that, according to state officials, are clearly economically distressed and include all or substantial portions of major Indian reservations and many towns and cities hit especially hard by the economic downturn. The state of Arizona, in consultation with FHWA, developed additional criteria that resulted in these three counties being classified as economically distressed.
Illinois based the classification of economically distressed areas on increases in the number of unemployed persons and the unemployment rate, whereas the act bases this determination on how a county’s unemployment rate compares with the national average unemployment rate. According to FHWA, Illinois opted to explore other means of measuring recent economic distress because the initial determination of economic distress based on the act’s criteria was based on data not as current as information available within the state and did not appear to accurately reflect the recent economic downturn in the state.
Using the criteria established by the Public Works Act, 30 of the 102 counties in Illinois were identified as not economically distressed. Illinois’s use of alternative criteria resulted in 21 counties being identified as economically distressed areas that had not been so classified following the act’s criteria. California based its economically distressed area determinations on the January 2009 monthly unemployment rates developed by the California Employment Development Department. While the use of state data is allowed under the act, the data must cover a 24-month period. California officials stated that county-level unemployment data from December 2006 through November 2008 were not sufficiently representative of the current unemployment situation in California. Our July 2009 report recommended that the Secretary of Transportation develop (1) clear guidance on identifying and giving priority to economically distressed areas that is in accordance with the requirements of the Recovery Act and the Public Works and Economic Development Act of 1965, as amended, and (2) more consistent procedures for FHWA to use in reviewing and approving states’ criteria. In its response to this recommendation, DOT said that it has already provided clear and consistent guidance to assist states and localities in identifying economically distressed areas and prioritizing projects in these areas, and that it has also conducted extensive outreach with state and local governments. However, we believe DOT’s existing guidance is insufficient because, while it emphasizes the importance of giving priority to these areas, it does not define what giving priority means, and thus does not ensure that the act’s priority provisions will be consistently applied. DOT also stated that it is consulting with the Department of Commerce to develop additional guidance on criteria that may be used to classify areas as economically distressed for the purpose of Recovery Act funding. We will review the additional guidance when it becomes available and plan to continue to monitor this issue in the weeks ahead for our future reports. Finally, the states are required to certify that they will maintain the level of state effort for programs covered by the Recovery Act. With one exception, the states have completed these certifications, but they face challenges. Maintaining a state’s level of effort can be particularly important in the highway program. We have found that the preponderance of evidence suggests that increasing federal highway funds influences states and localities to substitute federal funds for funds they otherwise would have spent on highways. As we previously reported, substitution makes it difficult to target an economic stimulus package so that it results in a dollar-for-dollar increase in infrastructure investment. Most states revised the initial certifications they submitted to DOT. As we reported in April, many states submitted explanatory certifications—such as stating that the certification was based on the “best information available at the time”—or conditional certifications, meaning that the certification was subject to conditions or assumptions, future legislative action, future revenues, or other conditions. The legal effect of such qualifications was being examined by DOT when we completed our review. 
On April 22, 2009, the Secretary of Transportation sent a letter to each of the nation’s governors and provided additional guidance, including that conditional and explanatory certifications were not permitted, and gave states the option of amending their certifications by May 22. Each of the 16 states and the District selected for our review resubmitted their certifications. According to DOT officials, the department has concluded that the form of each certification is consistent with the additional guidance, with the exception of Texas. Texas submitted a revised certification on July 9, 2009. According to DOT officials, as of July 28, 2009, the status of Texas’ revised certification remained unresolved. For the remaining states, while DOT has concluded that the form of the revised certifications is consistent with the additional guidance, it is currently evaluating whether the states’ method of calculating the amounts they planned to expend for the covered programs is in compliance with DOT guidance. States face severe fiscal challenges, and most states are estimating that their fiscal year 2009 and 2010 revenue collections will be well below earlier estimates. In the face of these challenges, some states told us that meeting the maintenance-of-effort requirements over time poses significant challenges. For example, federal and state transportation officials in Illinois told us that to meet the state’s maintenance-of-effort requirements in the face of lower-than-expected fuel tax receipts, the state would have to use general fund or other revenues to cover any shortfall in the level of effort stated in its certification. Mississippi transportation officials are concerned about the possibility of statewide, across-the-board spending cuts in 2010. According to the Mississippi transportation department’s budget director, the agency will try to absorb any budget reductions in 2010 by reducing administrative expenses to maintain the state’s level of effort. We will continue to monitor states’ and localities’ use of Recovery Act funds for transportation programs and their compliance with program rules. In our next report, in September 2009, we plan to provide information on action taken by states and DOT in response to our recommendation on economically distressed areas and follow up on the progress states and metropolitan areas have made in obligating Recovery Act funds for highway infrastructure programs. We also plan to examine the use of Recovery Act funds for the Federal Transit Administration’s Transit Capital Assistance program—the transit program receiving the most Recovery Act funding—in selected states. We expect that subsequent reports will include information on states’ use of Recovery Act funds for other transit programs, such as the Fixed Guideway Modernization program. In addition to the two reports we have issued to date, we have also reported or testified on the following issues related to other transportation programs receiving Recovery Act funding:
Discretionary transportation infrastructure grants. We reported that DOT followed key elements of federal guidance in developing selection criteria for awarding grants under this $1.5 billion program. These key elements include communicating important elements associated with funding opportunities and using selection criteria that support a framework for merit-based spending and follow transportation infrastructure investment principles.
High-speed passenger rail projects.
We examined the factors that can lead to economically viable projects and whether the Federal Railroad Administration’s (FRA) strategic plan to use the $8 billion of Recovery Act funds provided for high-speed and other intercity passenger rail projects incorporates those factors. We found that factors such as costs, ridership projections, and determination of public benefits affect which projects are likely to be economically viable. We also found that FRA’s strategic plan for high-speed rail outlines, in general terms, how the federal government may invest Recovery Act funds for high-speed rail development but that it does not establish clear goals or a clear role for the federal government in high-speed rail. We are beginning follow-up work aimed at, among other things, identifying how project sponsors and others have surmounted the challenges of instituting new rail service and how FRA is positioned to develop, implement, and oversee its new high-speed rail program. We hope to have this work completed by next spring. We will continue to monitor these and other areas in which the committee might be interested. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Committee might have. For further information regarding this statement, please contact Katherine A. Siggerud at (202) 512-2834 or [email protected], or A. Nicole Clowers at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this statement are Steve Cohen, Heather Halliwell, David Hooper, Bert Japikse, Hannah Laufe, Leslie Locke, and Crystal Wesco. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The American Recovery and Reinvestment Act of 2009 (Recovery Act) included more than $48 billion for the Department of Transportation's (DOT) investment in transportation infrastructure, including highways, rail, and transit. This testimony--based on GAO report GAO-09-829 , issued on July 8, 2009 and updated with more recent data, in response to a mandate under the Recovery Act--addresses (1) the uses of Recovery Act transportation funding including the types of projects states have funded, (2) the steps states have taken to meet the act's requirements, and (3) GAO's other work on transportation funding under the Recovery Act. In GAO-09-829 , GAO examined the use of Recovery Act funds by 16 states and the District of Columbia (District), representing about 65 percent of the U.S. population and two-thirds of the federal assistance available through the act. GAO also obtained data from DOT on obligations and reimbursements for the Recovery Act's highway infrastructure funds. A substantial portion of Recovery Act highway funds have been obligated, with most funded projects focusing on pavement improvements. In March 2009, $26.7 billion was apportioned to 50 states and the District for highway infrastructure and other eligible projects. As of July 17, 2009, $16.8 billion of the apportioned funds had been obligated for over 5,700 projects nationwide. About half of the funds has been obligated for pavement improvements such as reconstructing or rehabilitating roads; 17 percent has been obligated for pavement-widening projects; and about 12 percent has been obligated for bridge projects. Remaining funds were obligated for the construction of new roads and safety projects, among other things. States have generally complied with the act's three major requirements on the use of transportation funds: (1) Fifty percent of funds must be obligated within 120 days of apportionment. All states have met this requirement. (2) Priority for funding must be given to projects that can be completed within 3 years and are located in economically distressed areas, as defined by the Public Works and Economic Development Act. Officials from almost all of the states included in GAO's review said they considered project readiness, including the 3-year completion requirement, when making project selections. However, due to the need to select projects and obligate funds quickly, many states first selected projects based on other factors and only later identified whether these projects fulfilled the economically distressed area requirement. Additionally, some states identified economically distressed areas using data or criteria not specified in the Public Works or Recovery Act. In each of these cases, states told us that DOT's Federal Highway Administration (FHWA) approved the use of alternative criteria but it is not clear under what authority it did so as FHWA did not consult with or seek the approval of the Department of Commerce. (3) State spending on transportation projects must be maintained at the level the state had planned to spend as of the day the Recovery Act was enacted. With one exception, the states have certified that they will maintain their level of spending. GAO will continue to monitor states' use of Recovery Act funds for transportation programs and their compliance with program rules. In the next report, in September 2009, GAO plans to provide information on the use of Recovery Act funds for transit programs and for highway programs. 
Previous GAO work on the act has addressed other transportation issues. For instance, GAO's work on discretionary transportation grants found that DOT followed key elements of federal guidance in developing selection criteria for awarding these grants, and GAO's work on intercity rail funding found that although DOT's strategic plan for high-speed rail generally outlines how the act's funds may be invested for high-speed rail development, the plan does not establish clear goals or a clear role for the federal government.
DOD uses its secondary inventories, such as spare parts, clothing, and medical supplies, to support its operating forces worldwide. However, its logistics system to acquire, store, and deliver these materials has frequently been criticized as cumbersome, inefficient, and costly. DOD’s inventory management problems have been long-standing, characterized by the expenditure of billions of dollars on excess supplies and the failure to acquire sufficient tools and expertise to manage them effectively. DOD’s culture has historically encouraged maintaining excessive inventories rather than managing with just the right amount of stock, and DOD has been slow to adopt new management practices, technologies, and logistics systems. Lately, however, DOD has taken several steps, such as developing a prime vendor for medical, food, and clothing supplies, to help change this culture. In fiscal year 1994, DOD developed a logistics strategic plan, which it has updated annually, to provide an integrated logistics roadmap to support its warfighting strategy. The plan, which is prepared by the Office of the Deputy Under Secretary of Defense (Logistics) and approved by the Under Secretary of Defense (Acquisition and Technology), indicates senior leaders’ commitment and support, which are important in overcoming barriers to changing DOD’s culture. Its current plan states that DOD is striving to cut secondary inventories from the current $70 billion to $53 billion or less by October 2001, or about 24 percent, and occupied storage space from 631 million cubic feet to 375 million cubic feet or less, or about 40 percent. DOD’s data appear to indicate that, for the most part, DOD is on target in achieving these outcomes by the end of the decade. DOD also hopes to reduce logistics response times by one-third from its fiscal year 1996 first-quarter average by September 1997, and to reduce the average age for backordered items to 30 days by October 2001. DOD’s plan can further serve as a fundamental building block to creating a results-oriented organization as envisioned by the Government Performance and Results Act (GPRA) of 1993, which requires federal agencies, including DOD as a total entity, to develop strategic plans no later than September 30, 1997, for a 5-year period. In previous work, we have found that leading organizations that successfully implement results-oriented management have established clear hierarchies of performance goals and measures throughout all levels of their organizations. GPRA was designed to create a new results-oriented federal management and decision-making approach that requires agencies to clearly define their missions, set goals, link activities and resources to goals, measure performance, and report on their accomplishments. In crafting GPRA, Congress recognized that agencies must be alert to the environment in which they operate; in their strategic plans, they are required to identify the external factors that could affect their ability to accomplish their missions. The cost of DOD’s logistics system is larger than the budgets of some federal agencies that are required to submit their strategic plans to Congress under GPRA. Although DOD is required to develop and submit a DOD-wide plan to Congress, the GPRA approach to strategic planning is, in our opinion, a useful technique for larger efforts, such as DOD’s logistics system. 
Recognizing that it needed an integrated logistics roadmap to support its warfighting strategy into the next century, DOD developed a strategic plan in 1994 to improve its logistics system. The plan does a good job setting out a vision of what DOD expects the logistics system to do, that is “provide reliable, flexible, cost-effective and prompt logistics support, information, and services to the warfighters; and achieve a lean infrastructure.” DOD’s vision is guided by several principles, which its plan highlights, such as the need for near real-time information on material and logistics support; the need for both performance metrics and performance measurement; and the use of process reengineering and investment to reduce the operational and support cost burden on defense resources without reducing readiness. The plan recognizes that the future logistics environment will require greater mobility, visibility of key assets, and more dynamic workload management to provide a rapid response to changing requirements and improved and accurate management information to better control logistics resources as defense budgets decline. The plan also points out that streamlining to a leaner logistics system can be achieved through greater integration of business and production processes but that performance of logistics processes must be continually assessed to identify opportunities for improvement through the adoption of new initiatives. DOD’s fiscal year 1996-97 edition of its plan reiterates the 1994 plan’s goals and aims to achieve them by 2001. According to DOD’s plan, its overall goals are to (1) reduce logistics cycle times, (2) develop a seamless logistics system, and (3) streamline the logistics infrastructure. The plan also sets forth the objectives and strategies for addressing these goals. In all, the plan lists a total of 95 specific strategies, including the identification of 12 priority strategies, such as the total asset visibility, battlefield distribution, and continuous acquisition and life-cycle support (CALS) strategies. The strategies are to help accomplish the plan’s goals and, as a result, achieve “world-class capabilities, while reducing the cost of DOD’s logistics system.” DOD reported that it has made progress in achieving all three of its goals. DOD recognized, however, that implementing certain strategies was often more complex than originally anticipated and that while most strategies included specific milestones, many actions do not happen just once but continue. As a result, DOD has adjusted its approaches to some of the strategies and its projected completion dates for others. DOD can further improve its planning process by (1) linking its priority strategies to resources, (2) better linking the services’ and DLA’s plans to its logistics strategic plan, and (3) identifying interim approaches when milestones of a priority strategy have been extended. Although DOD’s plan indicates that staffing and financing requirements are to be aligned to the planning, programming, and budgeting system’s (PPBS) cycle, it does not indicate the magnitude and source of resources that are required to implement many of its strategies, particularly the priority strategies. By including the resources to carry out the plan’s strategies, DOD could help ensure that a strategy based on priorities, and agreed to by those who must approve the resources to implement them, guides management actions and shapes the budget consistent with the direction and outcomes it wishes to achieve.
In this connection, logistics managers for one of DOD’s priority logistics strategies—the CALS strategy—have not identified, either in DOD’s plan or the CALS implementation plan, the magnitude and source of resources that are required to implement its many initiatives. CALS, which began in 1985, is intended to automate and integrate acquisition, engineering, manufacturing, and logistics data on weapons. If successfully implemented, this effort is expected to allow for more efficient management of weapon systems information by converting into digital format millions of technical manuals and engineering drawings used throughout a weapon’s life cycle, linking databases, and providing access to users within and outside of DOD for managing this information. CALS managers acknowledged that there is no single point of funding accountability for implementing CALS and that DOD also does not know the total cost associated with this effort. Because CALS funding comes from many sources, such as weapon system budgets and CALS-related system budgets, CALS managers further indicated that it is difficult to arrive at an estimated total cost to implement CALS. We and the DOD Inspector General have reported over the years problems associated with implementing CALS. We recognize that it is difficult to arrive at a cost for CALS, but until resource requirements for CALS are clearly identified and tied to the PPBS cycle, DOD will continue to have difficulty implementing CALS. Effective strategic planning helps guide members of an organization to envision their future and develop the necessary procedures and operations to achieve that future. Therefore, it is equally important that DOD’s plan cascade through all its organizations so that responsible elements of those organizations work toward attaining the same goals. However, the Executive Steering Group, which is responsible for directing implementation of the plan, assessing progress, setting priorities, and developing plan updates, has not required the services and DLA to develop logistics strategic plans that link their individual goals and strategies to DOD’s plan. Consequently, the services’ goals, objectives, and strategies do not always support DOD’s plan. We did note that DLA is the only major defense agency to take the initiative to ensure that the goals and strategies of its corporate plan (similar to a strategic plan) link directly to DOD’s plan. The Army’s and the Air Force’s logistics plans have evolved over the last several years to better reflect DOD’s goals and objectives. The Navy has only recently begun to develop its first logistics strategic plan and expects to complete it by the end of the year. In our opinion, this is an opportune time for the Navy to ensure its plan ties directly to DOD’s goals and objectives. During this review, we noted that DOD’s plan did not contain interim approaches that could be developed and implemented when milestones of a priority strategy have been extended. Interim approaches are particularly important in cases where other strategies outlined in the plan are interrelated and dependent on the success of the priority strategy, such as the corporate information management (CIM) initiative, to accomplish their goals and objectives. To illustrate, in 1989, DOD introduced the CIM strategy to improve business practices and the use of information technology and to eliminate redundant systems in medical, civilian payroll, and material management. 
According to DOD’s plan, CIM’s milestones have been extended an additional 5 years because of operational difficulties. But the plan does not describe any interim approaches to achieve the objectives of the CIM strategy and continue furthering the goals and objectives of other priority strategies that depend on CIM to be successful. There are several interrelated strategies in DOD’s plan that depend on CIM for success, such as the joint battlefield distribution, the joint total asset visibility, and the in-transit visibility strategies. CIM migration systems are required to provide the communication links for transmitting logistics data to managers. For example, the joint battlefield distribution strategy, which is intended to improve delivery of supplies to fighting units, depends on improved battlefield communications and real-time asset information, which CIM was expected to provide, to be successful. Similarly, the joint total asset visibility strategy is ultimately dependent on CIM migration systems to help it provide timely, accurate information on the location and movement of personnel, equipment, and supplies. Likewise, in-transit visibility, intended to track the identity, status, and location of cargo in transit, is also dependent on both total asset visibility and CIM migration systems to communicate the data. Therefore, until CIM migration systems are fully implemented, these dependent strategies may experience considerable difficulty achieving their goals and objectives. These issues are in line with similar issues that we reported in September 1996 on problems with the development of materiel management systems, which are a part of the CIM initiative. In that report, we stated that DOD had made a major change in its materiel management migration system policy and that it did so before critical steps were taken that would help ensure good business decisions were made and that risks were minimized. We concluded that DOD (1) would likely deploy systems that will not be significantly better than those already in place and (2) could waste millions of dollars resolving problems that result from the failure to develop and implement a clear and cohesive strategy. We stated that, before proceeding with any new strategy, DOD needs to take the necessary steps to fully define its approach, plan for risks, ensure adequate oversight, and complete testing of new systems. We also recently reported that weaknesses in the materiel management information system strategy were evident in the migration information systems strategies for the depot maintenance and transportation business areas, putting even more millions of information technology investment dollars at risk. For example, for depot maintenance systems, we found that even if the migration effort was successfully implemented as envisioned, the planned depot maintenance standard system would not dramatically improve depot maintenance operations, principally because there were problems with the system that delayed reengineering efforts to make the improvements. For the transportation area, we found that had DOD followed its own regulations and calculated investment returns on its transportation migration selections, it would have found, based on data available when the systems were selected, that two of the systems would lose money. We concluded that many of these systems’ problems may have been prevented if DOD had employed a strategic information resources planning effort beforehand.
Such planning would have helped DOD focus on meeting the objectives intended to dramatically improve operations for these areas rather than incrementally improving them. Strategic planning for information resources is supported by the Clinger-Cohen Act of 1996, which Congress recently passed, in part, to provide for the cost-effective and timely acquisition, management, and use of effective information technology solutions. Moreover, strategic information resources planning is a critical step in the development of a strategic business plan, such as DOD’s logistics strategic plan. To build on DOD’s existing strategic planning efforts and to have a better chance of achieving the major logistics system improvements that its plan envisions, we recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense for Logistics to (1) ensure that future logistics plans include a recognition of the magnitude of the investment that is required to accomplish the plan’s goals, objectives, and strategies and (2) issue specific guidance to the Secretaries of the Army, the Navy, and the Air Force and the Director of DLA instructing the services and DLA on how to link their goals and budgets to the DOD logistics strategic plan’s overall goals and strategies. In its comments on a draft of this report, DOD generally agreed with the thrust of our report and partially concurred with our recommendations. DOD stated that the resourcing process for the logistics strategic plan needs to be strengthened, but it believed that it is presently impractical to include in the plan itself the magnitude of the investment required to implement the plan’s goals, objectives, and strategies. DOD stated that it relies on PPBS to cost and resource the plan’s priority strategies. It pointed out that, under the current PPBS process, the magnitude of the investment in the plan’s various alternatives is generally not known until the PPBS process is completed, long after the plan is issued. DOD acknowledged, however, that resourcing the plan’s requirements through PPBS may not be the best way to ensure its accomplishment but that, presently, there is not a better alternative. DOD’s explanation has some merit, and we recognize the inherent difficulties it faces in identifying investment requirements for the plan that must compete with other requirements for scarce resources. However, we believe that future plans still need to recognize that trade-offs between and among the priority strategies must be made from time to time, often necessitating a reevaluation of the financial resources that are currently needed or will be available to fund them. In this regard, starting next year, GPRA will require federal agencies, including DOD, to develop plans that link activities and resources to goals. However, for purposes of the logistics strategic plan itself, we have revised our recommendation to encourage DOD to include a recognition of the difficulties involved in making these financial trade-offs in its plan, and DOD agreed to ensure that the next edition of the plan includes language to that effect. DOD also stated that it will ensure the next edition of the plan includes specific guidance to the military services and DLA on linking their plans to DOD’s plan. DOD pointed out that the Deputy Under Secretary of Defense for Logistics cannot issue specific budget guidance on which DOD requirements will or will not be funded. 
Although we agree, we did not intend our recommendation to be interpreted as a budget-driven issue; rather, we are trying to alert DOD to one of the principles of effective strategic planning that will be strongly encouraged under GPRA. That is, DOD will need to ensure that lower-level units focus their efforts on supporting the goals of the next level and, ultimately, DOD’s overall corporate goals. Without this basic tenet, organizations like DOD that do not make sufficient progress toward achieving their goals may know neither why the goals were not met nor what changes are needed to improve performance. DOD’s comments are included in appendix I. We reviewed and analyzed DOD’s logistics strategic plan to determine whether it was characteristic of generally accepted models that focused on the process of strategic planning. In analyzing DOD’s plan, we applied fundamental strategic planning practices identified from our review of literature on this topic, including our prior reports that addressed the implementation of strategic management processes in government agencies. We also discussed the adequacy of DOD’s logistics strategic planning efforts with a consultant (a retired high-ranking military officer) who is on our Logistics Advisory Panel. In addition, we reviewed selected services’ logistics strategic plans to determine the extent to which their individual goals and objectives matched those contained in DOD’s plan. We selected six top-priority strategies contained in the plan to assess how well DOD was carrying out the plan’s goals, objectives, and strategies—total asset visibility, CIM, mobility requirements study bottom-up review, battlefield distribution, CALS, and in-transit visibility strategies. Specifically, we spoke to officials responsible for developing the plan and monitoring its implementation. We also reviewed pertinent documents such as implementation plans, charters, status reports, and other related information. We did not independently verify the accuracy of this information. In conducting our review, we held discussions with officials in the Office of the Deputy Under Secretary of Defense for Logistics (Materiel and Distribution Management); the Office of the Under Secretary of Defense for Acquisition and Technology; the Office of the Joint Chiefs of Staff; the Offices of the Air Force and Army Deputy Chiefs of Staff for Logistics, Washington, D.C.; DLA, Fort Belvoir, Virginia; the U.S. Transportation Command, Scott Air Force Base, Illinois; and the Office of the Chief of Naval Operations and the Naval Supply Systems Command, Arlington, Virginia. We conducted our review from September 1995 through July 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the appropriate congressional committees; the Secretaries of the Air Force, the Army, and the Navy; the Directors of DLA and the Office of Management and Budget; and other interested parties. We will make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. The major contributors to this report, Frank R. Marsh, Evaluator-in-Charge, and Sandra D. Epps, Evaluator, are listed in appendix II. 
GAO reviewed the Department of Defense's (DOD) logistics strategic plan to identify opportunities for increasing the likelihood of implementing the plan's goals and objectives successfully. GAO found that: (1) DOD's plan gives direction to improvements that are needed to reduce the costs of its logistics system (i.e., reducing logistics cycle times, developing a seamless logistics system, and streamlining the logistics infrastructure) and lays out specific objectives and strategies to produce these improvements; (2) DOD could build on its plan and increase the likelihood of implementing its goals and objectives successfully, as well as be better prepared for implementing the requirements of the Government Performance and Results Act of 1993, if the plan: (a) linked its action plans to resources so that both DOD managers and Congress can make more informed decisions on the value and priority of logistics system improvements; (b) better linked the services' and the Defense Logistics Agency's (DLA) plans to DOD's plan; and (c) identified interim approaches that can be developed and implemented when milestones of a priority strategy, aimed at achieving the plan's overall goals and objectives, have been extended; and (3) DOD's success in bringing these elements together hinges on its top-level managers' continued and visible support of efforts to remove institutional cultural barriers.
Long-term fiscal simulations by GAO, CBO, and others all show that despite some modest improvement in near-term deficits, we face large and growing structural deficits driven primarily by rising health care costs and known demographic trends. In fact, the long-term fiscal challenge is largely a health care challenge. Although Social Security is important because of its size, the real driver is health care spending. It is both large and projected to grow more rapidly in the future. GAO’s current long-term simulations show ever-larger deficits resulting in a federal debt burden that ultimately spirals out of control. Figure 1 shows two alternative fiscal paths. The first is “Baseline extended,” which extends the CBO’s baseline estimates beyond the 10-year projection period, and the second is an alternative based on recent trends and policy preferences. Our alternative scenario assumes action to return to and remain at historical levels of revenue and reflects somewhat higher discretionary spending and more realistic Medicare estimates for physician payments than does the baseline extended scenario. Although the timing of deficits and the resulting debt buildup varies depending on the assumptions used, both simulations show that we are on an unsustainable fiscal path. The bottom line is that the nation’s longer-term fiscal outlook is daunting under any realistic policy scenario or assumptions. Continuing on this unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. Our current path also increasingly will constrain our ability to address emerging and unexpected budgetary needs and increase the burdens that will be faced by future generations. As I noted earlier, despite some recent improvements in short-term deficits, the long-term outlook is moving in the wrong direction. Figures 2 and 3 illustrate just how much worse the situation has become. Both figures show the potential fiscal outcome under our “Baseline extended” scenario. Figure 2 shows the fiscal outlook in 2001, and figure 3 shows the outlook now. The contrast is dramatic. Even with the surpluses of 2001, we had a long-term problem, but it was more than 40 years out. Although an economic slowdown and decisions driven by the attacks of 9/11 and the need to respond to natural disasters have contributed to the change in outlook, they do not account for the dramatic worsening in the long-term outlook since 2001. Subsequent tax cuts and the passage of the Medicare prescription drug benefit in 2003 were major factors. Figure 3 illustrates today’s cold, hard truth: that neither slowing the growth in discretionary spending nor allowing the tax provisions to expire—nor both together—would eliminate the longer-term imbalance. This is even clearer under our alternative scenario based on recent trends and policy preferences (see fig. 4). Growth in the major entitlement programs—primarily health spending—results in an unsustainable fiscal future regardless of whether one assumes future revenue will be somewhat above historical levels as a share of the economy as in the first simulation (fig. 3) or at historical levels as shown in figure 4. Rapidly rising health care costs are not simply a federal budget problem; they are our nation’s number one fiscal challenge. 
Just last week, GAO released the results of our latest fiscal modeling efforts showing that state and local governments—absent policy changes—will also face large and growing fiscal challenges beginning within the next few years. As is true for the federal budget, growth in health-related spending—Medicaid and health insurance for state and local employees and retirees—is the primary driver of the fiscal challenges facing state and local governments. In short, the fundamental fiscal challenges of all levels of government are similar and linked. Further, escalating health care costs are also a major competitiveness challenge for American businesses and a growing challenge for many Americans. As such, solutions to address these challenges should be considered in a strategic and integrated manner. The longer-term fiscal challenge we face is not solely a federal one—it is a national one. Figure 5 shows both the federal fiscal path and the fiscal path for the whole of government. There often seems to be an imbalance between the focus of press coverage and public debate and what drives the longer-term outlook. Reporting and debate are often focused on what the Budget Enforcement Act (BEA) called discretionary—the one-third of the budget that goes through the annual appropriation process: Is funding for specific programs being cut or increased? Is “too much” or “too little” being spent in a given area? I would be the last person to say this isn’t important. Much of what the American people think of as government is contained in that part of the budget. Further, as I have said before, I believe that reexamining “the base” is something that should be done periodically regardless of fiscal condition—all of us have a stewardship obligation over taxpayer funds. We have programs still in existence today that were designed 20 or more years ago—and the world has changed. However, I would suggest that as constraints on discretionary spending continue to tighten, the need to reexamine existing programs and activities becomes greater. Certainly controlling discretionary spending is important, but—as everyone in this room knows—even with the large costs associated with the “Global War on Terrorism” and Iraq, discretionary spending is not the part of the budget that drives the long-term fiscal imbalance. As figure 6 shows, mandatory programmatic spending—that is, mandatory spending excluding interest—has grown from 27 percent of the federal budget in 1965—the year Medicare was created—to 42 percent in 1985 to 53 percent last year. Total mandatory spending, including net interest, has grown from 34 percent in 1965 to 62 percent last year. Both the CBO baseline estimates and the President’s Budget proposal show this spending growing even further. This growth—in particular rising health care spending—will have significant implications not only for the budget, but also for the economy as a whole. Figure 7 shows the total future draw on the economy represented by Social Security, Medicare, and Medicaid. Under the 2007 Trustees’ intermediate estimates and CBO’s 2005 midrange Medicaid estimates, spending for these entitlement programs combined will grow to over 15 percent of GDP in 2030 from today’s 8.9 percent. Taken together, it is clear that Social Security, Medicare, and Medicaid represent an unsustainable burden on the federal budget, our economy, and future generations. Ultimately, the nation will have to decide what level of benefits and spending it wants and how it will pay for these benefits. 
Although these three programs dominate the long-term outlook, they are not the only federal programs or activities that bind the future. The federal government undertakes a wide range of responsibilities, programs, and activities that may either obligate the government to future spending or create an expectation for such spending. Part of what we owe the future is leaving enough flexibility to meet whatever challenges arise. So beyond dealing with the “big 3,” we need to look at other policies that limit that flexibility—not to eliminate all of them but to at least be aware of them and make a conscious decision to reform them in a manner that will be responsible, equitable, and sustainable. GAO has described the range and measurement of such fiscal exposures—from explicit liabilities such as environmental cleanup requirements to the more implicit obligations presented by life-cycle costs of capital acquisition or disaster assistance. Last year the U.S. government’s major reported liabilities, social insurance commitments, and other fiscal exposures continued to grow. They now total approximately $50 trillion—about four times the nation’s total output (GDP) in fiscal year 2006—up from about $20 trillion, or two times GDP, in fiscal year 2000. Absent meaningful reforms, these amounts will continue to grow every second of every minute of every day due to continuing deficits, known demographic trends, and compounding interest costs. While it is hard to make sense of what “trillions” means, one way to think of these numbers is that if we wanted to put aside today enough to cover these promises, it would take $170,000 for each and every American, including newborns, or approximately $440,000 per American household. Considering that median household income is about $46,000, the household burden is about 9.5 times median income. Just two weeks ago the Office of Management and Budget released its mid-session budget update—showing further improvement in this year’s budget deficit. This “good news,” however, did not signal any improvement in the long-term outlook. The problem isn’t this year’s deficit—or even the deficit in 2012. The problem is that we are on an imprudent and unsustainable path. When I appeared before this Committee in January, I noted that I have previously urged a restoration of the statutory budget controls—including meaningful caps on discretionary spending and PAYGO on both the tax and spending sides of the ledger. Given the focus of this hearing, let me elaborate. BEA—of which PAYGO was a part—had a number of strengths its predecessor, Gramm-Rudman-Hollings, lacked. Consistent with good practice in designing incentives, it focused on what Congress and the administration could control—spending and tax decisions—rather than on outcomes driven by external changes. In addition, enforcement was targeted—further encouraging compliance with the discretionary caps and PAYGO rules. There is broad consensus among observers and analysts who focus on the budget that the controls contained in the Budget Enforcement Act constrained spending for much of the 1990s. However, since the BEA was focused on deficit reduction, its effectiveness deteriorated with the achievement of near-term surpluses. Although the BEA statutory PAYGO rules were extended twice, they expired in 2002. Earlier this year, both the Senate and the House adopted rules reinstating PAYGO discipline on both sides of the ledger. Then why should we consider restoration of statutory PAYGO? 
The obvious answer ties to enforcement and duration: it may be easier to waive a rule than ignore a law, and a law can carry a penalty designed to encourage compliance. I will defer to Director Orszag and some of the technical experts on the next panel as to the details of how any sequester or enforcement mechanism should be designed. However, I will note that it should be unpleasant enough to encourage compliance but not so draconian as to be implausible. The goal of any penalty should be to encourage compliance, not to encourage avoidance or merely impose the penalty. As I have said before, when you are in a hole, the first thing to do is stop digging. Discretionary caps and PAYGO are designed to stop the digging. There are two reasons to impose PAYGO on both the direct spending and the revenue sides of the budget. The first is obvious—both affect the bottom line. The second—and perhaps as important—is that applying PAYGO only to spending is likely to lead to more programs being designed as tax preferences. Tax preferences are like a form of back-door spending. As a result, they need to be subject to additional transparency and controls as well. We have previously reported on these tax expenditures, which are often aimed at policy goals similar to those of federal spending programs. Revenues forgone through tax expenditures—unless offset by increased taxes or lower spending—increase the unified budget deficit and federal borrowing from the public (or reduce the unified budget surplus available to reduce debt held by the public). Unlike discretionary spending programs, which are subject to periodic reauthorization and annual appropriation, tax expenditures are—like entitlement programs—permanent law and generally not subject to a recurring legislative process that would ensure systematic annual or periodic review. BEA’s statutory PAYGO regime applied to both mandatory spending and revenues—and so limited the ability to create or expand either spending entitlements or tax expenditures unless offsetting funds could be raised. Since tax provisions are not as visible in the budget as spending programs, there is already some incentive to use tax provisions rather than spending programs to accomplish programmatic ends; imposing controls on spending programs but not on tax provisions would only increase this incentive. It would be an unfortunate consequence if the restoration of the PAYGO rule were to lead to an increase in the portion of the budget on automatic pilot and therefore reduce both transparency and control. The PAYGO requirement prevented legislation that lowered revenue, created new mandatory programs, or otherwise increased direct spending from increasing the deficit unless offset by other legislative actions. While PAYGO constrained the creation or legislative expansion of direct spending programs and tax cuts, it accepted the existing provisions of law as given. It was not designed to trigger—and it did not trigger—any examination of “the base.” Furthermore, cost increases in existing mandatory programs were exempt from control under PAYGO and could be ignored. However, constraining legislative actions that increase the cost of entitlements, mandatories, and tax expenditures is not enough. Looking ahead, the budget process will need to go beyond limiting expansions. Existing programs cannot be permitted to be on autopilot and grow to an unlimited extent. 
Since the spending for any given entitlement or other mandatory program is a function of the interaction between eligibility rules and the benefit formula—either or both of which may incorporate exogenous factors such as economic downturns—the way to change the path of spending for any of these programs is to change their rules or formulas. In January of last year, we issued a report on “triggers”—some measure that, when reached or exceeded, would prompt a response connected to that program. By identifying significant increases in the spending path of a mandatory program relatively early and acting to constrain it, Congress may avert much larger and potentially disruptive financial challenges and program changes in the future. A similar approach could be applied to those tax expenditures that operate in many ways like mandatory spending programs. Some years ago, Mr. Chairman, you had suggested a kind of “look back” trigger—a requirement that the President and the Congress monitor the path of existing entitlements and make an explicit determination about whether to accept growing costs or to take action to change the path. I know it comes as no surprise to anyone in this room that I believe we need to increase the understanding of and focus on the longer term in our policy and budget debates. When I was here in January, I spoke about some ideas I had been discussing with a number of Members of the House and Senate as well as other interested and concerned citizens and groups. Since then—at the request of some Members—I have had those ideas put into legislative language as a basis for discussion. Today I’d like to elaborate a little on some of those ideas. They fall into three broad categories: increased information and reporting by the executive branch—both in the President’s budget proposal and in other statements for the public; more information for the Congress; and an annual GAO report. I will discuss each in turn. A summary of the proposal appears in appendix I. I. Executive Branch Reporting & Information A. Increased Information in the President’s Budget Proposals Annual Report on Fiscal Exposures: The transparency of existing commitments would be improved by requiring OMB to report annually on existing fiscal exposures—including a concise list, description, and cost information. As I noted before, these exposures range from explicit liabilities to implicit promises embedded in the structure of current programs. This should be provided as supplementary information in the President’s budget along with information on the long-term costs of major tax expenditures. As appropriate and possible, showing tax expenditures, related spending programs, and related credit programs that address the same policy area would facilitate oversight and reexamination by the Congress. Information over a longer time horizon: (1) The President’s budget should include an estimate of the impact of any major spending or tax proposals on these fiscal exposures and on the long-term fiscal outlook; (2) The budget should provide year-by-year data for 10 fiscal years rather than the current 5; and (3) The President’s budget should include a statement of his budgetary goals for the next decade. B. Executive Branch Reporting and Information—Summary Annual Report and Statement of Fiscal Sustainability Summary Annual Report: One of the things I am proudest of from my tenure as a public trustee for Social Security and Medicare is the creation of a Summary Report to accompany the annual Trustees report. 
This summary report presents key information in a way more accessible to the press and lay reader. I believe it has contributed to improved understanding about the condition of these programs. As the Comptroller General, I sign the audit report on the Consolidated Financial Statements of the U.S. Government (CFS). Despite the fact that we must disclaim our opinion on the statements, I believe they contain important information. The report is, however, too thick and very hard to read. I believe the Department of the Treasury (Treasury) should publish a summary financial report derived from the information in the audited CFS and the Comptroller General’s audit report on it within 15 days of the issuance of that audit report. Every four years, the Treasury should do more—it should prepare and publish a fiscal sustainability report that includes information on, and an assessment of, the long-term fiscal sustainability of our current spending and revenue path. A number of other Organization for Economic Co-operation and Development (OECD) countries have begun to produce fiscal sustainability reports as a way of looking ahead. Such a report permits the public and policymakers to look at the full range of government commitments rather than focusing only on new proposals. II. Additional Information for the Congress If Congress is to balance short-term claims and long-term costs, it must have information about the long-term cost implications of proposals that would result in a significant increase or decrease in revenues or spending. I recognize that estimates over a multidecade period cannot be as precise as short-term estimates and that some programs are harder to cost out than Social Security. However, information about the path should be made available. For example, do costs double every decade? As the independent auditor of the federal government’s Consolidated Financial Statements and an agency of the legislative branch without a day-to-day responsibility in the budget process, GAO is, I believe, in an excellent position to pull together periodic financial and fiscal information in a summary report similar to the fiscal stewardship report I issued January 31 of this year. If Congress does impose additional transparency requirements on the Executive Branch, then we are in a good position to review how those requirements were implemented and to suggest what changes, if any, might be made. I think we all know that there is no easy way out of the large and growing longer-term fiscal challenge we face. Economic growth is essential, but we cannot grow our way out of the problem. Based on reasonable assumptions, the math does not come close to working. I have said that the first thing to do is stop digging—and the restoration of credible discretionary caps and PAYGO on both the spending and tax sides of the ledger can help with that. Important as they are, however, they are not enough. Fundamental reform of existing entitlement programs will be necessary to change the path of those programs. The fact that the long-term outlook is driven primarily by health care costs does not mean that the rest of the budget should be exempt from scrutiny. We have the opportunity to bring our government and its programs in line with 21st century realities. Those who believe we can solve this problem solely by cutting spending or raising taxes are not being realistic. 
The truth is we will also need to reform entitlement programs, re-prioritize and re-engineer other direct spending programs, and engage in comprehensive tax reform that generates additional revenue as a percent of the economy (compared to current and historical levels) in order to get the job done. I have long believed that the American people can accept difficult decisions as long as they understand why such choices are necessary. They need to be given the facts about the fiscal outlook: what it is, what drives it, and what it will take to address it. As most of you know, I have been investing a good deal of time in the Fiscal Wake-Up Tour (FWUT) led by the Concord Coalition. Scholars from both the Brookings Institution and the Heritage Foundation join with me and key Concord officials in laying out the facts and discussing the possible ways forward. In our experience, having these people with quite different policy views agree on the nature, scale, and importance of the issue—and on the need to sit down and work together to develop a multi-dimensional solution to our longer-term fiscal challenge—resonates with the audiences. The specific policy choices made to address this fiscal challenge are the purview of elected officials. The policy debate will reflect differing views of the role of government and differing priorities for our country. What the FWUT can do—and what I will continue to do—is lay out the facts, debunk various myths, discuss possible options, and prepare the way for tough choices by elected officials. If the American people understand that there is no magic bullet—if they understand that we cannot grow our way out of this problem; eliminating earmarks will not solve the problem; wiping out fraud, waste, and abuse will not solve the problem; ending the “Global War on Terrorism,” exiting from Iraq, or cutting way back on defense will not solve the problem; and letting the recent tax cuts expire will not solve this problem; then they can engage with you in a discussion about what government should do; how it should do it; and how we should pay for it without unduly mortgaging the future of our country, children, and grandchildren. This is a great nation, probably the greatest in history. We have faced many challenges in the past and we have met them. It is a mistake to underestimate the commitment of the American people to their country, children, and grandchildren; to underestimate their willingness and ability to hear the truth and support the decisions necessary to deal with this challenge. We owe it to our country, children, and grandchildren to address our fiscal and other key sustainability challenges. The clock is ticking, and time is working against us. The time for action is now. Chairman Spratt, Mr. Ryan, Members of the Committee, let me repeat my appreciation for your commitment and concern in this matter. We at GAO stand ready to assist you in this important effort. My remarks are based largely on previous reports and testimonies, such as Long-Term Budget Outlook: Deficits Matter—Saving Our Future Requires Tough Choices Today (GAO-07-389T) and Budget Process: Better Transparency, Controls, Triggers, and Default Mechanisms Would Help to Address Our Large and Growing Long-term Fiscal Challenge (GAO-06-761T). We updated these testimonies with the results from our most recent long-term simulations in The Nation’s Long-Term Fiscal Outlook: April 2007 Update (GAO-07-983R). 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information on this testimony, please contact Susan J. Irving at (202) 512-9142 or [email protected]. Individuals making key contributions to this testimony include Jay McTigue, Assistant Director; Matthew Mohning, Senior Analyst; and Melissa Wolf, Senior Analyst. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony relates to the broader question: How should we deal with our nation's long-term fiscal challenge in order to help ensure that our future is better than our past? This testimony will start with our longer-term fiscal challenge. Then it will turn to the process question you present at this hearing: the reimposition of a statutory PAYGO rule(s) as a step toward dealing with this challenge. Finally, it will talk about moving beyond caps and PAYGO to some ideas on how improved transparency and process changes can help in the effort to put us on a more prudent and sustainable long-term fiscal path. As widely reported earlier this month, the Administration now expects the deficit for fiscal year 2007 to be $205 billion, down from its February estimate of $244 billion and last year's deficit of $248 billion. However, because these numbers include the Social Security surpluses, they mask what GAO likes to call the "operating deficit," now estimated to be $385 billion for fiscal year 2007. Clearly lower short-term deficits are better than higher short-term deficits. However, our real challenge is not short-term deficits; rather, it is the long-term structural deficits and related debt burdens that could swamp our ship of state if we do not get serious soon. Specifically, while our near-term fiscal picture is better, our long-term fiscal outlook is not. Health care costs are still growing faster than the economy, and the population is still aging. Indeed, what we call the long-term fiscal challenge is not in the distant future. The first baby boomers become eligible for early retirement under Social Security on January 1, 2008—less than 1 year from now—and for Medicare benefits in 2011—just 3 years later. The budget and economic implications of the baby boom generation's retirement have already become a factor in the Congressional Budget Office's (CBO) 10-year baseline projections and will only intensify as the baby boomers age. Simply put, our nation is on an imprudent and unsustainable long-term fiscal path that is getting worse with the passage of time. Herbert Stein once said that something that is not sustainable will stop. That, however, should not give us comfort. Clearly, it is more prudent to change the path than to wait until a crisis occurs. While restraint in the near term and efforts to balance the budget over the next 5 years can be positive, they are not enough. It is also important that we take steps to address our longer-term fiscal imbalance. The real problem is not the near-term deficit—it is the long-term fiscal outlook. It is important to look beyond year 5 or even year 10. Both the budget and the budget process need more transparency about, and focus on, the long-term implications of current and proposed spending and tax policies. In this testimony, GAO will suggest a number of things that it believes will help in this area. These remarks are based on our previous work on a variety of issues, including reports and testimonies on our nation's long-term fiscal challenges and budget process reform. These efforts were conducted in accordance with generally accepted government auditing standards. 
The CFATS program is intended to secure the nation’s chemical infrastructure by identifying and protecting high-risk chemical facilities. Section 550 of the DHS appropriations act for fiscal year 2007 requires DHS to issue regulations establishing risk-based performance standards for security of facilities that the Secretary determines to present high levels of security risk. The CFATS rule was published in April 2007, and Appendix A to the rule, published in November 2007, listed 322 chemicals of interest and the screening threshold quantity for each. According to the CFATS rule, any facility that possesses (or later comes into possession of) any of these chemicals in quantities that meet or exceed the threshold is required to submit certain information to DHS for screening. According to the rule, if DHS preliminarily determines that a facility is high risk—that is, the facility presents a high risk of significant adverse consequences for human life or health, national security, or critical economic assets if subjected to terrorist attack, compromise, infiltration, or exploitation—the facility must submit a security vulnerability assessment to DHS that identifies security vulnerabilities at the site, among other things. After reviewing the security vulnerability assessment, DHS then makes a final decision as to whether the facility is high risk and, if so, assigns the facility to a final tier. The rule then requires facilities that have been finally determined to be high risk to develop and submit for DHS approval site security plans that generally show how they are to address the vulnerabilities identified in the vulnerability assessment, including measures that satisfy applicable risk-based performance standards. In addition, the rule requires that DHS implement a compliance inspection process to ensure that covered facilities are satisfying DHS’s performance standards consistent with their approved site security plans. ISCD has direct responsibility for implementing DHS’s CFATS rule, including assessing potential risks and identifying high-risk chemical facilities, promoting effective security planning, and ensuring that final high-risk facilities meet the applicable risk-based performance standards through site security plans approved by DHS. ISCD is managed by a Director and a Deputy Director and operates five branches that are responsible for, among other things, information technology operations; policy and planning; compliance and technical support; facility inspections and enforcement of CFATS regulatory standards; and logistics, administration, and chemical security training. ISCD receives business support from NPPD and IP for services related to human capital management and training, budget and finance, and acquisitions and procurement. Figure 1 shows ISCD’s current organizational structure within NPPD and IP. Appendix II provides a more detailed organization chart showing the various ISCD divisions. From fiscal years 2007 through 2012, DHS dedicated about $442 million to the CFATS program. During fiscal year 2012, ISCD was authorized 242 full-time-equivalent positions. For fiscal year 2013, DHS’s budget request for the CFATS program was $75 million and 242 positions. DHS’s CFATS rule outlines a specific process for administering the program. 
Any chemical facility that possesses any of the 322 chemicals in the quantities that meet or exceed the threshold quantity outlined in the rule is required to complete an initial screening tool (referred to by DHS as the Top Screen) whereby the facility provides DHS various data, including the name and location of the facility and the chemicals and their quantities at the site. DHS is to use this information to initially determine whether the facility is high risk. If so, DHS is to notify the facility of its preliminary placement in one of four risk-based tiers—tier 1, 2, 3, or 4. Facilities preliminarily placed in any one of these tiers are considered to be high risk, with tier 1 facilities considered to be the highest risk. Facilities that DHS initially determines to be high risk are required to complete a security vulnerability assessment, which includes the identification of potential critical assets at the facility and a related vulnerability analysis. DHS is to then review the security vulnerability assessment and notify the facility of DHS’s final determination as to whether or not it is considered high risk, and if the facility is determined to be a high-risk facility about its final placement in one of the four tiers. Once this occurs, the facility is required to submit a site security plan or participate in an alternative security program in lieu of a site security plan. The security plan is to describe the security measures to be taken to address the vulnerabilities identified in the vulnerability assessment, and identify and describe how security measures selected by the facility will address the applicable risk-based performance standards. DHS then is to do a preliminary review of the security plan to determine whether it meets the regulatory requirements. If these requirements appear to be satisfied, DHS issues a letter of authorization for the facility’s plan. DHS then conducts an authorization inspection of the facility and subsequently determines whether to approve the security plan. If DHS determines that the plan does not satisfy CFATS requirements (based on its preliminary review after an authorization inspection), DHS then notifies the facility of any deficiencies and the facility must submit a revised plan correcting those deficiencies. If the facility fails to correct the deficiencies, DHS may then disapprove the plan. Following approval, DHS may conduct further inspections to determine if the facility is in compliance with its approved security plan. Figure 2 illustrates the CFATS regulatory process. In July 2007, DHS began reviewing information submitted by approximately 40,000 facilities. By January 2012, DHS had preliminarily determined that approximately 4,500 of these facilities were high risk and preliminarily placed each in one of the four tiers. Each of these approximately 4,500 facilities was to complete a security vulnerability assessment, and those facilities that DHS finally determined to be high risk were to submit a site security plan. According to ISCD officials, the vulnerability assessment process prompted over 1,600 facilities to remove chemicals of interest from their sites, thereby enhancing their security posture and removing them from CFATS coverage. 
Also, according to division officials, as of February 2012, ISCD had worked with facilities to complete 925 compliance assistance visits, whereby division inspectors visit high-risk facilities, particularly those that were in the process of preparing their security plans, to provide knowledge of and assistance in complying with CFATS. Our review of the ISCD memorandum and discussions with ISCD officials showed that the memorandum was developed during the latter part of 2011, based primarily on discussions with ISCD staff and the observations of the ISCD Director in consultation with the Deputy Director. In July 2011, a new Director and Deputy Director were appointed to lead ISCD and, at the direction of NPPD’s Under Secretary, began a review of the CFATS program goals, challenges, and potential corrective actions. In November 2011, the Director and Deputy Director provided the Under Secretary with the ISCD memorandum entitled “Challenges Facing ISCD, and the Path Forward.” These officials stated that the memorandum was developed to inform leadership about the status of ISCD, the challenges it was facing, and the proposed solutions identified to date. In transmitting a copy of the memorandum to congressional stakeholders following the leak in December 2011, the NPPD Under Secretary discussed caveats about the memorandum. He stated that the memorandum was not a formal compliance audit or program review and that in several instances it lacked useful, clarifying context. He stated that the ISCD memorandum was not intended for wider internal or external dissemination beyond the Under Secretary’s office. He further explained that it had not undergone the normal review process by DHS’s Executive Secretariat and contained opinions and conclusions that did not reflect the position of DHS. He also noted that the memorandum did not discuss the “significant progress” ISCD had made to date in reaching out to facilities of concern to improve their security posture. For example, senior division officials told us that the memorandum did not note the positive impact of ISCD’s initial screening of facilities, which resulted in many facilities reducing their holdings of regulated materials so that they would no longer be subject to the rule. The ISCD Director confirmed that she was the primary author of the ISCD memorandum, in consultation with the Deputy Director, and said that the memorandum was intended to be used as an internal management tool. The Director stated that when she was brought on board, the Under Secretary tasked her to look at CFATS from an outsider’s perspective and identify her thoughts on the program relative to other regulatory regimes, particularly in light of growing concerns about possible human capital issues and problems tiering chemical facilities covered by CFATS. She confirmed that the memo was intended to begin a dialogue about the program and the challenges it faced. The Director also confirmed that she developed the memorandum by (1) surveying division staff to obtain their opinions on program strengths, challenges, and recommendations for improvement; (2) observing CFATS program operations, including the security plan review process; and (3) analyzing an internal DHS report on CFATS operations, which, according to the Director, served as a basis for identifying some administrative challenges and corrective actions. 
The Director told us that senior ISCD officials, including branch chiefs, were given an opportunity to review an initial draft of the memorandum and provided feedback on the assumptions presented. ISCD branch chiefs—the officials responsible for taking corrective actions—confirmed that they were given the opportunity to provide comments on a draft of the memorandum. However, they said that after the leak, almost all of the senior ISCD officials, including branch chiefs, did not have access to the final memorandum, per the instruction of the Under Secretary for Management. The senior ISCD and NPPD officials we contacted said that they generally agreed with the material that they saw, but noted that they believed the memorandum was missing context and balance. For example, one NPPD official stated that the tone of the memorandum was too negative and that the problems it discussed were not supported by sound evaluation. The official expressed the view that the CFATS program is now on the right track. The ISCD memorandum discussed numerous challenges that, according to the Director, pose a risk to the program. The Director pointed out that, among other things, ISCD had not approved any site security plans or carried out any compliance inspections on regulated facilities. The Director attributed this to various management challenges, including a lack of planning, poor internal controls, and a workforce whose skills were inadequate to fulfill the program’s mission, and highlighted several challenges that affect the program’s progress. In addition, the memorandum provided a detailed discussion of the issues or problems facing ISCD. One group of issues focused on human capital management, problems the author categorized as team issues. According to the Director, these included issues arising out of poor staffing decisions; difficulty establishing a team culture that promotes professionalism, respect, and openness; a lack of measurable employee performance goals and unclear performance and conduct standards; and potential delays associated with notifying the ISCD inspector union about policies, procedures, and processes. A second group focused on mission issues, including what the author found to be the slow pace of the site security plan approval process; the lack of an established inspection process; ISCD’s inability to perform compliance inspections 5 1/2 years after enactment of the CFATS statute; and the lack of an established records management system to document key decisions. A third group focused on administrative issues, particularly those the Director regarded as a lack of infrastructure and support, both within ISCD and on the part of NPPD and IP. They included the aforementioned concern about over-reliance on contractors, insufficient and inconsistent support by NPPD and IP with regard to human capital needs—including support on the aforementioned staffing issues—and insufficient controls regarding the use of inspector vehicles, purchase cards, and travel. Additional details on the human capital, mission, and administrative issues identified in the ISCD memorandum are considered “for official use only.” ISCD is using an action plan to track its progress in addressing the challenges identified in the memorandum, and, according to senior division officials, the plan may be helping them address some legacy issues that staff were attempting to deal with before the memorandum was developed. 
As discussed earlier, the ISCD memorandum was accompanied by a proposed action plan that, according to the Director, was intended to provide proposed solutions to the challenges identified. The January 2012 version of that plan listed 91 actions to be taken, categorized by issue—human capital management issues, mission issues, or administrative issues—that, according to the ISCD Director, were developed to be consistent with the ISCD memorandum. Each action item also listed the coordinator (the individual or unit responsible for the action) and discussed the status of the action, including whether the item was complete or in progress. For example, in the human capital/staffing issues area, one action item was intended to engage ISCD leadership to develop an integration plan for newly hired employees. The IP Business Support Team, which is co-located with ISCD, was responsible for coordinating this action, and at the time the plan was prepared, the action was in progress. According to the plan, a 3-day ISCD 101 course had been developed, and a more comprehensive process for acclimating new employees to ISCD was under development. However, the January 2012 version of the action plan did not provide information on when the action was started or to be finished. In February 2012, ISCD developed a version of the action plan that included the same information as the January 2012 plan. However, it also included quarterly projected completion dates. Since then, the division’s action plan has evolved into a more detailed plan containing 94 items. Like the February 2012 plan, the March and June 2012 updated versions of the plan contained information on the coordinator, the action to be taken, and the status of each item. However, unlike the February 2012 version of the plan, the March and June versions provided detailed milestones and timelines for completing action items, including calendar dates and interim actions leading to completion—essentially a road map for managing each action item according to particular dates and milestones. This approach is consistent with The Standard for Program Management, which calls for organizations to develop plans with milestones and time frames to successfully manage programs and projects. Eleven of the 12 ISCD managers (those other than the Director and Deputy Director) assigned to work as the coordinators of the individual action items told us that even though they were not given the opportunity to view the final version of the ISCD memorandum, the Director provided them with the sections of the action plan for which they were responsible to help them develop and implement any corrective actions. They said that they agreed that the actions being taken in the plan were needed to resolve the challenges facing ISCD. Our discussions with these officials also showed that about 39 percent (37 of 94) of the items in the March and June 2012 action plans addressed some legacy issues that were previously identified and, according to these officials, corrective actions were already under way for all 37 of these action items. For example, one action item called for ISCD to maintain better relations with industry, Congress, and other key stakeholders. ISCD officials said that the ISCD Policy Branch had already begun working on this strategy prior to the development of the memorandum and action plan and that this strategy was given more attention and a higher priority because of the associated action item. 
An ISCD official expressed the view that the ISCD memorandum and action plan encouraged ISCD to address these and other items sooner than they otherwise might have been addressed. Our analysis of the June 2012 version of the ISCD action plan showed that 40 percent of the items in the plan (38 of 94) had been completed. The remaining 60 percent (56 of 94) were in progress. Our analysis of the 38 completed items showed that 32 of the 38 items were associated with human capital management and administrative issues, including those involving culture and human resources, contracting, and documentation. For example, one completed human capital management item called for ISCD to survey staff to obtain their opinions on program strengths and challenges, and recommendations for program improvements. According to the June 2012 action plan, the survey was completed, and the item was shown as completed on January 10, 2012. Another completed human capital action item—categorized by ISCD as a cultural issue—called for ISCD management to hold a series of meetings with employees to involve them in addressing program challenges, clarify program priorities related to its mission, and implement changes in ISCD culture. The June 2012 version of the action plan shows the item as completed on January 10, 2012, but notes that this activity will continue going forward. The remaining 6 of 38 action items categorized by ISCD as completed were associated with mission issues, such as 1 action item calling for ISCD to establish a quality control function for compliance and enforcement activities. According to ISCD’s action plan, this item was completed in April 2012, based on development of a proposal to form the quality control section within the division. Figure 3 shows the status of action items, as of June 2012, for each of the three categories: human capital management issues, mission issues, and administrative issues. Appendix III provides an overview of the items in the action plan and their status (completed or in progress) by issue (human capital management, mission issues, and administrative issues) and subcategory. Of the remaining 56 items that were in progress, 40 involved human capital management and administrative issues. According to ISCD officials, these 40 issues generally involved longer-term efforts—such as organizational realignment—or those that require approval or additional action on the part of IP or NPPD. For example, ISCD reported that there are 13 action items that are directly or indirectly associated with the division’s realignment efforts, including items that require approval by NPPD and IP. The overall realignment effort related to these action items is intended to address concerns, highlighted in the memorandum, that ISCD’s organizational structure was “stovepiped” and compartmentalized. The plan, which was in draft as of June 2012, would, according to officials, reorganize ISCD to “integrate more fully certain functions to enhance the collaborative nature of the work that needs to be performed” and would entail creating new offices, moving and integrating others, and centralizing some functions that are now dispersed throughout the division. In accordance with the affected action items, ISCD and a contractor developed several elements of the realignment plan for review, and ISCD was awaiting input or guidance from NPPD and IP before the associated action items could be completed. 
Sixteen of the 56 remaining action items in progress covered mission issues that will likely also require long-term efforts to address. For example, one of these mission-related action items entails the development of requirements for an information technology platform to support inspection activities. Another entails the development of plans to improve ISCD's site security plan review process. Regarding the latter, ISCD encountered delays approving security plans because, according to ISCD officials, the quality of the plans submitted was inconsistent and ISCD did not have dedicated staff with the skills needed to work with facilities to review and approve them. As noted in the ISCD memorandum, the site security plan review process was overly complicated, did not leverage available resources, and created bottlenecks, and clearing the backlog of security plans was ISCD's highest priority. To address these concerns, ISCD developed an interim review process to clear the backlog of tier 1 security plans with a goal of completing reviews of those plans by the end of the calendar year. ISCD began to track the action item intended to develop a plan for introducing a new security plan review process, which, according to the June 2012 action plan, is supposed to be completed in July 2012. The development of a new security plan review process may be critical to the effective implementation of the CFATS program. According to an ISCD official, compliance inspections cannot begin until ISCD reviews and approves a facility's site security plan. In March 2012, the official estimated that it could take at least 18 months for ISCD to complete its first compliance inspections. In commenting on our draft statement, ISCD officials stated that inspections for all of the approximately 4,500 tiered facilities could take several years, contingent upon available resources. Our analysis of the April and June versions of the plan shows that the division had extended the estimated completion dates for nearly half of the action items. For 52 percent of the items (48 of 93), the estimated completion date either did not change (37 items) or was earlier in the June 2012 plan than in the April 2012 version (11 items). Conversely, 48 percent (45 of 93) of the items in the June 2012 version of the plan had estimated completion dates that had been extended beyond the date in the April 2012 plan. For example, in the April 2012 plan, ISCD was to work with NPPD and IP on identifying job skills, the correct job series, and job descriptions, an action that was estimated to be completed in July 2012. However, the June 2012 plan shows that the completion date for this action item was extended to August 2012, more than 30 days beyond the date estimated in April 2012. Figure 4 shows the extent to which estimated completion dates for action plan items were moved earlier, did not change, or were extended, from April 2012 through June 2012, for the human capital management, mission, and administrative issues identified in the plan. ISCD officials told us that estimated completion dates have been extended for various reasons. They said that one reason for moving these dates was that the work required to address some items was not fully defined when the plan was first developed and, as the requirements were better defined, the estimated completion dates were revised and updated. 
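The comparison described above can be illustrated with a brief sketch that classifies items by whether their estimated completion dates were extended, unchanged, or moved earlier between two plan versions. The item names and dates below are hypothetical and are not taken from the April or June 2012 plans.

```python
from datetime import date

# Hypothetical estimated completion dates for a few action items in two plan
# versions; the item names and dates are illustrative, not ISCD's data.
april_plan = {
    "Identify job skills and job series": date(2012, 7, 31),
    "Revise site security plan review process": date(2012, 7, 15),
    "Draft human capital strategic plan": date(2012, 9, 30),
}
june_plan = {
    "Identify job skills and job series": date(2012, 8, 31),        # extended
    "Revise site security plan review process": date(2012, 7, 15),  # unchanged
    "Draft human capital strategic plan": date(2012, 9, 15),        # moved earlier
}

extended, unchanged, earlier = [], [], []
for item, april_date in april_plan.items():
    june_date = june_plan[item]
    if june_date > april_date:
        extended.append(item)
    elif june_date == april_date:
        unchanged.append(item)
    else:
        earlier.append(item)

total = len(april_plan)
print(f"Extended:  {len(extended)} of {total}")
print(f"Unchanged: {len(unchanged)} of {total}")
print(f"Earlier:   {len(earlier)} of {total}")
```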
In addition, ISCD officials stated that timelines for some action items have been adversely affected because staff have been reassigned from their regular duties to higher-priority responsibilities, such as efforts to reduce the backlog of security plans under review. ISCD officials also told us that some dates have been extended because the division is awaiting actions within ISCD or by NPPD or IP. ISCD, through its action plan, appears to be heading in the right direction toward addressing the challenges identified, but it is too early to tell if the action plan is having the desired effect because (1) the division has only recently completed some action items and continues to work on completing more than half of the others, some of which entail long-term changes, and (2) ISCD has not developed an approach for measuring the results of its efforts. ISCD officials told us that they had not yet begun to plan or develop any measures, metrics, or other documentation focused on measuring the impact of the action plan on overall CFATS implementation because they plan to wait until corrective action on all items has been completed before determining the impact of the plan on the CFATS program. For the near term, ISCD officials stated that they plan to assess at a high level the impact of the action plan on CFATS program implementation by comparing ISCD's performance rates and metrics pre-action plan implementation and post-action plan implementation. However, because ISCD will not be completing some action items until 2014, it will be difficult for ISCD officials to obtain a complete understanding of the impact of the plan on the program using this comparison alone. Now that ISCD has begun to take action to address the challenges identified, ISCD managers may be missing an opportunity to measure the effects or results of some of the actions taken thus far, particularly actions that are either in the early stages of implementation or are in the formative stages. Measuring results associated with particular action items would be consistent with Standards for Internal Control in the Federal Government, which calls for the establishment and review of performance measures and indicators to monitor activities, compare actual performance with planned or expected results throughout the organization, and analyze significant differences. We recognize that it might not be practical to establish performance measures for all action items; for example, 1 of the 94 items calls for ISCD to initiate the hiring process for an economist. However, other action items may be candidates for performance measurement because they focus on organizational changes or mission-related issues. For example, once ISCD gets approval to move forward with a plan to reorganize, it could develop interim plans and measures to monitor the progress of integrating various functions and use the information to identify barriers, if any, for completing this effort. Likewise, once ISCD makes the decision to revise its site security plan review process, it could develop measures for implementing those revisions and consider what measures might be appropriate for gauging its success in streamlining the process and completing security plan reviews. 
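As one hypothetical illustration of the kind of measure that might be appropriate, the sketch below compares average review times under a current and a revised process and reports the percentage reduction. The figures are invented for illustration and are not ISCD performance data.

```python
from statistics import mean

# Hypothetical review durations (in days) for security plan reviews completed
# under the current process and under a revised process; illustrative only.
current_process_days = [120, 95, 150, 110, 140]
revised_process_days = [60, 75, 55, 80, 70]

baseline = mean(current_process_days)
revised = mean(revised_process_days)
reduction = (baseline - revised) / baseline * 100

print(f"Average review time, current process: {baseline:.1f} days")
print(f"Average review time, revised process: {revised:.1f} days")
print(f"Reduction: {reduction:.1f} percent")
```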
By looking for opportunities to develop performance measures covering the various action items, ISCD managers would be better positioned to identify any gaps in their efforts to address the challenges and would have tools available to measure and monitor performance in the future. ISCD would also have a framework for providing continuity of operations when new managers or staff are hired, when managers move from position to position, or when the program changes. Furthermore, ISCD would be better equipped to inform stakeholders of its progress as the organization moves toward resolving the challenges identified in the ISCD memorandum. According to ISCD officials, almost half of the action items included in the June 2012 action plan either require ISCD to collaborate with NPPD and IP or require NPPD and IP to take action to address the challenges identified in the ISCD memorandum. NPPD, IP, and ISCD officials have been working together to identify solutions to the challenges the memorandum identified and to close pertinent action items. One of the issues identified in the ISCD memorandum was the level of NPPD and IP communication and support. According to ISCD officials, at the time the program was established, NPPD and IP communication and support were not adequate for the division to implement the CFATS program within the statutory time frame (6 months following the passage of the CFATS statute). Regarding the ISCD memorandum and the action plan, NPPD, IP, and ISCD officials have been working together to address these human capital and administrative challenges. According to division officials, 46 of the 94 action items included in the June 2012 action plan require either action by NPPD and IP or collaboration with NPPD and IP. This includes collaborating with NPPD officials representing the NPPD human capital, facilities, and employee and labor relations offices, among others, and with IP's Directorate of Management Office. As of June 2012, 13 of the 46 items that require action by or collaboration with NPPD or IP are complete; 33 of 46 are in progress. The completed items focused largely on human capital and administrative issues. For example, 1 completed item required ISCD leaders to establish regular meetings with NPPD and IP human capital officials to ensure better communication and visibility on human capital issues. Our discussions with ISCD and NPPD officials confirmed that this action item was closed because meetings covering human capital issues have begun and are held weekly on a recurring basis. NPPD, IP, and ISCD officials told us that one of the topics of discussion during the weekly meetings is the hiring of specialists so that the division has assurance that the CFATS review and inspection processes properly incorporate their expertise. According to these officials, hiring certain types of specialists is a difficult challenge given that ISCD is competing with other organizations, including organizations within DHS, for individuals who possess these specialized skills. These officials also stated that these weekly meetings provide NPPD, IP, and ISCD an opportunity to discuss human capital issues as they come up and ensure that the division's hiring process runs smoothly. 
To further assist with ISCD's hiring efforts, IP officials said that one IP human capital staff member is being co-located with the division with the intent that this staff member will help accelerate the hiring process and keep ISCD hiring on track. Another related action item required similar meetings between ISCD and NPPD's Office of Employee and Labor Relations to discuss union-related issues. This item was closed because NPPD staff members now meet weekly with ISCD senior leaders to discuss how the union operates, how ISCD should work with the union, and how to understand and properly address the division's obligations to the union. With regard to the 33 of 46 action items in progress that require collaboration with NPPD and IP, 23 require NPPD or IP to review and approve work completed by ISCD or to make policy decisions before the division can list the action item as complete. For example, 12 of the 33 action items involve ISCD's development of the aforementioned realignment plan. As of June 2012, ISCD had forwarded the realignment plan to NPPD and IP for review and was awaiting approval so that the plan could be forwarded to DHS for review and comment. Another action item requires ISCD to develop a human capital strategic plan. According to the June 2012 action plan, ISCD is waiting for NPPD to release its Human Capital Strategic Plan to finalize this action item and plans to use the guidance provided in the NPPD plan to develop an ISCD Strategic Human Capital Plan. ISCD continues to work on the remaining 10 of the 33 in-progress action items that require NPPD or IP action or division collaboration with NPPD and IP. According to the June 2012 action plan, completion of these action items is dependent upon ISCD staff completing an internal review of an ISCD-drafted set of standard operating procedures or memorandum, or an analysis of an existing ISCD procedure. Once ISCD finalizes these 10 action items, the outputs are to be forwarded to NPPD and IP for review, comment, and approval, where appropriate. Additional details on action items that require collaboration with or action by NPPD or IP are considered “for official use only.” ISCD has identified numerous challenges it has encountered implementing the CFATS program and has developed an action plan that is intended to help address these challenges. This appears to be a step in the right direction as officials continue their efforts to better manage the program and establish a viable process consistent with the statute and the CFATS rule. Because of the scope and breadth of the action plan and given that many of the action items were recently completed (38 of 94 action items) or are in progress (56 of 94 action items), it is too early to tell whether they will have the effect of helping ISCD overcome and resolve all the problems it has identified. However, ISCD, working with NPPD and IP, may be better positioned to understand and report on its progress by looking for opportunities to measure the effect of efforts to implement key action items, especially since many of the action items are either recently completed or in their formative stages. By developing performance measures, where practical, ISCD, IP, and NPPD would be better equipped to identify any gaps between actual and planned or expected results and take corrective action, where necessary, consistent with Standards for Internal Control in the Federal Government. 
Furthermore, ISCD, IP, and NPPD would be better positioned to report to key stakeholders, including Congress, on their progress in developing a viable CFATS program. To better ensure that DHS understands the effect of its actions as it moves forward with its efforts to address the challenges facing ISCD as it implements the CFATS program, we recommend that the Secretary of Homeland Security direct the Under Secretary for NPPD, the Assistant Secretary for IP, and the Director of ISCD, in conjunction with the development of ISCD's strategic plan, to look for opportunities, where practical, to measure results of their efforts to implement particular action items, and, where performance measures can be developed, to periodically monitor these measures and indicators to identify where corrective actions, if any, are needed. We provided a draft of this statement to the Secretary of Homeland Security for review and comment. The Deputy Under Secretary for NPPD and the Assistant Secretary for Infrastructure Protection provided oral comments on July 23, 2012, and stated that NPPD agreed with our recommendation. NPPD officials said that they intend to provide an updated action plan that includes a new action item to “develop metrics for measuring, where practical, results of efforts to implement action plan items, including processes for periodic monitoring and indicators for corrective actions.” The Deputy Under Secretary also noted that these new measures would be in addition to the program metrics NPPD uses to measure the overall progress of the CFATS program. DHS also provided technical comments, which we incorporated as appropriate. As agreed with your offices, we will continue to review the CFATS program and review ISCD's efforts to manage the mission aspects of the program. This will include ISCD efforts to determine chemical facility risk; manage the process used to assess vulnerabilities, review security plans, and perform inspections; and work with owners and operators of high-risk chemical facilities. We expect to report the results of these efforts early in 2013. Chairman Aderholt, Ranking Member Price, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For information about this statement, please contact Stephen L. Caldwell, Director, Homeland Security and Justice, at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals making key contributions include John F. Mortin, Assistant Director; Ellen Wolfe, Analyst-in-Charge; Charles Bausell; Jose Cardenas; Andrew M. Curry; Michele Fejfar; Tracey King; Marvin McGill; Mona E. Nichols-Blake; and Jessica Orr. This statement discusses how the internal Infrastructure Security Compliance Division's (ISCD) memorandum (the ISCD memorandum) was developed and what challenges were identified, what actions are being taken to address the challenges identified, and the extent to which ISCD's planned actions and proposed solutions require collaboration with the National Protection and Programs Directorate (NPPD) or the Office of Infrastructure Protection (IP). 
To determine how the ISCD memorandum was developed and the challenges outlined in the memorandum, we reviewed and analyzed the memorandum to determine the various Chemical Facility Anti-Terrorism Standards (CFATS) program challenges as identified by the memorandum's author—the ISCD Director, who prepared the memorandum in consultation with the Deputy Director. As a part of our analysis, we grouped the challenges into overarching categories—human capital management issues, mission issues, and administrative issues—and used the subcategories developed by the memorandum's author to summarize the types of challenges or problems described in the ISCD memorandum. We also interviewed 14 ISCD senior officials (including the ISCD Director and Deputy Director) to confirm our understanding of the challenges identified, determine how the memorandum was developed, and obtain ISCD officials' views on what may have created the CFATS program challenges. To determine what actions ISCD is taking to address the challenges identified in the memorandum, we analyzed and compared the various action plans that were prepared by ISCD senior officials between January 2012 and June 2012. We developed a list of the 94 action items included in the June plan and determined the status of each action item (completed or in progress), the extent to which the ISCD officials responsible for leading efforts for the action item agreed that the action item addressed an existing problem, and the extent to which the activities related to the action item were in progress prior to the ISCD memorandum's release. Where possible, we obtained and reviewed documentation (e.g., standard operating procedures and ISCD memos) relevant to each action item to verify that the reported status of the action item was accurate and to determine whether the work on the action item was in progress before the development and release of the ISCD memorandum. We also compared the results of our analysis of the action plans and our discussions with program officials with various criteria, including the CFATS law and regulations; Department of Homeland Security (DHS) policies, procedures, and reports; Standards for Internal Control in the Federal Government; and The Standard for Program Management. To determine the extent to which ISCD's planned actions and proposed solutions require collaboration with or action by NPPD or IP officials, we interviewed 11 NPPD and 9 IP officials, identified by ISCD, who are to work with ISCD to implement corrective actions. Using the results of these interviews and our analysis of the ISCD memorandum and action plan, we determined the extent, if any, to which collaboration among ISCD, NPPD, and IP is required to implement corrective actions. Where available, we obtained and reviewed NPPD, IP, and ISCD documentation (e.g., policies, standard operating procedures, and internal memos) relevant to each action item that requires NPPD or IP support or action in working with ISCD to overcome those challenges. We identified three limitations that should be considered when using our results. First, ISCD's memorandum is largely based on the efforts of the ISCD Director in consultation with the ISCD Deputy Director and may not be representative of the views of other senior officials within the CFATS program. Furthermore, the conclusions reached in the memorandum were not obtained by using a formal compliance audit or program review procedures, nor were the assumptions validated. 
Second, our results are based on the status of the action plan as of June 2012, so these results are valid only as of that date. Third, documentary evidence about the development of the CFATS program and the causes for the issues identified in the ISCD memorandum is, for the most part, not available. Program officials did not maintain records of key decisions and the basis for those decisions during the early years of the program. During preliminary discussions, members of the current management team acknowledged that much of their knowledge about program decisions during the early years of the program reflects their best guess of what happened and why. We conducted this performance audit from February 2012 to July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our analysis based on our audit objectives. This appendix describes the organizational structure used to manage the Chemical Facility Anti-Terrorism Standards program within the Infrastructure Security Compliance Division. ISCD has direct responsibility for implementing DHS's CFATS rule, including assessing high-risk chemical facilities, promoting collaborative security planning, and ensuring that covered facilities meet DHS's risk-based performance standards. ISCD is managed by a Director and a Deputy Director and operates five branches that are, among other things, responsible for information technology operations; policy and planning; providing compliance and technical support; inspecting facilities and enforcing CFATS regulatory standards; and managing logistics, administration, and chemical security training. ISCD receives business support from the National Protection and Programs Directorate and the Office of Infrastructure Protection for services related to human capital management and training, budget and finance, and acquisitions and procurement. Figure 5 shows the organizational structure of NPPD, IP, and ISCD. This appendix provides a summary of the status and progress of action items grouped by issue and subcategory. The Infrastructure Security Compliance Division is using an action plan to track its progress in addressing the challenges identified in the November 2011 ISCD memorandum prepared by the ISCD Director in consultation with the Deputy Director. The ISCD memorandum was accompanied by an action plan that, according to the authors of the memorandum, was intended to provide solutions to the challenges identified. Table 1 provides an overview of the items in the action plan and their status (completed or in progress) by issue (human capital management, mission issues, and administrative issues) and subcategory. Critical Infrastructure Protection: DHS Could Better Manage Security Surveys and Vulnerability Assessments. GAO-12-378. Washington, D.C.: May 31, 2012. Critical Infrastructure Protection: DHS Has Taken Action Designed to Identify and Address Overlaps and Gaps in Critical Infrastructure Security Activities. GAO-11-537R. Washington, D.C.: May 19, 2011. Critical Infrastructure Protection: DHS Efforts to Assess and Promote Resiliency Are Evolving but Program Management Could Be Strengthened. GAO-10-772. Washington, D.C.: September 23, 2010. 
Critical Infrastructure Protection: Update to National Infrastructure Protection Plan Includes Increased Emphasis on Risk Management and Resilience. GAO-10-296. Washington, D.C.: March 5, 2010. The Department of Homeland Security's (DHS) Critical Infrastructure Protection Cost-Benefit Report. GAO-09-654R. Washington, D.C.: June 26, 2009. Information Technology: Federal Laws, Regulations, and Mandatory Standards to Securing Private Sector Information Technology Systems and Data in Critical Infrastructure Sectors. GAO-08-1075R. Washington, D.C.: September 16, 2008. Risk Management: Strengthening the Use of Risk Management Principles in Homeland Security. GAO-08-904T. Washington, D.C.: June 25, 2008. Critical Infrastructure Protection: Sector Plans Complete and Sector Councils Evolving. GAO-07-1075T. Washington, D.C.: July 12, 2007. Critical Infrastructure Protection: Sector Plans Complete and Sector Councils Continue to Evolve. GAO-07-706R. Washington, D.C.: July 10, 2007. Critical Infrastructure: Challenges Remain in Protecting Key Sectors. GAO-07-626T. Washington, D.C.: March 20, 2007. Homeland Security: Progress Has Been Made to Address the Vulnerabilities Exposed by 9/11, but Continued Federal Action Is Needed to Further Mitigate Security Risks. GAO-07-375. Washington, D.C.: January 24, 2007. Critical Infrastructure Protection: Progress Coordinating Government and Private Sector Efforts Varies by Sectors' Characteristics. GAO-07-39. Washington, D.C.: October 16, 2006. Information Sharing: DHS Should Take Steps to Encourage More Widespread Use of Its Program to Protect and Share Critical Infrastructure Information. GAO-06-383. Washington, D.C.: April 17, 2006. Risk Management: Further Refinements Needed to Assess Risks and Prioritize Protective Measures at Ports and Other Critical Infrastructure. GAO-06-91. Washington, D.C.: December 15, 2005. Protection of Chemical and Water Infrastructure: Federal Requirements, Actions of Selected Facilities, and Remaining Challenges. GAO-05-327. Washington, D.C.: March 28, 2005.
The events of September 11, 2001, triggered a national re-examination of the security of facilities that use or store hazardous chemicals in quantities that, in the event of a terrorist attack, could put large numbers of Americans at risk of serious injury or death. As required by statute, DHS issued regulations that establish standards for the security of high-risk chemical facilities. DHS established the CFATS program to assess the risk posed by these facilities and inspect them to ensure compliance with DHS standards. ISCD, a component of IP, manages the program. A November 2011 internal ISCD memorandum, prepared by ISCD senior managers, has raised concerns about the management of the program. This testimony focuses on (1) how the memorandum was developed and any challenges identified, (2) what actions are being taken in response to any challenges identified, and (3) the extent to which ISCD's proposed solutions require collaboration with NPPD or IP. GAO's comments are based on recently completed work analyzing the memorandum and related actions. GAO reviewed laws, regulations, DHS's internal memorandum and action plans, and related documents, and interviewed DHS officials. The November 2011 memorandum that discussed the management of the Chemical Facility Anti-Terrorism Standards (CFATS) program was prepared based primarily on the observations of the Director of the Department of Homeland Security's (DHS) Infrastructure Security Compliance Division (ISCD), a component of the Office of Infrastructure Protection (IP) within the National Protection and Programs Directorate (NPPD). The memorandum was intended to highlight various challenges that have hindered ISCD efforts to implement the CFATS program. According to the Director, the challenges facing ISCD included not having a fully developed direction and plan for implementing the program, hiring staff without establishing need, and inconsistent ISCD leadership—factors that the Director believed placed the CFATS program at risk. These challenges centered on human capital issues, including problems hiring, training, and managing ISCD staff; mission issues, including overcoming problems reviewing facility plans to mitigate security vulnerabilities and performing compliance inspections; and administrative issues, including concerns about NPPD and IP not supporting ISCD's management and administrative functions. ISCD has begun to take various actions intended to address the human capital management, mission, and administrative issues identified in the ISCD memorandum and has developed a 94-item action plan to track its progress. According to ISCD managers, the plan appears to be a catalyst for addressing some of the long-standing issues the memorandum identified. As of June 2012, ISCD reported that 40 percent (38 of 94) of the items in the plan had been completed. These include (1) requiring ISCD managers to meet with staff to involve them in addressing challenges, clarifying priorities, and changing ISCD's culture and (2) developing a proposal to establish a quality control function over compliance activities. The remaining 60 percent (56 of 94) that were in progress included those requiring longer-term efforts, such as streamlining the process for reviewing facility security plans and developing facility inspection processes; those requiring completion of other items in the plan; or those awaiting action by others, such as approvals by ISCD leadership. 
ISCD appears to be heading in the right direction, but it is too early to tell if individual items are having their desired effect because ISCD is in the early stages of implementing corrective actions and has not established performance measures to assess results. Moving forward, exploring opportunities to develop measures, where practical, consistent with internal control standards, could help ISCD better identify any gaps between actual and expected results so that it can take further action, where needed. For example, as ISCD develops a new security plan review process, it could look for ways to measure the extent to which review times have been reduced compared with those under the current review process. According to ISCD officials, almost half of the action items included in the June 2012 action plan require ISCD collaboration with or action by NPPD and IP. The ISCD memorandum stated that IP and NPPD did not provide the support needed to manage the CFATS program when the program was first under development. ISCD, IP, and NPPD officials confirmed that IP and NPPD are providing needed support and stated that the action plan prompted them to work together to address the various human capital and administrative issues identified. GAO recommends that DHS look for opportunities, where practical, to measure its performance in implementing action items. DHS concurred with the recommendation.
U.S. national military strategy requires air, land, sea, and special operations forces to be capable of working together as a joint force in military operations. At the direction of the Chairman of the Joint Chiefs of Staff (CJCS), the CJCS Exercise Program began in the early 1960s to provide joint training opportunities. According to CJCS policy, the exercise program's primary objective is to achieve joint preparedness. Specifically, joint exercises are to be designed to demonstrate that forces are proficient in wartime and other tasks considered essential by the regional commanders in chief (CINC). CJCS guidance allows the program to satisfy other national security objectives, including overseas presence, coalition building, and support of U.S. allies. However, the guidance requires the CINCs to ensure that the program accomplishes training essential to war-fighting missions first. The guidance also allows the CINCs to train for lesser contingencies, such as peacekeeping and humanitarian operations, while emphasizing training for major contingencies. The Joint Staff's Operational Plans and Interoperability Directorate monitors and coordinates training activities under the CJCS Exercise Program. However, the program is actually implemented by the CINCs, who determine requirements, develop joint training plans, and conduct and evaluate CJCS exercises for their respective areas of responsibility. The military services provide forces to the CINCs and use service operation and maintenance funds to absorb the costs stemming from the forces' participation in the exercises. The Joint Staff allocates funds to cover transportation-related costs among the CINCs. Congress makes an annual appropriation for these transportation-related costs in the Department of Defense (DOD)-wide operations and maintenance account. The Joint Staff and CINCs coordinate some CJCS exercises with the Department of State and the National Security Council, including those that (1) involve large-scale participation of U.S. and foreign forces, (2) require granting rights or approval by another nation, (3) have particular political significance or are planned to occur in politically sensitive areas, or (4) are likely to receive prominent media attention. The State Department's role in the exercise program is primarily to review exercise plans and consult with the Joint Staff and CINCs about the implications of exercises to be held in politically sensitive regions. CJCS exercises may be simulated, live, or a combination of the two and can range from classroom seminars on a specific topic to the deployment of thousands of forces to train for military operations. Examples of these exercises include sending four senior military officials to a 3-day war game seminar to study the interrelationships during peacekeeping operations among the North Atlantic Treaty Organization, the United Nations, international and nongovernmental organizations, and the media; deploying about 1,350 land forces to a foreign country to conduct combined force tactical military operations, such as infantry tasks, reconnaissance, and combat medical operations; involving about 20,000 air, land, sea, and special operations forces in a training exercise to perform joint tasks, such as maneuvering to position, identifying targets, and providing combat support; and sending 300 Marine Corps and Air Force personnel overseas to construct a vehicle maintenance facility and renovate a community center and medical clinic. 
Over the past few years, the Joint Chiefs of Staff Chairman, CINCs, and military service commanders have expressed concerns to Congress about the high level of personnel deployments by a downsized force. DOD and Congress then considered the possibility that CJCS exercises were contributing to the high DOD-wide deployment rate. On the basis of this and other concerns, the Secretary of Defense directed in May 1997 that the number of CJCS exercise man-days be decreased by 15 percent between fiscal years 1996 and 1998 to reduce the potential impact of the exercise program on deployment rates. Also, Congress, in its conference report for DOD fiscal year 1998 appropriations, called for a reduction in funding for the CJCS Exercise Program—including both DOD-wide and service incremental funds—of about $118.5 million. In addition, the National Defense Authorization Act for Fiscal Year 1998 (P.L. 105-85) directed DOD to report on CJCS exercises conducted from fiscal year 1995 to 1997 and those planned for fiscal years 1998 to 2000. This one-time congressional reporting requirement was to include (1) the percentage of mission-essential tasks performed or scheduled, (2) exercise costs, (3) exercise priority, (4) an assessment of the training value of each exercise, and (5) options to minimize the effect of CJCS exercises on deployments. The Secretary of Defense submitted the required report to Congress on February 16, 1998. The regional CINCs use the CJCS Exercise Program largely to ensure that their forces are trained to conduct missions contained in contingency plans, provide joint training, and project a military presence worldwide to shape the international environment. Some exercises focus on just one of these objectives, whereas others, such as war-fighting training, focus on more than one objective (i.e., contingency plans and joint training). CINC exercise officials stated that deliberate decisions are made to determine the number of exercises and their objectives necessary to meet the commands' regional security needs. Our analysis showed that the five regional CINCs conducted or plan to conduct 1,405 CJCS exercises during fiscal years 1995 through 2002. About 37 percent of these exercises have trained or will train forces to implement the CINCs' existing contingency plans; about 60 percent are designed to prepare U.S. forces for joint operations; and approximately 44 percent are designed primarily for engagement purposes, such as projecting U.S. military presence abroad or fostering relations with foreign military forces. CINCs develop contingency plans that cover a wide variety of wartime and peacetime operations, such as major theater wars and evacuations. The joint training system focuses on war-fighting or preparing forces to perform the missions contained in these plans. Joint Staff guidance requires that training emphasize war-fighting missions and focus on major regional contingencies before other, less critical training is done. Of the 1,405 CJCS exercises conducted or planned for fiscal years 1995-2002, 521, or about 37 percent, were directly tied to contingency plans. Figure 1 shows the number and percent of exercises that are linked to contingency plans at each command. It also shows the current geographical areas of responsibility for the five commands. The CJCS Exercise Program also provides the CINCs with opportunities to train forces in a joint setting. 
Such training requires the application of joint doctrine, which contains the fundamental principles that guide the employment of forces of two or more services. Also, joint exercises are intended either to respond to requirements established by a joint force commander or to train joint forces or staffs for missions. Thus, joint training under the CJCS Exercise Program is primarily designed to train forces, commanders, or staff of two or more services using joint doctrine, tactics, techniques, or procedures to employ forces. The percentage of exercises intended to provide joint training at the regional CINCs has increased over the past 3 to 4 years. In 1995, we reported that about 25 percent of CJCS exercises in fiscal years 1994-95 were designed to provide joint training. Our current evaluation of 1,405 exercises conducted during fiscal years 1995-97 and planned to be conducted during fiscal years 1998-2002 shows that 836, or about 60 percent, were or are intended to provide U.S. forces with joint training experience. The percentage varied among the five regional commands, ranging from 39 percent in the U.S. Central Command to 76 percent in the U.S. Pacific Command. Table 1 shows the exercises directed toward joint training. In its February 1998 mandated report to Congress, DOD reported that 66 percent of CJCS exercises were for joint training purposes. Differences between DOD's and our figures can be attributed to methodological differences in the evaluations. For example, DOD used planned exercises for the period 1995-2000, and we used a combination of actual exercises for 1995-97 and planned exercises for 1998-2002. Further, we considered as joint only those exercises that involved the participation of more than one service component; however, DOD officials included certain exercises that did not involve more than one service if they believed that the content of the exercises had some joint training value. The CJCS Exercise Program is also used by regional CINCs to meet other responsibilities that are not directly focused on executing contingency plans or providing joint training. For example, the CINCs may conduct exercises or engagement activities to demonstrate U.S. forces' ability to project military presence within their geographic areas of responsibility. According to military officials, gaining access to critical facilities, maintaining presence, peacekeeping, providing humanitarian relief, and fostering relations with foreign nations' forces are engagement activities that are essential to accomplishing the CINCs' assigned missions. Of the 1,405 CJCS exercises conducted or planned to be conducted during fiscal years 1995-2002, 625, or about 44 percent, were directed toward engagement activities. Some regional CINCs conduct more engagement-related exercises than others. For example, in the U.S. Southern Command, 81 percent of all actual or planned CJCS exercises are for engagement purposes compared with 24 percent in the U.S. Pacific Command. Table 2 illustrates the number of CJCS exercises that each CINC devoted to engagement-type activities. The Joint Staff does not track total costs involved in conducting CJCS exercises; it only compiles actual cost data for strategic lift, port handling, and inland transportation—items covered by a specific congressional appropriation. The Joint Staff, CINC staff, and military services do not have systems to capture all exercise-related costs. Historically, there has been no requirement that total CJCS Exercise Program costs be tracked or reported. 
However, the Joint Staff estimated that the CJCS Exercise Program would cost about $400 million to $500 million annually during fiscal years 1995-2000. The Joint Staff's estimate was derived from a combination of actual and estimated costs; therefore, we were unable to independently verify the estimate. However, we believe that the costs reported by the Joint Staff may be understated, since certain incremental costs and other related operating costs were not included in its estimate. A variety of costs are directly associated with conducting CJCS exercises. These costs, shown in table 3, include strategic lift, port handling, inland transportation, exercise-related construction, and service incremental costs. The costs are funded by the DOD-wide and service operation and maintenance accounts, except for exercise-related construction, which is funded by the military construction accounts. Although the Joint Staff is responsible for program oversight, it only tracks a portion of the total exercise program costs. The Joint Staff tracks actual cost data for expenditures related to strategic lift, port handling, and inland transportation expenses and reimburses the U.S. Air Mobility Command, the Military Sealift Command, and service components for these costs. It also maintains data on the amount of funding appropriated for exercise-related military construction projects. The costs reported by the Joint Staff for these categories for fiscal years 1995 through 1997 are shown in table 4. The Joint Staff estimated that the incremental cost to conduct the CJCS Exercise Program annually between fiscal years 1995 and 2000 would be between $400 million and $500 million. This cost included actual and projected expenses related to strategic lift, port handling, and inland transportation, as shown in table 4. The remainder of the cost estimate is based on an estimate of exercise-related construction costs and service incremental operations and maintenance costs. The estimate does not include the items funded in the military service accounts, such as flying hours, steaming days, or tank miles. The CINCs do not compile, track, or report on total CJCS exercise costs, although they have access to, and track to some degree, information on strategic lift, port handling, inland transportation, and exercise-related construction costs. CINC officials told us that maintaining total cost information would be of no value to them because they are not responsible for paying these costs. The Joint Staff does allocate each CINC an annual strategic lift funding level, which is not to be exceeded, to manage CJCS exercises. Consequently, CINC officials said total cost information would have little bearing on their management responsibilities. Because individual military services provide forces for CJCS exercises, they incur and pay for the incremental operations and maintenance costs associated with the forces' participation. These costs, which include consumable supplies, repair parts, and non-aviation fuel, are tracked differently by each service. Depending on the service, costs incurred by units that are preparing to participate in an exercise, equipment maintenance and repair expenses, and costs associated with recovering from participation in the exercises may or may not be tracked. To assist the Joint Staff in developing the exercise program costs for the February 1998 report, the services provided estimated cost data related to such items as consumable supplies, repair parts, per diem, non-aviation fuel, and communications. 
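The sketch below illustrates, with invented figures, how an exercise cost estimate changes depending on which components are included. The cost categories mirror those discussed above, but the dollar amounts are hypothetical and are not Joint Staff or service data.

```python
# Hypothetical cost components (in millions of dollars) for a single fiscal
# year of the exercise program; the figures are illustrative only.
tracked_costs = {            # items the Joint Staff tracks and reimburses
    "strategic lift": 180.0,
    "port handling": 15.0,
    "inland transportation": 10.0,
}
estimated_costs = {          # items reported only as service estimates
    "exercise-related construction": 20.0,
    "service incremental O&M": 200.0,
}
excluded_costs = {           # items not captured by any component
    "flying hours": 60.0,
    "steaming days": 40.0,
    "tank miles": 5.0,
}

reported_total = sum(tracked_costs.values()) + sum(estimated_costs.values())
fuller_total = reported_total + sum(excluded_costs.values())

print(f"Reported program cost:          ${reported_total:,.0f} million")
print(f"Cost including excluded items:  ${fuller_total:,.0f} million")
print(f"Understatement from exclusions: ${fuller_total - reported_total:,.0f} million")
```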
No commonly accepted process among the service component commands exists to capture CJCS exercise costs; therefore, the services' cost estimates will vary according to what costs they choose to include. The Army, the Navy, and the Marine Corps maintained some cost data on the incremental operations and maintenance costs associated with their participation in CJCS exercises. In fiscal year 1997, for example, these services reported costs of about $54 million, $11 million, and $12 million, respectively. They developed these estimates using various systems and records of funding targets to help the Joint Staff meet its congressional reporting requirement. Navy components do not track operations and maintenance funds for flying hours and steaming days used during CJCS exercises, and Air Force components do not track flying hours used for the exercises. In providing information to the Joint Staff to satisfy DOD's reporting requirement, the Navy and the Air Force estimated incremental operations and maintenance costs, excluding flying hours and steaming days. Service component officials cited two reasons for not accumulating cost data at the level necessary to accurately determine total operations and maintenance costs associated with participation in CJCS exercises. First, such data would not enhance their management capabilities. Second, there was no DOD-wide requirement for them to track and report these costs. The officials said that any measure of the actual operations and maintenance costs consumed by CJCS Exercise Program participation would require individual unit commanders in the field (e.g., tank operators, pilots, mechanics, or explosive ordnance specialists) to maintain such cost data and report it through financial management channels. Service officials did not believe that accumulating such data would be cost beneficial. The services use various methods to track the time individuals or units spend engaged in operations and time deployed away from their home stations because there is no DOD-wide requirement to collect and maintain specific personnel deployment rate data. Service officials stated that they maintain some personnel or unit deployment rate data to track their forces' participation in the exercise program. However, the services do not regularly track the impact of participation in CJCS exercises on overall deployment rates. As a result, officials from the Joint Staff, CINCs, service headquarters, and service components at the five regional commands could not provide complete information on the total number of days consumed by all deployments, including those associated with CJCS exercises. Without this data, the program's impact on personnel deployment rates cannot be precisely determined. The Joint Staff has generally had difficulty measuring personnel deployments among the military services. We reported in April 1996 that it is difficult to determine the actual time that either military personnel or units are deployed. Our report recommended that the Secretary of Defense (1) establish a DOD-wide definition of deployment; (2) state whether each service should have a goal, policy, or regulation stipulating the maximum amount of time units and personnel may be deployed; and (3) define the minimum data on deployments that each service must collect and maintain. DOD agreed to further pursue initiatives—many of which were noted in our report—to enhance its ability to manage deployments. 
Also, in January 1997, DOD forwarded a report to the Chairman, House National Security Committee, on the impact of increased deployments on training, retention, and readiness. As part of that study, the Joint Staff assessed the capabilities of service systems to track personnel deployments. The report noted that, although all the services had systems in place to monitor deployments, each service measured and defined personnel deployment differently. For example, the Army tracks personnel at both the unit and individual level, whereas the Marine Corps and the Navy track personnel only at the unit level. The Air Force tracks personnel by aircraft type and specialty type. The difficulty with measuring either military personnel or unit deployment rates stems in part from the differences in how each military service defines and tracks personnel deployments. The services have different definitions of deployed forces. For example, the Marine Corps considers a servicemember deployed after that person has been away from his or her home station for 10 days, but the Army, the Air Force, and the Navy consider personnel to be deployed after only 1 day away from their home station. Table 5 shows the variation in service measurement systems and definitions that were in place as of March 1998. Military officials stated that systems to address the differences in personnel deployment rate measurement are evolving. The readiness staffs at Air Force, Navy, and Marine Corps headquarters, which monitor data systems, currently do not track the impact of CJCS exercise participation on overall personnel or unit deployment rates. The services are developing systems to enhance their capability to measure overall deployment rates. At the time of our visits, officials at the service components at the regional commands had not been regularly maintaining data on the participation of their personnel in CJCS exercises. Some CINCs have tried to determine the relationship between CJCS exercise participation and deployment rates, but their data and methodologies had flaws. For example, the U.S. Pacific Command performed an analysis on the relationship between CJCS participation and personnel deployments. The analysis showed that about 4 percent of the total deployed days spent by service components in fiscal year 1996 were attributable to participation in CJCS exercises. However, the analysis did not include data from all of the units assigned to the command, the components determined deployment days differently, and the information provided by the components was not complete. For example, their personnel tracking systems do not calculate the number of days used by deployments for CJCS exercises. The lack of such information is especially evident at the U.S. Atlantic Command, which has responsibility for training and deploying nearly 80 percent of all U.S. forces. Officials from this command stated that, to assess the impact of CJCS exercises on personnel deployment rates, the command would need an adequate database with visibility into all deployments, operations, exercises, and training events. Command officials stated that they do not have historical personnel deployment data for all of their units; therefore, they could not determine the impact of participation in CJCS exercises on personnel deployment rates. The officials also stated that they do not have information on the extent of unit deployments and therefore do not consider this factor when selecting units for exercises. 
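A brief sketch illustrates how the differing service definitions can yield different deployment-day totals from the same records. The deployment lengths below are hypothetical; the 1-day and 10-day thresholds reflect the service definitions described above.

```python
# Hypothetical deployment records (length of each deployment, in days) for a
# single unit; the records are illustrative only. The Marine Corps counts a
# member as deployed only after 10 days away from home station, while the
# Army, Navy, and Air Force count deployments of 1 day or more.
deployments = [3, 12, 45, 7, 1, 21]

def deployed_days(records, minimum_days):
    """Total days counted as deployed under a given minimum-length definition."""
    return sum(days for days in records if days >= minimum_days)

print("Days counted under a 1-day definition: ", deployed_days(deployments, 1))
print("Days counted under a 10-day definition:", deployed_days(deployments, 10))
```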
Although many factors contribute to the pace of deployments, such as routine training, peacekeeping efforts, and major deployments, the military officials we met with believe that personnel deployments created by participation in CJCS exercises have a minimal impact on the overall DOD-wide deployment rate. Nevertheless, the Secretary of Defense directed in the May 1997 Quadrennial Defense Review that CJCS exercise man-days be reduced by 15 percent between fiscal years 1996 and 1998 to reduce the stress on overall DOD-wide personnel deployments caused by these exercises. The officials we met with believe that any reduction in CJCS exercise participation would have virtually no impact on overall DOD-wide deployment problems. Participation in CJCS exercises has a greater impact on the personnel deployment of low-density, high-demand units than on military units in general, according to these officials. However, their conclusion was based on professional military judgment, since no systems exist to measure the impact of the exercise program on total deployments. In its February 1998 report to Congress, DOD describes various actions underway to reduce personnel deployments incurred as a result of the CJCS Exercise Program. The report cited the Secretary's directive to reduce the man-days devoted to exercise programs and noted that the military services had been asked to pursue further reductions. Actions to meet these mandates are underway, according to the report. DOD officials use the CJCS Exercise Program to accomplish a wide range of objectives. DOD policy directs that the exercises are to prepare forces for their highest priority—joint wartime operations. DOD policy also allows these exercises to be focused on maintaining relationships with U.S. allies. CINC exercise officials stated that the mix of exercises and their intended focus are the result of deliberate decisions made to meet each command's security needs. Total costs associated with conducting CJCS exercises cannot be determined. DOD and its components are currently unable to report accurate and complete cost data because they do not believe tracking such costs would be cost beneficial. The cost data in DOD's February 1998 report to Congress is incomplete because some service participation costs are not included. The reported costs generally represent some of the incremental costs incurred in conducting these exercises. DOD has no method to measure the impact of the CJCS Exercise Program on overall individual and unit deployment rates. Although the Office of the Secretary of Defense questions whether deployment problems exist, concerns expressed by Joint Staff, CINCs, and service component officials have led to actions by both DOD and Congress to reduce overall deployment rates by reducing the CJCS program in terms of funding and the number of exercises. Because DOD does not consistently track information on deployments, the impact of the exercise program on overall deployment rates cannot be precisely determined. Although DOD agreed to consider the recommendations in our April 1996 report to address the problem of managing personnel deployment rates, it has yet to fully implement them. We continue to believe that our prior recommendations to DOD are crucial to its ability to measure the impact that the CJCS Exercise Program has on overall personnel deployment rates. In written comments on a draft of this report, DOD concurred with our findings and made several observations about the CJCS Exercise Program (see app. I). 
DOD also provided technical comments, which we incorporated where appropriate. DOD said that, even though the primary focus of an exercise may be joint training, contingency operations, or engagement, it is not appropriate to consider the value of this training for just one purpose, since all CJCS exercises provide joint training value. In categorizing exercises according to their purposes, we used established guidance published by the Joint Staff to identify those exercises that provided an opportunity for joint training. We did not assess the value of the training but did include exercises with more than one purpose in all applicable categories. With respect to program costs, DOD noted that the Joint Staff monitors direct costs of the exercise program (e.g., strategic lift and port handling) as well as service incremental costs. DOD acknowledged that the services do not track flying hours, steaming days, and tank miles associated with the exercises because, according to DOD, doing so would not necessarily benefit the agency. As our report points out, without such cost information, DOD cannot determine total program costs. DOD noted that the Joint Staff is now collecting data on the number of man-days spent participating in the CJCS Exercise Program. According to DOD, these data show that man-days associated with the exercise program have been reduced and that the reduction exceeded DOD's 15-percent goal. DOD acknowledged that the services' ability to measure overall personnel and unit deployment rates is still evolving and is not yet robust enough to allow the agency to determine the share attributable to the CJCS Exercise Program. Because the services use various methods to determine deployment rates and do not regularly track the impact of participation in CJCS exercises on these rates, we cannot verify DOD's statement that it has met its man-day reduction goal for the exercise program. To assess the number and type of CJCS exercises conducted or planned for fiscal years 1995 to 2002, we obtained and analyzed quarterly schedules of exercises conducted by the U.S. Atlantic, Central, European, Pacific, and Southern Commands. These exercises represent approximately 88 percent of the total exercises conducted or planned to be conducted during the time period. We did not analyze the remaining exercises, which were conducted by the U.S. Space, Strategic, Transportation, and North American Aerospace Defense Commands and the Joint Staff. To determine the scope of joint training, we used the Joint Staff's published guidance to assess whether a particular exercise met the criteria for joint training. We reviewed the training objectives and tasks to be performed for each of the 1,405 CJCS exercises conducted or planned to be conducted during the 8-year period in our review. We provided the Joint Staff and each CINC an opportunity to review our analyses and make any necessary adjustments to account for additional exercises conducted or planned and exercises that were canceled. Any discrepancies between the information the Joint Staff and the CINCs provided about the exercises were reconciled. To identify the CJCS exercises designed primarily to accomplish contingency plans, we relied on the determinations of CINC officials. To identify exercises conducted to address engagement-type activities, we obtained and analyzed each CINC's joint training plans. We discussed our analyses with officials from the Joint Staff's Exercise and Training Division. We visited each of the five U.S. 
regional commands and discussed our analyses with CINC officials from each command. We also interviewed officials from service components of the Atlantic, European, and Pacific Commands. These officials generally agreed with our categorization of the exercises. To determine the available cost data for the exercise program, we interviewed officials and analyzed data obtained from the Joint Staff, CINCs, service component commands, and service headquarters. We also interviewed officials and obtained budget data from Headquarters, U.S. Forces Command; Headquarters, Air Combat Command; the Commander in Chief, Atlantic Fleet; the Commander, Marine Forces Atlantic; the Commander in Chief, Pacific Fleet; and Headquarters, Marine Forces Pacific. We also discussed with the Joint Staff the methodology for estimating the costs of the CJCS Exercise Program that were reported to Congress. To assess whether DOD maintains the data needed to estimate the impact of CJCS exercises on overall deployment rates, we interviewed officials and obtained documents from service headquarters; the Atlantic, Central, European, Pacific, and Southern Commands; and service components of the Atlantic, European, and Pacific Commands. We determined the systems the Joint Staff, CINCs, services, and major commands use to track military personnel and unit deployments by contacting the following organizations: the Joint Staff Operational Plans and Interoperability Directorate (J-7); the U.S. Atlantic, Central, European, Pacific, and Southern Commands; the Army Deputy Chief of Staff for Operations; the Air Force Operations Support Center, Training Division; the Office of the Deputy Chief of Naval Operations for Plans, Policy, and Operations; and the Marine Corps Current Operations Branch Exercise Office, Deputy Chief of Staff for Plans, Policies, and Operations. We also contacted major service component commands. Because these organizations were unable to provide data on the amount of time units and personnel were deployed for CJCS exercises, service training, and operational deployments, we could not evaluate the impact of the program on personnel or unit deployment rates. Both the lack of data and inconsistencies in the data that are maintained made it difficult to determine the actual time personnel or units are deployed. We conducted our work from September 1997 to July 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to appropriate congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Chairman of the Joint Chiefs of Staff. We will also make copies available to others on request. Please contact me at (202) 512-5140 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix II.
Janine Cantin
Harry Taylor
Pursuant to a congressional request, GAO reviewed the Chairman, Joint Chiefs of Staff (CJCS), Exercise Program, focusing on the: (1) number and type of CJCS exercises conducted and planned from 1995 to 2002; (2) basis for the Department of Defense's (DOD) estimates of exercise costs for the same time period; and (3) availability of DOD data to estimate the impact of CJCS exercises on deployment rates. GAO noted that: (1) DOD cannot determine the impact of the CJCS Exercise Program on overall deployment rates because DOD does not have a system that accurately and consistently measures overall deployment rates across the services; (2) without such a system, DOD cannot objectively assess the extent to which the program contributes to deployment rate concerns; (3) from fiscal year (FY) 1995 to 2002, 1,405 exercises were or are planned to be conducted as part of the program at the 5 regional commands; (4) the objectives of these exercises are to: (a) ensure that U.S. forces are trained to conduct their highest-priority mission contained in regional command contingency plans; (b) provide joint training for commanders, staff, and forces; and (c) project a military presence worldwide and support commitments to U.S. allies; (5) some exercises focus on just one of these objectives, whereas others focus on more; (6) about 37 percent of the exercises during FY 1995 through 2002 are directly related to executing contingency plans, 60 percent are intended to provide joint training benefits, and about 44 percent are primarily directed toward engagement activities with foreign nations' military forces and U.S. allies; (7) the Joint Staff maintains data on transportation-related expenses but does not monitor and track the complete costs of the program; (8) before the FY 1998 National Defense Authorization Act, DOD was not required to determine total program costs; (9) in DOD's February 1998 mandated report to Congress, the Joint Staff used a combination of actual and estimated costs to estimate that the total program would cost between $400 million and $500 million annually from FY 1995 to 2000; (10) DOD does not maintain the data that would enable it to determine the extent to which military personnel deployments associated with the program contribute to overall DOD-wide personnel or unit deployment rates; (11) the services use various methods to track individual or unit deployments and collect some data on the numbers of personnel or units that participate in CJCS exercises and the length of personnel deployments associated with the exercises; and (12) the services' ability to measure overall personnel or unit deployment rates is still evolving; as a result, the impact of the CJCS Exercise Program on deployment rates remains unknown.
This section provides information on the history of the Manufacturing USA program, the provisions of the RAMI Act, and the awards process and membership of the institutes. The June 2011 PCAST report was followed in July 2012 by another report, prepared by the Advanced Manufacturing Partnership Steering Committee; this report contained 16 recommendations to improve the manufacturing competitiveness of the United States. One of the recommendations was to establish a national network of manufacturing innovation institutes as public-private partnerships. After DOD established the first such institute as a pilot institute in 2012, the National Science and Technology Council released its January 2013 report proposing a preliminary design for the network. This report envisioned a network of public-private partnerships to be supported with a co-investment of federal obligations ranging from $70 million to $120 million for each institute and equal or greater amounts in nonfederal pledges (also known as cost-share). The amount of investment would depend on such factors as the magnitude of the opportunity and maturity of the technology and would be distributed across 5 to 7 years. Total capitalization of an institute across this time period was envisioned to be $140 million to $240 million. The report also envisioned that an institute would become sustainable within 7 years of the public announcement of an award of financial resources to start up and operate the institute, and that the institute would provide long-term economic impact in its region and nationwide. Building on the July 2012 Advanced Manufacturing Partnership report, an October 2014 PCAST report provided 12 recommendations with further guidance on how the United States could sustain its lead in innovation. For example, one of the recommendations was to create—through the National Economic Council, the Office of Science and Technology Policy, and the implementing agencies and departments—a shared Manufacturing USA governance structure that includes input from various agencies as well as from private sector experts, organized labor, and academia. Under the RAMI Act, the purposes of the Manufacturing USA program are to improve the country’s manufacturing competitiveness and increase production of goods manufactured predominantly within the United States; stimulate U.S. leadership in advanced manufacturing research, innovation, and technology; facilitate transition of innovative technologies into scalable, cost-effective, and high-performing manufacturing capabilities; facilitate access by manufacturing enterprises to capital-intensive infrastructure; accelerate development of an advanced manufacturing workforce; facilitate peer exchange of and documentation of best practices in addressing advanced manufacturing challenges; leverage nonfederal sources of support to promote a stable and sustainable business model without the need for long-term federal funding; and create and preserve jobs. The RAMI Act requires the Secretary of Commerce to report annually to Congress, until December 31, 2024, on the program’s performance, including an assessment of Commerce’s institutes and an assessment of the participation in, and contributions to, the network by institutes established by other agencies. Annual reports are also to include an assessment of how well the program is meeting its statutory purposes. The RAMI Act also contains a number of provisions related to collaboration between Commerce and other agencies. 
For example, in awarding financial assistance to establish institutes, the RAMI Act requires that the Secretary of Commerce, through AMNPO, collaborate with federal departments and agencies whose missions contribute to or are affected by advanced manufacturing. In addition, several of the functions of the AMNPO under the RAMI Act also pertain to collaboration. The functions that the RAMI Act identifies for the national office include: Overseeing the planning, management, and coordination of the program; Entering into memorandums of understanding with federal departments and agencies whose missions contribute to or are affected by advanced manufacturing, to carry out the program’s statutory purposes; Developing, not later than 1 year after the date of enactment of the RAMI Act, and updating not less frequently than once every 3 years thereafter, a strategic plan to guide the program; Establishing such procedures, processes, and criteria as may be necessary and appropriate to maximize cooperation and coordinate the activities of the program with programs and activities of other federal departments and agencies whose missions contribute to or are affected by advanced manufacturing; Establishing a clearinghouse of public information related to the activities of the program; and Acting as a convener of the network. Additionally, the RAMI Act requires that, when developing and updating the strategic plan, the Secretary of Commerce solicit recommendations and advice from a wide range of stakeholders, including industry, small and medium-sized manufacturing enterprises, research universities, community colleges, and other relevant organizations and institutions on an ongoing basis. The RAMI Act also requires the Secretary of Commerce to ensure that the AMNPO incorporates NIST’s Hollings Manufacturing Extension Partnership into Manufacturing USA program planning to ensure that the results of the program reach small and medium-sized entities. DOD, DOE, and Commerce have provided competitive financial assistance for institutes using similar processes for soliciting applications and making awards. DOD and DOE have identified prospective technology focus areas through requests for information from nonfederal organizations and meetings with industry officials and other stakeholders. For Commerce institutes, the RAMI Act requires that the Secretary of Commerce ensure an open process that allows for consideration of all applications relevant to advanced manufacturing regardless of technology focus area. Subsequently, each of the sponsoring agencies released notices of intent announcing a competition for their new institutes and then released federal funding opportunity announcements. The federal funding announcements included goals and objectives for the institutes that were related to the Manufacturing USA program and to the agency mission. Potential applicants generally have had opportunities to ask clarifying questions via mechanisms such as webinars and e-mail. Applicants submitted pre-application concept papers, which were evaluated by subject matter experts in the government to determine, for example, the potential to fulfill a recognized national need. Based on these subject matter experts’ review, the sponsoring agencies invited selected applicants to submit full proposals. Full proposals included detailed information—such as a business strategy, leadership and sustainment plan, and technology investment plan—and were evaluated by a multi-agency source selection team. 
Ultimately, an official from the sponsoring federal agency selected the winning applicant, after which the agency and the applicant negotiated an agreement, such as a cooperative agreement, to establish and manage the institute. These awards provided financial resources to start up and operate an institute focusing on the specific technology area. The institutes provide members with access to shared facilities and equipment. Members can take advantage of these resources in a variety of ways, such as by collaborating on research related to the technology focus area of the institute. Membership is open to all U.S. industrial organizations, academic institutions, nonprofit organizations, and government agencies that want to further technology and education in a certain focus area. Each institute has its own membership terms— including a range of costs, rights, and benefits—and required time commitment. Specific terms and conditions are detailed in formal membership agreements. As of December 2016, DOD, DOE, and Commerce collectively had established a total of eleven innovation institutes, which have become part of the Manufacturing USA network. Seven of these eleven institutes were operating (i.e., supporting research projects in their technology focus areas) as of December 2016. In addition, one DOD institute, two DOE institutes, and potentially one additional Commerce institute were in the process of being established through a competitive solicitation. Our analysis of institute membership between May 2016 and September 2016 shows that about 780 manufacturers and other entities, such as academic institutions and nonprofit organizations, have become members of the operating institutes. Institute members receive a variety of benefits and rights, such as networking opportunities. As of December 2016, DOD, DOE, and Commerce collectively had established a total of eleven institutes that have become part of the Manufacturing USA network. Four institutes—three DOD institutes and one DOE institute—were established prior to the enactment of the RAMI Act in 2014. The remaining seven institutes were established after the enactment of the RAMI Act. Four of these institutes—Advanced Functional Fabrics of America, the Advanced Tissue Biofabrication Manufacturing Innovation Institute, the Clean Energy Smart Manufacturing Innovation Institute, and the National Institute for Innovation in Manufacturing Biopharmaceuticals—were established in 2016 and were not yet operating as of December 2016. However, all eleven institutes are part of the network. Appendix I provides additional information on the DOD and DOE institutes that were operating as of December 2016. The institutes, which are located across the United States, focus on a wide variety of areas of advanced manufacturing, including additive manufacturing (three-dimensional, or 3D, printing), lightweight metals, integrated photonics circuits, novel fiber and textiles, and biopharmaceuticals. DOD, DOE, and Commerce plan to provide over $800 million in financial assistance, generally through cooperative agreements, for the eleven established institutes they sponsor. According to DOD, DOE, and NIST officials, the agencies plan to distribute this assistance over a 5 to 7 year period, at the end of which they expect the institutes to be sustainable without guaranteed federal financial assistance. Award recipients have pledged about $1.9 billion in nonfederal funds over the same 5 to 7 year period. 
However, the officials also said they expect the institutes will be eligible to receive additional federal financial assistance, such as competitively awarded grants for research. According to the February 2016 Manufacturing USA program annual report, the seven institutes established by DOD are designed to overcome the challenge DOD faces when the existing industrial infrastructure is insufficient to develop and produce new and emerging technologies that hold strategic promise. DOD considers its institutes to represent a key investment strategy for the department and its Manufacturing Technology program. Table 1 provides an overview of the seven institutes DOD had established as of December 2016. DOE’s mission is to ensure America’s security and prosperity by addressing its energy, environmental, and nuclear challenges through transformative science and technology solutions. DOE officials told us that the three institutes established by DOE support this mission by focusing support on innovative, advanced manufacturing technologies that will enhance domestic advanced manufacturing competitiveness and create jobs for American workers by maintaining U.S. global competitiveness in clean energy manufacturing. Table 2 provides an overview of the three institutes DOE had established as of December 2016. In December 2016, the Secretary of Commerce announced the selection of an award recipient for the first institute sponsored by Commerce, the National Institute for Innovation in Manufacturing Biopharmaceuticals. According to Commerce, this institute is expected to help advance U.S. leadership in the biopharmaceutical industry, foster economic development, improve medical treatments, and ensure a qualified workforce. Table 3 provides an overview of the institute Commerce had established as of December 2016. DOD, DOE, and Commerce were in the process of evaluating applications, selecting award recipients, and negotiating agreements to establish up to four additional institutes as of December 2016. The DOD-sponsored institute, the Robots in Manufacturing Environments Manufacturing Innovation Institute, will focus on robotic technologies in manufacturing, such as human-robot interaction, learning, and mobility. The two DOE institutes, Rapid Advancement in Process Intensification Deployment and Reducing Embodied-energy and Decreasing Emissions, will have the following technology focus areas: (1) developing breakthrough technologies to boost domestic energy productivity and energy efficiency of manufacturing processes, and (2) reuse, recycling, and remanufacturing of materials. In addition, Commerce officials were considering whether to establish a second institute, subject to the availability of funds. This second Commerce-sponsored institute could address any area of advanced manufacturing as long as it does not duplicate the technology focus areas and programs of existing or announced federally funded institutes. Table 4 provides an overview of the institutes that DOD and DOE were in the process of establishing as of December 2016. Our analysis of institutes’ membership data showed that about 520 manufacturers (mostly small manufacturers according to a Commerce official) were members of the seven DOD and DOE institutes that were operating as of December 2016. In addition, about 260 other entities, including academic institutions, state government agencies, nonprofit organizations, and NIST’s Hollings Manufacturing Extension Partnership centers, were members of these institutes. 
Some institute officials told us that additional members were pending. For example, America Makes officials told us that as of July 2016, they were processing 14 new member requests. Manufacturers and other entities obtain a variety of benefits and rights, such as networking opportunities and intellectual property access, based on membership level. Each institute has developed a membership structure that provides different costs, benefits, and rights at each level. The number of membership levels ranges from 3 levels at one institute to 10 levels at another institute. For manufacturers, membership structures generally consist of 3 to 4 levels. For other entities, membership structures and the number of membership levels vary. At the highest membership levels, members have the greatest influence on the governance of the institute and the direction of research and development. Benefits and rights at those levels could include participation in technology roadmap development, greater access to intellectual property, and participation in determining the order in which projects are executed. Benefits and rights at lower membership levels could include assistance with navigating government contracts, collaboration with other members, access to member-only data, and participation in research and development projects. See appendix I for additional information on membership benefits and rights for DOD and DOE institutes that were operating as of December 2016. Figure 1 illustrates participation by manufacturers and other entities at the highest 3 levels of membership. At the highest 2 membership levels, which we denote as Tier 1 and Tier 2, participation by manufacturers and other entities is fairly evenly divided. At the membership level that we denote as Tier 3, manufacturers represent a larger proportion of participating members. Figure 2 illustrates participation by large and small manufacturers at the highest 3 levels of membership, which we denote as Tier 1, Tier 2, and Tier 3. Manufacturers who join Manufacturing USA institutes at a higher membership level are typically larger manufacturers. Membership costs vary depending on the institute and type of entity. Some institutes allow membership costs to be provided by members as in-kind contributions. In-kind contributions can consist of items such as equipment, software, or facility space. Our analysis of institutes’ membership data showed that for manufacturers, costs for the highest membership level range from $200,000 in cash or in-kind contributions annually, with at least a 3-year commitment, to $5 million provided over a period of up to 5 years. Costs for the lowest membership level range from no fee, but with a 3-year commitment, to $15,000 in cash or in-kind contributions annually, with at least a 3-year commitment. For other entities, at institutes that distinguish between manufacturers and other types of members, membership costs generally range from no fee, with a 1-year commitment, to $5 million in expenditures, with a 5-year commitment. According to an institute official, in-kind contributions can attract more small manufacturers and other organizations to become members, and to become members at higher levels, than if membership costs were restricted to cash membership fees. See appendix I for additional information on membership costs for each institute. 
During our discussion groups, 19 representatives of manufacturers and other entities that are members of Manufacturing USA institutes provided their perspectives on how their organizations have benefited from institute membership. Representatives in each of the discussion groups identified some examples of benefits, including the following: Small manufacturers. All four representatives of small manufacturers identified networking opportunities as a benefit of institute membership. They told us they have established working relationships with other large and small manufacturers, and with suppliers, and that they obtained contracts they otherwise might not have been able to secure. One small manufacturer representative also said that, through networking, he had gained a more comprehensive understanding of his company’s technology. Another small manufacturer representative said that, through networking, he has kept up with market trends and what other manufacturers are trying to achieve. Large manufacturers. Four of seven large manufacturer representatives cited acceleration of technology development as a benefit of institute membership. For example, three large manufacturer representatives stated that networking has helped accelerate the development of their technology by between 2 and 5 years, thereby shortening the time needed to get their products ready for commercial release. Academic and Nonprofit. Five of eight other entities’ representatives said they have been able to strengthen their educational programs in ways that help prepare students to enter a certain manufacturing industry—for example, by adding material relevant to a particular technology, such as semiconductors. One academic entity representative also said that some students work on institute projects that make the students more marketable upon graduation. DOD and DOE primarily use cooperative agreements to manage their Manufacturing USA institutes that were operating as of December 2016. These agreements include terms and conditions for the institutes, and the agencies seek to ensure these terms and conditions are met through meetings, reports, and other mechanisms. Manufacturing USA institute officials and institute member representatives identified various challenges related to institute operations. DOD and DOE manage the Manufacturing USA institutes that were operating as of December 2016 primarily through cooperative agreements that define the terms and conditions that govern the relationships between the agencies and the institutes. For example, the cooperative agreements identify the amount of federal financial assistance agencies provide to each institute and the period of performance for the institute to carry out work under the agreement. According to AMNPO officials, the sponsoring federal agency determines the federal financial assistance provided through these cooperative agreements, and this amount can be up to $120 million, which is matched or exceeded by funding from private industry and other nonfederal sources over a 5- to 7-year period. Cooperative agreements may also address other aspects of the relationships, including how facilities and infrastructure will be established, how research projects will be determined, and how projects will be managed. 
In addition, the cooperative agreements may require Manufacturing USA institutes to develop governance plans that define types of membership, establish sustainability goals, and help manage intellectual property. According to institute and agency officials, DOD and DOE have established mechanisms to ensure that their Manufacturing USA institutes meet the terms and conditions specified in the cooperative agreements. Institute officials told us that DOD and DOE may conduct weekly, monthly, and quarterly meetings with each of their institutes. For example, officials from one DOD institute said that DOD officials perform program management functions through weekly teleconferences with the institute’s executive director, executive staff, and finance staff. Similarly, officials from one DOE institute said that a DOE Technology Manager conducts weekly teleconferences with the institute’s management staff. Institute officials said that DOD’s program managers may conduct monthly visits to their institutes, and DOE officials conduct monthly meetings with their institutes to assess progress toward achieving key goals specified in the cooperative agreements. DOD and DOE also host quarterly meetings for the institute directors and program managers of their respective institutes, at which they discuss a variety of topics. For example, DOD officials said the focus of their quarterly meetings is to ensure their institutes are on track to being sustainable. Finally, DOD and DOE have reporting requirements for the institutes they have sponsored. The cooperative agreements all contain requirements for reports to be submitted to the sponsoring agency, such as expenditure reports, progress reports, technical roadmaps, and presentations from program reviews. For example, DOD officials said their Manufacturing USA institutes are required to report semi-annually on their progress toward becoming sustainable. DOD uses semi-annual program management review presentations to assess its Manufacturing USA institutes’ sustainability plans and progress toward implementing these plans. In addition, DOE officials said that they require their institutes to report on sustainability along with other program milestones through quarterly technical reviews. Manufacturing USA institute officials and the institute member representatives we spoke with identified several operating challenges faced by the institutes. These included: Negotiating agreements and approving projects. Officials from three DOD institutes and one DOE institute described challenges related to negotiating and approving membership and project agreements. For membership agreements, there are challenges in negotiating rights to any intellectual property created; for example, potential members may be hesitant to share these rights with the institute. These negotiation challenges can cause schedule delays for institute operations. For project agreements, there are challenges in negotiating the terms and conditions of the agreements and with the amount of time it takes for sub-recipients and sponsoring agencies to review and approve the agreements. Some institutes have taken steps to address these challenges. For example, one DOD institute used language from another DOD institute’s membership agreement to avoid schedule delays that could result from developing its own. In addition, one DOD institute has increased the amount of time allocated for the development of project agreements. Meeting cost-share requirements. 
Representatives of two academic institutions and two small manufacturers expressed concerns about cost-share requirements for institute participation. For example, one representative said universities are not capable of providing the amount of cash some institutes require under their membership agreements, and another representative said it is a challenge for small manufacturers to participate in some institutes because of the cost-share requirements. However, according to an institute official, the possibility of providing in-kind contributions could help some small manufacturers and other organizations address these concerns. Ensuring institute sustainability. Going forward, as institutes reach the end of the period of performance for their agreements, officials from all DOD institutes identified the sustainability of the institutes as a challenge, although they told us several different reasons for their concerns: Amount of federal financial assistance. Officials from all DOD institutes said the amount of federal financial assistance provided under their cooperative agreements may not be enough for the institutes to become sustainable over the 5- to 7-year period of performance. Length of federal financial assistance. Officials from three DOD institutes said the 5- to 7-year period of performance for their cooperative agreements may not be sufficient to become sustainable. Collaborating with foreign-owned organizations. Officials from two DOD institutes expressed concerns about constraints on collaborating with foreign-owned organizations because these constraints affect their sustainability plans. For example, officials from one DOD institute said that the institute’s cooperative agreement requires DOD approval for foreign-owned organizations’ participation and that DOD may not provide approval, limiting potential sources of institute funding. Type of activities funded. Some DOD institute officials said that the cooperative agreements require them to carry out activities that do not contribute to sustainability. For example, a senior official from one DOD institute said that education and workforce development activities are required as a project component under the institute’s cooperative agreement, but these activities make only a minor contribution to the institute’s sustainability. Institute officials identified several options for how they might address sustainability challenges. For example, officials from one DOD institute said they will base their decision about which type of activities to fund, after federal financial assistance ends, on which ones help make the institute sustainable as well as on the availability of funds from nonfederal sources. Similarly, another DOD institute official said that the institute may stop funding education and workforce development activities after federal financial assistance under the cooperative agreement ends. Further, officials from one DOD institute said they may not require the same level of scrutiny on collaborating with foreign-owned organizations after federal financial assistance under their cooperative agreement ends. AMNPO led a collaboration with DOE and DOD to develop an initial set of performance measures to assess the Manufacturing USA program’s progress toward furthering its eight statutory purposes. AMNPO, DOE, and DOD continue to work together to improve agreed upon performance measures for the institutes. Commerce also has taken steps or has identified options to address challenges in measuring program performance. 
AMNPO led a collaboration with DOE and DOD to develop guidance for Manufacturing USA institutes and sponsoring agencies that included initial performance measures that could be used to track progress toward achieving the eight statutory purposes of the Manufacturing USA program. The performance measures outlined in this guidance relate to the four program goals for Manufacturing USA that were established in the program’s first strategic plan, published in February 2016. In developing the strategic plan, AMNPO officials said they worked with other agency officials, including DOD and DOE officials, to develop Manufacturing USA program goals that aligned with the initial measures as well as with the program’s statutory purposes. Figure 3 describes the relationship between the program’s statutory purposes, program goals, and initial institute performance measures. Institutes are only required to report on measures that have been agreed upon with their sponsoring agencies. According to AMNPO officials, reporting on institute performance is the responsibility of the respective “funding agency” (i.e., the sponsoring agency). Also, the RAMI Act does not include reporting requirements for institutes sponsored by DOD and DOE. However, as noted previously, the RAMI Act requires the Secretary of Commerce to report annually to Congress on the performance of the program, including an assessment of how well the program is meeting the purposes identified in the RAMI Act. AMNPO, DOD, and DOE told us that they are working together to consider whether any revisions may be warranted to these initial performance measures. Specifically, in coordination with AMNPO and DOE, DOD hired an external consultant to conduct an assessment of the Manufacturing USA program design and progress made toward achieving program objectives, and recommend areas where Manufacturing USA and its institutes could enhance programs. In January 2017, the external consultant reported on the results of its review. Among other recommendations, the report recommended developing measures to evaluate Manufacturing USA program performance using a phased approach. For example, the report recommended developing measures at the program level for two phases: 1. start-up support and shared services provision (i.e., the phase in which the program provides support to individual institutes in their establishment and delivers services applicable to all the institutes); and 2. long-term outcomes (i.e., the phase in which the program tracks the achievement of program goals on a manufacturing sector-wide and national basis). AMNPO officials told us that they plan to incorporate the results of this review into a set of revised performance measures. They also told us that, subsequently, they plan to work with DOD and DOE to reach agreement on these revised measures to allow the Secretary of Commerce to report the data collected on the revised measures in the Manufacturing USA annual report to Congress for fiscal year 2017. This annual report is expected to be issued in May 2017. Institute officials identified challenges related to measuring progress toward achieving the Manufacturing USA program’s statutory purposes, including measuring outcomes within a short timeframe, and Commerce has taken steps to address these challenges. First, institute officials told us that some of the program’s statutory purposes are difficult to measure. 
In particular, some institute officials told us that measuring the number of jobs created or preserved may not be feasible because, for example, there are challenges with associating the number of jobs created and preserved directly to an institute’s activities. Second, several institute officials also said that the timeline associated with measuring progress toward achieving the program’s statutory purposes may be too short. For example, institutes are relatively new and generally still in the process of establishing mechanisms to collect meaningful data related to many of the program purposes. Further, measuring economic impact and job creation, for example, requires performance measures with lagging indicators that will not be measurable until years into the future, according to institute officials. AMNPO officials told us that they tried to address these challenges in the first Manufacturing USA program annual report by including information on some leading indicators of success among institutes that had available data, such as the level of involvement by industry and academia and amount of cost-share. Also, as mentioned above, AMNPO officials plan to incorporate the results of the recent external consultant review, which recommended developing measures that assess progress during the program’s initial start-up phase in addition to long-term outcome measures. For example, the consultant’s report suggested developing long-term outcome measures that relate to achievement of program goals on a manufacturing sector-wide and nationwide basis, such as increased economic competitiveness, macroeconomic results, and workforce results. Agencies may also face challenges collecting performance information from institutes after their cooperative agreements end, and Commerce has identified options to continue collecting this information. Institutes currently track and report information required by their sponsoring agency and cooperative agreements. Under the DOE cooperative agreements we reviewed, financial assistance recipients are to continue to report annually for five years beyond the project period on the utilization and impact of the institute and technical progress in implementing and deploying the technologies on the institute’s roadmap. Conversely, DOD officials told us the cooperative agreements the agency entered into before December 2016 do not require their institutes to continue to report on performance after the agreements’ period of performance ends. According to AMNPO officials, one way to continue collecting this information is for institutes to voluntarily continue providing performance information to federal agencies, including Commerce. However, in the absence of ongoing federal financial support from their funding agencies for the efforts and expense of collecting and providing data, the officials said that institutes may be reluctant to continue reporting on performance. AMNPO officials told us of another option to ensure continued performance reporting but said that they did not have plans in place to use it. Specifically, the officials said the Secretary of Commerce could establish a requirement for institutes funded by other agencies to submit performance reports to Commerce after their cooperative agreements end as a condition of their continued network participation because the Secretary has the statutory authority to recognize those institutes for the purpose of participating in the network. 
AMNPO officials said they anticipate that institutes will choose to remain in the network and continue to voluntarily report to AMNPO because the institutes value the services and other benefits they receive from participating. AMNPO uses a variety of collaborative mechanisms to coordinate the efforts of the agencies that contribute to the Manufacturing USA program, including a network governance system that defines some roles and responsibilities for agencies that sponsor institutes as well as for agencies that do not sponsor institutes (non-sponsoring agencies). However, the process for developing the governance system did not include all relevant non-sponsoring agencies or ensure that their roles and responsibilities are fully identified. AMNPO has used a variety of mechanisms to enhance coordination among the institutes and agencies that contribute to the Manufacturing USA program. These mechanisms incorporate several key practices for enhancing and sustaining interagency collaboration. As our prior work has found, an interagency mechanism for collaboration is any arrangement or application that can facilitate collaboration between agencies. One of these mechanisms is the December 2016 Manufacturing USA network charter. The charter says that AMNPO is responsible for supporting network functions, which include establishing the network; facilitating intra-network collaboration; fostering robust communication between the network and external stakeholders; and sustaining, strengthening, and growing the network. The charter also identifies several subfunctions for each of these functions. For example, the facilitating intra-network collaboration function includes, among other subfunctions, establishing forums for network collaboration, information exchange, and knowledge management; facilitating the organization and sharing of lessons learned and best practices across the network; and facilitating network-level discussions among institutes regarding management of technology interfaces and technology gaps. Another mechanism that AMNPO has developed is the Manufacturing USA strategic plan. AMNPO collaborated with federal agencies that contribute to the Manufacturing USA program to develop the strategic plan; these agencies included the Department of Agriculture, Commerce, DOD, Education, DOE, the National Aeronautics and Space Administration, and the National Science Foundation. According to the strategic plan, agencies and institutes participating in the Manufacturing USA program collectively work toward achieving program goals. For example, to work toward the program goal of accelerating the development of an advanced manufacturing workforce, the strategic plan states that the network can help institutes navigate opportunities for federal financial assistance that could support their efforts, including workforce development programs such as those authorized by the Workforce Innovation and Opportunity Act, as amended. AMNPO officials told us that AMNPO has an ongoing effort to help institutes navigate such opportunities. The strategic plan also identified other opportunities for the network to enhance coordination across agencies that contribute to the Manufacturing USA program, such as by serving as a clearinghouse for information about workforce support. AMNPO also uses mechanisms such as hosting teleconferences, convening in-person meetings, and using technologies that allow it to collaborate with representatives of different agencies and institutes. 
For example, according to AMNPO officials, AMNPO hosts monthly teleconferences with directors from DOD and DOE institutes and senior deputies from DOD and DOE, and convenes in-person, semi-annual Manufacturing USA network meetings that include institute and agency staff. For instance, an August 2016 Manufacturing USA network meeting included discussions on a public name for the program, communications, and establishing a Directors Council. According to AMNPO officials, the Manufacturing USA Directors Council will further facilitate cooperation and collaboration among the institutes. The officials said AMNPO has hosted other meetings, such as a workforce development workshop that included workforce leaders from all institutes; subject matter experts; Hollings Manufacturing Extension Partnership center officials; and agency officials from Education, the National Science Foundation, Commerce, DOD, and DOE. AMNPO also recently launched a web-based shared services platform that allows institutes to share best practices and that allows agencies that participate in the Manufacturing USA network to share information with the institutes. AMNPO officials told us they are working with the institutes to provide content for this system and expect to continue working with them to ensure information is current and useful. A further coordination mechanism is a governance system that defines roles and responsibilities for agencies contributing to the Manufacturing USA program. According to Commerce, the process for developing the October 2015 governance system document for the Manufacturing USA network was initiated by the National Science and Technology Council’s Subcommittee on Advanced Manufacturing, and represented a joint effort among Commerce, DOD, and DOE. The governance system identifies the Manufacturing USA network functions and subfunctions for which agencies that sponsor Manufacturing USA institutes, as well as those that do not sponsor institutes (non-sponsoring agencies), are responsible, accountable, informed, and consulted. For example, under the governance system, as part of the facilitating intra-network collaboration function, AMNPO and the agencies that sponsor institutes are responsible for providing situational awareness to individual institutes regarding key contextual landscape issues, such as industrial developments. As another example, as part of the function to sustain, strengthen, and grow the network, AMNPO and the agencies that sponsor institutes are responsible for identifying and helping to establish long-term nonfinancial support mechanisms for the program, which the governance document notes should provide valuable nonfinancial support to help institutes succeed and thrive. Non-sponsoring agencies are responsible for one general function: promoting advanced manufacturing to a variety of external stakeholders, such as Congress, to raise awareness about the Manufacturing USA institutes. Although the governance system was developed by an interagency team, the process used to develop the governance system did not ensure that all relevant non-sponsoring agencies were included or that their roles and responsibilities for contributing to the Manufacturing USA program were fully identified. Specifically, the process used to develop the governance system did not include working with all relevant agencies that could contribute to the Manufacturing USA program. As described above, agencies that contributed to the development of the governance system included Commerce, DOD, and DOE. 
An AMNPO official told us that non-sponsoring agencies had an opportunity to comment on the governance system but were not involved in its development. According to Commerce, only the institute-sponsoring agencies were involved in developing the governance system because the governance document was chartered by the National Science and Technology Council as a reference for how Manufacturing USA institutes are to be involved in the network. Thus, non-sponsoring agencies, such as DOL, the National Science Foundation, the National Aeronautics and Space Administration, and the Department of Homeland Security, among others, were not involved in developing the governance system despite having missions that contribute to or are affected by advanced manufacturing. For example, DOL’s mission is to foster, promote, and develop the welfare of the wage earners, job seekers, and retirees of the United States; improve working conditions; advance opportunities for profitable employment; and assure work-related benefits and rights. In support of this mission, DOL is a member of the National Science and Technology Council’s Subcommittee on Advanced Manufacturing, according to the subcommittee’s charter, and DOL has created resources to assist with the development of programs that support secondary to post-secondary career pathways related to advanced manufacturing. The governance system itself does not specify which non-sponsoring agencies are responsible for contributing to the Manufacturing USA program; rather, it broadly identifies non-sponsoring agencies as agencies that are not acting in a lead funding role but that provide other types of support to an institute in the network. Furthermore, in developing the governance system, the process Commerce, DOD, and DOE used did not include fully identifying how non-sponsoring agencies could contribute to the Manufacturing USA program. For example, the governance system does not specify any responsibility for non-sponsoring agencies to provide information or expertise related to their activities to the program. Rather, where the governance system indicates a role for non-sponsoring agencies, it generally indicates that such agencies are to be informed and consulted. However, some non-sponsoring agencies may be implementing programs or other activities that could contribute to the Manufacturing USA program. For example, DOL administers workforce development programs that are carried out by state agencies and local workforce development boards. According to DOL officials, state workforce development boards could work with Manufacturing USA institutes to develop sector partnership strategies, which help employers in an industry address shared goals and hiring needs. In addition, DOL administers discretionary grant programs, which can help increase the number of skilled workers in advanced manufacturing. Financial assistance provided under these discretionary grant programs is being used to support some Manufacturing USA institutes’ education and workforce development activities, including: The Trade Adjustment Assistance Community College and Career Training grant program, which provided nearly $2 billion to strengthen manufacturing programs at community colleges; and The American Apprenticeship Initiative grant program, which will provide $175 million to train and hire 34,000 new apprentices in advanced manufacturing over the next five years. 
Information or expertise related to other agencies’ programs, such as these, if provided to AMNPO, could help it support the institutes with navigating opportunities for federal financial assistance. Our work has shown that collaborative mechanisms, such as the Manufacturing USA program’s governance system, benefit from certain key features, including clearly defined roles and responsibilities and the inclusion of all relevant participants in the collaborative effort. Specifically, key collaboration practices call for clarifying the roles and responsibilities of all participating agencies and determining whether all relevant participants have been included. By agreeing on and clearly defining the roles and responsibilities of their members as well as documenting decisions, such as in a memorandum of understanding, collaborating agencies can clarify which agency will do what, organize their joint and individual efforts, and facilitate decision making. Moreover, the RAMI Act identifies several AMNPO functions pertaining to coordination, such as establishing procedures, processes, and criteria as may be necessary and appropriate to maximize cooperation and coordinate the activities of the Manufacturing USA program with the programs and activities of other federal departments and agencies whose missions contribute to or are affected by advanced manufacturing. An AMNPO official acknowledged that the governance system will require revision as the network evolves. The official said that in revising the system, AMNPO will work with sponsoring agencies to further define roles and responsibilities. However, the official said that other agencies participate in the program as it aligns with their respective missions and AMNPO cannot prescribe functions and subfunctions for other federal agencies participating in the Manufacturing USA network. Nevertheless, without ensuring that all relevant agencies have been included in the process of developing the governance system and that non-sponsoring agencies’ roles and responsibilities have been fully identified, AMNPO may miss opportunities to leverage and coordinate the efforts of non-sponsoring agencies in contributing to the Manufacturing USA program consistent with key practices for interagency collaboration and effective implementation of AMNPO’s functions under the RAMI Act. In an effort to revitalize the U.S. manufacturing sector and increase U.S. competitiveness in advanced manufacturing, Congress passed and the President signed into law the RAMI Act, which requires the Secretary of Commerce to establish a network of institutes for manufacturing innovation, among other things. In establishing this Manufacturing USA network, AMNPO, in collaboration with DOD and DOE, has taken steps to establish institutes and measure progress toward the statutory purposes of the program. AMNPO has also worked with other sponsoring and non-sponsoring agencies to coordinate agencies’ contributions to the network through efforts that incorporate a number of the key practices we have identified to enhance and sustain interagency collaboration. However, while Commerce, DOD, and DOE worked together to develop a governance system that outlines agencies’ roles and responsibilities, the process for developing the system did not ensure that all relevant non-sponsoring agencies were included or that their roles and responsibilities for contributing to the Manufacturing USA program were fully identified. 
Working with other agencies to revise the governance system, including ensuring that all relevant agencies are involved in the process to fully identify non-sponsoring agencies’ roles and responsibilities, would strengthen AMNPO’s efforts to leverage and coordinate agencies’ contributions to the Manufacturing USA program, consistent with key practices for interagency collaboration, and would improve implementation of its function to coordinate the program. To enhance interagency collaboration in the Manufacturing USA program, the Secretary of Commerce should direct the Director of NIST to work with all non-sponsoring agencies whose missions contribute to or are affected by advanced manufacturing to revise the Manufacturing USA governance system to ensure the roles and responsibilities for how these agencies could contribute to the Manufacturing USA program are fully identified. We provided a draft of this report for review and comment to Commerce, DOD, Education, DOE, and DOL. We received the following comments: Commerce provided written comments, which are reproduced in appendix II. Commerce stated it agreed with the intent of our recommendation and that it will convene an extended team for revising the governance model. Commerce provided other comments related to the role and responsibility of the Department, interagency engagement, and performance metrics, as well as other comments on the recommendation, which are described below. DOD provided written comments, which are reproduced in appendix III. DOD also stated that it agreed with the intent of our recommendation and with expanding the network governance model to better define roles and responsibilities of non-sponsoring agencies. DOD provided other comments in areas similar to Commerce’s comments, as described below. In an email from an audit analyst in its Office of the Chief Financial Officer, DOE provided general comments, which we discuss below. Commerce, DOD, and DOE provided technical comments, which we incorporated as appropriate. Officials from Education and DOL stated via email that they had no comments on the report. Commerce, DOD, and DOE provided several comments related to our recommendation. Commerce agreed with the intent of our recommendation but requested revisions to the wording of the recommendation because NIST has no authority to compel participation by DOL and Education in the Manufacturing USA program or to define roles and responsibilities for these agencies. Relatedly, Commerce and DOD said that Commerce has no authority to coordinate other agencies in their programs and has no management responsibilities for institutes sponsored by other agencies. We recognize that Commerce has no authority to coordinate other agencies’ programs, to manage the institutes that they sponsor, or to compel agency participation in the program. We believe that our report correctly characterizes improved coordination as a collaborative effort between Commerce and other agencies. The focus of the recommendation in our draft report was not for NIST to compel agency participation or unilaterally define their roles and responsibilities but rather for NIST to work with non-sponsoring agencies in an interagency collaborative effort to identify their roles and responsibilities. 
As our report noted, one of the functions of AMNPO specified in the RAMI Act is to establish procedures, processes, and criteria as may be necessary and appropriate to maximize cooperation and coordinate the program’s activities with programs and activities of the other federal agencies whose missions contribute to or are affected by advanced manufacturing. In response to Commerce’s comments, we clarified our recommendation to make clear that NIST should engage non-sponsoring agencies in an interagency collaboration to fully identify their roles and responsibilities for contributing to the program. Commerce also proposed an alternative recommendation, commenting that it would be constructive for GAO to recommend that the Secretary of Commerce connect with counterparts at DOL and Education. While our report provides more detailed information on several DOL programs and activities that could potentially contribute to the Manufacturing USA program (relative to other non-sponsoring agencies), we provide this information as illustrative examples. Because we believe collaboration on Manufacturing USA network governance roles and responsibilities should include all relevant non-sponsoring agencies, we did not adjust our recommendation to specifically focus on DOL or Education. Commerce also noted that the first generation of the Manufacturing USA governance system was focused on how institutes would work with the network and program, so the responsible entities that developed it were limited to Commerce, DOD, and DOE. Commerce further stated that our recommendation speaks to a next generation of the Manufacturing USA governance with a broader scope. We modified our report to more clearly reflect that Commerce, DOD, and DOE were the agencies initially involved in developing the governance system as chartered by the National Science and Technology Council. However, we do not believe the scope of the governance system as evidenced by documentation provided to us during our review is substantially different than that envisioned by our recommendation. For example, as we noted in our report, the governance system assigns responsibility for certain subfunctions to non-sponsoring agencies. Finally, DOD stated it would work with the interagency team to assist in reaching out to all agencies whose missions contribute to or are affected by advanced manufacturing, and that it supports expanding the network governance model. In its general comments, DOE agreed that additional coordination with other governmental agencies would be useful and would further serve to reduce potential inefficiencies and redundancies in federal investments. Commerce and DOD also provided general comments in three areas: (1) roles and responsibilities of the Department of Commerce, (2) interagency engagement, and (3) performance metrics. Roles and responsibilities. Commerce and DOD provided comments regarding the roles and responsibilities of Commerce that were also related to the comments on our recommendation and are discussed above. Commerce and DOD also emphasized the nature of (1) the AMNPO as an interagency team hosted at NIST, and (2) the Manufacturing USA institutes as public-private partnerships, owned and managed by an industry-led consortium, with the sponsoring federal agency providing a minority of funding. We made several edits to reflect these points. Commerce and DOD also commented that present non-federal funding is over a 2 to 1 match to federal financial assistance. 
We believe we accurately present information on these relative contributions in our report. Interagency engagement. Commerce and DOD stated that all agencies were invited to be part of the Manufacturing USA program, and noted that a number of agencies, including some non-sponsoring agencies, have been members of the interagency team. Commerce and DOD also stated that GAO correctly stated that DOL was not a principal agency involved with the program, although Commerce noted that attempts had been made to reach out to DOL, and both agencies discussed providing information on DOL initiatives to the Manufacturing USA institutes. We added information to our report to acknowledge these efforts. Other comments Commerce provided regarding interagency engagement through the Manufacturing USA governance system are discussed above. Performance metrics. Commerce and DOD stated that there is agreement on the initial set of performance measures for the Manufacturing USA program and that information on each of these measures will be provided in the upcoming Manufacturing USA annual report (for fiscal year 2016). In the draft report, we included information on the extent to which all DOD and DOE institutes planned to provide information on all of the initial performance measures for the Manufacturing USA program. This information was based on documentation provided to us during the course of our review, which showed that not all DOD institutes planned to report data on all of the initial performance measures. In response to the comments received from Commerce and DOD, we followed up with DOD. A DOD official responsible for overseeing DOD’s Manufacturing USA institutes said via email that DOD recently tasked the institutes to provide data in support of these measures for aggregation in the fiscal year 2016 report. Therefore, we revised our report to remove information related to a difference in the extent to which DOD and DOE planned to have all their institutes report on all the initial measures. In its comments on our draft report, DOD also described other information it collects from its institutes consistent with its mission and emphasized that the performance measures included in our report are only initial metrics that could evolve over time. We did not make any additional changes based on these comments as we believe these concepts are already reflected in our report. Finally, Commerce and DOD commented on the title of our draft report, which was Advanced Manufacturing: Commerce Needs to Strengthen Collaboration with Other Agencies on Innovation Institutes, indicating a preference for a more descriptive report title. In its general comments, DOE stated that the draft title overstated the severity of the issue. We believe that Commerce as well as other institute-sponsoring and non-sponsoring agencies have worked collaboratively in implementing the Manufacturing USA program; however, our report identified an opportunity to further strengthen this collaboration. We adjusted our report title and believe that it appropriately highlights this area where collaboration could be strengthened. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Commerce, Defense, Education, Energy, and Labor; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff members have any questions about this report, please contact John Neumann at (202) 512-3841 or [email protected], or Andrew Sherrill at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This appendix provides additional information on each of the five Department of Defense (DOD) and two Department of Energy (DOE) Manufacturing USA institutes that were operating as of December 2016. This information includes, among other things, background information on the institute, such as its budget and technology focus, as well as data on the number of members and membership benefits, rights, and fees. The institutes are listed in chronological order based on the date of the cooperative agreement with their sponsoring agency. About: America Makes is the flagship Manufacturing USA institute. Its mission is to accelerate the adoption of additive manufacturing technologies, also known as three-dimensional, or 3D, printing, in the U.S. manufacturing sector. Location: Youngstown, Ohio, facility opened in October 2012. A satellite center is located at the University of Texas, El Paso. First cooperative agreement: Signed August 2012. America Makes is operated by the National Center for Defense Manufacturing and Machining, a 501(c)(3) nonprofit organization that has a cooperative agreement with the Air Force Research Laboratory. Budget: America Makes was established as a pilot institute in August 2012 for the Manufacturing USA program with $30 million in planned federal obligations and $40 million in pledged nonfederal cost-share. This amount was later increased to a combined $110 million in planned federal obligations ($55 million) and pledged nonfederal cost-share ($55 million). Second cooperative agreement: Signed February 2016 between the National Center for Defense Manufacturing and Machining and the Air Force Research Laboratory for up to approximately a combined additional $76 million in planned federal obligations ($50 million) and pledged nonfederal cost-share ($26 million). As of June 2016, the second cooperative agreement consists of one $9 million project. Technology portfolio: About 60 projects, such as a project to streamline the design and improve the materials for the production of customized ankle-foot orthoses, and another project to accelerate the U.S. metal-casting industry with the adoption of America Makes technologies into its operations. Membership: About 175 members from industry, academia, government, nonprofit organizations, and Hollings Manufacturing Extension Partnership centers. Summary of membership fees, benefits, and rights: America Makes provides three membership levels with different fees, benefits, and rights associated with each level. The three membership levels and their associated fees are as follows:
Platinum (lead): $200,000 cash or in-kind annually, with the intention of at least a 3-year commitment
Gold (full): $50,000 cash or in-kind annually, with the intention of at least a 3-year commitment
Silver (supporting): $15,000 cash or in-kind annually, with the intention of at least a 3-year commitment
Table 5 provides an overview of America Makes membership benefits and rights. Specific terms and conditions are further detailed in the formal membership agreement. About: DMDII’s mission is to digitize American manufacturing. 
By capturing data at every stage of the production process—and by deploying specially designed software and other digital tools—manufacturers can efficiently share and revise their digital designs. Location: Chicago, Illinois, facility opened in May 2015. Two chapters were opened in the Quad Cities and Rockford, Illinois. Cooperative agreement: Signed February 2014. DMDII is managed by UI LABS, a 501(c)(3) nonprofit organization that has a cooperative agreement with the Army Contracting Command Redstone. Budget: $176 million from federal and nonfederal financial assistance
$70 million in planned federal obligations
$106 million in pledged nonfederal cost-share
Technology portfolio: About 50 projects, such as a project to use 3D models to enhance product quality, reduce communication errors, and reduce development times throughout the supply chain, and another project to fill the gap between design and manufacturing with the information necessary to enable services for machine motion control. Membership: About 260 members from industry, academia, government, nonprofit organizations, and Hollings Manufacturing Extension Partnership centers. Summary of membership fees, benefits, and rights: DMDII provides three membership levels for industry, four membership levels for academic and nonprofit organizations, and two government membership levels with different fees, benefits, and rights associated with each level. The industry, academic and nonprofit, and government membership levels and their associated fees are as follows:
Industry:
Tier 1: $400,000 cash annually and at least $3 million in expenditures for additional projects with a 5-year commitment
Tier 2: $200,000 cash annually with a 5-year commitment
Tier 3: $500 cash annually with a 5-year commitment
Academic and nonprofit:
Tier 1: $5 million in expenditures with a 5-year commitment
Tier 2: $2 million in expenditures with a 5-year commitment
Tier 3: $1 million in expenditures with a 5-year commitment
Tier 4: $500 cash annually with a 5-year commitment
U.S. government: Obligations as specified in the cooperative agreement
State and local government: At least $5 million in expenditures
Tables 6, 7, and 8 provide an overview of DMDII membership benefits and rights for industry, academic and nonprofit, and government members, respectively. Specific terms and conditions are further detailed in the formal membership agreement. About: LIFT’s mission is to accelerate the development of new manufacturing processes for products using lightweight metals, including aluminum, magnesium, titanium, and advanced high-strength steel alloys. Location: Detroit, Michigan, facility opened in January 2015. Satellite locations are in Columbus, Ohio; Ann Arbor, Michigan; Worcester, Massachusetts; and Golden, Colorado. Cooperative agreement: Signed February 2014. LIFT is managed by the American Lightweight Materials Manufacturing Innovation Institute, a 501(c)(3) nonprofit organization that has a cooperative agreement with the Office of Naval Research. Budget: $148 million from federal and nonfederal financial assistance
$70 million in planned federal obligations
$78 million in pledged nonfederal cost-share
Technology portfolio: About 30 projects, either initiated or under development, in the areas of melt and powder processing, thermomechanical processing, joining and assembly, coatings, and agile tooling, such as developing and deploying thin wall ductile iron castings for high-volume production. 
LIFT also sponsored a Purdue-designed Indy 500 Grand Prix for high school students to engineer, build, test, and market vehicles using lightweight metals. Membership: About 90 members from industry, academia, government, nonprofit organizations, and a Hollings Manufacturing Extension Partnership center. Summary of membership fees, benefits, and rights: LIFT provides nine membership levels with different fees, benefits, and rights associated with each level. The nine membership levels and their associated fees are as follows:
Gold: $350,000 annually, of which at least $100,000 is provided in cash, with the intention of at least a 5-year commitment
Silver: $150,000 annually, of which at least $50,000 is provided in cash, with the intention of at least a 5-year commitment
Bronze: $25,000 cash annually, with the intention of at least a 5-year commitment
In-Kind members (e.g., equipment, software, or engineering service providers): $150,000 in-kind annually, with the intention of at least a 5-year commitment
Small manufacturers and start-ups: For small manufacturers with 251 to 500 employees, the fee is $5,000 cash annually, with the intention of at least a 5-year commitment; for small manufacturers with 1 to 250 employees, the fee is $2,500 cash annually, with the intention of at least a 5-year commitment; and for start-ups with fewer than 50 employees and less than 5 years in business, the fee is $1,000 cash annually, with the intention of at least a 5-year commitment
Research partners: $10,000 cash annually, with the intention of at least a 5-year commitment. The annual in-kind fee varies based on the level of participation.
Trade association: Annual cash and in-kind fees vary based on the level of participation, with the intention of at least a 5-year commitment
Professional society: $50,000 in-kind annually, with the intention of at least a 5-year commitment
Education and workforce: Annual cash and in-kind fees vary based on the level of participation, with the intention of at least a 5-year commitment
Table 9 provides an overview of LIFT membership benefits and rights across the nine membership levels. Specific terms and conditions are further detailed in the formal membership agreement. About: PowerAmerica’s mission is to develop advanced manufacturing processes that will enable large-scale production of wide bandgap semiconductors, which allow power electronic components to be smaller, faster, and more efficient than silicon-based devices. Location: Raleigh, North Carolina, facility opened in January 2015. Cooperative agreement: Signed December 2014. PowerAmerica is led by North Carolina State University, which has a cooperative agreement with DOE’s Office of Energy Efficiency and Renewable Energy. Budget: $140 million from federal and nonfederal financial assistance
$70 million in planned federal obligations
$70 million in pledged nonfederal cost-share
Technology portfolio: About 30 projects, such as a project developing a medium voltage fast charger for electric vehicles. Another project involves working with an existing silicon wafer fabrication facility to update its capabilities to allow for the production of wide bandgap semiconductor devices—specifically silicon carbide wafers. This affords PowerAmerica members an opportunity to accelerate their research by producing and testing wide bandgap devices at the facility. Membership: About 30 members from industry, academia, government, and a Hollings Manufacturing Extension Partnership center. 
Summary of membership fees, benefits, and rights: PowerAmerica provides seven membership levels with different fees, benefits, and rights associated with each level. The seven membership levels and their associated fees are as follows:
Full sustaining: $500,000 annually ($250,000 of which may be met by in-kind contributions), with a 3-year commitment. In general, the in-kind contribution should be made within the membership year; however, the member may defer 25 percent of the in-kind contribution for up to 6 months into the following membership year.
Full member: $100,000 cash annually with a 3-year commitment
Affiliate member: $50,000 cash annually with a 3-year commitment
Startup member: $10,000 cash annually with a 3-year commitment
Academic member: $10,000 cash annually with a 3-year commitment
Federal lab: No fee, but involves a 3-year commitment
Associate member: No fee, but involves a 3-year commitment
Table 10 provides an overview of PowerAmerica membership benefits and rights across the seven membership levels. Specific terms and conditions are further detailed in the formal membership agreement. Members vote on projects that have gone through the PowerAmerica call for projects process and that can be funded by industry membership fees. Enhancement projects are projects funded by industry that are beyond the core funding of the grant. They are nonproprietary so that all of the members can benefit from the research (as opposed to other industry-funded projects performed using standard research agreements that have full overhead and in which the company would own the intellectual property generated). About: IACMI’s mission is to accelerate the development and adoption of cutting-edge manufacturing technologies for low-cost, energy-efficient manufacturing of advanced polymer composites for vehicles, wind turbines, compressed gas storage, and other applications. Location: Oak Ridge, Tennessee, facility opened in June 2015. Geographic extensions are located in Dayton, Ohio (compressed gas storage); Golden, Colorado (wind turbines); West Lafayette, Indiana (design and simulation); and Detroit, Michigan (vehicles). IACMI has also signed memorandums of understanding with the Carbon Recycling Technology Center in Port Angeles, Washington, and the Composite Prototyping Center in Plainview, New York. Cooperative agreement: Signed June 2015. IACMI is managed by the Collaborative Composite Solutions Corporation, a 501(c)(3) nonprofit organization that has a cooperative agreement with DOE’s Office of Energy Efficiency and Renewable Energy. Budget: $250 million from federal and nonfederal financial assistance
$70 million in planned federal financial obligations
$180 million in pledged nonfederal cost-share
Technology portfolio: About three projects, such as a project optimizing carbon fiber production to enable high-volume manufacturing of lightweight automotive components, and another project developing thermoplastic material to lower production costs and improve recyclability of wind turbine blades. Membership: About 140 members from industry, academia, government, nonprofit organizations, and a Hollings Manufacturing Extension Partnership center. Summary of membership fees, benefits, and rights: IACMI provides four membership levels with different fees, benefits, and rights associated with each level. 
The four membership levels and their associated fees are as follows:
Charter: $5 million provided over a period of up to 5 years (at least 50 percent cash and $100,000 overhead annually)
Premium: $1 million provided over a period of up to 5 years (at least 50 percent cash and $20,000 overhead annually)
Resource: $5,000 cash annually for industry members with 500 or fewer employees, government agencies, and educational institutions; $10,000 cash annually for industry members with over 500 employees; as well as a resource contribution and a 1-year commitment
Consortium: $5,000 cash annually for industry members with 500 or fewer employees, government agencies, and educational institutions; and $10,000 cash annually for industry members with over 500 employees; involves a 1-year commitment
Table 11 provides an overview of IACMI membership benefits and rights. Specific terms and conditions are further detailed in the formal membership agreement. About: AIM Photonics’ mission is to advance integrated photonic circuit manufacturing technology development while simultaneously providing access to state-of-the-art fabrication, packaging, and testing capabilities for small-to-medium enterprises, academia, and the government. Location: Albany, New York, facility operations began in July 2015. Institute officials said the Rochester, New York, facility will be the location for the test, assembly, and packaging operations. Also, AIM Photonics has satellite hubs at the University of Rochester, Rochester Institute of Technology, State University of New York Polytechnic Institute, Massachusetts Institute of Technology, Columbia University, University of Arizona, and University of California Santa Barbara. Cooperative agreement: Signed July 2015. AIM Photonics is managed by the Research Foundation of the State University of New York, a 501(c)(3) nonprofit organization that has a cooperative agreement with the Air Force Research Laboratory. Budget: $612 million in federal and nonfederal financial assistance
$110 million in planned federal obligations
$503 million in pledged nonfederal cost-share
Technology portfolio: Institute officials said that the portfolio contains about 10 projects, such as a project pertaining to 3D displays, and another project through AIM Photonics’ AIM Academy to study the industry skills gap and employment needs. Membership: About 30 members from industry, academia, and government. Summary of membership fees, benefits, and rights: AIM Photonics provides four membership levels for industry and three membership levels for academic and nonprofit organizations with different fees, benefits, and rights associated with each level. 
The industry and academic and nonprofit membership levels and their associated fees are as follows:
Industry:
Tier 1: $1 million cash annually (may include cash and in-kind with a minimum of $100,000 cash through 2020), increasing to full cash by 2021
Tier 2: $500,000 cash annually (may include cash and in-kind with a minimum of $100,000 cash through 2020), increasing to full cash by 2021
Tier 3: $100,000 or a greater amount cash annually (may include in-kind through 2020)
Industry observer: $2,500 cash annually, with a 1-year commitment
Academic and nonprofit:
Tier 1: In-kind and tangible and intangible contributions as provided in the membership agreement (such as software licenses, hardware, services, and expertise)
Tier 2: In-kind as provided in the membership agreement (such as software licenses, hardware, or services and overhead costs)
Academic observer: No fee, with a 1-year commitment
Tables 12 and 13 provide an overview of AIM Photonics’ membership benefits and rights for industry and academic and nonprofit, respectively. Specific terms and conditions are further detailed in the formal membership agreements. About: NextFlex’s mission is to pioneer a new era of advanced flexible hybrid electronics manufacturing in the United States. Flexible hybrid electronics add electronics to materials that are part of our everyday lives to create lightweight, low-cost, flexible, conformable, and stretchable smart products. Location: San Jose, California, facility opened in August 2016. Cooperative agreement: Signed August 2015. NextFlex is managed by FlexTech Alliance, Inc., a 501(c)(6) nonprofit organization that has a cooperative agreement with the Air Force Research Laboratory. Budget: $171 million in federal and nonfederal financial assistance
$75 million in planned federal obligations
$96 million in pledged nonfederal cost-share
Technology portfolio: About 13 projects, such as a project to develop a flexible smart wound dressing with oxygen release and sensing capability, and a project to develop a flexible and stretchable monitoring sensor network to improve aerospace fuel efficiency and flight safety. Membership: About 45 members from industry, academia, government, and nonprofit organizations. Summary of membership fees, benefits, and rights: NextFlex provides four membership levels for industry, three membership levels for academic and nonprofit organizations, and a government membership level with different fees, benefits, and rights associated with each level. 
The industry, academic and nonprofit, and government membership levels and their associated fees are as follows:
Industry:
Tier 1: $150,000 cash and $250,000 in-kind annually, with the intention of at least a 3-year commitment
Tier 2: $50,000 cash and $50,000 in-kind annually, with the intention of at least a 3-year commitment
Tier 3: $10,000 cash annually and in-kind encouraged, with the intention of at least a 3-year commitment
Observer: $2,500 cash annually and in-kind encouraged, with the intention of at least a 3-year commitment
Academic and nonprofit:
Tier 1: $15,000 cash and $600,000 in-kind annually, with the intention of at least a 3-year commitment
Tier 2: $7,500 cash and $300,000 in-kind annually, with the intention of at least a 3-year commitment
Tier 3: $2,500 cash annually and in-kind encouraged, with the intention of at least a 3-year commitment
Government: No annual membership fee. In-kind contributions, which are not required, can be cash in support of the institute.
Tables 14, 15, and 16 provide an overview of NextFlex membership benefits and rights for industry, academic and nonprofit, and government members, respectively. Specific terms and conditions are further detailed in the formal membership agreement. In addition to the contacts named above, key contributors to this report were William MacBlane, Assistant Director; Christopher Murray, Assistant Director; Erin Barry; Candace Carpenter; David Dornisch; Brian Egger; Ellen Fried; Paige Gilbreath; Ruben Gzirian; Jamila Kennedy; Jonathan Ludwigson; Rachel Pittenger; Dan C. Royer; Kathryn Smith; and Jeanette Soares.
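The institute profiles above list, for each primary cooperative agreement, the planned federal obligations and the pledged nonfederal cost-share. The following is a minimal illustrative sketch, written in Python, that tallies those rounded figures to show the per-institute and aggregate nonfederal-to-federal ratios implied by the appendix; it uses only the amounts cited above (the America Makes second cooperative agreement is excluded), it is not GAO's or the agencies' methodology, and executed funding would differ from these planned and pledged amounts.

# Rounded figures from the Budget entries in this appendix, in millions of dollars:
# institute name: (planned federal obligations, pledged nonfederal cost-share)
institutes = {
    "America Makes": (55, 55),    # first cooperative agreement only
    "DMDII": (70, 106),
    "LIFT": (70, 78),
    "PowerAmerica": (70, 70),
    "IACMI": (70, 180),
    "AIM Photonics": (110, 503),
    "NextFlex": (75, 96),
}

# Each institute's pledged nonfederal cost-share matches or exceeds the planned
# federal obligations, so every ratio printed here is at least 1 to 1.
for name, (fed, nonfed) in institutes.items():
    print(f"{name}: {nonfed / fed:.2f} to 1 nonfederal-to-federal")

total_fed = sum(fed for fed, _ in institutes.values())
total_nonfed = sum(nonfed for _, nonfed in institutes.values())
print(f"Aggregate: ${total_fed} million federal, ${total_nonfed} million nonfederal "
      f"({total_nonfed / total_fed:.2f} to 1)")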
The RAMI Act includes a provision for GAO to assess the program every two years, with a final assessment in 2024. This is GAO's first report in response to the statutory provision. Among other objectives, GAO examined (1) the status of the network and use of the institutes, (2) the extent to which performance measures are in place to assess progress toward achieving the statutory program purposes, and (3) the extent to which Commerce has taken steps to coordinate agencies contributing to the program. GAO reviewed documentation and interviewed officials from Commerce, DOD, DOE, DOL, the Department of Education, and Manufacturing USA institutes; and held discussion groups with a nongeneralizable sample of institute member representatives. As of December 2016, the Departments of Defense (DOD), Energy (DOE), and Commerce (Commerce) collectively had signed agreements to establish 11 manufacturing innovation institutes. Four of these institutes were established prior to enactment of the Revitalize American Manufacturing and Innovation Act of 2014 (RAMI Act), which requires Commerce to establish a network of institutes for manufacturing innovation. Since 2014, the network—called the Manufacturing USA network—has grown as DOD, DOE, and Commerce have established seven more institutes, and Commerce, DOD, and DOE plan to sponsor up to four more institutes. Each institute is a public-private partnership between the sponsoring federal agency and a nonfederal entity in charge of operations, with the nonfederal entity matching or exceeding the federal financial assistance. GAO's analysis of institute membership from May through September 2016 shows that about 780 members had joined the seven institutes that were operating during GAO's review (i.e., supporting research projects in their technology focus areas). Members receive a variety of benefits, such as access to intellectual property and networking opportunities. Commerce, DOD, and DOE worked together to develop initial performance measures to track progress toward the Manufacturing USA program's statutory purposes. Additionally, DOD, working with Commerce and DOE, hired a consultant to review the Manufacturing USA program. The consultant's January 2017 report included recommendations on revised measures to track program progress. After considering the results of this review, Commerce plans to work with DOD and DOE to reach agreement on a revised set of measures. While Commerce may face challenges with assessing the program, such as the timeframe over which results may need to be measured, it has taken steps or has identified options to address these challenges. Commerce has used a variety of mechanisms to coordinate the Manufacturing USA program. These mechanisms incorporate several key practices GAO has identified for enhancing and sustaining interagency collaboration. However, GAO found that the process used to develop a governance system that outlines agencies' responsibilities for contributing to the program did not include all relevant non-sponsoring agencies (agencies that do not sponsor institutes), or ensure that their roles and responsibilities for contributing to the program are fully identified. Specifically, non-sponsoring agencies, such as the Department of Labor (DOL)—which administers discretionary grant programs that can help increase the number of skilled workers in advanced manufacturing—were not actively involved in developing the governance system. 
Additionally, the governance system does not specify any responsibility for non-sponsoring agencies to provide information or expertise related to their activities to the program. A Commerce official told GAO that the governance system is subject to revision, but participation in the program is up to each agency. However, including all relevant agencies in the process of revising the system and fully identifying non-sponsoring agencies' roles and responsibilities could strengthen Commerce's efforts to leverage and coordinate agencies' contributions to the program, consistent with key practices for interagency collaboration. GAO recommends that Commerce work with all relevant federal agencies to fully identify roles and responsibilities for how agencies that do not sponsor institutes could contribute to the Manufacturing USA program. Commerce agreed, but suggested an alternative recommendation. GAO modified the recommendation to clarify its intent.
A weapon system for ballistic missile defense, even a rudimentary one, requires the coordinated operation of a diverse collection of components. For example, the initial capability emplaced in 2004 employs early-warning satellites for launch detection; ground-based radars in California and Alaska and sea-based Aegis radars in the Sea of Japan for surveillance and tracking of enemy missiles; interceptors at launch sites in Alaska and California to engage and destroy incoming warheads; and command and control nodes in Alaska and Colorado to orchestrate the mission. A typical scenario to engage an intercontinental ballistic missile (ICBM) would unfold as follows: Infrared sensors aboard early-warning satellites detect the hot plume of a missile launch and alert the command authority of a possible attack. Upon receiving the alert, land- or sea-based radars are directed to track the various objects released from the missile and, if so designed, to identify the warhead from among spent rocket motors, decoys, and debris. When the trajectory of the missile’s warhead has been adequately established, an interceptor—consisting of a “kill vehicle” mounted atop a booster—is launched to engage the threat. The interceptor boosts itself toward a predicted intercept point and releases the kill vehicle. The kill vehicle uses its onboard sensors and divert thrusters to detect, identify, and steer itself into the warhead. With a combined closing speed on the order of 10 kilometers per second (22,000 miles per hour), the warhead is destroyed through a “hit-to-kill” collision with the kill vehicle above the atmosphere. To develop a system capable of carrying out such an engagement, MDA is executing an evolutionary acquisition strategy in which the development of missile defense capabilities is organized in 2-year increments known as blocks. In 2001, when it adopted the block strategy, MDA planned to construct a test bed in which new sensors, weapon projects, and enhancements to existing capabilities could be matured. When assets were considered mature, MDA planned to integrate them into the BMDS to increase the system’s capability to respond to the evolving threat. However, with the President’s directive to begin fielding an initial BMDS capability beginning in 2004, MDA switched its emphasis from developing a test bed to developing and fielding an operational capability. MDA is completing its Block 2004 program of work. The associated military capability of this block is primarily one for defending the United States against ICBM attacks from North Korea and the Middle East, although the block increases the United States’ ability to engage short- and medium-range ballistic missiles. Block 2004 is built around the GMD element, augmented by shipboard Aegis BMD radars and missiles, and integrated by the system-level C2BMC element. In addition, MDA attempted to accelerate the fielding of the Forward-Based X-Band-Transportable (FBX-T) radar into Block 2004. This radar, being developed by the Sensors Program Office, was originally intended for operation during Block 2006. MDA is also carrying out an extensive research and development effort to expand its current operational capability into future blocks. During fiscal year 2005, MDA funded the development of four other major BMDS elements in addition to the four elements that were to be fielded as part of the Block 2004 BMDS. These elements are the ABL, KEI, STSS, and THAAD. MDA expects to field a limited THAAD capability during Block 2008. 
The other elements, which are primarily in technology development, will likely be fielded in later blocks. Table 1 provides a brief description of all elements being developed by MDA. More information about them is provided in appendix II. Since 2002, missile defense has been seen as a national priority and has been funded nearly at requested levels. However, DOD’s Program Budget Decision of December 2004 called for MDA to plan for a $5 billion reduction in funding over fiscal years 2006-2011. In addition, MDA will continue to compete with hundreds of existing and planned technology development and acquisition programs for research, development, and evaluation funding. Cost growth of existing weapon programs is also likely to affect MDA’s share of future DOD budgets. MDA made progress during fiscal year 2005 in carrying out the fiscal year plans of work established by the seven BMDS elements, but it was not able to field all planned components or conduct all scheduled tests. Also, some activities that would have furthered the development of elements planned for later blocks slipped into fiscal year 2006, possibly delaying the elements’ scheduled integration into the BMDS. In addition, although MDA did not complete all work scheduled during the fiscal year, most of MDA’s prime contractors reported that the work accomplished cost more than expected. During fiscal year 2005, MDA intended to improve the C2BMC, field eight Standard Missile-3 (SM-3) missiles, make seven Aegis destroyers capable of performing long-range surveillance and tracking, upgrade two Aegis cruisers with a missile defense contingency engagement capability, upgrade two radars (Beale and Fylingdales early warning radars), and deliver and emplace 10 GMD interceptors. In addition, MDA planned a number of flight tests—six GMD flight tests, four of which Aegis BMD would participate in to detect and track ICBM targets, and three Aegis BMD intercept tests. The C2BMC program completed most activities required to provide situational awareness of the missile defense battle. The C2BMC element, whose development is in its early stages, is initially expected to monitor the operational status of each BMDS component and display threat information, such as missile trajectories and impact points. In 2005, the program installed C2BMC suites (communications software and hardware) at U.S. Strategic Command, U. S. Northern Command, and U.S. Pacific Command. The additions at U.S. Strategic Command and U.S. Northern Command provide redundant capability and more flexibility to test, exercise, and maintain the C2BMC. MDA also planned to install a Web browser in the United Kingdom, to provide situational awareness for the British government. However, the Web browser will not be operational until 2006 because DOD did not complete final policy agreements as scheduled. Development of two C2BMC software upgrades was also completed during the fiscal year. The first upgrade gave C2BMC the ability to display GMD assets on the user’s computer monitors, improved the user’s ability to call up BMDS information, reduced the time to transfer force-level planning files, and installed the software and hardware necessary to provide an operational capability at U.S. Pacific Command. The final decision to make the U.S. Pacific Command suite operational has not yet been made, but a decision is expected in March 2006. Completion of the second upgrade was a little behind schedule, but it was completed by the first quarter of calendar year 2006. 
Development of the upgrade, known as Spiral 4.5, was completed by the end of September 2005, but testing is not expected to be completed until the end of March 2006. Spiral 4.5 gives C2BMC the capability to receive, distribute, and display information developed by three new sensors—the FBX-T and Sea-Based X-Band (SBX) radars and the Fylingdales upgraded early warning radar. It also improves the consistency between the data displayed by the C2BMC and the GMD fire control monitors, both of which receive information directly from various sensors. The Aegis BMD program made good progress in developing and delivering missiles and upgrading Aegis ships for the missile defense mission. To increase the United States’ capability to defend against short- and medium-range ballistic missiles, the program produced and delivered eight Standard Missile-3s—the “bullet” for the Aegis BMD element. These missiles will be launched from Aegis cruisers, two of which were upgraded in fiscal year 2005 to enable them to perform their engagement and long-range surveillance and tracking missions. Six destroyers, whose ballistic missile defense mission is to provide long-range surveillance and tracking of ICBMs for the GMD element, were also upgraded in fiscal year 2005. The program was unable to upgrade a seventh destroyer during the fiscal year as scheduled—although assets required to proceed with the upgrade were in place—because the Navy had scheduled the ship for other activities. However, the destroyer was upgraded before the end of Block 2004. Although the GMD program made progress during fiscal year 2005, it did not meet all expectations. The GMD program had planned to field 10 additional interceptors during the fiscal year, but actually fielded 4. Two additional GMD interceptors were delivered and fielded at Fort Greely, Alaska, and the first 2 interceptors were emplaced at Vandenberg Air Force Base, California. The 2 interceptors installed at Vandenberg provide a redundant launch site and a better intercept trajectory against some ICBM threats. MDA also upgraded two early warning radars—one at Beale Air Force Base, California, and another at Fylingdales in the United Kingdom. In some scenarios, each of these radars will act as the primary fire control radar for the GMD element. Interceptor production slowed as the year progressed primarily because technical problems were discovered, mostly in the interceptor’s exoatmospheric kill vehicle (EKV). MDA officials explained that these problems were traced back to poor oversight of subcontractors, too few qualification tests, and other quality assurance issues. By the end of the fiscal year, the program had reduced its fiscal year plan for fielding interceptors from 10 to 6 so that additional interceptors could be made available for ground tests, but the contractor was only able to emplace the 2 interceptors at Fort Greely and the 2 at Vandenberg Air Force Base. The GMD and Aegis BMD programs also planned to conduct a number of flight tests during the fiscal year. The GMD program planned three nonintercept and three intercept flight tests. However, the program was able to successfully complete only one of the nonintercept flight tests and none of the intercept tests. The successful nonintercept test demonstrated that the upgraded Cobra Dane radar could detect and track a target of opportunity. 
However, a second nonintercept flight test that would have examined upgrades to the Beale upgraded early warning radar was delayed, when GMD’s test plan was restructured to make it less concurrent. Also, the other nonintercept test (integrated flight test - 13C) that was to demonstrate operational aspects of the fielded configuration of GMD’s interceptor could not be completed because the interceptor failed to launch. Of the three planned intercept tests, the program conducted one (IFT-14). However, this test was also aborted when the interceptor failed to launch. MDA planned two other intercept tests, but the tests did not take place because MDA restructured GMD’s test plan after the interceptor failures to implement a less risky test strategy. The first test in the restructured plan—which was a nonintercept test to assess the interceptor’s operation in space—was successfully completed in December 2005. The Aegis BMD Program Office planned to participate in four of the GMD tests during fiscal year 2005. Aegis BMD did not participate in any of these tests because weather conditions prevented the ship from participating in one test, the ship was unavailable during another, and GMD’s test plan was restructured, causing two tests to be canceled. In addition to participating in GMD tests, the Aegis BMD program planned three intercept tests during fiscal year 2005. However, only one test was conducted. The program delayed the two other tests because of budgetary constraints and technical problems. MDA completed one of the delayed tests in the first quarter of fiscal year 2006 and canceled the second delayed test because most of its objectives had been accomplished in the completed test. In the fiscal year 2006 test, an SM-3 missile successfully engaged a separating target, that is, a target whose warhead separates from its booster. In defeating this target, the program demonstrated that the Aegis BMD element has a capability against a more advanced threat than the nonseparating targets included in earlier tests. MDA made progress in developing the four elements that are expected to enhance the BMDS during future blocks—THAAD, ABL, STSS, and KEI— but some planned activities fell behind schedule. The THAAD Program Office completed numerous ground and component qualification tests that led to a successful first flight test in the first quarter of fiscal year 2006. The program also worked to solve technical problems that could have affected the success of the first flight tests. The ABL program completed the first major milestones of its restructured program—First Flight and First Light, completed scheduled activities associated with a series of Beam Control/Fire Control low-power passive flight tests, and began integrating the full Beam Control/Fire Control with other laser systems aboard the aircraft. The STSS program tested and integrated spacecraft components for the demonstration satellites that the program expects to launch and began testing the first satellite’s payload. The KEI program completed the construction of a shelter to house prototype fire control and communications equipment and conducted several demonstrations during which the prototype equipment collected data from overhead nonimaging infrared satellites in a timeline that, according to program officials, proves a boost phase intercept is possible. 
In addition, the program completed studies of communications equipment—which uplinks information from KEI’s fire control and communications component to its interceptor—that allowed the program to optimize the equipment’s design to operate in a nuclear environment or against jamming threats. However, all four programs experienced some setbacks. The THAAD program delayed the start of flight tests until the first quarter of fiscal year 2006. The ABL Program Office did not complete laboratory testing of the element’s high-energy laser in September 2005, as planned, and the STSS Program Office rescheduled tests of the first satellite’s payload until the second quarter of fiscal year 2006. The fourth element, KEI, also delayed some activities related to its Near Field Infrared Experiment (NFIRE), which is being conducted to gather data on the risk in identifying the body of a missile from the plume of hot exhaust gases that can obscure the body while the missile is boosting. The THAAD Program Office expected to begin flight tests in June 2005. However, the first test was delayed until November 2005 because of unexpected integration problems. For example, one delay was caused by a tear in a filter in the missile’s divert attitude control system. Program officials expect to recover the test schedule and conduct 14 flight tests before turning the first THAAD fire unit over to the Army in 2009 for operational use and testing. However, the test schedule is aggressive, requiring as many as 5 tests in some years. To complete all tests as planned, the officials told us that there can be no test failures. The Airborne Laser Program Office planned to complete tests of the element’s high-energy laser by September 30, 2005. The laser is a component of the ABL prototype that will be used to demonstrate the element’s lethality as early as the 2008 time frame. Prior to installing the laser on the prototype aircraft, the program tested the laser in its System Integration Laboratory at Edwards Air Force Base. Program officials expected the tests, which began in November 2004, to be completed by the end of fiscal year 2005. During this time frame, officials wanted to demonstrate that the laser could generate 100 percent of its design power and that it could repeatedly operate at that power for periods of about 10 seconds. As of October 2005, the laser had produced 83 percent of the power it is designed to generate and was able to operate for periods of about 5 ¼ seconds. After solving technical problems with the laser’s abort system and completing the planned installation of an ammonia cooling system, the program was able, in December 2005, to extend the laser’s operating time to more than 10 seconds. Although the laser has not reached 100 percent of its design power, officials told us that the 83 percent obtained thus far is sufficient to achieve 95 percent of maximum lethal range against all classes of ballistic missiles. The ABL Program Manager originally told us that he expected the laser to remain in the system integration laboratory until it produced 100 percent of its design power. Nonetheless, on December 9, 2005, MDA’s Director gave the ABL program permission to disassemble the System Integration Laboratory and install the laser on the aircraft. Program officials told us that they would continue to test the laser, when the aircraft is on the ground, in an attempt to demonstrate that the laser can produce 100 percent of its design power. 
During fiscal year 2005, the STSS program intended to integrate and test the spacecraft for two demonstration satellites and integrate and test the sensor payload, which includes surveillance and tracking sensors, for the first of the two satellites. The program is constructing the demonstration satellites from hardware developed by the Space-Based Infrared System- Low program before it was canceled in 1999 and plans to launch the satellites in fiscal year 2007, after all hardware has been integrated and tested. The program did not complete the payload integration and test activities in fiscal year 2005, as planned, because thermal vacuum testing is taking longer than expected. Hardware issues have emerged as the payload is being tested in a vacuum and at cold temperatures for the first time. For example, in a vacuum, the sensors’ optics did not cool to the desired temperature and the power supply to the acquisition sensor’s signal processor failed. The program office believes that repairs will correct the problems, but program officials are in the process of deciding whether further tests must be completed after the repairs are made and before the sensor payload is placed aboard the satellite. As part of its fiscal year 2005 activities, the KEI program intended to complete a number of tasks that would have enabled it to conduct the NFIRE experiment. The experiment places sensors aboard a satellite that will be launched into space, where the sensors will observe and collect infrared imagery of boosting intercontinental ballistic missiles. In fiscal year 2005, the KEI program expected to calibrate and deliver the sensor payload, complete the space vehicle integration and acceptance test, procure targets, and certify mission operation readiness. However, anomalies in the sensor payload delayed the delivery of the payload, in turn delaying the remaining activities. The day-to-day management of all NFIRE activities has since been transferred to the STSS program, which has extensive experience with the development of satellites. STSS officials told us that they do not expect the fiscal year 2005 delays to affect the experiment’s launch date. Although MDA was unable to complete all activities during fiscal year 2005 as planned, the completed work cost more than expected. Collectively, prime contractors for the various elements overran their budgets by about $458 million, or about 14 percent, with GMD accounting for approximately 80 percent of the collective overrun. Although the GMD contractor experienced the largest overrun, exceeding its fiscal year 2005 budget by approximately 25 percent, it is notable that the ABL contractor overran its fiscal year budget. The ABL contract had been restructured in 2004 to provide a more realistic cost estimate for the work planned. It is also noteworthy that continuing cost growth in the development of the THAAD missile caused the contractor to overrun its fiscal year budget for the first time since the contract was awarded. Table 2 contains our analysis of the contractor’s cost and schedule performance in fiscal year 2005 and the potential overrun or underrun of each contract at completion. All estimates of the contracts’ costs at completion are based on the contractors’ performance through fiscal year 2005. Collectively, the six contracts, for which data were available to estimate a cost at completion, could overrun their budgets by about $1.3 billion to $2.1 billion. 
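Table 2's projections of each contract's potential overrun or underrun at completion are derived from contractor performance data through fiscal year 2005. The exact method behind those projections is not reproduced in this report; the following is a minimal illustrative sketch, in Python, of one common earned value calculation used for such projections, in which the cumulative cost performance index is applied to the budget at completion. The function name and all dollar values below are hypothetical placeholders, not figures from table 2 or from any MDA contract.

# Common earned value management (EVM) projection:
#   CPI = BCWP / ACWP   (cumulative cost performance index)
#   EAC = BAC / CPI     (estimate at completion)
def estimate_at_completion(bcwp, acwp, bac):
    """Project total contract cost at completion from cumulative earned value data."""
    cpi = bcwp / acwp   # budgeted cost of work performed / actual cost of work performed
    return bac / cpi

# Hypothetical example: a contract budgeted at $4,000 million that has earned
# $1,200 million of work at an actual cost of $1,500 million (that is, the work
# performed to date has cost about 25 percent more than budgeted).
bac = 4000.0   # budget at completion, $ millions
bcwp = 1200.0  # budgeted cost of work performed to date, $ millions
acwp = 1500.0  # actual cost of work performed to date, $ millions

eac = estimate_at_completion(bcwp, acwp, bac)
print(f"Estimate at completion: ${eac:,.0f} million")   # $5,000 million
print(f"Projected overrun: ${eac - bac:,.0f} million")  # $1,000 million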
It should be noted that the cost variance at completion projected for most of the contracts is based on more than one block of work. For example, the STSS contract covers the contractor’s work on Block 2006 and Block 2010. Appendix III provides further details regarding the performance of the prime contractors for the seven elements shown in the table. About $240 million of the GMD overrun can be traced to the interceptor, with the EKV accounting for more than 42 percent, or $102 million, of that amount. The EKV’s cost growth was caused by poor quality control procedures and technical problems during development, testing, and production. The interceptor’s cost also grew when the contractor had to bring a new supplier online to produce the motors for the BV+ booster, one of the two boosters being developed to carry the EKV into space. A new supplier was needed because explosions at the old supplier’s plant prevented it from delivering the motors. As of September 30, 2005, the SBX radar, which is also being developed by the GMD program, had also overrun its fiscal year budget by about $55 million. The cost of developing this component increased when numerous unplanned changes were made to the platform that holds the radar, subcontractor costs could not be negotiated at the expected price, and additional efforts were required to ensure a functional radome. The ABL prime contractor also experienced cost growth during fiscal year 2005, even though the ABL contract had been restructured in 2004. This action provided a more realistic budget and schedule for remaining contract activities leading up to a 2008 ABL lethality demonstration. With the restructure, the contractor was no longer required to report past cost and schedule growth. However, in fiscal year 2005, the contractor once again reported that ABL’s cost was growing and that some work had been delayed. Cost grew and schedules slipped as the contractor made software changes to address problems identified during tests of the Beam Control/Fire Control, modified the laser’s abort system so that it would not shut down the operation of the laser prematurely, and reprioritized activities throughout the program. Other costs were attributable to problems with ABL’s Active Ranger System and Beacon Illuminator Laser. For example, the contractor’s cost grew when it redesigned and replaced contaminated, damaged, and inefficient optics in the commercial off-the- shelf Active Ranger System. In addition, the contractor incurred additional cost because numerous faults in the power supply for the Beacon Illuminator Laser forced changes in circuit cards and circuit boards. For the first time since the THAAD contract was awarded, in 2000, the cost of the work being performed in a given fiscal year was greater than the funds budgeted for that work. The THAAD Program Office attributed the contractor’s overrun to unanticipated missile integration problems. For example, the Flight Termination Assembly, which is responsible for terminating a THAAD missile in flight, failed qualification tests that in turn delayed qualification of the next larger assembly. In another instance, work was delayed while engineers determined why telemetry equipment, which is placed aboard a test missile to report the missile’s condition in flight, sent corrupted data to the test station. Program officials told us that the program solved all known problems that could have prevented a successful first flight test. 
However, the officials said that the missile still has telemetry problems that prevent the test station from collecting all of the data that will be generated in the third flight test. Program officials expect to find solutions for these problems prior to the third test. MDA succeeded in fielding an initial missile defense capability by the end of fiscal year 2004 and in improving that capability by December 31, 2005, when Block 2004 ended. However, the block included fewer components than planned, cost more than anticipated, and its performance is unverified. In February 2003, MDA forwarded to Congress the goals that it had established for the initial BMDS capability that it planned to develop and field during Block 2004. The goals included the quantity of components that would compose the block, the cost of developing and producing those components, and the performance that the initial BMDS capability was to deliver. However, over the course of the block, MDA progressively reduced the number of components that it expected to field and increased its cost goal, primarily to recognize the cost of sustaining fielded assets. Even with changes, MDA was unable to meet its quantity goals, and MDA is reporting that the cost of Block 2004 will be greater than expected because of additional sustainment costs. However, the Block 2004 cost being reported by MDA does not include the cost of some activities that must still be completed. MDA did not change its performance expectations for the block. Between 2003 and mid-2005, MDA progressively decreased the number of components it planned to field as part of the Block 2004 capability. However, even with the reductions, MDA was unable to deliver all components planned. Table 3 illustrates the evolution of MDA’s quantity goals and compares those goals with the number of assets fielded. By mid-2005, MDA had reduced its February 2003 goal for operational GMD interceptors from 20 to 14. The first reduction occurred when MDA recognized that an explosion at a subcontractor’s facility would reduce the number of boosters available for interceptors slated for fielding. Although MDA was developing an alternate source for boosters, the second developer could not produce all of the boosters needed to field 20 interceptors. Therefore, MDA decided two interceptors would be diverted for testing. One was used in ground testing, and the second, which will not be built until calendar year 2006, will be used in a flight test. In mid-2005, after two unsuccessful flight tests, the GMD Program Office reduced its goal for operational interceptors further—from 18 to 14—to set aside more interceptors for ground tests. The test missiles will be assembled in a later block. Even with the reductions, MDA failed to meet its quantity goals. By the end of Block 2004, MDA had delivered 10 GMD interceptors. Production slowed as the program addressed technical issues and quality control problems discovered during testing and in quality control audits. Further, GMD program officials also told us that the SBX radar will not be operational until 2006 because funds that were to be used to integrate the radar into the BMDS were used to cover some of the cost of the restructured test program. MDA was also unable to place the FBX-T radar and the Fylingdales upgraded early warning radar in operation before the end of the block. 
While MDA did not formally add the FBX-T to its Block 2004 Statement of Goals, agency officials told congressional committees that they were developing this radar and expected to have it fully operational by the end of the block. MDA was able to accelerate this capability by 1 full year and to ready the radar for deployment within Block 2004. However, negotiations with Japan, the host nation for the radar, were not completed by December 2005, and site preparation, which will commence once negotiations are complete, is expected to take 8 to 9 months. Full functionality of the Fylingdales upgraded early warning radar was also delayed. MDA used the funds that were needed to make the radar fully functional to cover part of the cost of the restructured GMD test program. Over time, MDA also altered the number of SM-3 missiles that it planned to procure. MDA's goal of producing up to 20 missiles was never reached because of fiscal constraints and the limited availability of missile parts. MDA set aside funding for 11 SM-3 missiles, 9 of which were to be made available for operational use. Program officials told us that had MDA funded more than 11 missiles, it would have been necessary to restart some component production lines, which had been closed. Reopening these lines would have made the additional components expensive. Also, the production lines could not have produced the components in a time frame that would have allowed the Aegis BMD program to meet the President's directed fielding date. In December 2004, MDA further reduced its operational goal for SM-3 missiles from 9 to 8 in response to a DOD reduction in MDA's fiscal year 2005 budget request. However, by the end of 2005, MDA was able to make 9 missiles available for operational use because 1 missile that the Aegis BMD program expected to use for testing was not needed for that purpose. MDA's February 2003 Statement of Goals also included the planned upgrade of 15 destroyers and three cruisers for the missile defense mission. However, the Aegis BMD Program Office told us that the established goals were based on a capability-defined block, that is, a block that ended when the final ship was upgraded. In February 2004, MDA corrected the Aegis BMD goals to take into account the agency's definition of a block as a 2-year time period. In making this correction, MDA reduced the number of destroyers to be delivered during Block 2004 from 15 to 10. By February 2005, budgetary constraints also caused MDA to reduce its planned Block 2004 upgrade of cruisers from three to two. MDA is reporting that the cost of Block 2004 will exceed the cost goals established in 2003 and 2004, but the reported cost does not include the cost of Block 2004 activities that have been deferred until Block 2006. In February 2003, when it sent its Statement of Goals for Block 2004 to Congress, MDA estimated that in addition to the funds received in 2002, the agency would need $5.5 billion more, or a total of about $6.7 billion, to field this capability. Table 4 shows how MDA estimated those funds would be used. A year later, in 2004, the goal had increased to approximately $7 billion. However, the expected cost of the capability is now about $7.7 billion, or around $600 million more than the revised goal and $1 billion, or about 15 percent, more than the original Block 2004 goal. MDA primarily attributes Block 2004's increased cost to the sustainment of fielded assets, which officials told us they could not fully estimate until they prepared their fiscal year 2006 budget request.
However, the $7.7 billion cost does not include some work planned for Block 2004, which the contractor could not complete before December 31, 2005. According to GMD officials, this work has been deferred until Block 2006 and its cost will be recognized as part of that block's cost. The Aegis BMD element was the only element of the BMDS program that estimated that it would need funds during the first quarter of 2006 to complete Block 2004 fielding. GMD and C2BMC predicted that all work related to fielding the Block 2004 capability would be completed by September 30, 2005, when MDA expected to place a limited defensive operational capability on alert. In February 2004, MDA revised its estimated cost for fielding a Block 2004 capability to a little over $7 billion, or about $332 million more than originally projected. Table 5 presents the changes in the composition of the goal. The cost of fielding the Block 2004 capability will be about $939 million more than the originally estimated cost of $6.7 billion and approximately $607 million more than the revised cost goal of $7 billion. Officials primarily attribute the increased cost to MDA's sustainment of fielded assets. However, the Block 2004 cost that MDA is reporting does not include work that the contractor was unable to complete within the block's time frame. Program officials told us that in fiscal year 2006 the contractor will conduct additional Block 2004 development and deployment efforts. This will be followed in fiscal year 2007 by work needed to characterize and verify the Block 2004 fielded elements. The officials said that Block 2006 funds will be used to pay for these activities. Table 6 shows the actual cost incurred between October 1, 2002, and December 31, 2005, for the Block 2004 fielded capability and the sustainment cost expected to be incurred in fiscal years 2006 and 2007. It should be noted that this is not the full cost of the initial capability because DOD began to spend funds to develop the current missile defense capability in 1995, and as noted above, additional Block 2004 work will be completed and funded during Block 2006. Because test data are not available to anchor simulations that MDA uses to predict BMDS performance, the capability of Block 2004 cannot be verified. MDA has conducted a variety of tests that suggest Block 2004 offers some protection against ballistic missile attacks. However, MDA cannot be sure how well the BMDS will perform against ICBMs because tests needed to characterize the system's performance have not yet been conducted. Test officials have also suggested that to fully characterize the BMDS's ability to defeat short- and medium-range ballistic missile threats, more tests of Aegis BMD are needed. Additionally, the performance of emplaced GMD interceptors is uncertain because inadequate mission assurance/quality control procedures may have allowed less reliable or inappropriate parts to be incorporated into the manufacturing process. In February 2003, MDA set performance goals for Block 2004 that included a numerical goal for the probability of a successful BMDS engagement, a defined area from which the BMDS would prevent an enemy from launching a ballistic missile, and a defined area that the BMDS would protect from ballistic missile attacks. MDA did not alter Block 2004 performance goals, despite its actions on quantity and cost goals. A combination of tests and simulations is necessary to demonstrate whether the Block 2004 capability can meet its performance goals.
Because it does not always conduct a sufficient number of tests to compute statistical probabilities of performance, MDA uses models and simulations to measure the probability that the BMDS will perform as designed. By employing digital simulations, MDA can estimate system effectiveness over a wide range of conditions, scenarios, and system architectures. However, to ensure that models underlying these simulations reflect real-world operation, the models must be anchored by data collected during both ground and flight tests. MDA has completed simulations, ground tests, and flight tests that demonstrate various functions of the BMDS engagement, such as launch detection, tracking, interceptor launch, and intercept. However, it has not successfully completed an end-to-end flight test of the GMD element—the centerpiece of the BMDS—using production-representative components. In the absence of these data, MDA's assessment of GMD's Block 2004 performance is based on data derived from a number of sources, including design specifications, output from high-fidelity simulations, and integrated ground tests of various components. Officials in DOD's office of the Director, Operational Test and Evaluation (DOT&E), told us that MDA's computer-based assessments are appropriate for a developmental program but could present difficulties in interpreting results for operational considerations. During fiscal year 2005, MDA planned four integrated flight tests to demonstrate the ability of the Block 2004 BMDS against ICBMs. Together these tests were to assess the ability of different radars to detect and track targets for the GMD element, the ability of GMD's fire control system to formulate a firing solution from each radar's data, and the interceptor's ability to hit and kill the target. Two of these tests were initiated. However, both tests were aborted because, in each, the GMD interceptor failed to launch. MDA postponed and has not rescheduled the third and fourth tests because, after the test failures, it decided to restructure its test program to make it less concurrent. MDA's cancelation of the third flight test was particularly problematic because it prevented MDA from exercising Aegis BMD's long-range surveillance and tracking capability in a manner consistent with an actual defensive mission. The Aegis BMD Program Office told us that Aegis BMD can adequately perform detection and tracking for the GMD element because in one test Aegis BMD demonstrated the ability to track a real target and in another test the ability to communicate track data to GMD's fire control. However, DOT&E officials told us that having Aegis BMD perform long-range surveillance and tracking in real time would determine the degree to which errors are introduced when these activities are combined. MDA also planned to have the third test fulfill a congressional mandate to test the Block 2004 configuration in an operationally realistic manner. For the first time, a test would have included production-representative GMD hardware and software operated by sailors and soldiers. All successful GMD intercepts to date have used surrogate and prototype components. DOT&E officials suggested that further tests are needed to fully characterize Aegis BMD's capability against ballistic missiles. The officials told us that Aegis BMD is making good progress in incorporating operational realism into its flight tests. Operational crews execute the intercept flight missions without advance notice of launch time.
However, in early tests, ship position with respect to the target's trajectory was still controlled to increase the probability of intercept. In addition, the tests have been constrained by sea states, time of day, weather, target dynamics, and the need to baseline Aegis BMD's performance and concept of operations. The officials are recommending that in future tests Aegis BMD's tactical mission planner dictate the ship's position and the sectors that its radar searches, rather than having the program script the ship's location and its radar's search sectors. Aegis program officials explained that the need to baseline Aegis BMD's performance has indeed affected the ship's position during tests. For example, an intercept attempt in February 2005 that tested a specific burn sequence for the missile's booster required that the ship be placed close to the target track. Another test, in November 2005, placed the ship relatively far from the target track. The officials emphasized that in both tests Aegis BMD performed successfully. Even if MDA had successfully completed flight tests needed to anchor the models and simulations used to predict the performance of the initial BMDS capability, the performance of some emplaced GMD interceptors would still be uncertain. GMD officials told us that before interceptors are emplaced at Fort Greely and at Vandenberg Air Force Base for operational use, they undergo various tests. However, quality control procedures may not have been rigorous enough to ensure that unreliable parts or parts that were inappropriate for space applications would be removed from the manufacturing process. Two unsuccessful flight tests have been traced to poor quality control procedures. GMD officials have recommended that MDA remove the first nine interceptors emplaced at Fort Greely and Vandenberg Air Force Base when the interceptors are scheduled for upgrades, so that any parts that tests have shown may not be adequately reliable or appropriate for use in space can be replaced. One of the two test failures (IFT-10) occurred in December 2002 when the EKV could not separate from its booster. A team of engineers that investigated the test failure found that an open circuit occurred in one part of the interceptor's Laser Firing Unit, which disconnects the EKV from the booster. The open circuit was caused by a broken pin in an application-specific integrated circuit (ASIC) that controlled one aspect of the EKV/booster separation. The pin was fatigued by flight vibration. According to the test report, the ASIC's design did not allow for variations in the assembly process, and the contractor did not lay out an adequate process to uniformly produce the part. Additionally, the contractor did not conduct adequate testing to identify the problem. In earlier tests, the board on which the ASIC was mounted was stabilized with a foam material so that the board was not as affected by the severe vibrations that occur at launch. However, to improve producibility and reliability, the foam was removed prior to IFT-8. The second flight test failure (IFT-14) occurred in fiscal year 2005. The interceptor in this test failed to launch because two of the three arms that support the interceptor within its silo did not fully retract and lock.
MDA’s investigation into the test failure found that the arms could not retract because the surface of one part was significantly corroded, and crush blocks, which absorb the impact of the arms as they retract and lock into position, were an earlier design that required more force to crush. MDA’s Deputy Director for Technology and Engineering pointed out that the corroded part was subjected to a more severe environment than it was designed to withstand. However, officials in the Office of Safety, Quality, and Mission Assurance told us that if simple quality assurance procedures had been in place, the corroded part would have been detected and the earlier design of the crush blocks would not have been installed. The GMD program considered four options for dealing with the first nine interceptors emplaced for operational use (seven at Fort Greely and two at Vandenberg Air Force Base). The options included (1) leaving the interceptors in their silos and accepting them as is; (2) using the interceptors in reliability tests; (3) over time, returning the interceptors to the contractor’s facility for disassembly and remanufacture; or (4) a combination of the other options. GMD program officials recently told us that their recommendation to MDA is to replace questionable parts when the interceptors are upgraded in fiscal year 2007. The officials said to replace the parts, the interceptors will be removed from their silos. The problems encountered during Block 2004, which ultimately prevented MDA from achieving all of its goals for the block, were brought about by management compromises. Time pressures caused MDA to stray from a knowledge-based acquisition strategy, allowing the GMD program to condense its acquisition cycle at the expense of cost, quantity, and performance goals. DOD has given MDA the flexibility to make such changes. MDA programs follow a structured acquisition plan called the Integrated Management Plan that is meant to guide the development of elements and components, as well as their integration into the BMDS. If the plan, which includes eight events, is completed in an orderly manner, it will increase the likelihood that programs will attain knowledge at appropriate points in the acquisition cycle. Successful developers have found that attaining certain knowledge at specific points decreases the likelihood of cost growth, schedule slips, or degraded performance. However, because MDA’s plan allows early deployment of a capability well before the eight events are completed, programs may gain knowledge too late in the process to prevent such problems. MDA officials told us that because the agency was directed to field a capability earlier than planned, it accepted additional risks. The risks were greatest in the GMD program that concurrently matured technology, designed the system, and produced and fielded operational assets as it attempted to meet its Block 2004 fielding dates. A primary tenet of a knowledge-based approach to product development is to demonstrate the maturity of critical technologies before starting product development and to demonstrate design maturity and production process maturity before committing to production and fielding. MDA’s Integrated Management Plan provided for this orderly progression through the acquisition cycle. At Event 1, an assessment of all technology critical to the system’s design was to be completed. By the end of Event 2, design work was to be finished, and at the end of Event 4, the design was to be demonstrated in developmental tests. 
By the close of Event 5, an assessment of the element's operational capability would be complete and MDA would decide whether the element was ready to be handed over to a military service for production, operation, and sustainment or whether the element should be developed further. However, the Integrated Management Plan also allows a program to depart from a knowledge-based acquisition strategy if a decision is made to field all or part of a capability early. At the end of each event from Event 3 on, MDA may elect to accelerate fielding of all or part of a capability by simultaneously completing all phases of the acquisition cycle. That is, a program can concurrently mature technology, design its system, and produce and field assets for operational use—which is contrary to a knowledge-based acquisition strategy. According to MDA officials, GMD was at Event 3—the point at which a pilot production line produced its first components and the components' functionality had been tested—when the presidential decision was made to deploy an early capability. MDA's Integrated Management Plan is presented in appendix IV. Until the President's directive, the GMD program was focused on developing a test bed. If GMD had serially progressed through all eight events of the Integrated Management Plan, components would have been matured and demonstrated in the test bed. At the end of Block 2004, MDA could have (1) transferred GMD to a military service for production, operation, and sustainment; (2) developed GMD further in a subsequent block; or (3) terminated the program altogether. However, to field early, the GMD program condensed its Block 2004 acquisition cycle. The program attempted to simultaneously demonstrate technology, design an integrated GMD element, and produce and emplace assets for operational use—all within 2 years of the President's directive. The GMD program fielded an initial capability in 2004 and 2005, as it was directed to do. However, there were consequences of the accelerated schedule. The fielding schedule for some GMD components slipped, and the program could not complete an end-to-end test needed to verify GMD's performance. Production and fielding of GMD interceptors were slowed by technical problems and the program's need to address quality control issues. To address these issues, the program restructured its test plan at a cost of about $115 million, but it funded the plan at the expense of making the Sea-Based X-Band and Fylingdales upgraded early warning radars operational. Block 2006 funds will now be used to complete these Block 2004 activities. Other BMDS elements, whose fielding was not planned as part of Block 2004, are currently following a knowledge-based acquisition strategy. For example, the ABL program is concentrating on maturing technologies critical to the element's design by developing a prototype. If the prototype successfully demonstrates its lethality in a demonstration planned for no earlier than 2008, it will become the basis for the design of an operational capability. Similar to ABL, the KEI program is also concentrating on demonstrating technologies critical to its design. If these demonstrations are successful, the technologies could be incorporated into KEI's design. GMD officials told us that in the process of accelerating GMD's schedule they became inattentive to weaknesses in the program's quality control procedures. The GMD program had realized for some time that its quality controls needed to be strengthened.
However, the program’s accelerated schedule left little time to address the problems. The extent of the weaknesses was documented in 2005 when MDA’s Office of Safety, Quality, and Mission Assurance conducted audits of the contractor developing the interceptor’s EKV and the Orbital Boost Vehicle. In its audit of the EKV contractor, the MDA auditors found evidence that The prime contractor did not correctly communicate all essential EKV requirements to its subcontractor and the subcontractor did not communicate complete and correct requirements to its suppliers. The EKV subcontractor did not exercise good configuration control. The reliability of the EKV’s design cannot be determined, and any estimates of its serviceable life are likely unsupportable. The contractor has no written policy involving qualification testing and does not require that its EKV subcontractor follow requirements established by industry, civilian, and military users of space and launch vehicles. The contractor’s production processes are immature, and the contractor cannot build a consistent and reliable product. More details on MDA’s audit of the EKV contractor can be found in appendix IV. Similarly, the auditors found that the contractor producing the Orbital Boost Vehicle needed to improve quality control processes and adherence to those processes. According to deficiency reports, the contractor did not always, among other things, flow down requirements properly; practice good configuration management to ensure that the booster met form, fit, and function requirements; implement effective environmental stress screening; or have an approved parts, material, and processes management plan. Ironically, the pitfalls that result from an accelerated fielding had already been learned in the THAAD program. In 2000, we reported that pressure on the THAAD program to meet an early fielding date nearly resulted in the program’s cancelation in 1998. When flight testing began, in 1995, the THAAD missile experienced numerous problems. Eight of the first nine flight tests revealed problems with software errors, booster separation, seeker electronics, flight controls, electrical short circuits, foreign object damage, and loss of telemetry. According to several expert reviews from both inside and outside the Army, the causes of early THAAD flight test failures included inadequate ground testing, poor test planning, and shortcomings in preflight reviews. One study noted that failures were found in subsystems usually considered low-risk. Subsequently, the THAAD program manager adopted a knowledge-based strategy, which led to successes in later tests. Compared with other DOD programs, MDA has greater latitude to make changes to the BMDS program without seeking the approval of high-level acquisition executives outside the program. In early 2002, DOD allowed MDA to effectively defer the application of DOD acquisition regulations to the BMDS program until a decision is made to transfer a BMDS capability to a military service for production, operation, and sustainment. This allows MDA to make program changes without asking for prior approval. For example, MDA has the flexibility to make trade-offs between BMDS elements. That is, the MDA Director can decide to accelerate one element while slowing another down. That is not to say that DOD and Congress are not kept informed of MDA’s progress or changes, but that the MDA Director, by statute, has the discretion to determine which variations are significant enough to be reported. 
Accountability has thus been applied so broadly as to mean delivering some capability within funding allocations. Under DOD's acquisition regulations, each BMDS element would likely have met the definition of a major acquisition program. Major acquisition programs are required by statute (10 U.S.C. § 2435) to develop a program baseline when the program begins system development and demonstration. The baseline, which includes cost and schedule estimates and formal performance requirements developed by the warfighter, is considered the initial business case for the acquisition effort. Once a baseline is approved, major acquisition programs are required to operate within the baseline or to obtain approval from a high-level acquisition executive outside the program to make cost, schedule, or performance changes. Changes in any of these baseline parameters would reflect a change in the program's business case. Approved programs also report program status measured against the baseline and any baseline changes to Congress in an annual Selected Acquisition Report (SAR). Congress has also established criteria to identify significant variations in a weapon system's cost or schedule and requires that those changes be reported more often, in a quarterly SAR. MDA is not yet required to have an approved program baseline as defined by 10 U.S.C. § 2435 for either the BMDS or its elements. Instead, MDA develops more flexible cost and quantity goals and capability-based performance objectives. MDA has a separate statutory requirement to establish and report cost, schedule, and performance baselines for block configurations of the BMDS being fielded. But these baselines are more flexible than the rigid baselines that are required of other acquisition programs and that DOD and Congress use in performing program oversight. While MDA reports its cost, quantity, and performance information to Congress in an annual Selected Acquisition Report, it is free to revise its goals and objectives, as it did during Block 2004, if they are not achievable with the time or funds available. MDA is also required by statute to report significant variations from the baselines in its annual SAR. However, there are no criteria to identify which variations are significant enough to report. Instead, MDA's Director, by statute, has the discretion to determine which variations will be reported. For example, the Director decides whether to report that activities that Congress funded in one block are being deferred to a later block and will be paid for with the latter block's funding. MDA has begun to address the quality control weaknesses in the BMDS program. Some actions are as simple as revising reporting lines so that MDA's Chief of Safety, Quality, and Mission Assurance reports directly to MDA's Director and Deputy Director, and establishing toll-free telephone numbers for reporting safety and quality issues. MDA is also renegotiating some aspects of its prime contracts to revise the award fee determination process, placing more emphasis on quality control and the implementation of industry best practices, and is adding mission assurance provisions to contracts that promote process improvements, improve productivity, and enhance safety, quality, and mission assurance. Furthermore, MDA is placing more emphasis on the definition and correction of quality control weaknesses by conducting audits of major contractors and subcontractors.
It has also renewed the emphasis on the role of the Defense Contract Management Agency (DCMA) in performing quality assurance functions in support of MDA programs. Finally, MDA has adopted a more conservative test approach for the GMD program that includes increased ground tests and an incremental approach to flight testing. However, the actions have not gone so far as to ensure that all BMDS programs implement knowledge-based practices or to ensure that the activities planned to develop, demonstrate, and produce the capabilities intended for future blocks are achievable within the block time frames without resorting to a concurrent schedule. MDA plans to revise prime contracts to reflect the importance of good quality assurance procedures and the contractor's implementation of industry best practices. GMD officials told us that in fiscal year 2005 the award fee on the GMD contract was based partially on the contractor's implementation of a good quality control program. The officials said that of the $407 million award fee available for the period running from October 1, 2004, through September 30, 2005, $9 million was based on the contractor's implementation of good quality assurance and supplier management procedures. In November 2005, MDA awarded the contractor $2.1 million of the $9 million set aside for the implementation of quality assurance procedures. MDA officials also told us that in fiscal year 2006, the overarching criterion for the entire award fee pool of $302 million will be the contractor's implementation of and adherence to industry standards and best practices. MDA also expects to modify prime contracts to incorporate a document referred to as MDA Assurance Provisions (MAP). All prime contracts are to include MAP standards, but not all contracts have been modified because MDA and some contractors have not reached agreement on the cost of implementing the MAP. For example, the GMD prime contractor estimates that implementation costs will be around $280 million. However, officials in MDA's Office of Safety, Quality, and Mission Assurance told us that at least one contractor has agreed to implement the MAP at no additional cost. The MAP provides a measurable, standardized set of safety, quality, and mission assurance requirements to be applied to developers for mission- and safety-critical items in support of evolutionary acquisition and deployment of MDA systems. For example, the document includes standards regarding the collection and reporting of foreign object damage and debris incidents, a requirement for working-level peer reviews throughout design and development to identify and resolve technical issues and concerns prior to formal system-level reviews, and a requirement for ensuring that commercial off-the-shelf items meet all functional and interface requirements and are qualified to operate in their intended environment. In addition to requiring contractors to abide by MAP standards, MDA also requires each BMDS element program office to compare its mission assurance plan with the MAP. As a result of the comparison, the program is expected to identify critical mission assurance needs that are not being met. The results are catalogued in a Mission Assurance Implementation Plan (MAIP), which element program directors are accountable for implementing. Each element is to continuously assess MAIP execution so that feedback can be used to improve both the MAP and the MAIP.
So that the quality assurance weaknesses in the BMDS program are accurately defined, the MDA Director also gave the Office of Safety, Quality, and Mission Assurance unfettered access to all MDA contractor operations, activities, and documentation. Under this authority, MDA quality personnel have been placed in each prime contractor facility to monitor the contractor's quality procedures, and the office is auditing major contracts to identify quality assurance deficiencies and areas where procedures can be improved. As of November 2005, the office had completed audits of the Aegis BMD SM-3, GMD EKV, Orbital Sciences Corporation booster, and THAAD contracts. MDA is also placing a renewed emphasis on DCMA's quality assurance role. In a May 2005 delegation letter, MDA directed DCMA to
• perform quality assurance surveillance activities in accordance with DCMA policies and directives;
• ensure that mandatory government inspections authorized by MDA are incorporated into the contractor's manufacturing process plans and/or critical suppliers' plans;
• report mandatory government inspection test results, missed inspections, and requests for permission to waive inspections to MDA's Office of Safety, Quality, and Mission Assurance for that office's approval; and
• support technical surveillance activities by carrying out such duties as participating in mission critical item and component Material Review Boards and providing insight and recommendations on engineering change proposals, requests for waivers, employee training, and the contractor's critical manufacturing processes.
In 2005, the MDA Director established a new position—Director, Mission Readiness—whose primary focus during fiscal year 2005 was to examine the Ground-Based Midcourse Defense test program. To assist in this examination, a small, highly experienced Mission Readiness Task Force was established. The goals of the task force were to establish confidence in GMD's ability to reliably hit its target, establish credibility in setting and meeting test event dates, build increasing levels of operationally realistic test procedures and scenarios, raise confidence in successful outcomes of flight missions, and conduct the next flight test as soon as practical within acceptable risk bounds. To meet these goals, the task force recommended a knowledge-based flight readiness process and flight test program. Before a test is held, the GMD program presents evidence that all components are ready for the test. Program officials explained that senior executives from all key stakeholder organizations review the evidence and make a recommendation to the MDA Director as to whether the test event should proceed. GMD's test plan has also been restructured to place more emphasis on successful ground tests prior to each flight test. According to MDA program officials, part of the evidence for proceeding from one flight test to another is success in the preceding ground and flight tests. The first flight tests have simple objectives. For example, flight test 1, conducted in December 2005, demonstrated the successful launch of the GMD interceptor and the separation of the EKV from its booster. By flight test 4, MDA expects to be ready to demonstrate that the GMD system is capable of hitting an operationally representative target. Tests that follow will become progressively more difficult.
Although MDA is taking many actions to address quality assurance problems, it has not taken any steps to ensure that all elements follow a knowledge-based acquisition strategy or to ensure that the time is available to follow such a strategy. For example, a number of activities planned for the GMD element during Block 2004 have been deferred to Block 2006. Also, developmental efforts for other elements did not progress as planned, leaving more work to be completed during Block 2006 and, perhaps, later blocks. Missile defense is one of the largest weapon system investments DOD is making. To date, around $90 billion has been spent, and over the next 6 years, DOD expects that it will need about $58 billion more to enhance the BMDS. Beyond that, more funding will be required if DOD is to reach its ultimate goal of developing a system capable of countering ballistic missile launches from any range during all phases of flight. By driving to a fielding date during Block 2004, MDA placed assets in the field faster than originally planned. However, in doing so, MDA strayed from the knowledge-based approach that allows successful developers to deliver, within budget, a product whose performance has been demonstrated. Instead, MDA fielded assets before their capability was known. In addition, the full cost of this capability is not transparent to decision makers because MDA has deferred the cost of some Block 2004 activities into the next block. The fielding of the Block 2004 capability provides an opportunity for DOD to take stock of the approach it has taken thus far on missile defense and determine whether changes are warranted for its approach to future blocks. We believe they are. The concurrent development approach dictated by the directed fielding date and enabled by considerable flexibility to lower goals and defer capability has resulted in delivering fewer assets than planned. Accountability has been applied so broadly as to mean delivering some capability within funding allocations. While we recognize that this approach did accelerate fielding, to the extent that it continues to feature concurrency as a means of acceleration, it may not be affordable given the considerable amount of capability that is yet to be developed and fielded. While the effects of this approach were perhaps most keenly felt with the Block 2004 capability, signs of its continuance can be seen in the developmental activities that were deferred during fiscal year 2005. It is possible for MDA to return to a knowledge-based approach to development while still fielding capability in blocks. To its credit, MDA instituted its own audits and is heeding the results of those audits in taking a number of steps to correct the quality assurance and testing problems encountered thus far. Yet these corrective actions have not gone far enough to put all of the BMDS elements on a knowledge-based approach to development and fielding. MDA's experience during Block 2004 shows that it may not always be possible to deliver a capability in a 2-year time frame. Clearly, a block or stepped approach to fielding a new system is preferable to attempting a single step to full capability. However, a primary tenet of a knowledge-based acquisition strategy is that a program should be event- rather than schedule-driven. This philosophy is consistent with the evolutionary acquisition approach preferred by DOD in its acquisition regulations.
It also provides a better basis for holding MDA accountable for what it can deliver within estimated resources. To better ensure the success of future MDA development efforts, we recommend that the Secretary of Defense direct the Director, MDA, to take the following three actions:
• Direct all BMDS elements to implement a knowledge-based acquisition strategy that provides for demonstrating knowledge points for major events or steps leading up to those events. These knowledge points should be consistent with those called for in DOD's acquisition regulations. For example, markers could be established that would demonstrate that programs have the knowledge to meet design review standards and are ready to hold those reviews.
• Assess whether the current 2-year block strategy is compatible with the knowledge-based development strategy recommended above. If not, the Secretary should develop event-driven time frames for future blocks. Events could represent demonstrated increases in capability, such as the addition of software upgrades, stand-alone components, or elements.
• Adopt more transparent criteria for identifying and reporting on significant changes in each element's quantities, cost, or performance, such as those that are found in DOD's acquisition regulations. Coupled with a more knowledge-based acquisition strategy, such criteria would enable MDA to be more accountable for delivering promised capability within estimated resources.
DOD's comments on our draft report are reprinted in appendix I. DOD partially concurred with our first recommendation. DOD stated that MDA has implemented a knowledge-based acquisition strategy that relies upon discrete activities to produce data that can be used to judge an element's progress. DOD noted that unlike the knowledge points discussed in DOD acquisition regulations, the knowledge points used by MDA are discrete points, not reviews. According to DOD, MDA's strategy is consistent with the principles of DOD acquisition regulations while providing MDA's Director with the flexibility to determine their applicability to the BMDS block development concept. We agree that knowledge is obtained through discrete events, such as a successful test or the completion of a cost/benefit analysis, but we define knowledge points as meaning more than discrete events. Rather, knowledge must be looked at in the aggregate. For example, the knowledge gained from a number of discrete events must be considered collectively to confirm that the design of a system is stable. It is these aggregations that we consider to be the knowledge points that should form the basis for investment decisions. For example, the GMD program's successful demonstration of various functions of the BMDS engagement may have been sufficient to continue funding of the element's development, but the discrete events were not sufficient to demonstrate that the element's design and production processes were sufficiently mature to begin production and fielding. We also note that the knowledge points discussed in DOD acquisition regulations do represent measurable, demonstrated knowledge, such as technology and design maturity, which then becomes the basis for reviews. They are not the reviews themselves, as reviews can take place regardless of the level of knowledge available. DOD also partially concurred with our recommendation that MDA assess whether the 2-year block strategy is compatible with a knowledge-based acquisition strategy.
DOD stated that MDA uses knowledge points to establish block goals and makes adjustments to those goals when necessary. DOD noted that the 2-year block strategy is compatible with this approach. We do not view the decisions made on Block 2004 as consistent with the use of knowledge points. During Block 2004, MDA allowed the GMD program to complete all phases of the acquisition cycle—technology development, product design, production, and fielding—simultaneously to enable the program to field a capability within the 2-year time frame. If MDA is to be truly knowledge-based, it must be dedicated to taking the time to gather the knowledge needed to be successful in the next acquisition phase. Because MDA did not follow this strategy in Block 2004, we still believe that MDA should assess future blocks to determine whether those blocks can be developed within the 2-year time frame without resorting to a concurrent schedule. DOD did not concur with our third recommendation to adopt more transparent criteria for identifying and reporting program changes. In responding to this recommendation, DOD stated that in 2005 MDA, as required by statute, began submitting fielding baselines to Congress and must report significant cost, schedule, or performance variances from these baselines in future reports. DOD believes that these reports and the quarterly reviews conducted by DOD staff provide an adequate level of oversight. We agree that MDA is required to report significant variances from established baselines to Congress and that MDA keeps DOD informed about the Ballistic Missile Defense program. However, given the management flexibilities accorded MDA and the large amount of resources (more than $50 billion) that DOD currently plans for missile defense, more transparent criteria are needed for better program management and oversight. DOD provided technical comments to our draft report, which we considered and incorporated as appropriate. In its technical comments, for example, DOD expressed concern that our draft report measured Block 2004 against goals established in February 2003 rather than the fielded baseline goals established in 2005. We chose the 2003 goals as a baseline because the goals were MDA's official notification to Congress of the agency's expectations for the block. In addition, goals are meant to be results that an organization strives to achieve. If goals are changed over time to more closely reflect actual performance, they lose their validity. We have included in the report a discussion of the changes that MDA made in its Block 2004 goals from 2003 through 2005 and the reasons for those changes. We are sending copies of this report to the Secretary of Defense and to the Director, MDA. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. The major contributors to this report are listed in appendix VII. During Block 2004, the Missile Defense Agency (MDA) developed and fielded three Ballistic Missile Defense System (BMDS) elements for operational use in the event of an emergency. These elements are the Aegis Ballistic Missile Defense (Aegis BMD); Ground-Based Midcourse Defense (GMD); and the Command, Control, Battle Management, and Communications (C2BMC) elements.
MDA also attempted to accelerate into Block 2004 the fielding of the Forward-Based X-Band Transportable (FBX-T) radar being developed by the Sensors Program Office. Although the agency was able to complete the radar's development, DOD did not complete negotiations with Japan, the host nation, in time to make the FBX-T operational during the block. During Block 2004, MDA also carried out development efforts for other elements that are expected to be incorporated into the BMDS during later blocks to enhance the system's capability. These elements include the Airborne Laser (ABL), Kinetic Energy Interceptor (KEI), Terminal High Altitude Area Defense (THAAD), and Space Tracking and Surveillance System (STSS). Development of the THAAD element, which is being designed to attack short- and medium-range ballistic missiles during the terminal stage of their flight, is further along than the other developmental elements, and MDA expects to make one THAAD fire unit available for operational use in fiscal year 2009. The other three developmental elements are at an early stage. The ABL element, which is to attack missiles during the boost phase of their flight, is developing a prototype to demonstrate technologies critical to the system's design. MDA expects to demonstrate the technologies no earlier than 2008, when the program will test the element's lethality against a short-range ballistic missile. Similarly, the KEI program's work during Block 2004 is focused on technology demonstration. MDA will assess KEI's progress in 2008 and decide the future of its effort to develop a mobile, multi-use system capable of intercepting ballistic missiles during the boost and midcourse phases of flight. During Block 2004, the STSS program readied demonstration satellite and sensor hardware for launch. MDA expects the STSS to provide surveillance and tracking of enemy ballistic missiles for other BMDS elements. If the two STSS satellites being launched in 2007 successfully demonstrate this function, a constellation of STSS satellites could be launched beginning in 2013. The Aegis BMD element is a sea-based missile defense system designed to defeat short- and medium-range ballistic missiles in the midcourse phase of flight. Its mission is to protect deployed U.S. forces, allies, and friends from such attacks, and to employ its shipboard radar as a forward-deployed Ballistic Missile Defense System sensor to support intercontinental ballistic missile (ICBM) engagements. The Aegis BMD element builds upon the existing capabilities of Aegis-equipped Navy cruisers and destroyers. Planned hardware and software upgrades to these ships will enable them to carry out the missile defense mission in addition to their current role of protecting U.S. Navy ships from air, surface, and subsurface threats. The program is also developing the Standard Missile-3 (SM-3)—the system's "bullet"—which is designed to destroy enemy warheads through hit-to-kill collisions above the atmosphere. The SM-3 is composed of a kinetic warhead (kill vehicle) mounted atop a three-stage booster. The program fielded Block 2004 assets mostly on schedule. Nine Block I SM-3 missiles were ready for operational use by December 2005, as planned. In addition, two Aegis BMD cruisers received system upgrades, making them capable of launching missiles to engage ballistic missile targets. Ten Aegis BMD destroyers were equipped with long-range surveillance and tracking software during Block 2004.
Aegis BMD conducted the most realistic tests of all the BMDS elements, but further tests are needed to fully characterize the element's missile defense performance. The program has successfully tested Aegis BMD's engagement capability in six intercept attempts since 1999 using variants of the SM-3 missile. One of these successful intercepts, Flight Test Mission (FTM) 04-1, was conducted in fiscal year 2005. Operational test officials reported that the test incorporated many operational characteristics. For example, the warfighter had no advance knowledge of the target launch time, the target was representative of a real-world threat, and the fielded missile configuration was used. However, the officials said that in early tests, including FTM 04-1, ship position with respect to the target's trajectory was controlled to increase the probability of intercept. The officials are recommending that in future tests Aegis BMD's tactical mission planner dictate the ship's position and the sectors that its radar searches, rather than having the program script the ship's location and its radar's search sectors. Additional tests are also needed to demonstrate that the program has resolved problems that limit the SM-3 missile's ability to divert to its target. Although the current configuration is adequate for the current threat, the missile will require more divert capability if it is to hit more complex targets and targets with more challenging trajectories than were seen in early tests. For example, the missile's Solid Divert and Attitude Control System (SDACS) needs to operate in a pulse mode, rather than its current sustain mode, to increase the missile's ability to maneuver toward its target. Performance problems with the SDACS's pulse mode of operation were first noticed in a June 2003 flight test, Flight Mission (FM)-5, and have remained a concern to the program. Program officials modified the SDACS's design in fiscal year 2005, and they believe that the root cause of the problem is understood. However, ground and flight tests, planned for fiscal year 2006, are needed to verify that the SDACS will perform as designed. If the tests are successful, the pulsed SDACS could be incorporated into the missile in fiscal year 2007. Although the earliest fielded missiles will not be capable of pulse mode operation, which will reduce their divert capability, program officials believe that these missiles will provide a credible defense against a large population of the threat. A test (FTM 04-2) successfully conducted in November 2005 against a "separating" target—a target whose warhead separates from its booster rocket—also showed that the SM-3 has some capability against a more advanced target than the nonseparating targets used in prior tests. The program has also flight-tested Aegis BMD's long-range surveillance and tracking capability, but further verification of fielded system upgrades is needed. In fiscal year 2005, the program successfully used the system upgrade (Ballistic Missile Defense 3.0E) to track live ICBM targets of opportunity in two separate events. However, because GMD did not participate in these tests, track data developed from the live target were not used to formulate a task plan for a GMD interceptor, as would be required in an actual defensive mission. Although track data have been passed to the fire control unit in a separate event, this has not been demonstrated in real time.
MDA expected to test Aegis BMD's long-range surveillance and tracking capability in several fiscal year 2005 flight tests, but it was unable to do so, mostly because of delays in the GMD test program. Aegis BMD was unable to participate in Integrated Flight Test (IFT)-13C because of weather conditions and in IFT-14 because of fleet scheduling conflicts. Other GMD tests were put on hold and later folded into a new test schedule to begin in fiscal year 2006. MDA has not yet rescheduled a GMD flight test that uses Aegis BMD in its long-range surveillance and tracking role. The GMD element is a missile defense system being developed to protect the United States against ICBM attacks launched from Northeast Asia and the Middle East. The GMD element relies on a broad array of components, including (1) space- and ground-based sensors to provide early warning of missile launches, (2) ground- and sea-based radars to track and identify threatening objects, (3) ground-based interceptors to destroy enemy missiles through hit-to-kill impacts above the atmosphere, and (4) fire control and communications nodes for battle management and execution of the GMD mission. Figure 1 illustrates the various GMD components, which are situated at several locations within and outside the United States. GMD's progress toward meeting Block 2004 goals was less than expected. Silos and other construction at GMD facilities were completed on schedule, but the program was unable to meet its fielding goals for ground-based interceptors. Most of the GMD radars are fielded and could be used for defensive operations if needed. However, some radar upgrades were delayed, and none of the radars have been tested in integrated flight tests. In addition, an operational flight test and other key tests needed to characterize GMD's performance were delayed into fiscal year 2006. The infrastructure for the missile defense complex is complete, but MDA was unable to deliver almost half of the interceptors initially planned for the Block 2004 inventory. MDA completed, on schedule, construction of all facilities needed to place the GMD system on alert, including the construction of the first missile field at the missile defense complex at Fort Greely, Alaska. However, the GMD program emplaced only 10 of the 20 interceptors originally planned for Block 2004. In fiscal year 2004, the program designated 2 of the 20 interceptors as test assets after an explosion at a plant producing motors for the interceptor's booster caused the interceptor's delivery schedule to slip. In fiscal year 2005, the program diverted 4 more interceptors to the test program in response to an MDA task force recommendation for a revised test plan. According to GMD officials, delivery of five of the six test assets and of the remaining four operational missiles was delayed beyond December 2005. MDA has two radars ready for operation: Cobra Dane and the Beale upgraded early warning radar. However, tests have identified a Cobra Dane shortcoming, and neither radar's capability has been verified in system-level flight tests. The Cobra Dane radar has been ready for limited defensive operations since September 2004. It has participated in ground tests and successfully tracked several targets of opportunity. Because the radar's location prevents it from participating in integrated flight tests, an air-launched target was used in a September 2005 flight test (FT 04-5).
The test was designed to assess the radar's ability to transmit track data, in real time, to the missile defense fire control system. Cobra Dane performed as expected in these test events, but officials in the Office of the Director, Operational Test and Evaluation (DOT&E) are concerned that the radar's software, as currently written, could cause the GMD element to waste inventory. The Beale radar is also ready to conduct the missile defense mission, but software deficiencies and lack of testing are still a concern. While Beale radar hardware and communications upgrades are complete, software deficiencies caused software upgrades planned for Block 2004 to fall slightly behind schedule. The program planned to resolve the deficiencies, which could cause some degradation in the radar's performance, in early 2006. However, because the radar has successfully tracked several targets of opportunity, officials consider Beale ready to perform its basic missile defense mission should the BMDS be placed on alert before the deficiencies are resolved. A test to certify all radar upgrades is currently scheduled for fiscal year 2006. In early fiscal year 2007, MDA also plans to test Beale's operational capability as the fire control radar in an intercept attempt. In this test, for the first time, Beale will track a live target and provide track data to the GMD fire control component, which will use the data to develop a weapon system task plan. Full functionality of two additional early warning radars was delayed into later blocks. The Fylingdales upgraded early warning radar was delayed slightly because funds needed to complete it were used to cover some of the cost of additional flight tests added to the GMD program. Its missile defense capability will be available in early Block 2006, after a distributed ground test scheduled for the second quarter of fiscal year 2006. Full radar functionality, which will allow the radar to perform both its missile defense mission and its legacy Air Force mission, is expected in October 2006. Likewise, deployment of the Thule upgraded early warning radar, which MDA had planned to upgrade incrementally, was postponed to Block 2008 so that the radar could be fully upgraded before taking on its missile defense mission. The Sea-Based X-Band radar (SBX) is also slightly behind schedule. Additional funding needs for new flight tests prevented the GMD program from integrating the radar into the BMDS by December 31, 2005, as planned. The radar is able to track targets but will not be able to pass track data to the fire control center until it is integrated with the GMD system during the distributed ground test scheduled for April 2006. The radar is expected to be transported to its home port at Adak, Alaska, by the third quarter of calendar year 2006, where it will be available in the event of an emergency. However, MDA does not plan to verify the performance of the radar in a system-level flight test until late in 2007. The GMD program was unable to demonstrate the Block 2004 GMD system in flight tests. The program attempted two integrated flight tests in fiscal year 2005: IFT-13C in December 2004 and IFT-14 in February 2005. In both tests, interceptors failed to launch from their silos. In IFT-13C, a timing problem with the interceptor's flight computer caused the interceptor to abort its launch. In IFT-14, the first intercept attempt since 2002, the interceptor was unable to lift off because the arms inside the silo failed to fully retract and lock out of the way.
Program officials traced the root cause of this failure to poor quality control procedures. In response to these test failures, MDA delayed upcoming plans for future tests and chartered the Mission Readiness Task Force to review the program and propose changes. The task force found that MDA’s problems were primarily linked to inadequate quality assurance processes. An independent review team attributed these problems to the urgency of the fielding schedule, which drove decision making and program planning. The task force provided guidance for improving the test program by significantly restructuring the focus of upcoming test events. MDA adopted the recommended test strategy at an additional cost of $115 million. Although early tests in the restructured plan have simple objectives, the tests get progressively more difficult, and DOT&E is concerned that MDA cannot meet its schedule to conduct the first four tests between November 2005 and November 2006. The first flight test (FT-1) was successfully conducted in December 2005, 1 month later than planned. The objective of the test was not to intercept a live target, but to verify that an interceptor, representative of the configuration being fielded, could be successfully launched and to evaluate its booster’s delivery performance. The next intercept attempt, FT-4, is not scheduled until late calendar year 2006. One consequence of restructuring the GMD test program was MDA’s inability to fulfill the statutory mandate that required DOD to conduct an operationally realistic test of the BMDS by October 1, 2005. MDA had planned to conduct this test in the third quarter of fiscal year 2005. However, after the two flight test failures, the task force recommended that MDA spend additional time addressing mission readiness before attempting an operational test of the BMD system. FT-4, scheduled for November 2006, is the first test that has the potential to fulfill the mandated objectives. FT-4 is planned as an intercept attempt using the Beale radar as the fire control radar. This will be the GMD program’s first intercept attempt to use a nonsurrogate fire control radar. While the GMD program has proved the concept of destroying ICBMs during the midcourse of their flight, the program has not proved GMD’s design will deliver the performance desired. The GMD program, the centerpiece of the BMDS Block 2004 defensive capability, has demonstrated its ability to intercept target warheads in flight tests since 1999. The program has conducted five successful intercept attempts, the last one in 2002. While the program maintains that each piece of the engagement sequence has been demonstrated by flight and ground tests, the program has been unable to verify that the integrated system, using production-representative components, will work in an end-to-end operation. Until further testing is done, MDA will not know for sure that the integrated system using operational interceptors and fire control radars will perform as expected, or that technical problems with the kill vehicle and its booster have been fixed. Quality control weaknesses also raise concerns about the performance of GMD interceptors. Quality control procedures may not have been rigorous enough to ensure that unreliable parts, or parts that were inappropriate for space applications, would be removed from the manufacturing process. For example, a leak in an attitude control system regulator was traced to unauthorized rework. 
Although production has slowed as the program introduces initiatives to strengthen quality controls, interceptors are still being emplaced in silos before all initiatives are in place. Additionally, the first nine interceptors emplaced for operational use—seven at Fort Greely and two at Vandenberg Air Force Base—could include questionable parts that were not detected during the interceptor's acceptance tests. Program officials told us that they are recommending that such parts be replaced in 2007, when the interceptors are scheduled to be upgraded. Making the replacements will require that the interceptors be removed from their silos. The C2BMC element is being developed as the integrating and controlling entity of the BMDS. Leveraging existing infrastructure, it is initially designed to provide connectivity between the various BMDS components and in later blocks will manage their operations as part of an integrated, layered missile defense system. Over time, C2BMC will not only provide planning tools to assist the command structure in formulating defensive actions but will also generate detailed instructions for executing various missile defense functions, such as tracking enemy missiles, discriminating the warhead from decoys and associated objects, and directing the launch of interceptors. It will also manage the exchange and dissemination of information necessary for carrying out the missile defense mission. The Block 2004 C2BMC element provides situational awareness by monitoring the operational status of each BMDS component, and it displays threat information such as missile trajectories and impact points. When the FBX-T becomes operational, C2BMC will also provide sensor control, sensor tasking, and sensor monitoring of the radar and forward the data to GMD. The incorporation of battle management capabilities into the C2BMC element begins with Block 2006. In the 2006-2007 time frame, the element is expected to track a ballistic missile threat throughout its entire trajectory and select the appropriate element to engage the threat. For example, the Block 2006 C2BMC configuration would be able to generate a single, precise track from multiple radars and transmit it to the other elements. This allows elements to launch interceptors earlier, providing more opportunity to engage incoming ballistic missiles. Block 2006 is also expected to enhance C2BMC's communications with each BMDS component. C2BMC program officials will work to establish communications with all elements of the BMDS, overcome limitations of legacy satellite communications protocols, and establish redundant communications links to enhance robustness. Such upgrades will improve operational availability and situational awareness. The C2BMC team executed all of its planned fiscal year 2005 activities as scheduled and nearly all of the activities needed to complete the Block 2004 capability. Program officials completed software development, testing, and integration activities and enhanced the system's robustness. Additional suites were also installed at command centers to provide the warfighter with the capability to plan and monitor the missile defense mission. A number of activities in preparation for Block 2006 were also completed during fiscal year 2005. For example, design and planning requirements for Block 2006 software upgrades (Spirals 6.1 and 6.2), along with a Block 2006 system requirements review, were completed in June and July 2005, respectively.
During fiscal year 2005, program officials completed the development of the final two upgrades (Spirals 4.4 and 4.5) to Block 2004 C2BMC element software. The first upgrade (Spiral 4.4) added the ability to display GMD assets on users' computer monitors, improved the user's ability to call up BMDS information, and reduced the time to transfer force-level planning files. The second upgrade (Spiral 4.5) gave C2BMC the capability to receive, distribute, and display information developed by three new sensors—the Forward-Based X-Band and Sea-Based X-Band radars and the Fylingdales upgraded early warning radar. It also improved the consistency between the data displayed by the C2BMC and the GMD fire control monitor, which also receives information directly from various sensors. The program office installed a suite at the U.S. Pacific Command during fiscal year 2005, and it is waiting on policy agreements to turn on a Web browser—providing summary screens of the unfolding battle—in the United Kingdom. Additionally, second suites were added at the U.S. Strategic Command (STRATCOM) and the U.S. Northern Command (NORTHCOM) to allow for concurrent operations and system upgrades as well as to make the C2BMC a more robust system. The C2BMC program also completed most of the activities needed to verify its Block 2004 capability. In August 2005, the program completed testing that proved the readiness of Spiral 4.4 software for operations. The program also participated in demonstrations with other elements to practice transitioning the BMDS to alert. By the end of Block 2004, the final software upgrade (Spiral 4.5) was tested to verify that the C2BMC could interface with each BMDS element and that the improved software was ready for operational use. However, further testing is needed to verify that Spiral 4.5 can provide planning and situational awareness at U.S. Northern Command, U.S. Strategic Command, U.S. Pacific Command, and the Department of Defense's Cheyenne Mountain Operations Center. Program officials told us that they expect to complete the verification tests by the end of March 2006. The C2BMC program successfully demonstrated its ability to maintain situational awareness during several ground- and flight-testing activities. Program officials were able to monitor the operational status of BMDS components and display threat information, such as missile trajectories and impact points. However, during tests, program officials discovered three primary risk items that have the potential to affect C2BMC's performance. Table 7 identifies these risks, the possible impact on program performance, and the actions being taken to address each. The THAAD element is being developed as a mobile, ground-based missile defense system to protect forward-deployed military forces, population centers, and civilian assets from short- and medium-range ballistic missile attacks. A THAAD unit consists of a THAAD fire control component for controlling and executing a defensive mission, truck-mounted launchers, ground-based radars, interceptor missiles, and ground support equipment. The THAAD missile is composed of a kill vehicle mounted atop a single-stage booster and is designed to destroy enemy warheads through hit-to-kill collisions. The THAAD program is not expected to deliver an initial capability until 2009, when a fire unit and 24 missiles will be handed over to the Army for concurrent test and operation.
Fiscal year 2005 activities focused on developing and ground-testing THAAD components in preparation for the initiation of THAAD’s flight test program. While several of these preparatory activities were completed on schedule, others were deferred, causing a further delay in the flight test program. According to program officials, unanticipated missile integration issues caused the delay. During fiscal year 2005, the THAAD program accomplished several key activities in preparation for flight tests, but flight tests began later in the block than planned. Program officials successfully integrated software upgrades into the launcher and radar and completed missile qualification tests that lead to flight readiness certification. However, a flight test delay that we reported last year has lengthened. Two explosions in the summer of 2003 at a subcontractor’s propellant mixing facility delayed the start of flight testing from December 2004 to March 2005 and led to revisions of the program’s flight test plan. However, because of unanticipated integration issues, the first flight test, which validated missile performance in a high endoatmospheric flight environment, was further delayed from March to November 2005. The delay occurred because program officials found problems with THAAD’s Laser Initiated Ordnance System and its telemetry system during ground tests and assembly operations. The discovery of these problems delayed other ground tests and the assembly of the THAAD missile being manufactured for the first THAAD flight test. Tests identified two problems in the Laser Initiated Ordnance System. A design issue caused one subcomponent to fail during testing, delaying the Laser Initiated Ordnance System’s qualification test. Also, during assembly operations, the program identified a change in the Laser Initiated Ordnance System’s power output that required the program to improve the design robustness of a fiber optic cable assembly. Additional qualification testing was then required to obtain range safety approval. Both of these problems, which were discovered during ground and qualification tests, were solved, but not before they affected the flight test schedule. The program also identified a problem with the missile’s telemetry system, which transmits flight test data to ground stations for observation during testing. During integration testing, transmission errors occurred between the missile’s telemetry system and the ground test station. Program officials told us that a solution was found that eliminated transmission errors in the first flight test. However, the telemetry system is not providing as much information as wanted in one mode of operation. According to the officials, this does not present a problem until flight test 3, which is scheduled for July 2006, and a solution is expected by that time. The THAAD program also had to address a number of range safety requirements prior to the initiation of flight testing. In September, the officials told us that they had addressed all requirements related to the first flight test, which did not involve an intercept attempt, and the majority of the requirements related to the second flight test. Officials do not expect any range safety requirements to delay future flight tests. THAAD program officials plan to conduct 14 more flight tests between April 2006 and December 2008. 
To complete these tests prior to handing the first THAAD fire unit over to the Army for concurrent operation and tests in 2009, the program will have to successfully conduct as many as 5 flight tests in each fiscal year. Program officials told us that if all tests are successful, they can meet this schedule. However, a failure will cause delays. THAAD's performance and effectiveness remain uncertain until the program conducts flight tests with updated hardware and software. Data from flight testing are needed to anchor simulations of THAAD's performance and to more confidently predict the element's effectiveness. The ABL element is a missile defense system designed to shoot down enemy missiles during the boost phase of flight, the period after launch when the missile's rocket motors are thrusting. The concept involves the coordinated operation of a high-energy laser and a beam control system that focuses the laser on a target missile. By rupturing the missile's fuel or oxidizer tank, the laser causes the missile to lose thrust or flight control, and the missile cannot reach its intended target. The ABL element consists of three major components integrated onboard a highly modified Boeing 747 aircraft—a high-energy chemical oxygen-iodine laser; a beam control/fire control component to focus the laser's energy on a targeted spot of the enemy missile; and a battle management/command control, computers, communications, and intelligence component to plan and execute the element's defensive engagements. In addition, the element includes ground support infrastructure for storing, mixing, and handling chemicals used in the laser. Consistent with its fiscal year 2004 restructuring effort, the ABL program continued to focus on near-term milestones. By accomplishing its near-term goals, the program expects to increase confidence in its longer-term program objectives of demonstrating ABL's lethality against a short-range ballistic missile target. During fiscal year 2005, the program focused its efforts on testing ABL's Beam Control/Fire Control and its high-energy laser. Nearly all activities related to these milestones were completed on schedule. Program officials noted that the program's progress over the past 18 months led Congress to appropriate an additional $7 million for ABL's fiscal year 2006 budget. Both First Flight and First Light—the first major milestones of the restructured program—were achieved during the first quarter of fiscal year 2005. First Flight was the first of a series of planned flight tests with the Beam Control/Fire Control segment. The test demonstrated that all necessary design, safety, and verification activities to ensure flight worthiness had been completed. It also began the process of expanding the flight envelope—types and combinations of flight conditions—in which ABL can operate. The program also completed scheduled activities associated with a series of Beam Control/Fire Control low-power passive flight tests. The program is currently integrating the full Beam Control/Fire Control with the Beacon Illuminator Laser, which helps mitigate the effects of the atmosphere on the laser beam's quality, and with the Tracking Illuminator Laser, which helps focus the laser beam on its target. Once integration is complete, the program plans to conduct a series of active flight tests in summer 2006.
First Light, which integrated six individual laser modules to demonstrate that the combined modules can produce a single beam of laser energy, was completed in November 2004. Further tests to extend the duration of the laser's operation were scheduled for completion in September 2005. However, the tests were not completed until fiscal year 2006. The program plans to conduct its lethality demonstration—a flight test in which the ABL aircraft will attempt to shoot down a short-range ballistic missile—no earlier than 2008. If this test is successful, MDA believes it will prove the concept of using directed energy for missile defense. As previously noted, the ABL's fiscal year 2005 test program was centered on its Beam Control/Fire Control passive flight test series and its high-energy laser ground tests. The flight test series included 28 tests that enabled the program to demonstrate the performance of the aircraft's turret, laser optics, and initial integration of Beam Control/Fire Control software; verify the structural performance of the Active Ranger System—a system that helps ABL predict a missile's launch point; complete flights under various combinations of flight conditions; collect data critical for readying the aircraft for laser installation; and demonstrate the performance of Link-16—a communications component that ABL uses to interact with other elements of the BMDS. The demonstration of First Light proved that individual laser modules, which have the fit and function needed to be placed on the aircraft, could be successfully integrated to produce a single laser beam for a fraction of a second. The program planned a series of tests during fiscal year 2005 that would gradually increase the length and power of the laser's operation. However, problems encountered during testing limited the duration of lasing to less than 1 second and affected the program's ability to determine the laser's maximum power output. Program officials told us that two of the laser's individual laser modules experienced alignment issues that prompted the system to shut down prior to completing extended lase times. The alignment problem was rectified and the program was able to conduct additional tests at longer durations. Over the fiscal year, the program operated the high-energy laser 51 times for a total of 23.5 seconds, with the longest duration being 5.25 seconds. On December 6, 2005, the program conducted a longer-duration test of the high-energy laser and was able to sustain the beam for more than 10 seconds. The ABL also produced approximately 83 percent of its design power. Although the ABL has not reached 100 percent of its design power, program officials told us that the 83 percent power is sufficient to achieve 95 percent of maximum lethal range against all classes of ballistic missiles. Prior to the longer-duration test, program officials told us that the laser would not be installed on the aircraft until it produced 100 percent of its specified power. However, on December 9, 2005, the Director, MDA, gave the program permission to disassemble the System Integration Laboratory and begin installation of the laser on the aircraft. Program officials said that the program will continue to test the laser when the aircraft is on the ground in an effort to demonstrate that the laser can produce 100 percent of its design power. The program continues to characterize jitter as a risk to the ABL system's overall performance.
Jitter is the motion of the high-energy laser beam caused by vibration unique to the aircraft; unless the beam is controlled and stabilized, this motion degrades the laser's aim point. Jitter control is crucial to the operation of the laser because the laser beam must be stable enough to focus sufficient energy on a fixed spot of the target missile to rupture its fuel or oxidizer tank. Program officials told us that they will not be fully confident that jitter can be controlled until it is demonstrated in an operational environment during the lethality demonstration, but data on the two major components that cause jitter were collected in ABL's System Integration Laboratory. These data were fed into simulations and models that help the program understand the effects of jitter and how components can be designed to reduce jitter. According to program officials, data obtained during recent laser and flight tests increased the program's understanding of the phenomenon. The KEI element is being designed as a mobile, multi-use, land-based system intended to destroy medium-range, intermediate-range, and intercontinental ballistic missiles during the boost phase and all portions of the midcourse phase of flight. MDA originally planned to develop KEI to defeat threat missiles during the boost phase of their flight. However, in 2005 MDA directed the KEI program to incorporate the capability to engage missiles during both the ascent and the descent portions of the midcourse phase of flight, as well as the boost phase. The KEI program is currently focused on developing a mobile, land-based system that, according to program officials, is expected to be available in the Block 2014 time frame. The land-based system will be a deployable unit consisting of a fire control and communications unit, mobile launchers, and interceptors. The KEI element has no sensor component, such as radars, for detecting and tracking boosting missiles. Instead, it will rely on external ballistic missile defense system sensors, such as space-based infrared sensors and forward-deployed radars. A sea-based capability is planned in subsequent blocks. Preliminary work will also begin on a space-based interceptor in fiscal year 2008. If MDA should decide to go forward with a space-based interceptor, it would not be deployed until the next decade. Although the KEI program completed many planned activities on schedule, the program continued to progress more slowly than anticipated. KEI officials were forced to replan several activities and reduce the scope of others after both Congress and MDA reduced program funding. The activities completed during the fiscal year included constructing a shelter to house prototype fire control and communications equipment and conducting several demonstrations. According to program officials, the demonstrations showed the prototype equipment could collect data from overhead nonimaging infrared satellites in a time frame that would make a boost phase intercept possible. In addition, the program completed studies that allowed it to optimize the design of communications equipment that uplinks information from KEI's fire control and communications component to its interceptor so that there is a decreased likelihood that communications will be jammed. The studies also allowed the program to optimize the equipment's design to operate in a nuclear environment. Other activities scheduled to be completed during fiscal year 2004 were deferred into fiscal year 2005 and have now been further delayed.
For example, the System Requirements Review, which documents mission objectives, identifies critical components, and establishes a program plan, was delayed from fiscal year 2004 to 2005 and then to fiscal year 2007. Program officials noted that funding shortfalls also forced the program to eliminate some of its initial risk reduction activities. For instance, the program originally planned to develop a two-color seeker, which would aid in plume-to-hardbody handover. However, because of a reduced program budget, program officials now plan to take advantage of the Aegis Ballistic Missile Defense program's development of a two-color seeker and to work on a KEI-specific two-color seeker later in the program. In fiscal year 2005, the KEI program office planned to continue work on its Near Field Infrared Experiment (NFIRE), an experimental satellite that will collect infrared imagery of boosting intercontinental ballistic missiles. In 2004, the KEI program office signed a memorandum of agreement and transitioned day-to-day management and execution of NFIRE to the Space Tracking and Surveillance System program. The STSS Program Office has experience with satellite development and can leverage its resources to manage the experiment. STSS expects to launch NFIRE in September 2006, the launch date established by the KEI program office. At this early stage of element development, data are not available to evaluate element performance. However, the program office identified areas of high risk that could affect performance. The interceptor's booster motors, which demand high performance for KEI engagements, and the algorithm that enables the kill vehicle to identify a target missile's body from its luminous exhaust plume are high-risk technologies. Initially, program officials were focused on designing KEI and maturing these technologies concurrently. However, the program has adopted an approach that lets it proceed with less risk. KEI is now focused on maturing the high-risk technologies before integrating them into the land-based capability. In 2008, KEI is scheduled to participate in its first booster flight test. According to program officials, at that time a decision will be made on the program's future. In spite of program uncertainties, program officials are working to extend the prime contract. Currently, KEI's contract, which was awarded in December 2003, has a term that extends through January 2012 (98 months). Program officials are now working to extend this period until September 2015 (143 months). MDA is developing STSS as a space-based sensor element of the BMDS. It is currently working on the first increment of STSS, which is focused on the preparation and launch of two technology demonstration satellites partially built under the former Space-Based Infrared System-Low (SBIRS-Low) program. Each satellite making up the program's "space segment" includes a space vehicle and a payload of two infrared sensors—an acquisition sensor to watch for the bright plumes (hot exhaust gas) of boosting missiles, and a tracking sensor to follow the missile through midcourse and reentry. The STSS element also has supporting ground infrastructure, known as the ground segment, which includes a ground station and mission software to support the processing and communication of data from the satellites to the BMDS. MDA plans to launch these satellites in 2007, in tandem, in an effort to assess how well they perform surveillance and tracking functions.
Using data collected by the satellites, MDA will determine what capabilities are needed and what goals should be set for the next generation of STSS satellites. The first operational constellation of satellites is expected to be available in the 2012 time frame. The STSS program accomplished many of the activities planned for completion in fiscal year 2005. Both spacecraft buses have been integrated and tested, the first of two ground software builds has successfully completed acceptance testing, and the second software build is progressing on schedule. However, one key activity, delivering the payload for the first satellite, was delayed because of problems in testing the payload. By contract, the payload for the first satellite was supposed to be delivered in January 2005, but delivery has been delayed twice, most recently until early 2006. The delays are affecting scheduled work on the second satellite's payload, potentially delaying the satellites' launch date. During our last assessment of STSS, the program office expected the satellites to be launched in February 2007, earlier than the contract date of July 2007. However, the more recent problems and delays may result in the launch being later than February 2007, but still before the required launch date of July 2007. The program office is so confident that it will launch on time that it has placed an order through the National Aeronautics and Space Administration (NASA) for the Delta II launch vehicle, with a requested launch date during the second quarter of fiscal year 2007. The first satellite payload is being delayed because problems occurred during thermal vacuum testing. Hardware issues emerged when the payload was tested in a vacuum and at cold temperatures for the first time. Although the significance of the problems is not yet clear, repairs will have to be made. The program office and contractors plan to make the repairs and then decide if further testing is needed to ensure that all problems have been corrected. Several options for testing the payload are being considered. They include (1) retesting the payload in the thermal vacuum chamber without making repairs; (2) taking the payload out of the chamber, completing the repairs, and then retesting; (3) taking the payload out of the chamber and conducting tests at ambient (room) temperatures; or (4) shipping the payload as is to the prime contractor for retest at the contractor's facility. However, if the program decides to return the payload to the prime contractor's facility, the contractor would not be able to test as precisely as in the vacuum chamber, making it challenging to isolate problems. If further testing is completed before returning the payload to the prime contractor, several weeks will be added to the schedule because the payload will have to be removed from the vacuum chamber, disassembled, repaired, reassembled, and placed back in the chamber. The chamber will then have to be returned to the right vacuum and temperature conditions and the payload retested. The program office is having an independent team review the situation with the first payload to determine how much more testing should be conducted. The program manager does not believe any of the thermal vacuum testing problems are mission assurance or performance issues. In addition to the thermal vacuum issues, integration issues have been discovered as the subcontractor continues to test the payload at successively higher levels of integration.
The payload ambient-level testing took nearly 3 months longer than expected to complete. This was due to the large number of software and hardware integration issues discovered when the flight hardware and software were tested together for the first time. Most software issues are due to the configuration differences between the pathfinder hardware that served as the test bed for the payload software and the actual flight hardware. The quality and workmanship problems with the payload subcontractor have persisted. These problems have been ongoing for the last 2 years and have contributed to a schedule delay in delivering the payload. According to program officials, the quality and workmanship problems are the result of the subcontractor's lack of experience. Examples of the quality and workmanship issues include the initial failure of the second satellite's track sensor during vibration testing. The failure occurred because fasteners were not tightened according to specifications and because payload cables were poorly manufactured by a third-tier vendor. Although neither of these issues resulted in damage to the flight hardware, both have taken substantial management attention and considerable effort to correct. In response to the quality and workmanship issues, quality control at the subcontractor's site has undergone significant restructuring. In addition, the prime contractor's on-site quality organization at the subcontractor's site stepped up its inspection and supervision of all processes and is providing mentoring. A reeducation effort was also undertaken to ensure that all personnel on the program knew and understood the program instructions. The program office expects that the quality improvements the payload subcontractor has implemented will reduce the probability of additional quality-related issues in the future. According to the program office, the integration issues that have been discovered are not unusual for a first-time integration effort, but are taking more time than planned to work through. However, the second satellite's hardware is consistently moving through integration and testing much more efficiently than the first satellite's hardware. Prime contractors typically receive most of the funds that MDA requests from Congress each fiscal year to develop elements of the BMDS. To determine if it is receiving a dollar of value for each dollar it spends, each BMDS program office requires its prime contractor to provide monthly reports detailing cost and schedule performance. In these reports, which are known as Contract Performance Reports (CPR), the prime contractor makes comparisons that inform the program as to whether the contractor is completing work at the cost budgeted and whether the work scheduled is being completed on time. If the contractor does not spend all funds budgeted or completes more work than planned, the CPR shows positive cost and/or schedule variances. Similarly, if the contractor spends more than planned or cannot complete all of the work scheduled, the CPR shows negative cost and/or schedule variances. Using data from the CPR, a program manager can assess trends in cost and schedule performance, information that is useful because trends tend to persist. Studies have shown that once a contract is 15 percent complete, performance metrics are indicative of the contract's final outcome. We used CPR data to assess the fiscal year 2005 cost and schedule performance of prime contractors for seven BMDS elements.
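The variance and projection logic described above is standard earned value management arithmetic. The short Python sketch below is a minimal illustration of that arithmetic with invented figures (none of the dollar amounts are taken from an actual CPR); it derives cost and schedule variances from the three basic CPR quantities and projects a range of likely costs at completion on the assumption that past performance persists, the same assumption behind the contract-level estimates discussed in this appendix.

```python
# Earned value management (EVM) arithmetic used to interpret CPR data.
# All dollar figures below are hypothetical and for illustration only.
# BCWS = budgeted cost of work scheduled; BCWP = budgeted cost of work
# performed (earned value); ACWP = actual cost of work performed;
# BAC = budget at completion. Figures are in millions of dollars.

def variances(bcws, bcwp, acwp):
    """Return (cost variance, schedule variance).
    Negative values indicate a cost overrun or work behind schedule."""
    return bcwp - acwp, bcwp - bcws

def eac_range(bac, bcws, bcwp, acwp):
    """Project a (low, high) range for the estimate at completion (EAC),
    assuming past performance persists: one bound scales the remaining work
    by the cost performance index (CPI) alone, the other by CPI times the
    schedule performance index (SPI)."""
    cpi = bcwp / acwp
    spi = bcwp / bcws
    remaining_work = bac - bcwp
    estimates = (acwp + remaining_work / cpi,
                 acwp + remaining_work / (cpi * spi))
    return min(estimates), max(estimates)

# Hypothetical contract: $1,000M budget, 20 percent of the work scheduled to date.
bac, bcws, bcwp, acwp = 1000.0, 200.0, 180.0, 220.0
cv, sv = variances(bcws, bcwp, acwp)
low, high = eac_range(bac, bcws, bcwp, acwp)
print(f"cost variance: {cv:+.1f}M  schedule variance: {sv:+.1f}M")
print(f"projected cost at completion: {low:.0f}M to {high:.0f}M (budget: {bac:.0f}M)")
```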
When possible, we also predicted the likely cost of each prime contract at completion. Our predictions of final contract cost are based on the assumption that the contractor will continue to perform in the future as it has in the past. An assessment of each element is provided in this appendix. The Aegis BMD program has a prime contract for each of its two major components—the Aegis BMD Weapon System and the Standard Missile-3. During fiscal year 2005, both contractors completed most of their planned activities on time and at or less than budgeted costs. Based on the weapon system contractor’s performance through fiscal year 2005, the contractor could underrun the budgeted cost of the contract by about $7.1 million to $12.5 million, while the SM-3 contractor could underrun its budgeted costs for the contract by about $11.5 million to $17.8 million. Weapon System CPRs show that the contractor underran its budgeted costs for the prime contract and was able to complete all of its planned work on schedule. The weapon system contract’s cumulative cost and schedule variances—variances that take into account all work completed on the contract since its award—are highlighted in figure 2. According to program officials, the minimal schedule variance during the fiscal year was driven by ship availability and changing test event schedules. Additionally, the contractor incurred a $6 million positive cost variance as a result of underruns for Block 2004 and Block 2006 efforts. In September 2005, work tasks were replanned for the Block 2004 completion effort to reflect funding impacts. The prime contractor for the SM-3 missile component performed within its budgeted costs, but was slightly behind schedule. By the end of fiscal year 2005, the contractor reported a positive cost variance of $10.9 million and a negative schedule variance of $9.6 million. Figure 3 illustrates the cumulative cost and schedule performance for the SM-3 prime contractor. Our analysis of CPR data shows that the contractor spent less than budgeted because it did not need all staff originally planned to conduct test events; these events were delayed because of ship availability and fleet priorities. Program officials told us that the tests were rescheduled when the contractor was unable to meet the planned test dates. The funds budgeted for these tests will be used to conduct the tests at the rescheduled dates. The delayed test events also caused the contractor to fall slightly behind schedule. In addition, the contractor could not complete some planned work because hardware deliveries were late, delaying related integration activities. Despite these delays, the program asserts that the contractor has met most of its contractual delivery dates thus far, and the program expects the contractor to meet future delivery obligations. Our analysis of ABL CPRs indicates that the prime contractor’s cost and schedule performance declined during fiscal year 2005 despite the program’s restructuring efforts in the spring of 2004. The program restructured the contract to give the contractor a more realistic budget and schedule to do work that is needed to get ready for and complete a lethality demonstration of the ABL element. Despite these adjustments, the contractor was unable to complete fiscal year 2005 activities within budget or on schedule. As illustrated in figure 4, the ABL contractor incurred a negative cost variance of $23.1 million and a negative schedule variance of $23.6 million during fiscal year 2005. 
The program planned to complete two major activities during fiscal year 2005—passive flight tests of ABL's Beam Control/Fire Control component and duration tests of the system's high-energy laser. However, technical challenges associated with these activities increased costs and delayed scheduled work. Changes had to be made to Beam Control/Fire Control software, and additional work was needed on the Beam Control/Fire Control Hard Wire Abort System to support test activities. In addition, activities were reprioritized throughout the program. Furthermore, the contractor informed the ABL program office that negative cost variances caused by technical problems related to the element's Active Ranger System and Beacon Illuminator Laser components cannot be recovered. These problems and their potential impact on the program are outlined in table 8. According to program officials, the late delivery of the Active Ranger System will not affect ABL's planned 2008 lethality demonstration because the test will not require ABL to estimate the target missile's launch or impact point. Neither will the test include more than one target. However, the delay could affect the contract's schedule and cost because planned work related to the Active Ranger System may not be completed and the cost of unplanned work needed to resolve the technical problems was not included in the contractor's budget. Despite program challenges, program officials noted that the contractor still believes it can complete the contract within the current contract ceiling. However, based on our analysis of the program's fiscal year 2005 performance, we estimate a contract overrun of between $43.8 million and $231.7 million. Our analysis of the performance of the contractor developing the C2BMC element was limited because the program did not deliver CPRs for 6 months during fiscal year 2005. Program officials cited the dynamics of the program as the primary reason for the suspension. In 2004, the C2BMC program office directed the contractor to add requirements to integrate a Forward-Based X-Band Transportable (FBX-T) radar into the program's architecture, adjust its schedule to absorb funding reductions, and make several high-priority engineering changes. The contractor was unable to update its work plan and realign its budget quickly enough to reflect these changes. Without changes, CPRs would have compared the work under way with an outdated schedule and budget and would not have reflected the contractor's true performance. The contractor completed all activities needed to replan its work in May 2005 and began to deliver CPRs in June 2005. By the close of fiscal year 2005, the contractor reported that it was performing work within budget and slightly behind schedule. The cumulative cost and schedule variances for the contract were approximately positive $1.7 million and negative $0.9 million, respectively. Our analysis shows that based on its performance so far, the contractor should be able to complete all scheduled contract work within the contract's negotiated cost. The GMD prime contractor's cost and schedule performance continued to erode during fiscal year 2005. By September 2005, the cumulative cost of all work completed was $713 million more than expected, and the contractor had incurred a cumulative negative schedule variance of $228 million. In fiscal year 2005 alone, work cost about $365 million more than budgeted.
Furthermore, CPRs show that the contractor incurred a negative schedule variance of approximately $39 million during the fiscal year. However, officials in MDA’s Office of Business Management told us that the schedule variance does not capture some work planned for fiscal year 2005 that was deferred. The officials said that if the contractor deferred fiscal year 2005 work to another fiscal year before the work was begun, the CPR would not show that the contractor was behind schedule in completing that work. Judging from the contractor’s cost and schedule performance in fiscal year 2005, we estimate that at the contract’s completion, the contractor will have overrun the budgeted cost of the contract by between $1.0 billion and $1.4 billion. Figure 5 shows the unfavorable trend in GMD fiscal year 2005 performance. Developmental issues with the interceptor continue to be the leading contributor to cost overruns and schedule slips for the GMD program. Interceptor-related work cost $240 million more than budgeted in fiscal year 2005, with the kill vehicle accounting for more than 42 percent of this overrun. Poor quality control has led to a number of technical problems with the kill vehicle—such as foreign object debris in wiring harnesses and leaks in thermal batteries—that have increased manpower and rework costs. Additionally, the contractor for the BV+ booster incurred increased costs as a result of inefficiencies related to its transition to a new supplier. New requirements and redesign efforts related to the BV+ booster also contributed to the prime contractor’s negative cost performance. The program’s schedule variance grew as flight and ground tests were delayed. During fiscal year 2005, several flight tests were deferred after the interceptors in two flight tests failed to launch. The GMD program has restructured its test plan, and the first flight test was successfully conducted in December 2005. Program officials noted that the contractor expects to reduce its schedule variance in fiscal year 2006. However, the program’s negative performance forced the program to restructure its future work efforts and extend its prime contract by 1 year. In March 2005, we reported that plans to restructure the KEI contract prompted program office officials to suspend CPRs. The contract has since been restructured, and the contractor began delivering CPRs in March 2005. As of September 2005, the KEI prime contractor had completed approximately 4 percent of its planned work and was performing within its budgeted costs, but slightly behind schedule. The program incurred a positive cost variance of $3.0 million and a negative schedule variance of $3.9 million during the fiscal year. Because the contractor has completed a small percentage of the work required by the contract, the contractor’s performance to date cannot be used to estimate whether the contract can be completed within its estimated cost. The KEI program is undergoing several contract modifications to address additional requirements. In July 2005, the program modified the contract to require that KEI be capable of intercepting enemy missiles in the midcourse of their flight. Consequently, the program plans to extend the prime contract to better align its cost and schedule objectives with the new work content. Future CPRs will compare the contractor’s performance with the new cost and schedule objectives. 
Program officials plan to begin work on the midcourse capability in fiscal year 2008 and will continue to develop this capability through the end of the contract, which is expected to be September 2015. Our analysis of contractor performance reports shows that the STSS program continued to experience a decline in contractor performance during fiscal year 2005. As depicted in figure 6, the contractor incurred cumulative negative cost and schedule variances of $97 million and $20 million, respectively. If the contractor's performance continues to decline, we estimate that at its completion the contract will exceed budgeted cost by between $248 million and $479 million. However, program officials noted that more than 90 percent of the contractor's negative performance to date can be attributed to a subcontractor whose work will be completed in fiscal year 2006. Quality issues with the subcontractor were the primary reason that the STSS prime contractor overran its fiscal year 2005 budget. For example, poor workmanship caused a satellite's sensor payload to fail a vibration test because fasteners—designed to hold the sensor steady—were not tightened according to specifications. Additionally, poor workmanship at a third-tier vendor led to difficulties in manufacturing payload cables. Program officials told us that the prime contractor had to direct management attention and considerable effort to rectify the effects of the subcontractor's poor quality control procedures. In addition to citing quality issues, program officials told us that they continue to encounter integration-related problems as the program progresses with testing the payload at successively higher levels of integration. Program officials noted that the subcontractor has made some improvements to its quality control program that should minimize future quality-related problems. For example, the subcontractor instituted an on-site Quality Assurance Council to develop improvements to the quality process at all levels of the organization. Additionally, quality personnel increased inspections and supervision of all processes to ensure quality control. During fiscal year 2005, the THAAD program incurred cumulative cost overruns on its prime contract. As of September 2005, the contractor was overrunning its budgeted costs for the fiscal year by approximately $19 million, but it was still ahead of schedule. Because the cost performance of the contractor prior to fiscal year 2005 was positive, the cumulative overrun through September 2005 was about $15 million. Figure 7 illustrates the cumulative cost and schedule variances incurred by the program during the fiscal year. Judging from the contractor's cost performance to date, we estimate that the contract could exceed its budgeted cost by about $48 million. During fiscal year 2005, the missile component continued to be the lead cause of the contractor's negative performance. Major factors contributing to the missile's cost variance include delays in activating a test facility at the Air Force Research Laboratory, redesign of faulty valves, performance issues related to vibration and shock testing, and unplanned hardware fabrication, assembly, and support costs. Redesign, material growth, and integration issues related to the missile also contributed to the program's unfavorable cost performance. In 2005, MDA's Office of Safety, Quality, and Mission Assurance conducted audits of the contractors developing the interceptor's exoatmospheric kill vehicle (EKV) and the Orbital Boost Vehicle.
In its audit of the EKV contractor, MDA's auditors documented a number of quality control weaknesses. First, the auditors found evidence that the prime contractor had not correctly communicated all essential EKV requirements to its subcontractor and the subcontractor had not communicated complete and correct requirements to its suppliers. For example, the prime contractor did not require the EKV contractor to use space-qualified parts—parts that have been proven to reliably withstand the harsh environment of space. Similarly, the auditors found that the subcontractor had not always provided its suppliers with correct parts, materials, and processes requirements. For example, the auditors found multiple incidents in which the subcontractor required one supplier to abide by incorrect or outdated compliance documents. The audit also identified numerous instances in which the EKV subcontractor had not exercised good configuration control. In some cases, drawings did not reflect current changes. In others, assembly records did not agree with build records. For example, the assembly record for one component showed that it included a different part from the one recorded in its build record. In another case, the assembly tag showed that a component was not built in the same configuration shown in the build record. Auditors found that the reliability of the EKV's design cannot be determined and any estimates of its serviceable life are likely unsupportable. The audit team established that the results from a March 2004 failure modes, effects, and criticality analysis were not fully used to influence the design of the EKV; that the contractor has neither planned nor performed a reliability demonstration or a maintainability analysis or demonstration; and that it does not plan reliability growth testing. Additionally, major requirements waivers approved on the basis of a short-term, limited-life mission significantly limit service life and have not been fully vetted, accepted, and mitigated for longer-term operational use. Further, auditors determined that the contractor has no written policy involving qualification testing and does not require that its EKV subcontractor follow requirements established by industry, civilian, and military users of space and launch vehicles. For this reason, tests of the EKV under thermal vacuum conditions representative of those found in space are not being conducted. The auditors also identified numerous issues with EKV shock and vibration testing and found that the contractor performs no formal qualification or acceptance tests on the EKV. Finally, the audit showed that because the contractor's production processes are immature, the contractor cannot build a consistent and reliable product. For example, auditors found instances where work instructions were not followed and a number of deficiencies in the build books that lay out the plans and processes for manufacturing the EKV. Similarly, the auditors found that the contractor producing the Orbital Boost Vehicle needed to improve quality control processes and adherence to those processes. According to deficiency reports, the contractor did not always, among other things, flow down requirements properly; practice good configuration management to ensure that the booster met form, fit, and function requirements; implement effective environmental stress screening; or have an approved parts, material, and processes management plan.
Event 0 – Block Capability Alternative
- Block planning process completed
- Long lead targets, tests, and exercises identified
- Affordability analysis completed
- Acquisition strategy approved
- Preliminary block plan approved

Event 1 – Preliminary block definition
- Block performance assessments updated
- Detailed cost estimates/estimates at completion (EAC) available
- Cost/benefit analysis updated
- Risks assessed and mitigation programs established
- Preliminary operational concept and operations architecture drafted
- Preliminary designs for all elements/components/targets completed
- Required funding identified for development
- Integrated master schedule created
- Preliminary block definition approved

Event 2 – Final block definition
- Performance assessments updated
- Detailed cost estimates/EACs available
- Risks assessed and mitigation programs updated
- Military utility characterized and operational concept refined
- Preliminary integration test plan available
- Final design for all elements/components/targets completed
- Funding available for development
- Block activation plan available
- Block definition updated

Event 3 – First complete development article
- Detailed cost estimates/EACs available
- Operational concept defined and operations architecture available
- Test range and support planning completed
- Military utility assessment completed
- First development article/targets built and initial tests completed

Event 4 – Element/Component development complete
- Detailed cost estimates/EACs available
- Block integration test planning completed
- Element/component/targets development and testing complete
- Support systems defined
- Training systems defined
- Fielding readiness assessed (initial defensive operations)

To examine the progress MDA made in fiscal year 2005 toward its Block 2004 goals, we examined the efforts of individual programs that are developing BMDS elements under the management of MDA, such as the GMD program. The elements included in our review collectively accounted for 73 percent of MDA's fiscal year 2005 research and development budget requests. We compared each element's completed activities, test results, demonstrated performance, and actual cost achieved in fiscal year 2005 with those planned for fiscal year 2005. In making this comparison, we examined System Element Reviews, test schedules, test reports, and MDA briefing charts. To assess each element's progress toward its cost goals, we reviewed Contract Performance Reports and Defense Contract Management Agency's analyses of these reports (if available). We applied established earned value management techniques to data captured in Contract Performance Reports to determine trends and used established earned value management formulas to project the likely costs of prime contracts at completion. We also developed data collection instruments, which were submitted to MDA and each element program office, to gather detailed information on completed program activities, including tests, design reviews, prime contracts, and estimates of element performance. In addition, we discussed fiscal year 2005 progress with officials in MDA's Business Management Office and each element program office, as well as the office of DOD's Director, Operational Test and Evaluation. To determine whether MDA achieved the quantity, cost, and performance goals it set for Block 2004 in February 2003, we examined fielding schedules, System Element Reviews, test reports, budget estimate submissions, and the U.S. Strategic Command's Military Utility Assessment.
We also held discussions with the Aegis BMD, GMD, and C2BMC program offices; MDA's Office of Safety, Quality, and Mission Assurance; and the Office of the Director, Operational Test and Evaluation. We determined the conditions that prevented MDA from achieving its Block 2004 goals by examining MDA's implementation of its Integrated Management Plan, the Secretary of Defense 2002 memo establishing the Ballistic Missile Defense Program, and audits conducted by MDA's Office of Safety, Quality, and Mission Assurance. We also held discussions with MDA's Offices of Business Management and Safety, Quality, and Mission Assurance and the GMD Program Office. In determining the actions MDA is taking to address problems that affected the outcome of Block 2004, we reviewed MDA Assurance Provisions, recommendations of the Mission Readiness Task Force, memorandums of agreement between MDA and the Defense Contract Management Agency and between MDA and the National Aeronautics and Space Administration, GMD award fee letters, and directives issued by MDA's Director. We also discussed MDA's plans with members of the Mission Readiness Task Force and officials in the agency's Office of Safety, Quality, and Mission Assurance. To ensure that MDA-generated data used in our assessment are reliable, we evaluated the agency's management control processes. We discussed these processes extensively with MDA upper management. In addition, we confirmed the accuracy of MDA-generated data with multiple sources within MDA and, when possible, with independent experts. To assess the validity and reliability of prime contractors' earned value management systems and reports, we analyzed audit reports prepared by the Defense Contract Audit Agency. Finally, we assessed MDA's internal accounting and administrative management controls by reviewing MDA's Federal Managers' Financial Integrity Report for Fiscal Years 2003, 2004, and 2005. Our work was performed primarily at MDA headquarters in Arlington, Virginia. At this location, we met with officials from the Kinetic Energy Interceptors Program Office; Aegis Ballistic Missile Defense Program Office; Airborne Laser Program Office; Command, Control, Battle Management, and Communications Program Office; Business Management Office; and Office of Safety, Quality, and Mission Assurance. In addition, we met with officials from the Space Tracking and Surveillance System Program Office, El Segundo, California; and the Ground-based Midcourse Defense Program Office and Terminal High Altitude Area Defense Project Office, Huntsville, Alabama. We also interviewed officials from the office of the Director, Operational Test and Evaluation, Arlington, Virginia. We conducted our review from May 2005 through March 2006 in accordance with generally accepted government auditing standards. In addition to the individual named above, Barbara Haynes, Assistant Director, Ivy Hübler, LaTonya Miller, Karen Richey, Adam Vodraska, and Jonathan Watkins made key contributions to this report.
The Department of Defense (DOD) has spent nearly $90 billion since 1985 to develop a Ballistic Missile Defense System (BMDS). In the next 6 years, the Missile Defense Agency (MDA), the developer, plans to invest about $58 billion more. MDA's overall goal is to produce a system that is capable of defeating enemy missiles launched from any range during any phase of their flight. MDA's approach is to field new capabilities in 2-year blocks. The first--Block 2004--was to provide some protection by December 2005 against attacks out of North Korea and the Middle East. Congress requires GAO to assess MDA's progress annually. This year's report assesses (1) MDA's progress during fiscal year 2005 and (2) whether capabilities fielded under Block 2004 met goals. To the extent goals were not met, GAO identifies reasons for shortfalls and discusses corrective actions that should be taken. MDA made good progress during fiscal year 2005 in the development and fielding of two of the seven elements reviewed. Most of the others encountered problems that slowed progress. Meanwhile, contractors for the seven elements exceeded their fiscal year budget by about $458 million, or about 14 percent, most of which was attributable to cost overruns in developing the Ground-based Midcourse Defense (GMD) element. Accelerating Block 2004 allowed MDA to successfully field missile defense assets faster than planned. But MDA delivered smaller quantities than planned and exceeded the cost goal of $6.7 billion by about $1 billion. The increased cost is primarily the added cost of sustaining fielded assets. However, the increase would have been greater if some development and other activities had not been deferred into Block 2006. Also, MDA has been unable to verify actual system performance because of flight test delays. Time pressures caused MDA to stray from a knowledge-based acquisition strategy. Key aspects of product knowledge, such as technology maturity, are proven in a knowledge-based strategy before committing to more development. MDA followed a knowledge-based strategy with elements not being fielded, such as Airborne Laser and Kinetic Energy Interceptor. But it allowed the GMD program to concurrently mature technology, complete design activities, and produce and field assets before end-to-end testing of the system--all at the expense of cost, quantity, and performance goals. For example, the performance of some GMD interceptors is questionable because the program was inattentive to quality assurance. If the block approach continues to feature concurrency as a means of acceleration, MDA's approach may not be affordable for the considerable amount of capability that is yet to be developed and fielded. MDA has unusual flexibility to modify its strategies and goals, make trade-offs, and report on its progress. For example, MDA's Director may determine when cost variations are significant enough to report to Congress. MDA is taking actions to strengthen quality control. These actions are notable, but they do not address the schedule-induced pressures of fielding or enhancing a capability in a 2-year time frame or the need to fully implement a knowledge-based acquisition approach.
Under MD-715, federal agencies are to identify and eliminate barriers that impede free and open competition in their workplaces. EEOC defines a barrier as an agency policy, principle, or practice that limits or tends to limit employment opportunities for members of a particular gender, race, ethnic background, or disability status. According to EEOC’s instructions, many employment barriers are built into the organizational and operational structures of an agency and are embedded in the day-to-day procedures and practices of the agency. In its oversight role under MD-715, EEOC provides instructions to agencies on how to complete their barrier analyses and offers other informal assistance. Based on agency submissions of MD-715 reports, EEOC provides assessments of agency progress in its Annual Report on the Federal Workforce, feedback letters addressed to individual agencies, and the EEO Program Compliance Assessment (EPCA). At DHS, the Officer for CRCL, through the Deputy Officer for EEO Programs, is responsible for processing complaints of discrimination; establishing and maintaining EEO programs; fulfilling reporting requirements as required by law, regulation, or executive order; and evaluating the effectiveness of EEO programs throughout DHS. Consistent with these responsibilities, the Officer for CRCL, through the Deputy Officer for EEO Programs, is responsible for preparing and submitting DHS’s annual MD-715 report. In addition, the Deputy Officer for EEO Programs and the Under Secretary for Management (USM) are also responsible for diversity management at DHS. Under the USM, the Chief Human Capital Officer is responsible for diversity management and has assigned these duties to the Executive Director of Human Resources Management and Services. According to CRCL’s Deputy Officer for EEO Programs, CRCL and OCHCO collaborate on a number of EEO and diversity activities through participation in work groups, involvement in major projects, policy and report review, and participation on the Diversity Council and its Diversity Policy and Planning Subcouncil. Figure 1 shows the officials who are primarily responsible for EEO and diversity management at DHS. The DHS Diversity Council is composed of the members of the DHS Management Council, which is chaired by the USM and includes component representatives—generally a component’s equivalent of a chief management officer or chief of staff. The Diversity Council charter gives the DHS Management Council the responsibility of meeting as the Diversity Council at least bimonthly. CRCL’s Deputy Officer for EEO Programs and OCHCO’s Executive Director of Human Resources Management and Services chair the Diversity Council’s Policy and Planning Subcouncil, which includes at least one member from each DHS component represented on the Management Council. The Diversity Policy and Planning Subcouncil meets every 2 weeks and is to identify, research, and analyze workforce diversity issues, challenges, and opportunities and report and make recommendations to the Diversity Council on DHS diversity strategies and priorities. According to EEOC’s MD-715 instructions, barrier identification is a two-part process. First, using a variety of sources, an agency is to identify triggers. Second, the agency is to investigate and pinpoint actual barriers and their causes. According to EEOC officials, this should be an ongoing process. Figure 2 shows the barrier identification steps under MD-715.
Our review of DHS’s MD-715 reports for each of the fiscal years 2004 through 2007 showed that in 2004 DHS identified 14 triggers, which were present in each subsequent year. According to DHS’s MD-715 reports, DHS identified 13 of the 14 triggers based on its analysis of participation rates contained in the workforce data tables. The remaining trigger— incomplete accessibility studies on all facilities—was identified based on responses to the self-assessment checklist contained in the MD-715 form and comments made at disability awareness training for managers. In addition, in 2008, DHS identified one new trigger based on a joint statement from EEOC, the Department of Justice, and the Department of Labor related to heightened incidents of harassment, discrimination, and violence in the workplace against individuals who are or are perceived to be Arab, Muslim, Middle Eastern, South Asian, or Sikh. Table 1 shows a summary of DHS-identified triggers and the sources of information from which they were identified. To identify triggers, agencies are to prepare and analyze workforce data tables comparing participation rates to designated benchmarks (such as representation in the civilian labor force (CLF) or the agency’s total workforce) by gender, race, ethnicity, or disability status in various subsets of their workforces (such as by grade level or major occupations and among new hires, separations, promotions, and career development programs). According to EEOC’s MD-715 instructions, participation rates below a designated benchmark for a particular group are triggers. Along with the workforce data tables, according to EEOC’s MD-715 instructions, agencies are to regularly consult additional sources of information to identify areas where barriers may operate to exclude certain groups. Other sources of information include, but are not limited to EEO complaints and EEO-related grievances filed; findings of discrimination on EEO complaints; surveys of employees on workplace environment issues; exit interview results; surveys of human resource program staff, managers, EEO program staff, counselors, investigators, and selective placement coordinators; input from agency employee and advocacy groups and union officials; available government reports (i.e., those of EEOC, GAO, OPM, the Merit Systems Protection Board, and the Department of Labor); and local and national news reports. EEOC officials said that these sources may reveal triggers that may not be present in the workforce data tables. Several of the above-listed sources provide direct employee input on employee perceptions of the effect of agency policies and procedures. For example, according to EEOC instructions, employee surveys may reveal information on experiences with, perceptions of, or difficulties with a practice or policy within the agency. Further, EEOC’s instructions state that reliance solely on workforce profiles and statistics will not meet the mandate of MD-715. When workforce data and other sources of information indicate that a barrier may exist, agencies are to conduct further inquiry to identify and examine the factors that caused the situation revealed by workforce data or other sources of information. 
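As a rough sketch of the comparison described above, the example below flags a group as a potential trigger whenever its participation rate in a workforce subset (here, new hires) falls below its designated benchmark rate. The group labels, counts, and benchmark rates are hypothetical and are not DHS or EEOC data.

```python
# Illustrative sketch of trigger identification: compare each group's participation
# rate in a workforce subset against a designated benchmark (e.g., its civilian
# labor force rate) and flag rates that fall below the benchmark. Data are hypothetical.

def find_triggers(group_counts, benchmarks):
    """Return (group, participation_rate, benchmark) for groups below their benchmark."""
    total = sum(group_counts.values())
    triggers = []
    for group, count in group_counts.items():
        rate = count / total
        if rate < benchmarks[group]:
            triggers.append((group, rate, benchmarks[group]))
    return triggers

new_hires = {"Group A": 120, "Group B": 45, "Group C": 15}            # hypothetical counts
clf_benchmarks = {"Group A": 0.60, "Group B": 0.28, "Group C": 0.12}  # hypothetical benchmark rates

for group, rate, benchmark in find_triggers(new_hires, clf_benchmarks):
    print(f"{group}: {rate:.1%} of new hires vs. {benchmark:.0%} benchmark -> potential trigger")
```

A flagged rate is only an indicator; as the instructions note, the agency would then investigate whether an actual policy or practice is causing the disparity.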
To identify triggers, CRCL stated that it regularly reviews complaint data it must submit annually to EEOC and data collected from reports CRCL is required to submit under various statutes, executive orders, and initiatives, including the Notification and Federal Employee Antidiscrimination and Retaliation Act, Federal Equal Employment Opportunity Recruitment Program, Executive Order 13171 on Hispanic employment in the federal government, Disabled Veterans Affirmative Action Program, White House Initiative on Historically Black Colleges and Universities, and White House Initiative on Tribal Colleges and Universities. According to CRCL officials, in the past, CRCL has also relied upon the DHS online departmental newsletter, periodicals, and news media to identify triggers. We have previously reported that successful organizations empower and involve their employees to gain insights about operations from a frontline perspective, increase their understanding and acceptance of organizational goals and objectives, and improve motivation and morale. Obtaining the input of employees in identifying triggers would provide a frontline perspective on where potential barriers exist. Employee input can come from a number of sources including employee groups, exit interviews, and employee surveys. CRCL said that it does not consider input from employee groups in conducting its MD-715 analysis, but the Diversity Council’s Diversity Policy and Planning Subcouncil has recently begun to reach out to form partnerships with employee associations such as the National Association of African-Americans in the Department of Homeland Security. In addition, according to DHS’s 2008 MD-715 report, DHS does not currently have a departmentwide exit survey, but according to a senior OCHCO official, OCHCO plans to develop a prototype exit survey with the eventual goal of proposing its use throughout DHS. Although DHS does not have the structures in place to obtain employee input departmentwide from employee groups and exit surveys, DHS could use the FHCS and DHS’s internal employee survey to obtain employee input in identifying potential barriers. OPM administers the FHCS biennially in even-numbered years, and DHS administers its own internal survey in off years. Both surveys collect data on employees’ perceptions of workforce management, organizational accomplishments, agency goals, leadership, and communication. We have previously reported that disaggregating employee survey data in meaningful ways can help track organizational priorities. According to information from officials in OPM’s Division for Strategic Human Resources Policy, which administers and analyzes the FHCS, results by gender, national origin, and race are available at the agency level (i.e., DHS) on each agency’s secure site. DHS’s internal survey also collects demographic data on race, gender, and national origin of respondents. DHS could analyze responses from the FHCS and its internal employee survey by race, gender, and national origin to determine whether employees of these groups perceive a personnel policy or practice as a possible barrier. For example, one question on the 2008 FHCS asked whether supervisors or team leaders in the employee’s work unit support employee development. Fifty-eight percent of DHS respondents agreed and 21 percent disagreed with the statement. The 2007 DHS internal survey asked whether employees receive timely information about employee development programs.
Thirty-nine percent of respondents provided a positive response; 35 percent provided a negative response. Although a CRCL staff member reviews the FHCS and DHS’s internal survey data as part of an OCHCO employee engagement working group, the staff member does not review DHS responses based on race, gender, and national origin. Responses based on demographic group could indicate whether a particular group perceives a lack of opportunity for employee development and suggest a need to further examine these areas to determine if barriers exist. Without employee input on DHS personnel policies and practices, DHS is missing opportunities to identify potential barriers. Regular employee input could help DHS to identify potential barriers and enhance its efforts to acquire, develop, motivate, and retain talent that reflects all segments of society and our nation’s diversity. In fiscal year 2007, DHS conducted its first departmentwide barrier analysis. This effort involved further analysis of the triggers initially identified in 2004 to determine if there were actual barriers and their causes. According to DHS’s 2007 MD-715 report, DHS limited its barrier analysis to an examination of policies and management practices and procedures that were in place during fiscal year 2004. Therefore, according to the report, policies, procedures, and practices that were established or used after fiscal year 2004 were outside the scope of this initial barrier analysis. Based on triggers DHS identified in 2004, DHS’s departmentwide barrier analysis identified the following four barriers:

1. Overreliance on the Internet to recruit applicants.
2. Overreliance on noncompetitive hiring authorities.
3. Adequacy of responses to Executive Order 13171, Hispanic Employment in the Federal Government; specifically, in several components there was no evidence of specific recruitment initiatives directed at Hispanics.
4. Nondiverse interview panels; specifically, interview panels that did not reflect the diversity of applicants.

According to EEOC guidance, barrier elimination is vital to achieving the common goal of making the federal government a model employer. Once an agency identifies a likely factor (or combination of factors) adversely affecting the employment opportunities of a particular group, it must decide how to respond. Barrier elimination is the process by which an agency removes barriers to equal participation at all levels of its workforce. EEOC’s instructions provide that in MD-715 reports, agencies are to articulate objectives accompanied by specific action plans and planned activities that the agency will take to eliminate or modify barriers to EEO. Each action item must set a completion date and identify the one high-level agency official who is responsible for ensuring that the action item is completed on time. In addition, according to EEOC’s instructions, agencies are to continuously monitor and adjust their action plans to ensure the effectiveness of the plans themselves, both in goal and execution. This will serve to determine the effectiveness of the action plan and objectives. Figure 3 shows the barrier elimination and assessment steps under MD-715. Our analysis of DHS’s 2007 and 2008 MD-715 reports showed DHS articulated 12 different planned activities to address the identified barriers, including 1 new planned activity in 2008.
Of the 12 planned activities, 2 relate to recruitment practices and strategies, specifically implementing a departmentwide recruitment strategy and targeting recruitment where there are low participation rates. Two other planned activities relate to the development of additional guidance, specifically on composition of interview panels and increasing educational opportunities. For each barrier, DHS identifies at least one planned activity—eight in total—related to collecting and analyzing additional data. According to DHS’s 2007 and 2008 MD-715 reports, DHS’s primary objective is to capture and analyze the additional data needed to link the barriers to the relevant triggers. In addition, of the 12 different planned activities, 5 involve collaboration between CRCL and OCHCO. One planned activity to address overreliance on the use of the Internet to recruit applicants calls for the development of an applicant flow tool to gather data on applicants, which would enable CRCL and OCHCO to analyze recruitment and hiring results. According to CRCL, its staff collaborate with OCHCO by evaluating and providing feedback on development of the tool. We have previously reported on the benefits of coordination and collaboration between the EEO and the human capital offices within agencies. During our previous work reviewing coordination of federal workplace EEO, an EEOC official commented that a review of barrier analyses in reports submitted under MD-715 showed that the highest-quality analyses had come from agencies where there was more coordination between staff of the human capital and EEO offices. Table 2 shows DHS’s planned activities, the identified barriers to which they relate, and the target completion dates. For the planned activities identified in its 2007 MD-715 report, DHS has modified the target date for all but one of them. As reported in the 2008 MD-715 report, the original target completion dates have been delayed anywhere from 12 to 21 months. In addition, since DHS filed its 2008 MD-715 report, DHS modified one of the target dates it had previously modified in its 2008 report. DHS has not completed any of the planned activities articulated in its 2007 and 2008 MD-715 reports. According to CRCL officials, although it has not completed any planned activities to address identified barriers, DHS has completed some planned activities identified in fiscal years 2007 and 2008 related to improving its EEO program. According to CRCL, DHS modified target dates primarily because of staffing shortages in both CRCL and OCHCO, including the retirement in 2008 of three senior CRCL officials (including the Deputy Officer for EEO Programs) and extended absences of the remaining two staff. In addition, according to senior officials, during fiscal year 2008, OCHCO experienced significant staff shortages and budgetary issues and lost its contract support. According to the Deputy Officer for EEO Programs, fiscal year 2009 is a rebuilding year. CRCL is adding five new positions, in addition to the existing three, to the CRCL unit responsible for preparing and submitting DHS’s MD-715 reports and implementing MD-715 planned activities. According to CRCL, once it is fully staffed, it will be able to expand services and operations. DHS has not established interim milestones for the completion of planned activities to address barriers. According to DHS officials, its MD-715 reports and Human Capital Strategic Plan represent the extent of DHS project plans and milestones for completing planned activities.
These documents include only the anticipated outcome, not the essential activities needed to achieve the outcome. For example, in DHS’s 2007 and 2008 MD-715 reports, CRCL identifies an applicant flow tool to analyze recruitment and hiring results as a planned activity to address the barrier of overreliance on the use of the Internet to recruit applicants. DHS’s Human Capital Strategic Plan also identifies an applicant flow tool to analyze recruitment and hiring results as an action to achieve its departmentwide diversity goal. DHS does not articulate interim steps, with milestones, to achieve this outcome in either document. In order to help ensure that agency programs are effectively and efficiently implemented, it is important that agencies implement effective internal control activities. These activities help ensure that management directives are carried out. We have previously reported that it is essential to establish and track implementation goals and establish a timeline to pinpoint performance shortfalls and gaps and suggest midcourse corrections. Further, it is helpful to focus on critical phases and the essential activities that need to be completed by a given date. In addition, we recommended in our 2005 report on DHS’s management integration that DHS develop a management integration strategy. Such a strategy would include, among other things, clearly identifying the critical links that must occur among initiatives and setting implementation goals and a timeline to monitor the progress of these initiatives and to ensure that the necessary links occur. Identifying the critical phases of each planned activity necessary to achieve the intended outcome with interim milestones could help DHS ensure that its efforts are moving forward and manage any needed midcourse corrections, while minimizing modifications of target completion dates. According to CRCL and OCHCO officials, DHS is making progress on initiatives relating to (1) outreach and recruitment, (2) employee engagement, and (3) accountability. DHS’s Executive Director of Human Resources Management and Services told us that DHS is currently implementing a targeted recruitment strategy based on representation levels, which includes attending career fairs and entering into partnerships with organizations such as the Black Executive Exchange Program. CRCL officials also said that CRCL staff participate on the Corporate Recruitment Council, which meets each month and includes recruiters from each of the components. In addition, according to the Human Capital Strategic Plan diversity goal, DHS plans to establish a diversity advisory network of external stakeholders. According to CRCL, this effort includes specific outreach and partnership activities with such groups as the National Association for the Advancement of Colored People, Blacks in Government, League of United Latin American Citizens, Organization of Chinese Americans, Federal Asian Pacific American Council, Federally Employed Women, National Organization of Black Law Enforcement Executives, and Women in Federal Law Enforcement. DHS has also reported progress on employee engagement efforts. The Executive Director of Human Resources Management and Services also told us that DHS is in the planning stages of forming a department-level employee council comprising representatives from each diversity network at each of DHS’s components. 
In addition, according to DHS’s Human Capital Strategic Plan, DHS will incorporate questions into its internal employee survey specifically addressing leadership and diversity. The planned completion for this effort is the first quarter of fiscal year 2010. To address accountability, the Executive Director of Human Resources Management and Services said that DHS added a Diversity Advocate core competency as part of DHS’s fiscal year 2008 rating cycle for Senior Executive Service (SES) performance evaluations. Under DHS’s SES pay-for-performance appraisal system, ratings on this and other core competencies affect SES bonuses and pay increases. According to DHS’s Competency Illustrative Guidance, the standard provides for each senior executive to promote workforce diversity, provide fair and equitable recognition and equal opportunity, and promptly and appropriately address allegations of harassment or discrimination. According to the Executive Director of Human Resources Management and Services, OCHCO is currently developing plans, with the participation of CRCL, to implement a similar competency in 2010 for managers and supervisors, although the specific details on implementation are not yet finalized. According to MD-715 and its implementing guidance, a parent agency is to ensure that its components implement the provisions of MD-715 and make a good faith effort to identify and remove barriers to equality of opportunity in the workplace. Among other requirements, the parent agency is responsible for ensuring that its reporting components—those that are required to submit their own MD-715 reports—complete those reports. The parent agency is also responsible for integrating the components’ MD-715 reports into a departmentwide MD-715 report. According to officials from EEOC’s Office of Federal Operations, how a department oversees and manages this process is at the discretion of the department. In addition, to ensure management accountability, the agency, according to MD-715, should conduct regular internal audits, at least annually, to assess, among other issues, whether the agency has made a good faith effort to identify and remove barriers to equality of opportunity in the workplace. At DHS, according to the DHS Acting Officer for CRCL and the Deputy Officer for EEO Programs, component EEO directors do not report directly to CRCL but to their respective component heads. While this EEO organizational structure is similar to other cross-cutting lines of business (LOB), other cross-cutting LOBs have indirect reporting relationships, established through management directives, between the component LOB head and the DHS LOB chief for both daily work and annual evaluation. In contrast, the Deputy Officer for EEO Programs stated that he relies on a collaborative relationship with the EEO directors of the components to carry out his responsibilities. According to the Deputy Officer for EEO Programs, component EEO programs have supported departmentwide initiatives when asked to join such efforts. On February 4, 2008, the Secretary of Homeland Security delegated authority to the Officer for CRCL to integrate and manage the DHS EEO Program, and currently a management directive interpreting the scope of this authority is awaiting approval. The Deputy Officer for EEO Programs stated that until the management directive is approved and implemented, the actual effect of the delegated authority is unclear.
Lacking direct authority, the Deputy Officer stated that he relies on a collaborative relationship with the EEO directors of the components to carry out his responsibilities. According to the Deputy Officer for EEO Programs, one means of collaboration with the components is through the EEO Council, which meets monthly and is chaired by the Deputy Officer for EEO Programs and is composed of the EEO directors from each component. The Deputy Officer for EEO Programs said that he uses the EEO Council to share best practices, enhance cooperation, and enforce accountability. To assist the components in their MD-715 analyses, according to CRCL officials, CRCL prepares the workforce data tables for each of the components required to submit its own MD-715 report. CRCL obtains the data from OCHCO and sends them to a contractor to create the workforce data tables. According to CRCL officials, DHS is pursuing an automated information management system that will allow CRCL to conduct in-house centralized workforce data analysis at the component level. To ensure timely submissions of component MD-715 reports, DHS’s CRCL sets internal deadlines by which reporting components are to submit their final MD-715 reports. CRCL instructs the components to follow EEOC guidance in completing their reports. CRCL also gives components the option of submitting a draft report for CRCL to review and provide technical guidance on before the final report is submitted. For those components that have submitted draft reports, CRCL has provided written comments that could be incorporated into the components’ final reports. A CRCL official told us that for fiscal year 2009 draft submissions, CRCL will continue this practice and encourage components to submit draft reports. Since DHS was formed in 2003, CRCL has completed a full EEO program evaluation of the Federal Law Enforcement Training Center (FLETC) in fiscal year 2007, which focused on FLETC’s EEO Office’s operations and activities. In fiscal year 2008, CRCL conducted the audit work on a full program evaluation of the Federal Emergency Management Agency’s Equal Rights Office’s operations and activities, but to date CRCL has not issued the audit report. In fiscal year 2006, CRCL conducted a partial evaluation of the Transportation Security Administration’s Office for Civil Rights, which focused on EEO counseling, complaint tracking, and alternative dispute resolution. In addition, in fiscal year 2009, a contractor issued a report describing the findings of a program review of the U.S. Coast Guard’s Office of Civil Rights. The Deputy Officer for EEO Programs told us that CRCL intends to conduct program reviews of the EEO programs at all operational components by 2010, although no schedule for completing these audits has been established. Input from employee groups reflects the perspective of the individuals directly affected by employment policies and procedures and could provide valuable insight into whether those policies and procedures may be barriers to EEO. Because CRCL does not regularly include employee input from available sources, such as the FHCS and DHS’s internal employee survey, it is missing opportunities to identify potential barriers to EEO. For barriers DHS has already identified, it is important for DHS to ensure the completion of planned activities through effective internal control activities, including the identification of critical schedules and milestones that need to be completed by a given date. 
Effective internal controls could help DHS ensure that its efforts are moving forward, manage any needed midcourse corrections, and minimize modifications of target completion dates. Additional staff, which DHS plans to add in 2009, could help DHS implement effective internal control activities. We recommend that the Secretary of Homeland Security take the following two actions:

- Direct the Officer for CRCL to develop a strategy to regularly include employee input from such sources as the FHCS and DHS’s internal survey in identifying potential barriers to EEO.
- Direct the Officer for CRCL and the CHCO to identify essential activities and establish interim milestones necessary for the completion of all planned activities to address identified barriers to EEO.

We provided a draft of this report to the Secretary of Homeland Security for review and comment. In written comments, which are reprinted in appendix I, the Director of DHS’s Departmental GAO/OIG Liaison Office agreed with our recommendations. Regarding the first recommendation, the Director agreed that DHS should develop a departmentwide strategy to regularly include employee input from the FHCS and DHS internal employee survey to identify barriers, but noted that DHS component EEO programs already use employee survey data to develop annual action plans to address identified management issues. Regarding the second recommendation, the Director wrote that CRCL has already begun revising its plans to identify specific steps and interim milestones to accomplish the essential activities. DHS also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Homeland Security and other interested parties. The report also will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Belva Martin, Acting Director; Amber Edwards; Karin Fangman; Melanie H. Papasian; Tamara F. Stenzel; and Greg Wilmoth made key contributions to this report.
Under MD-715, federal agencies are to identify and eliminate barriers that impede free and open competition in their workplaces. EEOC defines a barrier as an agency policy, principle, or practice that limits or tends to limit employment opportunities for members of a particular gender, race, ethnic background, or disability status. According to EEOC's instructions, many employment barriers are built into the organizational and operational structures of an agency and are embedded in the day-to-day procedures and practices of the agency. In its oversight role under MD-715, EEOC provides instructions to agencies on how to complete their barrier analyses and offers other informal assistance. Based on agency submissions of MD-715 reports, EEOC provides assessments of agency progress in its Annual Report on the Federal Workforce, feedback letters addressed to individual agencies, and the EEO Program Compliance Assessment (EPCA). DHS has generally relied on workforce data and has not regularly included employee input from available sources to identify "triggers," the term EEOC uses for indicators of potential barriers. GAO's analysis of DHS's MD-715 reports showed that DHS generally relied on workforce data to identify 13 of 15 triggers, such as promotion and separation rates. According to EEOC, in addition to workforce data, agencies are to regularly consult a variety of sources, such as exit interviews, employee groups, and employee surveys, to identify triggers. Involving employees helps to incorporate insights about operations from a frontline perspective in determining where potential barriers exist. DHS does not consider employee input from such sources as employee groups, exit interviews, and employee surveys in conducting its MD-715 analysis. Data from the governmentwide employee survey and DHS's internal employee survey are available, but DHS does not use these data to identify triggers. By not considering employee input on DHS personnel policies and practices, DHS is missing opportunities to identify potential barriers. Once a trigger is revealed, agencies are to investigate and pinpoint actual barriers and their causes. In 2007, through its departmentwide barrier analysis, DHS identified four barriers: (1) overreliance on the Internet to recruit applicants, (2) overreliance on noncompetitive hiring authorities, (3) lack of recruitment initiatives that were directed at Hispanics in several components, and (4) nondiverse interview panels. GAO's analysis of DHS's 2007 and 2008 MD-715 reports showed that DHS has articulated planned activities to address identified barriers, has modified nearly all of its original target completion dates by a range of 12 to 21 months, and has not completed any planned activities, although officials reported completing other activities in fiscal years 2007 and 2008 associated with its EEO program. Nearly half of the planned activities involve collaboration between the civil rights and human capital offices. DHS said that it modified the dates because of staffing shortages. In order to ensure that agency programs are effectively and efficiently implemented, it is important for agencies to implement internal control activities, such as establishing and tracking implementation goals with timelines. This allows agencies to pinpoint performance shortfalls and gaps and suggest midcourse corrections. DHS has not developed project plans with milestones beyond what is included in its MD-715 report and its Human Capital Strategic Plan.
These documents include only the anticipated outcomes and target completion dates, not the essential activities needed to achieve the outcome. Identifying the critical phases of each planned activity necessary to achieve the intended outcome with interim milestones could help DHS ensure that its efforts are moving forward and manage any needed midcourse corrections, while minimizing modification of target dates. DHS uses a variety of means to oversee and support components, including providing written feedback on draft reports to components that are required to prepare their own MD-715 reports, conducting program audits, and convening a council of EEO directors from each of the components.
BLM, within the Department of the Interior, and the Forest Service, within the Department of Agriculture, are the two primary federal agencies involved with timber sales. In terms of acreage, the Forest Service manages over 192 million acres of national forest system land. In contrast, BLM manages about 261 million acres of public lands, of which about 55 million acres are forests and woodlands. BLM administers two forestry programs: one on public domain lands and one in western Oregon. BLM’s public domain forestry management program covers 53 million acres—about 9 million acres of forests and about 44 million acres of woodlands. Appendix I provides a detailed listing of forest and woodland acreage administered under BLM’s public domain forestry management program. BLM’s forests and woodlands on public domain lands are primarily in 12 western states. Much of this land tends to be in small, isolated parcels that are not as productive as BLM’s western Oregon lands or the larger forests managed by the Forest Service. BLM manages its public domain lands through a multilevel organization—national office, 12 state offices, and about 130 field offices—that carries out a variety of agency programs and activities including recreation and fish and wildlife protection, in addition to timber. BLM’s public domain forestry management program received a small portion of the agency’s $1.8 billion annual budget for fiscal year 2002. The Congress appropriated about $6.2 million for the public domain forestry management program in fiscal year 2002. Timber offered for sale on public domain lands includes sawtimber and other wood products. Sales of sawtimber and some other wood products are initiated by soliciting bids from prospective buyers. In addition, BLM offers other wood products to the public through a permit process. BLM manages its public domain forestry management program within a statutory framework consisting of a land management statute and various other environmental laws. The Federal Land Policy and Management Act of 1976 (FLPMA)—the principal law under which BLM manages its public domain forestry management program—requires BLM to manage its public lands under the principles of multiple use and sustained yield. FLPMA gives BLM broad management discretion over how it emphasizes one use, such as offering timber for sale, in relation to another, such as providing recreation. Among other things, multiple use management aims at a combination of balanced and diverse resource uses that take into account the long-term needs of future generations for renewable resources (for example, timber) and nonrenewable resources (for example, minerals). FLPMA states that BLM should consider fish and wildlife; recreation; minerals; range; ecological preservation; timber; watershed; natural scenic, scientific, and historical values; and other resources, as it balances public land uses. Under the principle of sustained yield, BLM seeks to achieve and maintain high output levels of all renewable resources in perpetuity. Under FLPMA, BLM has broad discretion in managing its timber sales. During its land use planning process, BLM identifies areas that are available and have the capacity for planned, sustained-yield harvest of timber or other forest products. BLM timber sales on public domain lands must also comply with the requirements of other environmental laws, including the National Environmental Policy Act, the Endangered Species Act, and the Clean Water Act.
For major federal actions that may significantly affect the quality of the human environment, the National Environmental Policy Act requires all federal agencies, including BLM, to analyze the potential environmental effects of a proposed project, such as a timber sale. Regulations implementing the National Environmental Policy Act require agencies to include a discussion of how to mitigate adverse impacts and a discussion of those impacts that cannot be avoided under the federal action. Under the Endangered Species Act, BLM must ensure that its actions are not likely to jeopardize the continued existence of species listed as threatened or endangered or to destroy or adversely modify habitat critical to their survival. Similarly, the requirement to meet standards for water quality under the Clean Water Act may limit the timing, location, and volume of timber sales. BLM’s annual volume of timber offered for sale from public domain lands declined 74 percent from 101 million board feet of timber in fiscal year 1990 to 26 million board feet in 2002. Over the same period, the volume of the two components of BLM offerings—sawtimber and other wood products—also declined: sawtimber from 80 million to 14 million board feet (81 percent) and other wood products from 21 million to 11 million board feet (46 percent). See figure 1. Appendix II includes more detailed information on the volume of BLM public domain timber offered for sale from fiscal year 1990 through 2002. Mirroring the overall national decline, each BLM state office experienced declines in the volume of timber offered for sale from fiscal year 1990 through 2002. Eastern Oregon experienced the sharpest decline—from 56 million to 8 million board feet—representing nearly two-thirds of the overall decline. A BLM official explained that eastern Oregon offered an abnormally high volume of timber for sale in fiscal years 1990 and 1991, primarily due to a large salvage logging effort following a mountain pine beetle epidemic. For perspective, from fiscal years 1985 through 1989, eastern Oregon offered an average of 22 million board feet of timber per year. Appendix III shows the volume of timber that each BLM state office offered for sale in 1990 and in 2002 and the amount of decline. As a consequence of the decline in the volume of timber offered for sale during fiscal years 1990 through 2002, the proportion of the volume’s two components also changed. As shown in figure 2, sawtimber represented over three-quarters of the total volume in fiscal year 1990, but had decreased to slightly more than one-half of the total volume by fiscal year 2002. In contrast, the proportion of other wood products increased from about one-fifth of the total volume in 1990 to about one-half of the total volume in fiscal year 2002. Beginning in the late 1980s, the program emphasis on BLM public domain lands, like that on most other federal forests, increasingly shifted from timber production to emphasizing forest ecosystem health. This shift in emphasis, required by changing forest conditions and needs, helped cause a reduction in the volume of timber removed from all federal lands, including BLM public domain lands. As a result of this decline in supply volume, some sawmills that formerly processed BLM timber have closed, making it more difficult for BLM to market timber in some areas. 
In addition, the emphasis on forest ecosystem health has increased some of the costs associated with timber sales preparation, as staff must now prepare more extensive analysis of the effects of the timber harvest on other resources. With generally declining funding levels and fewer foresters to prepare timber sales, and subsequently fewer sales, BLM’s volume of timber offered for sale from its public domain lands declined. The 74 percent decline in the volume of timber sale offerings from BLM public domain lands since 1990, according to BLM officials, was primarily due to the shift in program emphasis to forest ecosystem health. We previously reported that this shift in emphasis caused large declines in timber production from all federal forests. BLM’s decline mirrored a similar decline in offerings from the 155 national forests. For example, between 1990 and 1997 the volume of timber offered for sale from the national forests managed by the Forest Service declined about 65 percent, from 11 billion to 4 billion board feet. Since the late 1980s, growing concerns over declining ecological conditions on federal lands—such as poor animal habitat and water quality—resulted in federal agencies adopting a new, more scientifically based management approach, referred to then as ecosystem management. BLM officially adopted this approach to implementing its land management responsibilities in 1994 to sustain resource usage in an ecosystem—including timber production—while maintaining, and restoring where damaged, the natural functioning of interdependent communities of plants and animals and their physical environment (soil, water, air). In revising forest management policy for public domain lands, BLM increased its emphasis on managing for forest ecosystem conditions, in addition to providing for sustained yield of its forests and woodlands. This new policy recognized the role that insects, disease, fire, and other disturbance mechanisms, as well as noncommercial plant species, play in ecosystems. The reduction in the volume of timber offered for sale also resulted from environmental statutes and their judicial interpretations arising from lawsuits brought by environmental and recreational organizations. In order to increase protection of wildlife habitat, recreation, and stream quality, the volume of timber offered for sale was reduced for the following reasons: (1) some forest areas where timber sales had been planned could not be used for this purpose; (2) in some areas where trees could be harvested, fewer trees could be removed because of limitations on clear-cutting; and (3) in some cases, BLM would not offer timber for sale where the removal costs were too high for buyers. BLM officials cited several instances where an increased emphasis on providing greater protection to forest ecosystem resources from the adverse effects of timber harvesting had resulted in reductions of timber offerings on BLM public domain lands since 1990. For example, an official in the BLM Idaho state office noted that harvesting timber by clear-cutting is no longer performed in many locations. Likewise, concerns about potential harm to the habitat of threatened or endangered species, such as lynx and bull trout, led to a reduced volume of timber offered for sale. In addition, some current harvesting methods cost more and result in less volume, but potentially cause less harm to the species and its habitat.
BLM officials told us that in eastern Oregon they offered sales in areas where there were fewer concerns about the harm to habitat in order to reduce the probability of public challenge. Additionally, BLM officials in Idaho and Oregon told us that the need to sometimes use helicopters to remove harvested trees, in order to protect other resources from effects that would result, for example, from constructing roads to access and remove timber, drove up costs and further reduced the amount of timber they could offer for sale. In the 1990s, growing concerns about changes in forest structure and composition, and the long-term threats that these changes posed to forest ecosystem health, further contributed to the declines in the volume of timber offered for sale from federal forests, including from BLM public domain lands. The principal change in forest structure that was of concern was the increasing density of tree stands in forests, especially of smaller trees and brush. Among the changes in forest composition of most concern was a reduction in the diversity of tree species. Both types of change stemmed largely from decades of previously accepted forest management practices, such as the exclusion of naturally occurring periodic fires that removed smaller trees and undergrowth; replacement, after clear-cutting, of mixed native species with a single species; and a failure to carry out planned thinning of forests. Overly dense, less diverse forests can lead to increasingly widespread insect and disease infestations and greatly increase the risk of catastrophic wildfires. Such wildfires can severely damage tree stands, wildlife habitat, water quality, and soils, and threaten human health, lives, property, and infrastructure in nearby communities. According to BLM, the need to reduce forest density and restore composition diversity in forest ecosystems has necessitated a refocusing of federal forest management activities, including timber sale offerings, on the removal of smaller trees and materials that generate less volume than the larger trees more commonly offered for sale in prior years. BLM program management officials stated that the need to restore the structure and composition of forests is currently the primary reason that the timber removed from public domain lands will have to continue to be more heavily weighted towards nonsawtimber and small-diameter trees. In many cases, the materials that need to be removed have little or no commercial value, and thus do not affect the overall volume of timber offered for sale. For example, a BLM official in a Colorado field office told us that any increase in funding would first concentrate on a backlog of areas that were overstocked following harvests several years ago, but were never thinned of small trees that had no commercial value. BLM officials could not quantify the effect of the shift to forest ecosystem health on the overall decline in the volume of timber sale offerings since 1990. They noted, however, that the shift had resulted in timber becoming largely a by-product, rather than a focus, of the public domain forestry management program. The decline in the volume of timber sale offerings from federal forests as a result of the shift in emphasis to forest ecosystem health has resulted in a reduced supply of materials for sawmills in many areas. 
According to two reports principally authored by The University of Montana’s Bureau of Business and Economic Research and the Forest Service, the volume of timber from national forests received by mills in Idaho and Montana declined in the 1990s. For example, in Idaho, the volume declined from about 729 million board feet in 1990 to 301 million board feet in 1995, representing a decline of 59 percent. In Montana, the volume declined from about 318 million board feet in 1993 to 215 million board feet in 1998, representing a decline of 32 percent. According to these reports, the reduced mill capacity in these states was due primarily to the decline in timber availability from national forests. Furthermore, these reports indicated that the decline in timber volume from the national forests was a contributing factor to the closure of at least 30 sawmills in these two states. Other factors mentioned by these reports as contributing to sawmill closures included fluctuations in lumber prices, changes in the volume of exports and imports of lumber, and changes in the structure of the industry. According to BLM officials, the primary reason for sawmill closures was the decline in the supply of timber from the larger, more productive Forest Service lands near BLM lands. However, they noted that purchasers of timber from BLM public domain lands also used these mills. For example, officials in some field offices in Colorado and Idaho said several nearby mills had closed, leading purchasers to transport timber to more distant mills for processing. As a result, the officials noted that the purchasers of timber from these offices have experienced higher transportation costs, thereby reducing the attractiveness of purchasing timber from BLM public domain lands. The officials told us that because of the relatively small volume of timber offered for sale from BLM public domain lands, a return to previous BLM sale offering levels would not result in sufficient supply for the mills to reopen. The shift in emphasis has also contributed to a need for more extensive analysis and the hiring of more resource protection specialists during the time that BLM’s funding for its public domain forestry management program was generally declining. Consequently, less volume of timber was offered because it takes longer and costs more to prepare a given volume of timber for sale. According to officials, over the past decade, BLM has hired more resource protection specialists, such as wildlife biologists, botanists, and hydrologists, in order to better analyze the effects of potential timber sales on other resources, such as wildlife habitat. At the same time, many foresters, who are the primary staff involved in identifying and preparing timber sales, have departed the agency either through retirement or other means in recent years and have not been replaced. For example, the number of BLM foresters decreased from 72 to 53 between fiscal year 1991 and fiscal year 2002. We were told that at some field units there are no foresters remaining that have the skills needed to prepare timber sales. Furthermore, using constant 2002 dollars, BLM’s appropriations for the public domain forestry management program declined from $8.5 million in fiscal year 1990 to $6.2 million in 2002. Thus, the higher preparation costs and smaller budgets have left BLM less able to prepare timber sales. 
According to BLM, it has begun recruiting new foresters and has requested an increase of $1 million in funding in fiscal year 2004 for the public domain forestry management program. In addition, BLM officials told us that for the past few years the agency has not had the funding to develop better inventory information about forests and woodlands in order to adequately assess the effects of timber sales on the forest ecosystem. For example, they do not have current information on the condition of forests and woodlands, such as tree density, species composition, and the extent of forests and woodlands affected by insects and disease—information needed to identify potential timber sale offerings. According to the officials, some timber sales cannot be prepared because BLM does not have credible inventory data needed to justify trade-offs between timber harvesting and other concerns, such as impacts on animal species habitat. Agency officials said that the lack of knowledge of its inventory has been a long-standing problem. We provided a draft of this report to the Department of the Interior for review and comment. The department pointed out that the report achieved its three objectives and that we had incorporated information based on informal discussions with staff. The department said that BLM has begun to act on some of the findings in the draft report, including recruiting new foresters, in part to support the National Fire Plan. According to the department, these foresters will help ensure that forest health considerations, such as species composition, stand structure, and insect or disease occurrence, are fully considered, in addition to hazardous fuel reduction. BLM state directors have submitted detailed action plans to meet state-specific needs for renewed emphasis on forests and woodlands management. Furthermore, the department said that the President’s fiscal year 2004 budget proposes a $1 million increase in funding for the public domain forests and woodlands management program. The increased funding, according to the department, will be used to improve utilization of small-diameter wood materials, improve forest health, and provide entrepreneurial opportunities in the wood product industry. We included information in the report regarding BLM’s recruiting efforts and its request for additional funding. The department also made technical clarifications, which we incorporated as appropriate. The department’s comments are reprinted in appendix IV. To determine the legal framework for BLM timber sales on public domain lands, we reviewed laws and regulations governing BLM’s timber sales activities. We also reviewed policy documents issued by headquarters and, if available, supplemental guidance issued by state and field locations as it relates to timber sales activities. To determine the trend in the volume of timber that BLM offered for sale from public domain lands, we obtained BLM information on the volumes and composition—sawtimber, firewood, posts, poles, and other wood products—of timber offered for sale by state office for fiscal years 1990 through 2002. We reviewed information contained in BLM’s Timber Sale Information System and its annual publication, Public Land Statistics. 
To determine what factors contributed to the trend in the volume of timber offered for sale from public domain lands from 1990 to 2002, we met with BLM headquarters officials and visited or contacted officials at 9 of the 12 BLM state offices and six field offices—two each in the states of Colorado, Idaho, and Montana. We discussed with these officials how their respective offices established timber sale goals, allocated forestry program funding, and monitored accomplishment of planned timber sales. We also discussed with these officials BLM’s management emphasis on improving forest health, and the trends in (1) market conditions for timber and other wood products and (2) BLM funding and staffing. In addition, we reviewed BLM’s budget justifications, strategic and annual plans and reports, land use plans, and other materials related to BLM’s timber sales activities. To gain further perspective on market conditions in the timber industry, we interviewed officials and reviewed timber industry research publications from The University of Montana. Finally, to gain a more detailed understanding of timber sales activities on public domain lands, we met with officials in three BLM state offices—Colorado, Idaho, and Montana—and visited several BLM timber sale projects that were ongoing or had been completed recently. We conducted our review from May 2002 through May 2003 in accordance with generally accepted government auditing standards. We will send copies of this report to the Secretary of the Interior; the Director of the Bureau of Land Management; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call me at (202) 512-3841. Key contributors to this report are listed in appendix V. Table 1 shows the number of acres of forests and woodlands and their total for each BLM state office. Table 2 identifies the volume, in board feet, of sawtimber, firewood (cords), posts, poles, and other wood products offered for sale from public domain lands from fiscal years 1990 through 2002. Table 3 shows the volume, in board feet, of timber offered for sale in fiscal years 1990 and 2002, and the difference in volume, by BLM state office. The following are GAO comments on the Department of the Interior’s letter dated June 5, 2003.
1. We changed the title to be more specific to public domain lands.
2. In accordance with our job objectives, our report addresses the trend in the volume of timber offered for sale from both public domain forests and woodlands. Furthermore, the report notes that woodlands typically have significantly lower productivity than forests.
3. We deleted reference to the federal regulations generally not requiring mitigation of adverse impacts resulting from operations on public domain lands. We added information to clarify that the federal regulations referred to in the draft report were those that implement the National Environmental Policy Act. The department agreed with this clarification.
4. We agree that the change of emphasis has affected the volume of timber offered for sale, which is already clearly articulated in the report.
5. We agree that both the budget and the volume of timber offered for sale have declined significantly. We have included a reference to the budgetary decline in a section heading.
6.
We agree that the volume of timber offered for sale from BLM’s public domain lands is small compared to offerings from Forest Service or state or private land. As the report indicates, the Forest Service offered 4 billion board feet of timber for sale from national forests in 1997, while BLM offered 35 million board feet—21 million board feet of sawtimber and 14 million board feet of other wood products—from public domain lands. Also, the report points out that about 7 percent of the nation’s domestically produced timber and wood products come from federally managed forests, which include BLM and Forest Service forests. Therefore, the remaining 93 percent is from nonfederal lands, which include state and private lands.

Barry T. Hill (202) 512-3841 ([email protected])

In addition to the above, Andrew S. Bauck, Linda L. Harmon, Richard P. Johnson, Chester M. Joy, Roy K. Judy, Rosellen McCarthy, Jonathan S. McMurray, Paul E. Staley, and Amy E. Webbink made key contributions to this report.
For several decades, debate over how to balance timber sales with resource protection and recreational use on federally managed lands has been at the heart of controversy surrounding federal land management. The Department of the Interior's Bureau of Land Management (BLM) is one of the federal agencies that manages some of the nation's forests--about 53 million acres--under its public domain forestry management program and offers timber for sale from these lands. With regard to BLM's offerings of timber for sale, congressional requesters asked GAO to determine (1) the statutory framework for BLM timber sales, (2) the trend in BLM timber volume offered for sale, and (3) factors contributing to any observed trends. GAO reviewed laws, regulations, and BLM policy governing BLM timber sales. GAO obtained and reviewed data on the volumes and composition of BLM timber sale offerings from fiscal years 1990 through 2002 and met with agency officials and others to identify factors affecting timber sale offering trends and their importance. A variety of land management and other environmental laws provide the statutory framework for timber sales on BLM public domain land. In particular, the Federal Land Policy and Management Act permits timber sales as one of several uses for BLM public lands. Timber sales also must comply with other environmental laws, such as the National Environmental Policy Act, the Endangered Species Act, and the Clean Water Act. From 1990 to 2002, the volume of timber offered for sale by BLM declined about 74 percent. Declines were experienced for each of the timber's components--sawtimber (trees or logs suitable for conversion into lumber) and other wood products (small logs used to make firewood, posts, and poles). Consequently, in 2002, the proportion of sawtimber in the total volume offered for sale was less than it was in 1990. The principal factor contributing to the decline in timber volume was the governmentwide shift in forestry program emphasis beginning in the late 1980s from timber production to enhancing forest ecosystem health. This shift was based on the need to provide more protection for nontimber resources and to place a greater emphasis on the removal of smaller trees to reduce the risks of insects, fire, and disease. As a result, according to BLM officials, timber became a by-product rather than the focus of BLM's management of its public domain forests.
Historically, patient health information has been scattered across paper records kept by many different caregivers in many different locations, making it difficult for a clinician to access all of a patient’s health information at the time of care. Lacking access to these critical data, a clinician may be challenged in making the most informed decisions on treatment options, potentially putting the patient’s health at risk. The use of technology to electronically collect, store, retrieve, and transfer clinical, administrative, and financial health information has the potential to improve the quality and efficiency of health care. Electronic health records are particularly crucial for optimizing the health care provided to military personnel and veterans. While in active military status and later as veterans, many DOD and VA personnel, along with their family members, tend to be highly mobile and may have health records residing at multiple medical facilities within and outside the United States. Each department operates its own electronic health record systems, which it relies on to create and manage patient health information. In particular, DOD currently relies on AHLTA, which comprises multiple legacy medical information systems that were developed from commercial software products and customized for specific uses. For example, the Composite Health Care System (CHCS), which was formerly DOD’s primary health information system, is used to capture information related to pharmacy, radiology, and laboratory order management. In addition, the department uses Essentris (also called the Clinical Information System), a commercial health information system customized to support inpatient treatment at military medical facilities. For its part, VA currently uses its integrated medical information system—VistA—which was developed in-house by VA clinicians and information technology (IT) personnel. The system consists of 104 separate computer applications, including 56 health provider applications; 19 management and financial applications; 8 registration, enrollment, and eligibility applications; 5 health data applications; and 3 information and education applications. The sharing of health information among organizations is especially important because the health care system is highly fragmented, with care and services provided in multiple settings, such as physician offices and hospitals, that may not be able to coordinate patient medical care records. Thus, achieving interoperability provides a means for sharing information among providers, such as between DOD’s and VA’s health care systems. The Office of the National Coordinator for Health IT (ONC), within the Department of Health and Human Services, has issued draft guidance describing interoperability as:
1. the ability of systems to exchange electronic health information and
2. the ability to use the electronic health information that has been exchanged from other systems without special effort on the part of the user.
Similarly, the fiscal year 2014 NDAA defines interoperability, as used in the provision governing the departments’ electronic health records, as “the ability of different electronic health records systems or software to meaningfully exchange information in real time and provide useful results to one or more systems.” Thus, in these contexts, interoperability allows patients’ electronic health information to be available from provider to provider, regardless of where the information originated. 
Achieving interoperability depends on, among other things, the use of agreed-upon health data standards to ensure that information can be shared and used. If electronic health records conform to interoperability standards, they potentially can be created, managed, and consulted by authorized clinicians and staff across more than one health care organization, thus providing patients and their caregivers the information needed for optimal care. Information that is electronically exchanged from one provider to another must adhere to the same standards in order to be interpreted and used in electronic health records, thereby permitting interoperability. In the health IT field, standards may govern areas ranging from technical issues, such as file types and interchange systems, to content issues, such as medical terminology. On a national level, ONC has been assigned responsibility for identifying health data standards and technical specifications for electronic health record technology and overseeing the certification of this technology. In addition to exchanging the information, systems must be able to use the information that is exchanged. Thus, electronic health record technology has the potential to improve the quality of care that patients receive and to reduce health care costs, if the technology is used in a way that improves providers’ and patients’ access to critical information. For example, with interoperability, medical providers have the ability to query data from other sources while managing chronically ill patients, regardless of geography or the network on which the data resides. Since 1998, DOD and VA have relied on a patchwork of initiatives involving their health information systems to exchange information and to increase electronic health record interoperability. These have included initiatives to share viewable data in existing (legacy) systems; link and share computable data between the departments’ updated health data repositories; develop a virtual lifetime electronic health record to enable private sector interoperability; implement IT capabilities for the first joint federal health care center; and jointly develop a single integrated system. Table 1 provides a brief description of the history of these various initiatives. In addition to the initiatives mentioned in table 1, DOD and VA previously responded to provisions in the fiscal year 2008 NDAA directing the departments to jointly develop and implement fully interoperable electronic health record systems or capabilities in 2009. The act also called for the departments to set up an interagency program office to be a single point of accountability for their efforts to implement these systems or capabilities by the September 30, 2009, deadline. In January 2009, the IPO completed its charter, articulating, among other things, its mission and functions with respect to attaining interoperable electronic health data. Further, the departments’ Interagency Clinical Informatics Board established the following six interoperability objectives for meeting the departments’ data-sharing needs and facilitating compliance with the fiscal year 2008 NDAA: demonstrate initial network gateway operation, expand questionnaires and self-assessment tools, refine social history data, share physical exam data, expand Essentris in DOD, and demonstrate initial document scanning between the departments. 
The departments’ officials, including the co-chairs of the group responsible for representing the clinician user community, asserted that they had met the priorities established by the Interagency Clinical Informatics Board and, in conjunction with capabilities previously attained (e.g., the Federal Health Information Exchange and the Bidirectional Health Information Exchange), had met the deadline for achieving full interoperability as required by the act. Nonetheless, in prior reviews, we have identified a number of challenges the departments have faced in managing their efforts in response to the fiscal year 2008 NDAA and to address their common health IT needs. While these initiatives, collectively, have yielded increased data sharing in various capacities, we previously reported that they nonetheless experienced persistent management challenges and did not result in the fully interoperable electronic health record capabilities that the departments had long sought. We have also noted that the manner in which DOD and VA reported progress toward achieving interoperability lacked results-oriented (i.e., objective, quantifiable, and measurable) goals. Specifically, we noted that the departmental plans lacked associated performance goals and measures that are a necessary basis to provide the departments and their stakeholders with a comprehensive picture to effectively manage their progress toward realizing increased interoperability. In March 2011, the Secretaries of Defense and Veterans Affairs committed the two departments to the development of a new, joint integrated electronic health record (iEHR) system. Further, in May 2012, they announced their goal of implementing the integrated health record across both departments by 2017. According to program documentation, pursuing iEHR was expected to enable the departments to align resources and investments with common business needs and programs, resulting in a platform that would replace the two departments’ separate electronic health record systems with a common system. In addition, because it would involve both departments using the same system, this approach was expected to largely sidestep the challenges they had historically encountered in trying to achieve interoperability between separate systems. Toward this end, initial development plans called for the single, joint iEHR system to consist of 54 clinical capabilities that would be delivered in six increments between 2014 and 2017, with all existing applications in VistA and AHLTA continuing uninterrupted until full delivery of the new capabilities. The initiative was to deliver several common infrastructure components—an enterprise architecture; user interface; data centers; and system interface and data exchange standards. The system was to be primarily built by purchasing commercially available solutions for joint use, with noncommercial solutions developed or adopted only when a commercial alternative was unavailable. However, in February 2013, about 2 years after initiating iEHR, the departments’ Secretaries announced changes to their approach—essentially abandoning their effort to develop a single, integrated electronic health record system for both departments. This decision resulted from an assessment of the iEHR program that the Secretaries had requested in December 2012 because of their concerns about the program facing challenges in meeting deadlines, costing too much, and taking too long to deliver capabilities. 
The IPO reported spending about $564 million on iEHR between October 2011 and June 2013. In place of the iEHR initiative, DOD determined that it would buy a commercially available system to replace its existing AHLTA system and VA decided that it would modernize its existing VistA health information system. In this regard, DOD is pursuing the acquisition of a replacement system for its multiple legacy electronic health record systems under a new program—the DHMSM program. For its part, VA intends to enhance and modernize its existing VistA system under a new program called VistA Evolution. The departments indicated that they would ensure interoperability between their two new systems, and with other public and private health care providers. In December 2013, the IPO was re-chartered and recognized as the single point of accountability in the development and implementation of electronic health records systems or capabilities that allow for full interoperability of health care information between DOD and VA. According to the IPO charter, the office is responsible for, among other things, establishing technical and clinical standards and processes to ensure integration of health data between the two departments and other public and private health care providers. Further, it is to monitor and report on the departments’ progress in implementing the technical standards during the development of their respective systems. It is also to coordinate with the departments to ensure that advances in interoperable capabilities enhance the quality, safety, efficiency, and effectiveness of health care services. While the departments have chosen their current approach to modernize two separate systems, we have previously reported that they did not substantiate their claims that the current approach would be less expensive and faster than the single-system approach. Further, we have noted that the departments’ efforts to modernize their two separate systems were duplicative. We stressed that major investment decisions—including terminating or significantly restructuring an ongoing program—should be justified using analyses that compare the costs and schedules of alternative proposals. Accordingly, we recommended that DOD and VA develop a cost and schedule estimate for their current approach from the perspective of both departments that includes the estimated cost and schedule of VA’s VistA Evolution program, DOD’s DHMSM program, and the departments’ joint efforts to achieve interoperability between the two systems, and then compare the cost and schedule estimates of the current and previous (i.e., single-system) approaches and, if applicable, provide a rationale for pursuing a more costly or time-consuming approach. The departments agreed with our recommendation and stated that, while initial comparisons indicated that their current approach would be more cost effective, they would continue to refine cost estimates as part of both departments’ acquisition programs. DOD, VA, and the IPO have undertaken various activities aimed at achieving interoperability between the two departments’ electronic health record systems. In this regard, DOD and VA have initiated work focused on near-term objectives including standardizing their existing health data and making them viewable by both departments’ clinicians in an integrated format. The departments have also developed plans related to their efforts to modernize their respective electronic health record systems. 
Further, the IPO has issued guidance outlining the technical approach for achieving interoperable capabilities between the departments’ systems. Even with the actions taken, however, the two departments did not, by the October 2014 deadline established in the fiscal year 2014 NDAA for compliance with national data standards, certify that all health care data in their systems complied with national standards and were computable in real time. Further, the departments’ system modernization plans identify a number of key activities to be implemented beyond the 2016 deadline established in the act, suggesting that deployment of the new systems with interoperability capabilities will not be completed across the departments until years later. In addition, while the IPO has begun steps to measure and report on the progress of the exchange of health data, its efforts have not included the use of outcome-oriented metrics and established goals essential to gauging the extent to which interoperability is being delivered and having an impact on improving health outcomes.

As part of the approach toward achieving interoperability between their electronic health record systems, both DOD and VA, along with the IPO, have taken actions focused on near-term objectives including standardizing data and expanding the functionality and availability of patient health information. Specifically, both departments have taken actions to:

• Analyze data related to 25 health data domains that were identified and prioritized by the Interagency Clinical Informatics Board and map the data in their respective electronic health record systems—AHLTA and VistA—to health data standards identified by the IPO. Such standards include RxNorm and the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), as illustrated in the sketch below.

• Expand functionality of the Joint Legacy Viewer—a tool that provides a real-time, integrated, categorized, and chronological view of electronic health record information contained in existing DOD and VA systems. For example, it allows both departments to share certain healthcare data (e.g., patient demographics, allergies, medications) in a viewable interface that is available to clinicians. In expanding its functionality, the departments took steps to make more computable (mapped) data from both departments available via the Joint Legacy Viewer and to increase user access to this tool.

• Issue plans for their respective electronic health records modernization or acquisition programs consistent with IPO guidance that describes the technical approach to interoperability.

With regard to its specific actions, DOD’s recent work on interoperability has been implemented by the Defense Medical Information Exchange program. This program is focused on increasing the amount of data shared with VA and with the private sector. The program has also been working to make more data viewable through the Joint Legacy Viewer, consolidate DOD data flow through a single exchange mechanism to achieve the near-term interoperability objectives, and prepare for the deployment of the department’s new electronic health record. According to the program manager, the program has performed the infrastructure testing intended to ensure that DOD and VA have the necessary capacity to accommodate the increasing number of users of the Joint Legacy Viewer. 
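To make the first of the actions listed above more concrete, the sketch below maps hypothetical local clinical entries to standard vocabularies such as RxNorm and SNOMED CT. It is illustrative only: the local codes, the crosswalk table, and the function are invented and are not drawn from AHLTA, VistA, or IPO guidance, and the standard codes are shown for illustration rather than as authoritative mappings. Real terminology standardization involves far more extensive curation and validation than this simple lookup suggests.

```python
# Minimal, hypothetical sketch of mapping local clinical entries to national
# standard vocabularies (e.g., RxNorm for medications, SNOMED CT for problems).
# The local codes and the crosswalk below are invented for illustration and do
# not come from AHLTA, VistA, or IPO documentation.

# Hypothetical crosswalk from a facility's local codes to standard codes.
LOCAL_TO_STANDARD = {
    ("medication", "LOC-MED-0417"): {"system": "RxNorm", "code": "197361",
                                     "display": "amlodipine 5 MG oral tablet"},
    ("problem", "LOC-PRB-0032"): {"system": "SNOMED CT", "code": "38341003",
                                  "display": "hypertensive disorder"},
}

def to_standard(record_type, local_code):
    """Return the standard-vocabulary entry for a local code, or None if unmapped."""
    return LOCAL_TO_STANDARD.get((record_type, local_code))

# A record expressed only in local terms cannot be reliably interpreted by
# another system; once mapped, both systems can compute on the same concept.
entry = to_standard("medication", "LOC-MED-0417")
if entry:
    print(f'{entry["system"]} {entry["code"]}: {entry["display"]}')
else:
    print("unmapped local code - would be flagged for terminology review")
```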
According to DOD documentation, during fiscal years 2015 and 2016, the program’s plans include continuing to enhance the Joint Legacy Viewer by adding additional private sector data and incorporating capabilities for viewing data from existing tools, such as the Bidirectional Health Information Exchange. DOD also issued a request for proposals for the DHMSM program in August 2014 that described the department’s plans to replace its legacy systems and acquire a modernized electronic health record system with interoperable capabilities across military operations. The department has developed a series of planning documents, including an acquisition strategy that called for awarding the DHMSM contract by the end of fiscal year 2015. The department noted in a January 2015 briefing to Congress that it plans to reach initial operational capability for the modernized system by December 2016. This is expected to include the deployment of modernized electronic health record software at eight locations. The time frame for reaching full operating capability for the new system is to be determined after contract award. For its part, VA has developed plans, such as the VA Interoperability Plan and the VistA 4 Roadmap. Both documents describe the department’s approach for modernizing its existing electronic health record system through the VistA Evolution program, while helping to facilitate interoperability with DOD’s system and the private sector. For example, the VA Interoperability Plan, issued in June 2014, describes activities intended to improve VistA’s technical interoperability, such as standardizing the VistA software across the department to simplify sharing data. In addition, the VistA 4 Roadmap, last revised in February 2015, describes four sets of functional capabilities that are expected to be incrementally deployed during fiscal years 2014 through 2018 to modernize the VistA system and enhance interoperability. According to the road map, the first set of capabilities was delivered by the end of September 2014 and included access to the Joint Legacy Viewer and a foundation for future functionality, such as an enhanced graphical user interface and enterprise messaging infrastructure. Another interoperable capability that is expected to be incrementally delivered over the course of the VistA modernization program is the enterprise health management platform. This platform is intended to provide clinicians with a customizable view of a longitudinal health record that can integrate data from DOD, VA, and third-party providers. Also, when fully deployed, VA expects the enterprise health management platform to replace the Joint Legacy Viewer. Additionally, with regard to its actions to facilitate department interoperability efforts, the IPO has developed guidance that describes a technical approach for standardizing health data, to include related roles and responsibilities and near-term actions intended to increase interoperability. Specifically, the IPO’s Healthcare Information Interoperability Technical Package, first issued in July 2014 and subsequently updated twice, describes a standards-based approach and technical objectives for interoperability. This guidance addresses issues such as how the DOD and VA systems are to exchange information consistent with national and international standards. It also identifies the standards that the IPO has selected for the 25 prioritized health data domains. 
The Joint Interoperability Plan, most recently updated in January 2015, is characterized by the IPO as a working document to be regularly updated. It summarizes the departments’ actions to increase interoperability, including discussions of completed actions as well as near-term goals and deliverables through 2016, such as improvements to the Joint Legacy Viewer and VistA Evolution capabilities and the DHMSM contract award, among others. In addition, the plan identifies challenges to achieving interoperability, such as the evolving nature of health care data standards and the complexity of integrating interoperable data with clinicians’ current workflows. Further, the plan includes four approved “use cases,” which are scenarios that help describe areas where increased interoperability would be most valuable. The use cases are intended to help identify additional requirements for the development and testing of interoperability capabilities beyond 2016. The IPO’s Health Data Interoperability Management Plan, issued in September 2014, outlines a high-level approach and roles and responsibilities for achieving electronic health data exchange and terminology standardization for DOD, VA, other government entities, and private sector healthcare partners. The plan also establishes the Health Data Interoperability Standards Lifecycle Model, which describes the process by which data standards are expected to be selected—consistent with national and international health data standards as they evolve and are implemented by the departments. Overall, the recent actions taken by DOD, VA, and the IPO have focused on ensuring that health care data used by the two departments’ existing systems, AHLTA and VistA, are compliant with national standards. In particular, following the technical approach outlined by the IPO, the departments have increased the amount of data from these systems that are mapped to national standards; they also have made that data available in the Joint Legacy Viewer. Nonetheless, DOD and VA program officials acknowledged that these actions did not result in the two departments meeting the October 1, 2014, deadline established in section 713 of the fiscal year 2014 NDAA for certifying that all health care data in their systems complied with national standards and were computable in real time. While the departments did provide Congress with an update on their progress, DOD officials stated that the department plans to certify that it has met the requirement in the next several months. VA officials stated that the department plans to certify that it has met the requirement later in calendar year 2015. While important actions are being taken, the departments have indicated that they do not intend to complete a number of key activities related to the deployment of their modernized systems and interoperability until after the December 31, 2016, statutory deadline for deploying modernized electronic health record software. DHMSM program officials have acknowledged that additional project details to guide the department’s efforts towards achieving full operating capability for the modernized system have yet to be determined. The officials said that they expect these details to be developed after the contract for the DHMSM service provider integrator and system solution has been awarded. The department also currently estimates full operational capability to occur at the end of fiscal year 2022. 
Thus, while initial deployment of an interoperable electronic health record system is expected at eight locations by the statutory deadline, additional work beyond 2016, which DOD has yet to fully define, will be required to extend access to the modernized software, provide interoperable capabilities throughout the department, and include all users who would benefit from access. In addition, deployment of VA’s modernized VistA system at all locations and for all necessary users is not scheduled until 2018. The department plans to deliver functionality in a phased approach in four product releases over 5 years to improve performance and increase interoperable capabilities in additional clinical areas. For example, according to its plans, the department intends to deliver additional features through fiscal year 2018 to improve the functionality of the enterprise health management platform. It also plans to increase interoperable capabilities in additional clinical areas as it replaces its legacy scheduling system, an effort intended to reduce VA patient wait times. Thus, additional actions beyond the 2016 statutory deadline are still planned for deploying a modernized electronic health record system. Prior work and guidance that we have issued stress the importance of measuring program performance, which is the ongoing monitoring and reporting of accomplishments. This guidance further states that performance measurement should evaluate both processes and outcomes related to program activities. Specifically, process metrics address the type or level of program activities conducted and the direct products or services delivered by a program, such as the number of electronic health records queried in an hour or day. Outcome metrics address the results of products and services, such as improvements in the quality of health care services or clinician satisfaction. Outcome metrics can help in assessing the status of program operations, identifying areas that need improvement, and ensuring accountability for end results. Further, measuring program performance is essential for monitoring progress toward pre-established goals and should be tied to program goals that allow organizations to demonstrate and report the degree to which desired results are achieved. The IPO’s responsibilities include monitoring and reporting on progress made by the departments to standardize their health care data and coordinating with the departments to ensure that interoperable capabilities enhance health care services. With DOD and VA continuing their activities to increase the sharing of health care data, the IPO has begun taking steps to measure and report on the progress of the two departments’ efforts. Toward this end, the office has issued guidance describing process metrics that are to be tracked and formally reported to the Health Executive Committee and congressional stakeholders. For example, among these metrics, the Health Data Interoperability Management Plan calls for tracking the percentage of data domains within the departments’ current health information systems that are mapped to selected national standards. The plan also identifies metrics to be collected and reported that relate to tracking health information exchanges through the departments’ existing initiatives. 
These metrics include, for example, the number of laboratory reports and the number of consultation reports exchanged from DOD to VA through the Federal Health Information Exchange for separated service members and the number of patient queries by providers from both departments through the Bidirectional Health Information Exchange. The measurements are included in a DOD/VA quarterly data sharing report that the departments prepare and send to Congress. The report is intended to provide a snapshot of the amount of data shared between the departments. While the IPO has developed process metrics and begun reporting the departments’ progress related to standardizing and exchanging health data, the office has not specified outcome-oriented metrics and established goals that are important to gauging the impact that increased interoperability has on improving health care services. The IPO’s Health Data Interoperability Management Plan indicates that the DHMSM and VistA Evolution programs are to develop outcome metrics related to their respective acquisition and modernization programs. However, the guidance does not identify outcome metrics or establish goals that the IPO expects to use to measure progress toward improving health care services resulting from the departments’ interoperable capabilities. The IPO Acting Director said that he has tasked a team with working to identify better metrics to capture both the technical and clinical progress resulting from interoperability efforts between the departments. According to the official, this team is working with DOD, VA, and ONC subject matter experts to identify metrics that would be more meaningful for determining the impact of increased interoperability, such as metrics on the quality of a user’s experience and improvements in health outcomes. However, as of late May 2015, the IPO had not established a time frame for when the metrics would be completed and incorporated into its guidance. Officials of the departments and the IPO explained that defining appropriate outcome metrics for interoperability is not just a DOD and VA issue; rather, it is a national challenge to identify how to measure interoperability and what data are needed. Using an effective outcome-based metrics approach could provide DOD and VA a more accurate, ongoing picture of their progress toward achieving interoperability and the value and benefits generated. Doing so would also better position them to assess and report on the status of interoperability-related activities in terms of results, and to determine areas that need improvement. Until they establish a time frame, complete steps to define outcome metrics and goals, and incorporate these into IPO guidance, the departments and the IPO risk not knowing the status of program operations, not identifying areas that need improvement, and not being able to ensure accountability for end results. DOD and VA, with guidance from the IPO, have taken actions to increase interoperability between their electronic health record systems, as called for in the fiscal year 2014 NDAA. However, the departments have indicated that they do not intend to complete a number of key activities related to the deployment of their modernized systems and interoperability until after the December 31, 2016, statutory deadline for deploying modernized electronic health record software. To address the 2016 requirement, DOD has issued plans and announced the award of a contract for acquiring a modernized system to include interoperability capabilities across military operations. 
VA, for its part, has issued plans describing an incremental approach to modernizing its existing electronic health records system. However, these plans—if implemented as currently described—show that interoperability delivered by the new systems is not expected to be completely deployed until after 2018, which is beyond the statutory deadline. To date, the departments have kept Congress informed of their efforts, and we believe it is critical that they continue to do so. Further, the IPO has taken steps to develop guidance that includes process-oriented metrics for monitoring and reporting on the increasing exchange of health information between the departments. However, it has yet to develop outcome-oriented metrics that are important to gauging the impact that increased interoperability has on improving health care services. While IPO officials have said that a team has been tasked to identify metrics that would be more meaningful for determining the impact of increased interoperability, no time frame has been identified for when this team will report its results and when the IPO plans to incorporate these metrics into its guidance. Further, the office has yet to identify goals that can be used to indicate the status of interoperability-related activities and the extent to which progress is being made to achieve full interoperability of health care information by the departments. Without defining outcome-oriented metrics and related goals and incorporating these into the current approach, the departments and the IPO will not be positioned to assess and report on the status of interoperability-related activities and determine areas that need improvement. To facilitate oversight and inform decision making regarding their respective departments’ interoperability-related activities, we recommend that the Secretaries of Defense and Veterans Affairs, working with the Interagency Program Office, take the following three actions:

• establish a time frame for identifying outcome-oriented metrics,

• ensure related goals are defined to provide a basis for assessing and reporting on the status of interoperability-related activities and the extent to which interoperability is being achieved by the departments’ modernized electronic health record systems, and

• update IPO guidance to reflect the metrics and goals identified.

We provided a draft of this report to VA and DOD and received written comments, which are reprinted in appendixes II and III, respectively. In addition, VA provided technical comments, which we incorporated, as appropriate. In its comments, VA generally agreed with our conclusions and concurred with our recommendations. With regard to our recommendation to establish a time frame for identifying outcome-oriented metrics, the department described recent actions it has taken toward the development of interoperability milestones and metrics, which are intended to serve as a blueprint for the IPO’s efforts to synchronize outcome-oriented metrics between DOD and VA. In addition, VA noted that it has begun to develop standardized metrics related to VistA Evolution that are tied to desired business goals. VA also described its collaborative efforts with DOD and the IPO to mature interoperability metrics into more meaningful, outcome-oriented metrics and to establish timelines for formal reporting through IPO guidance and data sharing reports. 
In its comments, DOD also concurred with our recommendations and stated that the department continues to work with the IPO and VA stakeholders to develop baseline interoperability metrics. The department added that it plans to meet with the IPO and VA on a regular basis to mature these metrics into more meaningful, outcome-oriented measures. Nevertheless, DOD took issue with selected aspects of our discussion related to requirements in section 713 of the fiscal year 2014 NDAA. For example, the department contended that its limited initial deployment of the new system, combined with the pending documentation of interoperability, will satisfy the statutory requirement to “deploy modernized electronic health record software supporting clinicians of the departments by no later than December 31, 2016, while ensuring continued support and compatibility with the interoperability platform and full standards-based interoperability.” We disagree with DOD’s position and reaffirm our finding that DOD’s and VA’s plans—if implemented as currently described—indicate that deployment of the new systems will not be completed across the departments until after 2016 and that much additional work is needed to extend access to the modernized software to all relevant users and department locations. The history and framework of statutory requirements have long called for the departments to take steps to achieve interoperable health record capabilities. In this regard, the fiscal year 2008 NDAA included a mandate to achieve fully interoperable health record capabilities. Further, in section 713 of the fiscal year 2014 NDAA, Congress stated that the departments “…have failed to implement a solution that allows for seamless electronic sharing of medical health care data…” and that “most of the information shared…is not standardized or available in real time to support all clinical decisions.” We recognize that section 713 of the fiscal year 2014 NDAA does not qualify the degree of deployment. Nevertheless, we believe it is reasonable to expect that electronic health records interoperability between the departments should be demonstrated by more than a limited number of users and locations and, more importantly, should be made available to all relevant clinicians at all relevant locations as expeditiously as possible. DOD also disagreed that the fiscal year 2014 NDAA specified a date by which certification of compliance with existing national data standards was required and stated that the timetable for certification is distinct from the October 1, 2014, deadline for compliance with national standards. We assert that the October 1, 2014, deadline in section 713(g)(1) of the fiscal year 2014 NDAA is linked to the departments’ certification that is required in section 713(g)(2). As DOD points out, the certification requirement depends on achieving the capability described in section 713(b)(1). This capability is to be “interoperable with an integrated display of data…by complying with the national standards…”. The national standards referred to here are those national data standards described in section 713(g)’s certification provision. Section 713(g)(2) requires a certification that DOD and VA have complied with data standards referred to in section 713(g)(1). These data standards are described in section 713(g)(1) as the existing national data standards with which all health data in the DOD and VA systems must comply by October 1, 2014. 
DOD and VA’s required certification involves certification of compliance with existing national data standards and that compliance was required by October 1, 2014. Thus, we stand by our statement in the report on this matter. Further, the department expressed concern about our use of the terms “full” deployment and “enhanced” interoperability, stating that these terms are not cited in the act. We agree that section 713 of the fiscal year 2014 NDAA does not use the term “full” deployment and we have revised our report accordingly to remove references to this term. On the other hand, “interoperability enhancements” is a term that VA has used when describing features to be delivered throughout its phased approach to the VistA Evolution modernization. We used the term “enhanced” interoperability to describe the improvements planned as the modernized systems mature beyond October 1, 2014. However, to resolve any ambiguity, we have revised our report to remove the term. We are sending copies of this report to the appropriate congressional committees, the Secretary of Veterans Affairs, the Secretary of Defense, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objective was to evaluate the actions taken by the Department of Defense (DOD), the Department of Veterans Affairs (VA), and the Interagency Program Office (IPO) to plan and measure the progress toward achieving interoperability between the departments’ electronic health record systems. To evaluate the actions taken with regard to planning for electronic health record interoperability between the departments’ systems, we reviewed our previous work related to electronic health records and DOD and VA efforts to develop health information systems, interoperable health records, and interoperability standards to be implemented in federal health care programs. We obtained and analyzed DOD, VA, and IPO documentation to evaluate the departments’ plans and identify recent actions taken toward achieving interoperability consistent with the requirements specified in the National Defense Authorization Act (NDAA) for Fiscal Year 2014. Specifically, we analyzed DOD, VA, and IPO plans to evaluate how they relate to NDAA requirements that direct the DOD and VA to each (1) ensure that all health care data contained in DOD’s Armed Forces Health Longitudinal Technology Application (AHLTA) and VA’s Veterans Health Information Systems and Technology Architecture (VistA) systems complied with national standards and were computable in real time by October 1, 2014, and (2) deploy modernized electronic health record software supporting clinicians by no later than December 31, 2016, while ensuring continued support and compatibility with the interoperability platform and full standards-based interoperability. 
Further, we reviewed and analyzed DOD’s request for proposals, the Defense Healthcare Management System Modernization Program Acquisition Strategy, VA’s VistA 4 Roadmap, the VistA Evolution Program Plan, VA’s Interoperability Plan, and the IPO guidance including the Healthcare Information Interoperability Technical Package, the Health Data Interoperability Management Plan, and the Joint Interoperability Plan. In addition, we identified what the departments plan to deliver by the 2014 and 2016 deadlines and compared planned activities to the statutory requirements. To evaluate the actions taken by DOD, VA, and the IPO to measure the progress toward achieving interoperability between the departments’ electronic health record systems, we reviewed the December 2013 IPO Charter. Our review determined that DOD and VA have assigned the IPO responsibility for, among other things, monitoring and reporting on the progress of the departments’ use of national and international health data standards; compliance with the implementation of IPO’s technical standards; and coordinating and communicating with the departments to ensure advances in interoperability capabilities enhance the quality, safety, efficiency, and effectiveness of health care services. Accordingly, we reviewed IPO documentation, such as the Healthcare Information Interoperability Technical Package, the Health Data Interoperability Management Plan, the Joint Interoperability Plan, the IPO Executive Committee quarterly reports, and DOD/VA quarterly data sharing reports to identify performance measures related to measuring and reporting progress toward achieving interoperability. We compared the performance measures identified in program documentation and reported to Congress with our guidance related to process and outcome-oriented metrics and goals reported in our prior work. We supplemented our analyses with interviews of DOD, VA, and IPO officials with knowledge of the interoperability efforts, including the IPO Acting Director (also the current Program Executive Officer for the Defense Healthcare Management Systems), IPO Deputy Director, DOD officials from the Defense Medical Information Exchange program, VA officials from the Office of Information and Technology and the Veterans Health Administration with knowledge of the VistA Evolution program, and members of the Interagency Clinical Informatics Board. We conducted this performance audit from September 2014 to August 2015, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. In addition to the contact named above, Mark Bird (Assistant Director), Neelaxi Lakhmani (Assistant Director), Kami Brown, Nancy Glover, Jennifer Stavros-Turner, and Marshall Williams, Jr., made key contributions to this report.
DOD and VA operate two of the nation's largest health care systems, serving approximately 16 million veterans and active duty service members and their beneficiaries, at a cost of more than $100 billion a year. For almost two decades, the departments have been engaged in various efforts to advance DOD and VA electronic health record interoperability. Among their most recent efforts, the DOD and VA Secretaries have committed the departments to achieving interoperability between their separate electronic health record systems. The Consolidated Appropriations Act, 2014, and accompanying Joint Explanatory Statement included a provision for GAO to review the departments' efforts. GAO evaluated the actions taken by DOD, VA, and the IPO to plan for and measure the progress toward achieving interoperability between the departments' electronic health record systems. GAO reviewed relevant program documents and interviewed agency officials. The Departments of Defense (DOD) and Veterans Affairs (VA), with guidance from the Interagency Program Office (IPO) that is tasked with facilitating the departments' efforts to share health information, have taken actions to increase interoperability between their electronic health record systems. Among other things, DOD and VA have initiated work focused on near-term objectives, including standardizing their existing health data and making them viewable by both departments' clinicians in an integrated format. The departments also have developed longer-term plans to modernize their respective electronic health record systems. For its part, the IPO has issued guidance outlining the technical approach for achieving interoperability between the departments' systems. Even with the actions taken, DOD and VA did not, by the October 1, 2014, deadline established in the National Defense Authorization Act (NDAA) for Fiscal Year 2014 for compliance with national data standards, certify that all health care data in their systems complied with national standards and were computable in real time. Both departments stated that they intend to do so later in calendar year 2015. Further, the departments' system modernization plans identify a number of key activities to be implemented beyond December 31, 2016—the deadline established in the NDAA for the two departments to deploy modernized electronic health record software to support clinicians while ensuring full standards-based interoperability. Specifically, DOD has issued plans and announced the contract award for acquiring a modernized system to include interoperability capabilities across military operations. In addition, VA has issued plans describing an incremental approach to modernizing its existing electronic health records system. These plans—if implemented as currently described—indicate that deployment of the new systems with interoperability capabilities will not be completed across the departments until after 2018. The IPO has taken steps to develop process metrics intended to monitor progress related to the data standardization and exchange of health information consistent with its responsibilities. For example, it has issued guidance that calls for tracking metrics, such as the percentage of data domains within the departments' current health information systems that are mapped to national standards. 
However, the office has not yet specified outcome-oriented metrics and established related goals that are important to gauging the impact that interoperability capabilities have on improving health care services for shared patients. IPO officials said this work is ongoing and that a team is working with DOD, VA, and subject matter experts to identify metrics that would provide more meaningful measures of the impact of increased interoperability. However, the IPO has not identified a time frame for when this team will report its results and when the IPO plans to incorporate these metrics and goals into its guidance. Without ensuring that outcome-oriented metrics and related goals are defined and incorporated into the current approach, the departments and the IPO will not be positioned to assess and report on the status of interoperability-related activities and determine areas that need improvement. GAO recommends that DOD and VA, working with the IPO, establish a time frame for identifying outcome-oriented metrics; define related goals to provide a basis for assessing and reporting on the status of interoperability; and update IPO guidance to reflect the metrics and goals identified. DOD and VA concurred with GAO's recommendations.
In 1995, we compared the 4 common years (1996-99) in DOD’s 1995 and 1996 FYDPs and reported that DOD projected substantial shifts in funding priorities. Specifically, about $27 billion in planned weapon system modernization programs had been eliminated, reduced, or deferred to 2000 or later. Also, the military personnel, operation and maintenance, and family housing accounts had increased by over $21 billion and were projected to continue to increase to 2001 to support DOD’s emphasis on readiness and quality-of-life programs. Moreover, the total DOD program was projected to increase by about $12.6 billion. The Secretary of Defense wants to reform the acquisition process and reduce and streamline infrastructure to help pay the billions of dollars that DOD projects it will need to modernize the force. In our September 1995 report, we said that although DOD anticipated reducing infrastructure to achieve substantial savings, our analysis of the 1996 FYDP showed that savings accrued or expected to accrue from base closures and a smaller force appeared to be offset by increased funding for other infrastructure priorities, such as base operations and management headquarters. In May 1996, we analyzed the infrastructure portion of DOD’s 1997 FYDP and reported that infrastructure costs are projected to increase by about $9 billion, from $146 billion in 1997 to $155 billion in 2001. The FYDP includes anticipated future inflation. Therefore, changes in anticipated inflation affect the projected cost of the FYDP. The Secretary of Defense testified in March 1996 that the 1997 FYDP, which covers fiscal years 1997-2001, includes the funds to buy all of the programs in the 1996 FYDP plus billions of dollars in additional programs at less cost overall. According to DOD, the increase in programs at lower projected costs results because inflation estimates were substantially reduced for future DOD purchases, from 3 percent to about 2.2 percent for fiscal years 1997-2001. The executive branch substantially reduced its forecast of the inflation rate for fiscal years 1997 through 2002, resulting in a decline in the estimated costs of DOD’s purchases of about $45.7 billion, including about $34.7 billion over the 1997-2001 FYDP period. However, the price measure used in the executive branch’s projections had inherent limitations and has since been improved. Using a different price measure, CBO projected a much smaller drop in inflation and estimated that the future cost of DOD’s purchases would be reduced by only about $10.3 billion over the 1997-2001 period. The executive branch reduced its inflation forecasts from 3 percent per year for the period 1997-2001 to 2.2 percent per year, a reduction of 8/10 of 1 percentage point. As a result, DOD projected that the cost of defense purchases would decline by $34.7 billion for the 1997-2001 period and an additional $11 billion for 2002. Based on these projected cost reductions, the executive branch reduced DOD’s projected budgets for fiscal years 1997-2001 by about $15.2 billion. The executive branch allowed DOD to retain about $19.5 billion of the projected increase in purchasing power. The distribution of this assumed additional purchasing power was $4.3 billion in 1997, $3.9 billion in 1998, $4.6 billion in 1999, $3.8 billion in 2000, and $2.9 billion in 2001. According to DOD, about $6 billion of the $19.5 billion was applied to the military personnel and operation and maintenance accounts for must-pay bills such as for the military retired pay accrual and ongoing contingency operations. 
The remaining $13 billion was applied primarily to DOD’s modernization priorities. Funding was allocated to purchase trucks and other support equipment, accelerate the acquisition of next generation systems, upgrade existing systems, and fund Army base closure costs. A detailed list of these planned purchases is provided in appendix I. For more than a decade, OMB has used projections of the Commerce Department’s Bureau of Economic Analysis’ (BEA) implicit price deflator for gross domestic product (GDP) based on a “fixed-weighted” methodology to adjust the future costs of defense nonpay purchases other than fuel. According to OMB officials, anecdotal information for recent years suggests that changes in this measure have been an accurate gauge of inflation in DOD purchases. The fixed-weighted methodology was used to prepare the President’s fiscal years 1996 and 1997 budgets. Economists within the government and in private organizations generally recognize that the implicit price deflator based on a fixed-weighted methodology has inherent limitations, in part because it is derived from the values of goods and services based on a fixed base year such as 1987. This fixed-weighted methodology has in recent years tended to overstate economic growth and understate inflation as time progressed beyond the base year. Because of the limitations in the fixed-weighted methodology, BEA switched to a new “chain-weighted” inflation methodology, just after the President’s fiscal year 1997 budget had been prepared in January 1996. The “chain-weighted” methodology, which is continuously updated by using weights for 2 adjacent years, ensures that differences in relative prices, such as the drop in computer prices, will not distort overall GDP statistics. Economists have maintained that this methodology is superior to the fixed-weighted methodology. According to BEA officials, the improved methodology gives a more accurate measure of inflation because it eliminates the potential for cumulative errors under the old (fixed-weighted) methodology. For the 1997-2001 period, the executive branch projected an annual inflation rate of 2.2 percent as measured by the fixed-weighted methodology and 2.7 percent as measured by the chain-weighted methodology. In discussing the transition from the GDP implicit price deflator based on fixed weights to the chain-weighted GDP price deflator, OMB officials stated that the two differing numerical measures represent the same inflation, in the same economy, at the same time. According to the officials, the difference is “precisely analogous” to measuring the same temperature on Celsius or Fahrenheit scales. The only difference between the two measures is the methodology used. However, as a practical matter, OMB provides DOD a specific numerical index of inflation, and DOD applies this index to estimate future funding requirements. Therefore, the index used has a direct impact on DOD’s estimated future funding requirements. For example, our analysis shows that had DOD applied the new chain-weighted inflation assumption of 2.7 percent to develop its 1997-2001 FYDP rather than the fixed-weighted assumption of 2.2 percent, DOD’s increased purchasing power would be only about $12.7 billion, not $34.7 billion. OMB officials told us they have not decided what methodology they will use to project inflation for the next FYDP, which will encompass the 1998-2003 defense program. 
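To make the sensitivity concrete, the compounding arithmetic can be sketched in a few lines. The example below is only an illustration, not DOD's or OMB's actual costing model; the $250 billion-per-year baseline program is a hypothetical round number chosen to be of the same general magnitude as the annual defense program, and only the 3.0, 2.2, and 2.7 percent rates come from the discussion above.

```python
# Minimal sketch of the index arithmetic discussed above -- not DOD's or OMB's
# actual model. The baseline program is a hypothetical stand-in; only the
# 3.0, 2.2, and 2.7 percent rates come from the report.

BASELINE_REAL_PROGRAM = 250e9      # hypothetical constant real program, per year
OUT_YEARS = range(1, 6)            # five out-years, e.g., fiscal years 1997-2001

def then_year_cost(inflation_rate):
    """Nominal (then-year) cost of the same real program when each out-year
    is priced by compounding one assumed annual inflation rate."""
    return sum(BASELINE_REAL_PROGRAM * (1 + inflation_rate) ** year
               for year in OUT_YEARS)

cost_old_forecast = then_year_cost(0.030)   # 3.0 percent: prior assumption
cost_fixed_weight = then_year_cost(0.022)   # 2.2 percent: fixed-weighted guidance
cost_chain_weight = then_year_cost(0.027)   # 2.7 percent: chain-weighted measure

# The assumed "increased purchasing power" is the drop in nominal cost of the
# same program when a lower inflation rate is applied.
print(f"gain if 2.2% is applied: ${(cost_old_forecast - cost_fixed_weight) / 1e9:.0f} billion")
print(f"gain if 2.7% is applied: ${(cost_old_forecast - cost_chain_weight) / 1e9:.0f} billion")
```

Under these illustrative assumptions, the five-year difference comes to roughly $32 billion when 2.2 percent is applied and roughly $12 billion when 2.7 percent is applied, the same order of magnitude as the $34.7 billion and $12.7 billion figures discussed above. This is why the numerical index DOD receives, not just the change in the forecast, drives its estimated future funding requirements.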
However, in commenting on a draft of this report, DOD said that OMB has indicated its intention to adopt the chain-weighted methodology for budgeting beginning with the fiscal year 1998 submission. In addition, the President’s budget for fiscal year 1997 emphasized the limitations of the fixed-weighted methodology and featured the improved chain-weighted methodology in presenting economic assumptions for the future. If OMB uses the improved chain-weighted methodology to provide inflation guidance to DOD, DOD’s funding estimates for fiscal years 1998 and beyond could be affected. For example, on a chain-weighted basis, two major private forecasting firms currently project an inflation rate of about 2.5 percent per year over the next 5 years, which is a decline from the 2.7 percent chain-weighted inflation assumption that appears in the fiscal year 1997 budget. If OMB gives DOD an inflation projection of 2.5 percent per year for the 1998-2003 period, a question arises as to whether such a factor will be interpreted as an increase (from the 2.2 percent as measured by the fixed-weighted methodology) or a decrease (from the 2.7 percent as measured using the chain-weighted methodology). Without further guidance, DOD may increase its estimates of future funding requirements for inflation when inflation is projected to be lower than the earlier forecast. During consideration of the fiscal year 1997 defense budget, the Chairman of the Senate Committee on the Budget requested that CBO estimate the adjustments that should be made to DOD’s budget estimates through 2002 that would keep its purchasing power constant given lower inflation rates. CBO chose not to use the implicit price deflator for GDP based on the fixed-weighted methodology that OMB had used to calculate inflation because it had been replaced by the new chain-weighted methodology. Instead, because that measure had not been revised, CBO based its inflation forecast on the Consumer Price Index, which measures changes in the average cost of a fixed market basket of consumer goods and services. Neither the executive branch’s nor CBO’s estimate presumes any ability to forecast prices of goods and services purchased by DOD. Instead, the two estimates calculate the change in a general index of inflation and assume that prices of defense goods and services would change by the same amount. Using the Consumer Price Index, CBO projected a much smaller decrease in inflation between the 2 budget years than the executive branch did. Whereas the executive branch projected an 8/10 of 1 percent drop in inflation, CBO projected that inflation would drop only 2/10 of 1 percent. As a result, CBO projected that DOD’s purchasing power would increase by only about $10.3 billion for the 1997-2001 period. This estimate is about $24.4 billion less than DOD’s estimated $34.7 billion increase. Further, because the executive branch reduced DOD’s estimated 1997-2001 FYDP by about $15.2 billion, CBO’s estimate indicates that DOD’s real purchasing power was reduced by about $5 billion. In action on the fiscal year 1997 budget resolution, the Senate adjusted defense totals downward to reflect CBO’s more conservative estimate. The House did not make any adjustments for lower inflation. The conference agreement on the budget resolution recommended the Senate level for fiscal year 1997 and levels somewhat closer to the House amounts in later years. Our analysis shows that resource allocations in the 1997 FYDP vary considerably from the 1996 FYDP. 
These resource adjustments result primarily from inflation adjustments and transfers between accounts. Table 1 shows a year-to-year comparison of DOD’s 1996 and 1997 FYDPs by primary accounts. The following sections discuss some of the more significant changes in each of the primary accounts. Overall, funding for military personnel accounts increased by $4.7 billion for the 1997-2001 period, although DOD plans to reduce the number of military personnel below the levels reflected in last year’s FYDP. The increase primarily reflects (1) higher pay raises for fiscal years 2000 and 2001 than were included in the 1996 FYDP and (2) the transfer of U.S. Transportation Command costs from a revolving fund supported mainly by operation and maintenance accounts to the military personnel accounts. Programs that are expected to receive the largest funding increases are Army divisions ($1.5 billion) and Army force-related training ($1.6 billion). Other programs are projected to be reduced. Some of the largest declines are projected for Army National Guard support forces ($2.6 billion), Army Reserve readiness support ($1.6 billion), and Air Force permanent change-of-station travel ($650 million). The 1997 FYDP shows that DOD plans to lower active duty force levels in fiscal years 1998-2001. The planned smaller force would bring force levels below the permanent end strength levels set forth in the National Defense Authorization Act for fiscal year 1996 (P.L. 104-106). Table 2 shows the minimum force levels in the law and DOD’s planned reductions. The Commission on Roles and Missions recommended that DOD perform a quadrennial review to assess DOD’s active and reserve force structure, modernization plans, infrastructure, and other elements of the defense program and policies to help determine the defense strategy through 2005. The National Defense Authorization Act for fiscal year 1997 directed the Secretary of Defense to conduct the review in fiscal year 1997. Congress will have an opportunity to examine the assessment and recommendations of the review. The act also requires the Secretary of Defense to include in the annual budget request funding sufficient to maintain its prescribed permanent active end strengths. If DOD is precluded from implementing its planned personnel reductions, it will have to make other compensating adjustments to its overall program. The operation and maintenance accounts are projected to decrease by about $10.1 billion during the 1997-2001 period due to lower inflation rates. In addition, there were a number of funding reallocations among operation and maintenance programs from the 1996 FYDP to the 1997 FYDP. Programs that are projected to receive the largest gains include Army real property services ($3.9 billion), real property services training ($1.1 billion), and Navy administrative management headquarters ($1.5 billion). Programs that are projected to decrease the most include Navy servicewide support ($2.1 billion); defense health programs, including medical centers, station hospitals, and medical clinics in the United States ($2.3 billion); Army National Guard reserve readiness support ($1.4 billion); Army base operations ($4.2 billion); DOD environmental restoration activities ($1.3 billion); and DOD’s Washington headquarters services ($1 billion). Projected savings from the latest round of base closures are also less than were anticipated in the 1996 FYDP. 
The 1996 FYDP projected savings of $4 billion during 1997-2001 from the fourth round of base closures beginning in fiscal year 1996. The 1997 FYDP projects total savings of $0.6 billion, $3.4 billion less than the 1996 FYDP projection. The decrease in savings is primarily due to higher than anticipated base closure-related military construction costs for environmental cleanup activities in fiscal year 1997. Typically, the planned costs to conduct contingency operations have not been included in DOD’s budget submission. However, given that forces are deployed in Bosnia and Southwest Asia and these known expenses will continue into fiscal year 1997, DOD included $542 million for the Bosnian operations and $590 million for Southwest Asian operations in the President’s fiscal year 1997 budget. The Bosnian estimate was later revised to $725 million, and DOD has informally advised the Senate and House Committees on Appropriations of this increase. Most of these funds are in operation and maintenance accounts. The procurement accounts are projected to decrease by about $26 billion during the 1997-2001 period. About $15.3 billion of the reduction can be attributed to the use of the lower inflation rate. A comparison of the 1996 and 1997 FYDPs indicates that about $10.4 billion of the $26 billion reduction is due to a transfer of intelligence and classified program funding from the procurement accounts to classified research, development, test, and evaluation accounts. According to DOD officials, the programs are more accurately classified as research, development, test, and evaluation than procurement. The comparison also shows that DOD eliminated a $5.4-billion program in the procurement accounts that was called “modernization reserve” in the 1996 FYDP. According to DOD officials, this funding was redistributed among procurement programs. The 1997 FYDP continues the downward adjustments in the procurement accounts, which we first identified in our September 1995 report on the fiscal years 1995 and 1996 FYDPs. We reported that the fiscal year 1995 FYDP, which was the first FYDP to reflect the bottom-up review strategy, reflected relatively high funding levels for procurement of weapon systems and other military equipment. The funding level for procurement was estimated to be $60 billion by fiscal year 1999. Since the 1995 FYDP, DOD has steadily reduced programmed funding levels for procurement in favor of short-term readiness, quality-of-life improvements, research and development, and infrastructure activities. DOD now projects that the procurement account will not contain $60 billion until 2001. Table 3 shows DOD’s planned procurement reductions. In addition to the $10-billion transfer of intelligence and classified programs, significant planned decreases in funding and quantities of items include $2 billion for 1 Navy amphibious assault ship (LHD-1) and $1.1 billion for 240 theater high-altitude area defense systems. Funding levels for some programs were increased in the 1997 FYDP over last year’s plan. For example, $1.5 billion was added in the 1997 FYDP for 172 Army UH-60 Blackhawk helicopters, and $4 billion for 2 new SSN submarines. The National Defense Authorization Act for fiscal year 1997 authorized the addition of about $6.3 billion more than the President’s budget request for procurement. Programs receiving significant increases include the new SSN submarine; DDG-51 destroyer; the E-8B, C-130, V-22, and Kiowa warrior aircraft; and the Ballistic Missile Defense Program. 
The act also authorized $234 million for F/A-18 C/D fighter jets that was not included in the President’s budget. The research, development, test, and evaluation accounts are projected to increase by about $10.9 billion during the 1997-2001 period. Additionally, increased purchasing power in these accounts due to the use of the lower inflation rate is projected at about $6.5 billion. As mentioned earlier, about $10.4 billion was transferred from the procurement accounts. As a result of the transfer of programs and other adjustments, intelligence and classified programs experienced the most growth. Our analysis shows that the largest increase is in advanced development activities, which increased about $3 billion per year over 1996 FYDP projections. The National Defense Authorization Act for fiscal year 1997 authorized the addition of about $2.6 billion more than the President’s budget for research, development, test, and evaluation. The largest portions of the increase went to missile defense programs. The 1997 FYDP projects that funding for military construction will increase by about $1.5 billion over the 1997-2001 period compared to the 1996 FYDP. One reason for the increase is that the 1996 FYDP projected savings based on interim base closing plans that subsequently changed, and actual closing costs were higher. Specifically, compared to the 1996 FYDP, the 1997 FYDP reflects increases in military construction expenditures of about $2.7 billion. The increase also reflects the transfer of some environmental restoration funds to the military construction account for cleanup at specific bases scheduled for closing. DOD considers family housing a priority. Nonetheless, when compared to the 1996 FYDP, the 1997 FYDP shows that the family housing accounts will decrease by about $1.8 billion. Improvements and other new construction are projected to decrease by about $1.3 billion during 1997-2001. Current family housing plans include improvements to 4,100 housing units, construction or replacement of 2,300 units and 13 support facilities, and the provision of $20 million for private sector housing ventures. We received comments on this report from OMB and DOD. DOD generally agreed with our report and offered some points of clarification, which we have incorporated where appropriate. OMB indicated that the change in inflation is important in forecasting the cost of the FYDP, not the level of inflation. Our review indicated, however, that the level of inflation was also important because DOD makes its cost projections based on OMB guidance that specifies a level of inflation, not the rate of change. OMB and DOD comments are published in their entirety as appendixes II and III, respectively. To evaluate the major program adjustments in DOD’s fiscal year 1997 FYDP, we interviewed officials in the Office of the Under Secretary of Defense (Comptroller); the Office of Program Analysis and Evaluation; the Army, Navy, and Air Force budget offices; CBO; OMB; and BEA. We examined a variety of DOD planning and budget documents, including the 1996 and 1997 FYDPs and associated annexes. We also reviewed the President’s fiscal year 1997 budget submission; our prior reports; and pertinent reports by CBO, the Congressional Research Service, and others. To determine the implications of program changes and underlying planning assumptions, we discussed the changes with DOD, CBO, OMB, and BEA officials. 
To verify the estimated increased purchasing power in major DOD accounts due to revised estimates of future inflation, we calculated the annual estimated costs for each 1996 FYDP account using the inflation indexes that DOD drew from the National Defense Budget Estimates for fiscal years 1996 and 1997. The increased purchasing power was the difference between these calculated cost estimates and the reported 1996 FYDP account costs. Our work was conducted from April through November 1996 in accordance with generally accepted government auditing standards. We are providing copies of this report to other appropriate Senate and House Committees; the Secretaries of Defense, the Air Force, the Army, and the Navy; and the Director, Office of Management and Budget. We will also provide copies to others upon request. If you have any questions concerning this report, please call me at (202) 512-3504. Major contributors to this report are listed in appendix IV. The following are GAO’s comments on the Office of Management and Budget’s (OMB) letter dated November 8, 1996. 1. We agree with OMB that estimates of the Future Years Defense Program (FYDP) in any given year include anticipated future inflation and that changes in anticipated inflation affect the projected cost of the FYDP. We have made this more explicit in our report. However, the levels of forecasted inflation are also important to project future costs. As we explain in this report, the Department of Defense (DOD) projects costs based on OMB guidance that specifies an annual level of inflation for the FYDP period, not the changes in forecasted inflation. 2. The report was amended to reflect this comment. 3. As explained in comment 1, DOD projects costs based on the forecasted inflation rates it receives from OMB. Therefore, we believe the forecasted inflation rates have a direct impact on DOD’s estimated future funding requirements. 4. Our example is meant to show how application of a specific inflation rate to the FYDP can affect assumed purchasing power. As we explained previously, we believe the projected costs of the FYDP are affected not only by the change in inflation rates but also by the level of inflation. OMB asserts that under its forecast, the two inflation measures declined by the same amount. However, the Analytical Perspectives of the Budget for Fiscal Year 1997 shows a smaller decrease in inflation under the chain-weighted methodology—5/10 of 1 percent compared to 8/10 of 1 percent under the fixed-weighted methodology. Therefore, use of the changes in either methodology consistently would not have yielded the same change in the price of the FYDP. 5. The sentence was deleted from the final report. Defense Infrastructure: Costs Projected to Increase Between 1997 and 2001 (GAO/NSIAD-96-174, May 31, 1996). Defense Infrastructure: Budget Estimates for 1996-2001 Offer Little Savings for Modernization (GAO/NSIAD-96-131, Apr. 4, 1996). Future Years Defense Program: 1996 Program Is Considerably Different From the 1995 Program (GAO/NSIAD-95-213, Sept. 15, 1995). DOD Budget: Selected Categories of Planned Funding for Fiscal Years 1995-99 (GAO/NSIAD-95-92, Feb. 17, 1995). Future Years Defense Program: Optimistic Estimates Lead to Billions in Overprogramming (GAO/NSIAD-94-210, July 29, 1994). DOD Budget: Future Years Defense Program Needs Details Based on Comprehensive Review (GAO/NSIAD-93-250, Aug. 20, 1993). Transition Series: National Security Issues (GAO/OCG-93-9TR, Dec. 1992). 
Pursuant to a congressional request, GAO compared the Department of Defense's (DOD) fiscal year (FY) 1997 Future Years Defense Program (FYDP) with the FYDP for FY 1996, focusing on the: (1) impact of the reduction in the inflation rate on DOD's 1997 FYDP; (2) major program adjustments from the 1996 FYDP to the 1997 FYDP; and (3) implications of these changes for the future. GAO found that: (1) as a result of projecting significantly lower inflation rates, DOD calculated that its future purchases of goods and services in its 1997 FYDP would cost about $34.7 billion less than planned in its 1996 FYDP; (2) according to DOD, the assumed increased purchasing power that resulted from using the lower inflation rates: (a) allowed DOD to include about $19.5 billion more in programs in FY 1997-2001 than it had projected in the 1996 FYDP; and (b) permitted the executive branch to reduce DOD's projected funding over the 1997-2001 period by about $15.2 billion; (3) the price measure the executive branch used in its inflation projections for future purchases in the 1997 FYDP had inherent limitations and has since been improved; (4) if the executive branch decides to use the improved price measure for its 1998 budget, DOD may need to adjust its program as a result of that transition; (5) Office of Management and Budget officials told GAO they have not decided what price measure they will use to forecast inflation for the 1998 FYDP; (6) using projected inflation rates based on a different price measure from that used by the executive branch, the Congressional Budget Office estimated that the future cost of DOD's purchases through 2001 would decline by only about $10.3 billion, or $24.4 billion less than DOD's estimate; (7) resource allocations in the 1997 FYDP vary considerably from the 1996 FYDP as a result of the lower inflation projections, program transfers, and program changes; (8) the projected savings from the latest round of base closures and realignments changed considerably from the 1996 FYDP to the 1997 FYDP; (9) in the 1996 FYDP, DOD estimated savings of $4 billion from base closures; however, the 1997 FYDP projects savings of only $0.6 billion because: (a) the 1996 FYDP projected savings based on interim base closing plans that subsequently changed; and (b) military construction costs related to environmental cleanup of closed bases are projected to be $2.5 billion higher than anticipated in the 1996 FYDP; (10) a comparison of the 1996 and 1997 FYDPs also shows that DOD plans to reduce active duty force levels; (11) the smaller force planned for FY 1998-2001 would bring force levels below the minimum numbers established by law; and (12) if DOD is precluded from carrying out its plan to achieve a smaller force, it will have to make other adjustments to its program.
When veterans obtain care from non-VA providers, the non-VA providers submit claims to VA for payment. See table 1 for a description of the types of non-VA medical care claims processed by VA. Preauthorizing non-VA medical care involves a multistep process conducted by the VA facility that regularly serves a veteran. The preauthorization process is initiated by a VA provider who submits a request for non-VA medical care to the VA facility’s non-VA medical care unit, which is an administrative department within each VA facility that processes VA providers’ non-VA medical care requests and verifies that non-VA medical care is necessary. Once the request is approved by the VA facility’s Chief of Staff or his or her designee, the veteran is notified of the approval and can choose any non-VA provider willing to accept VA payment at predetermined rates. (See fig. 1.) For claims that are emergent in nature and therefore would not have gone through the traditional VA preauthorization process, VA is authorized to pay claims for emergency care from non-VA providers under certain conditions, which vary depending on whether the care was related to the veteran’s service-connected disability. If a non-VA emergency care claim is related to a veteran’s service-connected disability, the following criteria must be met in order for the services to be paid for by VA. First, the non-VA emergency care must have been rendered to treat one of the following: (a) a veteran’s service-connected disability; (b) a condition that is associated with and aggravating the veteran’s service-connected disability; (c) any condition for a veteran who has been rated by VA as permanently and totally disabled due to a service-connected disability; or (d) any condition for a veteran participating in a vocational rehabilitation program who needs care to participate in a course of training. Second, the non-VA emergency care must also meet all of these criteria: the claim must be filed within 2 years of the date the care or services were rendered; the services were rendered in a medical emergency, as determined using the prudent layperson standard; a VA or other federal facility was not feasibly available to provide the needed care, and an attempt to use either would not have been considered reasonable; and the services were needed before the veteran was stable enough to be transferred to a VA or other federal facility and before the VA or other federal facility agreed to accept the transfer. If a claim for non-VA emergency care is not related to a veteran’s service-connected disability, there are different criteria that must be met in order for the services to be paid for by VA. The Millennium Act, which was enacted in 1999, provides a safety net for veterans when they do not have other insurance and need emergency care that is not related to a service-connected disability. Specifically, all of the following criteria must be met for VA to cover Millennium Act claims: The claim is not payable under the payment authority for emergency care related to service-connected disabilities. The claim must be filed within 90 days of the latest of the following: the date of discharge, date of death, or date that the veteran exhausted, without success, action to obtain payment or reimbursement from a third party. The veteran must be enrolled in the VA health care system and have received treatment from a VA clinician within 24 months of the emergency care episode. The veteran must be financially liable to the non-VA provider of emergency care. 
The veteran can have no entitlement to care under a health plan contract (such as Medicare or a private health insurance plan). The veteran can have no other contractual or legal recourse against a third party that would in whole extinguish his or her liability to the non-VA provider. The services must be rendered in a hospital emergency department or a similar facility providing emergency care to the public. The services must be rendered in a medical emergency as determined using the prudent layperson standard. A VA or other federal facility was not feasibly available to provide the needed care, and an attempt to use either would not have been considered reasonable by a prudent layperson. The services were rendered before the veteran was stable enough to be transferred to a VA or other federal facility and before the VA or other federal facility agreed to accept the transfer. Regardless of whether a veteran’s non-VA medical care was preauthorized or the result of an emergency, the steps for processing payments to non-VA providers are the same. Specifically, the non-VA provider submits a claim to either a Veterans Integrated Service Network (VISN) or a VA facility for payment following the veteran’s treatment. In some VISNs, claims processing activities are centralized in a VISN-level department that is responsible for reviewing claims from non-VA providers, obtaining copies of medical records for veterans’ non-VA medical care, and approving payment to non-VA providers. In other VISNs, these claims-processing activities are decentralized and are the responsibility of individual VA facilities. After VA facility or VISN officials review the claims for accuracy, non-VA providers are reimbursed by VA. (See fig. 2.) To process all claims for non-VA medical care, VA facilities use software called the Fee Basis Claims System (FBCS). FBCS is primarily a system that helps VA facilities administer payments to non-VA providers, as opposed to a system that automatically applies relevant criteria and determines whether claims are eligible for payment. As a result, VA relies on staff in the VISNs and VA facilities that process claims, such as administrative clerks and clinicians (typically nurses), to make decisions about which payment authority applies to the claim and which claims meet the criteria for VA payment. If VA denies payment for a claim for non-VA medical care, the Department must provide written notice to the veteran and the claimant (usually, the non-VA provider) regarding the reason for the denial and inform them of their rights to request a reconsideration or to formally appeal the denial. If a veteran or non-VA provider has questions about a denied claim, the claim should be reconsidered by a supervisor at the same VISN or VA facility that denied the claim. If the denial decision is upheld, the veteran or non-VA provider has the right to file an appeal through the Board of Veterans’ Appeals. Critical data limitations related to the wait times veterans face in obtaining care from non-VA providers and the cost-effectiveness of such services limit VA’s efforts to oversee the Non-VA Medical Care Program in an effective manner. Most notably, VA does not collect data on how long veterans must wait to be seen by non-VA providers. We previously reported that the amount of time veterans wait for appointments in VA facilities influenced VA’s utilization of non-VA medical care. 
For example, in our May 2013 report, VA officials from all six facilities we reviewed reported that they routinely referred veterans to non-VA providers to help ensure that veterans receive timely care and their facilities meet performance goals for wait times for VA facility-based care. Officials from one of these VA facilities explained that veterans needing treatment in several specialties—including audiology, cardiology, and ophthalmology—were referred to non-VA providers for this reason. In fiscal year 2012, VA performance goals for wait times for care in VA facilities called for veterans’ primary care appointments to be completed within 7 days of their desired appointment date and veterans’ specialty care appointments to be scheduled within 14 days of their desired appointment date. However, since VA did not track wait times for non-VA providers, little was known about how often veterans’ wait times for non-VA medical care appointments exceeded VA facility-based appointment wait time goals. Officials from one VA facility we reviewed explained that non-VA providers in their community also faced capacity limitations and may not be able to schedule appointments for veterans any sooner than the VA facility. Limitations in the way VA collects non-VA medical care data also did not allow the Department to analyze the cost-effectiveness of non-VA medical care provided to veterans. In our May 2013 report, we found that VA lacked a data system to group medical care delivered by non-VA providers by episode of care—a combined total of all care provided to a veteran during a single office visit or inpatient stay. For example, during an office visit to an orthopedic surgeon for a joint replacement evaluation, an X-ray for the affected joint may be ordered, the veteran may be given a blood test, and the veteran may receive a physical evaluation from the orthopedic surgeon. The non-VA provider would submit a claim to VA for the office visit, and separate claims would be submitted by the radiologist that X-rayed the affected joint and the lab that performed the veteran’s blood test. However, VA’s non-VA medical care data system was not able to link the charges for these three treatments together. We found that this left VA without data for comparing the total non-VA medical care costs for various types of services with the VA facility-based alternative. Without cost-effectiveness data, VA is unable to efficiently compare VA and non-VA options for delivering care in areas with high utilization and spending for non-VA medical care. Two VA facilities we reviewed had undertaken such assessments, despite the limitations of current data. Officials at one facility reported that they expanded their operating room capacity to reduce their reliance on non-VA surgical services, saving an estimated $18 million annually in non-VA medical care costs. Similarly, officials from the second facility reported that they were able to reduce their reliance on non-VA medical care by hiring additional VA staff and purchasing additional equipment to perform pulmonary function tests, an effort that reduced related non-VA medical care costs by about $112,000 between fiscal years 2010 and 2012. The lack of non-VA medical care data available on an episode of care basis also prevents VA from efficiently assessing the appropriateness of non-VA provider reimbursement. 
Specifically, VA officials cannot conduct retrospective reviews of VA facilities’ claims to determine if the appropriate rate was applied for the care provided by non-VA providers. To help VA address these concerns, we made two recommendations in our May 2013 report that directed VA to (1) analyze the amount of time veterans wait to see non-VA providers and apply the same wait time goals to non-VA medical care that have been used to assess VA facility-based wait times, and (2) establish a mechanism for analyzing the episode of care costs for non-VA medical care. VA concurred with these recommendations. In June 2014, we discussed VA’s progress in implementing these recommendations with VA officials. These officials indicated that the Department anticipated being able to track some wait time information for veterans seen by non-VA providers that VA contracts with under its new Patient Centered Community Care (PCCC) initiative in the near term. However, wait time information for all non-VA medical care will not be readily available until VA completes a redesign of its claims processing system, which is expected to occur in fiscal year 2016. With respect to establishing a mechanism to analyze the episode of care costs for non-VA medical care, VA officials explained that they are in the process of fully implementing this recommendation by (1) improving existing data systems to systematically audit claims that include billing codes typically included in bundled payments while the claims are in a pre-payment status and to require VA facilities to review these claims prior to payment, and by (2) making improvements to its Non-VA Medical Care Program data that would allow all non-VA medical care data to be analyzed on an episode of care basis. However, VA officials did not provide a time frame for when all non-VA medical care would be routinely analyzed by episode of care. In March 2014, we reported that four VA facilities we visited had patterns of noncompliance with VA claims processing requirements, which led to the inappropriate denial of some Millennium Act emergency care claims and the failure to notify some veterans that their claims had been denied. We also found that VA’s existing oversight mechanisms for non-VA medical care claims processing were not sufficiently focused on whether VA facilities were inappropriately approving or denying claims. For our March 2014 report, we examined a sample of 128 Millennium Act emergency care claims that the four VA facilities we visited had denied in fiscal year 2012 and found 66 instances of noncompliance with VA policy requirements. We determined that about 20 percent of the claims we examined had been denied inappropriately, and almost 65 percent of the claims we examined lacked documentation showing that the veteran was notified that their claim was denied. As a result of our review, these four VA facilities reconsidered and paid 25 claims that they had inappropriately denied. We found that there are no automated processes for determining whether a claim for non-VA medical care meets criteria for payment or ensuring that veterans are notified when a claim is denied; instead these processes rely on the judgment of VA staff reviewing each claim and adherence to VA policies. There are a number of steps in the claims review process that were susceptible to errors that could lead to inappropriate denials of non-VA medical care claims. 
For example, we found nine instances where VA staff incorrectly determined that non-VA medical care was not preauthorized when, in fact, a VA clinician had referred the veteran to the non-VA provider. In addition, VA policy states that VA must notify veterans in writing about denied claims and their appeal rights. However, we found that one facility we visited could not produce documentation of veteran notification for any of the 30 denied claims we reviewed. We concluded that when veterans are not informed that their claims for non-VA medical care have been denied, and VA has inappropriately denied the claims, then veterans could become financially liable for care that VA should have covered. Under such circumstances, veterans’ credit ratings may be negatively affected, and they may face personal financial hardships if they are unable to pay the bills they receive from non-VA providers. These findings from our March 2014 report raise concerns about compliance with claims processing requirements at other VA facilities nationwide. To help VA address these concerns, we made six recommendations aimed at improving VA’s processing of non-VA medical care claims, specifically Millennium Act emergency care claims. These recommendations directed the Department to establish or clarify its policies or take other actions to improve VA facilities’ compliance with existing policy requirements. VA concurred with these six recommendations. Based on discussions with VA officials in June 2014 to obtain information about the status of their planned actions for implementing these recommendations, we believe that VA is making progress on the implementation of three of the six recommendations. However, VA needs to take additional steps to revise its policies on claims processing roles and responsibilities in order to address our remaining three recommendations. One of VA’s primary methods for monitoring its facilities’ compliance with applicable requirements for processing non-VA medical care claims is field assistance visits. In fiscal year 2013, VA conducted these visits at 30 out of 140 VA facilities that processed non-VA medical care claims. These 30 facilities were selected for review by VA based on their claims processing timeliness. However, we reported in March 2014 that the criteria VA used to select facilities for review may not direct VA to the facilities most in need of a field assistance visit because VA does not take into account the accuracy of claims processing activity. Moreover, we found that the checklist VA uses for its field assistance visits does not examine all practices that could lead VA facilities to inappropriately deny claims. Further, VA does not hold facilities accountable for correcting deficiencies identified during these visits, and it does not validate facilities’ self-reported corrections to address field assistance visit deficiencies. According to VA officials, these visits are meant to be consultative in nature and assist facilities in improving their non-VA medical care claims processing. However, we found weaknesses in VA’s reliance on facilities’ self-reported actions when we reviewed the Department’s fiscal year 2012 and 2013 field assistance visit data and found unresolved problems in fiscal year 2013 that originated in fiscal year 2012. Further, VA implemented automated processes for auditing approved non-VA medical care claims to ensure that VA facilities apply the correct payment rates and no duplicate versions of the claims were previously paid. 
However, VA has no systematic process for auditing claims to ensure that they were appropriately approved or denied. VA officials stated that they recommend, but do not require, that managers of non-VA medical care claims processing units at VA facilities audit samples of processed claims—including both approved and denied claims—to determine whether staff processed claims appropriately. However, we found that VA does not know how many facilities conduct such audits, and none of the four VA facilities we visited reported conducting such audits. In our March 2014 report, we concluded that ensuring VA facilities correct deficiencies identified during field assistance visits and conduct systematic audits of the accuracy of claims processing decisions would provide necessary transparency and stability to the Non-VA Medical Care Program. To help VA address these issues, we made three recommendations aimed at revising the scope of the field assistance visits, ensuring deficiencies identified during these visits are corrected, and instituting systematic audits of the appropriateness of claims processing decisions. VA concurred with these recommendations and detailed its plans to address them. In June 2014, VA officials detailed the Department’s progress implementing these recommendations. However, we do not believe the Department’s actions have sufficiently addressed these recommendations. To fully implement these three recommendations, VA needs to ensure field assistance visits include a review of a sample of processed claims in order to determine whether staff are complying with applicable requirements for claims processing and needs to establish systematic audits of claims processing decisions, among other things. In March 2014, we found that despite VA’s communication efforts with veterans and non-VA providers, knowledge gaps exist for veterans about eligibility for Millennium Act emergency care, and communication weaknesses exist between VA and non-VA providers. In March 2014, we reported that veterans may still be unaware of the criteria that must be met in order for VA to pay claims for non-VA medical care; specifically, Millennium Act emergency care. VA primarily educates veterans about their eligibility for non-VA medical care through patient orientation sessions and written materials, such as the Veteran Health Benefits Handbook. However, VA patient benefits and enrollment officials at two of the four VA facilities we visited said that patient orientation sessions were generally not well-attended. Also, written materials we reviewed did not always provide a complete listing of all criteria that must be met for Millennium Act emergency care claims to be covered, which may create confusion about whether veterans should seek treatment from a VA facility or a non-VA provider in the event of an emergency. VA officials said that the primary intent of the written materials was to communicate the importance of promptly seeking care and to discourage veterans from delaying care by bypassing non-VA providers in the event of an emergency. However, some VA officials acknowledged that they were aware of specific recent cases where veterans delayed or avoided seeking treatment at non-VA providers to go to a VA facility instead. 
For example, one VA official explained that a veteran experiencing chest pains drove over 100 miles to a VA facility rather than going to the nearest emergency department; two VA officials said the wife of a veteran who had gunshot wounds drove him to a VA facility about 30 miles away, bypassing a number of non-VA emergency departments; and another VA official explained that a veteran experiencing chest pains died during a weekend as he waited to seek care until the local VA community-based outpatient clinic opened on Monday. Alternatively, we found that without knowledge of specific criteria for VA payment of non-VA medical care, specifically Millennium Act emergency care, veterans may seek treatment in situations where the Department cannot pay. For example, veterans may seek care at a non-VA provider for conditions they believe require immediate attention—such as one for which they have not been able to obtain timely treatment from a VA facility. However, VA staff reviewing the claim may decide that the condition does not meet the prudent layperson standard for emergency care and deny payment. Veterans that are admitted as inpatients to non-VA providers also may not be aware that they should be transferred to VA facilities once their conditions have stabilized and a VA facility has notified the non-VA provider that a bed is available for their care at the VA facility. To help VA address concerns about veterans’ lack of knowledge of non-VA medical care—specifically, Millennium Act emergency care—we recommended in March 2014 that VA take steps to better understand gaps in veterans’ knowledge regarding eligibility for non-VA coverage by surveying them about their health care benefits knowledge and using information from those surveys to tailor the Department’s veteran education efforts. While VA concurred with this recommendation, in June 2014 VA officials indicated that the Department has decided not to pursue veteran surveys but instead will promote veteran education by appearing at conferences and town halls with veterans service organizations and updating the information on its public website. We remain concerned that, without surveying veterans directly, VA will not be able to identify specific veteran knowledge gaps regarding coverage of non-VA medical care or determine ways to better target VA’s veteran education efforts. For our March 2014 report, all four non-VA providers we visited cited problems in their non-VA medical care claims processing communication with VA regarding the following issues: Points-of-contact not designated. Two of the four non-VA providers said they did not have a specific point-of-contact at their VA facilities who could answer concerns and issues about claims they had submitted, which led to problems resolving their issues in a timely manner. Delays in claims processing. Billing officials at one non-VA provider described lengthy delays in the processing of their claims, which in some cases went on for years. Lack of responsiveness when trying to transfer veterans and failure to document discussions about potential transfers. Officials at one non-VA provider said they had experienced challenges connecting with the inpatient admissions staff at their local VA facility, making it difficult for them to transfer veterans to the VA facility after the veterans were stabilized. According to this provider, the VA facility did not consistently answer calls during business hours or weekends. 
Officials from a non-VA provider also described cases where they had attempted to transfer stable veterans to the VA facility, but the VA facility informed them that there were no beds available. Later, the VA facility denied these claims because VA could find no record of this contact with the non-VA provider or authorizations for continued care. VA officials said they have attempted to improve communications with non-VA providers. Specifically, they have established a website and electronic newsletter for non-VA providers in order to disseminate information about non-VA medical care requirements. In addition, VA mailed letters to all non-VA providers that had submitted claims during the previous 2 years to inform them of these online resources. However, none of the four non-VA providers included in our March 2014 review recalled receiving the letter that VA mailed. Two non-VA providers were familiar with the website, but one commented that it lacked some necessary information and was not useful. None of these four non-VA providers were aware of VA’s electronic newsletter, and VA officials acknowledged that a very small percentage of the non-VA providers who submit claims to VA had signed up for it. While these communications have not always reached their intended audience, VA is continuing its efforts to improve communications with non-VA providers. Specifically, VA has been conducting satisfaction surveys to continue monitoring its communications with non-VA providers and has been holding training sessions for VA staff on improving outreach with non-VA providers. Chairman Miller, Ranking Member Michaud, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have. If you or your staffs have any questions about this statement, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this statement include Marcia A. Mann, Assistant Director; Emily Beller; Cathleen Hamann; Katherine Nicole Laubacher; Alexis C. MacDonald; and Jennifer Whitworth. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Due to serious and longstanding problems with the timely scheduling of veterans' appointments in VA facilities, VA recently announced that it will allow additional veterans to be treated through its Non-VA Medical Care Program. This testimony is based on two GAO reports and addresses the extent to which (1) VA collects reliable information on wait times and cost-effectiveness of the Non-VA Medical Care Program; (2) VA facilities comply with Millennium Act claims processing requirements and VA oversees claims processing activities; and (3) VA educates veterans about eligibility for Millennium Act emergency care and communicates with non-VA providers. For both reports, GAO reviewed relevant requirements and visited 10 VA facilities. For its report on the oversight and management of the Non-VA Medical Care Program, GAO reviewed non-VA medical care spending and utilization data from fiscal year 2008 through fiscal year 2012. For its report on the Millennium Act emergency care benefit, GAO reviewed 128 denied Millennium Act claims to determine the accuracy of processing decisions. GAO made numerous recommendations to VA in the two prior reports related to improving (1) data on wait times and cost-effectiveness for non-VA medical care; (2) compliance with claims processing requirements; and (3) veterans' knowledge of non-VA medical care eligibility. VA agreed with these recommendations but has yet to fully implement them. GAO's May 2013 report on the oversight and management of the Non-VA Medical Care Program found that the Department of Veterans Affairs (VA) does not collect data on wait times veterans face in obtaining care from non-VA providers. The lack of data on wait times limits VA's efforts to effectively oversee the Non-VA Medical Care Program because it is not possible for VA to determine if veterans who receive care from non-VA providers are receiving that care sooner than they would in VA facilities. In addition, GAO found that VA cannot assess the cost-effectiveness of non-VA medical care because it cannot analyze data on all services and charges for an episode of care, which is a combined total of all care provided to a veteran during a single office visit or inpatient stay. As a result, VA cannot determine whether delivering care through non-VA providers is more cost-effective than augmenting its own capacity in areas with high utilization of non-VA medical care. GAO's March 2014 report found patterns of noncompliance with applicable requirements for processing emergency care claims covered under the Veterans Millennium Health Care and Benefits Act (Millennium Act) at each of the four VA facilities visited. This led to the inappropriate denial of some claims and the failure to notify veterans that their claims had been denied at these facilities. The Millennium Act authorizes VA to cover emergency care for conditions not related to veterans' service-connected disabilities when veterans who have no other health plan coverage receive care at non-VA providers and meet other specified criteria. Specifically, GAO determined that about 20 percent of the 128 claims it reviewed had been denied inappropriately, and almost 65 percent of the reviewed claims lacked documentation showing that the veterans were informed their claims were denied and explained their appeal rights. As a result of GAO's review, the VA facilities reconsidered and paid 25 claims that they initially had inappropriately denied. 
GAO also found that there is significant risk that these patterns of noncompliance will continue because VA's existing oversight mechanisms do not focus on whether VA facilities appropriately approve or deny non-VA medical care claims or fail to notify veterans that their claims have been denied. GAO also reported in March 2014 that gaps exist in veterans' knowledge about eligibility criteria for Millennium Act emergency care, and communication weaknesses exist between VA and non-VA providers. Specifically, GAO found that veterans' lack of understanding about their emergency care benefits under the Millennium Act presents risks for potentially negative effects on veterans' health because they may forgo treatment at non-VA providers, and on veterans' finances because they may assume VA will pay for care in situations that do not meet VA criteria. Despite VA's efforts to improve communications, some non-VA providers reported instances in which VA facilities' claims processing staff were unresponsive to their questions about submitted claims.
Congress funds NNSA’s modernization efforts through various activities and programs within the Weapons Activities appropriation that generally address four areas: (1) stockpile, (2) infrastructure, (3) ST&E capabilities, and (4) other weapons activities. The four areas, which are described in greater detail below, are interconnected. For example, research and experiments funded in the ST&E area contribute to the design and production of refurbished weapons, funded in the stockpile area. The infrastructure area offers critical support to both the stockpile and ST&E capabilities areas by providing a suitable environment for their various activities, such as producing weapons components and performing research and experimentation activities. The other weapons activities area offers support to the three other areas by, for example, providing for the security of nuclear weapons and nuclear material. In fiscal year 2015, the President requested $8.3 billion in total appropriations for Weapons Activities, and the Congress appropriated $8.2 billion. The stockpile area includes weapons refurbishments through LEPs and other major weapons alterations and modifications; surveillance efforts to evaluate the condition, safety, and reliability of stockpiled weapons; maintenance efforts to perform certain minor weapons alterations or to replace components that have limited lifetimes; and core activities to support these efforts, such as maintaining base capabilities to produce uranium and plutonium weapons components. Our analysis of NNSA’s data indicates that about 40 percent of the budget estimates for the stockpile area from 2015 to 2039 is for LEPs. The U.S. nuclear weapons stockpile is composed of seven different weapon types, including air-delivered bombs, ballistic missile warheads, and cruise missile warheads (see table 1). The infrastructure area involves NNSA-owned, leased, and permitted physical infrastructure and facilities supporting weapons activities. NNSA’s 2015 nuclear security budget materials include information on budget estimates for three major types of infrastructure activities: operating and maintaining the existing infrastructure, recapitalizing (improving) existing facilities, and constructing new facilities. Our analysis of NNSA’s budget materials indicates that about 57 percent of the budget estimates for infrastructure from 2015 to 2039 is for the operation, maintenance, and recapitalization of existing facilities and about 27 percent is for new facilities construction. The ST&E capabilities area is composed of five “campaigns,” which are technically challenging, multiyear, multifunctional efforts to develop and maintain critical science and engineering capabilities, including capabilities that enable the annual assessment of the safety and reliability of the stockpile, improve understanding of the physics and materials science associated with nuclear weapons, and support the development of code-based models that replace underground testing. Our analysis of NNSA’s data indicates that about 36 percent of the budget estimates for the ST&E capabilities area from 2015 to 2039 are for the Advanced Simulation and Computing Campaign. This campaign procures supercomputers; develops the computer code to simulate nuclear weapons; and develops simulations to analyze and predict these weapons’ performance, safety, and reliability and to certify their functionality. 
Other weapons activities include budget estimates associated with nuclear weapon security and transportation, as well as legacy contractor pensions, among other things. Our analysis of NNSA’s data indicates that about 44 percent of the budget estimates for the other weapons activities area from 2015 to 2039 are for nuclear weapon security. NNSA’s modernization efforts in the areas described above include those directed toward NNSA’s goal of stopping the growth of its deferred maintenance backlog in its facilities and infrastructure. Deferred maintenance can be avoided by conducting scheduled maintenance activities, recapitalization activities, or demolition activities. Maintenance activities—including the replacement of parts, systems, or components—are needed to preserve or maintain a facility in an acceptable condition to safely operate. Regular maintenance throughout a facility’s service life can minimize deferred maintenance or prevent it from accumulating. NNSA’s budget materials contain two categories of maintenance budget estimates: direct-funded and indirect-funded. According to an NNSA official, estimates for direct-funded maintenance are included in the budget in two places: (1) the maintenance account specified in NNSA’s budget materials and (2) the program budgets for certain NNSA programs that are the major users of key scientific and production facilities, such as the Advanced Simulation and Computing Facility at Lawrence Livermore National Laboratory and the Tritium Extraction Facility at the Savannah River Site. Indirect-funded maintenance represents activities that are budgeted and paid for as part of a site’s overhead costs. According to NNSA officials, some sites, such as Lawrence Livermore National Laboratory, use indirect-funded maintenance as the primary way to budget and pay for maintenance. The 2015 budget materials estimate that NNSA will budget $1.6 billion for direct-funded maintenance and $2.3 billion for indirect-funded maintenance over the next 5 years. NNSA identifies the total direct and indirect budget estimates planned for maintenance at each site and reports this information for the FYNSP in the congressional budget justification. NNSA is required by DOE to collect this information from its management and operating contractors through a DOE-prescribed tool known as the Integrated Facilities and Infrastructure Crosscut Budget. NNSA can recapitalize facilities or their subsystems (e.g., roofing, ventilation systems, and electrical systems) when they wear out or become outdated (i.e., reach the end of their useful service life). For example, in 2016 NNSA plans to replace approximately 500 sprinkler heads, which are about 50 years old, in a building that manufactures nonnuclear components at its Y-12 National Security Complex in Tennessee. Similarly, in 2015, NNSA continues to upgrade a control tower’s electrical and mechanical components at its Sandia National Laboratories site in New Mexico to support nonnuclear testing activities for nuclear bombs. The 2015 budget materials estimate that $1.8 billion will be spent on recapitalization over the next 5 years and that $11.5 billion will be spent on such recapitalization over the next 25 years. According to officials, if NNSA determines that a facility is no longer needed for mission operations, the agency can demolish the facility. For example, NNSA recently demolished building 9744 at the Y-12 plant because the support structure was failing.
Budget estimates for demolition are included as a subprogram in the recapitalization estimates; the 2015 budget materials contain 5-year budget estimates of $105 million and 25-year estimates of $230 million for demolition activities. The current process by which NNSA prioritizes infrastructure investment is based on data on a facility’s condition and importance to achieving programmatic goals. Contractors that manage and operate each site within the nuclear security enterprise are required by a DOE order to inspect all facilities on their site at least every 5 years and are to update DOE’s infrastructure database annually with information relating to the condition of the site’s facilities. This information includes estimating the amount of a facility’s deferred maintenance and its replacement plant value, which is the cost to replace the existing structure with a new structure of comparable size using current technology, codes, standards, and materials. According to DOE’s real property asset management order, a facility’s condition is determined based on the scale shown in figure 1. NNSA categorizes each individual facility’s importance to accomplishing its mission based on designations defined by the Federal Real Property Council. The categories are as follows:

Mission critical. Facilities and infrastructure that are used to perform activities—such as nuclear weapons production, research and development, and storage—to meet the highest-level programmatic goals, without which operations would be disrupted or placed at risk. According to NNSA data, 245 (or 4.0 percent) of the agency’s 6,085 facilities are designated as mission critical.

Mission dependent, not critical. Facilities and infrastructure—such as waste management, nonnuclear storage, and machine shops—that play a supporting role in meeting programmatic goals. According to NNSA data, 2,063 (or 33.9 percent) of the agency’s 6,085 facilities are designated as mission dependent, not critical.

Not mission dependent. Facilities and infrastructure—such as cafeterias and parking structures—that do not link directly to programmatic goals but support secondary missions or quality-of-workplace initiatives. According to NNSA data, 3,777 (or 62.1 percent) of the agency’s 6,085 facilities are designated as not mission dependent.

NNSA’s 2015 budget estimates for modernization total $293.4 billion over 25 years, an increase of $17.6 billion (6.4 percent) from the $275.8 billion in estimates provided in 2014. These budget estimates are provided in four program areas: stockpile, infrastructure, ST&E, and other weapons activities. Some budget estimates for individual programs within these four areas changed more significantly from 2014 to 2015 than the total budget estimates changed—decreasing by as much as 31 percent and increasing by as much as 71 percent—because of changes in programs’ production schedules, scope, the methodology used to develop certain budget estimates, and budgetary structure. Figure 2 provides a comparison of total budget estimates for nuclear modernization activities in NNSA’s 2014 and 2015 budget materials. Table 2 details the changes in NNSA’s 25-year budget estimates from 2014 to 2015 for modernization in four program areas: stockpile, infrastructure, ST&E, and other weapons activities.
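The year-over-year change and the facility-category shares above follow from simple arithmetic on the figures NNSA reports; the sketch below reproduces that arithmetic using only the dollar totals and facility counts cited above (no other data are assumed).

```python
# Sketch: reproduce the reported year-over-year change and facility-category shares
# from the figures cited above (dollar amounts in billions; facility counts from NNSA data).
estimate_2014 = 275.8
estimate_2015 = 293.4
increase = estimate_2015 - estimate_2014            # 17.6
pct_change = 100 * increase / estimate_2014         # ~6.4 percent
print(f"25-year estimate increase: ${increase:.1f}B ({pct_change:.1f}%)")

facilities = {
    "mission critical": 245,
    "mission dependent, not critical": 2063,
    "not mission dependent": 3777,
}
total = sum(facilities.values())                    # 6,085 facilities
for category, count in facilities.items():
    print(f"{category}: {count} of {total} ({100 * count / total:.1f}%)")
```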
Within these four program areas, we found that some budget estimates for individual programs changed more significantly from 2014 to 2015 than the total budget estimates changed—decreasing by as much as 31 percent and increasing by as much as 71 percent—because of changes in (1) programs’ scope, (2) production schedules, (3) the methodology used to develop certain budget estimates, and (4) budgetary structure. Table 3 shows the changes in the 25-year budget estimates for those individual programs with estimates that changed more significantly than the total and identifies the causes for those changes. The 25-year budget estimates for the stockpile area changed significantly between the 2014 and 2015 budget materials for multiple reasons. Specifically, budget estimates for LEPs decreased by $18.4 billion or 31 percent, and budget estimates for Stockpile Services increased by $11.3 billion or 37 percent. LEP budget estimates decreased due to, among other things, delayed production schedules and changes in estimating methodologies while estimates for Stockpile Services increased due to changes in program scope and budgetary structure. The 2015 budget materials estimate that, over the next 25 years, $41.7 billion will be needed for nuclear weapon LEPs, which is a decrease of $18.4 billion (31 percent) compared with the estimates contained in the prior year’s budget materials. According to NNSA documents and officials, one reason for this decrease in budget estimates is delayed production schedules. The 2015 budget materials state that NNSA will complete three LEPs—the W76-1, B61-12, and the cruise missile—as well as the W88 alteration over the next 25 years, whereas the prior year’s budget materials stated that the agency planned to complete these and an additional LEP. The program that will no longer be completed within the 25-year time frame of the 2015 budget materials is the Interoperable Warhead 1 (IW-1) LEP. The first production unit for the IW-1 LEP is now estimated to be in 2030, which is a 5-year delay over the prior year’s plans, and no programmatic activities are planned to occur during the 5-year FYNSP period from 2015 through 2019. According to NNSA documents, this schedule delay is due, in part, to the agency providing more time to study the concept of interoperability and to reduce uncertainty about the agency’s ability to achieve necessary plutonium and uranium capabilities to support the LEP. In addition, the 2015 budget materials included a 3-year delay to the first production unit of the IW-2 LEP (now estimated in 2034) and a 4-year delay to the first production unit of the IW-3 LEP (now estimated to be no earlier than 2041) compared with the prior year’s plans. These schedule delays move some budget estimates previously included in the 2014 budget materials outside the 25-year time frame covered by the 2015 budget materials. See figure 3 for a summary of changes to the production schedules for the planned LEPs from the 2014 to the 2015 budget materials, and see appendix II for a summary of schedule changes to major modernization efforts since the 2010 Nuclear Posture Review. Second, according to NNSA officials, DOD and NNSA made programmatic decisions about one LEP’s scope that reduced uncertainties and risks. Specifically, NNSA officials said that the agency selected the W80 warhead for the cruise missile LEP (the B61 and the W84 were also under consideration). 
The selection of a specific warhead, according to NNSA officials, removed certain risks and uncertainties associated with the potential of conducting research and development on three separate warheads and allowed the agency to significantly lower its program cost estimate. Further, NNSA officials said that the selection of the W80 warhead allowed the agency to eliminate uncertainties related to component design, technology development efforts, and certification requirements. The 2015 budget materials estimate that $6.8 billion will be needed to complete the cruise missile LEP, while the prior year’s materials estimated that $11.6 billion would be needed. This change represents a decrease of $4.8 billion, or 42 percent. Finally, to develop LEP budget estimates for the 2015 budget materials, NNSA used either (1) budget estimates contained in Selected Acquisition Reports or (2) the midpoint between the high and low bounds of the ranges in its cost estimates for LEPs, and then applied a percentage inflation rate calculated based on numbers provided by the Office of Management and Budget (OMB), according to NNSA officials. This methodology differed from that in the prior year’s report, in which NNSA used the low point of the estimated cost ranges and applied an inflation rate higher than would result from applying OMB guidance, in order to account for uncertainties and risks. According to NNSA officials, using the midpoint estimate is a better way to account for uncertainties and risks, and using the OMB-recommended inflation rate makes LEP inflation rates consistent with the rate applied to all other NNSA programs. The 2015 budget materials estimate that, over the next 25 years, $42.2 billion will be needed for Stockpile Services, which is an increase of $11.3 billion (37 percent) compared with the estimates contained in the prior year’s budget materials. For three Stockpile Services subprograms, the 2015 budget materials included increased program scope. According to NNSA officials, this increased scope includes, among other things, (1) expanded manufacturing capabilities, such as the capability related to detonator production at Los Alamos National Laboratory, and (2) increased weapon assembly/disassembly and stockpile surveillance activities. For each of the three subprograms, the 25-year budget estimates increased approximately $2.0 billion over the estimates in the prior year’s materials. With regard to budgetary structure changes, the 2014 budget materials included the Tritium Readiness subprogram, with its 25-year budget estimate of $3.6 billion, in the ST&E area. The joint explanatory statement accompanying the Consolidated Appropriations Act, 2014 stated that funding for NNSA’s Tritium Readiness subprogram was being provided in the stockpile area. In its 2015 budget materials, NNSA included budget estimates for the Tritium Readiness subprogram, with its 25-year budget estimate of $3.7 billion, in the stockpile area as a Stockpile Services subprogram. This budgetary structure change represents a significant increase to the budget estimate for Stockpile Services and a corresponding decrease in the ST&E area’s budget estimates, but the net increase to the overall budget estimates for modernization attributable to Tritium Readiness was small ($70 million). The 2015 budget materials estimate that, over the next 25 years, $23.0 billion will be needed for construction projects, which is an increase of $9.6 billion (71 percent) over the prior year’s materials.
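To illustrate the estimating change described above (low point of the cost range with a higher inflation rate in the prior year versus midpoint of the range with an OMB-based rate in 2015), the sketch below uses hypothetical cost ranges, durations, and rates; it is not NNSA's model or data.

```python
# Hypothetical sketch of the two LEP estimating approaches described above.
# Cost ranges, program duration, and inflation rates are illustrative, not NNSA figures.
def escalated_total(base_year_cost, annual_inflation, years):
    """Spread a base-year cost evenly over the program and apply compound escalation."""
    return sum((base_year_cost / years) * (1 + annual_inflation) ** y for y in range(years))

low, high = 8.0, 12.0          # internal cost range, billions of base-year dollars (hypothetical)
duration = 15                  # program duration in years (hypothetical)

prior_year_style = escalated_total(low, 0.03, duration)                 # low point, higher rate
current_year_style = escalated_total((low + high) / 2, 0.02, duration)  # midpoint, OMB-based rate
print(f"Low-point approach: ${prior_year_style:.1f}B")
print(f"Midpoint approach:  ${current_year_style:.1f}B")
```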
This increase in budget estimates for line item construction in the 2015 budget materials is because the estimates are more complete than those included in the 2014 budget materials. In December 2013, we found that the estimates contained in NNSA’s 2014 budget materials omitted most of the budget estimates for two multibillion dollar construction projects, the Uranium Processing Facility and the Chemistry and Metallurgy Research Replacement-Nuclear Facility. We recommended that NNSA include in future modernization plans at least a range of potential budget estimates for projects and programs that the agency knows are needed, and NNSA generally concurred with the recommendation. Consistent with our recommendation, in the 2015 budget materials NNSA (1) included preliminary estimates (at the midpoint of a low-to-high cost range) for phases 2 and 3 of the Uranium Processing Facility and for the Chemistry and Metallurgy Research Replacement-Nuclear Facility and (2) increased the amount budgeted for construction projects scheduled for the 20 years after the FYNSP from $364 million to $851 million (current year dollars). The 2015 budget materials estimate that, over the next 25 years, $59.2 billion will be needed for all ST&E-related activities, which is an increase of $5.4 billion (10 percent) over the prior year’s budget materials. Across ST&E activities, some increases in budget estimates are offset by decreases, such as the budgetary structure change described above that moved the Tritium Readiness subprogram from the ST&E area to the stockpile area. The most significant increases in the ST&E area are as follows: The 25-year estimates in the 2015 budget materials for the Inertial Confinement Fusion Ignition and High Yield Campaign are $15.4 billion, which is an increase of $5.0 billion (48 percent) over the prior year’s materials. According to NNSA officials, approximately 86 percent of the $5.0 billion increase is due to a budgetary structure change. Specifically, the 2014 budget materials split estimates for operating the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory by including a portion within the campaign and another portion within the laboratory’s site operations account (infrastructure area). NNSA officials told us that, in accordance with congressional direction received during the 2014 appropriations process, the 2015 budget materials contain estimates for NIF operations solely in the Inertial Confinement Fusion Ignition and High Yield Campaign, increasing the campaign’s 2015 budget estimates by $4.3 billion over the prior year’s estimates. While this budgetary structure change increased budget estimates for the ST&E area and decreased estimates for the infrastructure area, NNSA officials said there was no net increase to the total budget estimates for modernization. The Science Campaign’s 2015 budget materials estimate that, over the next 25 years, $16.1 billion will be needed, which is an increase of $3.0 billion (23 percent) over the prior year’s budget materials. Approximately 83 percent of this increase is dedicated to funding increased plutonium experimentation to support future LEPs, according to our analysis of NNSA’s budget materials. The Advanced Simulation and Computing Campaign’s 2015 budget materials estimate that, over the next 25 years, $21.0 billion will be needed, which is an increase of $2.0 billion (11 percent) over the prior year’s budget materials.
Approximately 90 percent of this increase is associated with new programmatic scope for NNSA’s exascale computing efforts, which are being coordinated with DOE’s Office of Science. Exascale systems perform on the order of a quintillion operations per second, which according to NNSA officials will greatly increase NNSA’s ability to perform advanced scientific and engineering simulations. According to NNSA officials, exascale computing budget estimates were not included in the 2014 budget materials, but they were included in the 2015 budget materials, based on congressional direction received during the 2014 appropriation process. The 2015 budget materials estimate that, over the next 25 years, $47.0 billion will be needed in the other weapons activities area, which is an increase of $7.9 billion (20 percent) over the prior year’s budget materials. This area funds activities associated with nuclear weapon security and transportation as well as information technology, among other things. A budgetary structure change for two of the agency’s emergency response and counterterrorism programs was the primary reason for the increased budget estimates. The 2014 budget materials did not include estimates for these programs under Weapons Activities; rather, the programs were budgeted under Defense Nuclear Nonproliferation, an NNSA account that is separate from that used to fund modernization activities. In the 2015 budget materials, NNSA included 25-year budget estimates of $7.9 billion for the emergency response and counterterrorism programs. The 2015 budget materials included these programs based on congressional direction received during the 2014 appropriation process. NNSA considers its current major modernization efforts to include three LEPs (currently at various stages of development and not in full-scale production), as well as major construction projects to replace aging, existing facilities for plutonium (the Chemistry and Metallurgy Research Replacement-Nuclear Facility or its alternative) and uranium (the Uranium Processing Facility). The 5-year budget estimates contained in the 2015 budget materials for two of the three LEPs that NNSA considers major modernization efforts align with NNSA’s 2015 plans. The 5-year budget estimate for the remaining LEP does not align with the 2015 plans; however, based on our review of whether this misalignment persisted in NNSA’s 2016 budget materials, NNSA’s 2016 budget estimates appear to be better aligned with 2016 plans. Project plans and associated budget estimates for NNSA’s plutonium and uranium construction projects are too preliminary for us to evaluate alignment, but NNSA’s 2015 budget materials for these projects are improved in comparison to the 2014 version of these materials that we previously reviewed. The 5-year budget estimates contained in the 2015 budget materials for two of the three LEPs that NNSA considers major modernization efforts align with NNSA’s 2015 plans for these two programs. NNSA’s 5-year budget estimates for the B61-12 LEP and the W88 alteration—both of which are currently in the design phase and scheduled for first production units in 2020—align with their associated plans. Specifically, we found that, for 2015-2019, NNSA plans to request approximately $672 million annually for the B61-12 and $160 million annually for the W88 alteration. In general, these annual budget estimates are consistent with the midpoints of the programs’ internally estimated cost ranges, indicating that the budget estimates reflect program plans.
In addition, NNSA officials said that the budget estimates for the B61-12 LEP and the W88 alteration are consistent with these programs’ established cost baselines as outlined in their Selected Acquisition Reports to the Congress. We found that, compared with the prior year’s budget materials, which did not include a high-to-low cost range for these LEPs, the 2015 budget materials did include such a range. This inclusion is a positive development in how budget estimates are presented because the range reflects the uncertainty in these estimates for executing a technically complex program and allows decision makers an opportunity to evaluate where the budget estimates included in NNSA’s materials fall within this range. In contrast, the 5-year budget estimates contained in the 2015 budget materials for the cruise missile LEP—which is currently in the design phase and scheduled for a first production unit in the mid-2020s—are not aligned with the program’s plans. In each year of the 2015 FYNSP, budget estimates for the cruise missile LEP are below the low point of the program’s internally developed cost range, which is the minimum funding level that would be consistent with the internal cost estimate. Specifically, the 2015 budget materials contain 5-year budget estimates for the cruise missile LEP totaling approximately $480 million, which is $220 million less than the approximately $700 million that is needed to support the low point of the program’s internally estimated cost range. An additional $150 million would be needed in the 5-year budget estimates for these estimates to reflect the approximately $850 million midpoint of the internally developed cost range for the cruise missile LEP. According to NNSA officials, the shortfall against the low point and midpoint of the cost estimate in the 5-year budget estimates reflects the difference between an ideal budget environment where funding is unconstrained and the trade-offs made in an actual budget environment where constraints are imposed by competing priorities. A 2008 DOE review of the underlying problems associated with the department’s contract and project management found that failure to request full program funding can result in increased program costs and schedule delays, which are risks to the achievement of program goals. NNSA officials said that the longer-term budget estimates in the 2015 budget materials “buy back” the shortfall in later fiscal years so that the total estimated cost of the cruise missile LEP is reflected in the budget materials. Specifically, the 2015 budget materials include cruise missile LEP budget estimates at the high end of its cost range for years 2020-2027. The 2015 budget materials, however, do not explicitly state that the budget request for the cruise missile LEP is not consistent with the total amount needed to fund the program’s internal cost estimate for 2015-2019 at even the low point. DOE guidelines state that the department should aim to disseminate information to the public that is transparent to its intended users and meets a basic level of quality. Aspects of quality include the usefulness of the information to the intended users and whether it is presented in an accurate, clear, complete, and unbiased manner. NNSA’s budget materials are a key source of information that is used by Congress to make appropriation decisions.
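The shortfall figures cited in the preceding paragraphs follow directly from the reported totals; a minimal check of that arithmetic (all figures in millions, as reported above):

```python
# Sketch: 5-year cruise missile LEP budget estimates versus the program's internal cost range.
# Figures are those reported above, in millions of dollars.
budgeted_fynsp = 480      # total 5-year budget estimate in the 2015 materials
low_point = 700           # minimum consistent with the internal cost estimate
midpoint = 850            # midpoint of the internally developed cost range

shortfall_to_low_point = low_point - budgeted_fynsp       # 220
additional_to_midpoint = midpoint - low_point             # 150, beyond the low point
print(f"Shortfall against the low point: ${shortfall_to_low_point} million")
print(f"Additional amount to reach the midpoint: ${additional_to_midpoint} million")
```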
Including information in future versions of budget materials that explicitly identifies potential risks to the achievement of program objectives and goals—such as increased program cost and schedule delays, which may result from shortfalls in LEP budget requests compared with internal cost estimates—would improve the transparency and quality of information available to congressional decision makers. In addition, our prior work has emphasized the importance of transparency in federal agencies’ budget presentations because such information helps Congress have a clear understanding of how new funding requests relate to funding decisions for existing projects with continuing resource needs. Unlike the budget estimates included in the 2015 budget materials, NNSA’s 2016 budget justification contains 5-year budget estimates (2016 to 2020) for the cruise missile LEP that appear to be better aligned with revised program plans. NNSA’s 2016 budget justification includes approximately $1.8 billion in budget estimates for 2016-2020, which is approximately $1.3 billion more than the 5-year budget estimates contained in the 2015 budget materials, and more closely aligned with NNSA’s updated midpoint cost estimate for the program. Further, both the internal cost estimate and the $1.8 billion in near-term budget estimates appear to support a change in the production schedule for the cruise missile LEP based on a congressional requirement in the 2015 National Defense Authorization Act that NNSA deliver the first cruise missile warhead by September 2025, a 2-year acceleration relative to its 2015 production schedule. For NNSA’s major modernization efforts related to plutonium and uranium infrastructure, the agency has not established a firm cost, schedule, and scope baseline for either the Chemistry and Metallurgy Research Replacement-Nuclear Facility (or its alternative) or the Uranium Processing Facility, and the 2015 budget materials do not specify when these projects will establish such a baseline. This precludes us from assessing the extent to which budget estimates align with the agency’s preliminary plans. We have previously reported on NNSA’s challenges—significant cost increases, schedule delays, and scope changes—in executing these projects, and we continue to conduct work to provide continuing oversight of both of these projects. The 2015 budget materials do include estimates for both of these projects, which, as stated above, is an improvement from the prior year’s budget materials, in which NNSA omitted most of the budget estimates for these projects. See GAO, Modernizing the Nuclear Security Enterprise: New Plutonium Research Facility at Los Alamos May Not Meet All Mission Needs, GAO-12-337 (Washington, D.C.: Mar. 26, 2012); Nuclear Weapons: Factors Leading to Cost Increases with the Uranium Processing Facility, GAO-13-686R (Washington, D.C.: July 12, 2013); and High-Risk Series: An Update, GAO-15-290 (Washington, D.C.: Feb. 11, 2015). NNSA’s infrastructure budget estimates included in its 2015 budget materials are not adequate to address its reported $3.6 billion deferred maintenance backlog, and the deferred maintenance backlog will continue to grow. One reason the backlog will continue to grow is that the amounts in the 2015 budget estimates to address the problem fall below DOE infrastructure investment benchmarks for maintenance or recapitalization.
NNSA has calculated that it has $3.6 billion in deferred maintenance in its backlog; however, NNSA has identified needed improvements to information about the backlog that would help prioritize investment. Specifically, the amount of the backlog that actually needs to be addressed is unclear because approximately 40 percent of the backlog is related to facilities that have little to no effect on programmatic operations, and improvements in NNSA’s data would enhance the agency’s ability to identify mission priorities to drive investment needs. NNSA is currently undertaking a broad effort to improve its enterprise-wide data on facilities and infrastructure.

Concrete Degradation in Building 9204-2 at the Y-12 National Security Complex: Building 9204-2 is used by the National Nuclear Security Administration as a manufacturing facility for nuclear weapons components. The use of corrosive substances in building 9204-2, which produces lithium for the nuclear weapons stockpile, has caused significant concrete and metal degradation in several areas. In March 2014, a large section of concrete ceiling fell. Large chunks of concrete rebounded into a frequently used walkway and an adjacent welding station. No personnel were struck by the concrete, but workers had used the welding station earlier that day. The site’s management and operations contractor reported the incident as a “near miss.” The photos below depict the ceiling and the floor after the incident.

According to DOE’s benchmark for maintenance, maintenance budget estimates should be at least 2 percent of a site’s replacement plant value in order to keep facilities in good working order. We determined, based on NNSA’s reporting of real property value, that the average annual replacement plant value for the eight sites within the nuclear security enterprise and other related infrastructure over the 5-year FYNSP is about $50 billion, which means that maintenance budget estimates should be approximately $1 billion a year. However, the maintenance budget estimates contained in the 2015 budget materials are on average approximately $772 million a year over the next 5 years, which is an average annual shortfall of $224 million compared with the DOE maintenance benchmark. These annual shortfalls amount to a $1.1 billion shortfall over the next 5 years. According to NNSA’s Associate Administrator for Infrastructure and Operations, NNSA is changing its investment strategy to stop the decline of NNSA infrastructure and to improve safety, working conditions, sustainability, and productivity. This strategy will (1) invest more in infrastructure modernization, including recapitalization, sustainability, and disposition, and (2) consider reasonable increases to risk in operations and annual maintenance by minimizing resources dedicated to annual maintenance. Further, the 2015 budget materials state that the agency plans to decrease annual maintenance work scope by 10 percent at all sites across the nuclear security enterprise, but the materials do not describe what, if any, impact this decision will have on the deferred maintenance backlog or the goal of stopping its growth. According to DOE’s benchmark for recapitalization, recapitalization budget estimates should be 1 percent of a site’s replacement plant value to keep existing facilities modern and relevant in an environment of changing standards and missions.
Again, based on NNSA’s reporting of real property value, we determined that the average annual replacement plant value for the eight sites within the nuclear security enterprise and other related infrastructure over the 5-year FYNSP is about $50 billion, which means that recapitalization budget estimates should be approximately $500 million a year. However, the annual recapitalization budget estimates contained in the 2015 budget materials are approximately $360 million a year over the next 5 years, which is an average annual shortfall of $140 million as compared with the DOE recapitalization benchmark. These annual shortfalls amount to a $700 million shortfall over the next 5 years. Even though the recapitalization budget estimates do not meet the DOE benchmark, NNSA officials told us that this funding level is (1) an increase from prior years and (2) responsive to direction from NNSA’s Associate Administrator for Infrastructure and Operations to maximize resources that can be dedicated to recapitalization. According to agency officials, NNSA’s infrastructure investment decisions are based on a risk reduction methodology in which the amount of deferred maintenance is a key input. However, deferred maintenance is not the only input the agency considers when planning investment decisions. Other considerations include safety risk reduction, increased program capabilities, and opportunities to improve energy efficiency. NNSA’s 2016 budget justification (covering the 2016-2020 FYNSP) restates the agency’s commitment to increase investment to stop the growth of deferred maintenance through maintenance and recapitalization. NNSA has proposed a restructuring of its infrastructure budget in its 2016 congressional budget justification.

Water Diverter at the Los Alamos National Laboratory Radiochemistry Lab: The Radiochemistry Facility at Los Alamos National Laboratory conducts radiological and chemical analyses of samples and produces medical isotopes. This photo depicts a water intrusion incident that interrupted research activities at the lab. No one was injured, but the lab could not be used for a few days and work was relocated to another part of the building. A water diverter was used to immediately prevent further damage to the lab. The roof has since been repaired, and the lab is back to full operation.

We found that the 5-year $1.1 billion shortfall in maintenance budget estimates and the $700 million shortfall in recapitalization budget estimates as compared with DOE infrastructure investment benchmarks are not explicitly identified in NNSA’s 2015 budget materials. Further, the budget materials do not identify the potential effects these shortfalls may have on the agency’s stated goal of stopping the growth of its deferred maintenance backlog. As stated earlier, DOE guidelines state that the department should aim to disseminate information to the public that is useful to the intended users and presented in an accurate, clear, complete, and unbiased manner. NNSA’s budget materials are a key source of information for Congress as it makes appropriation decisions. In addition, our prior work has emphasized the importance of transparency in federal agencies’ budget presentations because such information helps Congress have a clear understanding of how new funding requests relate to funding decisions for existing projects with continuing resource needs.
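The benchmark comparisons above reduce to simple percentages of replacement plant value; the sketch below uses the rounded figures reported above, so its results are approximate rather than exact reproductions of the shortfalls we calculated.

```python
# Sketch: annual budget estimates versus DOE infrastructure investment benchmarks.
# Uses the rounded figures reported above (billions of dollars); results are approximate.
replacement_plant_value = 50.0   # average annual RPV across the enterprise, ~$50 billion

def benchmark_shortfall(rpv, benchmark_share, annual_estimate, years=5):
    """Return (annual benchmark, annual shortfall, cumulative shortfall) in billions."""
    benchmark = rpv * benchmark_share
    annual_shortfall = benchmark - annual_estimate
    return benchmark, annual_shortfall, annual_shortfall * years

# Maintenance: benchmark of at least 2 percent of RPV; ~$0.772 billion a year estimated.
print(benchmark_shortfall(replacement_plant_value, 0.02, 0.772))   # ~ (1.0, 0.23, 1.1)

# Recapitalization: benchmark of 1 percent of RPV; ~$0.360 billion a year estimated.
print(benchmark_shortfall(replacement_plant_value, 0.01, 0.360))   # ~ (0.5, 0.14, 0.7)
```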
Historical underfunding of maintenance and recapitalization, among other things, has led to the current level of deferred maintenance across the nuclear security enterprise. According to a 2014 NNSA infrastructure planning document, there are numerous examples within the nuclear security enterprise where deteriorated infrastructure conditions have affected mission performance. Therefore, it is important to identify the risks, if any, associated with levels of maintenance and recapitalization investment that fall below DOE benchmarks. Providing such information would present Congress with key information it needs to make infrastructure resource allocation decisions during the appropriations process. NNSA has identified opportunities to improve information about its reported $3.6 billion backlog that the agency needs in order to better prioritize infrastructure investment. While NNSA’s reported $3.6 billion total deferred maintenance backlog in the 2015 budget materials meets the accounting requirements for real property reporting, the figure is not useful for budget estimating because (1) approximately 40 percent of the backlog is related to facilities that have little or no effect on programmatic operations and is therefore low priority to be addressed and (2) strengthening NNSA’s data would improve the agency’s ability to fully prioritize investment needs. The agency has ongoing efforts to improve its infrastructure data. According to NNSA data, facilities considered not mission dependent comprise 40 percent ($1.4 billion) of the deferred maintenance backlog. As stated earlier, these facilities that are not mission dependent—such as cafeterias, parking structures, and excess facilities—do not link directly to programmatic goals but only support secondary missions or quality-of-workplace initiatives. NNSA officials told us that deferred maintenance at these facilities is low priority and unlikely to be addressed, beyond keeping facilities in a safe condition, because the agency is targeting scarce budgetary resources to mission critical facilities. As mentioned above, DOE guidelines and our prior work have emphasized the importance of transparency in federal agencies’ budget presentations to help Congress have a clear understanding of how new funding requests relate to funding decisions for existing projects with continuing resource needs. Reporting the $3.6 billion deferred maintenance backlog without explaining that over one-third of it has little or no effect on the programmatic mission and is of low priority limits the transparency and usefulness of the budget materials for the purpose of planning for infrastructure investment. Clarifying the budget materials in this manner would provide Congress with key information during the appropriation process. We also found that improvements in NNSA’s data would enhance the agency’s ability to identify mission priorities to drive investment needs. Specifically, according to NNSA officials, the categories of mission-based designations—defined by the Federal Real Property Council—that are assigned to NNSA facilities and infrastructure do not always accurately reflect the importance of facilities and infrastructure to mission achievement and, therefore, are not fully useful for prioritizing infrastructure investment.
Among other things, NNSA’s current process for prioritizing infrastructure budget estimates focuses on those facilities and infrastructure identified as mission critical, but this designation may not accurately target infrastructure investment requirements because it understates the importance of some key facilities and other infrastructure to its mission. For example, agency officials said that current plutonium research and production facilities at Los Alamos National Laboratory are designated as mission critical, but the facility that treats the associated radioactive and hazardous waste is designated mission dependent, not critical. According to NNSA officials, if the waste treatment facility experienced an unexpected shutdown, the research and production facilities could slow down or stop operations since the waste could not be treated. However, the designation assigned to the waste facility does not elevate it to the highest priority for infrastructure investment. Elevating the importance of all mission dependent, not critical, facilities does not provide an optimal solution because doing so could similarly overstate the importance of some facilities and infrastructure that are less essential to mission achievement. NNSA officials with whom we spoke agreed that improved data on the importance of facilities and infrastructure to mission achievement, beyond the designations defined by the Federal Real Property Council, could help NNSA better identify needed infrastructure investment and improve the planning basis for its budget estimates. To improve this information, NNSA is planning to implement a “mission dependency index” that will measure a facility’s importance based on (1) the direct loss of capability and (2) how that loss affects other assets. According to agency officials, this new index may result in increased investment for supporting and enabling infrastructure (e.g., waste processing facilities, power lines, and HVAC systems) that is currently considered mission dependent, not critical. According to NNSA plans, this ongoing effort is currently being used to inform program execution and is scheduled to be completed by the time the agency develops its 2017 budget materials. NNSA is improving data about the condition of its facilities and infrastructure at a level of detail to inform investment prioritization decisions. NNSA currently reports on conditions at the facility level and is in the process of implementing a method to report the condition of a facility’s subsystems, according to agency officials we interviewed. These officials told us that a facility’s overall condition can be assessed as good even if the facility has a failing subsystem that is essential to its operation. A failure of a critical subsystem could stop programmatic activities at the entire facility. For example, a leak in the fire suppression system shut down operations at the Device Assembly Facility, a mission critical facility at the Nevada National Security Site, for 10 days. Further, according to officials, a subsystem within a facility could be in better condition than the rating of the entire facility might otherwise indicate, making prioritization within such facilities challenging. NNSA officials with whom we spoke agreed that improved data about the condition of subsystems could help NNSA better identify needed investment and improve the basis for its budget estimates.
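NNSA's planned mission dependency index, described above, is not specified in the budget materials; the sketch below is a purely hypothetical illustration of combining the two factors NNSA cites (direct loss of capability and downstream effects on other assets), with made-up facility names, scores, and weights.

```python
# Hypothetical sketch of a mission dependency index combining (1) the direct loss of
# capability if a facility fails and (2) how that loss affects other assets.
# Facility names, scores, and weights are illustrative only, not NNSA data or methodology.
def mission_dependency_index(direct_loss, downstream_effect, w_direct=0.5, w_downstream=0.5):
    """Weighted combination of direct and downstream impacts, scaled 0 to 100."""
    return 100 * (w_direct * direct_loss + w_downstream * downstream_effect)

facilities = {
    # name: (direct_loss, downstream_effect), each scored from 0 (none) to 1 (total)
    "plutonium research and production facility": (1.0, 0.2),
    "waste treatment facility": (0.4, 0.9),   # modest direct loss, large downstream effect
    "parking structure": (0.0, 0.0),
}
for name, (direct, downstream) in facilities.items():
    print(f"{name}: index = {mission_dependency_index(direct, downstream):.0f}")
```

Under a scheme like this, a supporting facility such as the waste treatment example would score much closer to the research facility than its "mission dependent, not critical" designation alone would suggest.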
To improve its data on the condition of facilities and their subsystems, NNSA is adopting a standardized condition assessment process and infrastructure database used by the Department of Defense. According to officials, NNSA plans to implement a revised facility inspection program that (1) conducts more detailed and more frequent inspections of its key facilities—those designated mission critical or mission dependent, not critical—and those facilities’ key subsystems and (2) uses statistical modeling that is based on, among other things, material used and component age to predict the optimal time to conduct maintenance or recapitalization activities on these subsystems. According to NNSA plans, this ongoing effort is currently being used to inform program execution and is scheduled to be completed by the time the agency develops its 2017 budget materials. NNSA faces a complex, decades-long task in planning, budgeting, and ensuring the execution of interconnected activities to modernize the nuclear security enterprise. Because NNSA annually submits a budget justification and updates its SSMP, the agency has an opportunity each year to improve its nuclear security budget materials so that they are more useful for congressional decision makers. DOE guidelines on data quality state that information should be useful to the intended users and presented in an accurate and complete manner, and our prior work has emphasized the importance of transparency in federal agencies’ budget presentations. NNSA’s 2015 budget materials continue to demonstrate weaknesses, particularly with respect to (1) internal cost estimates for LEPs that are not fully supported by near-term budget estimates, which could affect the programs’ cost and schedule, and (2) near-term budget estimates for maintenance and recapitalization that do not achieve DOE benchmarks for infrastructure investment, which could impair NNSA’s ability to meet its goal of stopping the growth in its reported $3.6 billion deferred maintenance backlog. Providing information in the budget materials on the potential risks to the achievement of program objectives when near-term budget estimates are not aligned with plans would improve the transparency of budget materials and benefit Congress during appropriation deliberations. With particular regard to the total deferred maintenance backlog reported by NNSA, it is not useful for budget estimating because it includes deferred maintenance that is unlikely to be addressed. DOE guidelines and our prior work have emphasized the importance of transparency in the information federal agencies provide, such as in their budget presentations. Such information helps Congress have a clear understanding of how new funding requests relate to funding decisions for existing projects with continuing resource needs. By not explicitly identifying that some deferred maintenance is unlikely to be addressed, the agency cannot fully target infrastructure investment across the nuclear security enterprise or clarify programmatic scope to Congress. NNSA has ongoing efforts to improve its data on the relationship between facilities and infrastructure and the missions they support, as well as the level of detail it has on facility condition.
To improve transparency in future NNSA budget materials so that they are more useful for congressional decision makers, we recommend that the Administrator of NNSA take the following three actions: (1) in instances where NNSA’s internal cost estimates for a life extension program suggest that additional funding may be needed beyond what is included in the 5-year budget estimates to align with the program’s plan, identify in the budget materials the amount of the shortfall and what effect, if any, the shortfall may have on the program’s cost and schedule or the risk of achieving program objectives; (2) in instances where budget estimates do not achieve DOE benchmarks for maintenance and recapitalization investment over the 5-year budget estimates, identify in the budget materials the amount of the shortfall and the effects, if any, on the deferred maintenance backlog; and (3) until improved data about the importance of facilities and infrastructure to mission is available, clarify in the budget materials for the 5-year FYNSP period the amount of the deferred maintenance backlog associated with facilities that have little to no effect on programmatic operations and are therefore low priority to be addressed. We provided a draft of this report to DOE and NNSA for their review and comment. NNSA provided written comments, which are reproduced in full in appendix III, as well as technical comments, which we incorporated in our report as appropriate. In its comments, NNSA agreed with our recommendations and outlined planned actions to incorporate these recommendations into the agency’s fiscal year 2017 budget materials, which is the next opportunity for such incorporation. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of NNSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to (1) identify the extent to which the National Nuclear Security Administration’s (NNSA) budget estimates for modernizing the nuclear security enterprise changed between the 2015 budget materials and the prior year’s materials, (2) assess the extent to which NNSA’s budget estimates for its current major modernization efforts align with plans, and (3) assess the extent to which NNSA’s 2015 budget estimates for modernizing the nuclear security enterprise address its stated goal of stopping the growth of the deferred maintenance backlog. All years in this report refer to fiscal years, unless otherwise noted. To identify the changes to NNSA’s budget estimates, we compared the estimates in the 2014 budget materials with the estimates in the 2015 version of those materials. NNSA’s budget materials are composed of two key policy documents that are issued annually: the agency’s budget justification, which contains estimates for the 5-year Future-Years Nuclear Security Program (FYNSP), and the Stockpile Stewardship and Management Plan (SSMP), which provides budget estimates over the next 25 years. We compared the budget estimates down to the subprogram and line item construction project level.
If we identified changes between the 2015 and 2014 budget materials, we reviewed both versions of the materials and interviewed knowledgeable officials from NNSA to determine the reasons for those changes. We reviewed prior GAO reports on modernization and specific programs or projects included in the plans to provide context for NNSA’s plans and changes in the plans. A list of related GAO products is included at the end of this report. We also reviewed the GAO Cost Estimating and Assessment Guide, which highlights best practices for developing, managing, and evaluating cost estimates for capital programs. To assess the extent to which the total 2015 budget estimates align with plans for major modernization efforts—which the agency defines as nuclear weapon life extension programs (LEPs) and projects for plutonium and uranium infrastructure—we compared the budget estimates included in NNSA’s 2015 budget materials with its long-range plans included in the SSMP. In addition to new issues that we identified as part of our review of the 2015 budget materials, we also followed up on the findings identified in our December 2013 report, such as the extent to which NNSA’s 2015 budget materials include estimates for plutonium and uranium infrastructure projects that were omitted in the prior year’s materials. Additionally, we reviewed prior GAO reports to provide context for the concerns we identified and discussed with knowledgeable NNSA officials areas where budget estimates did not appear to align with modernization plans. If we identified areas in the 2015 budget materials where estimates did not appear to align with modernization plans, we reviewed the 2016 FYNSP included in NNSA’s 2016 budget justification to determine the extent to which this misalignment persisted. To determine the extent to which NNSA’s budget estimates for modernizing the nuclear security enterprise address its stated goal of stopping the growth of the deferred maintenance backlog, we compared budget estimates contained in the 2015 budget materials over the 5 years of the FYNSP for (1) maintenance and (2) recapitalization to infrastructure investment benchmarks contained in the Department of Energy’s (DOE) 2005 Real Property Asset Management Plan. DOE’s 2005 plan states that budget estimates for maintenance should be at least 2 percent of the replacement plant value, which is the cost to replace the existing structure with a new structure of comparable size using current technology, codes, standards, and materials. NNSA’s 2015 budget materials include the total replacement plant value for all eight sites in the nuclear security enterprise. DOE’s 2005 plan states that budget estimates should be dedicated toward recapitalization activities, but the plan did not provide a specific benchmark. The plan’s associated 2015 budget guidance (issued in March 2013) states that DOE programs, including NNSA, should institute a “recapitalization strategy” that is equal to 1 percent of replacement plant value if the program’s overall facility condition fell below a certain threshold. According to NNSA data, its overall facility condition was below the established threshold. These “recapitalization strategy” budget estimates were to be added to the agency’s maintenance budget account because NNSA at that time did not have a separate recapitalization budget account. NNSA made its first targeted request for recapitalization in the 2015 budget materials.
We compared the budget estimates contained in the specific recapitalization control to the investment benchmark of 1 percent of replacement plant value. NNSA officials confirmed that this approach was reasonable. We then calculated the amount of budget estimates for maintenance (2 percent of replacement plant value) and recapitalization (1 percent of replacement plant value) that would be equal to DOE’s own infrastructure investment benchmarks. We then compared these benchmarks with annual budget estimates in NNSA’s 2015 budget justification for maintenance and recapitalization over each year of the FYNSP to determine if the total budget estimates met, exceeded, or fell short of the benchmarks. We discussed with knowledgeable officials from NNSA areas where these budget estimates did not appear to align with the stated policy goal. We also reviewed NNSA’s Infrastructure Data Analysis Center system to identify the estimated value of NNSA’s real property and the total amount of deferred maintenance across the nuclear security enterprise. We did not assess the reliability of these estimates because they were mostly used to determine whether NNSA was meeting its own stated goal of reducing deferred maintenance and dedicating benchmarked proportions of replacement plant value to maintenance and recapitalization. We also reviewed documentation and received briefings from NNSA officials on the agency’s ongoing efforts to improve its infrastructure data and resource prioritization. To assess the reliability of NNSA’s budget estimates and DOE’s real property management system, we conducted manual and electronic tests of the data, looking for missing values, outliers, or other anomalies. Additionally, we interviewed knowledgeable NNSA officials about the data and their methodologies for using the data to construct their estimates, including discussing missing data that we identified in our tests of the data. During our review, we found that NNSA had omitted 20 years of budget data for site operations at the Y-12 National Security Complex in Tennessee after 2019. We brought this to the attention of agency officials, who confirmed the omission and provided GAO with corrected budget estimates. We determined that the corrected Y-12 data and the data underlying the budget estimates were sufficiently reliable for our purposes, which was to report the total amount of budget estimates and those estimates dedicated to certain programs and projects. We also found that the limited amount of data we used from DOE’s real property information management system was sufficiently reliable for our purposes, which was to report the total amount of deferred maintenance and replacement plant value in the nuclear security enterprise, as well as the amount of deferred maintenance and replacement plant value associated with specific facility designations (i.e., not mission dependent). However, we did not assess the reliability of NNSA’s underlying budget estimating processes or independently verify the reliability of specific budget estimates because such analysis exceeds the scope of our mandate. We limited the scope of our review to NNSA’s Weapons Activities appropriation. NNSA does not have a definition of “modernization,” but NNSA officials consider all of the programs in the Weapons Activities appropriation to directly or indirectly support modernization.
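The manual and electronic tests described above are not detailed in this report; the sketch below shows the general kind of check involved (missing values and simple outliers). The column names, the 3-standard-deviation rule, the sample rows, and the pandas dependency are assumptions for illustration only.

```python
# Sketch of basic reliability checks on tabular budget data: missing values and outliers.
# Column names, the outlier rule, and the sample rows are hypothetical.
import pandas as pd

def check_budget_data(df, value_column="budget_estimate"):
    """Return rows with missing estimates and rows more than 3 standard deviations from the mean."""
    missing = df[df[value_column].isna()]
    values = df[value_column].dropna()
    outliers = df[(df[value_column] - values.mean()).abs() > 3 * values.std()]
    return missing, outliers

# Example with a gap like the omitted Y-12 site operations estimates noted above.
data = pd.DataFrame({
    "site": ["Y-12", "Los Alamos", "Livermore", "Sandia"],
    "budget_estimate": [None, 410.0, 395.0, 380.0],   # millions of dollars, hypothetical
})
missing_rows, outlier_rows = check_budget_data(data)
print(missing_rows)    # rows with no estimate reported
print(outlier_rows)    # rows far outside the typical range
```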
This scope, which is limited to the Weapons Activities appropriation, is consistent with our December 2013 review. Additionally, we focused our review on those programs or projects with the potential to have a significant impact on NNSA’s modernization plans or budgets. All data are presented in current dollars, which include projected inflation, unless otherwise noted. NNSA’s budget estimates do not incorporate reductions for sequestration. As stated in NNSA’s 2014 SSMP, incorporating such reductions would lead to adjustments to future plans. We conducted this performance audit from July 2014 to August 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. All years in this report refer to fiscal years, unless otherwise noted. The 2014 SSMP stated that NNSA did not submit the 2013 SSMP to Congress because analytic work conducted by DOD and NNSA to evaluate future needs for nuclear modernization activities across the nuclear security enterprise was ongoing and, as such, predecisional. The FYNSP, which reflects the President’s policy priorities, is generally consistent with the first 5 years of NNSA’s plan presented in its SSMP. In addition, the report that the Department of Energy (DOE) jointly submits with the Department of Defense (DOD) in accordance with section 1043 of the National Defense Authorization Act for Fiscal Year 2012, as amended, is required to provide 10-year budget estimates and plans to enhance the reliability of the nuclear weapons stockpile and modernize infrastructure. Similarly, the FYNSP is generally consistent with the first 5 years of NNSA’s plan presented in this joint report. The 2010 Nuclear Posture Review included discussion of a number of planned major modernization efforts for NNSA, while other efforts have been identified in later versions of the planning documents discussed above. In particular, the Nuclear Posture Review identified three planned LEPs, one for the W76—a warhead delivered by submarine-launched ballistic missile—another for the B61—a gravity bomb delivered by aircraft—and also discussed the potential for a common warhead developed through refurbishment and for use on both Navy and Air Force delivery vehicles. NNSA’s planning documents for 2015 continue to include LEPs for the W76 and B61, and the 2015 SSMP further developed the concept of a common warhead, now termed an “interoperable” warhead (IW), including long-range plans for three IWs. In addition, NNSA’s 2015 planning documents include an LEP for the Air Force’s cruise missile warhead and a major alteration (ALT) of the Navy’s W88 warhead, also delivered on a submarine-launched ballistic missile, neither of which was discussed in the Nuclear Posture Review. The Nuclear Posture Review also discussed major line item construction projects to replace aging facilities for NNSA’s plutonium and uranium processing missions. The project for plutonium processing is known as the Chemistry and Metallurgy Research Replacement-Nuclear Facility (CMRR-NF), and the project for uranium processing is known as the Uranium Processing Facility (UPF). Both projects are being reconceptualized as a result of project execution challenges and escalating cost estimates.
Table 4 summarizes changes to the schedules for these major modernization efforts as reported in NNSA’s annual updates to its planning documents. Figure 4 presents budget information from fiscal year 2011 through fiscal year 2019, comparing planned budget estimates for modernization presented in the Fiscal Year 2011 Joint NNSA and Department of Defense Report (baseline) to estimates in budget materials for subsequent years. NNSA’s total budget estimates for modernization generally address four areas: (1) stockpile; (2) infrastructure; (3) science, technology, and engineering (ST&E) capabilities; and (4) other weapons activities. NNSA’s stockpile area represents the largest portion of NNSA’s overall budget estimates for modernization (about 35 percent of the total budget estimates in 2015) and includes LEPs. Figure 5 presents budget information from fiscal year 2011 through fiscal year 2019, comparing planned budget estimates for the stockpile area presented in the Fiscal Year 2011 Joint NNSA and Department of Defense Report (baseline) to estimates in budget materials for subsequent years. NNSA’s infrastructure area represents the second largest portion of NNSA’s overall modernization plans (about 29 percent of the total budget estimates in 2015) and includes construction of new facilities as well as operations and maintenance of existing facilities and infrastructure. Figure 6 presents budget information from fiscal year 2011 through fiscal year 2019, comparing planned budget estimates for the infrastructure area presented in the Fiscal Year 2011 Joint NNSA and Department of Defense Report (baseline) to estimates in budget materials for subsequent years. NNSA’s ST&E capabilities area represents the third largest portion of NNSA’s overall modernization plans (about 20 percent of the total budget estimates in 2015) and includes technically challenging, multiyear, multifunctional efforts to develop and maintain critical science and engineering capabilities in support of the stockpile. Figure 7 presents budget information from fiscal year 2011 through fiscal year 2019, comparing planned budget estimates for the ST&E area presented in the Fiscal Year 2011 Joint NNSA and Department of Defense Report (baseline) to estimates in budget materials for subsequent years. NNSA’s other weapons activities represent the smallest portion of NNSA’s overall modernization plans (about 16 percent of the total budget estimates in 2015) and include nuclear weapon security and transportation as well as legacy contractor pensions, among other things. Figure 8 presents budget information from fiscal year 2011 through fiscal year 2019, comparing planned budget estimates for other weapons activities presented in the Fiscal Year 2011 Joint NNSA and Department of Defense Report (baseline) to estimates in budget materials for subsequent years. David C. Trimble, (202) 512-3841 or [email protected]. In addition to the individual named above, Allison B. Bawden (Assistant Director), Patrick Bernard, Pamela Davidson, Tom Fullum, and Jason Trentacoste made key contributions to this report. Nuclear Weapons: Actions Needed by NNSA to Clarify Dismantlement Performance Goal. GAO-14-449. Washington, D.C.: April 30, 2014. ICBM Modernization: Approaches to Basing Options and Interoperable Warhead Designs Need Better Planning and Synchronization. GAO-13-831. Washington, D.C.: September 20, 2013. Modernizing the Nuclear Security Enterprise: Observations on NNSA’s Options for Meeting Its Plutonium Research Needs. GAO-13-533. 
Washington, D.C.: September 11, 2013. Nuclear Weapons: NNSA Needs to Improve Guidance on Weapon Limitations and Planning for Its Stockpile Surveillance Program. GAO-12-188. Washington, D.C.: February 8, 2012. Nuclear Weapons: DOD and NNSA Need to Better Manage Scope of Future Refurbishments and Risks to Maintaining U.S. Commitments to NATO. GAO-11-387. Washington, D.C.: May 2, 2011. Nuclear Weapons: NNSA and DOD Need to More Effectively Manage the Stockpile Life Extension Program. GAO-09-385. Washington, D.C.: March 2, 2009. Nuclear Weapons: Annual Assessment of the Safety, Performance, and Reliability of the Nation’s Stockpile. GAO-07-243R. Washington, D.C.: February 2, 2007. Nuclear Weapons: Improved Management Needed to Implement Stockpile Stewardship Program Effectively. GAO-01-48. Washington, D.C.: December 14, 2000. DOE Facilities: Better Prioritization and Life Cycle Cost Analysis Would Improve Disposition Planning. GAO-15-272. Washington, D.C.: March 19, 2015. DOE Real Property: Better Data and a More Proactive Approach Needed to Facilitate Property Disposal. GAO-15-305. Washington, D.C.: February 25, 2015. Nuclear Waste: DOE Needs to Improve Cost Estimates for Transuranic Waste Projects at Los Alamos. GAO-15-182. Washington, D.C.: February 18, 2015. Nuclear Weapons: Technology Development Efforts for the Uranium Processing Facility. GAO-14-295. Washington, D.C.: April 18, 2014. Federal Real Property: Improved Transparency Could Help Efforts to Manage Agencies’ Maintenance and Repair Backlogs. GAO-14-188. Washington, D.C.: January 23, 2014. Nuclear Weapons: Factors Leading to Cost Increases with the Uranium Processing Facility. GAO-13-686R. Washington, D.C.: July 12, 2013. Department of Energy: Observations on Project and Program Cost Estimating in NNSA and the Office of Environmental Management. GAO-13-510T. Washington, D.C.: May 8, 2013. Department of Energy: Concerns with Major Construction Projects at the Office of Environmental Management and NNSA. GAO-13-484T. Washington, D.C.: March 20, 2013. Modernizing the Nuclear Security Enterprise: Observations on DOE’s and NNSA’s Efforts to Enhance Oversight of Security, Safety, and Project and Contract Management. GAO-13-482T. Washington, D.C.: March 13, 2013. Modernizing the Nuclear Security Enterprise: Observations on the National Nuclear Security Administration’s Oversight of Safety, Security, and Project Management. GAO-12-912T. Washington, D.C.: September 12, 2012. Modernizing the Nuclear Security Enterprise: New Plutonium Research Facility at Los Alamos May Not Meet All Mission Needs. GAO-12-337. Washington, D.C.: March 26, 2012. Nuclear Weapons: NNSA Needs More Comprehensive Infrastructure and Workforce Data to Improve Enterprise Decision-making. GAO-11-188. Washington, D.C.: February 14, 2011. Nuclear Weapons: National Nuclear Security Administration’s Plans for Its Uranium Processing Facility Should Better Reflect Funding Estimates and Technology Readiness. GAO-11-103. Washington, D.C.: November 19, 2010. Nuclear Weapons: Actions Needed to Identify Total Costs of Weapons Complex Infrastructure and Research and Production Capabilities. GAO-10-582. Washington, D.C.: June 21, 2010.
Science, Technology, and Engineering Capabilities
Nuclear Weapons: National Nuclear Security Administration Needs to Ensure Continued Availability of Tritium for the Weapons Stockpile. GAO-11-100. Washington, D.C.: October 7, 2010. 
Nuclear Weapons: Actions Needed to Address Scientific and Technical Challenges and Management Weaknesses at the National Ignition Facility. GAO-10-488. Washington, D.C.: April 8, 2010. Modernizing the Nuclear Security Enterprise: Strategies and Challenges in Sustaining Critical Skills in Federal and Contractor Workforces. GAO-12-468. Washington, D.C.: April 26, 2012. Department of Energy: Progress Made Overseeing the Costs of Contractor Postretirement Benefits, but Additional Actions Could Help Address Challenges. GAO-11-378. Washington, D.C.: April 29, 2011. Nuclear Weapons: NNSA Needs More Comprehensive Infrastructure and Workforce Data to Improve Enterprise Decision-making. GAO-11-188. Washington, D.C.: February 14, 2011. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. DOE and NNSA Project Management: Analysis of Alternatives Could Be Improved by Incorporating Best Practices. GAO-15-37. Washington, D.C.: December 11, 2014. Project and Program Management: DOE Needs to Revise Requirements and Guidance for Cost Estimating and Related Reviews. GAO-15-29. Washington, D.C.: November 25, 2014. Nuclear Weapons: Ten-Year Budget Estimates for Modernization Omit Key Efforts, and Assumptions and Limitations Are Not Fully Transparent. GAO-14-373. Washington, D.C.: June 10, 2014. National Nuclear Security Administration: Agency Report to Congress on Potential Efficiencies Does Not Include Key Information. GAO-14-434. Washington, D.C.: May 15, 2014. Modernizing the Nuclear Security Enterprise: NNSA’s Budget Estimates Do Not Fully Align with Plans. GAO-14-45. Washington, D.C.: December 11, 2013. Modernizing the Nuclear Security Enterprise: NNSA’s Reviews of Budget Estimates and Decisions on Resource Trade-offs Need Strengthening. GAO-12-806. Washington, D.C.: July 31, 2012. National Nuclear Security Administration: Observations on NNSA’s Management and Oversight of the Nuclear Security Enterprise. GAO-12-473T. Washington, D.C.: February 16, 2012. Department of Energy: Additional Opportunities Exist to Streamline Support Functions at NNSA and Office of Science Sites. GAO-12-255. Washington, D.C.: January 31, 2012.
Nuclear weapons continue to be an essential part of the nation's defense strategy. The end of the cold war resulted in a shift from producing new nuclear weapons to maintaining the stockpile through refurbishment. Also, billions of dollars in scheduled maintenance for nuclear weapons infrastructure has been deferred. The 2010 Nuclear Posture Review identified long-term stockpile modernization goals for NNSA that include (1) sustaining a safe, secure, and effective nuclear arsenal and (2) investing in a modern infrastructure. The National Defense Authorization Act for Fiscal Year 2011 included a provision for GAO to report annually on NNSA's nuclear security budget materials. This report (1) identifies changes in estimates in the 2015 budget materials from the prior year's materials, (2) assesses the extent to which NNSA's 2015 budget estimates align with plans for major modernization efforts, and (3) addresses the agency's stated goal of stopping the growth of its deferred maintenance backlog. GAO analyzed NNSA's 2014 and 2015 nuclear security budget materials, which describe modernization plans and budget estimates for the next 25 years, and interviewed NNSA officials. The National Nuclear Security Administration's (NNSA) 25-year budget estimates for modernizing the nuclear security enterprise in its fiscal year 2015 budget materials total $293.4 billion, which is an increase of $17.6 billion (6.4 percent) compared with the prior year's materials. NNSA's budget materials are (1) its 2015 congressional budget justification that includes the President's fiscal year budget request and information about 4 additional years of planned budget requests, and (2) its update to its Stockpile Stewardship and Management Plan that includes NNSA's long-range, 25-year plans for sustaining the stockpile and modernizing the nuclear security enterprise. Congress funds NNSA's 2015 budget estimates in four program areas: stockpile; infrastructure; science, technology, and engineering capabilities; and other weapons activities. GAO found that some budget estimates for individual programs within these four areas changed more significantly from 2014 to 2015 than the total budget estimates changed. For example, stockpile budget estimates to refurbish nuclear weapons through life extension programs (LEP) decreased by 31 percent in part due to changes in programs' production schedules. In contrast, infrastructure budget estimates for construction projects increased by 71 percent largely because the estimates were more complete than those GAO evaluated in 2014. For NNSA's major modernization efforts—which include LEPs that are not in full-scale production and major construction projects—near-term budget estimates for two of three LEPs align with plans, but estimates for construction projects are too preliminary to assess alignment. NNSA's near-term budget estimates to refurbish its B61 bomb and W88 warhead align with its plans because annual budget estimates reflect internally developed estimated cost ranges for the programs. However, the near-term budget estimates for the cruise missile LEP are not aligned with NNSA's 2015 plans because annual budget estimates are below the low point of the program's internally developed estimated cost range. A 2008 internal review of NNSA's project management stated that failure to request full funding can result in risks to programs' goals such as increased program costs and schedule delays. 
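The year-over-year change cited above can be cross-checked with simple arithmetic; the brief Python sketch below reproduces the roughly 6.4 percent increase from the stated totals.

# Cross-check of the reported change in NNSA's 25-year modernization
# estimates, using only the totals stated above (billions of dollars).
current_total = 293.4
increase = 17.6
prior_total = current_total - increase            # 275.8
percent_increase = increase / prior_total * 100   # about 6.4 percent
print(round(prior_total, 1), round(percent_increase, 1))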
GAO's prior work has emphasized the importance of transparency in federal agencies' budget presentations because such information helps Congress understand how new funding requests relate to program decisions. Including information in future versions of budget materials on the potential risks to achieving LEPs' goals when funding requests are not aligned with plans would improve the quality of budget materials. NNSA's infrastructure budget estimates are not adequate to address its reported $3.6 billion deferred maintenance backlog, and the backlog will continue to grow. One reason the backlog will continue to grow is that the 2015 budget estimates to address the problem fall below DOE infrastructure investment benchmarks for maintaining and recapitalizing existing facilities, activities that can reduce deferred maintenance. NNSA's goal to stop the growth of the backlog is stated in its budget materials, but these materials do not identify that budget estimates for maintenance and recapitalization fall below DOE's infrastructure investment benchmarks. Including information in future versions of budget materials on the potential risks to the achievement of infrastructure goals if budget estimates fall below internal benchmarks would improve the transparency of budget materials. GAO recommends improving the transparency of future budget materials by identifying potential risks to the achievement of program goals if budget estimates are lower than plans suggest are necessary. NNSA agreed with GAO's recommendations and outlined actions to address them.
During 1998, the Navy consolidated the Pearl Harbor Naval Shipyard and the Naval Intermediate Maintenance Facility in Hawaii. Because of concerns raised about certain aspects of the consolidation, the Navy implemented a test project, commonly called the Pearl Harbor pilot, to determine if integrating the management, operations, and funding of the shipyard and the intermediate maintenance facility could result in greater efficiency and lower overall ship maintenance costs. In September 1999, we reported that the Pearl Harbor pilot was not yet complete and preliminary results were mixed, and we recommended that the Navy take steps to address unresolved issues related to financial management of the consolidated facility. The Navy accomplishes maintenance on its surface ships and submarines at three levels: organizational, intermediate, and depot. Organizational-level maintenance includes all maintenance actions that can be accomplished by a ship’s crew. For example, the ship’s crew may replace or fix a cracked gasket or leaks around a hatch or doorway aboard ship. Traditionally, intermediate-level maintenance is accomplished by Navy intermediate maintenance activities for work that is beyond the capability or capacity of a ship’s crew. An intermediate maintenance activity tests, calibrates, and repairs ship systems and equipment that the ship’s crew may not have the tools or capability to maintain. On the other hand, depot work includes all maintenance actions that require skills or facilities beyond those of the organizational and intermediate levels. Shipyards with extensive shop facilities, specialized equipment, and highly skilled personnel accomplish major repairs, overhauls, and modifications. Figure 1 shows where the Navy’s ship maintenance activities are located. In March 1994, the Chief of Naval Operations announced a regional maintenance program to streamline the Navy ship repair and maintenance processes, reduce infrastructure and costs, and maximize outputs. The plan was to (1) optimize intermediate-level maintenance through consolidation of intermediate activities, (2) integrate intermediate and depot activities to be managed by fleet commanders, and (3) conduct fleet maintenance using an integrated maintenance process supported by common business and maintenance procedures. The first phase, consolidation of intermediate-level maintenance, is nearing completion. For the second phase of the program, the Navy has implemented the Pearl Harbor pilot and may consolidate the operations of the Puget Sound Naval Shipyard and the Intermediate Maintenance Facility, Northwest (formerly the Trident Refit Center, Bangor, and the Ship Intermediate Maintenance Facility, Everett). The third phase, using a single maintenance process for fleet maintenance, is to be completed in fiscal year 2001. Prior to the consolidation, the Pearl Harbor Naval Shipyard and the Naval Intermediate Maintenance Facility were individual commands with individual physical plants, organizational infrastructures, and administrative support services. The shipyard was managed by the Naval Sea Systems Command and funded through the Navy Working Capital Fund, while the intermediate maintenance facility was managed by the Pacific Fleet and financed through direct appropriations. Navy officials recognized that these different financial and organizational structures required them to use cumbersome, work-around procedures to share workloads and resources between the shipyard and the intermediate maintenance facility. 
The private ship repair facilities in Hawaii also complete a small amount of maintenance work for the Navy. On April 30, 1998, the Navy consolidated the operations of the Pearl Harbor Naval Shipyard and the Naval Intermediate Maintenance Facility, including overhead functions such as engineering, quality assurance, occupational safety, and administration. Similarly, maintenance shops, crane operations, and calibration laboratories were also consolidated. Further, the Pacific Fleet assumed ownership and overall management and financial responsibility for the consolidated facility, and the Naval Sea Systems Command continued to be the technical and operating authority. The Navy named the consolidated facility the Pearl Harbor Naval Shipyard and Intermediate Maintenance Facility. This consolidation is the Navy’s first attempt at the full-scale, total merger of two maintenance activities operating under separate command structures and financial systems. To achieve a fully integrated organization, the Navy decided the consolidated facility should use a single financial structure and selected direct appropriations instead of the working capital fund during the pilot period. This decision was based on several factors, including the belief that the pilot goals could more readily be achieved by using direct appropriations. The Pacific Fleet was the largest customer of ship maintenance activities in Hawaii, and most Fleet maintenance activities (ship repair facilities, shore intermediate maintenance activities, trident refit centers, and aviation intermediate maintenance departments) were funded with direct appropriations. Thus, Navy officials expected fewer financial issues using direct appropriations because the Fleet could integrate the consolidated facility into its financial structure and would not need to establish another system. Several Navy officials also believed that the working capital fund included fees and charges that overstate ship maintenance costs compared to direct appropriations, under which some overhead is not directly assigned to the cost of operations. On the other hand, the working capital fund accounts for these costs in an attempt to identify the full cost of ship maintenance operations. While the level of resources required to carry out ship maintenance activities is likely to be similar regardless of whether financed using direct appropriations or the working capital fund, under the working capital fund a customer is more likely to be directly responsible for a larger portion of those costs. Use of a working capital fund better enables Department of Defense (DOD) components to fully account for their share of the program costs. If Navy officials decide that direct appropriations are the most appropriate financial structure at Pearl Harbor after the completion of the pilot, the consolidated facility would then be permanently transferred from the working capital fund. The Deputy Secretary of Defense required the Navy to develop a test plan to determine whether the Pearl Harbor consolidation had resulted in a more efficient use of personnel and lower overall unit costs than separate facilities. A panel composed of officials from the Office of the Secretary of Defense (OSD) and the Navy selected nine test metrics that represent a variety of issues and performance indicators for the consolidated facility (see table 2). 
The Navy used fiscal year 1997 as the baseline for measuring success or failure of the consolidated facility because this was the last full year the former Pearl Harbor shipyard and the intermediate maintenance facility operated as independent activities. The baseline is compared with data for fiscal year 1999. Fiscal year 1998 was eliminated because of the operational turbulence expected from the consolidation of activities during that year. The Conference Report for the Department of Defense Appropriations Act for Fiscal Year 1998 concluded that it would take at least 2 years before the Navy could determine whether the consolidation of maintenance activities for the pilot was effective and should be made permanent or expanded to other locations. The report directed the Navy to report its findings from the pilot to the Committees on Appropriations on or after April 1, 1999, and not to expand the pilot until 6 months after it had made its report. The conferees also directed the Navy not to make any permanent changes to the workforce in terms of total number of employees or any other permanent changes until the pilot was completed. Navy officials expect to issue the report in fiscal year 2001. At the time of our 1999 report, we noted that while the consolidation of shipyard and intermediate maintenance activities offered clear benefits, the close proximity of the facilities and the larger portion of Fleet-funded work in Hawaii may favorably affect the pilot’s results; this may not be the case at other locations. Consequently, we concluded the Pearl Harbor pilot provided only a general indication that future consolidations would result in efficiencies largely because of unique aspects of Pearl Harbor ship maintenance activities and financial management issues. Further, we reported that OSD and Navy officials had different opinions about the potential impact of using direct appropriations on the following issues. Cost visibility and accountability for consolidated ship maintenance operations. The Congress established working capital funds to, among other purposes, provide a flexible funding mechanism that would allow defense industrial and service activities to operate on the same basis as the private sector, including the use of standard cost accounting practices and techniques. According to OSD officials, applying these practices and techniques has made shipyard costs more visible and improved the efficiency of operations. Naval shipyards and activities remaining in the working capital fund. OSD officials were concerned that the Navy had not adequately addressed the impact of removing all naval shipyards from the working capital fund. They believed that if the Navy removed the shipyards from the fund, the Navy would have to request direct appropriations to pay the cost of transferring the shipyards from the working capital fund, or other activities in the fund would need to absorb a larger share of costs. Ship maintenance activities during periods without appropriations. A working capital fund is not directly subject to the annual appropriations cycle and can continue operations without interruption between fiscal years. Consequently, the former Pearl Harbor shipyard operators were freed from reprogramming limitations and restrictions applicable to regular appropriations and were allowed to incur costs without waiting for enactment of an appropriation. 
Because this financial flexibility is considered critical to shipyard operations, OSD officials were concerned that the Navy had not addressed how eliminating this flexibility would affect Pearl Harbor operations or future consolidations. Capital improvement program for consolidated ship maintenance activities. According to OSD officials, an important aspect of a working capital fund is its capital improvement program. In the case of the former Pearl Harbor shipyard, it depreciated its capital assets and collected this expense through the reimbursable rate charged to its customers. Therefore, the fund had a ready reserve to finance capital improvements for the shipyard. However, OSD officials were concerned that future funding levels may be insufficient because of uncertainties in the appropriation process. We concluded that while the consolidation of shipyard and intermediate maintenance activities offered clear benefits, financial management issues existed that needed to be resolved for future operations at Pearl Harbor and in considering other consolidations. OSD concurred with the intent of our recommendations to resolve financial management issues related to the consolidation at Pearl Harbor and indicated that the Departments of Defense and the Navy would correct them. This report discusses the extent to which these issues have been addressed by the Departments. The Navy has not provided adequate cost visibility and accountability over ship maintenance activities at Pearl Harbor following the consolidation because it has not implemented a method to routinely and systematically accumulate and account for the full cost of operations or distinguish between depot and intermediate work performed by consolidated ship maintenance facilities. The consolidated facility’s management and financial systems do not readily identify and report the full cost of ship maintenance operations. Federal accounting standards require that the systems account for the full cost of operations, including the depreciation of facilities and equipment, centrally managed financial and technical support services, selected base operating support, maintenance shops overhead, military personnel, and borrowed workers. Consequently, OSD and Navy officials have not had complete, reliable data needed for making fully informed decisions related to the management of ship maintenance activities and for establishing goals and measuring performance. Specifically, at the time of our review, they did not have reliable data on an ongoing basis to determine the total cost of delivering a direct labor hour of ship maintenance work—a key metric for evaluating the consolidated facility’s productivity and performance. Furthermore, the facility’s systems do not distinguish between depot and intermediate work, even though 10 U.S.C. 2466 limits the funds that may be used for contractor performance of depot maintenance work and requires DOD to report on the allocation of depot-level workloads between public and private sectors. Additionally, OSD and Navy comptroller officials question the reliability of Pearl Harbor’s data that are used to show compliance with the Chief Financial Officers Act (P.L. 101-576) and the Government Performance and Results Act (P.L. 103-62). Prior to the consolidation, the former Pearl Harbor shipyard and intermediate maintenance facility operated their own management and financial systems. 
The former shipyard was funded through the working capital fund and used the Shipyard Management Information System (an intricate network of interfacing systems intended to support planning, timekeeping, payroll, material, and cost accounting) to provide management information for the shipyard. The former intermediate maintenance facility was funded through direct appropriations and used the Standard Accounting and Reporting System and the Maintenance Resource Management System. The Standard Accounting and Reporting System was relied on to track, monitor, and report on appropriations, obligations, and expenditures for the intermediate facility, such as materials and civilian personnel. The Maintenance Resource Management System was used to schedule work, set priorities, procure materials, and establish time frames. Neither system was designed to provide information on the full costs of the intermediate facility’s operations. Even though federal accounting standards require federal agencies to determine the full cost of operations, Pearl Harbor’s management and financial systems do not account for the cost of the depreciation of facilities and equipment, centrally managed financial and technical support services, overhead costs by maintenance shop, selected base operating support, military personnel, and borrowed workers. This has inhibited the Navy’s ability to produce reliable cost data that are essential for making informed decisions related to the management of ship maintenance activities and for the establishment of strategic goals and the measurement of accomplishments and performance against established goals. While this lack of adequate cost visibility and accountability may not directly affect those managers and workers performing ship maintenance and repairs at Pearl Harbor, it has generated significant concern among officials at higher organizational levels outside the consolidated facility. For example, senior OSD and Navy officials have been concerned about the Navy’s ability to provide adequate cost visibility and accountability for ship maintenance operations at Pearl Harbor and about the reliability of ship maintenance data for Pearl Harbor used to show compliance with relevant statutes and regulations. The Statement of Federal Financial Accounting Standards No. 4 requires federal agencies to accumulate the full cost of outputs through appropriate costing methodologies or “cost finding” techniques. The full cost of an output is the sum of the (1) cost of resources consumed that directly or indirectly contribute to the output and (2) cost of identifiable supporting services provided by other entities. As such, the financial structure used to fund federal activities has no bearing on determining the full cost of an output. Compliance with these standards is intended to provide managers relevant and reliable data for making resource allocations, program modifications, and performance evaluations; comparing costs to outputs; and generating financial and performance reports. Our review of the consolidated facility’s management and financial systems showed that the facility did not routinely and systematically identify and accumulate data on the full cost of its ship maintenance operations in accordance with Statement of Federal Financial Accounting Standards No. 4. Contrary to the standards’ requirement to identify the full cost of operations, the consolidated facility’s systems did not recognize costs if they were paid by other Navy activities. 
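To illustrate the full-costing principle described above, the following is a minimal Python sketch that sums directly and indirectly consumed resources with supporting services paid by other entities; the category names mirror the discussion in this report, but all dollar amounts and the labor-hour figure are hypothetical.

# Illustration of SFFAS No. 4 full costing: the full cost of an output is the
# cost of resources consumed directly or indirectly plus the cost of supporting
# services provided (and paid for) by other entities. All amounts are
# hypothetical (millions of dollars).

direct_and_indirect_costs = {
    "direct labor": 120.0,
    "material": 45.0,
    "maintenance shop overhead": 30.0,
}
costs_paid_by_other_entities = {
    "facilities and equipment depreciation": 10.0,
    "centrally managed support services": 2.0,
    "base operating support": 14.0,
    "military personnel": 8.0,
    "borrowed labor": 5.0,
}

full_cost = sum(direct_and_indirect_costs.values()) + sum(costs_paid_by_other_entities.values())

# A full-cost-based performance metric: cost per direct labor hour delivered
# (the 1.5 million hours figure is also hypothetical).
direct_labor_hours = 1.5
print(round(full_cost, 1), round(full_cost / direct_labor_hours, 2))
# 234.0 million dollars in full cost; about $156 per direct labor hour.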
Pacific Fleet and Pearl Harbor officials stated that their focus is to manage the consolidated facility’s appropriations and obligations and to execute the allocated funding within budget authority. In addition, they stated their belief that their systems and processes are not required to identify and accumulate data on any costs not paid from the appropriated funding provided to the consolidated facility. Specifically, at the time of our review, the categories of costs associated with consolidated ship maintenance operations that the facility’s systems did not routinely and systematically identify and accumulate include:
Facilities and equipment depreciation. Depreciation costs are no longer accumulated because the former Pearl Harbor shipyard and all of its facilities and equipment are now Pacific Fleet assets that will be recapitalized through direct appropriations. Prior to consolidation, depreciation costs, reported to be $10.6 million in fiscal year 1997, were considered in developing the former shipyard’s reimbursable rate used to collect the costs of its maintenance operations from its customers.
Centrally managed financial and technical support services. Support services costs were a reported $1.9 million for the former Pearl Harbor shipyard in fiscal year 1999, but these costs are no longer accumulated at the consolidated facility because they are not included in its reimbursable rate. For example, Defense Finance and Accounting Service support costs are not accumulated; instead, the Pacific Fleet assumed responsibility for the costs and their payment. In another example, the Naval Sea Systems Command’s centrally managed technical support costs (automated data processing, depot maintenance report preparation, and shipyard management support) are no longer accumulated because the Command assumed responsibility for the costs and their payment.
Selected base operating support costs. The costs for security, utilities, water, steam, sewage, and recurring maintenance (infrastructure) activities are no longer accumulated by the consolidated facility. Prior to consolidation, the costs were accumulated and allocated to jobs as overhead costs and recovered through the former shipyard’s reimbursable rate. Now, the Navy Regional Commander is responsible for these costs, estimated at $13.7 million and $20.6 million in fiscal years 1999 and 2000, respectively.
Overhead costs by maintenance shop and work item. Pacific Fleet and Pearl Harbor officials believe there is no longer a need to determine overhead costs by maintenance shop and work item because there is no longer a need to accumulate such costs for the purposes of developing cost data to be factored into the consolidated facility’s reimbursable rate. Currently, overhead costs are funded with direct appropriations, accounted for by cost category, and monitored to ensure that expenditures do not exceed allocations. Previously, the former shipyard accounted for overhead costs by individual maintenance shops and work items.
Military personnel costs. Pacific Fleet and Pearl Harbor officials believe there is no longer a need to account for military personnel costs because they are managed by the Navy’s Bureau of Naval Personnel and funded by the Bureau with direct appropriations. These costs are no longer considered reimbursable and are not accumulated by the consolidated facility. 
Prior to the consolidation, the former shipyard accounted for these costs as overhead in its reimbursable rate and paid the Bureau of Naval Personnel for its military personnel with proceeds collected from its customers.
Borrowed labor costs. The Navy uses borrowed workers to balance total resources with workload among its shipyards, but these costs are no longer accumulated by maintenance project and work item in Hawaii. They are funded with direct appropriations, included in the material cost category, and monitored to ensure that expenditures do not exceed allocations. Prior to consolidation, the former shipyard accounted for the costs by maintenance project and work item and paid other naval shipyards for the borrowed labor with proceeds collected from its customers.
Because the Navy does not routinely and systematically accumulate and account for the full cost of operations for the consolidated facility, OSD and Navy officials have not had complete, reliable data needed for making fully informed decisions related to the management of ship maintenance activities and for establishing goals and measuring performance. For example, at the time of our review, officials did not have data to determine on an ongoing basis whether their ship maintenance operations were costing less or more to provide a direct maintenance hour—a key metric for evaluating the consolidated facility’s productivity and performance. Pearl Harbor officials attempted to generate this information during fiscal year 2000 but were unsuccessful because of difficulties in reconciling cost data between the consolidated facility’s management and financial systems and in duplicating the Naval Audit Service’s method for developing the metric value. Concerns about the adequacy of data had led the Chief of Naval Operations and the Commander, Naval Sea Systems Command, to ask the Naval Audit Service to develop the fiscal year 1997 baseline value, validate the data, and develop values for the metric for fiscal years 1998 and 1999. According to Navy officials, there is no decision on whether the metric will be used to measure the performance of the combined facility in fiscal year 2000. Pacific Fleet and Pearl Harbor officials maintain that all work is considered the same in the consolidated facility. Consequently, the Navy cannot readily identify the cost expended by the consolidated facility for depot-level work to show compliance with (1) the 10 U.S.C. 2466 depot maintenance allocation limits and associated reporting requirements for DOD departments and agencies and (2) DOD financial regulations that implement the depot maintenance reporting requirements. Section 2466 of title 10 requires that not more than 50 percent of funds allocated in a fiscal year for depot work be used for contractor-performed work and requires the Secretary of Defense to report to the Congress on depot-level workloads during the preceding two fiscal years. Section 2460 of title 10 provides that depot-level maintenance includes all workloads, regardless of funding source or work location. Furthermore, DOD financial management regulations (vol. 6, ch. 14) require all depot maintenance activities, regardless of their funding source, to uniformly record, accumulate, and report costs incurred in their depot maintenance operations and require these activities to maintain systems to collect the data. 
Each activity is required to collect data on direct labor hours and costs, material costs, and other direct costs; operations overhead costs; general and administrative costs; total maintenance costs; and cost per direct labor hour. OSD officials use these data to analyze historical cost trends, evaluate and oversee resources and budgets, develop direction and guidance, estimate depot maintenance requirements, examine cost drivers, and comply with the 10 U.S.C. 2466 depot maintenance allocation and reporting requirements. Prior to consolidation, the Navy’s determination of depot and intermediate maintenance work was based on which facility performed it: the former Pearl Harbor shipyard performed depot work, and the former intermediate maintenance facility performed intermediate work. However, because Pacific Fleet and Pearl Harbor officials maintain that all work is considered and classified the same at the consolidated facility, the management and financial systems do not differentiate between depot and intermediate categories of work. Consequently, the Navy cannot readily identify the cost associated with the depot-level workload completed by the consolidated facility to show compliance with the 10 U.S.C. 2466 depot maintenance allocation and reporting requirements and with DOD financial regulations that implement these requirements. Because of inadequate cost visibility and accountability, DOD and Navy comptroller officials question the reliability of Pearl Harbor’s data that are used to show compliance with the Chief Financial Officers Act (P.L. 101-576) and establish and measure performance against goals under the Government Performance and Results Act (P.L. 103-62). In some instances, officials have used rough estimates to comply with the acts. The Chief Financial Officers Act requires federal agencies to develop and report cost information and periodic performance measurements. Cost information is necessary for establishing strategic goals, measuring service efforts and accomplishments, and reporting actual performance against established goals and is essential for assessing governmental accountability. In addition, the Government Performance and Results Act, which mandates performance measurements by federal agencies, requires each agency to establish performance indicators for each program and measure or assess relevant outputs, service levels, and outcomes of each program as a basis for comparing results with established goals. According to OSD and Navy comptroller officials, the reliability of the data provided by the Pacific Fleet and Pearl Harbor officials to show compliance with the Chief Financial Officers Act and the Government Performance and Results Act was questionable. For example, Pacific Fleet and Pearl Harbor officials developed rough estimates of the overhead cost rates for maintenance shops for incorporation with other cost data to show compliance with the acts. As discussed previously, overhead rates were allocated to maintenance shops prior to consolidation so that the total cost of operations would be captured through the reimbursable rates the former shipyard charged to its customers. Pacific Fleet and Pearl Harbor officials believe there is no longer a need to determine overhead costs by maintenance shop because such costs are no longer accumulated for the purposes of developing data to be factored into the consolidated facility’s reimbursable rate. 
Although Pacific Fleet and Pearl Harbor officials developed rough estimates of these overhead rates for the facility in fiscal year 1999, OSD and Navy comptroller officials said that the estimates were imprecise at best. Pacific Fleet and Pearl Harbor officials acknowledged that they have had difficulties providing data that are timely and reliable and that can be used to meet the requirements of applicable statutes. As we discussed in our prior report, OSD and Navy officials still differ over three key issues related to the financial structure for the consolidation of ship maintenance activities at Pearl Harbor and other locations. First, OSD officials believe the Navy should reimburse the working capital fund for undepreciated capital assets (assets financed through the working capital fund whose value has not yet been recovered) and assets under development (assets purchased but not delivered) when a naval shipyard is transferred to direct appropriations. However, some Navy officials believe they do not need to refinance these items because they have already been financed with funds provided by shipyard customers. Second, officials differ about whether consolidated facilities funded with direct appropriations will be able, as are naval shipyards operating under the working capital fund, to continue routine maintenance operations if potential funding gaps occur at the beginning of fiscal years or when expected maintenance costs exceed annual appropriations. A working capital fund is not directly subject to the annual appropriation cycle, which means shipyard operators can incur some costs without waiting for enactment of an appropriation and operators are free of reprogramming limitations and restrictions applicable to direct appropriations. Third, officials differ about whether funding levels under direct appropriations will be sufficient to maintain an adequate capital improvement program for the consolidated facilities funded with direct appropriations because of uncertainties in competing with other Navy programs and priorities for funding during the budgeting and appropriation processes. When the Pearl Harbor pilot was implemented, OSD officials determined that the Navy would need to make the working capital fund financially whole if it decided to permanently transfer the former Pearl Harbor shipyard to direct appropriations. According to OSD officials, the costs of such transfers, collectively called buyout costs, include liabilities, accumulated operating results (or net financial position), accrued employee leave, and undepreciated capital assets. Because DOD financial management regulations (vol. 11B, ch. 51) include only general procedures governing the transfer of working capital fund activities to direct appropriations, DOD’s processes and procedures for such transfers are open to interpretation and some Navy officials are disputing the transfer costs determined by OSD. The financial management regulations primarily address accounting procedures for transferring functions and do not specifically address processes and procedures for identifying all the categories (types) of costs and amounts that should be paid when a fund activity transfers to direct appropriations. Several OSD and Navy officials involved in the transfer believe that without more specific guidance the Navy will continue to dispute OSD’s determination of buyout costs as the Navy considers transferring other shipyards to direct appropriations. 
OSD and Navy officials differ on whether the Navy should reimburse the working capital fund for the value of the former shipyard’s undepreciated capital assets and assets under development, and the Navy has requested a waiver for the accrued leave liability. If the former Pearl Harbor shipyard is formally transferred from the working capital fund, OSD officials believe the Navy should pay the fund an estimated $101.4 million for undepreciated capital assets and $9 million for assets under development. However, some Navy officials believe that the former shipyard’s customers have already paid for these assets and that they do not need to request or allocate appropriations to pay for them. Additionally, in May 1998, the Navy requested a waiver from the Office of Management and Budget from paying the former shipyard’s accrued leave liability, reported to be $14.3 million by OSD. However, the Office of Management and Budget rejected the Navy’s request. Should the Navy decide to permanently transfer all naval shipyards from the fund, the Navy could be required to pay more than $553 million based on OSD and Navy data for these items: $390 million for undepreciated assets, $76 million for assets under development, and $87 million for accrued leave liability. In fiscal year 1999, the Navy paid the working capital fund $18.4 million for the former Pearl Harbor shipyard’s accumulated operating results, obligations not yet paid, work items in inventory, and receivables less payables. These selected buyout costs were derived through a series of lengthy meetings with OSD and Navy officials. 
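The buyout components reported above can be totaled directly; the short Python sketch below sums them, and the Pearl Harbor subtotal is simply an illustrative sum of the three cited components rather than a figure reported by OSD.

# Totals of the reported buyout cost components (millions of dollars).
navy_wide = {
    "undepreciated capital assets": 390,
    "assets under development": 76,
    "accrued leave liability": 87,
}
pearl_harbor = {
    "undepreciated capital assets": 101.4,
    "assets under development": 9.0,
    "accrued leave liability": 14.3,
}
print(round(sum(navy_wide.values()), 1))     # 553, consistent with "more than $553 million"
print(round(sum(pearl_harbor.values()), 1))  # 124.7 -- an illustrative sum, not an OSD-reported total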
OSD officials have concerns about whether the consolidated facility will be able, as are naval shipyards operating under the working capital fund, to continue routine ship maintenance operations when expected maintenance costs exceed annual appropriations. As discussed previously, activities operating under the working capital fund are not directly subject to the annual appropriation cycle and can continue operations through the end of fiscal years without being concerned about reprogramming limitations and restrictions applicable to direct appropriations. In fiscal year 1999, the Pacific Fleet transferred $30.8 million to alleviate the funding shortfall for the consolidated facility (see table 3). According to Pearl Harbor officials, a funding shortfall made it difficult to execute planned work on schedule because of uncertainties about whether the necessary funds would be obtained from another source in sufficient time to meet schedules. As shown in table 3, the Pacific Fleet moved a reported $30.8 million in budget authority, mostly in the last 2 months of fiscal year 1999, to meet the funding shortfall the consolidated facility experienced in fiscal year 1999. The Pacific Fleet allocated a reported $244.9 million in ship depot operations support funds to the consolidated facility in fiscal year 1999, $9 million less than originally requested. The Fleet later allocated the consolidated facility an additional $300,000 in depot operations support funds and an additional $30.5 million in ship depot maintenance funds. According to Navy officials, the Fleet obtained most of the $30.8 million from other fleet-funded commands and activities. According to Pacific Fleet and Pearl Harbor officials, the funding shortfall resulted because the budgeting process for the consolidated facility in fiscal year 1999 did not adequately accommodate changes inherent in the transfer from the working capital fund to direct appropriations. They note that even after the transfer to direct appropriations, the Navy funded the consolidated facility at less than its projected workload for fiscal year 1999. Consequently, the budgeted amount was insufficient to fund the overtime and borrowed workers needed to accomplish the workload in fiscal year 1999, and the Fleet had to transfer and reprogram funds originally allocated to other activities to meet the funding shortfall. Additionally, some maintenance work was postponed. Because of uncertainties in competing with other Navy programs and priorities for funding during the budgeting and appropriations processes, OSD and Navy officials differ about whether funding levels under direct appropriations will be sufficient to maintain an adequate capital improvement program for the consolidated facility. In the management agreement for the consolidation of the former Pearl Harbor Naval Shipyard and Naval Intermediate Maintenance Facility, the Naval Sea Systems Command is identified as responsible for requirements determination, planning, programming, budgeting, and acquisition cost of plant property and industrial plant equipment. Following the Command’s approval and the Pacific Fleet’s concurrence, capital improvement requirements are forwarded to the Chief of Naval Operations, where the requirements must compete against other Navy programs and priorities for limited appropriated funds. 
The former Pearl Harbor shipyard depreciated its capital assets and collected this expense through the reimbursable rate charged to customers; it received a reported $56 million for capital improvements during fiscal years 1993-98. Senior OSD officials have concerns about whether adequate funding will materialize in the future because the consolidated facility must compete for scarce funds under direct appropriations with other Navy programs and priorities. Recently, the Navy allocated $7.5 million in fiscal year 1999 appropriations and $5.7 million in fiscal year 2000 appropriations for capital improvement items at Pearl Harbor. Fiscal year 2001 funding totals $18.6 million, which includes a $17-million congressional add-on for the consolidated facility. However, according to senior OSD and Navy officials, the Navy has budgeted less than 5 percent of the identified program requirements for the consolidated facility for fiscal years 2003-07. The consolidation of the shipyard and intermediate ship maintenance activities at Pearl Harbor has improved and streamlined maintenance operations by making more effective use of workers and facilities, but overall results of the test metrics are inconclusive. The consolidation has increased overall flexibility of ship maintenance activities in Hawaii by establishing a single workforce from two work centers and reducing the maintenance infrastructure. Further, data for two of the nine metrics indicate improvements since the consolidation: the cost to provide a direct maintenance hour was less in fiscal year 1999, and the labor hours expended to deliver a direct maintenance hour were fewer in fiscal year 1999 and the first half of fiscal year 2000. However, because the consolidated facility did not routinely and systematically collect and report the full cost of operations, the cost to provide a direct maintenance hour during fiscal year 2000 was not available on an ongoing basis. The results for the seven other metrics are less conclusive about the consolidation’s accomplishments because factors unrelated to the consolidation affected the data, the change in the overall performance was insignificant, or the data indicated both positive and negative results. However, lessons learned from the metrics used in evaluating the Pearl Harbor consolidation should be useful in framing evaluation plans for future consolidations at other naval locations. Although the Navy is working to improve its performance, the consolidated facility continues to experience difficulties in completing long-term, complex ship maintenance projects on schedule, as did the former Pearl Harbor shipyard. Other potential benefits of the consolidation have not been fully realized because the planned number of overhead workers has not been moved to direct maintenance positions and equipment has not been entirely consolidated to increase productivity. As the Navy envisioned, the consolidation has integrated workers from two work centers into a single workforce and reduced the maintenance infrastructure in Hawaii. As planned, the Navy has integrated approximately 4,000 workers from two separate work centers into a common pool, thereby increasing management flexibility in assigning workers to maintenance projects. Prior to the consolidation, it was difficult to shift work or personnel between maintenance activities due to multiple independent organizational and financial structures. 
Accordingly, when administrative and financial requirements restricted the movement of workers, shipyard maintenance personnel not working on a specific maintenance project were sent to the excess labor shop to wait for an assignment and perform non-ship work such as facility maintenance or grounds-keeping. Between 100 and 200 workers were assigned daily to this shop. By integrating the two workforces, maintenance shops now have more flexibility in assigning excess workers to other projects, including projects historically completed by the former intermediate maintenance facility. In our 1999 report, we noted that the number of workers assigned daily to the excess labor shop had dropped to below 10 after the consolidation. Although Pearl Harbor officials told us during this review that the number of workers assigned daily to the excess labor shop continued to drop, we could not verify the statement because the consolidated facility had revised its process for assigning the workforce and no longer identifies excess workers. Because of the consolidation, the Navy has reduced the maintenance infrastructure at Pearl Harbor. For example, 13 (125,782 square feet) of the 27 buildings previously used by the former Naval Intermediate Maintenance Facility were turned over to the Commander, Navy Region Hawaii. While the consolidated facility will retain 6 of the remaining 14 intermediate maintenance buildings, Pacific Fleet and Pearl Harbor officials plan to vacate another six (20,074 square feet) and are reviewing the disposition of two buildings (16,996 square feet). As of May 2000, the Commander, Navy Region Hawaii, had demolished or had plans to demolish three former intermediate maintenance buildings and was able to demolish two of its buildings after tenants moved into vacated intermediate maintenance buildings. The demolition cost for all five buildings and a portion of an additional building is estimated at $1.8 million, with a projected annual cost avoidance of $312,000 or a payback period of a little less than 6 years. Data for the following two key metrics indicate improvements in performance. It is important to note that the Naval Audit Service expended significant resources to develop and validate the data for these two metrics, which are the most reliable metrics for making pre- and post-consolidation comparisons. Fiscal year 1999 data for this metric indicate that the cost of delivering one direct maintenance hour has been lower since the consolidation. The Naval Audit Service determined that it cost $138.99 to deliver a maintenance shop direct labor hour in fiscal year 1999, compared with the adjusted baseline cost of $144.51 in fiscal year 1997. Pearl Harbor officials were unsuccessful in their attempts to determine the total cost to provide a direct maintenance hour during fiscal year 2000 because of difficulties in reconciling cost data between the consolidated facility’s systems and in duplicating the Naval Audit Service’s methodology for developing the metric. As discussed in chapter 2, the consolidated facility’s management and financial systems do not routinely and systematically accumulate the cost data to generate the metric. Because of concerns about the adequacy of data before the consolidation, the Chief of Naval Operations and the Commander, Naval Sea Systems Command, asked the Naval Audit Service to determine the cost of operations for the former shipyard and the intermediate maintenance facility in fiscal year 1997. 
The Audit Service was later tasked to determine the fiscal year 1999 post-consolidation cost of operating the consolidated facility, compute the cost of a maintenance shop direct labor hour, and compare the fiscal year 1999 results with the fiscal year 1997 baseline. To develop the metric, the Audit Service collected cost and maintenance data from the consolidated facility and other activities. Navy officials said no decision has been made on whether the metric value will be developed and used to measure the performance of the consolidated facility in fiscal year 2000.

Data for this metric indicate that delivering one direct maintenance hour has taken fewer labor hours since the consolidation, in both fiscal year 1999 and fiscal year 2000. For example, the Navy estimates that delivering a maintenance shop direct labor hour took 3.15 activity labor hours (overhead and direct maintenance hours) in fiscal year 1997, 3.03 hours in fiscal year 1999, and 3.06 hours through the middle of fiscal year 2000.

The results of the following metrics are inconclusive because decisions and circumstances external to the consolidation have driven the results, the change in the overall performance was insignificant, or the data indicated both positive and negative results.

Although fewer Current Ship Maintenance Program work items were completed following the consolidation, the results of this metric are inconclusive because they were influenced by the decrease in the number of military personnel since the consolidation, the increased use of borrowed workers since the consolidation, and other factors unrelated to the consolidation. Furthermore, several Navy officials are concerned about using work items from the Current Ship Maintenance Program to measure the consolidation's success or failure because the work items vary in terms of the labor hours and skills required to repair them. For example, work ranges from simple jobs (such as replacing a label or light bulb on a control panel) to complex jobs (such as overhauling a pump or nuclear valve). Additionally, the requirements to overhaul one pump differ from other overhauls depending on the problem and the type of pump. Prior to the consolidation, 19,777 work items were completed in fiscal year 1997, compared to 11,501 work items completed by the consolidated facility in fiscal year 1999. As of August 2000, the completion rate for fiscal year 2000 was lower than in preceding years. Although the consolidated facility was expected to maintain the same completion rate, several Navy officials believed this expectation was unreasonable because the number of military enlisted personnel decreased from 1,275 in October 1996 to 616 in April 1999. On the other hand, the increased use of borrowed workers from other naval shipyards since the consolidation has had a positive influence on the metric results. For example, the consolidated facility expended 82,785 more borrowed labor hours in fiscal year 1999 (a total of 129,293 borrowed labor hours) than in the baseline fiscal year 1997 (a total of 46,508 borrowed labor hours) before the consolidation. Furthermore, as of the end of May 2000, the consolidated facility had already expended 188,344 borrowed labor hours, far exceeding its fiscal years 1997 and 1999 use of borrowed workers. Although the Navy uses borrowed workers to balance total resources with workload among its shipyards, their addition to the workforce provided managers with more flexibility in assigning personnel to work on the items and had a positive influence on the metric results.
According to Navy officials, because the consolidated facility was overloaded with work during fiscal year 1999, the facility had to ensure that at least the critical work was completed. Consequently, some Current Ship Maintenance Program work was designated as low priority and not accomplished, thereby negatively affecting the results for this metric.

Although the Current Ship Maintenance Program backlog was reduced following the consolidation, the results of this metric are inconclusive for many of the same reasons discussed in the previous section. Historically, Navy officials have measured the material condition of their ships based on the number of backlogged Current Ship Maintenance Program work items: fewer backlogged items imply the ships are in better condition. Following the consolidation, the fiscal year 1997 backlog of 17,733 work items was reduced to 15,791 work items in fiscal year 1999 and to 14,279 work items at the end of July 2000. However, the Current Ship Maintenance Program backlog has been affected by the decrease in the number of military personnel and the increased use of borrowed workers since the consolidation. Furthermore, the following factors outside the direct influence of the consolidation affected the backlog: decommissionings of ships homeported at Pearl Harbor decreased the backlog by the number of work items recorded for those ships; maintenance inspections increased the backlog by the number of unrecorded work items identified by the inspection team; and procedural changes in identifying and recording work items may increase or decrease the backlog, depending on whether the changes weaken or strengthen the process.

Although the data for this metric indicate improvement following the consolidation, the results are inconclusive because the facility's ability to adhere to the work schedules established for Chief of Naval Operations maintenance projects was improved by the increased use of borrowed workers since the consolidation. Following the consolidation, the 11.4 percent late schedule adherence index achieved during fiscal year 1997 decreased to 8.6 percent late in fiscal year 1999; the lower percentage indicates that Chief of Naval Operations projects are being completed closer to their scheduled completion dates. However, the increased use of borrowed workers makes it possible to assign more workers to Chief of Naval Operations maintenance projects, which should result in more work being completed and projects being finished sooner. In essence, this increase in the maintenance workforce had a positive influence on the results for this metric.

As did the former Pearl Harbor shipyard, the consolidated facility has difficulties in completing long-term, complex Chief of Naval Operations ship maintenance projects (Depot Modernization Period) on schedule. For example, Pearl Harbor experienced a 9-month delay in completing a Chief of Naval Operations maintenance project (Depot Modernization Period) for the U.S.S. Chicago. This delay caused slippages in the completion of other long-term Chief of Naval Operations projects because employees originally scheduled to work on succeeding projects were still committed to the U.S.S. Chicago. In addition, since the consolidation, Chief of Naval Operations projects must compete for workers with other Pacific Fleet maintenance priorities in Hawaii. Short-term Fleet maintenance projects and emergent repairs are given a higher priority in staffing decisions than longer, more complex maintenance projects.
Approximately 25 percent of the consolidated facility's workload involves short-term Fleet maintenance projects or emergent repairs to operational surface ships and submarines. Prior to the consolidation, Chief of Naval Operations projects were completed by the former Pearl Harbor shipyard and did not compete for workers with short-term Fleet maintenance projects or emergent repairs, because that work was performed by the former intermediate maintenance facility's workforce. According to Pearl Harbor officials, they are trying to improve their performance on long-term Chief of Naval Operations projects through better resource allocation procedures, process improvements, and resource sharing with other naval shipyards and, consequently, have reduced the time needed to complete recent long-term projects for the U.S.S. Key West and the U.S.S. Pasadena.

Although the data for this metric show a slight degradation in rework quality from 0.76 percent in fiscal year 1997 to 1.08 percent in fiscal year 1999, these results are inconclusive because the quality of work did not deteriorate significantly as a result of the consolidation. According to the Navy, the purpose of this metric was to ensure that depot-level work completed by the consolidated facility did not deteriorate from the former shipyard's historical level of work quality. In its comments on a draft of this report, DOD stated that "successful performance" is indicated when there is no change in the value of the metric between fiscal years. According to the Navy's contractors, no specific conclusions can be drawn from comparing the fiscal year 1997 and fiscal year 1999 rework indexes because the change in overall performance was insignificant. The Navy has not yet generated the rework figures for fiscal year 2000.

Although data for this metric indicate a slight improvement in performance, overall results are inconclusive because the data (1) indicated both positive and negative results depending on the type of maintenance work and (2) depended significantly on the reliability of the time (labor hours) estimated to complete the work. The stated objective of this metric is to measure the consolidated facility's schedule integrity by comparing budgeted work scheduled (labor hours) with the actual amount of work performed (labor hours). The index decreased from 1.23 in fiscal year 1997 to 1.16 in fiscal year 1999, indicating a slight improvement in overall performance following the consolidation in fiscal year 1999. Additionally, the fiscal year 1999 data indicated improvement in two of the three types of maintenance projects analyzed. The index increased to 1.2 during the first 3 months of fiscal year 2000, indicating a slight degradation in overall performance since fiscal year 1999 but still a slight improvement over the fiscal year 1997 baseline. However, senior Navy officials said the metric is more an indicator of efficiencies achieved by changes in maintenance procedures than of efficiencies caused by the consolidation. Nothing in the consolidation's design or structure targeted maintenance procedures for improvement. According to the Navy's contractors, this metric should not be used to measure the schedule integrity of the consolidated facility because it mixes schedule data with cost data, resulting in subjective data that are not accurate indicators of the facility's schedule integrity. The Navy has not generated the index for all of fiscal year 2000.
Although the data for this metric indicate a slight degradation in performance following the consolidation, the results are inconclusive because the change in overall performance was insignificant with respect to the consolidation's accomplishments and there was no clear relationship between the reports and the quality of work performed. According to the Navy, the purpose of this metric is to ensure that depot-level work completed by the consolidated facility did not deteriorate as a result of the consolidation. It is based on an analysis of casualty reports that are filed by the ship after any equipment failure and, in some instances, identify maintenance work improperly performed by the consolidated facility. In its comments on a draft of this report, DOD stated that the metric indicates "successful performance" when there is no change in the value of the metric from the former shipyard's historical level of work quality. Although the number of reports identifying equipment failures related to the maintenance work increased from two in the baseline fiscal year 1997 to four in fiscal year 1999 after the consolidation, these numbers are relatively insignificant considering that the consolidated facility repairs several hundred pieces of equipment annually. According to the Navy's contractors, the number of casualty reports related to depot-level work was insignificant before and after the consolidation, indicating that there were no major problems with the end quality of depot maintenance work completed at Pearl Harbor during either period. However, the contractors questioned whether casualty reports are useful measurements because there was no clear relationship between the reports and the quality of work performed during a maintenance project. For example, because there are no standard procedures or methods for writing casualty reports, some reports did not explicitly identify the cause of the equipment failure. In other instances, the equipment failures identified in the casualty reports can be coincidental and unrelated to any work performed by the consolidated facility. The Navy has not yet reviewed the casualty reports for fiscal year 2000 to identify equipment failures caused by work improperly performed by the consolidated facility.

Results for this metric are inconclusive because the change in overall performance was insignificant and the data indicated both positive and negative results depending on the work and ship type. To determine the earned value metric, labor hours to complete a unit of work in fiscal year 1997 are compared with hours to complete the same unit of work in fiscal year 1999 and later to measure the consolidation's effect on maintenance outputs. However, after the analysis was completed for fiscal year 1999, Navy and contractor officials concluded that the change in overall performance based on this metric was insignificant. In addition, the supporting data for the metric, while indicating an overall decline in performance since the consolidation, showed both positive and negative results depending on the specific work and type of ship sampled. The 1999 data for submarines showed more labor hours were expended for four cost drivers and fewer for one cost driver compared to data for fiscal year 1997. The 1999 data for surface ships showed more labor hours were expended for one cost driver and fewer for two cost drivers compared to fiscal year 1997. The Navy has not explored the earned value metric for fiscal year 2000.
The Navy has not moved the projected number of overhead workers to direct maintenance positions to increase productivity and has not consolidated all the industrial plant equipment as envisioned in the Pilot Study Report. The consolidation did not result in the projected number of overhead workers moving to direct maintenance work, an important element in increasing the consolidated facility's productivity. To increase productivity, one of the stated goals of the Pearl Harbor consolidation was to increase the number of direct maintenance workers relative to the number of supervisors and overhead personnel without increasing costs. Logically, increasing the number of maintenance workers should result in increased maintenance output. To accomplish this goal, the Pilot Study Report proposed that 95 civilian overhead workers be moved to positions in direct maintenance work; however, only 4 workers moved. According to Pacific Fleet and Pearl Harbor officials, moving civilians to such positions has been impractical because of (1) the time required for overhead workers to become skilled, usually several years; (2) personnel regulations implementing the process for downgrades; and (3) potentially negative reactions by workers and employee representatives. In addition, several department directors and overhead supervisors said they were unwilling to release any personnel because of the increased workload due to changes in administrative and financial systems since the consolidation. Furthermore, other Navy officials said that efforts to move supervisors to direct maintenance resulted in too few supervisors, which led to problems in planning and coordinating work on maintenance projects. Pacific Fleet and Pearl Harbor officials believe that moving overhead workers to direct maintenance work is too difficult and do not expect it to happen.

Industrial plant equipment has not been completely consolidated at integrated maintenance shops to improve maintenance operations, as suggested in the Pilot Study Report. Of the 271 items of industrial equipment at the former intermediate maintenance facility, 114 items were relocated, 132 items were mothballed, and 25 items were kept operational at their original location. According to Pearl Harbor officials, the removal and installation of the 114 relocated items cost little or nothing. Although many of the 132 mothballed items are in better condition and newer than the same type of equipment in the consolidated shops, the funding required to relocate the intermediate maintenance equipment has not been available. Most mothballed items are semipermanently attached heavy equipment that requires funding to remove, transport, and install elsewhere. Officials do not plan to request funding to move the equipment until after the pilot period because of budgetary constraints.

Although OSD concurred with the intent of the recommendations in our 1999 report to resolve financial management issues and assure cost visibility, the Navy's consolidation of ship maintenance activities at Pearl Harbor has not yet shown that it can adequately identify and account for the total cost of operations or distinguish between depot and intermediate work performed by consolidated ship maintenance activities on an ongoing basis. Such data are needed to comply with the Statement of Federal Financial Accounting Standards No. 4
requirements for determining full cost of operations, which are intended to provide managers with relevant and reliable data for making resource allocations, program modifications, and performance evaluations. Additionally, improved cost and performance data are needed to show compliance with 10 U.S.C. 2466, the Chief Financial Officers Act, and the Government Performance and Results Act. Although managers and workers performing ship maintenance and repairs at Pearl Harbor may not be directly affected, the lack of reliable cost and performance data impairs the ability of senior OSD and Navy officials to make timely, well-informed decisions to facilitate the effective and efficient management of the Navy's overall ship maintenance activities, the Pearl Harbor consolidation, and other potential consolidations of ship maintenance activities. More specifically, to provide senior OSD and Navy officials with reliable cost and performance data to facilitate their decision-making process, the Navy needs to implement a method that includes appropriate costing methodologies or techniques that provide sufficient data to (1) adequately identify and account for the total cost of operations, (2) distinguish between depot and intermediate work performed by consolidated ship maintenance activities, and (3) show compliance with 10 U.S.C. 2466, the Chief Financial Officers Act, the Government Performance and Results Act, DOD regulations, and federal accounting standards.

Other consolidations of naval shipyards and intermediate maintenance activities are likely, even though little progress has been made since our prior report toward resolving OSD's and the Navy's differences over key issues related to the financial structure for the consolidation of ship maintenance activities at Pearl Harbor and for potential consolidations at other naval locations. Consequently, several outstanding financial issues still need to be resolved to facilitate effective and efficient ship maintenance operations at Pearl Harbor and other potential consolidations. First, OSD and Navy officials differ over the appropriate amount the Navy should compensate the working capital fund when a naval shipyard leaves the fund. Because DOD regulations provide only general guidance governing the transfer of working capital fund organizations, more specific guidance is needed on the processes, procedures, and costing methodology to help resolve the Pearl Harbor dispute and prevent similar occurrences in other potential transfers of naval shipyards to direct appropriations. Second, officials differ over whether using direct appropriations will affect the consolidated facility's flexibility to continue routine ship maintenance operations, as naval shipyards operating under the working capital fund can, if funding gaps occur at the beginning of fiscal years or expected maintenance costs exceed annual appropriations. This issue needs to be addressed to determine whether steps are necessary to mitigate the risk that ship maintenance activities funded with direct appropriations will be unable to continue routine operations during funding gaps at the beginning of fiscal years or funding shortfalls at the end of fiscal years.
Third, officials still differ about whether funding levels under direct appropriations will be sufficient to maintain an adequate capital improvement program for the consolidated facility, because of uncertainties in competing with other Navy programs and priorities for funding during the budgeting and appropriations processes. This needs to be resolved to help assure that capital improvement programs for consolidated ship maintenance activities funded with direct appropriations receive proper funding. Although the Pearl Harbor consolidation has made more effective use of workers and facilities in Hawaii, the data available for the test metrics up to now provide an inconclusive assessment of the consolidated facility’s overall accomplishments in achieving greater efficiencies and lowering costs. Consequently, overall results of the metrics should not be used as the basis for making future consolidations of naval shipyards and intermediate maintenance activities. However, lessons learned from the metrics used in evaluating the Pearl Harbor consolidation should be useful in framing evaluation plans that would provide more conclusive data for future consolidations at other naval locations. We recommend that before the Navy implements permanent changes at the Pearl Harbor facility and any other consolidations of naval shipyards and intermediate maintenance activities, the Secretary of Defense direct the Secretary of the Navy to implement a method to (1) account for the total cost of consolidated ship maintenance operations on an ongoing basis and (2) distinguish between depot and intermediate work of consolidated ship maintenance activities. The method should include appropriate costing methodologies or techniques that provide sufficient data to show compliance with 10 U.S.C. 2466, the Chief Financial Officers Act, the Government Performance and Results Act, DOD regulations, and federal accounting standards. To help prevent disputes in the transfer of working capital fund activities to direct appropriations, we recommend that the Secretary of Defense direct the Under Secretary of Defense (Comptroller/Chief Financial Officer) to clarify DOD financial management regulations, to include specifying the processes, procedures, and costing methodology, governing the transfers of working capital fund activities to direct appropriations. We further recommend that before permanent changes are made at Pearl Harbor and any further consolidations are implemented at other naval locations, the Secretary of Defense direct the Under Secretary of Defense (Comptroller/Chief Financial Officer) and the Secretary of the Navy to resolve issues related to (1) buyout costs for the former Pearl Harbor shipyard if the Navy decides to formally transfer it to direct appropriations, (2) loss of flexibility to continue routine ship maintenance operations through potential funding gaps at the beginning of fiscal years or when expected maintenance costs exceed annual appropriations, and (3) funding for the capital improvement program for the consolidated facility. We recommend that before the Navy consolidates additional shipyards and intermediate maintenance activities, the Secretary of Defense direct the Secretary of the Navy to develop additional metrics to measure the efficiency and effectiveness of consolidated ship maintenance activities, drawing on lessons learned from the consolidation at Pearl Harbor. In its written comments on a draft of this report, DOD agreed with the report’s recommendations. 
However, it did not indicate specific actions or milestones for resolving the financial issues first raised in our 1999 report on the Pearl Harbor consolidation. As a result, we have added matters for congressional consideration to help assure timely implementation of our recommendations for executive action. The Congress may wish to require the Secretary of the Navy to report the Navy’s strategy and time frame for the implementation of a method to (1) account for the total cost of consolidated ship maintenance operations on an ongoing basis and (2) distinguish between depot and intermediate work of consolidated ship maintenance activities. In addition, the Congress may wish to require the Under Secretary of Defense (Comptroller/Chief Financial Officer) and the Secretary of the Navy to report their strategy and time frame for the resolution of issues related to (1) buyout costs for the transfer, (2) loss of flexibility to continue routine ship maintenance operations through potential funding gaps at the beginning of fiscal years or when expected maintenance costs exceed annual appropriations, and (3) funding for the facility’s capital improvement program.
In 1998, the Navy consolidated the Pearl Harbor Naval Shipyard and the Naval Intermediate Maintenance Facility in Hawaii. Because of concerns about some aspects of the consolidation, the Navy began a test project, commonly called the Pearl Harbor pilot, to determine whether integrating the management, operations, and funding of the shipyard and the intermediate maintenance facility could result in greater efficiency and lower overall ship maintenance costs. In September 1999, GAO reported that the preliminary results of the ongoing Pearl Harbor pilot were mixed and recommended that the Department of Defense (DOD) and the Navy address unresolved issues related to the financial management of the consolidation as the Navy proceeds with similar consolidations in other locations. This report updates GAO's earlier report and discusses whether (1) the Navy has provided adequate cost visibility and accountability over the consolidation, (2) DOD and the Navy have resolved other issues related to the financial structure for consolidations at Pearl Harbor and elsewhere, and (3) the consolidation has generated greater efficiency and lower costs for ship maintenance at Pearl Harbor. GAO found that the Navy still has not provided adequate cost visibility and accountability over ship maintenance following the consolidation. DOD and the Navy have made little progress in resolving other issues related to the financial structure for the consolidation. GAO is unable to determine whether the consolidation has produced greater efficiency and lower costs for ship maintenance.
Among other things, WIOA requires that DOL and Education collaborate to implement a common performance accountability system for the six core programs identified in the law (see table 1). For the core programs, WIOA establishes six performance indicators on which states must report, with some exceptions (see table 2). As shown in figure 1, programs are required to begin using the WIOA common performance indicators as of July 2016, two years after the law was enacted and just under six months after final regulations are due. DOL and Education officials are responsible for providing technical assistance and imposing sanctions on states that do not meet performance expectations. DOL and Education issued joint proposed regulations in April 2015 that, among other things, covered WIOA performance reporting, and in July 2015, issued a joint proposed performance reporting template. DOL issued additional proposed reporting details in September 2015. Final regulations, reflecting any changes based on public comments and agency review, will, among other things, form the basis for states' implementation of WIOA.

WIOA emphasizes the importance of a comprehensive system that provides integrated, seamless services to all job seekers and workers and effective strategies that meet employers' workforce needs. DOL and Education note that the development of integrated data systems will allow for unified and streamlined participant intake (i.e., application and registration), case management, and service delivery; minimize the duplication of data; ensure consistently defined and applied data elements; facilitate compliance with reporting requirements; and provide meaningful information about core program participation to inform operations. WIOA also requires states to use quarterly wage records, consistent with state law, to measure program performance (e.g., Unemployment Insurance (UI) wage records). Prior to WIOA, coordination among DOL-administered programs to share wage records for performance reporting was common, but access to and use of these data were less common for Education-administered programs. To help states obtain the data needed for performance reporting, particularly when participants are employed in another state or with the federal government, DOL currently funds three data exchange systems that allow agencies reporting on performance to access other states' UI wage records or federal employment data: the Wage Record Interchange System (WRIS), the Wage Record Interchange System 2 (WRIS2), and the Federal Employment Data Exchange System (FEDES).

States compile and submit program performance data to federal systems either in aggregate or by individual participant, depending on the program. Personally identifiable information (PII) for program participants, such as names, addresses, and Social Security numbers (SSNs), is often shared between entities to link individual records across data systems and to collect program outcome information from state education and employment and training agencies. Because security breaches involving PII can be hazardous to individuals and organizations, protecting the privacy of sensitive information on program participants is a concern for DOL, Education, and their state and local partners.
The Fair Information Practice Principles outline a set of eight key principles for protecting PII, including limiting the collection of personal information, disclosing how collected information will be used and obtaining consent from the individual, and protecting collected information with reasonable security safeguards against risks such as loss or unauthorized access. The eight principles are transparency, individual participation, purpose specification, data minimization, use limitation, data quality and integrity, security, and accountability and auditing.

While the performance reporting process is broadly similar across states, data system administration, the methods used to collect participant outcome data, and other specific practices vary by state and program. As we have reported previously, to fulfill program performance reporting requirements, state agencies generally collect participant identification and outcome data from local agencies and other sources, and then send performance reports to federal agencies (see fig. 2).

To store participant data used for federal performance reporting, as well as for other program management purposes, each WIOA core program in the states we visited has its own case management data system (see table 3). Some of these systems were developed by the respective state or program and others were purchased from vendors. These data systems are used for case and program management and to compile program data used for federal performance reporting. The systems store participant information, such as personal identification; track program services, such as training received; and, in some cases, compile data on outcomes, such as employment or degree attainment. With the exception of the Texas Title I and Wagner-Peyser programs, the data systems are independent of each other, though in some instances, officials said data from the systems can be shared under established sharing agreements between agencies. For example, the VR program in Illinois has data sharing agreements in place with the Illinois agency that holds UI wage data and with the Social Security Administration to provide SSNs for data matching.

The performance reporting process varies based on program-specific circumstances, such as the source of reported participant outcome information and how automated the data collection and reporting processes are. Most programs in the three states we visited currently collect the employment outcome information reported to DOL and Education by matching participant data to state UI wage records. However, the VR program in all three of the states we visited currently collects employment information through individual outreach to program participants because program staff communicate regularly with participants and can readily obtain this information. In addition, the Adult Education program in New Hampshire collects employment outcome information by surveying participants. According to OCTAE, in the 2013-14 performance year, the Adult Education programs in four additional states—Arizona, California, Hawaii, and New York—and the District of Columbia also collected participant employment information solely by surveying participants. As shown in case study 1 (see textbox), within the same state, programs may collect information differently. In New Hampshire, for example, Title I programs rely on data matching and the VR program relies on participant self-reporting and some data matching.
Even when programs rely on similar sources of information, they sometimes use different mechanisms for obtaining and reporting participant information. For example, in New Hampshire Title I programs, a single official personally compiles participant outcome data and submits performance reports to DOL after the data are reviewed and approved by Title I management staff. According to the official, he obtains the outcome data from UI wage records through a standardized exchange with another state agency, New Hampshire Employment Security (see textbox). A data sharing agreement governs this process. Title I programs in Illinois similarly obtain participants' employment information from UI wage records by exchanging data with the Illinois Department of Employment Security in accordance with a formal sharing agreement. In Texas, the reporting process is more automated for Title I programs, as participant data are automatically merged with UI wage data on a regular basis and then extracted into a federal data reporting system.

Case Study 1: Performance Reporting Processes Vary by Program in New Hampshire

Title I Programs (DOL-administered): Local office staff enter information about participants, including services received, into the program's case management data system. To collect employment and earnings information for those individuals who participated in the reporting period, an official with the Title I programs sends participant Social Security numbers (SSNs) to the New Hampshire Employment Security agency to match with the state's Unemployment Insurance (UI) wage records. According to program officials, a data sharing agreement and standardized exchange process facilitate the sharing. The official also sends participant SSNs to the federal Wage Record Interchange System to obtain information about those who may be working outside New Hampshire. The state ultimately submits the compiled outcome data to DOL, excluding participants' personally identifiable information.

Vocational Rehabilitation Program (Education-administered): Local office staff enter information about participants, including eligibility determinations and services received, into the program's case management data system. Program staff contact participants about every six weeks to determine whether they need any additional services or have obtained stable employment and thus can have their cases closed. Through this individual outreach, staff also collect outcome data that the state reports to Education at an individual participant level, including personally identifiable information. The program also has a data sharing agreement with the New Hampshire Employment Security agency to match participants to employment and earnings data; some of the matched data are also used for federal performance reporting.

While officials in all three states we visited recognized they will need to collaborate more closely with core partners under WIOA, the extent to which they have already integrated their data systems, or plan to do so as a way to further this collaboration, varied. According to DOL and Education, WIOA encourages the development of integrated data systems across core programs to support integrated service delivery, case management, tracking participation across programs, and reporting performance, among other things.
Based on our analysis of literature on information technology (IT) structures and discussions with state officials, data system integration can take a variety of forms, ranging from approaches that focus on sharing data between existing systems to approaches that consolidate existing systems (see fig. 3). Among the three states we visited, Texas is in the process of integrating core program data systems, and at the time of our visits, Illinois and New Hampshire were considering what approach to take related to integration. An overall vision for data system integration is laid out in the preamble to the proposed regulations and DOL officials said they have been engaging with states about issues related to integration. However, DOL officials said that they do not specify what model of data system integration states should adopt because states’ unique circumstances, such as their data system structures and the extent of existing integration, may make certain approaches more viable than others. For example, among the three states we visited, Illinois’ core programs are housed in four separate agencies, whereas in Texas, Title I, Wagner-Peyser, and Adult Education programs are all housed within a single agency and the Vocational Rehabilitation program will be moving into that agency as of September 2016. These different structures and other factors, such as costs associated with integration, affect the states’ approaches to data system integration. DOL and OCTAE regional officials similarly observed that states they work with are taking a variety of approaches to data system integration, with some, such as Washington, Idaho, and Tennessee moving forward in different ways. For example, a DOL regional official said that Washington is exploring completely integrating its data systems and Idaho is examining linking or interfacing data from programs’ individual systems for performance reporting. An OCTAE area coordinator said that Tennessee has hired a contractor to develop its data system integration. Officials from two DOL regions also said that some states are waiting to move forward with data system integration until they have a more complete understanding of federal expectations from final regulations, and are instead focusing on other aspects of WIOA implementation in the meantime. Texas is in the process of consolidating core programs into a single agency, the Texas Workforce Commission (TWC), and integrating program data systems at the performance reporting phase. The Adult Education program moved from the Texas Education Agency into TWC in September 2013, joining Title I and Wagner-Peyser programs. The Vocational Rehabilitation program will be joining the agency in September 2016. While this consolidation has occurred independent of WIOA, Texas officials said that it has put the state ahead of the curve in terms of data system integration. Texas is integrating its core program data systems similar to the “back-end integration” model shown in figure 3. As shown in figure 4, according to Texas officials, the case management data systems used for the Title I and Wagner-Peyser programs send participant data to a repository where UI wage data from Texas and other states—via federal exchanges, such as WRIS—are also sent. The data in this repository are merged by participants’ SSNs and uploaded to a federal performance reporting system. 
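To make the repository merge just described more concrete, the sketch below shows, in simplified form, how case management records might be joined to UI wage records on SSN before being formatted for a federal report. It is an illustration only: the field names, file layout, and Python implementation are our assumptions, not the design of Texas's repository or of any federal reporting system.

    import csv
    from collections import defaultdict

    def load_wage_records(path):
        # Index quarterly UI wage records by SSN; illustrative columns:
        # ssn, year, quarter, earnings.
        wages = defaultdict(list)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                wages[row["ssn"]].append(row)
        return wages

    def merge_participants_with_wages(participants, wages):
        # Attach any matching wage records to each participant record,
        # mirroring (in simplified form) a repository that merges case
        # management data with UI wage data on SSN before reporting.
        return [{**p, "wage_records": wages.get(p["ssn"], [])}
                for p in participants]

In practice, the same join would also fold in out-of-state wage records obtained through exchanges such as WRIS before any reports are generated.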
The automated delivery and compilation of data from multiple programs and sources in the repository is the key feature that represents "back-end integration" in Texas. The Adult Education program has its own case management data system and conducts performance reporting independently. However, Texas officials said that the automated delivery both of Adult Education data into the common repository and of performance reports from the repository to Education is under development. Texas officials said they also plan to integrate VR's data system and reporting into this common repository after the program transitions into TWC.

Core programs in Illinois have not begun integrating data systems, and state officials are currently considering what approach to take related to integration. Currently, core programs in Illinois each have their own independent data systems, and state officials from all of the programs told us they were not yet clear about what integration would mean for their data systems. At the time of our visit, officials from some of the programs said that the Illinois governor's office was involved in providing direction to the core programs about what data system integration will look like in Illinois. In the meantime, some officials said they were considering ways to increase data sharing between core programs. Depending on the ultimate direction Illinois takes, integration may mean substantial changes for core programs or more minor adjustments to how programs share data.

Officials from core programs in New Hampshire said they do not plan to make major changes to integrate data systems because their existing, independent systems are functioning well and such changes would be too costly. Some officials discussed interfacing (automated data sharing between program data systems) as a possibility the state could explore, though they said that approach would also have associated costs.

State and DOL regional officials we spoke with said states do not yet know the full extent of changes to performance data collection and reporting that will be needed under WIOA because regulations and reporting templates have not been finalized.
The DOL-administered programs have experience with collecting and reporting data for employment and earnings indicators, and Title I and Wagner-Peyser officials in all three states we visited said they expected few or relatively minor changes to their performance reporting. The employment and earnings indicators under WIOA are similar to those reported under WIA for the DOL-administered programs, though median earnings (instead of mean earnings) are now calculated in the 2nd quarter after program exit, and employment is measured in the 2nd and 4th quarters after program exit (instead of measuring employment in the 1st quarter and then employment retention in the 2nd and 3rd quarters after exit). However, the measurable skill gains indicator is new to Title I Adult and Dislocated Worker programs, and these programs will have to develop ways of obtaining these data.

Because some WIOA reporting requirements are new to the Education-administered programs, Adult Education and VR programs may face more substantial changes to collecting and reporting performance data than the DOL-administered programs. For example, the median earnings performance indicator is new for the Adult Education program. In addition, the VR program has generally reported employment and earnings outcomes at a single point in time for program participants—at the point a participant exits the program (i.e., once a participant has stable employment for 90 days)—and not thereafter (as the new indicators will require). To report on the new measures, these programs may have to develop new ways to collect information about employment and earnings and may need increased access to UI data.

According to DOL, WIOA is intended to expand the use of state UI wage records, consistent with state law, for reporting employment and earnings outcomes. Of the programs in the states we visited, only the Adult Education program in New Hampshire does not currently have access to earnings information from state UI data, due to a state law prohibiting it from collecting SSNs, according to program officials. As a result, the Adult Education program in New Hampshire plans to obtain earnings information for federal reporting by adding a question about earnings to its participant survey. The Adult Education programs in Illinois and Texas currently use UI data matching to report on employment outcomes, and thus already have access to a source of earnings data.

State VR officials in the states we visited said they will have to track participants longer under WIOA due to the requirement to report on outcomes after program exit, and may need new ways of collecting information after a participant leaves the program. Contractors who administer VR data systems in New Hampshire and other states said that under WIOA, VR agencies will have to change how they think about data storage and long-term maintenance because the new common performance indicators will require a reporting method that relies on case management over a significant period of time. As a result, states will have to shift from reporting on the status of a participant when he or she leaves the program to reporting the participant's progress in the second and fourth quarters after exiting the program.
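The shift to measuring outcomes in fixed quarters after exit can be illustrated with a small calculation. The sketch below, which uses assumed record layouts and function names rather than the agencies' actual specifications, shows one simplified way to derive employment in the 2nd and 4th quarters after exit and median 2nd-quarter earnings from quarterly wage records; the proposed regulations define the indicators in considerably more detail, including exclusions and the treatment of participants without wage records.

    from statistics import median

    def quarters_after(exit_quarter, n):
        # Return the calendar quarter n quarters after the exit quarter.
        # Quarters are (year, quarter) tuples, e.g., (2016, 3).
        year, q = exit_quarter
        total = (q - 1) + n
        return (year + total // 4, total % 4 + 1)

    def post_exit_indicators(exiters, wages):
        # exiters: participant id -> (year, quarter) of program exit.
        # wages: (participant id, (year, quarter)) -> quarterly earnings
        # drawn from UI wage records. A simplified illustration of the
        # 2nd/4th-quarter employment and median earnings indicators.
        employed_q2 = employed_q4 = 0
        q2_earnings = []
        for pid, exit_q in exiters.items():
            earnings_q2 = wages.get((pid, quarters_after(exit_q, 2)), 0)
            if earnings_q2 > 0:
                employed_q2 += 1
                q2_earnings.append(earnings_q2)
            if wages.get((pid, quarters_after(exit_q, 4)), 0) > 0:
                employed_q4 += 1
        total = len(exiters)
        return {
            "employment_rate_q2": employed_q2 / total if total else None,
            "employment_rate_q4": employed_q4 / total if total else None,
            "median_earnings_q2": median(q2_earnings) if q2_earnings else None,
        }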
VR officials in Illinois and Texas, for example, said that while they currently have access to UI wage data, they generally do not use them for federal performance reporting because program staff maintain contact with participants to collect employment status updates and other information until participants exit the program. However, they said they plan to begin using UI wage data matching under WIOA to report on employment and median earnings. VR officials in New Hampshire said the program already uses UI wage data matching to collect some of its reported employment outcome data, though it does not currently have access to UI wage records for participants who find employment outside New Hampshire.

The "effectiveness in serving employers" performance indicator will be new to all of the core programs, and how it will be measured has yet to be finalized. The statutory deadline for developing the indicator is July 2016. DOL and Education proposed several potential approaches to measuring this indicator in the proposed regulations. According to DOL, the agencies are analyzing comments received on these proposals. Some state and DOL regional officials said WIOA's requirement to report on effectiveness in serving employers, and the emphasis on serving the business needs of employers, has generally increased awareness of employers as customers, in addition to program participants. For example, some state officials we spoke with said they expect they will need to develop ways to coordinate outreach to and interaction with employers across programs. New Hampshire Wagner-Peyser officials said they work with an interagency business team that has representatives from each of the core program agencies in order to eliminate duplication of effort and prevent employers from being burdened with repetitive visits. Illinois Title I officials similarly said programs will have to coordinate outreach to employers and added that under WIOA, programs will have to increase employer engagement to ensure that education and training are focused on employer demand (i.e., where jobs are available).

After DOL and Education issue final regulations and the full extent of changes is known, programs will still have to incorporate the performance reporting changes into their data systems. As shown in case study 2 (see textbox), programs that have vendor-purchased systems may collaborate with or rely on their contractors to incorporate changes, while programs with systems developed in-house will be responsible for incorporating the changes themselves. Regardless of whether program officials or a vendor is responsible, programs' ability to incorporate changes before WIOA implementation deadlines may vary. For example, New Hampshire VR officials said their data system vendor expects to continue implementing changes through summer 2016. Texas VR officials said that the last time they made major changes to performance reporting, the changes took about two years to complete from when they were first discussed.

Case Study 2: Incorporating Data System Changes Using Vendors Versus In-House Staff

New Hampshire Wagner-Peyser: Wagner-Peyser officials in New Hampshire expect their data system vendor to incorporate any needed changes resulting from WIOA in their case management system. According to officials, their vendor works with 30 state programs and will be implementing changes in an updated version of its case management data system product to reflect WIOA changes.
Texas Vocational Rehabilitation: According to officials, the Vocational Rehabilitation (VR) program in Texas is individually responsible for implementing data system changes. Information technology officials for the VR program said that to incorporate changes under WIOA they will receive a request from VR program officials, then develop, test, and incorporate changes into the program’s data system. The Texas officials said they had not yet received a change request from the VR program because officials are waiting for final WIOA requirements. Federal officials are also considering ways to facilitate states’ WIOA implementation efforts. The preamble to the proposed regulations issued by DOL and Education in April 2015 states that the agencies intend to engage in a renegotiation of data sharing agreements with states (currently WRIS and WRIS2) to allow interstate wage matches for WIOA programs. DOL indicated it is considering the structure of the agreements moving forward, and will be working with Education and engaging the states in that development. In addition, the President’s fiscal year 2017 congressional budget justification for the Department of Health and Human Services (HHS) requests that select federal statistical and evaluation units be granted access to data from the National Directory of New Hires (NDNH) for, among other things, performance measurement purposes. The President’s Budget also proposes to provide state agencies with responsibilities for WIOA implementation the authority to match data with the NDNH for program administration purposes, including oversight and evaluation of these programs. Among other things, NDNH contains quarterly wage information on individual employees from state UI records and federal agencies. When we spoke with DOL and HHS officials about expanding access to NDNH, they told us that NDNH data could potentially be used for state WIOA performance reporting, though that would require that states be given legal access to NDNH data. However, when we spoke with HHS officials, they cautioned that the request is still in the early stages of consideration. Many additional details would have to be settled either in statute or in agreements between HHS and DOL, including how HHS would be reimbursed for use of the data and how the security of the data would be protected. For instance, according to the fiscal year 2017 proposal, HHS “would conduct robust privacy and security reviews before granting any state agency access to data.” When we spoke with them, HHS officials said they had not engaged in detailed discussions with DOL officials about these and other related issues. Program officials across all three states we visited reported challenges in the early stages of implementing the requirements of WIOA, in part because they are awaiting final regulations and seeking more guidance from DOL and Education. Many details about the reporting requirements will not be known until final regulations are issued. Under WIOA, they were due in January 2016, though DOL officials informed us they anticipate issuing them in June 2016. Generally, early implementation efforts in the states we visited are based on the proposed regulations, and some state officials are concerned that without knowing what the final regulations will entail, they may not have enough time to implement all changes before the July 2016 deadline to begin using the new common indicators of performance. 
To avoid investing resources in implementing aspects of performance reporting that might later change with the publication of final regulations, some states are focusing on other WIOA efforts, such as working with new program partners and discussing ways to share participant data efficiently, according to officials we spoke with in two states and one DOL regional office. DOL and Education have provided some interim guidance in order to assist states as they await final regulations. As we noted previously, DOL and Education issued joint proposed regulations in April 2015, as well as proposed performance reporting templates in the following months. In August 2015, DOL and Education also issued joint guidance on the vision for WIOA’s coordinated service delivery system and technical assistance resources available to states. DOL’s ETA has issued other Training and Employment Guidance Letters (TEGLs) that cover various issues, such as planning information related to Title I Youth program funds and WIOA implementation activities. OCTAE has also issued other guidance related to the Adult Education program, such as a program memorandum about the vision for the program within the workforce system, and guidance on the competition and award of program funds. In addition, DOL and Education officials told us they have worked together to provide webinars and have participated in joint conference calls with states. Some officials in the states we visited said that they have found this various guidance informative, particularly DOL’s TEGLs. However, receiving such guidance earlier would have been helpful, according to some program officials in two of the three states we visited. DOL has also responded to states’ questions informally, according to state officials, while Education officials have said that until regulations are finalized they can provide only limited information outside the formal regulatory comment process. Officials for DOL-administered programs in all three states we visited said they contact DOL regional officials for assistance, and several officials in New Hampshire reported that their DOL regional office has specifically helped them figure out answers to questions about WIOA. For example, New Hampshire’s Wagner-Peyser officials stated that although their DOL regional officials understandably lack information on final regulations, they have been helpful in providing suggestions on how to plan for WIOA and encouraging officials not to wait until final regulations to move forward with implementation. Similarly, a DOL regional official said he meets with states every two weeks to discuss issues on the ground, such as training and WIOA definitions related to performance reporting. In contrast, Education officials from RSA and OCTAE have told states that they cannot answer questions that relate to ongoing rulemaking. For example, according to Education officials, they have responded to questions about performance indicators by referring states to the relevant provisions in WIOA and the proposed regulations. While WIOA encourages the integration of data systems, state capacity to pursue integration varies. In the states we visited, Texas is currently integrating its data systems, but officials in New Hampshire and Illinois said that the cost of data system integration will be a major challenge. For example, officials in every core program in New Hampshire reported that developing a new, integrated data system is cost-prohibitive. 
Similarly, a DOL regional official said that the majority of states in his region face challenges with data integration that include the expense of developing an integrated system and determining which systems will need to be redesigned. While states have some flexibility in how they can use their WIOA funding, DOL and Education officials told us that there are federal and state restrictions on the amount available for data integration and IT system upgrades. To help fund data system integration, DOL and Education have requested funding to provide assistance to states. In the President’s fiscal year 2016 congressional budget justification for ETA, the agency asked for $37 million under the Workforce Data Quality Initiative, of which $30 million would be to “help states build integrated or bridged data systems to facilitate WIOA implementation… support building state-based wage data matching infrastructure to enable and/or streamline WIOA performance reporting.” Similarly, as part of its fiscal year 2016 budget request for the Adult Education program, Education asked for $1 million to provide technical assistance to states in the collection of new data elements and integration of data systems. Officials at both agencies told us that the funding requests were not intended to cover all data integration costs, but instead to provide some assistance to states. The Consolidated Appropriations Act, 2016, provided $6 million for the Workforce Data Quality Initiative and did not provide the full amount of funding Education requested to provide technical assistance to states. In the President’s Budget for fiscal year 2017, both agencies again requested funding to support states’ data integration efforts. Under the Workforce Data Quality Initiative, ETA asked for $40 million, of which $33 million would be to help states build integrated data systems to facilitate WIOA implementation. Under the Adult Education program, Education asked for $6 million to support the collection of new data elements and the integration of data systems. Aside from cost, state and federal officials also identified other capacity concerns in this area, including limited staff expertise and antiquated IT systems. According to DOL officials, state efforts to integrate data systems may be challenged by, among other things, constraints in their ability to retain qualified IT staff due to state salaries that are low compared to those in the private sector. One DOL regional official said that due to budget cuts and layoffs in one of her states, the remaining staff lacks knowledgeable data programmers. Some program officials in New Hampshire expressed similar concerns about limited expertise to develop data system integration. For example, a New Hampshire Vocational Rehabilitation official reported that the state’s IT department has reduced its labor force and has limited staff capacity to support data system integration. According to DOL officials, many states have significantly antiquated and inflexible IT systems, making it challenging to support data system integration. Officials in two DOL regional offices similarly reported that rather than upgrading data systems, states have added patches over the years to respond to changes to reporting requirements, resulting in antiquated systems that make integration difficult. 
As part of data system integration, WIOA encourages states to share and match data across programs, but officials reported that efforts to do so may be challenged by security concerns and undeveloped relationships. Officials in two of the three states we visited and a DOL regional official said that concerns over data security may impede states' efforts to obtain participant information from core program partners. For example, a New Hampshire official said she would have concerns about sharing data with other agencies because she does not manage or oversee the data security standards they follow. These security concerns may be the result, in part, of undeveloped relationships between partners. According to a study by the Center for Regional Economic Competitiveness (CREC), one of the challenges to data sharing is a lack of trust between partners, particularly related to the intent and ability of the entity requesting data to use the data appropriately and protect it from any breach of confidentiality. The study identifies establishing mechanisms that build trust between entities seeking data from each other as one successful strategy for responding to this challenge and improving conditions for data sharing. DOL and Education intend that WIOA's common indicators of performance will, among other things, enable consistent outcome comparisons across states. WIOA requires the use of quarterly wage records, consistent with state laws, and the preamble to the proposed regulations emphasizes data-matching by participant SSN as a way of encouraging timely and accurate data for employment and earnings measures across the different programs and states. However, under federal law, state and local government agencies generally cannot require individuals to provide their SSNs as a condition of receiving program services. In addition, some programs do not collect SSNs from some, or all, participants, making it difficult to match participant data. For example, according to OCTAE officials, Adult Education programs in some states have reported to them that they have difficulty collecting SSNs due to state privacy laws. OCTAE officials said that in the absence of SSNs, states can use other methods to collect data for performance reporting purposes, such as participant surveys. As shown in case study 3 (see textbox), other strategies for mitigating data gaps include the development of alternative methods of data matching instead of using SSNs.

Case Study 3: Matching Participant Data Without SSNs
In November 2014, we reported on challenges certain states experienced matching education and workforce program data as part of their efforts to develop statewide longitudinal data systems, which, among other things, allow states to follow individuals through their education and into the workforce. Officials in three of the five states visited as part of that review said state law or agency policy prohibits collecting an SSN in certain education programs. As a result, to match education and workforce data without an SSN, state officials reported they are developing algorithms to match individual records using other identifiers, such as an individual's first name, last name, and date of birth.

Wagner-Peyser officials in New Hampshire told us that they are able to match participants' information based on other personally identifiable information (PII), such as name and date of birth, with UI data if a participant does not provide an SSN.
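To illustrate the kind of matching described above, the following sketch shows a simple deterministic comparison on name and date of birth when an SSN is unavailable. This is a minimal example of our own, assuming hypothetical field names and normalization rules; it is not any state's actual matching algorithm, which would typically add safeguards against common names, misspellings, and name changes.

```python
# Illustrative sketch only: matching participant records on name and date of
# birth when an SSN is not available. Field names and normalization rules are
# assumptions for this example.
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class ParticipantRecord:
    first_name: str
    last_name: str
    birth_date: date
    ssn: Optional[str] = None  # may be absent if the participant opted out


def _normalize(name: str) -> str:
    # Ignore case, spacing, and punctuation differences when comparing names.
    return "".join(ch for ch in name.lower() if ch.isalpha())


def records_match(a: ParticipantRecord, b: ParticipantRecord) -> bool:
    """Prefer SSNs when both records have one; otherwise fall back to name and date of birth."""
    if a.ssn and b.ssn:
        return a.ssn == b.ssn
    return (
        _normalize(a.first_name) == _normalize(b.first_name)
        and _normalize(a.last_name) == _normalize(b.last_name)
        and a.birth_date == b.birth_date
    )


# Example: a program record without an SSN matched against a UI wage record.
program_record = ParticipantRecord("Maria", "Lopez-Garcia", date(1990, 5, 17))
wage_record = ParticipantRecord("MARIA", "Lopez Garcia", date(1990, 5, 17), ssn="123-45-6789")
print(records_match(program_record, wage_record))  # True
```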
Similarly, according to officials in Texas, the Adult Education program is able to match three of four elements (first name, last name, date of birth, or SSN) to obtain select education outcomes from the Texas Education Agency. However, Texas officials reported to us that relying on names for data matching may not be ideal, as names may change throughout an individual’s lifetime. DOL officials told us that states are making advances in obtaining participant information through secure portals for matching in which users are not able to see participants’ PII. In addition, DOL officials said they are aware of work groups that are developing ways to use advanced technology to do data matching without SSNs. Even when programs have participant SSNs, they may face other constraints in their ability to match with UI data. According to the CREC study, the decision to share or withhold employment and wage data is largely based on how a state interprets UI data confidentiality statutes and regulations. The study found that the states that have been most successful in promoting data sharing are those with legislation that provides greater detail about both to whom and for what purposes confidential data may be disclosed. Data gaps may continue to affect the quality of performance data reported in the Adult Education program. For example, state programs that do not collect SSNs, such as New Hampshire, may continue to utilize surveys or individual participant follow-up as a means to collect information on earnings. However, an Adult Education official in New Hampshire said that adding questions to participant outcome surveys to collect information for the WIOA median earnings performance indicator may discourage respondents from filling out the surveys and drop the response rate below OCTAE’s minimum acceptable level. Even when a state Adult Education program uses UI wage data matching, some participants do not provide SSNs—sometimes in large numbers. For example, Adult Education officials in Texas said their rate of matching participants to wage data using SSNs was about 55 percent. According to the statistical formulas OCTAE uses to calculate weighted performance outcomes for state Adult Education programs, participants whose employment outcome information is missing (i.e., whose data cannot be matched with UI wage records or who do not respond to surveys) are assumed to obtain employment at the same rates as those participants with data available. OCTAE officials said they have not assessed whether differences exist between populations that respond to surveys or who submit SSNs and those who do not respond or do not submit SSNs. Officials reported that such an analysis of nonresponse bias would not be technically feasible within each state before submitting performance data, and similarly would not be feasible for OCTAE staff since state data are reported in the aggregate. DOL regional officials also raised concerns about the accuracy of performance data reported by states. DOL regional officials annually review a sample of case files from selected states to assess the validity of data states submit for performance reporting, and provide feedback to states based on their findings. According to DOL regional officials, these regularly recurring data validation reviews are conducted by the regional officials after states have submitted their performance data. 
Regional officials review the data element validation results, including the error rates, and provide technical assistance to states as appropriate, according to DOL officials. Officials in two of the three DOL regional offices we spoke with said they have recently identified higher-than-desired error rates in performance data reported to the federal government in their routine reviews. For example, one regional official found that a state failed to update its data system and, as a result, reported inaccurate information about the level of education completed by program participants. DOL officials told us that due to resource limitations, they have not performed any analysis to determine how such error rates may or may not affect the overall accuracy of outcome data. According to DOL officials, although the data validation reviews conducted by regional officials are not tied to sanctions, states are expected to use the findings of the reviews to improve the quality of data they submit for performance reporting in the future. OCTAE also conducts risk-based monitoring of state Adult Education programs, which covers data quality issues. One recent monitoring report found that a state program's data system did not include required automated checks for errors and invalid data. New WIOA reporting requirements will increase the responsibilities for Eligible Training Providers (ETPs) to track employment outcomes for training participants, including participants who are not enrolled in WIOA programs. State officials in all three states we visited said that these additional performance reporting requirements may discourage some ETPs from participating as WIOA training providers because the ETPs believe reporting on non-DOL-funded participants will be burdensome. Texas officials said that they have been working with ETPs in their state to find ways to reduce the burden while still complying with WIOA. We identified similar concerns in prior work on the transition to WIA. However, according to DOL, states were ultimately able to obtain a waiver from certain ETP reporting requirements under WIA, which balanced these concerns while still gathering enough information for participants and others to make informed choices about which training providers to use. Forty-one states obtained the waiver, according to DOL. DOL officials told us they are aware of states' questions about where the burden for ETP performance reporting lies, in both data collection and reporting. For example, DOL officials told us that not all ETPs have access to UI wage data. DOL officials said they solicited feedback on the proposed regulations and performance reporting templates in order to work through these issues, and that these details will be addressed in the final regulations and reporting templates, as well as in future joint guidance from DOL and Education on data access. State Vocational Rehabilitation officials in two of the three states we visited noted that identifying and tracking participants receiving pre-employment transition services, as well as reporting on new WIOA performance indicators for these participants, may be difficult. WIOA emphasizes pre-employment transition services, which assist students with disabilities transitioning from secondary school into postsecondary education or employment.
WIOA states that these services are to be provided to students with disabilities who are eligible or “potentially eligible” for services, but New Hampshire and Texas officials said they did not know how to identify, serve, and track “potentially eligible” participants. According to Education, each VR program grantee received terms attached to their fiscal year 2016 grant award describing the requirements for the provision of pre-employment transition services to all students with disabilities who are eligible or potentially eligible for VR services. In addition, the agency noted that RSA plans to provide more extensive guidance related to this issue in the future. DOL and Education officials are largely aware of and, in some cases, actively working to help address the challenges the states we visited raised related to their ability to timely implement WIOA’s vision for performance reporting. In this process, the federal agencies are balancing the benefits of issuing guidance as early as possible with the importance of considering a broad array of stakeholder input via the ongoing rulemaking process. In some cases, the departments have tried to assuage states’ concerns through the guidance and ongoing dialogue discussed earlier. However, DOL and Education officials also acknowledge they likely cannot more fully mitigate states’ concerns until they issue final WIOA regulations and subsequently begin to develop more detailed implementation guidance. Among other things, the departments noted that in several areas, the proposed regulations sought suggestions and other input on key aspects of WIOA rather than spelling out explicit proposed rules (e.g., different ways to define program exit or ideas on how to measure the effectiveness in serving employers indicator), an approach that likely contributes to some states’ hesitancy to plan specific implementation steps or commit resources to them. According to DOL officials, the departments expect to issue final regulations in June 2016, which is after the date called for in WIOA. They attribute this delay to the scope and complexity of the new law, the volume of comments received on the proposed regulations, and the importance they are placing on a thoughtful and deliberative treatment of the many stakeholders’ perspectives and input. Officials in all three states we visited reported that they were not aware of any outside intrusions into the electronic data systems for core WIOA programs during the years they worked in the programs. However, officials in Illinois and Texas reported other types of occasional data or security breaches that resulted in inappropriate exposure of PII for small numbers of people in limited circumstances. For example, Texas officials described a few instances of paper records with PII stolen from vehicles, a few times when unencrypted data were transferred electronically from local offices to the state, and an instance in which an employee included PII in internal emails. Officials said an electronic flag in the Texas email system currently identifies SSN-like numbers that appear in emails from program employees before they are sent so employees can remove any SSNs. An Illinois official reported that a staff member once placed paper PII in a non-secure trash container. 
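The Texas email flag described above is, in essence, an automated pattern check on outgoing messages. The sketch below is our own illustration of that kind of check, assuming a simple regular expression and a hypothetical function name; it is not the state's actual implementation, which may use additional patterns and workflow steps.

```python
# Illustrative sketch only: flagging SSN-like numbers in an outgoing email so
# the sender can remove them before the message is sent. The pattern and
# function name are assumptions for this example.
import re

# Nine digits formatted like an SSN, with or without separators (e.g., 123-45-6789).
SSN_LIKE = re.compile(r"\b\d{3}[- ]?\d{2}[- ]?\d{4}\b")


def flag_ssn_like_numbers(message_body: str) -> list:
    """Return any SSN-like strings found in the message body."""
    return SSN_LIKE.findall(message_body)


draft_email = "Please review the case file for participant 123-45-6789 before Friday."
matches = flag_ssn_like_numbers(draft_email)
if matches:
    print("Warning: possible SSNs found; remove before sending:", matches)
```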
Officials in another Illinois program said they found that some of the entities with which they shared data did not have certain security practices in place; the entities reported this information in response to a questionnaire regarding data security practices administered by the program as part of the data sharing application process. For example, some of the entities were faxing paperwork with PII. Officials said they have used the answers to this questionnaire to help tighten data security with certain sharing partners, including mandating the use of Secure File Transfer Protocol for data sharing. Officials in all three states said they had protocols for reporting a breach of PII, often reporting the incident to a security officer or a program supervisor. For example, in the Illinois Vocational Rehabilitation program, officials told us that staff are to report data breaches to the program director and to the agency’s Chief Information Officer or to the secretary of the agency, and that such breaches must be reported to the governor and the state legislature within 24 hours. Illinois Adult Education officials also said they have procedures to investigate incidents and inform affected program participants. According to officials in New Hampshire’s Wagner-Peyser program, the program is required by the state to have a continuity of operations plan in place, which includes responding to data breaches. The response plan is coordinated by an official in the agency commissioner’s office who is responsible for overseeing data security. One of the ways that some programs reduce risks associated with the unauthorized disclosure of PII is to limit what is collected from participants, including involving participants in consenting to the collection of information, which is consistent with the Fair Information Practice Principles. According to the Privacy Act of 1974, government agencies generally cannot deny services to participants because of a refusal to provide SSNs. In 11 of the 12 programs across the three states, officials told us they allow applicants to opt out of providing their SSNs and still receive services if found otherwise eligible. One New Hampshire program only uses the last 4 digits of a participant’s SSN in its data systems and prohibits the transmission of the entire SSN. In addition, officials in at least one program in each of the three states also specified that they allow applicants to opt out of providing other types of PII, such as date of birth. While, as previously mentioned, many Adult Education program participants in the states we visited do not provide SSNs, Title I and Wagner-Peyser officials in New Hampshire and Texas told us that relatively few participants in their programs opt out of supplying their SSNs or other PII. One New Hampshire program reduces the possibility of unauthorized PII disclosures by asking for personal identification to establish identity, but not storing the PII in any files. This practice is consistent with the Fair Information Practice Principle of minimizing data collection to only directly relevant information. Officials in the New Hampshire program said that staff ask applicants to provide identity documents (e.g., driver’s license, military separation paperwork, or Social Security card), and then staff conduct a visual verification of the documents without scanning them into their electronic data system or making copies for a paper record file. 
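The practices described above, retaining only the last four digits of an SSN and verifying identity documents without storing copies, amount to minimizing what PII a data system ever holds. The sketch below is our own illustration of that idea, with hypothetical function and field names rather than any state's actual code.

```python
# Illustrative sketch only: keeping just the last four digits of an SSN so the
# full number is never stored. Function and field names are assumptions for
# this example.
def truncate_ssn(full_ssn: str) -> str:
    """Return a masked SSN that retains only the last four digits."""
    digits = "".join(ch for ch in full_ssn if ch.isdigit())
    if len(digits) != 9:
        raise ValueError("expected a 9-digit SSN")
    return "XXX-XX-" + digits[-4:]


def build_participant_record(name: str, full_ssn: str) -> dict:
    # The full SSN is used only transiently here; the stored record keeps the
    # truncated form, and no scanned identity documents are retained.
    return {"name": name, "ssn_last4": truncate_ssn(full_ssn)}


print(build_participant_record("Jordan Smith", "123-45-6789"))
# {'name': 'Jordan Smith', 'ssn_last4': 'XXX-XX-6789'}
```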
An official from one DOL region suggested that states could consider more frequently using visual verification of documents with PII without making copies. As a result, state programs would not store that PII. Officials in all three states we visited also explained that they inform applicants for services about how the programs will use their PII, typically by asking applicants to read and sign a form indicating why their PII is being collected and how it will be used. This is consistent with the Fair Information Practice Principles. The forms explain the types of allowable uses of PII, among other things. For example, an information disclosure form for the VR program in New Hampshire informs participants that personal information is confidential and will not be released without their written permission, except when information must be released as required by federal authorities or in other circumstances specified on the form. The disclosure form states that information can be shared, with consent, with other related state programs with which the participant is involved and with entities involved in audit, evaluation, or research directly connected with administration of the program. In prior work, we identified actions in the areas of security management and access controls, among others, as important for keeping data secure; these are also consistent with the Fair Information Practice Principle of protecting data through safeguards against risks. Officials in the three states we visited reported using various practices in these areas. For example, in terms of security management, state officials reported various activities related to the design and operation of their data systems, including: state data security protocols (together with federal guidance) that establish practices for handling both electronic and paper PII; data security training that staff must attend periodically, along with a confidentiality (also called nondisclosure) agreement that staff must sign; and data sharing agreements with other state agencies that must include proper PII protection procedures, with programs that own the PII reviewing current or proposed data sharing agreements and entities requesting shared data required to provide evidence of acceptable data security procedures. Related to physical and data system access controls, state officials said that, among other practices, they: use secured entry for buildings containing PII; keep data on a secure network, with access privileges set according to a staff member's role; password-protect staff computers, which lock after a period of inactivity; and require staff to use encryption software to safeguard sensitive information sent via email or transferred as data files. In addition, the core programs in Texas have a periodic "penetration test" to try to identify possible vulnerabilities that outsiders could use to access the data. The central state entity for information resources conducts these tests individually for each state agency; staff from that entity attempt to hack into the data system as if they were outsiders. The Texas core programs also perform self-assessments of data security. New Hampshire's central entity for information resources also conducts scans on data servers for vulnerabilities. We provided copies of this draft report to the Department of Education and the Department of Labor for review and comment.
Both departments provided technical comments, which we have incorporated as appropriate. The Department of Labor provided a transmittal letter with its comments, which is reproduced in appendix I. The letter highlights DOL's work with Education to create a unified vision of WIOA performance accountability through guidance, technical assistance, and rulemaking. The letter also cites DOL's goal in the coming year of providing guidance on data validation, as well as other performance accountability topics such as wage record sharing and negotiations of performance targets. We also provided copies of this draft report to officials from the state programs we visited for their review, and incorporated their technical comments as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Labor, the Secretary of Education, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Bill MacBlane (Assistant Director), Michael Kniss (Analyst-in-Charge), Kelsey Kreider, Ada Nwadugbo, and Paul Wright, along with James Bennett, Marisol Cruz, John de Ferrari, Alex Galuten, Danielle Giese, Farrah Graham, Amy Moran Lowe, Sheila McCoy, Walter Vance, and Charlie Willson, made significant contributions to this report. Workforce Innovation and Opportunity Act: Performance Reporting and Related Challenges. GAO-15-764R. Washington, D.C.: September 23, 2015. Education and Workforce Data: Challenges in Matching Student and Worker Information Raise Concerns about Longitudinal Data Systems. GAO-15-27. Washington, D.C.: November 19, 2014. Workforce Investment Act: Strategies Needed to Improve Certain Training Outcome Data. GAO-14-137. Washington, D.C.: January 31, 2014. Information Security: Agency Responses to Breaches of Personally Identifiable Information Need to Be More Consistent. GAO-14-34. Washington, D.C.: December 9, 2013. Workforce Investment Act: DOL Should Do More to Improve the Quality of Participant Data. GAO-14-4. Washington, D.C.: December 2, 2013. Unemployment Insurance Information Technology: States Face Challenges in Modernization Efforts. GAO-13-859T. Washington, D.C.: September 11, 2013. Information Technology: Department of Labor Could Further Facilitate Modernization of States' Unemployment Insurance Systems. GAO-12-957. Washington, D.C.: September 26, 2012. Federal Information System Controls Audit Manual (FISCAM). GAO-09-232G. Washington, D.C.: February 2, 2009. Workforce Investment Act: Additional Actions Would Further Improve the Workforce System. GAO-07-1051T. Washington, D.C.: June 28, 2007. Workforce Investment Act: Employers Found One-Stop Centers Useful in Hiring Low-Skilled Workers; Performance Information Could Help Gauge Employer Involvement. GAO-07-167. Washington, D.C.: December 22, 2006. Workforce Investment Act: Labor and States Have Taken Actions to Improve Data Quality, but Additional Steps Are Needed. GAO-06-82. Washington, D.C.: November 14, 2005. Workforce Investment Act: Substantial Funds Are Used for Training, but Little Is Known Nationally about Training Outcomes. GAO-05-650.
Washington, D.C.: June 29, 2005. Workforce Investment Act: Labor Should Consider Alternative Approaches to Implement New Performance and Reporting Requirements. GAO-05-539. Washington, D.C.: May 27, 2005. Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004. Workforce Investment Act: Labor Actions Can Help States Improve Quality of Performance Outcome Data and Delivery of Youth Services. GAO-04-308. Washington, D.C.: February 23, 2004. Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA’s Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002. Workforce Investment Act: Better Guidance Needed to Address Concerns Over New Requirements. GAO-02-72. Washington, D.C.: October 4, 2001.
Enacted in 2014, the Workforce Innovation and Opportunity Act brought numerous changes to existing federal employment and training programs, including requiring DOL and Education to implement a common performance accountability system across the six WIOA-designated core programs. WIOA includes a provision for GAO to issue an interim and final report on issues related to job training databases and data exchange agreements. This final report addresses (1) changes selected states plan to make in how they collect and report performance information for core programs; (2) challenges these states face related to performance reporting and how they might be addressed; and (3) whether these states have reported breaches to core program data systems and what practices they have to safeguard personal information. GAO reviewed relevant federal laws, regulations, and policy guidance; and obtained information on the efforts under way in three states (Illinois, New Hampshire, and Texas) selected in part based on variation in level of experience with sharing data across programs. The views of these officials provide in-depth examples but are not generalizable to all states. GAO also interviewed DOL and Education officials, including selected DOL regional officials and Education state liaisons and area coordinators to obtain perspectives on more states. GAO makes no recommendations in this report. In its comments on a draft of this report, DOL highlighted its efforts with Education to promote a unified vision of performance accountability. To implement the Workforce Innovation and Opportunity Act (WIOA), the 2014 law governing the nation's employment and training programs, the three states GAO visited are considering performance reporting changes such as integrating data systems and using new data sources. GAO selected states with varying levels of experience sharing data across programs. According to the Departments of Education (Education) and Labor (DOL), WIOA is intended, in part, to improve the consistency of states' performance reporting compared to reporting under the law it replaced, the Workforce Investment Act of 1998. For six core DOL and Education programs, WIOA establishes common indicators of performance in areas such as employment and earnings, and encourages states to integrate data systems related to these indicators. In the states GAO visited—Illinois, New Hampshire, and Texas—efforts to integrate data systems varied. For example, Texas is consolidating programs in one agency and building an integrated data system and Illinois is discussing integration options across the four agencies housing its programs. Officials in all three states expect changes in how they collect and report performance information. Though specific reporting requirements are not yet final, core programs—especially those administered by Education—face substantial changes. For example, Education programs in these states are exploring new ways to collect earnings data, such as adding survey questions or obtaining greater access to unemployment insurance wage records. Program officials in the three states GAO visited identified challenges to WIOA performance reporting, including: Limited guidance. Officials in all three states said early implementation was slowed because WIOA regulations are not yet final and certain details about performance reporting are not yet resolved. In the interim, DOL and Education have offered states additional guidance. Cost and complexity of integrating data systems. 
Officials in Illinois and New Hampshire said that resource constraints pose challenges to integrating data systems. Among efforts to help defray integration costs, DOL and Education have sought additional federal funding for states. Data quality concerns. Missing participant data may continue to affect the quality of information states report to federal agencies. For example, some states reported using participant surveys to collect employment data due to challenges with state privacy laws. In addition, federal law generally allows participants to opt out of providing Social Security numbers (SSNs). Officials in the states GAO visited said many participants in one of the Education-administered programs do not provide SSNs, making it harder to match data to track their outcomes and participation in other programs. Officials in the three states GAO visited reported no intrusions into their data systems in recent years. Officials in two states did report other occasional security breaches that may have resulted in inappropriate exposure of personally identifiable information for small numbers of people in limited circumstances; for example, emails that included participant SSNs. Officials in all three states reported taking steps to limit and protect the participant information they collect, such as monitoring and controlling data access.
Challenges in global political affairs have placed increasing demands on the way the United States uses space capabilities to achieve national security objectives. DOD's space network is expected to play an increasingly important role in military operations. Yet in each major conflict over the past decade, senior military commanders have reported shortfalls in tactical space capabilities, such as those intended to provide communications and imagery data to the warfighter. To provide short-term tactical capabilities, as well as to identify and implement long-term approaches for developing low-cost satellites, DOD initiated the ORS concept. The ORS concept aims to quickly deliver low-cost, short-term tactical capabilities to address unmet needs of the warfighter. Unlike traditional large satellite programs, the ORS concept is intended to address only a small number of unmet tactical needs—one or two—with each delivery of capabilities. It is not designed to replace current satellite capabilities or major space programs in development. Rather, the ORS concept has long-term goals of reducing the cost of space development by fostering low-cost launch methods as well as common design and interface methods. The ORS concept is based on three tiers, as shown in figure 1, that are distinguished by the means to achieve the effects as well as the length of time required to deliver ORS capabilities. According to DOD, the timelines may not be achievable at the outset, but they will remain an important goal as the ORS program matures. The Joint ORS Office plans to focus on fielding Tier 2 and 3 space capabilities and, when directed, on supporting the achievement of Tier 1 response times in coordination with other members of the warfighter and national security space communities. ORS solutions can be derived from ORS activities in more than one tier. The Joint ORS Office has intentionally been limited in size, and therefore it will rely on existing space organizations for specific ORS support and execution activities. Capabilities developed under the ORS concept will be complementary to other fielded space capabilities. With a focus on augmenting, reconstituting, and filling unanticipated gaps in U.S. space capabilities, ORS aims to provide a critical capability for the United States to maintain the asymmetric advantage it has derived from its space-based capabilities over potential adversaries. DOD has taken several steps to develop the ORS concept to meet warfighter needs; however, the concept is still in the early stages of its development and not commonly understood by all members of the warfighter and national security space communities. DOD has developed a process for converting warfighter needs into formal requirements and identifying potential ORS solutions. In April 2008, DOD issued an Implementation Plan and continues to draft instructions and guidance to further clarify ORS and how it can meet warfighter needs. In spite of this progress, common understanding of the ORS concept is lacking because DOD has not clearly defined key elements of the ORS concept and has not effectively communicated the concept to key stakeholders. DOD has made some progress in developing the ORS concept. Since the Joint ORS Office was established in May 2007, it has developed a process that converts warfighter needs into formal requirements and potential ORS solutions. DOD also issued an Implementation Plan in April 2008 and continues to develop further ORS guidance.
DOD has established a process that converts a warfighter need into formal requirements and identifies potential ORS solutions for those requirements. As shown in figure 2, the ORS Requirements and Solutions Generation process begins when a Joint Force Commander or other user submits a capability need to U.S. Strategic Command. During the requirements development phase and the solutions development phase, teams are assembled from across the warfighter and national security space communities by the designated lead for the respective phases. At this time, the Joint ORS Office has asked Air Force Space Command to facilitate the requirements development phase and has asked the Air Force Space and Missile Systems Center to facilitate the solutions development phase. The solutions development phase can begin before the formal Capability Requirements Document is delivered. The Joint Force Commander or other user who submitted the need has multiple opportunities to provide input throughout the ORS Requirements and Solutions Generation process to ensure that the solutions being considered will actually fit the need. At the time of this report, one warfighter need had completed the ORS Requirements and Solutions Generation process, and two other warfighter needs were in process. The need that completed the process was a request to augment global ultra-high-frequency communications. The Joint ORS Office received the need from U.S. Strategic Command on September 14, 2007, and the initial solutions were presented to the Commander of U.S. Strategic Command on October 17, 2007. The second need is a classified space situational awareness need. Possible solutions for the second need have also been presented to the Commander of U.S. Strategic Command and the DOD Executive Agent for Space for approval. According to the Deputy Director of the Joint ORS Office, after completing the process, there was some question whether a space-based capability was the best way to meet the need. He said that the DOD Executive Agent for Space has asked for more information, and the potential solutions are now in senior leadership review. A third need for an ISR capability has begun the ORS Requirements and Solutions Generation process. As of the end of May 2008, this need had completed the requirements development phase of the ORS Requirements and Solutions Generation process. In July 2007, the Deputy Secretary of Defense tasked the DOD Executive Agent for Space with developing, by October 15, 2007, an ORS Implementation Plan to guide ORS activities. The DOD Executive Agent for Space did not meet this deadline, but the plan was issued on April 28, 2008. The ORS Implementation Plan identifies the DOD processes and staffing resources required to meet ORS needs and outlines the elements necessary to implement the ORS concept; it also serves as the initial charter for the Joint ORS Office. Additionally, the Deputy Secretary of Defense required the military departments to assign personnel to fully staff the Joint ORS Office no later than August 1, 2008, and to establish dedicated funding for ORS beginning in fiscal year 2010. In addition to the implementation plan, three ORS guidance documents are currently being drafted, but no timeline has been established for their completion. First, U.S.
Strategic Command is drafting an update to its Initial ORS Concept of Operations that is intended to make the initial concept of operations shorter and more concise, to clarify the services' roles and responsibilities, and to provide more information on ORS capabilities, including who will be able to operate them. Second, DOD is drafting an instruction to assign responsibilities and to prescribe procedures for Joint Force Commanders to submit urgent operational needs for a possible ORS solution. Third, U.S. Strategic Command is drafting an instruction that is designed to assign responsibilities for ORS within U.S. Strategic Command and its supporting Joint Functional Component Commands. According to U.S. Strategic Command officials, this instruction will implement and expand upon the guidance found in the DOD instruction mentioned above. U.S. Strategic Command's instruction will also detail the procedures the command will use to prioritize warfighter needs. According to a U.S. Strategic Command document, factors that will be taken into consideration for prioritization include: (1) the operational relevance of the need, (2) the degree of urgency of the need and how soon the need must be satisfied, (3) whether the need has a potential space solution, (4) the technical feasibility of the need, (5) whether ORS resources can address the need, and (6) whether ORS is the best choice of all possible means to address the need. Most ORS efforts are in their initial phases, and thus it is too early to judge their success. According to the ORS Implementation Plan, the Joint ORS Office will accomplish its objectives over time in a "crawl, walk, and run" approach. At this time, the ORS concept is still in the "crawl" phase, which means that the warfighter is getting involved with the ORS concept and the focus of ORS efforts is on demonstrating building blocks for later efforts, conducting experiments, and determining what can be accomplished with current assets. "Walking" would be characterized as the evolution of the ORS concept into a warfighter-driven concept with selected capabilities tied to gaps and integrated within the existing architecture. The ORS Implementation Plan states that this phase would not begin until approximately 2010. A "run" would involve a full range of space effects delivered when and where needed and is expected to begin in approximately 2015. The former Deputy Commander of U.S. Strategic Command told us that he expects the current tactical satellites to propel the ORS concept to somewhere between a walk and a run. Key stakeholders throughout the warfighter and national security space communities do not share a common understanding of the ORS concept for two primary reasons: the ORS concept is not clearly defined in its initial guidance documents, and DOD has not adequately communicated the concept to key stakeholders. DOD has not documented a clear definition of the ORS concept and, as a result, key stakeholders in the warfighter and national security space communities do not share a common understanding of the concept. Our prior work examining successful organizational transformations shows the necessity to communicate clearly defined goals and specific objectives to key stakeholders.
Initial ORS planning documents—the Plan for ORS and the Initial Concept of Operations—are broad and lack the specificity needed to guide the ORS concept, according to some members of the warfighter and national security space communities. For example, the associate director of the National Security Space Office said that the Plan for ORS addressed the eight areas required by the National Defense Authorization Act for Fiscal Year 2007 in only a broad sense. Moreover, an official from one combatant command said that the Initial Concept of Operations was not well-defined, and officials from another combatant command told us that the concept of operations was really more of a vision statement. We found several examples of a lack of clarity within these initial documents. First, the Initial ORS Concept of Operations states that ORS is focused on the timely satisfaction of the urgent needs of the Joint Force Commanders, but it does not adequately define what constitutes "urgent." Additionally, the approach presented in the April 2007 Plan for ORS for enhancing the responsiveness of space systems is to implement ORS to develop more affordable, small systems that can be deployed in operationally relevant time frames, but the plan does not clarify what is meant by "operationally relevant time frames." According to the Plan for ORS and the Initial Concept of Operations, some ORS solutions could take up to 1 year to execute. Officials in the Office of the Undersecretary of Defense for Policy questioned whether these time frames could really meet an urgent need. Additionally, officials from one combatant command told us that a time frame of 1 year to get a need met would not be considered responsive enough for them unless a satellite was already in orbit so that they could task it directly. Based on these examples, key stakeholders are not operating under a common understanding regarding the time frames for ORS. Moreover, key stakeholders in the intelligence community have said that they are not sure which operational needs or urgent needs the ORS concept is to satisfy. Additionally, at the time of our review, other guidance documents needed to clarify the ORS concept had not yet been developed. The August 2007 memorandum from the DOD Executive Agent for Space directed the Joint ORS Office to develop an ORS Strategy, an ORS Road Map, and an ORS Program Plan in addition to the ORS Implementation Plan. The Deputy Director of the ORS Office said the office decided to complete the ORS Implementation Plan before writing the other documents so that the plan could guide their development. Now that the ORS Implementation Plan has been released, he said that they will need to get more guidance from the DOD Executive Agent for Space regarding what specific information should be included in the remaining documents. DOD has not effectively communicated with key stakeholders or engaged them regarding the ORS concept. Our prior work examining successful organizational transformations shows the need to adopt a communication strategy that provides a common framework for conducting consistent and coordinated outreach, within and outside the organization, early and often, and that seeks to genuinely engage all stakeholders in the organization's transformation. However, DOD did not initially involve the geographic combatant commands in the development of the ORS concept.
For example, officials from one geographic combatant command told us that they did not have any input into the development of the Initial Concept of Operations for ORS and were not involved in any of the ORS working groups. These officials were concerned that failing to involve the geographic combatant commands in the ORS concept development would lead to new capabilities that drive warfighter requirements instead of warfighter requirements determining how to develop ORS capabilities. Additionally, officials from a functional combatant command told us that key ORS meetings took place in August 2007 but they were not invited to participate and neither were the geographic combatant commands. These officials were concerned that failing to invite these combatant commands to the meetings might result in the development of requirements that really do not benefit the warfighter. The first extensive outreach to the combatant commands was in preparation for the November 2007 ORS Senior Warfighters Forum, which took place 6 months after the standup of the Joint ORS Office. A senior space planner, who is the lead for ORS for one combatant command, told us that during preparatory briefings for the ORS Senior Warfighters Forum, participants were told that the purpose of the forum would be to learn what space capabilities the combatant commands needed that ORS might be able to address. However, after a couple of briefings, he learned that the purpose of the ORS Senior Warfighters Forum had shifted to that of educating the combatant commands on the ORS process and how to get an ORS capability. The senior space planner explained that rather than asking the warfighter what they need, the focus was now on placing their needs into a process that had already been developed. This same combatant command official told us that no clear answers were provided to questions asked at the ORS Senior Warfighters Forum regarding the submission of warfighter needs or how these needs would be prioritized and, as of the end of February 2008, they had received no updates from U.S. Strategic Command on any of the issues discussed at the forum. Similarly, an intelligence agency official told us that no consensus was reached during the forum and very little concrete information was relayed regarding how ORS will be used in the future. Officials from various commands called for better communication strategies to enhance their understanding of the ORS concept. Various geographic combatant command officials we spoke with generally said that U.S. Strategic Command should increase its ORS outreach activities (e.g., visits, briefings, and education) to reach more staff throughout the commands and services. The Chief of Staff at the U.S. Strategic Command Joint Functional Component Command for Space acknowledged that outreach activities need to be completed with the combatant commands so that they can better understand how future ORS capabilities can benefit their area of operation. Officials from U.S. Strategic Command acknowledged that they had not done a good job of educating the combatant commands on the ORS concept in its early days. However, the Deputy Director of the ORS Office told us that one of the responsibilities of one of the division chiefs who arrived in March 2008 at the Joint ORS Office will be to reach out to the combatant commands and engage the warfighter on the ORS concept. Additionally, DOD has not communicated well with the intelligence community regarding the ORS concept. 
Officials from the National Security Agency said that they are very concerned about the lack of consultation that has been done with the intelligence community regarding the ORS concept. Officials from the National Geospatial-Intelligence Agency also said that they believe that communication with the intelligence community regarding the ORS concept has been insufficient. However, both agencies acknowledged that communication between DOD and the intelligence community has improved since they started working together on tactical satellites, but their concerns regarding communication remain. While the U.S. Strategic Command and the Joint ORS Office have taken some steps to promote the ORS concept such as the November 2007 ORS Senior Warfighters Forum, directing one of the Joint ORS Office division chiefs to reach out to the combatant commands, and engaging the intelligence community on the tactical satellites, they have not developed a consistent and comprehensive outreach strategy. The lack of a clearly defined ORS concept and effective outreach to the stakeholders has affected the acceptance and understanding of the ORS concept throughout the warfighter and national security space communities. Without a complete and clearly articulated concept that is well communicated with key stakeholders, DOD could encounter difficulties in fully implementing the ORS concept and may miss opportunities to meet warfighter needs. DOD has recognized the need to integrate the ORS concept into the warfighter and national security space communities’ processes and architecture, but it has not yet determined specific steps for achieving integration. DOD does not plan to begin integrating the ORS concept in accordance with the 1999 DOD Space Policy until between 2010 and 2015. However, integrating space systems is a complex activity that involves many entities inside DOD and the intelligence community and may take more time to accomplish than expected. Therefore, taking incremental steps as the ORS concept matures may help the Joint ORS Office to achieve timely integration and help assure that warfighter requirements will be met. Senior ORS officials have told us that the ORS concept is still too new to begin its integration, but combatant command and intelligence community officials are concerned about how the ORS concept will be integrated into their existing processes for submitting warfighter needs and processing ISR data. According to the 1999 DOD Space Policy, an integrated national security space architecture that addresses defense and intelligence missions shall be developed to the maximum extent feasible in order to eliminate programs operating in isolation of one another and minimize unnecessary duplication of missions and functions and to achieve efficiencies. This policy also directs the Secretaries of the Military Departments and Combatant Commanders to integrate space capabilities and applications into their plans, strategies, and operations. In order to be consistent with DOD Space Policy, new processes or systems developed under the ORS concept should be integrated into all facets of DOD’s strategy, doctrine, education, training, exercises and operations. DOD has acknowledged that the ORS concept needs to be integrated and one of the goals in the ORS Implementation Plan is to integrate the ORS concept into the existing space architecture between 2010 and 2015. 
Given the complex environment of the warfighter and national security space communities, changes that affect one organization can have an effect on integrating national security space systems, and may take longer than anticipated. We previously reported that DOD is often presented with different and sometimes competing organizational cultures and funding arrangements, and separate requirements processes among the agencies involved in the defense and national space communities. This complex environment has prevented DOD from reaching some of its past integration goals. For example, in 2005, changes at the National Reconnaissance Office resulted in the removal of National Reconnaissance Office personnel and funding from the National Security Space Office, and restricted the National Security Space Office’s access to a classified information-sharing network, thereby inhibiting efforts to further integrate defense and national space activities—including ISR activities—that had been recommended by the Space Commission. If the Joint ORS Office does not successfully integrate the ORS concept into the existing space architecture within established time frames, this may result in a lack of coordination among various members of the warfighter and national security space communities. Officials from the Joint ORS Office and U.S. Strategic Command acknowledged that they have not yet determined how any future ORS processes and systems will be integrated into existing national security space processes and systems, because the concept is still too new for them to determine the best way to achieve integration. Furthermore, the ORS Implementation Plan states that the Joint ORS Office will be working with the military departments and appropriate agencies to prepare for a smooth transition of systems when they are developed and acquired by the Joint ORS Office. However, the Joint ORS Office does not yet have any new space capabilities to be transitioned. Senior ORS officials told us that they cannot develop a comprehensive plan for the integration of ORS processes into existing DOD and intelligence community processes and architecture until they know more about the nature of ORS capabilities that they will be able to develop. Moreover, U.S. Strategic Command officials said that integration of new systems will have to take place on a case-by-case basis depending on the type of capability that is developed. They also said that it is conceivable that in certain situations, integrating some ORS solutions might not be the most cost-effective and efficient way to provide an urgent capability to a warfighter. For example, some of the architecture for addressing ISR needs requires high levels of data classification. If a warfighter had a need that could be met at a lower classification level than a particular ISR system would allow, it might be faster and less expensive to not integrate that particular ORS capability in order to preserve a lower classification of the data obtained and avoid the expense and complications associated with processing data with higher classifications. For these reasons, DOD has not laid out any specific steps toward the longer-term goal of integrating the ORS concept into the existing space architecture, which has raised some concern within the warfighter and national security space communities about the possible creation of unnecessary duplicative processes. 
For example, combatant command officials told us that they are already burdened by multiple processes for submitting their warfighter requirements. They emphasized that any processes developed for submitting ORS requirements should be integrated into existing requirements submission processes so as not to require a new process for them to learn to use and manage. However, the Deputy Director of the ORS Office said that the process for submitting ORS requirements currently under development is separate from and parallel to existing methods of submitting warfighter needs, and he does not yet know how it will be integrated. He explained that the ORS concept has only been tested with two warfighter needs, so it is too soon for them to determine how particular ORS processes—such as the requirements submission process—will be integrated into existing warfighter requirements processes. U.S. Strategic Command officials told us that in the future, they envision receiving ORS requirements from multiple existing processes already in place, but time is needed to allow the concept to mature and develop before integration can be fully addressed. Intelligence community officials also raised concerns about the importance of using their current processes and architecture so as not to create unnecessary duplicative processes to get data to the warfighter. Furthermore, officials from the National Geospatial-Intelligence Agency told us that their analysts cannot keep up with the data being collected from existing space assets, and they do not know who will process information from any new assets that might be developed under ORS.

DOD officials have acknowledged the need to integrate ORS into the existing ISR enterprise; however, accomplishing this goal will be especially challenging. We recently reported that DOD's existing roadmap for integrating current ISR capabilities does not provide DOD with a long-term, comprehensive vision of the desired end state of the ISR enterprise. We also reported that DOD has not been able to ensure that ISR capabilities developed through existing processes are really the best solutions to minimize inefficiency and redundancy. Therefore, it will be difficult for the Joint ORS Office to reduce inefficiency by integrating its processes and systems into the current ISR enterprise, which already faces numerous integration challenges. The Deputy Director of the Joint ORS Office said that the office has not yet determined how data collected by any new ORS solutions developed for ISR needs will be integrated into existing intelligence community back-end processes for analyzing and distributing data collected from space assets. Integrating the ORS concept will involve many agencies across the warfighter and national security space communities and may take more time than anticipated. If the integration of the ORS concept is not adequately planned, DOD may not meet its time frames for integrating the ORS concept. If the ORS concept is not integrated into the existing space architecture as integration issues arise, it could create duplicative efforts, resulting in wasted resources and inhibiting the concept's ability to fully meet warfighter needs.

While DOD has taken a number of steps to advance the ORS concept and to develop a process for providing ORS capabilities to the warfighter, its ability to implement the concept will be limited until it more clearly defines key aspects of the ORS concept and increases its outreach and communication activities. 
Without a complete and clearly articulated concept that is well communicated and practiced among key stakeholders, DOD could encounter difficulties in fully implementing the ORS concept and building the relationships necessary to ensure ORS's success. Furthermore, even though it may be too early to develop a comprehensive plan for integrating ORS processes and systems into the existing national security space architecture, DOD can identify the steps necessary to achieve integration as the concept matures. Integrating the ORS concept will be very challenging, especially as it pertains to ISR activities that will have to be coordinated among many agencies across DOD and the intelligence community. Identifying the incremental steps toward integration could help DOD meet its time frames for integrating the ORS concept, prevent the ORS concept from creating duplicative efforts, ensure that the ORS concept meets warfighter needs, and ensure that its future satellites are adequately supported.

We recommend the DOD Executive Agent for Space take the following three actions:

- Direct the Joint ORS Office, in consultation with U.S. Strategic Command, to define key ORS terms, including what qualifies as an urgent need, how timely satisfaction of a need is evaluated, and what Joint Force Commander needs the ORS concept is trying to satisfy.

- Direct the Joint ORS Office, in consultation with U.S. Strategic Command, to establish an ongoing communications and outreach approach for ORS to help guide DOD's efforts to promote, educate, and foster acceptance among the combatant commands, military services, intelligence community, and other DOD organizations.

- In consultation with the Undersecretary of Defense for Acquisition, Technology, and Logistics and the Undersecretary of Defense for Intelligence, and in cooperation with the military services, identify the steps necessary to ensure the integration of the ORS concept into existing DOD and intelligence community processes and architecture as the Joint ORS Office continues its long-term planning of the ORS concept.

In written comments on a draft of this report, DOD partially concurred with our recommendations. DOD's comments are reprinted in appendix II. The National Reconnaissance Office also provided technical comments, which we incorporated as appropriate.

DOD partially concurred with our recommendation to define key ORS terms, including what qualifies as an urgent need, how timely satisfaction of a need is evaluated, and what Joint Force Commander needs the ORS concept is trying to satisfy. In its comments, DOD stated that it codified the definition of ORS on July 9, 2007, and U.S. Strategic Command developed an Initial Concept of Operations containing additional terms intended to further define and clarify ORS activities. However, our work showed that the warfighter and intelligence community believe that key ORS terms need to be defined more clearly. As we stated in our report, the initial guidance documents—such as the Plan for ORS and the Initial Concept of Operations—are considered broad by users and lack the specificity needed to guide the ORS concept. Based on our work, this has led to a lack of a common understanding of the concept among the warfighter and national security space communities. DOD also stated that responsibility for providing overarching definitions and policy guidance will remain with the Office of the Under Secretary of Defense for Policy, and U.S. 
Strategic Command will continue to validate ORS requirements and provide additional clarification, definition, and direction to the ORS Office as the capability matures. However, our recommendation focuses on the need for clearer, better-defined ORS terms. Therefore, we continue to believe that DOD should take additional steps now to define and clarify ORS and provide more definition of key terms.

DOD partially concurred with our recommendation to establish an ongoing communications and outreach approach for ORS to help guide DOD's efforts to promote, educate, and foster acceptance among the combatant commands, military services, intelligence community, and other DOD organizations. In its comments, DOD stated that communicating a clear, concise message was vitally important to the success of ORS and that it is currently conducting outreach efforts in numerous forums. We acknowledged DOD's efforts to promote the ORS concept in our report; however, despite these efforts, confusion regarding the ORS concept persists. As stated in our report, the lack of a clear definition combined with the lack of a consistent and comprehensive outreach strategy has affected the acceptance and understanding of the ORS concept throughout the warfighter and national security space communities. DOD's comments also stated that the burden of outreach should not be placed solely upon the ORS Office and that all ORS stakeholders will continue to play an active role in promoting and fostering acceptance of the ORS concept. Apart from who is designated to develop and implement it, our work showed that a comprehensive communication and outreach approach or strategy that reflects agreed-upon definitions and direction for the ORS concept is needed; otherwise, DOD could encounter difficulties in fully implementing the ORS concept and may miss opportunities to meet warfighter needs.

DOD partially concurred with our recommendation to identify the steps necessary to ensure the integration of the ORS concept into existing DOD and intelligence community processes and architecture as the Joint ORS Office continues its long-term planning of the ORS concept. In its comments, DOD stated that integration of ORS capabilities into current processes and architecture will depend upon the value provided by the current processes and architectures and that integration into existing systems will be considered by the ORS Office as a matter of course. DOD also stated that personnel assigned to the ORS Office from across DOD and the intelligence community bring knowledge and experience that will help to identify ways to selectively integrate ORS capabilities into current systems, when appropriate, in order to streamline delivery of products to the customers. However, based on our work, if integration of the ORS concept is not planned adequately and in a timely manner, DOD may not meet its time frames for integrating the ORS concept into the existing space architecture between 2010 and 2015. Moreover, if the ORS concept is not developed and integrated well in advance of launching future satellites, it could create duplicative efforts, resulting in wasted resources and inhibiting the concept's ability to fully meet warfighter needs. Therefore, we believe our recommendation to take a more proactive approach to integrating the ORS concept, once it is better defined and communicated with the warfighter and national security space community, continues to have merit. 
We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force. Copies will be made available to others upon request. In addition, this report will be available at no charge on our Web site at http://www.gao.gov/. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

To determine whether the Operationally Responsive Space (ORS) concept is being developed to support warfighter needs and the extent to which DOD has a plan that integrates ORS into existing DOD and intelligence community processes and architecture, we reviewed and analyzed ORS planning documents, the ORS concept of operations, and ORS processes for meeting warfighter needs. We also reviewed relevant legislation, policies, and prior GAO reports. We interviewed officials at the U.S. Strategic Command, including the Joint Force Component Command for Intelligence, Surveillance and Reconnaissance and the Joint Force Component Command for Space, as well as officials from the Joint ORS Office to discuss the progress of developing the ORS concept, the initial ORS planning documents, outreach regarding the ORS concept, and plans to integrate the ORS concept into the existing space architecture. We also interviewed officials at Air Force Space Command and the Air Force Space and Missile Systems Center to discuss the new process developed for converting warfighter needs into formal requirements and potential ORS solutions. In addition, we interviewed officials from the U.S. Central Command, U.S. European Command, U.S. Pacific Command, U.S. Southern Command, and U.S. Special Operations Command regarding warfighter involvement in the creation of the ORS concept, the ability of the ORS concept to meet warfighter needs, the degree of outreach received regarding the ORS concept, and the integration of the ORS concept into current processes for submitting warfighter needs. To discuss issues regarding ORS capabilities that may address warfighter ISR needs and the integration of these capabilities into current intelligence community processes and systems, we interviewed officials from the Office of the Director of National Intelligence, the National Geospatial-Intelligence Agency, the National Reconnaissance Office, and the National Security Agency. Furthermore, we interviewed officials from the Office of the Undersecretary of Defense for Policy, the Office of the Undersecretary of Defense for Intelligence, and the National Security Space Office to discuss policy issues related to ORS. Finally, we interviewed officials from U.S. Air Force Headquarters, U.S. Army Space Branch, the Air Force Research Lab, and the Naval Research Lab to discuss service involvement with the ORS concept and the tactical satellite experiments.

In addition to the contact named above, Lorelei St James, Assistant Director; Grace Coleman; Jane Ervin; Amy Higgins; Enemencio Sanchez; Kimberly Seay; Jay Spaan; Matthew Tabbert; Karen Thornton; and Amy Ward-Meier made key contributions to this report.
The Department of Defense's (DOD) operational dependence on space has placed new and increasing demands on current space systems to meet commanders' needs. DOD's Operationally Responsive Space (ORS) concept is designed to more rapidly satisfy commanders' needs for information and intelligence during ongoing operations. Given the potential for ORS to change how DOD acquires and fields space capabilities to support the warfighter, this report discusses to what extent DOD (1) is developing ORS to support warfighter requirements and (2) has a plan that integrates ORS into existing DOD and intelligence community processes and architecture. GAO reviewed and analyzed ORS planning documents, the ORS concept of operations, and processes for meeting warfighter needs and also interviewed defense and intelligence community officials who are involved with the ORS concept.

DOD is making some progress in developing the ORS concept, but whether it will meet warfighter requirements is unclear, principally because the concept is in the early stages of development and not commonly understood by all members of the warfighter and national security space communities. Our prior work examining successful organizational transformations shows the need to communicate to stakeholders often and early and to clearly define specific objectives. Since the Joint ORS Office was established in May 2007, it has developed a process for converting warfighter needs into formal requirements and identifying potential ORS solutions. Moreover, DOD issued the ORS Implementation Plan in April 2008 and is also developing new ORS guidance documents. However, GAO found disparity in stakeholder understanding of the ORS concept within the warfighter and national security space communities. This disparity exists because DOD has not clearly defined key elements of the ORS concept and has not effectively communicated the concept to key stakeholders. For example, initial ORS planning documents are broad and lack the specificity needed to guide the ORS concept, according to some members of the warfighter and national security space communities. Moreover, officials from the intelligence community were concerned about DOD's lack of consultation and communication with them regarding the ORS concept. Without a well-defined and commonly understood concept, DOD's ability to fully meet warfighter needs may be hampered.

DOD has acknowledged the need to integrate ORS into existing DOD and intelligence community processes and architecture, but it has not fully addressed how it will achieve this integration. The 1999 DOD Space Policy states that an integrated national security space architecture that addresses defense and intelligence missions shall be developed to the maximum extent feasible in order to minimize unnecessary duplication of missions. DOD plans to begin integrating any new processes or systems that are developed for ORS sometime between 2010 and 2015. However, integrating national security space systems can be a complex activity, involving many entities within DOD and the intelligence community. GAO previously reported that DOD's existing intelligence, surveillance, and reconnaissance (ISR) activities already face significant integration challenges, and adding new ORS systems into the existing ISR enterprise will add to the challenges of an already complex environment. 
Given the concept's immaturity, members of the national security space community have raised concerns about how the ORS concept will be integrated with existing DOD and intelligence processes and architecture and about being burdened by an additional requirements process specific to ORS. Nonetheless, as GAO described earlier, DOD is developing a process unique to ORS for submitting ORS warfighter requirements. The complexity of the national security space environment calls for DOD to begin planning the integration of the ORS concept now to help ensure that it avoids the risk of duplicative efforts and wasted resources.
Established by the Intelligence Reform and Terrorism Prevention Act of 2004, the Director of National Intelligence (DNI) serves as head of the intelligence community (IC), acts as the principal advisor to the President and National Security Council on intelligence matters, and oversees and directs the implementation of the National Intelligence Program. The IC comprises 17 different organizations, or IC elements, across the federal government, represented by 6 executive departments. These IC elements include the Office of the Director of National Intelligence (ODNI), seven other civilian IC elements, and nine military IC elements. The eight civilian IC elements within the scope of our review include two intelligence agencies and six intelligence components within five departments (see fig. 1). In this review, we refer to these eight organizations as the civilian IC elements. Appendix III provides additional information on each of the eight civilian IC elements' missions.

Intelligence Community Chief Human Capital Officer (IC CHCO) officials indicated they are responsible for leading the design, development, and execution of human resource strategies, plans, and policies for the IC. In this role, IC CHCO works with both the civilian and military IC elements to collect and maintain information on the use of core contract personnel throughout the IC. Since fiscal year 2007, IC CHCO has compiled an annual core contract personnel inventory to provide information to Congress and others about the IC's use of core contract personnel. This effort was in response to concerns from Congress that the IC relied too heavily on contractors and could not account for the number and costs of contract personnel on an annual basis. The core contract personnel inventory includes information on both the civilian and military IC elements' contracts across more than 10 data fields. IC CHCO uses information from the inventory to develop an annual briefing for Congress, which includes year-to-year changes in the number of and reasons for using core contract personnel across the IC. In addition, since fiscal year 2011, IC CHCO has prepared a statutorily required IC-wide annual personnel level assessment. As part of this assessment, IC CHCO is required, in consultation with all of the IC elements, to report on the current, projected, and prior five fiscal years' number and costs of core contract personnel, as well as present the budget submission for personnel costs for the upcoming fiscal year.

To prepare the inventory, IC CHCO provides guidance and a data call to the IC elements on an annual basis that details how the elements should report information on their core contracts from the previous fiscal year. Civilian IC element officials stated that generally their elements' contracting, program, or finance office, or a combination thereof, collects and reports the information for the data call.

For more than 20 years, Office of Management and Budget (OMB) procurement policy has indicated that agencies should provide a greater degree of scrutiny when contracting for services that can affect the government's decision-making authority. Without proper management and oversight, such services risk inappropriately influencing the government's control over and accountability for decisions that may be supported by contractors' work. The policy therefore directs agencies to ensure that they maintain sufficient government expertise to manage the contracted work. The Federal Acquisition Regulation also addresses the importance of management oversight associated with contractors providing services that have the potential to influence the authority, accountability, and responsibilities of government employees. 
Core contract personnel perform the types of functions that may affect an IC element's decision-making authority or control of its mission and operations. While core contract personnel may perform functions that closely support inherently governmental work, these personnel are generally prohibited from performing inherently governmental functions, which require discretion in applying government authority or value judgments in making decisions and can be performed only by government employees. Figure 2 illustrates how the risk of contractors influencing government decision making increases as core contract personnel perform functions that closely support inherently governmental functions.

OMB has initiated a number of interrelated government-wide efforts that help address the risks of relying on contractors for services that are closely associated with inherently governmental work or critical to an agency's mission (see app. IV for additional information on these OMB policy requirements). Although the IC elements are not required to address certain aspects of the OMB policies, either because of the classified nature of the contracts or because the IC element is a component of an executive department, these efforts provide IC elements with leading practices related to considering and mitigating risks when relying on contractors to perform services that are closely associated with inherently governmental and critical functions. The IC elements are also required to follow applicable IC-wide guidance and federal laws and regulations on the use of contractors. In addition, the departmental elements—the six civilian IC elements that are components within executive departments—must comply with related departmental policies and guidance. For example, DEA NN must comply with federal laws and regulations as well as all applicable OMB, DOJ, DEA, and IC-wide guidance.

OMB's July 2009 guidance and our prior work have emphasized that decisions regarding the use of contractors should be based on strategic workforce planning that considers what types of work are best done by government personnel and what types by contractors. Specifically, agencies should identify the appropriate mix of government and contract personnel on a function-by-function basis, especially for functions that are critical to an agency's mission. The OMB guidance requires an agency to have sufficient internal capability to control its mission and operations when contracting for these critical functions. Our prior work has found that agencies should have overarching strategic-level guidance related to the extent to which contractors should be used, and agencies' strategic workforce planning documents should contain evidence of strategic considerations of contractor use. In May 2013, we found that DOD had not yet assessed the appropriate mix of government and contract personnel in its strategic workforce plans as required by law and, as a result, was hampered in making more informed strategic workforce decisions. We recommended that DOD revise existing workforce policies and procedures to address the determination of the appropriate workforce mix. DOD partially concurred with this recommendation and noted that it had efforts underway to determine the workforce mix. 
OFPP’s September 2011 Policy Letter 11-01 builds on past federal policies on closely supporting inherently governmental functions by including a detailed checklist of responsibilities that must be carried out when agencies rely on contractors to perform services that closely support inherently governmental functions. The policy letter also builds upon past OMB guidance by seeking to broaden agencies’ focus to include critical functions, which can pose a risk if not carefully monitored. The policy letter establishes criteria agencies are to use in identifying their critical functions, which are functions that are necessary to the agency to effectively perform and maintain control of its mission and operations. The policy letter further states that the more important the function, the more important it is that the agency have internal capability to maintain control of its mission and operations. The policy letter requires executive branch departments and agencies to develop and maintain internal procedures to address the requirements of the guidance. Further, OFPP’s November 2010 and December 2011 guidance on service contract inventories and our prior work have emphasized that an inventory of contracted services, if effectively developed and analyzed, can inform an agency’s strategic workforce planning efforts and help identify which contracts may require additional oversight. The inventories can assist an agency in understanding the extent to which contractors are being used to perform activities that closely support inherently governmental work or support the agency’s mission and operations. Civilian agencies and DOD are statutorily required to compile service contract inventories on an annual basis. In September 2012, we found that the civilian agencies did not have good visibility on the number of contractor personnel or their role in supporting agency activities because they had not yet collected these data in their fiscal year 2011 inventories. Additionally, in May 2013, we found that DOD generally continued to have challenges collecting key data in its fiscal year 2011 inventory, which limited the utility, accuracy, and completeness of the inventory data. Specifically, most DOD components, other than the Army, were not able to determine the number of contractor FTEs used to perform each contracted service and were still not able to identify and record more than one type of service purchased for each contracting action entered into the inventory. We made a number of recommendations to help implement and improve both the civilian agencies’ and DOD’s service contract inventories. For example, we recommended that OMB clarify guidance to require agencies to consistently report on the number of contract personnel. OMB generally concurred with our recommendations and agreed to work with the agencies to strengthen their use of service contract inventories by sharing lessons learned and best practices from the initial inventories. Due to limitations in the core contract personnel inventory, we could not accurately determine the extent to which the eight civilian IC elements have used core contract personnel. The inventory contains two data fields—fiscal year obligations and total FTEs—that IC CHCO uses to identify the civilian IC elements’ extent of contractor reliance. IC CHCO used this inventory information to report to Congress that from fiscal year 2009 to 2011, the number of core contract personnel for the civilian IC elements declined by approximately 30 percent. 
However, we identified several issues that limit the comparability, accuracy, and consistency of the information reported by the civilian IC elements as a whole. First, changes to the definition of core contract personnel and data reliability limitations identified by the elements for certain years hinder the ability to use the inventory to make year-to-year comparisons of cost and FTE data. Second, our analysis found that the reported contract costs for the fiscal years 2010 and 2011 inventories were inaccurate or inconsistently determined. Third, elements calculated the number of core contract personnel FTEs differently, affecting the consistency of the information reported. In addition, a lack of readily available documentation limits the civilian IC elements' ability to validate the information reported. Further, IC CHCO did not clearly explain the effect of the limitations when reporting the information to Congress. On an individual basis, some of the limitations we identified may not raise significant concerns. When taken together, however, they undermine the utility of the information for determining and reporting on the extent to which the civilian IC elements use core contract personnel.

Trends in the civilian IC elements' use of core contract personnel from fiscal year 2007 to 2011, in terms of the number of personnel and associated costs, cannot be identified due to changes in the definition of core contract personnel and known data shortcomings. However, IC CHCO used the inventory information to compare IC-wide core contract personnel use from year to year when reporting to Congress. In response to federal statute, IC CHCO prepares an annual personnel assessment that compares the current and projected number and costs of core contract personnel to the number and costs during the prior 5 years. According to IC CHCO, the number of core contract personnel FTEs and associated costs declined nearly one-third from fiscal year 2009 to fiscal year 2011. However, we could not validate the extent to which there was a change in the number of core contract personnel providing support, because we determined that a significant portion of the reported reduction is attributable to definitional changes and improvements to data systems. Further, we assessed the reliability of the civilian IC elements' reported information for the total FTEs and fiscal year obligations data fields and determined that the data were not sufficiently reliable for our purpose of identifying the extent of reliance on core contract personnel (see app. II).

Since IC CHCO's initial data collection efforts for the core contract personnel inventory in fiscal year 2006, it has taken actions to further clarify and refine its guidance to address concerns that IC elements were interpreting the definition of core contract personnel differently and to improve the consistency of the information in the inventory. IC CHCO worked with the elements to develop a standard definition that was formalized with the issuance of Intelligence Community Directive (ICD) 612 in October 2009. Further, IC CHCO formed the IC Core Contract Personnel Inventory Control Board, which has representatives from all of the IC elements, to provide a forum to resolve differences in the interpretation of IC CHCO's guidance for the inventory. As a result of the board's efforts, IC CHCO provided supplemental guidance in fiscal year 2010 to either include or exclude certain contract personnel, such as those performing administrative support, training support, and information technology services. 
For example, the guidance stated that IC elements should include contract personnel who provide training that is unique to the IC mission but exclude those who provide training commercially available through a vendor. IC CHCO officials told us that changes made over time were intended to clarify the definition of core contract personnel and improve the consistency of the information in the inventory. Appendix V summarizes the major changes in the definition reflected in IC CHCO's guidance from fiscal years 2007 through 2011. While these changes could improve the inventory data, it is unclear to what extent the definitional changes contributed to the reported decrease in the number of core contract personnel and associated costs from year to year. For example, for fiscal year 2010, officials from one civilian IC element told us they stopped reporting information technology help desk contractors, which had previously been reported, to be consistent with IC CHCO's revised definition. One of these officials consequently stated that the element's reported reduction in core contract personnel between fiscal years 2009 and 2010 did not reflect an actual change in its use of core contract personnel, but rather a change in how core contract personnel were defined for the purposes of reporting to IC CHCO. The official told us that the element's reported information for fiscal year 2010 was therefore not comparable to data from prior years, in part because of definitional changes.

Further, this civilian IC element identified data reliability limitations in the information reported to IC CHCO for certain fiscal years' inventories, which the element has taken steps to address. However, while IC CHCO noted in the briefing and annual personnel level assessment that this civilian IC element implemented an enhanced contract management system that affected the element's reporting, IC CHCO did not disclose how these improvements affected the ability to compare data across years. For its submission to the fiscal year 2011 inventory, officials from the civilian IC element stated that they used a new contract management system that provided greater clarity about which FTEs and obligations should be included in the inventory and thus improved the reliability of their reported information. These officials acknowledged significant limitations with certain aspects of their reported data prior to fiscal year 2011 due to limitations with the contract management system used for those years. For example, officials told us that prior to the contract system upgrade, the system did not allow them to accurately separate out which obligations and FTEs on certain contracts should be considered core versus non-core. As a result, these officials stated that we should not compare the information reported from fiscal year 2010 to 2011 due to the improvements made in the contract management system. However, IC CHCO included this civilian IC element's data when calculating the IC's overall reduction in the number of core contract personnel between fiscal years 2009 and 2011 in its fiscal year 2011 briefing to Congress. In addition, IC CHCO included these data when comparing the number and costs of core contract personnel between fiscal years 2009 and 2011 in the fiscal year 2013 personnel level assessment. 
OMB guidelines provide that agencies should ensure that disseminated information be reliable, clear, and useful to the intended users. IC CHCO explained in the briefing and personnel level assessment that this civilian IC element's rebaselining had an effect on the element's reported number of contractor personnel for fiscal year 2010. However, IC CHCO did not explain that the rebaselining would limit the comparability of the number and costs of core contract personnel for both this civilian IC element and the IC as a whole, because the element did not adjust the number and costs previously reported.

In addition, another civilian IC element changed its methodology for calculating core contract personnel FTEs over time, which limits the ability to compare this FTE information across certain years. Prior to its submission to the fiscal year 2010 inventory, this element calculated an estimated number of core contract personnel FTEs by applying a certain percentage to the number of contractor FTEs. For its submissions to the fiscal year 2010 and 2011 inventories, the element reported the actual number of core contract personnel FTEs. According to officials from this civilian IC element, because the methodology for calculating the number of core contract personnel FTEs fundamentally changed from fiscal year 2009 to 2010, the data cannot be compared across these years. However, IC CHCO reported and compared these numbers in its annual briefings and personnel level assessments without including information on these changes and any associated limitations. IC CHCO officials stated that they rely on the IC elements to inform them of any methodological changes that would affect the information reported. In addition, IC CHCO officials stated that they identify any major differences between fiscal years and the associated causes. By not fully disclosing the appropriate qualifications for making year-to-year comparisons, the information reported in the briefings and personnel level assessments may not be consistent with leading practices outlined in OMB's guidelines for disseminated information.

The civilian IC elements' core contract personnel costs for fiscal years 2010 and 2011 could not be reliably determined, in part because our analysis identified numerous discrepancies between the amount of obligations reported by the civilian IC elements in the core contract personnel inventory and these elements' supporting documentation for the records we reviewed. We compared the information reported for a sample of 287 records—representing 222 contracts or purchase orders—from the civilian IC elements' submissions for the fiscal years 2010 and 2011 inventories. We found that the civilian IC elements either under- or over-reported the amount of obligations by more than 10 percent for approximately one-fifth of the records. In addition, the civilian IC elements could not provide complete documentation to validate the information reported for 17 percent of the records we reviewed. Overall, we were able to validate the amount of reported obligations for approximately 43 to 77 percent of the records we reviewed at any one element. However, IC CHCO used the core contract personnel inventory information to report fiscal years 2010 and 2011 contract costs for the eight civilian IC elements in our review. Civilian IC element officials identified several issues that may account for the discrepancies between the reported obligations and the documentation provided. 
For example, ODNI officials told us that the system used to report their fiscal year 2010 data had reliability issues, in part because users had to manually enter obligations for certain contracts or manually delete duplicate contracts to avoid double-counting obligations. ODNI officials stated that a new contract management system was used for reporting contract obligations in their submission for the fiscal year 2011 inventory. According to these officials, the new system offers greater detail and improved functionality for identifying the amount of obligations on their contracts. While we observed an improvement in ODNI's reporting of obligations from fiscal year 2010 to 2011, we still identified discrepancies in 18 percent of ODNI's fiscal year 2011 records in our sample. ODNI officials noted that, even with the new system, they manually enter the information into the inventory submission, which may result in data entry errors.

Internal control activities, such as accurate and timely recording of transactions, help provide reasonable assurance of the reliability of reported information. According to federal internal control standards, for an agency to run and control its operations, it must have relevant, reliable information relating to internal events. IC CHCO officials stated that they review the IC elements' submissions for outliers and obvious errors but rely on the elements to ensure the accuracy of the information, in part because IC CHCO does not have the staff resources for more extensive reviews. IC CHCO officials also explained that their role is to provide guidance to the IC elements for reporting the information to the inventory but not to audit the reliability of the information reported. While civilian IC element officials described some steps taken to help ensure the reliability of the information reported, such as reviewing the information reported for outliers or prohibiting changes without prior approval, these internal controls may not be sufficient in light of the challenges we identified. We were unable to determine the full magnitude of obligations not included in the inventory because we did not have a way to identify all contracts not reported by the civilian IC elements. Further, IC CHCO does not disclose this methodology or its effects on the information it reports to Congress.

The number of core contract personnel providing support to the civilian IC elements for fiscal years 2010 and 2011 could not be reliably determined, in part because we found that the eight civilian IC elements used significantly different methodologies when determining the number of FTEs. For example, some civilian IC elements estimated contract personnel FTEs using target labor hours while other civilian IC elements calculated the number of FTEs using the labor hours invoiced by the contractor. As a result, the reported numbers are not comparable across these elements. The IC CHCO core contract personnel inventory guidance for both fiscal years 2010 and 2011 states that full accounting is the preferred method for identifying FTEs, but does not provide additional detail, such as specifying appropriate methodologies for calculating FTEs, requiring IC elements to describe their methodologies, or requiring IC elements to disclose any associated limitations with their methodologies. Depending on the methodology used, an element can calculate a different number of FTEs for the same contract. 
For example, for one contract we reviewed at a civilian IC element that reports FTEs based on actual labor hours invoiced by the contractor, the element reported 16 FTEs for the contract. For the same contract, however, a civilian IC element that uses estimated labor hours at the time of award would have calculated 27 FTEs. As a result, using different methodologies limits the comparability of civilian IC elements' reported numbers and obscures what the information represents in a given fiscal year. IC CHCO officials stated they have discussed standardizing the methodology for calculating the number of FTEs with the IC elements but identified challenges, such as identifying a standard labor-hour conversion factor for one FTE. IC CHCO guidance for fiscal year 2012 instructs elements to provide the total number of direct labor hours worked by the contract personnel to calculate the number of FTEs for each contract, as opposed to allowing for estimates, which could improve the consistency of the FTE information reported across the IC. Since this methodology is different from the methodology used by several civilian IC elements to calculate their number of FTEs in the fiscal year 2010 and 2011 inventories, IC CHCO will be further limited in the extent to which it can compare FTE data across years.

In addition, we found that most of the civilian IC elements did not maintain readily available documentation of the information used to calculate the number of FTEs reported for a significant number of the records we reviewed. As a result, these elements could not easily replicate the calculations or validate the reliability of the information reported for these records. Federal internal control standards call for appropriate documentation to help ensure the reliability of the information reported. For 37 percent of the 287 records we reviewed, we could not determine the reliability of the information reported. Two of the civilian IC elements were able to provide documentation to support the number of FTEs reported for almost all of the records we reviewed, but the other civilian IC elements experienced challenges in providing documentation to varying degrees. For example, officials from one civilian IC element explained that they did not document how they calculated the number of core contract personnel FTEs at the time of reporting. As a result, these officials stated that it would be very time-consuming to replicate the process for making these calculations and that, for contracts with higher numbers of contract personnel, it could take months to recreate the methodology used. In addition, another civilian IC element had challenges providing documentation for certain records, in part because some contracts included in the inventory are fixed-price contracts for which it does not negotiate or have insight into the number of FTEs. While IC CHCO does not require the IC elements to maintain documentation of their calculations, without complete documentation, elements cannot ensure the reliability of the information reported in their submissions or may not be able to replicate the methodology used to report the number of FTEs for their contracts. However, IC CHCO aggregates and compares the FTE data across the civilian IC elements when reporting to Congress and has not disclosed in its briefings or personnel level assessments that the FTEs, reported collectively or by element, reflect various definitions and methods of counting contract personnel. 
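To illustrate how the methodology differences described above can drive the reported counts, the following minimal sketch shows a simple labor-hours-to-FTE conversion. It is purely illustrative and not drawn from IC CHCO guidance: the conversion factor of 1,880 hours per FTE and the hour totals are assumed values chosen so that the two approaches roughly reproduce the 16-versus-27 FTE example above, and, as noted, IC CHCO has not standardized a labor-hour conversion factor.

```python
# Illustrative sketch only: the conversion factor and hour totals below are
# assumptions for demonstration, not figures from IC CHCO guidance.

HOURS_PER_FTE = 1_880  # assumed annual labor hours representing one FTE


def fte_from_hours(labor_hours: float, hours_per_fte: float = HOURS_PER_FTE) -> float:
    """Convert total contract labor hours into full-time equivalents."""
    return labor_hours / hours_per_fte


# One element counts only the hours actually invoiced by the contractor...
invoiced_hours = 30_000
# ...while another uses the labor hours estimated at the time of award.
estimated_hours_at_award = 50_000

print(round(fte_from_hours(invoiced_hours)))            # about 16 FTEs
print(round(fte_from_hours(estimated_hours_at_award)))  # about 27 FTEs
```

As the sketch shows, the divergence comes entirely from which hour total an element chooses to count; even with a standardized conversion factor, counts remain incomparable unless elements also agree on whether to use invoiced or estimated hours.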
IC CHCO and civilian IC element officials further identified several challenges related to the elements' preparation of their inventory submissions. Officials at several civilian IC elements stated that they experienced turnover in the staff who prepared their submissions over the years. As a result, these officials were unable to explain the methodology used by staff to report the information for certain submissions in prior years. IC CHCO officials stated that they frequently have to work with new staff at the elements to help them understand the reporting requirements because the elements did not have documentation of how prior staff reported certain information. In addition, at a large civilian IC element, many contracting and program officials can be involved in preparing the element's submission, making it difficult to ensure consistency in reporting.

The civilian IC elements have used core contract personnel to perform a range of functions, such as human capital, information technology, and program management, and have reported in the core contract personnel inventory the reasons for using contractors for such functions. Due to limitations in the core contract personnel inventory, the number of core contract personnel performing these functions in support of the civilian IC elements and the reasons for their use cannot be reliably determined. We found that, for the contracts we reviewed, the civilian IC elements generally reported reliable information in the inventory on functions performed by contractors by selecting from one of over 20 broad categories. However, the limitations we identified in the inventory's obligation and FTE data preclude the information on contractor functions from being used to determine the extent to which civilian IC elements contracted for each function. Further, in the inventory, the civilian IC elements provided information on their reasons for using core contract personnel, such as the need for unique expertise, but our analysis found that 40 percent of the contracts in our sample did not contain evidence of the reasons reported. As a result, we could not corroborate the information reported in the inventory on the reasons for using core contract personnel. Moreover, the most widely cited reason in the sample of contracts we reviewed does not describe why a civilian IC element contracted for a service but rather describes the nature of the contract.

As part of the core contract personnel inventory, IC CHCO collects information from the elements on contractor-performed functions using the primary contractor occupation and competency expertise data field. An IC CHCO official explained that this data field should reflect the tasks performed by the contract personnel. IC CHCO's guidance for this data field instructs the IC elements to select one option from a list of over 20 broad categories of functions for each contract entry in the inventory. Based on our review of relevant contract documents, such as statements of work, we were able to verify the primary contractor occupation and competency expertise reported for almost all of the records we reviewed. Using the primary contractor occupation and competency expertise data field in the core contract personnel inventory, the civilian IC elements reported functions performed such as human capital, information technology, program management, administration, collection and operations, and security services, among others. 
While we could verify the categories of functions performed for the contracts we reviewed, we could not determine the extent to which civilian IC elements contracted for these functions. Limitations we identified in the obligation and FTE data reported in the inventory precluded us from using the information on contractor functions to determine the number of personnel and the costs associated with each function category. For example, we were able to verify for one State INR contract that contract personnel performed functions within the systems engineering category, but we could not determine the number of personnel dedicated to that function because of unreliable obligation and FTE data. In addition, IC CHCO provides information on contractor functions in its reports and briefings to Congress. However, it does not include the information it collects through the inventory's primary contractor occupation and competency expertise data field. IC CHCO instead uses the budget category data field in the inventory as its source for information on functions performed by core contract personnel, citing a desire for information provided to Congress to align with the budget request. The budget category data field, however, reflects a contract's funding source rather than the functions performed by personnel working under these contracts. By using budget category information as a proxy for contractor functions, IC CHCO does not adhere to leading practices outlined in OMB guidelines for disseminated information. OMB guidelines provide that agencies should ensure that disseminated information be accurate, clear, and useful to the intended users. IC CHCO and civilian IC element officials acknowledged that the budget category is not the best representation of the functions performed by contractors. For example, we found that contracts from one civilian IC element that were reported as collection and operations for the budget category, as required by IC CHCO guidance, included services such as policy and program development support, information technology, and administration.

The reasons that the civilian IC elements use core contract personnel could not be reliably determined from the core contract personnel inventory information due to a lack of documentation to corroborate the reasons reported in the inventory. In preparing their inventory submissions, IC elements can select one of eight response options for the reason data field (see table 1). However, we could not verify the information reported by the civilian IC elements in the inventory due to a lack of corroborating documentation. For 81 of the 102 records in our sample coded as unique expertise, we did not find evidence in the statements of work or other contract documents that the functions performed by the contractors required expertise not otherwise available from U.S. government civilian or military personnel. For example, ODNI contracts coded as unique expertise included services for conducting workshops and analysis, producing financial statements, and providing program management. Based on inventory submissions by both the civilian and military IC elements, IC CHCO reported to Congress that for fiscal year 2011, 57 percent of the core contract personnel FTEs were contracted for their unique expertise. Further, we found that the most widely used reason response option among the records we reviewed—specified service—does not provide insight into the civilian IC elements' reasons for using core contract personnel. 
Instead, this response option describes the nature of the contract. Of the 287 records we reviewed, civilian IC elements selected specified service as the reason for the contract for 45 percent of those records. For example, officials from one civilian IC element stated that they selected specified service for all of their contracts in the fiscal years 2010 and 2011 inventories because, in accordance with the definition, they were buying services. However, these officials also cited the need for contractors due to personnel restrictions and budgetary considerations, which could correspond to the insufficient staffing resources response option. Civilian IC element officials noted that the reasons for contractor use reported in the inventory are subjective and based on the knowledge of the contracting or program official at the time of reporting. IC CHCO does not require elements to maintain supporting documentation for their contract reason codes. As a result, the civilian IC elements could not provide documentation for 40 percent of the records we reviewed. Additionally, an official from one civilian IC element told us that there was confusion among the program offices responsible for determining the reason code as to the specific meaning of certain response options. Civilian IC element officials stated that multiple reasons could apply to a given contract, but they can select only one option for the purposes of the inventory. For example, while officials from one civilian IC element stated that many of their contractors are brought on board for their institutional knowledge and skills, this element's inventory data do not reflect the transfer of institutional knowledge reason code for any of its reported contracts. Most of this element's contracts were coded as unique expertise, more efficient or effective, and specified service, which were supported by the contract documents. Given the subjectivity of the coding, the ability to select only one response, and the absence of a requirement to maintain supporting documentation, the reasons identified in the civilian IC elements' inventory submissions do not fully reflect why they use core contract personnel.

CIA, ODNI, and the executive departments, which are responsible for developing policies to address risks related to contractors for the other six civilian IC elements within those departments, have generally made limited progress in developing such policies. Further, the eight civilian IC elements have generally not developed strategic workforce plans that address contractor use. While DHS and State have issued policies and guidance that generally address all of OFPP Policy Letter 11-01's requirements related to contracting for services that closely support inherently governmental functions, the other departments, CIA, and ODNI are in various stages of developing required internal policies to address the policy letter. In addition, the civilian IC elements' decisions to use contractors are generally not informed by strategic workforce plans or other strategic-level guidance on the appropriate mix of government and contract personnel for functions that are critical to elements' missions. 
The civilian IC elements’ ability to use the core contract personnel inventory as a strategic workforce planning tool is hindered because the inventory does not provide these elements insight into the functions performed by contractors or the extent to which contractors are performing functions that closely support inherently governmental functions or are critical. Without guidance, strategies, and tools related to services that closely support inherently governmental functions and critical functions, the civilian IC elements may not be well-positioned to identify and manage the related risks of contracting for those functions. OFPP Policy Letter 11-01’s requirements related to contracting for services that closely support inherently governmental functions include giving special consideration to using federal employees to perform these functions, and if contractors are used to perform such work, giving special management attention to contractors’ activities. The policy letter includes a checklist of responsibilities that must be carried out when agencies rely on contractors to perform these functions and requires agencies to develop and maintain internal procedures to address the requirements of the guidance. OFPP, however, did not establish a deadline for when agencies need to complete these procedures. In 2011, we concluded that a deadline may help better focus agency efforts to address risks and therefore recommended that OFPP establish a near-term deadline for agencies to develop internal procedures, including for services that closely support inherently governmental functions. OFPP generally concurred with our recommendation and commented that it would likely establish time frames for agencies to develop the required internal procedures, but it has not yet done so. We assessed the extent to which CIA, ODNI, and the executive departments of the other civilian IC elements—DHS, DOE, DOJ, State, and Treasury—developed internal procedures to address the policy letter because the civilian IC elements within departments are not required to develop their own procedures to address the policy letter. The departmental civilian IC elements are subject to policies and guidance at the department level for considering and managing risks related to contracting for services that closely support inherently governmental functions. Our analysis found that DHS and State have issued policies and guidance that generally address all of these requirements, but CIA, ODNI, and the other three departments have not fully developed policies to do so. Civilian IC element and department officials cited various reasons for not yet developing policies to address all of the OFPP policy letter’s requirements. For example, Treasury officials stated that the OFPP policy letter called for dramatic changes in agency procedures and thus elected to conduct a number of pilots before making policy changes. DOE officials stated that they are waiting for revisions to the Federal Acquisition Regulation, which would incorporate the OFPP policy letter’s requirements, before reviewing and updating their acquisition policies as necessary. OMB’s July 2009 memorandum on managing the multisector workforce and our prior work on best practices in strategic human capital management have indicated that agencies’ strategic workforce plans should address the extent to which it is appropriate to use contractors. 
The civilian IC elements' current strategic workforce plans, however, generally do not address the extent to which it is appropriate to use contractors, either in general or more specifically to perform critical functions, as called for in the OMB guidance. For example, ODNI's 2012-2017 strategic human capital plan outlines the current mix of government and contract personnel by five broad function types: core mission, enablers, leadership, oversight, and other. The plan, however, does not elaborate on what the appropriate mix of government and contract personnel should be on a function-by-function basis. The plan also discusses efforts to reduce the number of core contract personnel but does not elaborate on particular functions to target. In August 2013, ODNI officials informed us they are continuing to develop documentation to address a workforce plan. OMB's July 2009 memorandum (M-09-26, July 29, 2009) calls for agencies to determine the mix of work that should be done by government personnel and contract personnel based on program goals, priorities, and associated human capital needs. While IC CHCO requires IC elements to conduct strategic workforce planning and prepare a human capital employment plan, neither effort requires the elements to determine the appropriate mix of personnel either generally or on a function-by-function basis. ICD 612 directs IC elements to determine, review, and evaluate the number and uses of core contract personnel when conducting strategic workforce planning but does not reference the requirements related to determining the appropriate workforce mix specified in OMB's July 2009 memorandum or require elements to document the extent to which contractors should be used. IC CHCO also required IC elements to submit a 2012-2016 human capital employment plan, which was to include information on the current workforce mix and expected changes as well as information on elements' efforts to examine the mix of government and contract personnel, as appropriate. One IC CHCO official, however, explained that some IC elements' strategic workforce planning efforts are more robust than others, so the level of detail and information provided in the plans vary widely across the IC elements. Nevertheless, irrespective of an agency's size, OMB's guidance on managing the multisector workforce notes that agencies that have a strategic understanding of their current and appropriate mix of personnel for each function are better positioned to build and sustain the internal capacity necessary to maintain control over their missions and operations. OFPP's November 2010 memorandum on service contract inventories indicates that a service contract inventory is a tool that can assist an agency in conducting strategic workforce planning. Specifically, an agency can gain insight into the extent to which contractors are being used to perform specific services by analyzing how contracted resources, such as contract obligations and FTEs, are distributed by function across an agency. The memorandum further indicates that this insight is especially important for contracts whose performance may involve critical functions or functions closely associated with inherently governmental functions. OFPP officials stated that the IC's core contract personnel inventory serves this purpose for the IC and, to some extent, follows the intent of the service contract inventories guidance to help mitigate risks.
OFPP officials stated that IC elements are not required to submit separate service contract inventories that are required of the civilian agencies and DOD, in part because of the classified nature of some of the contracts. The core contract personnel inventory, however, does not provide the civilian IC elements with detailed insight into the functions their contractors are performing or the extent to which contractors are used to perform functions that support their missions and closely support inherently governmental work. Without complete and accurate information in the core contract personnel inventory on the extent to which contractors are performing specific functions, the civilian IC elements may be missing an opportunity to leverage the inventory as a tool for conducting strategic workforce planning and for prioritizing contracts that may require increased management attention and oversight. We found that the data reported by the civilian IC elements in the primary contractor occupation and competency expertise data field accurately reflect the broad categories of contracted functions for each contract, but these data do not provide detailed information on the functions performed by contractors. Based on the contract documents we reviewed, such as statements of work, we identified at least 128 instances in the 287 records we reviewed in which the primary contractor occupation and competency expertise data field did not reflect the full range of services listed in the contracts. This was due in part to IC CHCO's guidance, which instructs the elements to select only one service from the list of multiple response options for each contract entry in the inventory. An IC CHCO official explained that elements are instructed to select the predominant type of service provided by the contract given that elements are not able to record more than one type of service purchased for each contract. The civilian IC element officials acknowledged that the primary contractor occupation and competency expertise coding is not fully reflective of the services the contractors are performing. IC CHCO's guidance, including ICD 612 and core contract personnel inventory guidance, does not require the elements to review all of their contracts, including classified contracts, to ensure that they identify and manage risks related to contracts for services that closely support inherently governmental functions or are critical functions. In contrast, the civilian executive agencies are statutorily required to compile an annual service contract inventory, and as part of the inventory review process, agencies are required to ensure that they are not using contract personnel to perform critical functions in such a way that could affect the ability of the agency to maintain control of its mission and operations and to give special management attention to functions that closely support inherently governmental functions. However, certain civilian IC elements' contracts, along with classified contracts at the civilian IC elements, are excluded from the civilian agencies' service contract inventories. For those elements with contracts that are excluded from the civilian agencies' service contract inventories, identifying which contracts contain these types of functions in the core contract personnel inventory could help target agencies' efforts to provide enhanced management attention. The eight civilian IC elements, like other federal agencies, have long relied on contractors to support their missions.
In fiscal year 2006, IC CHCO initiated data collection efforts for the core contract personnel inventory to collect information from elements on their use of these personnel and to report to Congress on the number of core contract personnel and their associated costs. IC CHCO and the civilian IC elements have taken and continue to take steps to improve the reliability of the reported information, such as standardizing how FTEs will be calculated for the fiscal year 2012 inventory. These are positive steps. Nevertheless, we identified several limitations, including definitional changes, inaccurate data, methodological differences, and poor documentation, that collectively undermine the utility of the information for determining the extent to which the civilian IC elements rely on core contract personnel. As a result, the IC CHCO cannot reliably report on statutorily required information comparing the number and cost of core contract personnel over time. By enhancing their internal controls, the civilian IC elements can help ensure that the data being reported to Congress are as accurate and complete as possible and consistent with OMB guidelines. Further, inherent limitations or changes in definitions or methodologies, including those intended to improve the data, can affect data accuracy, completeness, and comparability. Not fully disclosing these limitations and the effects of these changes limits the transparency and usefulness of the information reported to Congress. Within the IC, core contract personnel perform functions that could influence the direction and control of key aspects of the U.S. intelligence mission, such as intelligence analysis and operations. Our prior work and OMB policies have underscored the importance of agencies having guidance, strategies, and reliable data to inform decisions related to the appropriate use of contractor personnel. Building on longstanding OMB policy, OFPP’s September 2011 guidance requires agencies to develop internal procedures to identify and oversee contractors providing services that closely support inherently governmental functions. Yet, of the agencies we reviewed, ODNI, CIA, DOJ, DOE, and Treasury have not fully developed such procedures or established required time frames for doing so. Without these procedures in place, ODNI, CIA, and the civilian IC elements within these three departments risk not taking appropriate steps to manage and oversee contract personnel, particularly those performing work that could influence government decision making. In an effort to help manage the use of contractor personnel within the IC, elements are required by ICD 612 to conduct strategic workforce planning related to their use of core contract personnel. However, ICD 612 falls short of OMB’s July 2009 memorandum on managing the multisector workforce by not requiring the elements to document their assessment of the appropriate use of contractors or the appropriate mix of government and contractor personnel on a function-by-function basis. One tool identified by OFPP that can help agencies plan for the use of contract personnel and mitigate associated risks is a service contract inventory, which for the IC is the annual core contract personnel inventory. Yet, as it is currently structured, the core contract personnel inventory is limited in its ability to be an effective tool for doing so. 
As a result, the civilian IC elements cannot use the inventory to identify those services that require increased management attention under OFPP's September 2011 guidance. Additionally, ICD 612 and other IC CHCO guidance do not require elements to identify in the inventory those contracts that provide critical services or those that closely support inherently governmental functions. Consequently, the civilian IC elements or their respective departments we reviewed are not well-positioned to assess the potential effects of relying on contractor personnel who perform such functions. To improve congressional oversight and enhance civilian IC elements' insights into their use of core contract personnel, we recommend that IC CHCO take the following two actions: (1) when reporting to congressional committees, clearly specify limitations and significant methodological changes and their associated effects; and (2) in coordination with the IC elements, develop a plan to enhance internal controls for compiling the annual core contract personnel inventory. Such a plan could include requiring IC elements to document their methodologies for determining the number and costs of core contract personnel and the steps the elements took for ensuring data accuracy and completeness. To improve civilian IC elements' or their respective departments' ability to mitigate risks associated with the use of contractors, we recommend the Director of National Intelligence, Director of the Central Intelligence Agency, Attorney General of the United States, and Secretaries of Energy and the Treasury direct responsible agency officials to set time frames to develop guidance that fully addresses OFPP Policy Letter 11-01's requirements related to services that closely support inherently governmental functions. To improve the ability of the civilian IC elements to strategically plan for their contractors and mitigate associated risks, we recommend that IC CHCO take the following three actions: (1) revise ICD 612's provisions governing strategic workforce planning to require the IC elements to identify their assessment of the appropriate workforce mix on a function-by-function basis; (2) assess options for how the core contract personnel inventory could be modified to provide better insights into the functions performed by contractors when there are multiple services provided under a contract; and (3) require the IC elements to identify contracts within the core contract personnel inventory that include services that are critical or closely support inherently governmental functions. We provided a draft of our September 2013 classified report to CIA, DHS, DOE, DOJ, ODNI, State, and Treasury for review and comment. We received written comments from ODNI, which are reprinted in appendix VI, as well as technical comments that we incorporated into the draft as appropriate. In its written comments, ODNI generally agreed with the six recommendations that we directed to it. With regard to our first recommendation to clearly specify limitations and significant methodological changes and their associated effects when reporting on the IC's use of core contract personnel, ODNI agreed that IC CHCO will highlight all adjustments to the data over time and the implications of those adjustments in future briefings to Congress and OMB. Similarly, ODNI agreed with our second recommendation to develop a plan to enhance internal controls for compiling the annual core contract personnel inventory.
ODNI stated that IC CHCO, in coordination with the IC Chief Financial Office, has added requirements for the IC elements to include the methodologies used to identify and count the number of core contract personnel and their steps for ensuring the accuracy and completeness of the data. ODNI further stated that IC CHCO intends to request the methodologies used by the IC elements for the fiscal year 2014 budget data call, which includes the core contract personnel inventory. In commenting on our third recommendation, ODNI proposed that when ICD 612 is revised, IC CHCO will request notification on the mechanism by which each IC element adheres to OFPP Policy Letter 11-01. We believe IC CHCO's proposal to monitor the IC elements' implementation of OFPP Policy Letter 11-01 can help improve policies and guidance to mitigate the risks associated with using contractors across the IC. ODNI, however, did not directly address whether it will set time frames to develop guidance for use within ODNI to fully address OFPP Policy Letter 11-01's requirements. We continue to believe that ODNI, as an IC element, should set time frames to develop its own guidance that fully addresses the OFPP policy letter. With regard to our fourth recommendation to revise ICD 612's provisions governing strategic workforce planning, ODNI stated that IC CHCO has recognized the need to update ICD 612 and will work to determine the most appropriate mechanism to identify the functions performed within a contract. ODNI further noted that IC CHCO proposed that IC elements be responsible for ensuring they are addressing the appropriate workforce mix when conducting workforce planning rather than requiring this information as part of core contract personnel inventory data collection efforts. We believe that ODNI's comments are consistent with our recommendation. Regarding our fifth recommendation to assess options for how the core contract personnel inventory could be modified to provide better insights into the functions performed by contractors when multiple services are provided under a contract, ODNI stated that IC CHCO will examine the requirement to provide insights into all functions under a contract to determine if there is a need to modify the inventory to capture that level of information. As we note in our report, having better insight into contractor functions through the core contract personnel inventory can help the civilian IC elements conduct strategic workforce planning and prioritize contracts that may require increased management attention and oversight. For our sixth recommendation to require the IC elements to identify contracts within the core contract personnel inventory that include critical services or those closely supporting inherently governmental functions, ODNI stated it will explore doing so. ODNI noted in its comments that the definition of core contract personnel in ICD 612 is already aligned with OFPP Policy Letter 11-01's definition of contract personnel who perform services that closely support inherently governmental functions. As we note in our report, however, not all core contract personnel perform functions that closely support inherently governmental functions, and therefore not all need the enhanced management oversight required by OFPP. Further, the definition of core contract personnel does not identify those functions that are critical to an agency's mission.
OFPP Policy Letter 11-01 requires agencies to take different steps to manage the risks related to contractors performing critical functions, such as ensuring government personnel perform or manage these functions to the extent necessary to maintain control of their missions and operations. Clearly identifying which contracts within the core contract personnel inventory include services that closely support inherently governmental functions as well as those that include critical functions will better position the civilian IC elements to assess the potential effects of relying on contract personnel who perform such functions and take any necessary actions to mitigate risks. CIA, DOE, DOJ, and Treasury did not comment on our recommendation to them but generally provided technical comments that we incorporated into the draft as appropriate. DHS and State also provided technical comments which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Attorney General of the United States; the Director of National Intelligence; the Director of the Central Intelligence Agency; and the Secretaries of Energy, Homeland Security, State, and the Treasury. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix VII. The objectives of this review were to determine (1) the extent to which the civilian intelligence community (IC) elements rely on core contract personnel; (2) the functions performed by core contract personnel and the factors that contribute to their use; and (3) whether the civilian IC elements have developed policies and guidance and strategically planned for their use of these contract personnel to mitigate related risks. The eight civilian IC elements covered by our review are the Central Intelligence Agency (CIA), the Department of Energy’s Office of Intelligence and Counterintelligence (DOE IN), Department of Homeland Security’s Office of Intelligence and Analysis (DHS I&A), Department of State’s Bureau of Intelligence and Research (State INR), Department of the Treasury’s Office of Intelligence and Analysis (Treasury OIA), Drug Enforcement Administration’s Office of National Security Intelligence (DEA NN), Federal Bureau of Investigation (FBI), and Office of the Director of National Intelligence (ODNI). To address our first and second objectives, we examined the civilian IC elements’ submissions to the fiscal years 2007 to 2011 core contract personnel inventories, when available. The submissions contain information on the elements’ core contracts from over 10 data fields, which vary by fiscal year. 
For the purposes of answering the first and second objectives, we focused on five data fields related to the elements’ extent of reliance on core contract personnel, the functions performed by these contract personnel, and the factors that contributed to their use: fiscal year obligations, total full-time equivalents (FTE), primary contractor occupation and competency expertise, budget category, and reason code. We were not able to assess the reliability of the information reported for these data fields in the elements’ submissions to the fiscal years 2007 to 2009 inventories for various reasons, such as elements not having records of their submissions for certain years. In addition, the information reported for the fiscal year obligations, total FTEs, budget category, and reason code data fields in the elements’ submissions to the fiscal years 2010 to 2011 inventories was not sufficiently reliable for our intended purposes of determining the civilian IC elements’ extent of reliance on core contract personnel, the functions performed by these personnel, or the factors that contribute to their use. We present this information with the associated limitations in the report where appropriate. Although we identified some limitations, the primary contractor occupation and competency expertise data field was sufficiently reliable for identifying the general types of functions performed. However, because neither the fiscal year obligations nor total FTEs data field was sufficiently reliable, we could not determine the extent to which the civilian IC elements use contract personnel to perform certain functions based on the primary contractor occupation and competency expertise data field. Appendix II contains a more detailed discussion of our sampling methodology and data reliability assessment. We also reviewed the Intelligence Community Chief Human Capital Officer’s (IC CHCO) annual guidance to elements for preparing their submissions to the fiscal years 2007 to 2012 inventories and information reported in annual core contract personnel inventory briefings and personnel level assessments provided to Congress. We interviewed officials at IC CHCO and the eight civilian IC elements on the processes for collecting and reporting their information for the inventory. To address our third objective, we compared the civilian IC elements’ or their respective departments’ relevant guidance, planning documents, and tools related to their use of contractors to Office of Management and Budget (OMB) guidance that address risks related to relying on contractors. We reviewed Office of Federal Procurement Policy (OFPP) Policy Letter 11-01 and compared the policy letter’s requirements addressing contracting for closely supporting inherently governmental functions to civilian IC elements’ or their respective departments’ acquisition policies and guidance to determine the extent to which the requirements were met. We reviewed OMB’s July 2009 Memorandum on Managing the Multisector Workforce and GAO’s prior work on strategic human capital best practices and compared the leading practices identified to the strategic workforce planning requirement in Intelligence Community Directive (ICD) 612 and civilian IC elements’ strategic human capital or other workforce plans to determine the extent to which the leading practices were implemented. 
We reviewed the leading practices identified in OMB's November 2010 and December 2011 memoranda on service contract inventories and the civilian IC elements' data on functions performed by contractors. We reviewed IC CHCO guidance on core contract personnel to determine the extent to which it addressed services that closely support inherently governmental functions and critical functions. We also interviewed human capital, procurement, or program officials at each civilian IC element to discuss each element's or its respective department's ongoing efforts related to strategic planning and developing policies to mitigate risks. ODNI, in consultation with the other civilian IC elements, deemed some of the information in the September 2013 report to be classified, and that information must be protected from public disclosure. Therefore, this report omits sensitive information about (1) the number and associated costs of government and core contract personnel and some details on how the civilian IC elements prepare the core contract personnel inventory, (2) specific contracts from civilian IC elements we reviewed, and (3) details related to the civilian IC elements' or their respective departments' progress in developing policies to mitigate risks related to contractors and the civilian IC elements' strategic workforce planning efforts. We conducted this performance audit from November 2012 to September 2013 in accordance with generally accepted government auditing standards. We subsequently worked with ODNI from September 2013 to December 2013 to prepare an unclassified version of this report for public release. Government auditing standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted an analysis to determine whether the eight civilian intelligence community (IC) elements' submissions to the fiscal years 2007 to 2011 core contract personnel inventories were sufficiently reliable for the purpose of identifying the extent to which these elements have relied on core contract personnel, the functions performed by these contract personnel, and the factors that contributed to their use. We examined data fields from the submissions related to these purposes, including the amount of obligations (fiscal year obligations), the number of contractor full-time equivalents (total FTEs), the types of functions performed by the contract personnel (primary contractor occupation and competency expertise), the type of funding used for the contract (budget category), and the reason for using contract personnel to perform a service (reason code). In addition, we reviewed the Intelligence Community Chief Human Capital Officer's (IC CHCO) guidance to the elements for preparing their submissions and interviewed civilian IC element officials on their processes for compiling and reporting the information. We could not determine the reliability of the information reported for these data fields in the elements' submissions to the fiscal years 2007 to 2009 inventories. In addition, we identified several concerns with the reliability of the information reported for the fiscal year obligations, total FTEs, budget category, and reason code data fields in the civilian IC elements' submissions to the fiscal years 2010 and 2011 inventories.
As a result, we determined that the data in these submissions were not sufficiently reliable for the purposes of our review. Although we identified some limitations, the primary contractor occupation and competency expertise data field was sufficiently reliable for identifying the broad types of functions performed. However, because neither the fiscal year obligations nor total FTEs data field was sufficiently reliable, we could not determine the extent to which the civilian IC elements use contract personnel to perform certain functions based on the primary contractor occupation and competency expertise data field. To determine the extent to which the civilian IC elements relied on core contract personnel, the functions performed by these contract personnel, and the factors that contributed to their use, we examined data from the eight civilian IC elements' submissions to the core contract personnel inventory: fiscal year obligations, total FTEs, budget category, reason code, and primary contractor occupation and competency expertise. We planned to examine these five data fields for the civilian IC elements' submissions to the fiscal years 2007 to 2011 inventories. We chose to assess the submissions for fiscal years 2007 to 2011 because IC CHCO published the first inventory in fiscal year 2007, and the fiscal year 2011 inventory was the most recent data available at the time we started our review. In addition, we chose to analyze data for these five fields because they were related to our audit objectives. We planned to review the civilian IC elements' submissions to the fiscal years 2007 to 2011 core contract personnel inventories. However, we could not determine the reliability of their submissions for the fiscal years 2007 to 2009 inventories for various reasons. For all but one of these elements, we were unable to assess at least one year of data because (1) element officials told us they did not have records of the data they submitted to IC CHCO, (2) element officials told us they had specific concerns about the reliability of data reported in certain fiscal years that would make it difficult for us to verify the data, or (3) obtaining relevant documentation would require an unreasonable amount of time. As a result, we assessed the reliability of the five data fields from the elements' submissions to the fiscal years 2010 and 2011 inventories because we could assess the data for at least a majority of the elements for these years. To determine whether the five data fields from the civilian IC elements' submissions to the fiscal years 2010 and 2011 inventories would be reliable for the purpose of our review, we interviewed IC CHCO and civilian IC element officials knowledgeable about the processes for compiling and reporting the information. In addition, we reviewed IC CHCO's guidance to elements for preparing their submissions. We also assessed the accuracy, consistency, and completeness of the data in the submissions by analyzing the five data fields from the civilian IC elements' submissions. We compared the information reported to information in relevant documentation for a sample of 287 records—representing 222 contracts or purchase orders. For elements that reported 30 or fewer records in either fiscal year, we reviewed data for all reported records for both fiscal years. For elements that reported more than 30 records in either fiscal year, we selected a random, nongeneralizable sample of records from their submissions.
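To make the sampling rule above concrete, the following is a minimal sketch of how such a selection could be carried out; the element names, record counts, and the sample size for larger elements are invented for illustration and are not drawn from the report.

```python
import random

# Hypothetical record counts; the real submissions are classified, so these
# values are purely illustrative.
records_by_element = {
    "Element A": [f"A-{i}" for i in range(1, 26)],    # 25 records: review all
    "Element B": [f"B-{i}" for i in range(1, 301)],   # 300 records: random sample
}

ASSUMED_SAMPLE_SIZE = 45  # the report does not state the sample size used


def select_records(records, threshold=30, sample_size=ASSUMED_SAMPLE_SIZE, seed=0):
    """Review every record when an element reported 30 or fewer records;
    otherwise draw a random, nongeneralizable sample."""
    if len(records) <= threshold:
        return list(records)
    rng = random.Random(seed)
    return rng.sample(records, k=min(sample_size, len(records)))


for element, records in records_by_element.items():
    selected = select_records(records)
    print(f"{element}: reviewing {len(selected)} of {len(records)} records")
```

Because the larger elements are sampled rather than reviewed in full, the resulting determinations describe only the records reviewed and cannot be generalized to each element's entire submission.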
We reviewed relevant documents to determine whether they validated the information reported in the civilian IC elements’ submissions to the fiscal year 2010 and 2011 core contract personnel inventories for the five data fields for each record in the sample. Table 2 below summarizes our criteria for making these determinations for each data field. After our initial review of the documents, we provided the civilian IC elements with an overview of our determinations that indicated whether the documentation validated the information reported. In addition, we offered these elements an opportunity to provide additional documentation for records in which we identified discrepancies with the documents or lacked sufficient information to validate the reported data. In the instances in which elements provided additional documentation, we reviewed the documents and made adjustments to our determinations, as appropriate. We made the following determinations of whether the information reported for the five data fields was sufficiently reliable for our intended purposes. We determined that the information reported for the fiscal year obligations data field was not sufficiently reliable for our intended purpose of identifying the eight civilian IC elements’ extent of reliance on core contract personnel for several reasons. First, we could only validate the amount of obligations reported for approximately 62 percent of the records we reviewed. For an additional 21 percent of the records we reviewed, the civilian IC elements either under- or over-reported the amount of obligations by more than 10 percent. Second, we identified inconsistencies between civilian IC elements’ methodologies for reporting the amount of obligations. Officials from six of the civilian IC elements stated that they reported the amount of obligations on core contracts that are active at any point within a given fiscal year while officials from two of the civilian IC elements told us they do not report the amount of obligations on certain contracts if they are not active on the date of reporting. As a result, the information reported cannot be compared across the eight civilian IC elements. Lastly, the amount of obligations reported for two of the civilian IC elements does not reflect all of the obligations in a given fiscal year. First, these elements’ methodology would exclude certain obligations on core contracts that are active at some point during a fiscal year but not on the date of reporting. We were unable to determine the magnitude of obligations not included in the inventory because we did not have a way to identify contracts not reported by the civilian IC elements. Further, officials from these two elements stated that they also do not report obligations on contract option periods that are no longer active on the date of reporting even if the contract is active at that time. We determined that the information reported for the total FTEs data field was not sufficiently reliable for our intended purpose of identifying the eight civilian IC elements’ extent of reliance on core contract personnel. First, the elements could not provide complete or readily available documentation to validate the information reported for approximately 37 percent of the records. Second, we identified inconsistencies in civilian IC elements’ methodologies for calculating the number of FTEs in their submissions to the fiscal years 2010 and 2011 inventories, thus limiting our ability to compare the number of FTEs reported across the elements. 
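As a purely illustrative sketch of why such methodological differences matter, the figures below show how four counting approaches, similar to those the elements described to us and enumerated in the next paragraph, can produce different FTE totals for the same hypothetical contract; every value is invented and none comes from the inventory.

```python
# Illustrative figures for one hypothetical contract; none of these values
# come from the core contract personnel inventory.
HOURS_PER_FTE = 2080            # assumed annual labor hours per FTE
target_labor_hours = 50_000     # approach 1: estimate based on target labor hours
invoiced_labor_hours = 43_500   # approach 2: labor hours invoiced by the contractor
on_board_personnel = 22         # approach 3: personnel on board on a selected date...
approved_vacancies = 4          # ...plus approved contractor vacancies
obligations = 6_200_000         # approach 4: obligations divided by an average rate
average_hourly_rate = 135.0

fte_by_approach = {
    "target labor hours": target_labor_hours / HOURS_PER_FTE,
    "invoiced labor hours": invoiced_labor_hours / HOURS_PER_FTE,
    "head count plus vacancies": float(on_board_personnel + approved_vacancies),
    "obligations / average rate": obligations / (average_hourly_rate * HOURS_PER_FTE),
}

for approach, fte in fte_by_approach.items():
    print(f"{approach:28s} -> {fte:5.1f} FTEs")
# One contract, four different FTE figures: mixing approaches across elements
# therefore limits comparability of the reported totals.
```

Standardizing on a single approach, as IC CHCO began doing for the fiscal year 2012 inventory, is one way to remove this source of inconsistency.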
The civilian IC elements reported the number of FTEs by: (1) calculating estimates based on target labor hours, (2) calculating the number of labor hours invoiced by the contractor, (3) counting the number of contract personnel on board on a selected date and the number of approved contractor vacancies, or (4) using the amount of obligations and average labor hour rates. Lastly, as noted above, officials from two of the civilian IC elements stated that they do not report certain contracts in their submissions if they are not active on the date of reporting. As a result, the number of FTEs on core contracts that are active at some point during a fiscal year but not on the date of reporting would not be reflected in these two elements’ submissions. We were unable to determine the magnitude of the number of FTEs not included in the inventory because we did not have a way to identify contracts not reported by these two elements. We determined that the information reported for the budget category data field in the eight civilian IC elements’ submissions was not sufficiently reliable for our intended purpose of identifying the types of functions performed by core contract personnel. We intended to use this data field to describe the types of functions performed by core contract personnel because IC CHCO uses the budget category in its briefings to Congress to provide information on the functions performed by core contract personnel. The IC CHCO core contract personnel inventory guidance instructs the elements to complete the budget category data field by reporting where the funding for the contract is assigned according to the Congressional Budget Justification Books. Civilian IC element officials acknowledged that the budget category is not the best representation of the functions being performed by contractors. Based on our review of documents, we found that contracted functions are not necessarily reflected by the budget category designation. Further, we identified discrepancies between the budget category information reported and the information contained in relevant documents for approximately one-third of the records we reviewed for two of the civilian IC elements. Based on our review of the documents provided, the reported budget category information for one of these elements improved from fiscal year 2010 to 2011. However, we still identified discrepancies between the information reported and relevant documents for 23 percent of the records we reviewed. We determined that the information reported for the reason code data field in the eight civilian IC elements’ submissions was not sufficiently reliable for our intended purpose of identifying the factors for using core contract personnel because we could not determine the reliability of the information reported for a significant number of the records we reviewed. We could not validate the reported reason code based on the information in the documents provided for approximately 40 percent of the records we reviewed. Further, this percentage is even more pronounced for those records coded as a category other than specified service, which is a broad category defined as when the service being provided is of indefinite quantity. The civilian IC elements selected specified service for approximately 45 percent of the records we reviewed. For the 156 remaining records coded as a category other than specified service, we could not validate the information reported for approximately 73 percent of the records. 
In addition, we identified instances when multiple selection options could apply to a record. IC CHCO guidance requires elements to select one category per record. However, civilian IC element officials acknowledged that more than one response option can apply to a record and that officials at the time of reporting make a subjective determination of which option best applies. As a result, the subjective nature of the determination and the fact that more than one response option could apply to a record raise concerns about the consistency of the information reported. Lastly, because we determined that the fiscal year obligations and total FTEs data fields were not sufficiently reliable for determining the extent of reliance on core contract personnel, we would not be able to use the information reported for these data fields to describe the extent to which the civilian IC elements used these personnel for particular reasons based on the reason code data field. Although we identified some limitations, the information reported for the primary contractor occupation and competency expertise data field is sufficiently reliable for our intended purpose of identifying the types of functions performed by core contract personnel. Based on our review of relevant documents, we were able to find support for the selected response option for almost all of the records we reviewed. However, we identified some limitations that reduce insight into the functions performed by the contract personnel. First, we identified instances when multiple selection options could apply to a record. The core contract personnel inventory guidance instructs the elements to select only one response option for each record. Based on the documents provided, we identified at least 128 instances in the 287 records we reviewed in which the primary contractor occupation and competency expertise data field did not reflect the full range of services listed in the documents. Civilian IC element officials acknowledged that the primary contractor occupation and competency expertise coding is not fully reflective of the services the contractors are performing. As a result, the information may not be consistently reported given the subjective nature of this data field. Further, we identified a limited number of instances when the information reported may not represent a function performed by the contract personnel. For example, an element may make a selection based on the mission the contractor supported or the contractor's general area of expertise rather than the type of function performed. Lastly, because we determined that the fiscal year obligations and total FTEs data fields were not sufficiently reliable for determining the extent of reliance on core contract personnel, we would not be able to use the information reported for these data fields to describe the extent to which the contract personnel performed certain types of functions based on the primary contractor occupation and competency expertise data field.

The mission of each civilian IC element is summarized below.

Central Intelligence Agency (CIA): Collects, analyzes, evaluates, and disseminates foreign intelligence to assist the President and senior U.S. government policymakers in making decisions relating to national security.

Department of Energy Office of Intelligence and Counterintelligence (DOE IN): Provides expert scientific, technical, analytic, and research capabilities to other agencies in the IC and participates in formulating intelligence collection, analysis, and information relative to foreign energy matters.
Department of Homeland Security Office of Intelligence and Analysis (DHS I&A): Equips DHS, other IC elements, departments, state, local, tribal, territorial, and private sector partners with the intelligence and information needed to keep the homeland safe, secure, and resilient.

Drug Enforcement Administration Office of National Security Intelligence (DEA NN): Facilitates intelligence coordination and information sharing with other members of the IC and leverages its global law enforcement drug intelligence assets to enhance efforts to protect national security, combat global terrorism, and facilitate IC support to DEA's law enforcement mission.

Department of State Bureau of Intelligence and Research (State INR): Ensures that well-informed and independent analysis informs foreign policy decisions and that intelligence and counterintelligence activities support America's foreign policy.

Department of the Treasury Office of Intelligence and Analysis (Treasury OIA): Receives, analyzes, collates, and disseminates intelligence and counterintelligence information related to the operations and responsibilities of the entire Treasury Department. Publishes analytic products and intelligence information reports for senior leaders at Treasury and other policymakers and intelligence consumers throughout the government.

DOJ Federal Bureau of Investigation (FBI): Protects and defends against terrorist and foreign intelligence threats, upholds and enforces the criminal laws of the United States, and provides leadership and criminal justice services to federal, state, municipal, and international agencies and partners.

Office of the Director of National Intelligence (ODNI): Serves as head of the IC; acts as the principal adviser to the President, National Security Council, and the Homeland Security Council for intelligence matters related to national security; and develops and ensures the execution of an annual budget for the National Intelligence Program based on budget proposals provided by the IC elements.

Appendix IV: Selected Office of Management and Budget Guidance Related to Considering and Mitigating Risks

Selected requirements related to closely supporting inherently governmental functions: Agency officials must provide an enhanced degree of management controls and oversight when contracting for functions that closely support the performance of inherently governmental functions. Beginning with the fiscal year 2012 service contract inventory submissions, identify which contracts include services that are predominantly for functions closely associated to inherently governmental work. Analyze the inventory to ensure the agency is giving special management attention to functions that are closely associated with inherently governmental functions. Limit or guide a contractor's exercise of discretion and retain control of government operations. Assign a sufficient number of qualified government employees, with expertise to administer or perform the work, to give special management attention to the contractor's activities.

Selected requirements related to critical functions: Not applicable for one of the selected guidance documents. As part of determining whether it is appropriate to use contractors, fill critical functions only with government personnel to the extent required by the agency to maintain control of its mission and operations and by either government or contract personnel once the agency has sufficient internal capability to control its mission and operations. Beginning with the fiscal year 2012 service contract inventory submissions, identify which contracts include services that are predominantly for functions that are critical. Analyze the inventory to ensure that the agency is not using contractor employees to perform critical functions in such a way that could affect the ability of the agency to maintain control of its mission and operations. Identify agency's critical functions. Ensure that government personnel perform and/or manage critical functions to the extent necessary for the agency to operate effectively and maintain control of its mission and operations.

The definition of core contract personnel used in the annual inventory guidance, and the major changes from the prior fiscal year, were as follows.

Definition: Contracts that provide direct support to core intelligence community (IC) mission areas such as collection activities and operations; intelligence analysis and production; basic and applied technology research and development; acquisition and program management; and/or management and administrative support to these functions. Also, these employees are functionally indistinguishable from U.S. government personnel whose mission they support. Consulting contractors are to be included when they provide primarily intellectual products or services.

Definition: Individuals employed by a private or independent contractor to provide analytical, technical, managerial, and/or administrative support to: (1) intelligence collection activities and operations; (2) intelligence analysis and production; (3) basic and applied technology research and development; (4) acquisition and program management; (5) enterprise information technology; and (6) ongoing operations and maintenance in support of a particular product; and/or support the general management and administration of an IC agency or element.
Summary of major changes from prior fiscal year: From contracts that provide "direct support" to individuals that "support." Removes "such as" for mission areas and names six core IC mission areas. Adds "enterprise information technology" and "operations and maintenance" as mission areas not previously listed. From "management and administrative support to these functions" to "support the general management and administration of an IC agency or element." Removes that the "employees are functionally indistinguishable from U.S. government personnel whose mission they support."

Definition: Personnel that provide only direct support to core IC mission areas that include: (1) collection activities and operations (technical and human intelligence); (2) intelligence analysis and production; (3) basic and applied technology research and development; (4) acquisition and program management; (5) enterprise information technology; and (6) management or administrative support to these functions. Also, these employees are functionally indistinguishable from U.S. government personnel whose mission they support.
Summary of major changes from prior fiscal year: From individuals that "support" to personnel that provide "only direct support." Specifies technical and human intelligence for collection activities and operations. Reverts from "support the general management and administration of an IC agency or element" to "management or administrative support of these functions." Adds back in that the "employees are functionally indistinguishable from U.S. government personnel whose mission they support" but does not mention consulting contractors.

Definition: Personnel that provide only direct support to core IC mission areas that include: (1) collection activities and operations (technical and human intelligence), (2) intelligence analysis and production, (3) basic and applied technology research and development, (4) acquisition and program management, (5) enterprise information technology, and (6) management or administrative support to these functions. Also, these employees are functionally indistinguishable from U.S. government personnel whose mission they support.
Summary of major changes from prior fiscal year: No definitional change. Guidance includes that personnel performing certain administrative support, training support, information technology services, and operations and maintenance will either be included or excluded depending on the types of services they provide.

Definition: Personnel that provide only direct support to core IC mission areas that include: (1) collection activities and operations (technical and human intelligence), (2) intelligence analysis and production, (3) basic and applied technology research and development, (4) acquisition and program management, (5) enterprise information technology, and (6) management or administrative support to these functions.
Summary of major changes from prior fiscal year: Revision to "substantive work products may be incorporated in and/or indistinguishable from those of U.S. government personnel." Guidance further refines the inclusion or exclusion of personnel performing certain administrative support, information technology services, and operations and maintenance.

1. ODNI's observation that the number of core contract personnel reported during initial data collection efforts may not have fully or accurately reflected all core contract personnel due to the potential exclusion of some functions underscores our finding that the core contract personnel inventory data cannot be used reliably to make year-to-year comparisons or establish trends.

2. Our recommendation did not specify that the assessments of the appropriate workforce mix be addressed as part of the core contract personnel inventory. However, as discussed in our report, OFPP has indicated that a service contract inventory can assist an agency in conducting strategic workforce planning by providing insight into the extent to which contracted resources are distributed by function across an agency.

In addition to the contact above, John P. Hutton, Director (ret.); Johana R. Ayers, Acting Director; Molly W. Traci, Assistant Director; Laura Greifner; Julia M. Kennon; John A. Krump; Claire Li; Stephen V. Marchesani; Anne McDonough-Hughes; Kenneth E. Patton; Jared A. Sippel; and Roxanna T. Sun made key contributions to this report.
The IC uses core contract personnel to augment its workforce. These contractors typically work alongside government personnel and perform staff-like work. Some core contract personnel require enhanced oversight because they perform services that could inappropriately influence the government's decision making. This report is an unclassified version of a classified report issued in September 2013. GAO was asked to examine the eight civilian IC elements' use of contractors. This report examines (1) the extent to which the eight civilian IC elements use core contract personnel, (2) the functions performed by these personnel and the reasons for their use, and (3) whether the elements developed policies and strategically planned for their use. GAO reviewed and assessed the reliability of the eight civilian IC elements' core contract personnel inventory data for fiscal years 2010 and 2011, including reviewing a sample of 287 contract records. This sample is nongeneralizable as certain contract records were removed due to sensitivity concerns. GAO also reviewed agency acquisition policies and workforce plans and interviewed agency officials. Limitations in the intelligence community's (IC) inventory of contract personnel hinder the ability to determine the extent to which the eight civilian IC elements--the Central Intelligence Agency (CIA), Office of the Director of National Intelligence (ODNI), and six components within the Departments of Energy, Homeland Security, Justice, State, and the Treasury--use these personnel. The IC Chief Human Capital Officer (CHCO) conducts an annual inventory of core contract personnel that includes information on the number and costs of these personnel. However, GAO identified a number of limitations in the inventory that collectively limit the comparability, accuracy, and consistency of the information reported by the civilian IC elements as a whole. For example, changes to the definition of core contract personnel and data shortcomings limit the comparability of the information over time. In addition, the civilian IC elements used various methods to calculate the number of contract personnel and did not maintain documentation to validate the number of personnel reported for 37 percent of the 287 records GAO reviewed. Further, IC CHCO did not fully disclose the effects of such limitations when reporting contract personnel and cost information to Congress, which limits its transparency and usefulness. The civilian IC elements used core contract personnel to perform a broad range of functions, such as information technology and program management, and reported in the core contract personnel inventory on the reasons for using these personnel. However, limitations in the information on the number and cost of core contract personnel preclude the information on contractor functions from being used to determine the number of personnel and their costs associated with each function. Further, civilian IC elements reported in the inventory a number of reasons for using core contract personnel, such as the need for unique expertise, but GAO found that 40 percent of the contract records reviewed did not contain evidence to support the reasons reported. Collectively, CIA, ODNI, and the departments responsible for developing policies to address risks related to contractors for the other six civilian IC elements have made limited progress in developing those policies, and the civilian IC elements have generally not developed strategic workforce plans that address contractor use. 
Only the Departments of Homeland Security and State have issued policies that generally address all of the Office of Federal Procurement Policy's requirements related to contracting for services that could affect the government's decision-making authority. In addition, IC CHCO requires the elements to conduct strategic workforce planning but does not require the elements to determine the appropriate mix of government and contract personnel. Further, the elements' ability to use the core contract personnel inventory as a strategic planning tool is hindered because the inventory does not provide insight into the functions performed by contractors, in particular those that could inappropriately influence the government's control over its decisions. Without guidance, strategies, and tools related to these types of functions, the eight civilian IC elements may not be well-positioned to identify and manage related risks. GAO recommends that IC CHCO take several actions to improve the reliability and transparency of the inventory data, revise strategic workforce planning guidance, and develop ways to identify contracts for services that could affect the government's decision-making authority. IC CHCO generally agreed with GAO's recommendations.
With the growth in the nation's highway and aviation systems in the previous decades, intercity passenger rail service lost its competitive edge. Highways have enabled cars to be competitive with conventional passenger trains (those operating up to 90 miles per hour), while airplanes can carry passengers over longer distances at higher speeds than can trains. The Rail Passenger Service Act of 1970 created Amtrak to provide intercity passenger rail service because existing railroads found such service unprofitable. Like other major national intercity passenger rail systems in the world, Amtrak has received substantial government support—nearly $24 billion for capital and operating needs through fiscal year 2001. Amtrak operates a 22,000-mile passenger rail system, primarily over tracks owned by freight railroads. (See fig. 1.) Amtrak owns 650 miles of track, primarily in the Northeast Corridor, which runs between Boston and Washington, D.C. About 70 percent of Amtrak's service is provided by conventional trains; the remainder is provided by high-speed trains. It offers high-speed service (up to 150 miles per hour) on the Northeast Corridor. About 22 million passengers in 45 states rode Amtrak's trains in 2000 (about 60,000 passengers each day), a small share of the commercial intercity travel market. In comparison, in 1999, domestic airlines carried about 1.6 million passengers per day, and intercity buses carried about 983,000 people per day (latest data available). Proponents see high-speed rail systems (with speeds over 90 miles per hour) as a promising means for making trains more competitive with these other modes of transportation. They see introduction of high-speed rail in various areas of the country as a cost-effective means of increasing transportation capacity (the ability to carry more travelers) and relieving air and highway congestion, among other things. The Federal Railroad Administration defines high-speed rail transportation as intercity passenger service that is time-competitive with airplanes or automobiles on a door-to-door basis for trips ranging from about 100 to 500 miles. The agency chose a market-based definition, rather than a speed-based definition, because it recognizes that opportunities for successful high-speed rail projects differ markedly among different pairs of cities. High-speed trains can operate on tracks owned by freight railroads that have been upgraded to accommodate higher speeds or on dedicated rights of way. The greater the passenger train's speed, the more likely it is to require a dedicated right-of-way for both safety and operating reasons. Ten corridors (not including Amtrak's Northeast Corridor) have been designated as high-speed rail corridors either through legislation or by the Department of Transportation. (See fig. 2.) Designated corridors may be eligible for federal funds through several Department of Transportation programs. According to the Department, the designation also serves as a catalyst for sustained state, local, and public interest in corridor development. The 10 federally designated corridors are generally in various early stages of planning. Amtrak's Northeast Corridor is in operation and supports high-speed service up to 150 miles per hour. Amtrak's future is uncertain, in part, because it has made limited progress toward achieving operational self-sufficiency, as required by the Amtrak Reform and Accountability Act of 1997.
The act prohibits Amtrak from using federal funds for operating expenses, except for an amount equal to excess Railroad Retirement Tax Act payments, after 2002. If the Amtrak Reform Council (an independent council established by the act) finds that Amtrak will not achieve operational self-sufficiency, the act requires that the railroad submit to the Congress a liquidation plan and the Council submit to the Congress a plan for a restructured national intercity passenger rail system. Amtrak has made little progress in reducing its need for federal operating assistance—i.e., closing its “budget gap”—in order to reach operational self-sufficiency. In fiscal year 2000, Amtrak closed its budget gap by only $5 million, achieving very little of its planned $114 million reduction. Results for the first 8 months of fiscal year 2001 (October 2000 through May 2001) are not encouraging: Amtrak’s revenues increased by about $83 million (6 percent) over the same period in 2000, but its cash expenses increased by about $120 million (7 percent). Overall, in the last 6 years (fiscal years 1995 through 2000), Amtrak has reduced its budget gap by only $83 million. By the end of 2002, about 17 months from now, Amtrak will need to achieve about $281 million in additional financial improvements to reach operational self-sufficiency. Although Amtrak has undertaken a number of actions to reach and sustain operational self-sufficiency by the end of 2002, we believe that it is unlikely that it will be able to do so. Intercity passenger rail systems, like other intercity transportation systems, are expensive. The level of federal financial assistance that would be required to maintain and expand the nation’s intercity passenger rail network far exceeds the amounts that have been provided in recent years. In February, Amtrak’s capital and finance plans called for $30 billion (in constant 2000 dollars) in federal capital support from 2001 through 2020 (an average of $1.5 billion each year, with $955 million in fiscal year 2002) to upgrade its operations and to invest as seed money in high-speed rail corridors. The proposed amount is nearly $10 billion more than the $20.4 billion (in 2000 dollars) that Amtrak has received in federal operating and capital support over the past 20 years (1982 through 2001). The amount is also nearly three times the annual amount that the Congress provided Amtrak in recent years (e.g., $571 million for 2000 and $521 million for 2001 that could be used for both capital and operating expenses). Additionally, fully developing high-speed rail corridors would require substantial amounts of federal capital assistance. Overall cost figures are unknown because corridor initiatives are in various stages of planning. However, the capital costs to fully develop the federally designated high-speed rail corridors and the Northeast Corridor could be $50 billion to $70 billion over 20 years, according to a preliminary Amtrak estimate. The federal government could be expected to provide much of these funds. However, estimates of the costs and the financial viability of high-speed rail systems can be subject to much uncertainty, especially when they are in the early stages of planning. Some of the federal funding (as much as $12 billion) for high-speed rail projects could be provided if the High-Speed Rail Investment Act of 2001 (H.R. 2329) is enacted. (A similar bill, S. 250, was introduced in the Senate.) 
Amtrak views the bill as an important first step in providing seed money and helping build partnerships with states, localities, and freight railroads critical to the development of high-speed passenger rail in the United States. According to Amtrak and Federal Railroad Administration officials, several federally designated corridors could be ready for infrastructure investment in the next year or so. We agree that the bill offers the potential to facilitate the development of high-speed rail systems outside the Northeast Corridor. However, issues remain to be addressed if corridors are to realize the benefits that proponents see for them, including how to complete projects where costs grow beyond the bond funds made available for them. Further, in applying the bill’s public benefit criteria, the Secretary and others will have to address issues raised by a project that, by itself, is insufficient to provide high-speed rail service on a corridor (or a portion of the corridor). In these situations, one approach could be to require applicants for bond funding to demonstrate that other resources could reasonably be expected to be available to initiate such service or that the project would result in a “useful asset” even if no other funding is provided. There is growing interest in and enthusiasm for intercity passenger rail by states, particularly for high-speed rail systems. Proponents see opportunities for increasing ridership—such as a quadrupling of riders on corridors other than the Northeast Corridor (from 10 million to 40 million passengers annually) by 2020. Proponents see a number of public benefits—such as reduced congestion, improved air quality, increased travel capacity, and greater travel choices—from further developing and expanding such systems. According to the Federal Railroad Administration, 34 states are participating in the development of high-speed rail corridors and these states have invested more than $1 billion for improvements of local rail lines for this purpose. As the Congress moves forward to define the role of intercity passenger rail in our nation’s transportation framework, it needs realistic appraisals of the level, nature, and distribution of public benefits that can be expected to accrue. A public benefit cited to support the expansion of high-speed passenger rail service is its potential to help relieve congestion in air travel and on our nation’s highways. Such service might have some impact on congestion if it were targeted to areas where roads are at or near their design capacity, for example. As more traffic uses these roads, travel time increases sharply and the delays are felt by all travelers. Expectations for the extent to which intercity passenger rail can reduce congestion must be realistic. For example, in 1995, we reported that each passenger train along the busy Los Angeles-San Diego corridor kept about 129 cars off the highway (about 2,240 cars each day)—a small number relative to the total volume. Intercity passenger rail cannot be expected to ease congestion at airports when long distance travel is involved because rail travel is not time-competitive with air travel. For example, the scheduled travel time for the approximately 700-mile distance between Washington, D.C., and Chicago is about 2 hours for air and about 18 hours for conventional Amtrak passenger trains. 
High-speed rail proponents believe that one potential for high-speed rail is to replace shorter intercity air service, thus freeing up airport capacity for longer-distance travel. High-speed rail may work best for relatively short trips (of several hundred miles or less) where it connects densely populated cities with substantial travel between the cities. Amtrak’s Metroliner service, which travels up to 125 miles per hour between New York City and Washington, D.C., is an example. The Metroliner is one of only two Amtrak routes that made an operating profit in 2000. Notably, the Federal Railroad Administration is supporting the development of high-speed rail corridors that are competitive in travel time with air and highway travel. Another advantage cited for intercity passenger rail is that it is energy-efficient, thus improving air quality. For example, the Congressional Research Service reported that Amtrak is much more energy-efficient than air travel. However, it also found that Amtrak is much less energy-efficient than intercity bus transportation and about equal in energy efficiency to automobiles for trips longer than 75 miles. Our 1995 analysis of the Los Angeles-San Diego corridor found that the increase in emissions from added automobiles, intercity buses, and aircraft would be very small if existing diesel-powered trains were discontinued. Another cited advantage is that an investment in intercity passenger rail can do more to increase transportation capacity than a similar expenditure in another mode. For example, Amtrak recently suggested that a dollar invested in intercity rail can increase capacity 5 to 10 times more than a dollar invested in new highways, depending on location. However, a 1999 study of the costs of providing high-speed rail, highway, and air service in a particular corridor reached different conclusions. This study found that the investment costs (per passenger-kilometer traveled) of providing highway and high-speed rail service between San Francisco and Los Angeles were about the same, but both were substantially higher than the cost of providing air service for the same route. When considering increasing transportation capacity, federal, state, and other decisionmakers will need to understand the extent to which travelers are using existing capacity and are likely to use the increased capacity in various modes. If new capacity is underutilized (e.g., because it is not cost competitive or convenient), then the expected benefit will not be fully realized. Another benefit ascribed to expanding intercity passenger rail is increasing travel choices—as an alternative to air, automobile, or bus travel. For example, the Federal Railroad Administration estimates that the development of the designated high-speed rail corridors could ultimately give about 150 million Americans (representing slightly over half of the nation’s current population) access to one of these rail networks. Yet travel choice entails more than physical access. To offer travel choice, rail must be competitive with other travel modes: it must take travelers where they want to go; be available at convenient times of the day; be competitive in terms of price and travel time; and meet travelers’ expectations for safety, reliability, and comfort. For example, travelers may view a rail system more favorably if it offers multiple trips—rather than one or two round trips—each day and if it arrives and departs at convenient hours. 
The Congress is facing critical decisions about the future of Amtrak and intercity passenger rail because it is very unlikely that a national intercity passenger rail system, as currently structured, can be operated without substantial federal operating support. Thus, the goal of a national system much like Amtrak’s current system and the goal of operational self-sufficiency appear to be incompatible. In fact, Amtrak was created because other railroads were unable to profitably provide passenger service. In addition, Amtrak needs more capital funding than has been historically provided in order to operate a safe, reliable system that can attract and retain customers. Developing high-speed rail systems is also costly, requiring additional tens of billions of dollars. If intercity passenger rail is to have a future in the nation’s transportation system, the Congress needs to be provided with realistic assessments of the expected public benefits and resulting costs of these investments as compared with investments in other modes of transportation. Such analyses would provide sound bases for congressional action in defining the national goals that will be pursued, the extent that Amtrak and other intercity passenger rail systems can contribute to meeting these goals, state and federal roles, and whether federal and state funds would likely be available to sustain such systems over the long term. Mr. Chairman, this concludes our testimony. We would be pleased to answer any questions you or Members of the Subcommittee might have.
Congress faces critical decisions about the future of the National Railroad Passenger Corporation (Amtrak) and intercity passenger rail. In GAO's view, the goal of a national system, much like Amtrak's current system, and the goal of operational self-sufficiency appear to be incompatible. In fact, Amtrak was created because other railroads were unable to profitably provide passenger service. In addition, Amtrak needs more capital funding than has been historically provided in order to operate a safe, reliable system that can attract and retain customers. Developing a high-speed rail system is also costly, requiring additional tens of billions of dollars. If intercity passenger rail is to have a future in the nation's transportation system, Congress needs realistic assessments of the expected public benefits and the resulting costs of these investments as compared with investments in other modes of transportation. Such analyses would provide sound bases for congressional action in defining the national goals that will be pursued, the extent that Amtrak and other intercity passenger rail systems can contribute to meeting these goals, and whether federal and state money would be available to sustain such systems over the long term.
As part of its mission and in accordance with the Homeland Security Act, DHS has responsibility for coordinating efforts to share homeland security information across all levels of government, including federal, state, local, and tribal governments and the private sector. Specifically with respect to fusion centers, DHS envisions creating partnerships with state and local centers to improve information flow between DHS and the centers and to improve their effectiveness as a whole. As such, the Office of Intelligence and Analysis (I&A) was designated in June 2006 by the Secretary as the executive agent to manage a program to accomplish DHS’s state and local fusion center mission. The Assistant Secretary for Intelligence and Analysis approved the establishment of the State and Local Program Office (SLPO) under the direction of a Principal Deputy Assistant Secretary to implement this mission. Specifically, the office is responsible for deploying DHS personnel with operational and intelligence skills to state and local fusion centers to facilitate coordination and the flow of information between DHS and the center, provide expertise in intelligence analysis and reporting, coordinate with local DHS and FBI components, and provide DHS with local situational awareness and access to fusion center information. As part of this effort, DHS is conducting needs assessments at fusion centers to review their status and determine what resources, such as personnel, system access, and security, are needed. As of September 2007, DHS had conducted 25 fusion center needs assessments. The SLPO also coordinates the granting of DHS security clearances for personnel located in fusion centers and the deployment of DHS classified and unclassified systems for use in the fusion center. The Homeland Security Grant Program (HSGP) awards funds to states, territories, and urban areas to enhance their ability to prepare for, prevent, and respond to terrorist attacks and other major disasters. HSGP consists of five separate programs, three of which can be used by states and local jurisdictions, at their discretion, for fusion center-related funding. The State Homeland Security Program (SHSP) supports the implementation of the State Homeland Security Strategies to address the identified planning, equipment, training, and exercise needs for preventing acts of terrorism. The Law Enforcement Terrorism Prevention Program (LETPP) provides resources to law enforcement and public safety communities to support critical terrorism prevention activities. Each state receives a minimum allocation under SHSP and LETPP, and additional funds are allocated based on the analyses of risk and anticipated effectiveness. The Urban Areas Security Initiative (UASI) program addresses the unique planning, equipment, training, and exercise needs of high-threat, high-density urban areas and assists them in building an enhanced and sustainable capacity to prevent, protect against, respond to, and recover from acts of terrorism. UASI funds are allocated on the basis of risk and anticipated effectiveness to about 45 candidate areas. The fiscal year 2007 HSGP grant guidance specified the establishment and enhancement of state and local fusion centers as a prevention priority, making them a priority for LETPP. DHS’s Federal Emergency Management Agency National Preparedness Directorate (FEMA/NPD) manages the grant process and allocates these funds to state and local entities. 
The FBI serves as the primary investigative unit of DOJ, and its mission includes investigating serious federal crimes, protecting the nation from terrorist and foreign intelligence threats, and assisting federal, state, and municipal law enforcement agencies. Following the attacks on September 11, 2001, the FBI shifted its primary mission to focus on counterterrorism; that is, detecting and preventing future attacks. The FBI primarily conducts its counterterrorism investigations through its Joint Terrorism Task Forces (JTTF), which are multi-agency task forces that generally contain state and local officials. As of September 2007, there were JTTFs in 101 locations, including one in each of the FBI’s 56 field offices. Since 2003, each of the 56 field offices has also established a Field Intelligence Group (FIG) to serve as the centralized intelligence component responsible for the management, execution, and coordination of intelligence functions. Recognizing that fusion centers are becoming focal points for the sharing of homeland security, terrorism, and law enforcement information among federal, state, and local governments, the FBI has directed that its field offices, through their FIGs, become involved in the fusion centers in order to enhance the FBI’s ability to accomplish its mission and “stay ahead of the threat.” In June 2006, the FBI’s National Security Branch directed each field office to assess its own information sharing environment and, when appropriate, detail a FIG special agent and intelligence analyst to the leading fusion center within its territory. The FBI’s Directorate of Intelligence established an Interagency Integration Unit in January 2007 to provide headquarters oversight of FBI field offices’ relationships with fusion centers. While the FBI’s role in and support of individual fusion centers varies depending on the interaction between the particular center and the FBI field office, FBI efforts to support centers include assigning FBI special agents and intelligence analysts to fusion centers, providing office space or rent for fusion center facilities, providing security clearances, conducting security certification of facilities, and providing direct or facilitated access to the FBI. FBI personnel assigned to fusion centers are to provide an effective two-way flow of information between the fusion center and the FBI; participate as an investigative or analytical partner uncovering, understanding, reporting, and responding to threats; and ensure the timely flow of information between the fusion center and the local JTTF and FIG. Established under the Intelligence Reform Act, the PM-ISE is charged with developing and overseeing implementation of the ISE, which consists of the policies, processes, and technologies that enable the sharing of terrorism information among local, state, tribal, federal, and private sector entities as well as foreign partners, and, as such, released an ISE Implementation Plan in November 2006. Recognizing that the collaboration between fusion centers and with the federal government marks a tremendous increase in the nation’s overall analytic capacity that can be used to combat terrorism, the plan—integrating presidentially approved recommendations for federal, state, local, and private sector terrorism-related information sharing—calls for the federal government to promote the establishment of a nationwide integrated network of state and local fusion centers to facilitate effective terrorism information sharing. 
The plan outlines several actions on the part of the federal government, largely through DHS and DOJ, to support fusion centers, including providing technical assistance and training to support the establishment and operation of centers. In addition, the PM-ISE has established a National Fusion Center Coordination Group (NFCCG), led by DHS and DOJ, to identify federal resources to support the development of a national, integrated network of fusion centers. The NFCCG is to ensure that designated fusion centers achieve a baseline level of capability and comply with all applicable federal laws and policies regarding the protection of information and privacy and other legal rights of individuals. The NFCCG also is to ensure coordination between federal entities interacting with these fusion centers and has been tasked to develop recommendations regarding funding options relating to their establishment. However, to date, the efforts of the NFCCG have not included delineating whether such assistance is for the short-term establishment or long-term sustainability of fusion centers. In addition, the PM-ISE, in consultation with the Information Sharing Council—the forum for top information sharing officials from departments and agencies with activities that may include terrorism-related information—has also established a Senior-Level Interagency Advisory Group that oversees the NFCCG as part of its overall responsibility to monitor and ensure the implementation of the ISE. We reported in April 2007 that DHS and DOJ have 17 major networks and 4 system applications that they use to support their homeland security missions, including sharing information with state and local entities such as fusion centers. In addition, state and local governments have similar information technology initiatives to carry out their homeland security missions. Table 1 provides information on the primary networks and systems used by fusion centers. Established by state and local governments to improve information sharing among federal, state, and local entities and to prevent terrorism or other threats, fusion centers across the country vary in their stages of development—from operational to early in the planning stages. Those centers that are operational vary in many of their characteristics, but generally have missions that are broader than counterterrorism, have multiple agencies represented—including federal partners—in their centers, and have access to a number of networks and systems that provide homeland security and law enforcement-related information. Since September 2001, almost all states and several local governments have established or are in the process of establishing a fusion center. Officials in 43 of the 58 fusion centers we contacted described their centers as operational as of September 2007. Specifically, officials in 35 states, the District of Columbia, and 7 local jurisdictions we contacted described their fusion center as operational, officials in 14 states and 1 local jurisdiction considered their centers to be in the planning or early stages of development, and 1 state did not have or plan to have a fusion center, as shown in figure 1. In 6 states we contacted, there was more than one fusion center established. Officials cited a variety of reasons why their state or local jurisdiction established a fusion center. 
The most frequently cited reasons were to improve information sharing—related to homeland security, terrorism, and law enforcement—among federal, state, and local entities and to prevent terrorism or threats after the attacks of September 11. For example, officials in one state said that their state was “mentioned 59 times in the 9/11 Commission Report, the majority of which were not complimentary,” and as a result established a 24-hour-per-day and 7-day-per-week intelligence and information analysis center to serve as the central hub to facilitate the collection, analysis, and dissemination of crime and terrorism-related information. Several officials cited the need to enhance information sharing within their own jurisdictions across disciplines and levels of government as the reason why their jurisdiction established a center. While most officials from fusion centers that were in the planning or early stages of development stated that they were establishing a fusion center in general to enhance information sharing or protect against future threats, officials in a few centers also noted that their jurisdictions were discussing or establishing fusion centers because of available DHS grant funding or their perception that DHS was requiring states to establish a center. Appendixes II and III provide basic information about operational fusion centers and fusion centers in the planning and early stages of development, respectively. Appendix IV provides a state-by-state summary of state and local areas’ efforts to establish and operate fusion centers. Officials in operational fusion centers provided varying explanations for their centers’ stage of development. Officials in 16 of the 43 operational fusion centers said that their fusion centers were at an “intermediate” stage of development, that is, the centers had limited operations and functionality. For instance, several of these officials said that while they had many operational components (such as policies and procedures, analytical personnel, or technical access to systems and networks) in place, at least one of these components was still in the process of being developed or finalized. For example, officials in one fusion center said that its analysts have completed training and are producing products, but the center is still in the final stages of reconstructing its facility and establishing access to systems and networks. Officials in 21 of the 43 operational fusion centers considered their fusion centers to be “developed,” that is, fully operational and fully functional. For example, an official in one center said that the fusion center has analysts from DHS, FBI, and state and local entities; operates at the Top Secret level; and has a Sensitive Compartmented Information Facility (SCIF). Additionally, several officials also stated that even though their centers were developed, the centers would continue to expand and evolve. Officials in the remaining six fusion centers considered their centers to have more than limited operations and functionality but not yet be fully operational. For example, one official said that the center would like to develop its strategic component, for instance, related to risk assessments. Another official stated that his center would like to expand its operations but does not have enough personnel. Thirty-four of the operational centers are relatively new, having been opened since January 2004, while 9 centers opened in the couple of years after September 11, as shown in figure 2. 
Consistent with the purpose of a fusion center, as defined by the Fusion Center Guidelines, officials in 41 of the 43 operational centers we contacted said that their centers’ scopes of operations were broader than solely focusing on counterterrorism. For example, officials in 22 of the 43 operational centers described their centers’ scopes of operations as all crimes or all crimes and counterterrorism, and officials in 19 operational centers said that their scopes of operations included all hazards. There were subtle distinctions in officials’ descriptions of an all-crimes scope; however, they generally either said that their center focused on all “serious” crimes, such as violent crimes or felonies, or specified that the center focused on those crimes that may be linked to terrorist activity. Officials who described their centers as including an all-hazards focus provided different explanations of this scope, including colocation with the state’s emergency operations center or partnerships with emergency management organizations or first responders. One official referred to Hurricanes Katrina and Rita as reasons why the center had an all-hazards scope of operation. Officials provided two primary explanations for why their fusion centers have adopted a broader focus than counterterrorism. The first explanation was because of the nexus, or link, of many crimes to terrorist-related activity. For example, officials at one fusion center said that they have an all-crimes focus because terrorism can be funded through a number of criminal acts, such as drugs, while another said that collecting information on all crimes often leads to terrorist or threat information because typically if there is terrorist-related activity there are other crimes involved as well. The second reason why officials said that their fusion centers had a broader focus than counterterrorism was in order to include additional stakeholders or to provide a sustainable service. For example, one official said that because the state is rural with only two metropolitan areas and many small communities, the center needed to have a broader focus than terrorism to obtain participation from local law enforcement. Officials in another center said that their center opened in the months after September 2001, so it focused on homeland security and terrorism, but since then has evolved to include an all-hazards focus as it has established partnerships with agencies outside of law enforcement. An official in another center said that while counterterrorism is the primary mission of the center, in the past year the center has included an all-crimes element since on average the center only receives three terrorism-related tips a day, and as a result, it is difficult to convince agencies to detail a staff person to the center for this mission alone. The majority of the operational fusion centers we contacted were primarily led by law enforcement entities, such as state police or state bureaus of investigation. Some of these centers were established as partnerships between state or local law enforcement entities and the FBI, and others were established as partnerships with the state homeland security offices. While all of the operational fusion centers we contacted had more than one agency represented in the centers, the staff size and agencies represented varied. For example, three centers we contacted had fewer than five people on their staff representing fewer than five agencies. 
In contrast, 2 of the centers we contacted had over 80 people staffed to the center, representing about 20 agencies. In its fusion center report, CRS determined that the average number of full-time staff was about 27 persons. In addition to law enforcement agencies, such as state police or highway patrol, county sheriffs, and city police departments, 29 of the 43 operational centers we contacted had personnel assigned to their centers from the state’s National Guard, and some centers also included emergency management, fire, corrections, or transportation partners. At least 34 of the 43 operational fusion centers we contacted had federal personnel assigned to their centers. Officials in about three quarters of the centers we contacted reported that the FBI has assigned personnel, including intelligence analysts and special agents, to their centers. Most had one or two full-time intelligence analysts or special agents at their center. Additionally, 12 of the 43 operational centers we contacted were colocated in an FBI field office or with an FBI task force, such as a JTTF or a FIG, allowing the center’s personnel access to FBI systems and networks. Also, officials in 17 of the 43 operational centers reported that DHS’s Office of Intelligence and Analysis had assigned intelligence officers to their centers. These officers are assigned to fusion centers on a full-time basis and are responsible for, among other things, facilitating the flow of information between the center and DHS, providing expertise in intelligence analysis and reporting, and providing DHS with local situational information and access. Finally, officials in 19 of the 43 operational centers reported that they had other DHS and DOJ components represented in their centers, including personnel from U.S. Customs and Border Protection; U.S. Immigration and Customs Enforcement (ICE); United States Secret Service; United States Coast Guard; Transportation Security Administration; United States Attorneys Office; Bureau of Alcohol, Tobacco and Firearms; Drug Enforcement Administration (DEA); or the United States Marshals Service. As we have previously highlighted, operational fusion centers we contacted reported having access to a variety of networks and systems for collecting homeland security, terrorism-related, and law enforcement information. For example, as of September 2007, 40 and 39 of the 43 operational fusion centers we contacted told us they had access to DHS’s and FBI’s unclassified networks, such as HSIN and LEO, respectively. Further, about half of the operational centers also said that they had access to one of the RISS networks. In addition, 16 of the 43 operational centers we contacted reported that they had access or had plans to obtain access to HSDN, and 23 indicated that they had access or were in the process of obtaining access to FBINet or FBI’s other classified networks. Further, 3 centers also reported having access to FBI’s Top Secret network. Additionally, several operational fusion centers reported having access to other classified and unclassified federal systems and networks providing defense, financial, drug, and immigration-related information, including the Department of Defense’s Secret Internet Protocol Router Network (SIPRNet), Financial Crimes Enforcement Network (FinCEN), El Paso Intelligence Center (EPIC), and the Student and Exchange Visitor Information System (SEVIS). Thus far, products disseminated and services provided also vary. 
Fusion centers reported issuing a variety of products, such as daily and weekly bulletins on general criminal or intelligence information and intelligence assessments that, in general, provide in-depth reporting on an emerging threat, group, or crime. For example, one center’s weekly bulletin contained sections on domestic and international terrorism, cold case investigations, missing persons, officer safety, and items of interest to law enforcement. Some centers provide investigative support for law enforcement officers. For example, one fusion center reported that it provided a response within 20 minutes to requests for information from law enforcement officers who were conducting traffic stops or responding to major crime scenes. Further, several of the centers in our review were organized into two sections—an operational section that manages and processes the information flowing into the center and an analytical section responsible for analyzing the information and disseminating products. Officials in 7 states and one local jurisdiction said that their fusion centers were in the early stages of development and officials in 7 states said that they were in the planning stage. For example, one official said that the center is developing memorandums of understanding for agency representation at and support of the center, working to get the center’s secure space certified, and placing equipment and furniture. Officials from another state said that they had appointed an officer-in-charge and are in the process of acquiring additional staff members but had not acquired access to federal networks and systems. Officials in 6 of the 15 centers said that their centers had already opened or were expected to open by the end of 2007. Efforts to establish a fusion center are being led by homeland security offices, law enforcement entities, and in some states, by a partnership of two or more state agencies. As with operational centers, these centers planned to include all-crimes and all-hazards scopes of operations. While most of these centers were being newly established, a few were in the process of transitioning from existing law enforcement intelligence units or criminal intelligence centers. For example, an official in one center said the fusion center is in the planning stages and is transitioning from an intelligence center, which was established prior to the 2002 Winter Olympics. One state, Wyoming, was planning to partner with an adjacent state instead of building a physical fusion center. In light of the importance of fusion centers in facilitating information sharing among levels of government, DHS and DOJ have several efforts under way that begin to address challenges that fusion center officials identified in establishing and operating their centers. DHS and DOJ have made efforts to provide fusion centers access to federal information systems, but some fusion center officials cited challenges accessing relevant, actionable information and managing multiple information systems. As a result, these center officials said that their ability to receive and share information with those who need it may be limited. Additionally, both DHS and the FBI have provided clearances to state and local officials, but some fusion center officials told us they had encountered challenges obtaining and using security clearances, which interfered with their ability to obtain classified information. 
Further, notwithstanding DHS and FBI efforts to deploy personnel to fusion centers and DHS’s grant funding to support their establishment and enhancement, fusion center officials noted challenges obtaining personnel and ensuring sufficient funding to sustain the centers. Finally, DHS and DOJ have taken steps to develop guidance and provide technical assistance and training, but fusion center officials cited the need for clearer and more specific guidance in a variety of areas to help address operational challenges. As described earlier, DHS and FBI have provided access to their primary unclassified systems (HSIN and LEO) to many of the 43 operational fusion centers we contacted. Further, DHS and DOJ have outlined plans to provide access to their primary classified networks, HSDN and FBINET, to state and local fusion centers that have federal personnel at the center. However, officials in 31 of the 58 fusion centers we contacted told us that they had difficulty obtaining access to federal information networks or systems. For example, officials in some centers cited challenges with DHS and FBI not providing fusion center personnel with direct access to their classified systems. In these centers, fusion center personnel must rely on federal personnel who are assigned to the center or other state personnel assigned to FBI task forces to access these systems, obtain the relevant information, and share it with them. Further, officials in 12 of 58 fusion centers reported challenges meeting system security requirements or establishing technical capabilities necessary to access information systems. For example, officials cited challenges with the cost and logistics of setting up a secure room or installing the requisite hardware to access the information systems. DHS and FBI have taken steps to address these logistical challenges to providing access to classified systems. For example, as part of its needs assessment process, DHS reviews the fusion centers’ security status and assesses its adequacy in light of DHS’s intention to assign personnel and information systems to the center. The FBI has provided fusion centers access to classified systems through JTTF members and has colocated with some fusion centers in FBI space. Finally, several FBI field offices have coordinated with fusion centers to rent or build and certify facilities or secure rooms for those centers located outside of FBI-controlled space. For example, according to FBI field offices, the FBI is paying estimated costs of about $40,000 and $50,000, respectively, to provide secure facilities in two fusion centers. While officials in many fusion centers cited challenges obtaining access to systems, primarily classified systems, officials in 30 of the 58 fusion centers we contacted told us that the high volume of information or the existence of multiple systems with often redundant information was challenging to manage. More specifically, officials in 18 fusion centers said that they had difficulty with what they perceived to be the high volume of information their center receives, variously describing the flow of information as “overwhelming,” “information overload,” and “excessive.” For example, officials said that center personnel must sort through the large amount of information, much of which is not relevant to the center, to find information that is useful or important to them. 
Additionally, officials in 18 fusion centers found the lack of integration among these multiple, competing, or duplicative information systems challenging, or said that they wanted a single mechanism or system through which to receive or send information. Finally, officials in 11 centers said that the redundancy of information from these multiple sources posed a challenge. For instance, an official said that the center receives volumes of information that contain redundancies from DHS and the FBI. CRS also reported that one of the most consistent and constant issues raised by fusion center officials relates to the plethora of competing federal information-sharing systems, including, but not limited to, DHS and DOJ systems such as HSIN, HSDN, LEO, and RISS. DHS/DOJ’s current joint guidance on operating fusion centers—the Fusion Center Guidelines—does not delineate the primary systems to which fusion centers should have access or provide guidance to centers about how to manage multiple systems with potentially redundant information. For example, the guidance recommends that fusion centers obtain access to a variety of databases and systems and provides a list of 17 available system and network resources that provide homeland security, terrorism-related, or law enforcement information, including LEO, RISS, and HSIN, but does not identify which of the 17 available systems are critical to sharing information with federal counterparts. In addition, we have previously reported on the redundancies and lack of coordination among DHS’s HSIN and other systems. For example, we found in April 2007 that in developing HSIN, DHS did not work with the two key state and local initiatives comprising major portions of the RISS program, thereby putting itself at risk that HSIN duplicated state and local capabilities. In that report, we recommended that DHS identify existing and planned information-sharing initiatives and assess whether there are opportunities for DHS to avoid duplication of effort. In response, DHS initiated efforts to accomplish this goal—such as creating a bridge between the RISS and HSIN systems to allow reports to flow back and forth between these two systems—though it is currently too early to determine the effect of these efforts. The PM-ISE also reported that in consultation with the Information Sharing Council, it has been coordinating the efforts of a working group intended to address the issue of duplicative or redundant information systems that handle sensitive but unclassified information. Officials from the PM-ISE stated that this group has completed a review of the most commonly used systems, such as LEO, RISS, and HSIN. According to the officials, the review included an examination of the services provided by the systems and the needs of the systems’ users to identify any potential areas to streamline system access. The review is in accordance with recommendations that fusion centers made during the National Fusion Center Conference in March 2007. Specifically, fusion centers recommended the federal government explore using a single sign-on or search capability, which would facilitate accessing multiple systems. Further, in our interviews, officials in 23 of the 58 fusion centers said that DHS and DOJ, to facilitate the implementation of a national network of fusion centers, should streamline existing systems or develop a unified platform or mechanism for information sharing with fusion centers. 
In addition, PM-ISE officials said that they, along with DHS and DOJ and other federal agencies, were taking steps to improve the quality and flow of information through the development of an Interagency Threat Assessment Coordination Group (ITACG). As part of the National Counterterrorism Center, this group will provide advice, counsel, and subject-matter expertise to the intelligence community regarding the types of terrorism-related information needed by state, local, and tribal governments and how these entities use that terrorism-related information to fulfill their counterterrorism responsibilities. In doing so, ITACG will enable the timely production by the National Counterterrorism Center of clear, relevant, and federally coordinated terrorism-related information products intended for dissemination to state, local, and tribal officials. As of September 2007, ITACG has achieved an initial operational capability, according to PM-ISE officials. Additionally, the 9/11 Commission Act, enacted in August 2007, made the ITACG a statutorily mandated body. Both DHS and the FBI have provided clearances for numerous state and local personnel and have set goals to shorten the length of time it takes to obtain a security clearance. DHS and the FBI provide clearances at the Secret level for state and local officials with a need to know national security information classified at the Confidential or Secret level, and the FBI, when necessary, also provides clearances at the Top Secret level to state and local officials with a need to know national security information classified at this level and who need unescorted access in FBI facilities. For instance, to date DHS reported that it had provided security clearances, typically granted at the Secret level, for 1,291 state and local personnel—not necessarily personnel in fusion centers. The FBI, in fiscal year 2007, reported that as of April it had provided 520 security clearances, typically granted at the Top Secret level, to state and local fusion center personnel. Further, CRS reported that on average, as of July 2007, each fusion center appeared to have 14 staff with Secret clearances and 6 staff with Top Secret clearances. However, officials in 21 of the 58 fusion centers we contacted reported difficulties obtaining the clearances necessary to access different levels of classified materials. DHS and FBI also have provided centers with information for state and local personnel about the security clearance process, stating that processing time for individual security clearances can vary depending on complexity. For example, DHS set a goal of 90 days to complete a Secret clearance, and FBI set a goal of 45 to 60 days to complete a Secret clearance and 6 to 9 months to complete a Top Secret clearance. Yet officials in 32 of the 58 fusion centers, at the time we contacted them, reported difficulties with the length of time it takes to receive a security clearance from DHS or the FBI. For example, in one center that receives security clearances from both DHS and the FBI, officials said that it was taking 6 to 9 months for a Secret clearance and 1 year to 1½ years for a Top Secret clearance. 
While some fusion center officials acknowledged that the process (and the associated length of time) was necessary—to perform the requisite background checks to ensure that clearances are only given to individuals who meet the requirements—others said it was detrimental to the fusion center because newly hired or newly promoted analysts were unable to work without the clearances to perform their duties. To address timeliness concerns, the FBI has taken steps to reduce the turnaround time for clearances. According to the FBI, Top Secret security clearances granted by the FBI to state and local personnel in March 2007 took an average of 63 days to complete, down from an average of 116 days in fiscal year 2006. The FBI is also implementing both short-term solutions—including prioritization of background investigations for state, local, and tribal officials and the electronic submission of fingerprints—and long-term solutions, such as training fusion center security officers to conduct preliminary background checks, according to a May 2007 FBI Interagency Integration Unit review of security clearances. Indeed, officials at one fusion center told us that when the center was opening in 2003, it took approximately 2 years to obtain a clearance, but in January 2007, it took only 3 months to obtain a security clearance for new personnel. While law and executive order provide that a security clearance granted by one federal agency should generally be accepted by other agencies, officials in 19 fusion centers we contacted said they faced challenges with federal agencies, particularly DHS and the FBI, accepting each other’s clearances. This reported lack of reciprocity could hinder the centers’ ability to access facilities, computer systems, and information from multiple agencies. For example, an official at one fusion center who holds an FBI security clearance said he was unable to access other federal agencies’ facilities. An official at another fusion center said that DHS did not accept clearances that had been issued by the FBI to fusion center personnel and therefore would not provide access to information technology or intelligence. DHS and DOJ officials said that they were not aware of fusion centers encountering recent challenges with reciprocity of security clearances, but they said that there were complications in the clearance process. For example, a DHS official said that multiple federal agencies carry out their own clearance processes and grant clearances without central coordination. For instance, both DHS and the FBI could each be conducting a separate security clearance investigation and determining eligibility for access to classified information on the same individual. An FBI official also explained that some agencies and some parts of the Department of Defense do require a polygraph examination to obtain a clearance and some do not, so reciprocity among those agencies with different standards may be an issue. Indeed, the DHS official acknowledged that, overall, federal agencies do not have a consolidated system for granting and handling security clearances and said that currently there are not sufficient federal efforts to develop such a system. Officials in 43 of the 58 fusion centers we contacted reported facing several challenges related to obtaining personnel, and officials in 54 of the centers reported encountering funding challenges when establishing and operating their centers, challenges which some of these officials also indicated affected their centers’ sustainability. 
Although many of these reported challenges were attributed to difficulties at the state and local level, DHS and FBI have efforts under way to help support fusion centers by providing some personnel and grant funding. Officials in 37 of the 58 centers we contacted said they had difficulty with state, local, and federal agencies assigning personnel to the center—one means of staffing the centers—primarily as a result of resource constraints. Most (27 of the 37) of these officials identified challenges with state and local agencies rather than with federal agencies contributing personnel. For instance, an official at one fusion center said that, because of limited resources in state and local agencies, it is challenging to convince these agencies to contribute personnel to the center because they view doing so as a loss of resources. In addition, officials in 8 of the 58 centers we contacted said that they had difficulty with state and local agencies contributing personnel to their centers specifically because the state and local agencies had to continue to fund the salaries of personnel assigned to the fusion centers from their own budgets. Similarly, CRS reported that there are many cases in which local law enforcement agencies appear unconvinced of the value of fusion centers—and by their cost/benefit analysis, it does not benefit their agencies to detail personnel to the center. In terms of federal personnel, officials in 11 of the 58 fusion centers said that they encountered challenges with federal agencies not contributing personnel to their centers. In addition, officials in 20 of the 58 fusion centers we contacted said that they faced challenges finding, attracting, and retaining qualified personnel. Specifically, officials from 12 of these centers said that they had difficulty finding qualified personnel. For instance, an official from one fusion center said that finding personnel with the expertise to understand the concept behind the development of the center and to use the tools to build the center was challenging, while an official at another fusion center acknowledged that there was a very limited number of qualified candidates in the state from which to hire personnel. Additionally, officials in eight centers reported that retention was a challenge because of competition with other entities, particularly higher-paying federal agencies and private sector companies. In some cases, such as for those analysts hired by the FBI, the official said that the federal salaries are almost twice what the center could afford to pay. An official at another fusion center expressed concern that, if fusion centers do not find a way to offer state and local analysts a career path comparable to that offered by the federal agencies, fusion centers will see a plateau in the quality of available analysts. To support fusion centers and facilitate information sharing, DHS and FBI have each assigned federal personnel to centers. As of September 2007, DHS had deployed intelligence officers to 17 of the 43 operational fusion centers we contacted, and was in the process of staffing 6 additional centers we contacted. The FBI had assigned personnel to about three quarters of the operational fusion centers we contacted. Additionally, DHS was in the process of staffing 2 local fusion centers we did not contact, and the FBI had assigned personnel to 7 local fusion centers that were not included in our review. 
In terms of the future, DHS plans to place intelligence officers in as many as 35 fusion centers by the end of fiscal year 2008. DHS has not determined to what extent it will provide additional staff to centers after the first round of assessments and placements are completed. For their part, FBI officials noted in January 2007 that the FBI’s process and criteria for staffing personnel to fusion centers remain ongoing. Because of the variety of fusion centers, the FBI—through its field office leaders—conducts its staffing efforts on a case-by-case basis using criteria such as whether the fusion center has a facility, connectivity to state and local systems, and personnel from multiple agencies. Officials in 35 of the 58 centers we contacted cited a variety of challenges with the federal grant process, including its complexity, and challenges related to uncertain federal funding or declining federal funding, challenges that led to overall concerns about the sustainability of the centers. For example, officials in 16 of the 58 fusion centers we contacted said that they faced challenges with the federal grant process, including unclear and changing grant guidance and a lack of understanding of how federal funding decisions are made. One official said that the fusion center did not perceive a link between the work performed at the center and the level of federal funding received, and hence, he did not understand how DHS made its funding decisions for fusion centers. The official added that it is important for local units of government to better understand these issues to help them understand the need to provide funding for fusion centers. Further, officials in 22 of the fusion centers said that they encountered challenges related to the sustainability of federal funding, such as the potential for, or actual, declining federal funding, which created concerns for the officials about their centers’ ability to sustain capability for the long term. Officials at another fusion center said that they are concerned that they will establish a fusion center with DHS funding only to have the funding end in the future and the center close because the region is unable to support it. When asked about key factors for sustaining their centers, officials in 41 of the 58 fusion centers indicated funding, and several specified a sustainable source or mechanism for that funding. Officials in 40 of the 58 fusion centers we contacted identified challenges with finding adequate funding for specific components of their centers’ operations—in particular personnel, training, and facilities—and officials in 24 of those 40 centers related these challenges to restrictions and requirements of federal grant funding. Specifically, officials in 21 fusion centers we contacted said that obtaining adequate funding for personnel was difficult, and officials in 17 fusion centers found federal time limits on the use of DHS grant funds for personnel challenging—challenges that they said could affect the sustainability of their centers. For example, one official at another fusion center said that the 2-year time limit on the use of DHS grant funds for personnel makes retaining the personnel challenging because state and local agencies may lack the resources to continue funding the positions, which could hinder the fusion center’s ability to continue to operate. Officials in eight of the fusion centers expressed concerns about maintaining their personnel levels, particularly if federal funding declines. 
For instance, one fusion center official said that a state police official did not want to fund analysts with federal grant funds because of a concern that, if the federal grant funds end, the center will lose qualified personnel who would take their knowledge base with them. Furthermore, officials in 17 of the 58 fusion centers we contacted said that complying with the DHS grant requirement for training newly hired analysts (that they attend training within 6 months or have previous analytical experience), or funding the costs associated with training, was challenging. For example, one fusion center official said that the center found the restrictions on the particular training for which grant funds can be used to be challenging. In addition, officials in 14 of the centers said that they had difficulty funding training costs, such as when using the funds for training conflicted with buying equipment or other tangible goods. Finally, officials in 14 fusion centers said that funding their facilities poses a challenge, particularly because of DHS restrictions on the use of grant funds for construction and renovation. For example, officials in eight fusion centers said that the DHS grant restrictions on construction and renovation have made it challenging to meet security requirements for their facilities, build a secure room, or build or renovate their facilities. Officials in 17 of the 58 fusion centers we contacted said that competition for funding with state and local entities was challenging, particularly as a result of the pass-through requirement associated with DHS grant funding, in which the state must make no less than 80 percent of the total grant available to local units of government. For example, one fusion center official said that it is very difficult for state-run fusion centers to cover costs such as hiring analysts and completing renovations to their physical space out of the 20 percent of the DHS grant funds they are eligible to receive after the state complies with the pass-through requirement. Other officials noted that, even after the state has complied with the pass-through requirement, fusion centers must still compete with other state and local entities for the remaining DHS funding. For instance, one fusion center official said that the state emergency management agency wants to dedicate DHS funds to priorities other than the fusion center, such as the purchase of new fire-fighting equipment. CRS also reported that the 80 percent funding requirement was cited continually by fusion center officials as a major hurdle in channeling homeland security funds toward statewide fusion center efforts. Officials in 28 of the 58 fusion centers we contacted told us that they also had difficulty obtaining state or local funding for a variety of other reasons, including state or local budgetary constraints; challenges with convincing state officials, for example, in disciplines other than law enforcement, to provide funding to support the fusion center; and contending with state and local officials who thought the federal government should be responsible for funding fusion centers. Further, 5 of these fusion center officials expressed concerns about their centers' long-term sustainability without state or local funding. For example, one official said that federal funding for the center will eventually end and the state will need to provide funding to support the fusion center, but the state currently has no plan for providing that support.
However, an official at another fusion center expressed concern that federal funding could cause states to lose autonomy over their centers and that the centers would become federal fusion centers located in a state rather than the state fusion centers originally envisioned. DHS homeland security grant programs, such as SHSP, LETPP, and UASI, have provided funding to state and local entities for data collection, analysis, fusion, and information-sharing projects, and DHS has adjusted the programs to support fusion centers. Table 2 shows that from fiscal years 2004 through 2006, DHS allocated almost $131 million to states and local areas from these programs for what DHS defined as fusion-related activities. Further, according to DHS, 49 states and 37 local jurisdictions submitted grant investment justifications in fiscal year 2007 in support of information-sharing and dissemination efforts, with requests for funding totaling nearly $180 million, though exact funding amounts had not been determined as of August 2007. Exact funding amounts for fusion centers will be determined on the basis of the prioritization and allocation of funds by states. For fiscal year 2007, DHS included language in its grant guidelines emphasizing fusion center activities and explicitly made establishing and enhancing fusion centers a priority for the Law Enforcement Terrorism Prevention Program. However, these grant programs are not specifically targeted at or limited to fusion centers. As a result, funding provided to states may not necessarily reach a particular fusion center because, as a DHS official noted, states are free to reprioritize their use of grant funds after submitting their grant justifications to DHS and receiving allocated funds. Thus, fusion centers cannot be certain that they will receive funds to sustain fusion center activities from year to year or over the long term. Over time, DHS has also made several changes to help address challenges identified by fusion centers by focusing homeland security grants on fusion-related activities, by taking steps to ease the grant process, and by adjusting some of the restrictions on the timing and use of grant funds. For example, DHS expanded grant funding in fiscal year 2006 in the area of allowable costs for information sharing and collaborative efforts. Funds could be used by states to develop and enhance their fusion centers, particularly by hiring contract or government employees as intelligence analysts; purchasing information technology hardware, software, and communications equipment; hiring consultants to make recommendations on fusion center development; or leasing office space for use by a fusion center. In addition, DHS continued to make Homeland Security Grant Program adjustments in fiscal year 2007 based on outreach to grant program stakeholders. For example, DHS gave potential applicants more time to complete the grant application process, and the period of performance under HSGP grants increased from 2 years to 3 years. DHS and DOJ have collaborated to provide guidance and technical assistance to fusion centers and, along with the PM-ISE, have sponsored regional and national conferences, in part to determine the needs of fusion centers. For example, in August 2006 DHS and DOJ jointly issued their most recent Fusion Center Guidelines, which outline 18 recommended elements for establishing and operating fusion centers.
The Guidelines were intended to ensure that state and local fusion centers could be established and operated consistently and were developed to help fusion center administrators create policies, manage resources, and evaluate fusion center services. Officials in 48 of the 58 fusion centers told us that the Guidelines were generally good and useful. However, officials in 20 of the 58 fusion centers we contacted found the available federal guidance lacking in specificity, conflicting, confusing, or difficult to implement in their individual centers. For example, some of these officials said that the Guidelines were broad and did not provide guidance on specific issues relevant to operating a fusion center, such as how to connect the multiple information-sharing systems or set up their physical space. In addition, officials in 19 of the 58 fusion centers we contacted said that they lacked guidance on specific information-sharing policies and procedures, such as sharing or handling sensitive or classified information or privacy and civil liberties issues. For example, officials in some fusion centers we contacted said that they lacked guidance on sharing and handling classified information, and officials in five fusion centers said the lack of guidance on privacy and civil liberties issues is a concern when sharing or storing information. To illustrate, officials at one fusion center said that the absence of an encompassing guideline to use as a standard makes it difficult to manage information sharing across levels of government and among states because of the variations in state and federal privacy laws and regulations. For instance, federal regulation provides that certain information on individuals may not be retained for longer than 5 years, whereas the center's state requirement provides that information may not be retained for longer than 1 year. FEMA/NPD, DOJ's Bureau of Justice Assistance (BJA), and the FBI have partnered to provide a program of technical assistance services for fusion centers to facilitate information sharing. As shown in table 3, many of the technical services provided under this program offer an overview or general information, and the technical assistance efforts focus on giving the state or local area the basic tools it needs to successfully establish and operate a fusion center, such as helping to create a governance board, assisting with the development of a fusion process implementation plan, and providing the basics of fusion center operations. As of September 2007, there had been 35 service deliveries to 19 fusion centers, according to FEMA/NPD and BJA officials. DHS and DOJ have numerous efforts under way to provide training to fusion centers. Also, DHS offers over 90 courses from 45 training partners and is working to increase the availability of training under Homeland Security Grant Program funding. According to FEMA/NPD officials, DHS recently approved for funding three courses, two of which involve analyst training. DHS and BJA provide a number of training services under their joint technical assistance program, and the FBI provides ongoing training for fusion centers through its field offices. However, officials in 21 of the 58 fusion centers we contacted said that the availability of adequate training for mission-related issues, such as training on intelligence analysis, was a challenge.
Further, officials in 11 fusion centers we contacted, most of whom were in fusion centers that had been in operation for more than 2 years, said that they lacked national standards or guidelines for analyst training or qualifications. For example, one fusion center official said that no one federal agency has taken responsibility for determining a single, standardized training agenda—including content and length of time—for both new and experienced analysts. Officials in another fusion center said that the center had difficulty creating a training program for its analysts because of the lack of a coordinated, trusted set of training guidelines. Two other officials said that they would like to see a federal baseline of appropriate and necessary analyst training, with a certification attached to its completion, or a standardized course of analyst training to ensure that analysts are trained in the same way nationwide. They said that this would help fusion center analysts better communicate and be more likely to share information with analysts in other centers. DHS and FBI officials noted some challenges with designating a single training curriculum for fusion center analysts because agencies and training groups differ on what should constitute the minimum baseline. To remedy this, the NFCCG has developed and documented minimum baseline capabilities for state and major urban area fusion centers and, as of September 2007, was in the process of evaluating the current level of capability of designated state and major urban area fusion centers. The minimum baseline capabilities require fusion centers to develop a training plan to ensure their personnel are knowledgeable about fusion center operations, policies, and procedures, including training on the intelligence and fusion processes, analytic processes and writing skills, security policy and protocols, and the fusion center mission and goals. However, it is too soon to determine the extent to which the baseline document sets out minimum training standards for fusion center analysts that would address the challenges fusion centers reported to us. Although state and local governments created fusion centers to fill their information needs, the centers have attracted the attention of the federal government as it works to improve information sharing with state, local, and tribal entities in accordance with the Homeland Security and Intelligence Reform Acts, as amended. Indeed, recognizing that the collaboration between fusion centers and the federal government marks a tremendous increase in the nation's overall analytic capacity that can be used to combat terrorism, the PM-ISE's implementation plan envisions that the federal government will work to promote fusion center initiatives to facilitate information sharing and designates fusion centers as the focus of sharing with state, local, and tribal governments. Given the federal interest in fusion centers and the fusion centers' interest in supporting a national network, it is important that the federal government continue to provide fusion centers with added value as an incentive to participate in such a network. To date, DHS's and DOJ's efforts to assist fusion centers, such as providing access to information systems, security clearances, personnel, funding, and guidance, have begun to address a number of the challenges fusion center directors identified to us.
However, it is also important for fusion center management to understand the federal government's longer-term role with respect to these centers. Many fusion center officials were uncertain about the level of future resources and the sustainability of federal support. Although the federal government cannot make promises regarding future resources, decisions could be made and articulated to fusion centers regarding whether the federal government views its role with respect to providing resources—such as grant funding, facilities, personnel, and information-sharing systems—to fusion centers as short term (providing start-up resources) or longer term (supporting operational needs). The National Fusion Center Coordination Group (NFCCG) is already tasked with identifying grant funding, technical assistance, and training to support fusion centers. However, to date, the efforts of the NFCCG have not included delineating whether such assistance is for the short-term establishment or for the long-term sustainability of fusion centers. The NFCCG, through the PM-ISE and the Information Sharing Council, would be in the best position to articulate whether fusion centers can expect to continue to receive this support over the longer term. To improve efforts to create a national network of fusion centers, we recommend that the NFCCG, through the Information Sharing Council and the PM-ISE, determine and articulate the federal government's role in, and whether it expects to provide resources to, fusion centers over the long term to help ensure their sustainability. Particular emphasis should be placed on how best to sustain those fusion center functions that support a national information-sharing capability as critical nodes of the ISE. We requested comments on a draft of this report from the Secretary of Homeland Security, the Acting Attorney General, and the Program Manager for the Information Sharing Environment or their designees. In commenting on drafts of the report, DHS and the PM-ISE concurred with our recommendation that the federal government should determine its long-term fusion center role and whether it expects to provide resources to centers to help ensure their sustainability. DOJ had no comments on the draft. Further, DHS commented that it, along with its federal partners, is reviewing strategies to sustain fusion centers as part of the work plan of the National Fusion Center Coordination Group. This group plans to present these strategies to the federal departments before the end of the year. As agreed with your offices, unless you publicly release the contents of this report earlier, we plan no further distribution until 30 days from the report date. We will then send copies of this report to the Secretary of the Department of Homeland Security, the Acting Attorney General, the Program Manager for the Information Sharing Environment, selected congressional committees, and other interested parties. In addition, this report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or [email protected]. Contact information for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.
The objectives of our review were to (1) describe the stages of development and characteristics of state and local fusion centers and (2) identify to what extent efforts under way by the Program Manager for the Information Sharing Environment (PM-ISE), Department of Homeland Security (DHS), and Department of Justice (DOJ) help to address some of the challenges identified by fusion centers. To describe the stages of development and characteristics of state and local fusion centers, we conducted semistructured telephone interviews with the director (or his or her designee) of every state fusion center, the District of Columbia fusion center, and eight local fusion centers. We defined "local fusion center" to include centers established by major urban areas, counties, cities, and intrastate regions. Our selection criteria for local fusion centers included their relationships with the state fusion center, stage of development, and geographic diversity. Fusion center officials we spoke with included state and local police officials, agents in state bureaus of investigation, state homeland security directors, and directors in state public safety departments. Where a fusion center was in the planning stages, we spoke with officials involved in planning and establishing the center, such as directors of state homeland security offices. We asked fusion center officials about the status and characteristics of the fusion centers, including their stages of development, reasons for establishment, scopes of operations, and the types of funding the centers received. We relied on the centers' own definitions of themselves as fusion centers and did not evaluate their status, characteristics, or operations. From February through May 2007, we spoke with officials from all 50 states, the District of Columbia, and 8 local jurisdictions. While we did contact officials in all state fusion centers, we did not contact officials in all local fusion centers; therefore, our results are not generalizable to the universe of fusion centers. Data were not available to determine the total number of local fusion centers. We also obtained and summarized descriptive information from the fusion centers, including structure, organization, personnel, and information technology systems used. We provided the summaries to the fusion centers for a review of accuracy. However, we did not independently verify all of the information provided to us. We also interviewed officials from 11 organizations conducting research on state and local information sharing, including RAND, the Police Executive Research Forum, the International Association of Chiefs of Police, and the Congressional Research Service (CRS), which released a report in July 2007 on fusion centers. Finally, to obtain detailed information about centers' operations, we conducted site visits to fusion centers in Atlanta, Georgia; Phoenix, Arizona; Richmond, Virginia; Baltimore, Maryland; West Trenton, New Jersey; and New York, New York. Our selection criteria for these centers included their stages of development, extent of federal partnerships, and geographic representation. To identify to what extent efforts under way by the PM-ISE, DHS, and DOJ help to address some of the challenges identified by fusion centers, we analyzed fusion center responses to our semistructured telephone interviews, reviewed applicable documents, and interviewed officials at the PM-ISE, DHS, and DOJ, as well as several organizations conducting research about fusion centers.
Specifically, to describe the challenges fusion centers encountered in establishing themselves and operating, we asked officials during our semistructured telephone interviews whether they had encountered challenges in 10 different categories and, if so, the extent to which the category was a challenge both at establishment and, for operational centers, in day-to-day operations. These categories included federal partnerships, personnel, guidance, training, funding, access to information, and security clearances. Fusion center officials provided open-ended, descriptive responses of challenges faced by their centers. On the basis of a content analysis of fusion center officials’ responses, we identified, categorized, and counted similar challenges. Fusion center officials may not have indicated that they encountered all the challenges discussed in the report. In addition, individual fusion center officials may have identified multiple challenges in a given category, for example funding. We also reviewed CRS’s July 2007 report to obtain information on fusion center challenges. In addition, to determine to what extent efforts under way by the PM-ISE, DHS, and DOJ help to address some of the challenges identified by fusion centers, we reviewed applicable federal laws, executive orders, directives, briefings, testimonies, plans, reports, and documents to identify efforts of the PM-ISE, DHS, and DOJ to address challenges identified by fusion centers. We interviewed officials at the PM-ISE’s office, DHS’s Office of Intelligence and Analysis, the Federal Emergency Management Agency National Preparedness Directorate, the Federal Bureau of Investigation (FBI), and DOJ’s Bureau of Justice Assistance and discussed efforts under way to address challenges identified by fusion centers. We also asked fusion center officials in our semistructured telephone interviews to describe the support they had received and were interested in receiving from DHS and the FBI. We performed our work from August 2006 through September 2007 in accordance with generally accepted government auditing standards. Table 4 presents information about operational fusion centers as reported to us by fusion center officials during semistructured interviews, as of September 2007. During these interviews, we asked officials to characterize their fusion centers as being in one of the following stages: planning, early development, intermediate (limited operations and functionality), or developed (fully operational and fully functional). Table 5 presents information about fusion centers in the planning and early stages of development, as reported to us by fusion center officials during semistructured interviews as of September 2007. During these interviews, we asked officials to characterize their fusion centers as being in one of the following stages: planning, early development, intermediate (limited operations and functionality), or developed (fully operational and fully functional). Following is a summary of the status and selected characteristics of the state and local fusion centers we contacted between February and May 2007. The summaries are primarily based on documents provided to us by fusion centers and interviews we conducted with fusion center officials. Specifically, we obtained and summarized documentation about the centers that covered a variety of topics including mission; lead agency; staffing; federal, state, and local entities represented; and types of services performed and products disseminated. 
During semistructured interviews with officials, we asked about the stage of development of the fusion center, reasons for establishing the center, and the scope of operations (e.g., counterterrorism). In some instances, we augmented the information provided to us by fusion center officials with publicly available information about the fusion center or information provided to us by the Department of Homeland Security (DHS) or the FBI. We sent the summaries to the fusion centers for a review of accuracy as of September 2007. However, we did not independently verify all of the information provided to us. The Alabama Department of Homeland Security is in the final planning stage of establishing the Alabama Information Fusion Center. The center intends to use information not normally considered crime-related to prevent terrorist activity, but it will also adopt an all-crimes scope of operations. The fusion center has appointed an officer in charge and is in the process of acquiring additional staff members. However, the center is not yet fully operational. The executive order that will establish the office has been submitted to the Governor for approval, and it is expected that the fusion center will open for business in the fall of 2007. The Alaska Fusion Center is in the advanced planning stage, with the major concentration on defining the missions, developing the governance, and outlining potential products and services. The fusion center will be a combined effort of the Alaska Department of Public Safety and the Alaska Division of Homeland Security and Emergency Management. While they do not have a physical fusion center, planning officials have partnerships established with the FBI, other federal and state law enforcement, the U.S. Attorney's Office, the U.S. Coast Guard, the military, and the Federal Emergency Management Agency (FEMA). Through these partnerships, the member agencies already share information and coordinate activities. The officials said that they are considering the advantages of a joint, permanently staffed facility. If feasible and advantageous, they will plan to build or move into an available facility in the future. The Alaska Fusion Center will have an all-crimes, all-hazards, and all-source scope of operations. As a result of the involvement of the Department of Public Safety and the Division of Homeland Security and Emergency Management in developing the fusion center, the center will have both law enforcement and emergency management components. All-source includes law enforcement information as well as economic information and infrastructure issues. The center will have three focus areas: day-to-day compilation, distillation, and distribution of information products; analyses and assessments of patterns and trends in the risks, threats, and hazards facing Alaska; and serving as an operational planning group for all agencies when a threat emerges or a disaster occurs. The center has access to DHS's Homeland Security Information Network (HSIN), Department of Justice's Law Enforcement Online (LEO), and the Department of Defense's Secret Internet Protocol Router Network (SIPRNet). The Arizona Counter Terrorism Information Center (AcTIC) opened in October 2004 as a cross-jurisdictional partnership among local, state, and federal law enforcement; first responders; and emergency management.
Mandated by the Governor’s Arizona Homeland Security plan, AcTIC’s mission is to protect the citizens and critical infrastructures of Arizona by enhancing intelligence and domestic preparedness operations for all local, state, and federal law enforcement agencies. Mission execution will be guided by the understanding that the key to effectiveness is the development of information among participants to the fullest extent permitted by law or agency policy. AcTIC has an all-crimes focus and both an analytical and investigative scope of operations. AcTIC is run jointly by the FBI and the Arizona Department of Public Safety. There are 24 state, local, and federal agencies represented in the center. Among them are the Arizona Department of Public Safety; Arizona Department of Homeland Security; Arizona National Guard; Arizona Motor Vehicle Department; Arizona Department of Liquor License & Control; a number of county and city fire and law enforcement departments; the Rocky Mountain Information Network; the Bureau of Alcohol, Tobacco, and Firearms (ATF); U.S. Immigration and Customs Enforcement (ICE); the Department of State; DHS’s Office of Intelligence and Analysis (I&A); and the FBI. AcTIC is colocated in the same building with the FBI’s Joint Terrorism Task Force (JTTF) and Field Intelligence Group (FIG). These FBI groups are located in a separate suite and operate at the Top Secret/Sensitive Compartmented Information (TS/SCI) level. In addition, AcTIC has collaborated with Arizona State University-West Campus to create an internship program. Overall, there are about 240 personnel in AcTIC, including investigators, analysts, and support personnel. Most AcTIC personnel receive Secret clearances from the FBI. AcTIC is overseen by a Management Board that consists of the leader of every agency represented in the center and a governor-appointed Oversight Committee that provides guidance to the center. Within AcTIC, there is a Watch Center that is the central location for all information coming into the AcTIC. In addition, the facility houses the Terrorism Liaison Officer (TLO) squad, the HAZMAT/Weapons of Mass Destruction unit, a computer forensics laboratory, the Criminal Investigations Research Unit, Geographical Information Systems, and the Violent Criminal Apprehension Program. AcTIC concentrates on an all-crimes focus for gathering information, which is collected from a variety of Web sites; federal, state, and local databases and networks; the media; and unclassified intelligence bulletins. DHS and DOJ information systems or networks accessible to the center include LEO Special Interest Groups, HSIN-Intel, HSIN-Intel Arizona, and HSDN. AcTIC has direct connectivity to FBI classified systems and networks. However, those AcTIC personnel with Top Secret clearances must enter the JTTF suite and access an FBI system. AcTIC has access to, among others, Regional Information Sharing System (RISS) Automated Trusted Information Exchange (ATIX), SIPRNet, the National Criminal Information Center (NCIC), International Criminal Police Organization (INTERPOL), Financial Crimes Enforcement Network (FinCEN), and El Paso Intelligence Center (EPIC). In total AcTIC has over 100 law enforcement and public source databases available to it. AcTIC produces biweekly intelligence briefings, advisories, citizens’ bulletins, information collection requirement bulletins, information bulletins, intelligence bulletins, and threat assessments. 
These products are primarily created for law enforcement entities and specific community partners, but some are for the public (e.g., advisories and citizens' bulletins). The products are typically disseminated via e-mail, Web site postings to LEO or HSIN, or faxes on occasion. The Arkansas State Police is in the early stage of development of the Arkansas Fusion Center. The focus of the Arkansas Fusion Center will initially be all crimes and all threats, although the intent is to incorporate an all-hazards element in the future. Currently, the center has commitments from the following federal, state, and local agencies to assign 12 to 13 full-time personnel to the center: FBI, Arkansas Highway Police, Arkansas Crime Information Center, Arkansas National Guard, Arkansas Department of Corrections, Arkansas Department of Health, Arkansas State Police, Arkansas Game and Fish Commission, the Arkansas Association of Chiefs of Police, and the Arkansas Sheriff's Association. The officials said that they expect to receive funding in the fall of 2007 and that the center may be able to begin limited operations by the winter of 2007. In addition to the State Terrorism Threat Assessment Center (STTAC), California has established four regional fusion centers known as Regional Terrorism Threat Analysis Centers (RTTACs) that are located in San Diego, San Francisco, Los Angeles, and Sacramento and correspond to the FBI's four field office regions. The mission of the RTTACs is to collect, fuse, and analyze information related to terrorism from local law enforcement, fire departments, and public health and private sector entities. Each RTTAC is uniquely organized, but each is closely linked with local sheriffs. We contacted the STTAC, the Los Angeles RTTAC, known as the Joint Regional Intelligence Center (JRIC), and the Sacramento RTTAC. Former Governor Gray Davis and Attorney General Bill Lockyer created the California Anti-Terrorism Information Center on September 25, 2001, and in December 2005 the center was transformed into the State Terrorism Threat Assessment Center. STTAC is a joint partnership of the Governor's Office of Homeland Security, the California Department of Justice, and the California Highway Patrol. The mission of STTAC is to serve as a joint operation among the parties with the function of receiving, analyzing, and maintaining relevant intelligence information obtained from various federal, state, local, and tribal sources, and disseminating counterterrorism intelligence information in appropriate formats to individuals and entities in California for the purpose of protecting California's citizens, property, and infrastructure from terrorist acts. STTAC's core mission is serving as California's central all-crimes and counterterrorism criminal intelligence center. STTAC is also to perform warning functions with the California State Warning Center in the Office of Emergency Services. STTAC operates in close cooperation with the Office of Homeland Security, the California Highway Patrol, the Office of Emergency Services, the four RTTACs, and federal agencies, including DHS and the FBI. STTAC's authorized staff level is 44, and the staff is composed primarily of California Department of Justice and Office of Homeland Security analysts and investigators. There are also representatives from the California Highway Patrol and the state National Guard. STTAC does not have DHS or FBI staff assigned directly to it.
However, DHS has provided one senior intelligence officer who resides at the Sacramento RTTAC and supports STTAC and another officer who resides at the Los Angeles JRIC. The FBI provides support to STTAC upon request and has assigned personnel to all of the California RTTACs. An Executive Management Board consisting of leaders from the partner agencies provides strategic oversight of STTAC. DHS and DOJ information systems or networks accessible to the fusion center include HSIN (e.g., the Law Enforcement, Counterterrorism, and Intel portals), LEO, the Federal Protective Service (FPS) portal, and the Foreign Terrorist Tracking Task Force portal, as well as several California law enforcement and justice information and intelligence systems and commercially available databases. STTAC provides intelligence support to all state agencies and disseminates situational awareness products. For instance, it supports regional intelligence analysis and criminal investigations by supplying the RTTACs with analytical support, field investigations, and intelligence assessments and reports, among other things. STTAC produces a variety of intelligence products including, but not limited to, the following: advisories that provide a brief description of a local tactical issue, suspect, event, or situation that may be of immediate concern to law enforcement or key policy makers; intelligence bulletins that provide a strategic in-depth review of a particular terrorist group, event, or public safety issue affecting the state; alerts that are issued when there is a specific, validated, and verified threat; special reports that provide extensive overviews of a particular group or issue and contain background information, methods and geographical areas of operation, violence potential, conclusions, and recommendations for interdicting the activity; and threat assessments. The Los Angeles Joint Regional Intelligence Center (JRIC) opened in July 2006. However, according to Los Angeles County Sheriff's Department and FBI officials, the three founding agencies of JRIC—the Los Angeles Sheriff's Department, FBI Los Angeles, and the Los Angeles Police Department (LAPD)—came together and realized that the region needed a center to address counterterrorism and critical infrastructure protection missions after the events of 9/11. The County Sheriff, the Chief of LAPD, and the Assistant Director in Charge of the FBI Los Angeles Field Office jointly decided to develop the center to cover the seven counties in the Los Angeles/southern California area. JRIC brought together the FBI's FIG, LAPD's Major Crimes Division, and the Sheriff's Department's Terrorism Early Warning (TEW) group. JRIC has an all-crimes and counterterrorism scope of operations. Specifically, JRIC collects information using an all-crimes approach, converts the information into operational and strategic intelligence, and disseminates the intelligence to prevent terrorist attacks and combat crime in the Central District of California. Its mission is intelligence intake, fusion, and analysis, with an emphasis on terrorist threat intelligence; providing timely, regionally focused, and actionable information to consumers and producing assessments; identifying trends, patterns, and terrorist tactics, techniques, and procedures; and sponsoring training opportunities. In addition to JRIC's founding agencies, cooperating agencies in JRIC include DHS I&A, the U.S. Attorney's Office, the Governor's Office of Homeland Security, and the California Department of Justice.
DHS I&A has assigned an intelligence officer to JRIC, and the center includes about 30 full-time personnel representing 14 agencies. JRIC personnel receive Secret or Top Secret clearances from the FBI. TLOs connect law enforcement and public safety partners in the seven-county region to JRIC by collecting, assessing, and passing on information, intelligence, tips, and leads to the center and then distributing advisories, bulletins, assessments, and requests for information to their home agencies. JRIC collects information from national reporting; from leads and tips from the FBI, LAPD, the Sheriff's Department, and the TLOs; and from private sector outreach. DHS and DOJ information systems or networks accessible to JRIC include LEO; every HSIN portal (e.g., Intelligence, Law Enforcement, Emergency Management); the classified Homeland Security Data Network (HSDN); and all of the systems and databases available in the FBI's FBINet/Trilogy system. The center also has access to the FBI's Top Secret network, the Sensitive Compartmented Information Operational Network (SCION), through a facility located on the same floor as JRIC. JRIC disseminates information to, among others, JTTFs, the California Office of Homeland Security, DHS and LEO portals, law enforcement and public safety partners, affected municipality and critical infrastructure owners, and the originator of the information (in the form of feedback). JRIC also produces daily and weekly reports. The Sacramento RTTAC was established primarily to bring analysts from different state, local, and federal agencies together to work on terrorism-related issues. The center has been located in its current building, which has a Sensitive Compartmented Information Facility (SCIF), and has been operating at its current level of functionality, at the TS/SCI level, since November 2006. Prior to that, the center operated at a Law Enforcement Sensitive level for about 2 years in a different facility. RTTAC has an all-crimes and counterterrorism scope of operations and handles all of the critical asset management and threat assessment capabilities in its area of responsibility. Participating agencies include the National Guard, FBI, U.S. Attorney's Office, and ICE, as well as representatives from fire, law enforcement, and public health disciplines. DHS I&A has assigned an intelligence officer, and the FBI has assigned two analysts and one intelligence research specialist and recently added a JTTF threat squad to the RTTAC team to vet tips and leads. In addition, there are other state and local analysts in the center. The FBI also provides RTTAC personnel with TS/SCI security clearances. DHS and DOJ information systems or networks accessible to the fusion center include LEO, HSIN, the HSIN-Counterterrorism portal, and HSDN, as well as FBI systems, such as the Automated Case Support (ACS) system and SCION. The RTTAC also has access to SIPRNet, among other federal and state systems and networks. The Colorado Information Analysis Center (CIAC) became operational in October 2004 under the direction of the Colorado Bureau of Investigation. The Colorado State Patrol took over operation and management of CIAC in March 2005, and it moved into its new facility in April 2005. CIAC was originally opened to support and respond to credible threats during the elections in 2004, but it has since evolved to have an all-crimes and all-hazards scope of operations.
Its mission is to provide an integrated, multidiscipline information-sharing network to collect, analyze, and disseminate information to stakeholders in a timely manner in order to protect the citizens and critical infrastructure of Colorado. CIAC has no investigative power but does have the ability to collect, analyze, and vet information for authenticity. When additional investigation is necessary, CIAC sends information to DHS, the FBI's FIG, and local law enforcement. CIAC is staffed full-time by the Colorado State Patrol, the National Guard, the Department of Revenue, and the FBI. There are part-time participants in CIAC from the Colorado Departments of Agriculture, Public Health, Corrections, and Education and the Colorado Springs Police Department, as well as from the U.S. Marshals Service. The University of Denver also provides interns to CIAC. DHS I&A has conducted a needs assessment of CIAC. However, at the time of our review, it had not placed an intelligence analyst in the center. CIAC has access to a regional DHS protective security advisor. DHS and DOJ information systems or networks accessible to CIAC include HSIN, LEO, and the FPS portal. In addition, the center has access to, among others, the Rocky Mountain Information Network, U.S. Northern Command, and SIPRNet, which is accessed through the FBI. CIAC produces several types of bulletins and summaries, including For Official Use Only and Law Enforcement Sensitive versions of a monthly summary of reported incidents, daily reports, officer safety bulletins, and early warning and special reports. These products are e-mailed to a number of recipients, including members of the critical infrastructure sectors. Products are also distributed directly to law enforcement officers via in-car mobile data computers. The monthly summaries are produced with the FBI FIG and also cover incidents in Wyoming, and some of the special reports are produced jointly with the FBI and the U.S. Northern Command. The Connecticut Intelligence Center (CTIC) opened in April 2005 as the centralized point of information sharing for the state. CTIC is a multiagency operation representing various jurisdictions that serves to collect, analyze, and disseminate criminal and terrorism-related intelligence to all law enforcement agencies in the state. CTIC has an all-crimes scope of operations and endeavors to identify emerging threats or crime trends. CTIC is colocated with an FBI field office and jointly led by the FBI and the Connecticut State Police; its 12-member staff includes representatives from the FBI, the U.S. Coast Guard, the state department of corrections, the State Police, and local law enforcement agencies. DHS I&A placed an intelligence officer in the center in September 2007. FBI personnel serve in both supervisory and analytical roles in CTIC. For example, the CTIC Operations Supervisor is also the FBI FIG supervisor. Day-to-day operations are managed by an FBI Supervisory Special Agent and supported by two Intelligence Coordinators, one from the state police and one from the FBI. The FBI also provides Top Secret clearances to CTIC personnel. The state is divided into five regions, each of which is represented in CTIC by a Regional Intelligence Liaison Officer. The officers are appointed by the corresponding Connecticut Police Chiefs Association and represent local law enforcement agencies in the center. The officers maintain full-time positions at CTIC and serve a recommended minimum of 2 years after obtaining a Top Secret clearance.
CTIC offers a stipend to each municipality that places an officer in the center. The officers serve as the communication link between CTIC and a network of Intelligence Liaison Officers, who are specially trained officers representing local departments within each region. The Intelligence Liaison Officers are responsible for providing information to CTIC and for providing statewide and jurisdiction-specific information from CTIC to their respective agencies. CTIC has an Advisory Board that meets quarterly and defines strategy and policy for the center. CTIC also has partnerships with the private sector through Connecticut Infragard. CTIC takes an all-crimes approach to information collection and has access to a number of state and federal systems and networks. DHS and DOJ information systems or networks accessible to CTIC include HSIN and LEO, as well as ACS, Guardian, and the Investigative Data Warehouse (IDW) through the FBI. In addition, CTIC has access to the New England State Police Information Network, which is part of RISSNet, and SIPRNet. CTIC produces a variety of intelligence products, including weekly bulletins on criminal activities; weekly intelligence bulletins; intelligence assessments, which provide in-depth reporting on an emerging threat, group, or crime; and intelligence information reports. The primary customers for these products are law enforcement officers, emergency managers, and the private sector in the state and the Northeast region. After the September 11, 2001, attacks, Delaware officials identified a need to establish a conduit for information flow, both to and from the federal government and local entities and in and out of Delaware. Led by the Delaware State Police, the Delaware Information Analysis Center (DIAC) was subsequently opened in December 2005. DIAC, through a multijurisdictional and multidiscipline effort, is committed to providing a coordinated, professional, and all-hazards approach to preventing, disrupting, and defeating criminal and terrorist activity while safeguarding individuals' constitutional guarantees. Specifically, using an all-crimes and all-hazards approach, DIAC will collect, analyze, and disseminate criminal intelligence; conduct crime analysis; provide officer and public safety alerts to all disciplines; and disseminate critical infrastructure information to those persons in law enforcement, government, and the private sector who have both a right and a need to know, with the objective of protecting the citizens, infrastructure, and key assets of the state. Partners from other state agencies include Public Health, the Department of Technology and Information, the Department of Corrections, Transportation, the Division of Revenue, and Natural Resources, as well as the Delaware Volunteer Firemen's Association and all other law enforcement entities in the state, including local and federal agencies. At the time of our review, DIAC staff included six full-time analysts and two Delaware National Guard analysts, as well as three personnel assigned to critical infrastructure protection. DIAC also has two Delaware State Police commissioned officers assigned in administrative roles. Two of the six state police analysts have Top Secret clearances that were granted by the FBI. At the time of our review, there were no DHS or FBI personnel represented in DIAC.
Analysts produce a variety of products, including a weekly intelligence report for law enforcement and a weekly infrastructure bulletin for private sector partners, as well as situational reports and homeland security and situational alerts. Tactical alerts and reports on multijurisdictional criminal activity are supplied to Delaware law enforcement agencies in many forms, such as officer safety warnings, warnings and indicators of terrorist events, site-specific critical infrastructure and asset alerts, and informational bulletins and assessments. DHS and DOJ information systems or networks accessible to the fusion center include HSIN, LEO, and the FPS portal. In addition, DIAC has access to information from the High Intensity Drug Trafficking Area (HIDTA) program and the Information Sharing and Analysis Centers, which are private sector critical infrastructure protection sharing centers. Both information and intelligence are collected from and disseminated to other state fusion centers, DHS, the FBI, the U.S. Coast Guard, local law enforcement, the private sector, the Delaware National Guard, Dover Air Force Base, other state agencies, and the Information Sharing and Analysis Centers. After a planning stage that began in 2005, the Metropolitan Washington Fusion Center (MWFC) opened in the spring of 2006 to provide local governments and agencies with an approach and capability for networked information sharing. Led by the Metropolitan Police Department, MWFC has multiple agencies and disciplines represented and serves the National Capital Region. MWFC has a 24/7 command center that provides a constant flow of information and, alongside the crime analysis unit, looks at that information for patterns of activity. MWFC is an all-crimes center but also has an all-hazards function, as it follows the National Infrastructure Protection Plan, in particular focusing on the large number of national monuments located within the Washington metropolitan area. The all-hazards function is supported by partnerships with the Department of Health, which helps with responding to health issues such as pandemics and natural disasters, and the Washington, D.C., National Guard, which helps with the analysis of patterns and response to events. It is also coordinated with the MWFC's Emergency Management Agency functions. An official said that MWFC did not want to focus only on crime because important threat information and information that leads analysts to detect suspicious patterns occur in many other areas as well. In addition, it was important to MWFC to adopt the dual all-crimes, all-hazards focus because the fusion center wanted to give a number of partner agencies "a seat at the table" to increase support of the center. An official also noted that MWFC has created a Fusion Center Regional Programmatic Workgroup to develop a regional strategy, products, and a charter and to form a solid, cohesive, common operating picture for the region. The FBI and DHS I&A have assigned personnel to MWFC. At the time of our review, the fusion center was located in secure space provided by the FBI. However, according to the official, the center is planning to move into D.C. government space within 30 months. DHS and DOJ information systems or networks accessible to the fusion center include LEO and FBINet. The center will be a RISS node through the Middle Atlantic-Great Lakes Organized Crime Law Enforcement Network and is in the review process with DHS to receive HSDN.
The Florida Fusion Center is a component of the Florida Department of Law Enforcement's (FDLE) Office of Statewide Intelligence. According to FDLE officials, the Office of Statewide Intelligence was created in 1996 with the primary mission "to provide FDLE leadership with sufficient information so that they (sic) may make informed decisions on the deployment of resources." The office is responsible for the coordination of FDLE's intelligence efforts and the analysis and dissemination of intelligence and crime data. The office has always had an all-crimes approach that was reflective of FDLE's investigative strategy and focus areas. This approach was enhanced with the addition of a domestic security mission after 9/11. Under the coordination of FDLE, seven regional domestic security task forces were created, along with an analytical unit within the Office of Statewide Intelligence, to enhance domestic security and counterterrorism investigative efforts. Each task force is cochaired by an FDLE Special Agent in Charge and a sheriff from the region. The task forces include multidisciplinary partners from education, fire rescue, health, communications, law enforcement, and emergency management. These disciplines are also reflected in the composition of the fusion center. The Florida Fusion Center was established in January 2007 with a mission to protect the citizens, visitors, resources, and critical infrastructure of Florida by enhancing information sharing, intelligence capabilities, and preparedness operations for all local, state, and federal agencies in accordance with Florida's Domestic Security Strategy. The fusion center will serve as the state node and will provide connectivity and intelligence sharing among Florida's regional fusion centers. The center consists of approximately 45 FDLE members as well as federal and state multidisciplinary partners, and it includes outreach to private sector entities. FDLE members who are part of the fusion center have assignments to various squads within the Office of Statewide Intelligence, including counterterrorism intelligence, financial crime analysis, critical infrastructure, and a 24/7 situational awareness unit (the Florida investigative support squad). The center also has full-time analysts from DHS I&A and the FBI, as well as representation from the U.S. Attorney's Office. The center will add a full-time analyst from the Florida National Guard in October 2007. State agencies and departments that have committed to participate as members of the Fusion Center Executive Policy Board and have designated an intelligence liaison officer or analyst to the fusion center include: Agriculture; Business and Professional Regulation; Corrections; Education; Emergency Management; Environmental Protection; Fish and Wildlife Conservation Commission; Financial Services; Health; Highway Safety; FDLE; Transportation; and the National Guard. DHS and DOJ systems and networks the center has access to include LEO, HSIN, HSIN-Intel, HSIN-Florida, and HSDN. The Georgia Information Sharing and Analysis Center (GISAC) was established in October 2001 and falls under the responsibility and management of the Georgia Office of Homeland Security. The initial focus of GISAC was to address terrorism and the information gap among federal, state, and local law enforcement in providing homeland security intelligence.
Its mission is to serve as the focal point for collection, analysis, and dissemination of information on threats or attacks of a terrorist nature within and against the State of Georgia, its citizens, or infrastructure. GISAC is one of the three components of the Georgia Office of Homeland Security and is divided into four sections—law enforcement, criminal intelligence, fire services/hazmat, and emergency management. GISAC has a staff of 27, the majority of whom are personnel from the Georgia Bureau of Investigation. Other state agencies with assigned personnel at the center include the Georgia Emergency Management Agency, Georgia State Patrol, Georgia Department of Corrections, and Georgia National Guard. The Georgia Sheriffs' Association, Georgia Fire Chiefs Association, and Georgia Association of Chiefs of Police have each assigned one person to the center. DHS I&A has assigned two staff to GISAC: one Southeast region representative and one intelligence officer. There are no FBI personnel assigned directly to GISAC. However, there are two GISAC personnel assigned to the JTTF, and all analysts have access to the FBI FIG, through which they have access to FBI systems. GISAC is also located in the same building as the FBI field office with its JTTF and FIG. GISAC produces a variety of products, including a weekly open source report, which is posted on the Office of Homeland Security-Georgia Emergency Management Agency Web site and distributed electronically; a monthly intelligence report that is For Official Use Only and distributed electronically; alerts and notices, which are produced on an as-needed basis; a monthly outbreak and surveillance report from the Georgia Department of Health; and a Georgia Bureau of Investigation-produced joint GISAC/FBI multipage monthly bulletin that contains GISAC statistics combined with FBI information. DHS and DOJ systems and networks to which GISAC has access include HSIN, HSDN, and LEO. In addition, GISAC analysts are able to access FBI systems, such as E-Guardian, IDW, and ACS. According to a State of Hawaii Department of Defense official, for the past 2 years officials from civil defense and state law enforcement have discussed the possibility of establishing a fusion center in Hawaii. Specifically, they have discussed establishing an intelligence unit under state and local law enforcement control to complement the FBI's JTTF in Honolulu. A state fusion center would provide intelligence and analysis to all disciplines, especially law enforcement. Planning officials are not seeking a center that is focused only on the prevention and disruption of terrorism, but one that would complement other departments, agencies, and task forces within the context of all hazards. The official noted that the establishment of a fusion center in Hawaii depends on the adequacy and allocation of Homeland Security grant funds in fiscal years 2007 and 2008. According to the official, Hawaii's fiscal year 2007 allocation will not support the current investment strategy for a fusion center. The official said that the state will have to wait until fiscal year 2008 or find an alternative funding strategy. Idaho does not have and is not planning to establish a physical fusion center.
However, according to the directors of the Bureau of Homeland Security and Idaho State Police, the state has a "virtual fusion process." The fusion process grew out of monthly information-sharing meetings prior to September 11 that were held by the Idaho Bureau of Hazardous Materials with other federal, state, and local officials in Idaho. In October 2001, the U.S. Attorney's Office for the District of Idaho offered to serve as the cornerstone of an information-sharing effort. A member of the U.S. Attorney's Office, who is also a member of the Anti-Terrorism Advisory Council, provides the overarching structure for the fusion process by facilitating connections between federal sources of intelligence in Idaho and state and local law enforcement. This individual holds meetings several times a year, provides information and analyses to consumers by way of the Internet, and coordinates with the two JTTFs in the area. Participants in the fusion process also use HSIN. The officials articulated several reasons why they are not planning to establish a fusion center, including the state's commitment to support the efforts of the U.S. Attorney's Office to conduct threat analyses and share information; political concerns about the role of the government in information sharing; local agencies' lack of interest in participating in a fusion center, because they perceive the centers to be intelligence-gathering entities and local communities do not want law enforcement to be involved in gathering intelligence; the state's low risk for international terrorism; and difficulties staffing a center, because state and local agencies would not have the capacity to provide personnel to work in a fusion center. There are two fusion centers in Illinois: the Statewide Terrorism and Intelligence Center (STIC) and the Chicago Crime Prevention and Information Center (CPIC). Led by the Illinois State Police, STIC was established in May 2003 with the mission to provide timely, effective, and actionable intelligence information to local, state, and federal law enforcement and private sector partners in order to enhance public safety, facilitate communication between agencies, and provide support in the fight against terrorism and criminal activity. STIC is an all-crimes fusion center that is colocated with the Illinois Emergency Management Agency, with which it works closely during disasters. When STIC was established, it absorbed the state police intelligence unit, which focused on general crimes (e.g., violent crimes, narcotics, sex offenders), because, in part, the planners wanted to combine all of the silos of information needed to prevent criminal and terrorist activity. STIC is organized into two sections: a terrorism section, which staffs the 24/7 watch, and the field support section, which has a criminal intelligence unit with specialists working on drugs, violent crimes, motor vehicle theft, and sex offenses. The Illinois State Police and the Illinois National Guard provide nearly all of the personnel, including 7 sworn officers, 18 terrorism research specialists, 4 narcotics analysts, 3 other crime/violent crime analysts, 1 senior terrorism lead analyst, 1 firearms analyst, 2 motor vehicle theft analysts, 6 Internet crime analysts, 1 America's Missing: Broadcast Emergency Response (AMBER) Alert analyst, and 2 office assistants.
The FBI has assigned two analysts to work terrorism-related cases, the Drug Enforcement Administration (DEA) has assigned one analyst to work narcotics cases, and DHS I&A has assigned one analyst to work on homeland security issues. The Illinois Terrorism Task Force, which is composed of representatives from state and local agencies involved in emergency planning in the event of a critical incident, provides support to the Illinois State Police and approves the funding for STIC. STIC also has partnerships with the private sector. For example, the Illinois Association of Chiefs of Police and its Public-Private Liaison Committee, along with STIC, initiated the Infrastructure Security Awareness program in September 2004. The program was designed to share critical and sensitive non-law-enforcement information in a timely manner with corporate security executives as well as provide a forum for information exchange among private security professionals. This program enables STIC to provide threat information to major corporations and to receive reports of suspicious activity. STIC provides information to private security partners by using HSIN, which allows for the exchange of data, text messages, meeting dates, and the building of specialized tools to meet various applications through a secure Internet connection. STIC’s terrorism research specialists collect, analyze, and disseminate terrorism-related intelligence data; complete in-depth threat assessments; and identify predictive, incident-based indicators of potential terrorist activities within the state. The specialists have access to various state and federal law enforcement intelligence databases, public records databases, and financial databases. DHS and DOJ information systems or networks accessible to STIC include HSIN, JRIES, LEO, RISSNET, R-DEX, and the FPS portal as well as FinCEN and HIDTA. The officials said that they will have access to FBI and DHS classified systems when their SCIF, for which the FBI is funding the construction, is completed. STIC provides a variety of services to support officers in the field, including a 20-minute workup on requests from officers conducting traffic stops and responding to major crime scenes, lead management and development, on-scene analytical services, and statewide deconfliction to all law enforcement agencies by using the HIDTA nationwide network. STIC recently adopted the Internet Crimes Analysis Unit, which takes calls from the public regarding fraud, sexual predators, terrorism, and other issues and also administers the AMBER Alert program. The Chicago Crime Prevention and Information Center (CPIC), led by the Chicago Police Department, opened in April 2007 with the mission “to enhance partnerships which foster a connection between every facet of the law enforcement community. CPIC will afford the men and women, who are dedicated to protecting the public and addressing violence, with all available intelligence resources, and communications capabilities.” CPIC’s goal is to be the clearinghouse of information that is fused and delivered to stakeholders. CPIC has an all-crimes and counterterrorism focus. The Chicago Police Department is involved in the fusion process as it relates to violent crime, and the department has an in-house counterterrorism section. In addition to Chicago Police Department officers, the FBI has assigned three analysts to the CPIC. 
The center also has personnel from ICE, ATF, HIDTA (5 days a week), Chicago’s Metrarail, the Cook County Sheriff’s Department, the Illinois State Police, and 35 suburban police departments. CPIC is in the process of establishing a transportation “desk” staffed with Transportation Security Administration (TSA), Amtrak, Federal Air Marshal Service (FAMS), and local agency personnel. DHS I&A has conducted a needs assessment. However, at the time of our review, it had not placed an intelligence analyst in the center. CPIC is tactically oriented and designed to provide direct, near-real-time support to law enforcement personnel on the street. It provides, among other things, real-time violent crime detection monitoring and response, continual assessment of available resources for the purpose of possible redeployment of manpower, instantaneous major incident notification, analysis and identification of retaliatory violence and automated construction of enforcement missions to thwart retaliatory violence, crime pattern identification, and immediate access to in-depth background data on persons of investigative interest. DHS and DOJ systems and networks accessible to CPIC include HSIN, HSDN, FPS Portal, LEO, FBI’s ACS and IDW, as well as NCIC, FinCEN, RISSNET, RISS ATIX, INTERPOL, International Justice and Public Safety Information Sharing Network (NLETS), EPIC, National Drug Intelligence Center (NDIC), Treasury Enforcement Communications System, and numerous other state and local systems and data sources. CPIC also recently added a satellite tracking system that traces stolen bank funds and the offender. CPIC focuses on producing products to assist police officers on the street. A primary product is the District Intelligence Bulletin System, which is a Web-based application that uses multiple data sources to provide officers with a law enforcement road map. It provides officers with calls for service, wanted persons, most recent shootings/homicides, and additional intelligence in a succinct format. All of this information is automatically updated on a continuous basis throughout the day and is accessible by deployed patrol officers. It allows an officer to review information by district and deployment area. CPIC also publishes a daily intelligence briefing, which is designed to give officers a more detailed overview of potential threats based on international, national, and local events, and a weekly version for the private sector. The Indiana Intelligence Fusion Center (IIFC), which opened in December 2006, was established with the mission to collect, evaluate, analyze, and disseminate information and intelligence data regarding criminal and terrorist activity in the State of Indiana while following Fair Information Practices to ensure the rights and privacy of citizens. In addition to collecting information on all crimes, IIFC will specifically collect information as it relates to terrorism and its impact on Indiana. IIFC has an all-crimes approach, acting as an intelligence group for the state. However, there is a terrorism nexus to the fusion center’s work. IIFC is operated by the Indiana Department of Homeland Security and has been staffed as a task force entity with federal and state partners. 
Indiana state agency, department, and association partners in IIFC are: Homeland Security, National Guard, State Excise Police, Natural Resources, Association of Chiefs of Police, Gaming Commission Division of Gaming Agents, Indiana State Police, Corrections, Sheriff's Association, Marion County Sheriff's Department, and the Indiana Campus Law Enforcement Association. Federal partners in IIFC include the FBI and the U.S. Attorneys for the Northern and Southern Districts of Indiana. The FBI has assigned two FIG analysts to IIFC. DHS I&A has conducted a needs assessment of IIFC. However, at the time of our review, it had not yet placed an intelligence analyst in the center. A 12-member executive committee oversees IIFC's activities. The center is a 24/7 intelligence operations center that works in conjunction with statewide law enforcement liaisons, providing for intelligence-led policing throughout the state. IIFC operates a 1-800 tip line and an IIFC e-mail box and produces a bulletin three times a week. DHS and DOJ systems and networks that IIFC has access to include HSIN-Intel, HSDN, the FPS portal, and LEO, as well as SIPRNet, RISS, NCIC, and EPIC. The FBI has built a Top Secret secure room within IIFC and has also provided access to the ACS and Guardian systems in the secure space. The Iowa Intelligence Fusion Center was established in December 2004 with the mission to enable the State of Iowa to proactively direct core resources with its partners to avert or meet current, emerging, and future public safety and homeland security threats. Following the attacks of September 11, Iowa established a Homeland Security Advisory Council to enhance the state's capability to implement the Iowa Homeland Security Initiative. In the spring of 2002, the council's Information and Intelligence Sharing Task Force was formed to make recommendations for sharing intelligence and information, and, among other things, it recommended the establishment of a fusion center. Built on the backbone of the Iowa Law Enforcement Intelligence Network, the Intelligence Fusion System consists of the fusion center, six regional fusion offices, and a number of partner agencies and organizations. The fusion center is led by the Iowa Department of Public Safety and serves as a centralized information collection, analysis, and dissemination point. It is staffed with 18 full-time analysts (16 of whom are state funded), 11 investigator/collectors, and 5 support staff. Nearly all are Iowa Department of Public Safety personnel. However, there is an Iowa National Guard analyst assigned to the center, and the Midwest HIDTA provides funding for one intelligence analyst in the center. Although no federal agencies have assigned personnel to the fusion center yet, the center conducts regular meetings with the FBI's FIG and JTTF, the U.S. Attorney's Office, and the DHS Protective Security Advisor. The fusion center has placed one Department of Public Safety agent full-time at the JTTF and conducts regular and as-needed coordination and information-sharing meetings with the state Homeland Security Advisor, the Iowa Homeland Security and Emergency Management Division, the Iowa Department of Agriculture and Land Stewardship, and the Iowa Department of Public Health, among others. The six regional fusion offices are strategically located across the state. A fusion center agent is assigned to each regional office and is partnered with two to four local officials at each site.
Fusion system personnel also regularly participate in meetings of the local InfraGard chapter. In addition, the Department of Public Safety is part of the Safeguard Iowa Partnership, which is a voluntary coalition of business and government leaders who combine their efforts to prevent, protect from, respond to, and recover from catastrophic events. The Safeguard Iowa Partnership was formally launched in January 2007. DHS and DOJ information systems or networks accessible to the fusion center include LEO and HSIN (law enforcement and counterterrorism portals), as well as RISSNET. HSIN-Secret has been deployed to the State Emergency Operations Center, but DHS has not deployed the system to the fusion center. Fusion center personnel who are JTTF members are granted Top Secret clearances and can access FBI systems at the JTTF office. The Adjutant General of Kansas, Kansas Bureau of Investigation, and Kansas Highway Patrol formed the Kansas Threat Integration Center (KSTIC) in June 2004 with the mission to assist Kansas law enforcement and other related agencies in their mission to protect the citizens and critical infrastructures within Kansas through enhanced gathering, analysis, and dissemination of criminal and terrorist intelligence information. KSTIC focuses on the development, gathering, analysis, and dissemination of criminal and terrorist threat information in order to protect citizens, property, and infrastructure in Kansas. Additionally, KSTIC works to increase threat awareness among law enforcement, other governmental agencies, and private infrastructure providers in the state. KSTIC's scope of operations is primarily focused on terrorist/extremist activities, with a secondary all-crimes scope of operations that comes into play when criminal acts serve as a prelude to terrorist or extremist activities. It is not an all-hazards facility, but KSTIC is colocated with the Kansas Division of Emergency Management and therefore has access to its resources. KSTIC is a joint operation of the Kansas Bureau of Investigation, National Guard, and Highway Patrol. An Executive Board, consisting of one member from each agency, provides oversight, and the Kansas Bureau of Investigation representative is responsible for the day-to-day operations of KSTIC. There are three full-time staff: a Kansas Bureau of Investigation senior special agent, an investigator from the Highway Patrol, and a National Guard Captain; however, KSTIC can use Kansas Bureau of Investigation analysts for assistance as needed. Personnel hold TS/SCI clearances. KSTIC is in the process of hiring two to four analysts, depending on funding availability. While KSTIC has no federal partners, it interfaces with the FBI JTTFs in the state on a regular basis and is discussing the possibility of colocating with the JTTF. KSTIC accesses sensitive but unclassified bulletins and reports and open source information to report on terrorist/extremist threats to Kansas in particular and the Midwest in general. Additionally, KSTIC receives tips and other information directly from citizens and state law enforcement. KSTIC uses access to classified systems to identify and monitor potential threats to Kansas. DHS and DOJ information systems or networks accessible to KSTIC include HSIN, HSIN-Secret, LEO, and the FPS portal, as well as SIPRNet and FinCEN. KSTIC personnel also have full access to the FBI's various databases (i.e., Guardian and IDW) at an FBI field office or JTTF location.
Through the National Guard, KSTIC is planning to construct a SCIF, which, when completed, will provide space for approximately 15 personnel as well as secure connectivity to a variety of Top Secret and other systems. KSTIC produces intelligence/information bulletins for state and regional law enforcement. All information disseminated is sensitive but unclassified, with the exception of the periodic open source bulletins published for dissemination to a public/infrastructure distribution list. Bulletins are posted on several secure sites, such as the LEO and FPS Web sites, as well as distributed via statewide teletype and e-mail. The distribution list includes state, local, and federal agencies in Kansas as well as other agencies around the country. The Kentucky Intelligence Fusion Center (KIFC) opened in December 2005 as an all-crimes fusion center. The fusion center focuses on all crimes, rather than only those with a nexus to terrorism, primarily to obtain buy-in from local agencies. KIFC was established with support from the Kentucky Office of Homeland Security; the Kentucky State Police, which transferred its intelligence center to KIFC; and the Kentucky Transportation Cabinet, which provides KIFC its facility. Other agencies in the fusion center include ATF, the Kentucky Department of Corrections, Kentucky Department of Military Affairs, Kentucky Vehicle Enforcement, and the Lexington Metro Police. The FBI has assigned one full-time FIG analyst to the center. KIFC does not have any DHS personnel assigned to the center. The fusion center provides all-crimes and terrorism intelligence analytical services; supports the JTTF with counterterrorism investigators; assists all federal, state, and local law enforcement with requests for information on suspects; assists law enforcement in the location of subjects, suspect vehicle registration, and suspect driver's license photo and data; provides link analysis charts such as association links, communication links, and event flow; serves as the conduit for law enforcement's requests for information from other state fusion centers; provides access to HSIN-KY, the state Web site for law enforcement information sharing; and serves as a repository for the state's identified critical infrastructures. Some KIFC components are operational 24/7, such as the law enforcement communications and transportation components. KIFC receives statewide all-crimes tips through a toll-free hotline and through Web site submission and has law enforcement radio and data communications capability through Kentucky State Police Communications, which is located in the fusion center. DHS and DOJ information systems or networks accessible to the fusion center include HSIN and LEO, as well as RISS/Regional Organized Crime Information Center. The official said that KIFC does not have the capacity to receive classified information because the facility has no secure room or SCIF. Established in October 2004, the Louisiana State Analysis & Fusion Exchange (La-SAFE), which is led by the Louisiana State Police, evolved from existing state police analytical units. The state police Investigative Support Section has been in place since the late 1960s and early 1970s, with an intelligence collection and analysis unit that was developed primarily to handle organized crime.
As the investigative and intelligence needs of the police shifted over time, so too did the mission of the intelligence component, expanding from organized crime to gangs, drug trafficking, and, post-September 11, homeland security. The police intelligence unit was engaged in all-crimes collection of intelligence to support all criminal investigations. La-SAFE has adopted an all-crimes/all-hazards scope of operations. The mission of La-SAFE is to (1) promote a collaborative environment for governmental and corporate partners to work together in providing timely information for use in providing public safety and promoting national security against terrorist and other criminal threats; (2) actively work to collect and analyze information from various sources to provide those responsible for protecting state resources with information that is pertinent in decision-making processes, allows for the maximizing of resources, and improves the ability to efficiently protect the citizens of Louisiana in matters of infrastructure protection and against organized criminal activity; and (3) evaluate all information provided and ensure that the information La-SAFE retains and utilizes is directly related to legitimate law enforcement purposes and has been legally obtained. La-SAFE will not interfere with the exercise of constitutionally guaranteed rights and privileges of individuals. La-SAFE is staffed by 3 commissioned personnel and 20 analysts with experience in case support, information production, and information sharing in the areas of organized crime and terrorism. Louisiana State Police, the Louisiana Governor's Office of Homeland Security and Emergency Preparedness, Louisiana National Guard, East Baton Rouge Parish Sheriff's Office, DHS I&A, and FBI have assigned full-time analysts to La-SAFE. The center recently established a relationship with the U.S. Coast Guard. La-SAFE produces a variety of information and intelligence products, including general information bulletins (e.g., notices on general crimes or intelligence); daily incident briefs (i.e., daily reports of incidents reported to the center from a variety of sources); a weekly homeland defense bulletin covering homeland security issues around the world; and a summary of monthly regional crime information called the Intelligator. DHS and DOJ systems and networks accessible to La-SAFE include HSIN, LEO, and the U.S. Coast Guard's Homeport. Louisiana has a state HSIN portal, HSIN-LA, that provides a secure capability to share information and collaborate with public and private sector partners. It allows users to report suspicious activities to the fusion center for review and action. There are currently over 800 participants representing law enforcement, first responders, and critical infrastructure with access to HSIN-LA. Information currently being shared within HSIN-LA includes safety bulletins, intelligence reports, training opportunities, information-sharing meetings, and requests for information. The Maine Intelligence and Analysis Center is a collaborative effort between the Maine State Police and the Maine Emergency Management Agency to share resources, expertise, and information to maximize homeland security efforts and to detect and assist in the deterrence of terrorist activity. Maine has had a traditional state police criminal intelligence unit for 30 years, but the state's background in counterintelligence was limited to traditional criminal enterprises.
The Governor decided after September 11 that the state needed a counterintelligence unit that was homeland security-driven to deliver information to the Governor. The center was formally established by an executive order that was effective December 2006. The Maine Intelligence and Analysis Center is in the early stages of development and at the time of our review was not yet fully functional. For example, the center has physical space and personnel, has developed standard operating procedures, and is in the process of conducting outreach with state and local entities. The center's mission is to support the Maine State Police and the Maine Emergency Management Agency in their respective roles of public safety protector and homeland security incident manager for the citizens of the State of Maine. The center is to be a clearinghouse of and central repository for intelligence and information related to Maine's homeland security and any terrorist-related activity that may threaten the lives and safety of the citizens of the United States and the State of Maine. Its scope of operations is counterterrorism. However, the center plans to expand its focus in the future to include an all-crimes approach. The Maine Intelligence and Analysis Center has one intelligence analyst and one homeland security specialist, along with a backbone of four analysts from the Maine State Police Criminal Intelligence Unit. An official indicated that the center will absorb the Criminal Intelligence Unit in the near future, forming a single unit. While there are no federal personnel assigned to the center, it has partnered with the FBI's JTTF, the U.S. Attorney's Office Anti-Terror Section, U.S. Customs and Border Protection (CBP), TSA, U.S. Coast Guard, ICE, Maine National Guard, other state and local law enforcement agencies with intelligence sections, and the Maine Anti-Terror Intelligence Network, which is organized by the U.S. Attorney's Office to facilitate interaction between partner agencies' analysts. The center is overseen by an Advisory Board consisting of three members who meet at least twice annually. The center conducts research and analysis to provide actionable intelligence for field units and policy makers, and provides quick (i.e., within 15 minutes) responses to queries from the field to allow officers to take action within constitutionally reasonable time frames. DHS and DOJ information systems or networks accessible to the fusion center include LEO and HSIN, as well as RISS ATIX, RISS, FinCEN, INTERPOL, EPIC, and NLETS. The center also has access to a variety of state and commercial information systems and databases. Products include notices, bulletins, briefing information, reports, and assessments that cover day-to-day events, warnings, and officer safety issues. First responders, law enforcement, emergency managers, civilians, and the private sector (e.g., utilities, chemicals, food supply, and technology) are among the center's customers. The Maryland Coordination and Analysis Center (MCAC) is operated by the Anti-Terrorism Advisory Council Executive Committee and is governed by a charter that was developed with input from the FBI and approved by the Executive Committee. MCAC began operations in November 2003 in response to the events of September 11 and the need to provide ways for the FBI and local agencies to disseminate terrorist-related information.
MCAC has an all-crimes and counterterrorism scope of operations and consists of representatives of 24 agencies who staff the center, including the FBI, DHS, U.S. Army, U.S. Coast Guard, and Maryland state and local organizations. DHS I&A has assigned one analyst, and the FBI has seven analysts, one special agent, and one supervisory special agent assigned to MCAC. MCAC, which operates 24 hours a day, 7 days a week, is organized into two sections: the Watch Section and the Strategic Analysis Section. The Watch Section provides support to federal, state, and local agencies by receiving and processing information, monitoring intelligence resources, coordinating with Maryland law enforcement, and disseminating intelligence information. The Watch Section primarily consists of representatives from Maryland police and sheriffs, along with representation from the U.S. Army and the Maryland National Guard. As information enters MCAC, it is passed through the Watch Section, which either passes that information on to federal or state entities or the Strategic Analysis Section or enters it into federal and state databases, such as the FBI's Guardian. The Strategic Analysis Section receives, processes, analyzes, and disseminates information. MCAC has 12 analysts, and the section is staffed by representatives from various organizations, including the Maryland State Police, FBI, U.S. Coast Guard, and the Maryland National Guard. Information enters MCAC in a variety of ways, including tips from the general public or law enforcement, as well as from the National Guard or emergency response personnel. Information is received via a tip line or e-mail. DHS and DOJ systems and networks accessible to MCAC include HSIN, HSDN, LEO, FBINet, and SCION, as well as RISS/Middle Atlantic-Great Lakes Organized Crime Law Enforcement Network, SIPRNet, NCIC, INTERPOL, EPIC, and NLETS, among others. MCAC provides a daily report to every police chief in the state as well as other state fusion centers and any other organization that is on its distribution list. Entities such as the Maryland JTTF, Terrorism Screening Center, and National Counterterrorism Center receive information from MCAC. Terrorism-related law enforcement information is also shared and entered into the FBI's Guardian database. Products include a daily watch report, which is a brief summary of tips and requests for information received by the Watch Section over the previous 24-hour period, and intelligence bulletins, which are intelligence/law enforcement-related information disseminated to law enforcement and homeland security personnel by fax, teletype, or e-mail, and may also be posted to LEO or RISS. Other products include threat assessments, covering, for example, threats to military recruiting stations, propane cylinders, agroterrorism, or gang activity. The Commonwealth Fusion Center (CFC) in Massachusetts was established in October 2004 on the foundations of the State Homeland Security Strategy and an executive order designating it the state's principal center for information collection and dissemination. Its mission is to collect and analyze information from all available sources to produce and disseminate actionable intelligence to stakeholders for strategic and tactical decision making in order to identify, disrupt, or deter domestic and international terrorism as well as criminal activity. CFC takes an all-threats, all-crimes approach and has both criminal and counterterrorism analytical support roles.
The center focuses on precursor crimes, such as organized crime, which can be indicators of terrorism. CFC also supports the state's Emergency Management Agency, which is responsible for handling all hazards. CFC works with various federal and state agencies, including the FBI, ICE, U.S. Coast Guard, HIDTA, Secret Service, TSA, ATF, the United States Marshals Service, U.S. Attorney's Office, the Massachusetts Emergency Management Agency, Massachusetts Department of Fire Services, Department of Public Health, Department of Corrections, and the National Guard. There are 15 analysts assigned to CFC, the majority of whom are Massachusetts State Police employees. However, officials said that four of these analysts are assigned to other duties, such as the Crime Reporting Unit or security officer duties, or are otherwise engaged. The Department of Corrections and the Army National Guard have also each assigned an analyst to CFC. All analysts and most sworn members of CFC have Secret clearances, and a few sworn members have Top Secret clearances. The FBI has assigned both an intelligence analyst and a special agent to CFC. DHS has assigned an intelligence officer to the center. CFC also possesses an investigative component through the Massachusetts State Police Criminal Intelligence Section, which provides 5 state troopers, and the Massachusetts JTTF, which has 11 state troopers in Boston and Springfield, for a total of 16 investigators assigned to CFC. CFC also has a railroad representative and is involved in public/private outreach through Project Sentinel, which is a program targeting businesses likely to identify precursor terrorist activity. CFC also has personnel assigned to the Boston Regional Intelligence Center, which is the regional intelligence center for the Boston/Cambridge Urban Areas Security Initiative (UASI) region and is led by the Boston Police Department. CFC analysts produce information and intelligence briefings and assessments and provide support to the statewide assessment of critical infrastructure. Past products include an overview of gang activity in the state, an assessment on prison radicalization in the state, a report on trafficking and possible links to terrorism, a report on the stock market and possible indicators of terrorism, and an overview of white supremacist activity in the state. CFC also uses Geographic Information Systems to develop products and provide data to law enforcement and critical infrastructure stakeholders. DHS and DOJ unclassified systems and networks accessible to CFC include HSIN and LEO, and CFC also has access to FBI and DHS classified systems on site. Briefings and assessments are posted on CFC's secure Web site, HSIN-MA, which also provides a document library and information-sharing capability to Massachusetts' law enforcement, public safety, and critical infrastructure sectors. CFC has developed e-mail lists and extensive contact lists for state, local, and federal law enforcement partners, military stakeholders, fire services, transportation, and other critical infrastructure sectors. There are two fusion centers in Michigan: the statewide Michigan Intelligence and Operations Center (MIOC) and the Detroit and Southeastern Michigan Regional (Detroit UASI) Fusion Center. Led by the Michigan State Police, MIOC opened in December 2006 and went to a 24/7 operation in January 2007. The center was built on the preexisting foundation of the Michigan State Police Intelligence and Operations sections.
MIOC’s mission is to collect, evaluate, collate, and analyze criminal justice-related information and intelligence and, as appropriate, disseminate this information and intelligence to the proper public safety agencies so that any threat of terrorism will be successfully identified and addressed. Additionally, MIOC will provide criminal justice information to appropriate law enforcement agencies to aid in the successful prosecution of individuals involved in criminal behavior. MIOC has an all-crimes, all-threats scope of operations with a focus on the prevention of terrorism. MIOC is divided into two components: (1) the operational, 24/7 portion where all tips, requests for information, and initial information flow into MIOC, and (2) the intelligence portion, where information is processed, analyzed, disseminated, and reviewed. MIOC’s 45-person staff includes operational, intelligence, and administrative personnel, most of whom are Michigan State Police personnel. There are 26 intelligence personnel (detectives and analysts) and 13 operational personnel (officers and dispatchers). Included in the intelligence personnel are five analysts assigned by the National Guard, one responsible for narcotics and four responsible for HSIN-Intel and critical infrastructure protection at a statewide level. The state Department of Corrections has assigned a person 2 days per week, and the Michigan State University Police Department has assigned a full-time inspector. The FBI has assigned one analyst and one special agent to MIOC. Three Michigan State Police detectives are also assigned to the JTTF. DHS I&A has conducted a needs assessment of MIOC and posted the position for an analyst. However, at the time of our review an analyst an analyst had not yet been assigned to the center. Additionally, MIOC is expecting the assignment of a U.S. Coast Guard intelligence lieutenant and a DEA analyst. MIOC has established an internship program with the Michigan State University Criminal Justice Program. MIOC’s 14-member advisory board, which includes representatives from state and federal entities, civil rights groups, the Detroit UASI Fusion Center, and state law enforcement associations, provides advice and counsel to MIOC. MIOC collects and disseminates information regarding criminal investigations of all natures and serves as a direct case support for various investigations. Its personnel are divided into the four priority areas of international terrorism, domestic terrorism, organized crime, and smuggling. DHS and DOJ information systems or networks accessible to MIOC include HSIN, LEO, and HSDN, as well as RISS/MAGLOCLEN. MIOC is planning to have access to SIPRNET, ACS, and Guardian, for members of those agencies residing at MIOC. MIOC disseminates information via briefings and bulletins (weekly and special) to law enforcement, responds to requests for information, prepares intelligence analysis reports, provides case support, operates a Tip Line, provides support services (such as K-9, underwater recovery, hazmat, forensic artists, and emergency support team), and posts its products on sites such as LEO, HSIN-Law Enforcement, and MAGLOCLEN. Non-law-enforcement homeland security and critical infrastructure protection partners receive information through postings on HSIN-Michigan. The Detroit and Southeastern Michigan Regional (Detroit UASI) Fusion Center is in the early stages of development. 
Led by Detroit Homeland Security and Emergency Management, seven regional partners in the urban area are planning the center, including Wayne County, the City of Detroit, and five other surrounding counties. The vision for the fusion center is to identify, monitor, and provide analysis on all terrorism, all crimes, and all hazards in the Southeast Michigan Region in support of law enforcement, public safety, and the private sector's prevention, preparedness, and response activities. The center will focus on prevention and protection by serving as a conduit to local police officers and emergency managers in the field and will follow up on tips, conduct a watch function, and provide "foresight on emerging situations." The fusion center is in the first phase of planning, which is expected to culminate in the center being fully operational in January 2008. At the time of our review, planning officials were in the process of establishing partnerships, selecting and training analysts, and planning to move into the center's new facility, which will be colocated with the Michigan HIDTA. Federal partners identified include ICE, CBP, TSA, Secret Service, U.S. Coast Guard, FBI, Federal Bureau of Prisons, U.S. Attorney's Office, and HIDTA. The center did not have access to DHS and DOJ systems but would obtain access to FBI systems once it was colocated with the HIDTA. Additionally, the fusion center plans to work with MIOC to leverage technology purchases and utilize similar policies and standard operating procedures. The Minnesota Joint Analysis Center (MN-JAC) opened in May 2005 as a partnership of the Department of Public Safety, the FBI, and several local police departments. Its mission is the collection, management, and distribution of strategic and tactical information and the development and implementation of useful and meaningful information products and training, focusing on all crimes and all hazards within and affecting Minnesota. MN-JAC has an all-crimes and all-hazards scope of operations. However, the center is not a law enforcement or investigative agency because of restrictions imposed by state law. As the center works to determine its position relative to the existing laws, it serves primarily to coordinate among the FBI, DHS, and state and local agencies. One of the ways MN-JAC accomplishes its information-sharing function is through the development and maintenance of its information-sharing Web portal, the Intelligence Communications Enterprise for Information Sharing and Exchange (ICEFISHX). MN-JAC has 10 employees, 2 of whom are provided by the state, with the remainder coming from local law enforcement agencies and the National Guard. MN-JAC does not have an FBI analyst assigned to the center. However, MN-JAC and the FBI's field office are colocated in the same building, and MN-JAC personnel have access to the FBI's systems and networks. DHS I&A has conducted a needs assessment of MN-JAC. However, at the time of our review, it had not yet placed an intelligence analyst in the center. DHS and DOJ systems and networks accessible to MN-JAC include HSIN, HSIN-Law Enforcement, the FPS portal, LEO, ACS, and the FBI Intelligence Information Reports Dissemination System, as well as SIPRNet. The center also has access to HSIN-Secret. However, the system is accessible only at the state's Emergency Operations Center.
MN-JAC produces two weekly briefs, one for all subscribers that covers critical infrastructure and one for law enforcement agencies that is law enforcement sensitive, as well as situation analyses of incidents and threats and special threat assessments. The Mississippi Office of Homeland Security and Mississippi Department of Public Safety are in the early stages of developing a state fusion center. The Mississippi Analysis & Information Center (MSAIC) is expected to open its doors at the end of September 2007 and become operational at that time. Currently, planning officials are developing memorandums of understanding for agency representation at and support of the center, certifying the center's secure space, and placing equipment and furniture. The fusion center will have a broad scope of operations—focusing on all crimes, all hazards, and all threats—in order to support the needs of the state and to help with the sustainability of the center. For instance, a planning official said that with an all-crimes scope of operations, the fusion center could give something back to local law enforcement entities, many of which have limited resources and access to information. In terms of all hazards, the fusion center is to support the state strategy to aid in prevention and deterrence and will be colocated with the state emergency management agency. The Missouri Information Analysis Center was established in December 2005 with the mission to provide a public safety partnership, consisting of local, state, and federal agencies, as well as the public sector and private entities, that will collect, evaluate, analyze, and disseminate information and intelligence to the agencies tasked with homeland security responsibilities in a timely, effective, and secure manner. The main goal of the center is to serve as the fastest means for sharing information during hazards, along with the ability to acquire and disseminate information throughout the state. The center was initially established with analysts who were transferred from the Missouri Highway Patrol Criminal Intelligence and Analysis Unit. The center, which is led by the Missouri Highway Patrol, has an all-crimes and all-hazards focus, which was established in part as a result of the center's partnerships. The center is a member of the RISS project and partners with the Missouri Department of Public Safety, the Missouri Emergency Management Administration, and the Missouri National Guard; it is colocated with the latter two. In addition to its director, the Missouri Information Analysis Center has 21 other personnel, including an assistant director and an intelligence network manager, 8 full-time criminal intelligence analysts, and 10 part-time intelligence intake analysts. The Missouri Gaming Commission has also dedicated a full-time intelligence analyst to the center. Investigators are assigned to cases out of the Highway Patrol's Division of Drug and Crime Control as needed. The center also provides local law enforcement the opportunity to assign analysts and officers to it for internships. The center works closely with the JTTFs and FIGs in the state and, according to officials, is close to completing its secure room for two FBI special agents currently working in the center.
The center also works with the Business Executives for National Security, which has representation from the majority of the private corporations within the state as well as individuals interested in assisting homeland security, and has a 13-member oversight board that is composed of state, local, and federal representatives. The Missouri Information Analysis Center collects and disseminates information involving all crimes, all threats, and all hazards. The information can be tips, leads, law enforcement reports, and open source reports as well as information provided from the federal level. DHS and DOJ information systems or networks accessible to the fusion center include HSIN, HSIN-Secret, JRIES, RISSNet, RDEx, and LEO, as well as the Midwest HIDTA Safety Net. Analysts conduct numerous services, including, but not limited to, responding to intelligence and criminal activity inquiries from local, state, and federal law enforcement agencies and prosecuting attorneys; performing various analyses to evaluate patterns of criminal activity; compiling and disseminating intelligence booklets containing data on subjects in question for criminal activity to case investigation officers and prosecutors; developing reports, threat assessments, bulletins, summaries, and other publications on relevant criminal activity trends; serving as liaison for the statewide intelligence database; providing strategic analytical services, development, and training at the state level to support the Midwest HIDTA; maintaining close liaison with the Midwest HIDTA; and developing various standardized statistical reports involving criminal and terrorist threat assessments. The Montana All-Threat Intelligence Center (MATIC) developed from the intelligence unit of the Montana Department of Justice, Division of Criminal Investigation. After the attacks of September 11, the unit relocated to a Department of Military Affairs facility and colocated with the JTTF. The unit opened its fusion center incarnation, MATIC, in the spring of 2003 with the mission to collect, store, analyze, and disseminate information on crimes, both real and suspected, to the law enforcement community and government officials concerning dangerous drugs, fraud, organized crime, terrorism and other criminal activity for the purposes of decision making, public safety, and proactive law enforcement. The center has an all-crimes scope of operations. MATIC, administered by the Division of Criminal Investigation, is a joint venture of the division and the Department of Corrections, Department of Military Affairs, and the Rocky Mountain Information Network. There are eight full-time employees, five of whom are Division of Criminal Investigation employees. The Department of Corrections, Department of Military Affairs, and the Rocky Mountain Information Network each provide one full-time employee. The FBI has assigned one analyst to MATIC, and all MATIC analysts are also considered assigned to the JTTF. DHS and DOJ systems and networks accessible to MATIC analysts include HSIN, LEO, NCIC, and FBI classified systems located in the JTTF, as well as the RISS/Rocky Mountain Information Network, FinCEN, and INTERPOL. MATIC analysts provide case support to all Montana law enforcement and assist investigators in identifying evidence, suspects, and trends in their investigation. 
Each analyst assigned to MATIC has one or more portfolios for which he or she is responsible; the portfolios include drugs, outlaw motorcycle gangs, corrections, general crime, left wing, right wing, northern border, critical infrastructure, and international terrorism. Analysts review new organizations active within the state, ongoing or potential criminal activity, trends or activity around the country that could affect Montana, and trends or activity in Montana that could affect other parts of the United States or Canada. MATIC produces a daily brief for Montana that covers three topic areas—international terrorism/border issues, domestic terrorism, and general crime—and is disseminated on RISS and on the MATIC Web portal. MATIC also responds to specific requests for information, manages the critical infrastructure program, conducts training sessions for law enforcement, and maintains a Web portal to assist in the secure sharing of information among law enforcement. About 180 local, state, tribal, and federal agencies access MATIC information on its Web portal. The Nebraska State Patrol is in the planning stage of establishing the Nebraska Fusion Center. The Nebraska State Patrol is setting up the command structure for the center, has reorganized and placed staff into positions, and, according to an official, is awaiting DHS funding to hire a consultant to help develop a blueprint for the center. The official also noted that funding will allow the purchase of software necessary to fuse the state's intelligence databases together. The center is scheduled to open in the fall of 2007. The fusion center is to have an all-crimes, all-hazards focus, including terrorism. Nebraska State Patrol officials said that the center will collect as much intelligence information as it can, whether related to crime, drugs, threats, terrorism, or other hazards, and then combine it and share it with the necessary agencies. The fusion center will be the lead intelligence-sharing component in the state and provide a seamless flow of information to assess potential risks to the state. The fusion center is planning to initially invite the FBI, DEA, and ICE at the federal level; the Nebraska Emergency Management Agency, Department of Health and Human Services, and Department of Roads at the state level; and the Omaha Police Department, the TEW in the Omaha area, and the Lincoln Police Department at the local level. The center also plans to partner with key elements of the private sector because it intends to develop infrastructure protection plans. The officials expect that many information systems will be in place, including LEO and RISS/Mid-States Organized Crime Information Center, and the center is planning to disseminate an intelligence update either daily or weekly. The Nevada Department of Public Safety is in the planning stages of establishing the Nevada Analytical and Information Center. The center is planning to have an all-crimes and all-threats focus, which would include major crimes (such as burglary rings, fraud, rape, or homicides) and terrorism. The state fusion center will look at crimes at the state level and will share information with federal and local law enforcement agencies to identify crime trends and patterns. The center will also have an all-hazards mission and plans to include fire departments and public health entities as stakeholders.
The center will be responsible for 15 of the 17 counties in the state, excluding Clark and Washoe Counties, which will be covered by separate centers operated by the Las Vegas Metropolitan Police Department and the Washoe County Sheriff's Office. The New Hampshire Department of Safety Division of State Police is in the early stages of establishing the New Hampshire Fusion Center. The division is in the process of developing the fusion center as a separate entity from several existing intelligence units within the state. For example, after the events of September 11, the State Police created a terrorism intelligence unit, in addition to a criminal intelligence unit that focuses on narcotics and organized crime. The fusion center is planning to open in 2008. The New Hampshire Fusion Center will focus on all crimes and all hazards. The state chose to adopt an all-crimes focus because terrorism is funded by and associated with many other crimes (such as drug trafficking, credit card fraud, and identity theft). The New Hampshire State Police intelligence unit has a full-time member assigned to it and coordinates with the FBI JTTF. Also, the State Police maintain interoperability with the FBI because four of their members have FBI clearances. The fusion center will be housed within the New Hampshire Department of Safety, which includes the State Police, the Division of Motor Vehicles, the Bureau of Emergency Communication (911), and the Office of Homeland Security and Emergency Management. The fusion center has access to LEO and RISS systems. The Regional Operations Intelligence Center (ROIC) was established in January 2005 and moved into its current facility in October 2006. ROIC is a 24-hour-a-day, all-crimes, all-hazards, all-threats, all-the-time watch command and analysis center. The New Jersey State Police is the executive agency of ROIC and administers the general personnel, policy, and management functions. The center's mission is to collect, analyze, and disseminate intelligence to participating law enforcement entities; evaluate intelligence for reliability and validity; provide intelligence support to tactical and strategic planning; evaluate intelligence in the Statewide Intelligence Management System; and disseminate terrorism-related activity and information to the FBI, among others. ROIC is also the home of the State Emergency Operations Center, the State Office of Emergency Management, and the State Police Emergency Management Section Offices. ROIC has personnel assigned (including 13 analysts) from the FBI, DHS, ATF, ICE, FAMS, and the U.S. Coast Guard, in addition to personnel from the State Police, New Jersey Office of Homeland Security and Preparedness, and the Department of Transportation. ROIC is seeking representation from the departments of Corrections; Parole; Health and Senior Services; Environmental Protection; and Military and Veteran Affairs. ROIC is overseen by a Governance Committee, chaired by the director of ROIC, that consists of representatives from state and federal entities and law enforcement associations who meet quarterly to discuss ROIC policies and other related matters. ROIC is seeking to develop additional relationships with private sector organizations—such as the American Society of Industrial Security, the Princeton Area Security Group, the Bankers and Brokers Group, and the All Hazards Consortium—to further the mission of the intelligence analysis element of ROIC.
ROIC consists of three components: (1) an analysis component, responsible for collecting, analyzing, and disseminating intelligence information entered into the Statewide Intelligence Management System by local, county, state, and federal law enforcement; (2) the operations component, which will control the actions of State Police operational and support personnel and serve as a liaison to federal agencies, other state entities, and county or municipal agencies on operational matters; and (3) a call center component, which will provide the center with situational awareness intelligence about emergency situations. DHS and DOJ systems and networks to which ROIC has access include LEO, HSIN, HSIN-Secret, and ACS, as well as SIPRNet. ROIC is scheduled to have HSDN installed in late September 2007. ROIC disseminates officer safety information, bulletins, and any other information deemed to be of value to the law enforcement or homeland security community. The State Police provide operational support to the law enforcement community, including canine support for bomb and drug detection, bomb technicians, medevac helicopter support, and marine services. The New Mexico All Source Intelligence Center (NMASIC) will serve as New Mexico's primary intelligence collection, analysis, and dissemination point for all homeland security intelligence matters, which will include intelligence support for counterterrorism operations, intelligence support for counter-human smuggling operations, critical infrastructure threat assessments, intelligence training, and terrorism and counterterrorism awareness training. According to officials in the New Mexico Governor's Office of Homeland Security and Emergency Management, NMASIC was established to provide the Governor and State Homeland Security Advisor with the capability to receive information and intelligence from a number of sources and fuse that information and intelligence together to create a common intelligence and threat picture, upon which they and other senior officials can make long-term policy decisions. NMASIC was also established to provide tactical intelligence support to local, tribal, and state agencies in New Mexico. NMASIC will accomplish this mission by developing and sustaining five key projects and programs: a statewide integrated intelligence program, a statewide information sharing environment, intelligence product development and production management, collection requirements and collection management, and a state law enforcement operational component. NMASIC's analytical staff will include a collection management analyst, an international terrorism/Islamic extremist analyst, a single-issue extremist analyst, a militia/white supremacist analyst, a border security analyst, and a critical infrastructure analyst. Once fully staffed, NMASIC will provide tactical and strategic intelligence support to agencies in New Mexico. Participating agencies and disciplines include the Governor's Office of Homeland Security, Department of Public Safety/New Mexico State Police, local law enforcement, local fire departments, local emergency management, Pueblo Public Safety Organizations, FPS, TSA, DHS I&A, Department of Energy, Department of State, the FBI, HIDTA, the U.S. Attorney's Office, and the U.S. Bureau of Reclamation.
In addition to the state fusion center—the New York State Intelligence Center (NYSIC)—there are other local area centers in New York, including those operated by the New York City Police Department Intelligence Division and Rockland County. The New York State Intelligence Center (NYSIC) was established in August 2003 as a multijurisdictional intelligence and investigative center composed of representatives from state, federal, and local law enforcement, criminal justice, and intelligence agencies. Its mission is to advance the effectiveness and efficiency of New York State law enforcement operations and services by acting as a centralized and comprehensive criminal intelligence resource. NYSIC, which is led by the New York State Police, operates 24 hours a day, 7 days a week. NYSIC combines the duties of an intelligence center and fusion center to enhance collaboration among New York state law enforcement agencies and law enforcement agencies nationwide. Using an all-crimes approach, NYSIC collects, analyzes, evaluates, and disseminates information and intelligence to identify emerging patterns and trends, investigate current criminal activities, and prevent future criminal acts. NYSIC opened the Counterterrorism Center (CTC) in May 2004, and this component is responsible for intelligence and information sharing in all areas outside New York City. The mission of NYSIC-CTC is to provide law enforcement agencies throughout New York state with timely and useful intelligence to assist in the prevention, detection, and deterrence of terrorism. NYSIC-CTC provides a centralized contact point for the reporting of suspicious activity from both civilians and law enforcement. NYSIC-CTC vets information and directs it to the appropriate federal, state, or local law enforcement agency for investigation. NYSIC has 18 agencies represented in the facility, with over 80 people in the center. Federal entities with personnel in NYSIC include the FBI (three intelligence analysts and one special agent); DEA (one intelligence analyst); the U.S. Attorney’s Office (one part-time intelligence research specialist); DHS I&A (one senior intelligence analyst); ICE (one senior special agent); CBP (one special agent expected); and the Social Security Administration (one special agent). The New York State Police provides the majority of NYSIC’s personnel, with 44 investigators and 20 analysts assigned. In addition, the New York State Department of Motor Vehicles, Division of Parole, Department of Correctional Services, Department of Insurance, Office of Homeland Security, and National Guard have provided personnel and services to NYSIC. Among the local entities providing personnel, liaison, and services to NYSIC are the New York City Police Department, New York City Metropolitan Transit Authority Police Department, Rensselaer County Sheriff’s Office, and the Town of Colonie Police Department. The Executive Committee on Counter Terrorism, consisting of state police executives, the Director of the State Office of Homeland Security, commissioners of various state agencies, representatives from police chiefs and sheriffs, and the Office of the Governor, serves as an Advisory Board to NYSIC. NYSIC collects information including tips from law enforcement, the private sector, and the public via crime and terrorism tip hotlines. NYSIC also receives reports from federal, state, local, and tribal law enforcement entities. Federal information includes threat assessments, CBP reporting, and DHS daily reporting. 
DHS and DOJ information systems or networks accessible to the fusion center include HSIN (Unclassified and Secret), HSDN, FPS portal, LEO, as well as FinCEN and Treasury Enforcement Communications System. Some NYSIC personnel—CTC personnel with Top Secret clearances—have full access to FBI systems (e.g., ACS, IDW, and Guardian). Reporting from state, local, and tribal agencies includes investigative and intelligence submissions, suspicious incidents, public safety and public health information, and infrastructure information. NYSIC also collects information from open sources. The types of services performed and products disseminated include counterterrorism and criminal intelligence analysis and reporting, situational awareness reporting, situation reports on emerging incidents, investigative support, and outreach and training. NYSIC conducts critical infrastructure outreach and awareness in conjunction with the New York Office of Homeland Security. The NYPD Intelligence Division opened its Intelligence Center in March 2002. The center has both an all-crimes and counterterrorism focus, for example, focusing on traditional crimes (e.g., guns, gangs, and drugs) as well as having one group of intelligence analysts who analyze information for ties to terrorism. Analysts look at global trends and patterns for applicability to New York City. Personnel have access to, among others, all FBI systems, LEO, and HSIN. The Rockland County Intelligence Center has been in existence since 1995. However, according to its director, the center changed focus after September 2001. The mission of the center is to provide intelligence to law enforcement agencies based upon the collection, evaluation, and analysis of information that can identify criminal activity. The center takes an all-crimes approach and is involved in any crime that occurs within the county. The center is composed of sworn officers from Rockland County law enforcement agencies who are assigned specialized desks, such as street gangs, burglary/robbery, terrorism, and traditional organized crime. In addition, the FBI assigned a special agent on a full-time basis to the center. DHS and DOJ networks and systems to which the center has access include HSIN and LEO, as well as HIDTA and RISS/Mid-Atlantic Great Lakes Organized Crime Law Enforcement Network. The North Carolina Information Sharing and Analysis Center (ISAAC) opened in May 2006 and is overseen by the North Carolina State Bureau of Investigation’s (SBI) Intelligence and Technical Services Section within the state Department of Justice. The mission of ISAAC is to serve as the focal point for the collection, analysis, and dissemination of terrorism and criminal information relating to threats and attacks within North Carolina. ISAAC will enhance and facilitate the collection of information from local, state, and federal resources and analyze that information so that it will benefit homeland security and criminal interdiction programs at all levels. Specifically, ISAAC develops and evaluates information about persons or organizations engaged in criminal activity, including homeland security, gang activity, and drug activity. ISAAC partners include the U.S. Attorney’s Office, the FBI, SBI, the State Highway Patrol, National Guard, Association of Chiefs of Police, Sheriff’s Association, Division of Public Health, Department of Agriculture, Department of Corrections, Alcohol Law Enforcement, Emergency Management, and the Governor’s Crime Commission. 
Partners take what ISAAC refers to as “a global approach to a state response.” The ISAAC team consists of investigators and analysts from SBI, the Raleigh Police Department, Wake County Sheriff’s Office, State Highway Patrol, state Alcohol Law Enforcement, National Guard, U.S. Attorney’s Office, and the FBI. Specifically, the FBI assigned a full-time analyst and a part-time special agent to ISAAC. ISAAC investigators actively investigate leads and tips and work jointly with the JTTFs throughout the state. DHS I&A has conducted a needs assessment of ISAAC. However, at the time of our review, it had not yet placed an intelligence officer in the center. DHS and DOJ information systems or networks accessible to the fusion center include HSIN and LEO, as well as FinCEN, the Regional Organized Crime Information Center, RISSNET, EPIC, INTERPOL, and a variety of state information systems. The FBI analyst has access to FBI classified systems, and the FBI has cleared all sworn and analytical personnel assigned to the fusion center. ISAAC produces a variety of products, including an open source report; Suspicious Activity Reports; and a monthly information bulletin with articles of interest, a special events calendar, tips and leads summary, and products and services of ISAAC. ISAAC also supports special events, maintains a tips and leads database, and conducts community outreach. For instance, ISAAC has developed relationships with several Muslim organizations. The North Dakota Fusion Center was established in September 2003 with support from the North Dakota Division of Homeland Security and the North Dakota National Guard. In January 2004, the North Dakota Bureau of Criminal Investigation and the North Dakota Highway Patrol assigned a special agent and a captain, respectively, to the center. The fusion center takes an all-crimes and all-hazards approach to terrorism. As such, it collects and disseminates all-hazard and all-crime information with possible links to terrorism. The fusion center is staffed with personnel from the Bureau of Criminal Investigation, North Dakota Division of Homeland Security, North Dakota National Guard, and North Dakota Highway Patrol. The fusion center consists of law enforcement, intelligence analysts (both domestic and international), operations and planning, and critical infrastructure personnel and is divided into three sections—law enforcement, operations, and intelligence—that work together. While there are no FBI personnel assigned to the center, fusion center law enforcement personnel are JTTF members. The North Dakota Fusion Center provides training and terrorism investigative support; conducts critical infrastructure assessments; and disseminates products including a monthly newsletter to law enforcement and homeland security stakeholders, and summaries to military stakeholders. DHS and DOJ information systems or networks accessible to the fusion center include HSIN, HSIN-Secret, and FBI’s ACS, as well as RISS, RISS ATIX, INTERPOL, FinCEN, and INFRAGARD. The fusion center is located in a secure National Guard facility. After the September 11 attacks, Ohio established the Ohio Strategic Task Force, a working group of state cabinet-level officials, to develop a strategic plan that included the formation of a fusion center. 
In January 2005, the Ohio Homeland Security Division’s Strategic Analysis and Information Center (SAIC) began initial operations with a base group composed of state National Guard, State Highway Patrol, Emergency Management, and Homeland Security personnel. According to an SAIC official, legislation subsequently widened the foundation and basis for the center. In December 2005, SAIC moved to its second phase of development and implemented a work-week-style operation, acquired personnel, conducted additional training in intelligence analysis, and bought additional software to handle information acquisition. SAIC’s third phase of development is projected to begin in the fall/winter of 2007 and will include an evening second shift. SAIC maintains a 10-hour-a-day, 5-day-a-week schedule with 24-hour radio and telephone coverage through the Highway Patrol. The center serves as a secure one-stop shop that collects, filters, analyzes, and disseminates terrorism-related information. SAIC has a counterterrorism and all-crimes scope of operations. The center has seven full-time employees with a number of agencies represented on a part-time rotational basis. State entities represented include the Department of Agriculture, Attorney General’s Office, Bureau of Criminal Identification and Investigation, Emergency Medical Services, Environmental Protection Agency, Fire Marshal, Department of Health, Highway Patrol, Homeland Security, National Guard, Department of Public Safety, Department of Transportation, the Ohio Association of Chiefs of Police, and the Fire Chief’s Association. There are also a number of city and county agencies represented on a part-time basis. The FBI has assigned a full-time analyst and a special agent to the center. DHS has assigned a full-time intelligence analyst to the center. Other federal partners in SAIC include ATF, TSA, U.S. Coast Guard, and the U.S. Attorney’s Office. SAIC’s Investigation Unit is composed of law enforcement personnel from multiple agencies and includes commissioned officers from local, state, and federal agencies as well as intelligence analysts. The unit’s primary mission is the detection of persons engaged in terrorist activities. This unit receives information from law enforcement agencies, crime reports, and field interrogation contacts as well as direct reports from both the public and law enforcement via a telephone tip line and Internet Web applications. DHS and DOJ systems and networks accessible to SAIC include HSIN and LEO. SAIC has HSDN installed in its secure room, but the system is not currently operational pending certification of the secure space. The FBI’s classified systems are accessible by the FBI analyst assigned to SAIC, and preparations are underway to build a secure room to house FBINet. The Investigation Unit conducts preliminary investigations of information and either processes the complaint, lead, or tip internally or forwards the complaint, lead, or tip to the JTTF or law enforcement agency with primary jurisdiction. The Oklahoma State Bureau of Investigation, in collaboration with other state and local entities, is in the early stages of developing the Oklahoma Information Fusion Center. Specifically, it has obtained funding, developed an implementation plan, and identified 10 positions for which it will be hiring. The official opening of the center is expected in early 2008. There were two primary reasons for the establishment of the center. 
The first was to help prevent future attacks, and the second was to serve as a hub to facilitate information and intelligence sharing with law enforcement officers in the field. The purpose of the fusion center will be to screen the information, determine whether it is pertinent to Oklahoma, and consolidate the information. As such, the proposed mission statement for the center is to serve as the focal point for the collection, assessment, analysis, and dissemination of terrorism intelligence and other criminal activity information relating to Oklahoma. The scope of operations for the center will be both all-crimes and all-hazards. The Oklahoma State Bureau of Investigation is to act as the host agency for the fusion center, serving as the focal point for all fusion center activities, housing the fusion center within its headquarters, and providing most of the center’s analysts and agents. The fusion center will include nine Oklahoma State Bureau of Investigation analysts. In addition, the center plans to include analysts from the Oklahoma Office of Homeland Security, the FBI FIG, and the Oklahoma National Guard. There are also six Oklahoma State Bureau of Investigation agents who will play a support role to the center. The Oklahoma Information Fusion Center will collect information on all crimes in accordance with 28 CFR, part 23. Fusion center personnel are expected to perform intelligence analysis on all investigative reports and informational reports provided to the center. Personnel can provide investigative support through the use of intelligence products such as charts, timelines, intelligence summary reports, and many other products. Center personnel also have direct access to numerous databases that can be used to support investigative activities as well as intelligence investigations. DHS and DOJ systems to which the fusion center has access include LEO and HSIN, as well as RISSNet, FinCEN, and VICAP. Oregon’s Terrorism Intelligence and Threat Assessment Network (TITAN) Fusion Center opened in June 2007. Its primary mission is information sharing and coordination of terrorism intelligence among Oregon’s 220 law enforcement entities. In addition, the coordination and passing of terrorism-related information to the FBI is a primary function for the center. The center will also support an all-crimes approach to identifying terrorism-related activity, including criminal activities in areas such as money laundering, counterfeiting and piracy, and human and weapons smuggling. The center is administered by the Oregon Department of Justice and has representatives from the FBI, ATF, Internal Revenue Service, the Oregon HIDTA program, Oregon State Police, and the Oregon Military Department. The TITAN Fusion Center is located in the same building as an FBI resident agency. The Terrorism Intelligence and Threat Assessment Network is Oregon’s terrorism liaison officer program. This program began in May 2004 and has since grown to include 53 members who represent 35 agencies. The primary mission of the program is information sharing between the fusion center and first responders. The fusion center is the clearinghouse and information hub within the state. Intelligence is collected, collated, analyzed, and then disseminated as briefings, intelligence and officer safety bulletins, and alerts. 
The Pennsylvania Criminal Intelligence Center (PaCIC) was established in July 2003 to serve as the primary conduit through which law enforcement officers in Pennsylvania can submit information and receive actionable intelligence for the benefit of their decision makers. PaCIC, which is a component of the Pennsylvania State Police, is an all-crimes analysis center. However, the center is planning to diversify and focus on all hazards in the future. According to its director, PaCIC is a developed criminal intelligence center. However, in terms of a fusion center, it is still in the early stages of development. PaCIC is staffed 24 hours a day, 7 days a week. PaCIC’s 32-member staff includes Pennsylvania State Police intelligence analysts, research analysts, officers, and an information technology specialist. PaCIC also contains a watch-center component of enlisted supervisors designed to maintain situational awareness. A representative of the state Department of Corrections works in the center on a part-time basis and provides direct access to corrections intelligence. There are currently no federal entities represented in PaCIC. However, DHS representation is being planned with the eventual expansion into an all-crimes and all-hazards fusion center. FBI security modifications to the center are under way, and FBI representation is anticipated by November 2007. PaCIC has access to HSIN and disseminates products including daily reports, strategic assessments, intelligence alerts, information briefs, and threat assessments. The center also operates a drug tip line and a terrorism tip line. The Rhode Island Fusion Center was established in March 2006 and is a component of the Rhode Island State Police. In establishing the center, the state recognized the importance of the fusion center concept for state and local information sharing. The fusion center is colocated with an FBI field office and thus focuses primarily on counterterrorism. However, the center also serves as a resource for the local police agencies in the state. The fusion center has three personnel—one investigator and two analysts. The FBI is the fusion center’s only federal partner, although the director said that he works with ICE on a regular basis. The fusion center works closely with the JTTF, to which there are also State Police officers assigned. DHS and DOJ information systems or networks accessible to the fusion center include HSIN, LEO, FPS portal, and FBI’s ACS system and Guardian. The FBI also facilitates all of the center’s security clearances and provides the center’s facility, which is colocated with the JTTF, free of charge. In South Carolina, the Chief of the State Law Enforcement Division is the state Director of Homeland Security and the state representative to DHS. In July 2004, the Chief of the Division approved the development of a fusion center, and the South Carolina Information Exchange (SCIEx) was established in March 2005. SCIEx has an all-crimes scope of operations and has devoted its resources to combating all manner of criminal activity. Its mission is to prevent and deter acts of terrorism and criminal activity, and to promote homeland security and public safety through intelligence fusion and information sharing with all sectors of South Carolina society. SCIEx also handles reports of suspicious activity and includes a component that deals with “situations as they develop,” including a response to emergent hazards. 
Further, SCIEx goals focus on providing real-time response and timely assistance to local law enforcement agencies, developing actionable intelligence and analysis to predict and prevent homeland security threats, and using intelligence-led policing and other products to facilitate the prevention and interdiction of criminal and terrorist activities. SCIEx’s 14-person staff includes agents and analysts from the State Law Enforcement Division; the National Guard; Department of Health and Environmental Control; Department of Corrections; Department of Probation, Pardon, and Parole; and the FBI, which assigned one FIG analyst. DHS I&A has conducted a needs assessment of SCIEx. However, at the time of our review, it had not yet placed an intelligence analyst in the center. The center is organized into an 8/5 watch (with a 24/7 on-call duty roster); a collection, analysis, and production unit; AMBER alert and missing persons coordinators; and liaisons to JTTF and Project SeaHawk. SCIEx analysts collect information from a variety of sources including federal intelligence and law enforcement agency reports, incident reports, and other first responder reports and graphics to produce daily bulletins, targeted advisories, and intelligence assessments. DHS and DOJ information systems or networks accessible to the fusion center include HSIN/JRIES, LEO, EPIC, and VICAP, as well as RISSNET, INTERPOL, and FinCEN. HSIN-Secret is available to personnel if they travel to the Emergency Operations Center, which is located in a different facility than SCIEx. Additionally, the FBI funded the development of a secure room at SCIEx, and once the room is completed, SCIEx will also gain access to FBI systems. SCIEx also uses a variety of analytical tools such as geographic information systems and crime mapping to enhance its analytic products. The South Dakota Fusion Center was established in June 2006 with the mission to protect the citizens by ensuring the resiliency of critical infrastructure operations throughout South Dakota by enhancing and coordinating counterterrorism intelligence and other investigative support efforts among private sector and local, state, tribal, and federal stakeholders. The principal role of the fusion center is to compile, analyze, and disseminate criminal and terrorist information and intelligence and other information to support efforts to anticipate, identify, prevent, and/or monitor criminal and terrorist activity. The center has an all-hazards and all-crimes scope of operations and focuses on all criminal activity, not just activity with a nexus to terrorism. The all-hazards focus comes from the center’s coordination with the state Office of Emergency Management. The center is staffed by the South Dakota Office of Homeland Security and the South Dakota Highway Patrol and receives oversight from the State Homeland Security Senior Advisory Committee. The center has one full-time staff person and two part-time personnel from the Office of Homeland Security. There were no federal entities represented in the fusion center. However, officials said that they coordinate with the local JTTF, the HIDTA, and other drug and fugitive task forces. The fusion center gathers information about all-hazard, all-crimes incidents and disseminates it to first responders, surrounding states, and the federal government. 
DHS and DOJ information systems or networks accessible to the fusion center include HSIN and LEO, as well as RISS/Mid-States Organized Crime Information Center, ICEFISHX, and Law Enforcement Intelligence Network, which are operated by fusion centers in Minnesota and Iowa. The state center for Tennessee, the Tennessee Regional Information Center (TRIC), opened in May 2007, with the mission to lead a team effort of local, state, and federal law enforcement in cooperation with the citizens of the state of Tennessee for the timely receipt, analysis, and dissemination of terrorism and criminal activity information relating to Tennessee. TRIC provides a central location for the collection and analysis of classified, law enforcement sensitive, and open source information; provides a continuous flow of information and intelligence to the law enforcement community; and provides assistance to law enforcement agencies in criminal investigation matters. TRIC has an all-crimes scope of operations that includes crimes such as traditional organized crime, narcotics, gangs, fugitives, missing children, sex offenders, and Medicaid/Medicare fraud. TRIC also has a terrorism/national security focus that includes international and domestic terrorism, foreign counterintelligence, and other national security issues (such as Avian Flu). Led by the Tennessee Bureau of Investigation and the Tennessee Department of Safety/Office of Homeland Security, TRIC’s 31-person staff includes analysts from these two entities, as well as the Department of Corrections, the National Guard, and the FBI FIG. Other partner agencies include the Highway Patrol, the Oak Ridge National Laboratory, HIDTA, the U.S. Attorney’s Office, ATF, and the Regional Organized Crime Information Center, as well as a growing connectivity to the state’s local law enforcement agencies. TRIC provides support to all agencies within the state, reviews and analyzes data for crime trend patterns and criminal activity with a potential nexus to terrorism, disseminates information through regular bulletins and special advisories, develops threat assessments and executive news briefs, performs requests for information as needed, and produces suspicious incident report analysis. The public can provide tips and information to TRIC through its Web site and via a toll-free telephone number. DHS and DOJ information systems or networks accessible to TRIC include HSIN and LEO, as well as RISS and the Regional Organized Crime Information Center. Fusion center operations work in concert with other ongoing Tennessee Bureau of Investigation programs, including AMBER Alerts, the sex offender registry, and the aviation unit. In addition to the statewide Texas Fusion Center, there are regional fusion centers including the North Central Texas Fusion Center. After September 11, Governor Rick Perry created a task force to study homeland security matters, and the task force identified communication and coordination as predominant themes. Subsequently, the Texas Legislature passed a bill that created a communications center to serve as the focal point for planning, coordinating, and integrating government communications regarding the state’s homeland defense strategy. This center, then known as the Texas Security Alert and Analysis Center, opened in July 2003. The center functioned as a call center to allow the public and law enforcement to report suspicious activities. 
In July 2005, the center was expanded and renamed the Texas Fusion Center, which acts as a tactical intelligence center for law enforcement that is open 24 hours a day, 7 days a week and helps coordinate multi-agency border control activities. The fusion center has an all-crimes and all-hazards scope of operations in order to disrupt organizations that are using criminal activities to further terrorist activities. The center gathers information from the public and law enforcement, analyzes it, and provides it to JTTFs. The fusion center also focuses on border security, narcoterrorism, and criminal gangs. The all-hazards scope of operations was adopted in the aftermath of Hurricanes Katrina and Rita. The center works in conjunction with, and is located in, the State Operation Center, in order to create an all-hazards response capability. The Texas Fusion Center has dual oversight by the Criminal Law Enforcement Division and the Governor’s Office of Homeland Security at the Texas Department of Public Safety. It is staffed by Department of Public Safety officers and analysts. The FBI assigned a part-time analyst to the center. The Texas Fusion Center is the central facility for collecting, analyzing, and disseminating intelligence information related to terrorist activities. The center is designed to handle and respond to telephone inquiries from law enforcement and the general public, in addition to having access to several information systems. The Texas Fusion Center monitors HSIN, LEO, and JRIES and has access to FBI systems, though only through the part-time analyst assigned to the center. The center also uses a variety of state systems and databases, including the Texas Data Exchange, which is a comprehensive information-sharing portal that allows criminal justice agencies to exchange jail and records management systems data, and provides system access to a variety of state databases. The North Central Texas Fusion Center (NTFC) became operational in February 2006, after a 2½-year planning process. The planners recognized that the fusion center needed a mission different from those of the North Texas HIDTA and the FBI FIG, so NTFC adopted an all-crimes and all-hazards scope of operations. Specifically, NTFC works to prevent or minimize the impacts of natural, intentional, and accidental hazards/disasters through information sharing across jurisdictions and across disciplines. The center also supports emergency response, field personnel, and investigations. Stakeholders include those in homeland security, law enforcement, public health, fire, emergency management, and state and federal government such as the Texas Fusion Center, Texas National Guard, and DHS. DHS I&A has assigned an intelligence analyst to the center. The center provides intelligence support to regional task forces, State of Texas initiatives, and local police department homicide and criminal investigations and also assesses regional threats. Users from 42 regional jurisdictions and agencies covering five major disciplines, including law enforcement, health, fire, emergency management, and intelligence, receive bulletins and alert information. Reports and alerts are also distributed via e-mail to the stakeholders. Most of the reports are all-hazard and all-discipline focused and look at trends, observations, and predictive elements primarily in support of prevention and preparedness. 
DHS and DOJ systems and networks accessible to NTFC include HSIN, LEO, and HSDN, in addition to a variety of other state and open source information databases. The Utah Fusion Center is in the planning stage and is transitioning from an intelligence center—the Utah Criminal Intelligence Center—which was established prior to the 2002 Winter Olympics. Led by the Utah Department of Public Safety, the fusion center is in the process of developing operations guidelines and memorandums of understanding and consulting with DHS’s Office of Grants and Training. The Utah Fusion Center will adopt an all-crimes and all-hazards scope of operations to move beyond law enforcement and broaden the center’s focus to include homeland security and public safety. The fusion center was established to enhance the ability to share information across disciplines beyond law enforcement and levels of government. The fusion center, as was the criminal intelligence center, is colocated with the local FBI JTTF and will employ criminal researchers and investigators. The center works closely with the FBI JTTF and the local DHS representative, partnerships that were developed with the establishment of the precursor intelligence center in 2002. The FBI provides Top Secret clearances, and most of the staff members have had Top Secret security clearances since the 2002 Winter Olympics. DHS and DOJ information systems or networks accessible to the fusion center include HSIN, LEO, FBI classified systems, as well as RISS/Rocky Mountain Information Network. The Vermont Fusion Center, which is managed by the Vermont State Police, was established in August 2005 in order to further the national homeland security mission in response to the terrorist attacks on September 11. The fusion center, which is colocated with ICE’s Law Enforcement Support Center (LESC), is a partnership of the Vermont Department of Homeland Security, Vermont State Police Criminal Intelligence Unit, ICE, Vermont National Guard Counter Drug Program, and the U.S. Coast Guard. Each entity provides personnel to the center. The fusion center serves as Vermont’s clearinghouse to analyze and assess information received from law enforcement and disseminate information from a single location. The goals of the Vermont Fusion Center include providing timely, accurate, and actionable information to the state, national, and international law enforcement communities; identifying parallel investigations, reducing duplication, and increasing officer safety (deconfliction); and providing strategic analysis, to include crime mapping for all types of criminal activity, particularly related to illegal narcotics, money laundering crimes, identity theft, crimes that support terrorism, and other major crimes. The center has an all-crimes scope of operations reflecting the multiple sectors, including public safety and law enforcement, that have come together to form the fusion center. The center provides major criminal case assistance, such as fugitive tracking, phone searches, liaison with federal and Canadian agencies, analytical reports, and utilization of federal capabilities such as cellular telephone triangulation, mail covers, passport information, and border lookouts. The center also disseminates notifications, alerts, indicators, and warnings to Vermont law enforcement. DHS and DOJ information systems or networks accessible to the fusion center include HSIN, Student and Exchange Visitor Information System, U.S. 
Visitor and Immigration Status Indicator Technology System, National Security Entry-Exit Registration System, FPS portal, U.S. Coast Guard Homeport, LEO, VICAP, EPIC, NCIC, as well as RISS/New England Police Information Network, NLETS, INTERPOL, HIDTA, FinCEN, and Treasury Enforcement Communications System. The center also has access to a number of state and commercial systems and databases, and to the Canadian Border Information/Intel Center. The Virginia Fusion Center was established in February 2005 after being mandated by legislation and moved into a new facility in November 2005. Operated by the Virginia State Police, in cooperation with the Virginia Department of Emergency Management, the primary mission of the center is to fuse together resources from local, state, and federal agencies and private industries to facilitate information collection, analysis, and sharing in order to deter and prevent criminal and terrorist attacks. The secondary mission of the center is, in support of the Virginia Emergency Operations Center (with which it is colocated), to centralize information and resources to provide coordinated and effective response in the event of an attack. The center has an all-hazards and counterterrorism scope of operations. The Virginia Fusion Center has partnerships established with state, local, and federal law enforcement agencies, including ATF and the U.S. Secret Service; DHS’s Homeland Security Operation Center; FBI JTTFs in the state of Virginia; the private sector; Fire and Emergency Medical Services; the military, including the Army and the U.S. Coast Guard; the National Capitol Regional Intelligence Center; other state intelligence centers; and the public. There are over 20 people in the center—17 analysts, 5 special agents, and other management and administrative personnel. The analysts are primarily from the Virginia State Police and the Department of Emergency Management. The National Guard has also assigned an analyst. DHS has detailed one intelligence analyst, and the FBI has assigned one reports officer to the center. The DHS Protective Security Advisor has a desk in the center as well. Several center employees are detailed to other organizations; for example, the Virginia State Police have five agents assigned to JTTFs in the state. DHS and DOJ information systems or networks accessible to the fusion center include HSIN, HSIN-Intel, HSDN, LEO, and JRIES, as well as the RISS/Regional Organized Crime Information Center. The FBI reports officer in the center can access FBI classified systems. The fusion center shares all-hazards information and intelligence, tactical information, raw information, and finished intelligence products with a variety of clients. These products include daily terrorism intelligence briefings that could be produced at Law Enforcement Sensitive, For Official Use Only, and open source levels and are e-mailed to all law enforcement and military contacts and posted to a bulletin; intelligence bulletins that describe emerging trends or upcoming events; threat assessments for events; and information reports on pertinent information that has not been fully analyzed. Virginia Fusion Center analysts also produce special projects or reports, provide case support, follow up on calls, and respond to requests for information. The center has established a variety of performance measures, including quarterly surveys disseminated to its users and activity reports (e.g., daily, weekly, quarterly, and yearly). 
All personnel also have core responsibilities and competencies. The Washington Joint Analytical Center (WAJAC) started as a small project in 2003 to facilitate information sharing within the state and with the federal government and has gradually evolved. WAJAC, which is a joint effort between the Washington State Patrol and the FBI, has an all-crimes, all-hazards, and counterterrorism scope of operations to support the state and local law enforcement community. This approach allows WAJAC intelligence analysts and investigators to fully evaluate information for trends, emerging crime problems, and their possible connections to terrorism. WAJAC has recently included an all-hazards focus and has started looking at natural disasters and public health epidemics. WAJAC personnel include representatives from the Washington State Patrol, King County Sheriff’s Office, Bellevue Police Department, Seattle Police Department, the Washington Military Department (National Guard), ICE, and TSA. There are no FBI personnel assigned directly to WAJAC; however, WAJAC is colocated in an FBI field office and WAJAC analysts work side by side with the FIG in the field office. DHS I&A has conducted a needs assessment of WAJAC, and, according to DHS, has assigned an intelligence analyst to the center. DHS and DOJ information systems or networks accessible to the fusion center include HSIN, LEO, and ICE and TSA systems; all FBI systems; and the systems of each partner agency in WAJAC. WAJAC personnel receive all of their clearances, at the Top Secret level, through the FBI. WAJAC produces a variety of weekly intelligence briefings, bulletins, and assessments in conjunction with the FIG. These products are e-mailed to law enforcement agencies, other government agencies, private sector security officers, and military units. The Department of Military Affairs and Public Safety’s Homeland Security Division is in the planning stage of establishing the West Virginia Fusion Center. The planning team for the development of the fusion center consists of multiple agencies and stakeholders with leadership from the Homeland Security Advisor. The West Virginia Fusion Center is to operate under the direct control of the Homeland Security Advisor and the State Administrative Agency. A governance committee, to be chaired by the State Administrative Agency, with representatives from the Northern and Southern Anti-Terrorism Advisory Councils, state police, National Guard, health care, higher education, the private sector, and the interoperability coordinator, will be responsible for providing guidance and policy. At the time of our review, the West Virginia Fusion Center was beginning its phased opening and bringing in personnel from the National Guard and the state police. The vision for the fusion center is to prevent, deter, and disrupt terrorism and criminal activity, enabling a safe and secure environment for the citizens of West Virginia. The fusion center will adopt an all-crimes, all-hazards, and counterterrorism scope of operations but plans to tailor each focus depending on the stakeholders in the center. For example, the West Virginia Public Broadcasting System will be represented in the fusion center to help gather and manage information. However, if there is an evacuation event, it will also disseminate the information directly to the public as public service announcements through television and radio stations. 
There are two fusion centers in Wisconsin: the Wisconsin Statewide Intelligence Center (WSIC) and the Milwaukee-based Southeastern Wisconsin Terrorism Alert Center (STAC). Led by the Wisconsin Department of Justice Division of Criminal Investigation, the Wisconsin Statewide Intelligence Center (WSIC) became operational in March 2006 as the central information and intelligence-gathering entity for the state of Wisconsin and acts as the clearinghouse for information and intelligence coming from local and county agencies. WSIC’s mission includes managing intelligence gathering efforts and passing information to appropriate agencies and the JTTF; interfacing with the Emergency Operations Center and Joint Operations Center during critical incidents or as requested; producing general weekly law enforcement bulletins and daily intelligence briefings for the Governor, top law enforcement officials, and partner agency heads, among others; supporting the Division of Criminal Investigation technology assets in the field; and providing statewide major case support and analytical services. Though counterterrorism is the primary concern of WSIC, the center operates with an all-crimes, all-hazards, all-events approach directed by the state Homeland Security Council, which wanted the center to be the intelligence voice for the state and to help the state in a comprehensive way. WSIC is staffed by eight full-time personnel, five of whom are Division of Criminal Investigation personnel. There are also two National Guard analysts, a special investigator from the Wisconsin Department of Natural Resources, and one FBI analyst at the center. DHS I&A has conducted a needs assessment of WSIC. However, at the time of our review it had not yet placed an intelligence analyst in the center. WSIC also supports the STAC by providing three Division of Criminal Investigation personnel to the center. WSIC is overseen by a Governance Board made up of federal, state, and local representatives. WSIC analysts provide short- or long-term assistance to agencies by using analytical tools and systems to clarify and visualize case investigations, tailoring the analytical support to the requesting agency’s needs. WSIC analysts work in a variety of areas and initiatives, including counterterrorism and domestic security, gang intelligence, identity theft, and the Highway Drug Interdiction Program with the Wisconsin State Patrol. DHS and DOJ networks and systems accessible to WSIC include HSIN, LEO, and NCIC, as well as RISSNET and a statewide law enforcement network that enables law enforcement officers to submit intelligence or requests for assistance to WSIC and that provides law enforcement with WSIC bulletins and alerts, staff contact information, officer safety information, and resource links. WSIC provides a variety of products and services that include weekly law enforcement bulletins for every agency in the state containing sections on domestic and international terrorism, cold case investigations, missing persons, officer safety, and items of interest to law enforcement. Additionally, WSIC prepares a daily Command Staff Intelligence Briefing for the Governor, the Attorney General, the Adjutant General, and top law enforcement officials across the state that is primarily focused on issues within the previous 24 hours. 
WSIC also broadcasts statewide Alert Bulletins when it receives time-sensitive information, handles major criminal case analytical support, provides assistance on electronic surveillance, and conducts training events across the state and region. The Southeastern Wisconsin Terrorism Alert Center (STAC) is a counterterrorism, all-crimes, all-hazards intelligence organization made up of law enforcement, fire service, homeland security, military, DOJ, FBI, emergency management, and health department members. STAC officials said they were exposed to the TEW concept from Los Angeles and saw a need for establishing a TEW in their urban area in 2005 to improve information sharing. STAC was built on the TEW foundation as a satellite of WSIC. STAC began operating when its analysts were hired in October 2006. However, the officials said that they are still getting the physical location established and are in the final stages of reconstruction and establishing the facility. The mission of STAC is to protect the citizens, critical infrastructure, and key resources of southeastern Wisconsin by promoting intelligence-led policing, supporting criminal investigative efforts, and enhancing the domestic preparedness of first responders, all levels of government, and its partners in the private sector. STAC staff will eventually include 10 full- and part-time officers, detectives, and analysts from the Milwaukee Police Department and the Office of the Sheriff of Milwaukee County, as well as one Division of Criminal Investigation analyst and one Milwaukee Fire Department analyst. The FBI has assigned a full-time intelligence analyst and a part-time special agent. A governance board provides oversight for the center. STAC is in the process of developing a terrorism liaison officer (TLO) program, which is a network of police, fire department, public health, and private sector partners that collect and share information related to terrorism threats. STAC TLO coordinators will be responsible for analyzing available sources of terrorist threat information and preparing versions for distribution to the first responder agencies within their regions. STAC has also conducted some initial critical infrastructure assessments and has published alerts, threat assessments, and intelligence information bulletins with information for first responders about local threats, terrorism trends, and counterterrorism training. In addition, STAC offers training information for critical incident preparation. DHS and DOJ systems or networks accessible to STAC include LEO, HSIN, and RISS. STAC does not have classified FBI systems in its facility. However, the FBI analyst at STAC has access to them. The FBI also provides STAC personnel with their security clearances, most at the Secret level, and one at the Top Secret level. Wyoming does not have and is not planning to establish a physical fusion center. However, the Office of Homeland Security is working with Colorado officials to develop a plan for Wyoming to become an “adjunct” to CIAC. The officials stated that Wyoming, which has a population of only around 400,000 people and operates its law enforcement agencies with a total of only 1,600 officers, does not have the threat or the necessity for a full-fledged fusion center, much less the funding or personnel to support such a center. In addition, the Wyoming Office of Homeland Security is supported by the FBI’s JTTF in Wyoming that provides assistance such as helping with analytical review of information. 
Wyoming officials said that they have taken several steps to facilitate the development of a partnership with Colorado’s CIAC, including putting in place a technical system to augment the communications capability of Wyoming’s law enforcement agencies to exchange intelligence and information with CIAC. Wyoming officials intend to develop memorandums of understanding with CIAC to cover a regional area including both Colorado and Wyoming. In addition, Wyoming will furnish personnel for CIAC. The officials characterized the partnership as being between the planning and early stages of development and said that Wyoming and CIAC will have their partnership operational in approximately the fall of 2007. However, a Wyoming official noted that the state did not designate any fiscal year 2007 funding to continue the fusion initiative. The official said that the fusion center initiative is critical to efforts to thwart terrorism, and the state intends to continue its partnership with CIAC and attempt to obtain future grant funding. In addition to the contact named above, Susan H. Quinlan, Assistant Director; Michael Blinde; Katherine Davis; George Erhart; Jill Evancho; Mary Catherine Hult; Julian King; Tom Lombardi; and Jay Smale made key contributions to this report.
In general, a fusion center is a collaborative effort to detect, prevent, investigate, and respond to criminal and terrorist activity. Recognizing that fusion centers are a mechanism for information sharing, the federal government—including the Department of Homeland Security (DHS), the Department of Justice (DOJ), and the Program Manager for the Information Sharing Environment (PM-ISE), which has primary responsibility for governmentwide information sharing and is located in the Office of the Director of National Intelligence—is taking steps to partner with fusion centers. In response to a congressional request, GAO examined (1) the status and characteristics of fusion centers and (2) to what extent federal efforts help alleviate challenges the centers identified. GAO reviewed center-related documents, conducted interviews with officials from DHS, DOJ, and the PM-ISE, and conducted semistructured interviews with officials from 58 state and local fusion centers. The results are not generalizable to the universe of fusion centers. Data are not available on the total number of local fusion centers. Most states and many local governments have established fusion centers to address gaps in information sharing. Fusion centers across the country vary in their stages of development—from operational to early in the planning stages. Officials in 43 of the centers GAO contacted described their centers as operational, and 34 of these centers had opened since January 2004. Law enforcement entities, such as state police or state bureaus of investigation, are the lead or managing agencies in the majority of the operational centers GAO contacted; however, the centers varied in their staff sizes and partnerships with other agencies. Nearly all of the operational fusion centers GAO contacted had federal personnel assigned to them. For example, DHS has assigned personnel to 17, and the FBI has assigned personnel to about three-quarters of the operational centers GAO contacted. DHS and DOJ have several efforts under way that begin to address challenges fusion center officials identified. DHS and DOJ have provided many fusion centers access to their information systems, but fusion center officials cited challenges accessing and managing multiple information systems. Both DHS and the FBI have provided security clearances for state and local personnel and set timeliness goals. However, officials cited challenges obtaining and using security clearances. Officials in 43 of the 58 fusion centers contacted reported facing challenges related to obtaining personnel, and officials in 54 fusion centers reported challenges with funding, some of which affected these centers' sustainability. The officials said that these issues made it difficult to plan for the future and created concerns about the fusion centers' ability to sustain their capability for the long term. To support fusion centers, both DHS and the FBI have assigned personnel to the centers. To help address funding issues, DHS has made several changes to address restrictions on the use of federal grant funds. These individual agency efforts help address some of the challenges with personnel and funding. However, the federal government has not clearly articulated the long-term role it expects to play in sustaining fusion centers. 
It is critical for center management to know whether to expect continued federal resources, such as personnel and grant funding, since the federal government, through the information sharing environment, expects to rely on a nationwide network of centers to facilitate information sharing with state and local governments. Finally, DHS, DOJ, and the PM-ISE have taken steps to develop guidance and provide technical assistance to fusion centers, for instance, by issuing guidelines for establishing and operating centers. However, officials at 31 of the 58 centers said they had challenges training their personnel, and officials at 11 centers expressed a need for the federal government to establish standards for training fusion center analysts to help ensure that analysts have similar skills. DHS and DOJ have initiated a technical assistance program for fusion centers. They have also developed a set of baseline capabilities, but the document was still in draft as of September and had not been issued.
Many minority banks are located in urban areas and seek to serve distressed communities and populations that have traditionally been underserved by financial institutions. For example, after the Civil War, banks were established to provide financial services to African-Americans. More recently, Asian-American and Hispanic-American banks have been established to serve the rapidly growing Asian and Hispanic communities in the United States. In our review of regulators’ lists of minority banks, we identified a total minority bank population of 195 for 2005 (table 1). Table 2 shows that the distribution of minority banks by size is similar to the distribution of all banks by size. More than 40 percent of all minority banks had assets of less than $100 million. Each federally insured depository institution, including each minority bank, has a primary federal regulator: FDIC, OTS, OCC, or the Federal Reserve. The primary regulator for each bank is determined by the institution’s charter (table 3). As shown in table 4, FDIC serves as the federal regulator for over half of minority banks—109 out of 195 banks, or 56 percent—and the Federal Reserve regulates the fewest. The primary responsibilities of federal banking regulators include helping to ensure the safe and sound practices and operations of the institutions they oversee, the stability of financial markets, and compliance with laws and regulations. To achieve these goals, among other activities, the regulators conduct on-site examinations, issue regulations, conduct investigations, and take enforcement actions. Regulators may also close banks that are deemed to be insolvent and pose risks to the Deposit Insurance Fund. FDIC is responsible for ensuring that deposits in failed banks are protected up to established federal deposit insurance limits. Banking regulators primarily focus on ensuring the safety and soundness of banks, but laws and regulatory policies can identify additional goals and objectives. Recognizing the importance of minority banks, under Section 308 of FIRREA, Congress outlined five broad goals that FDIC and OTS, in consultation with Treasury, are to work toward to preserve and promote minority banks. These goals are preserving the present number of minority banks; preserving their minority character in cases involving mergers or acquisitions of minority banks; providing technical assistance to prevent the insolvency of institutions that are not currently insolvent; promoting and encouraging the creation of new minority banks; and providing for training, technical assistance, and educational programs. Technical assistance is typically defined as one-on-one assistance that a regulator may provide to a bank in response to a request. For example, a regulator may advise a bank on compliance with a particular statute or regulation. Regulators may also provide technical assistance to banks that is related to deficiencies identified in safety and soundness or compliance examinations. In contrast, educational programs are typically open to all banks regulated by a particular agency or to all banks located within a regulator’s regional office. For example, regulators may offer training for banks to review compliance with laws and regulations. Most minority banks with assets exceeding $100 million were nearly as profitable—measured by return on assets (ROA)—as their peers in 2005 as well as in earlier years, or had levels of profitability that have historically been considered adequate, according to our analysis of FDIC data. 
However, small minority and African-American banks of all sizes (which together account for about half of all minority institutions) have been significantly less profitable than their industry peers. Our analysis and other research have suggested some possible reasons for lower profitability among some small minority banks and African-American banks, such as higher reserves for potential loan losses and higher operating expenses. The results of other studies we reviewed were consistent with these findings, and minority banks that we spoke with offered additional explanations, such as the effects of increased competition from larger banks. However, overall, officials from banks across all minority groups were positive about the financial outlook of their institutions. Many found their minority status to be an advantage in serving their communities—for example, in communicating with customers in their primary languages. As shown in figure 1, most minority banks with assets exceeding $100 million had ROAs in 2005 that were close to those of their peer groups, while many smaller banks had ROAs that were significantly lower than those of their peers. Minority banks with more than $100 million in assets accounted for 58 percent of all minority banks, while those with less than $100 million accounted for 42 percent. Each size category of minority banks with more than $100 million in assets had a weighted average ROA that was slightly lower than that of its peers, but in each case their ROAs exceeded 1 percent. By historical banking industry standards, an ROA of 1 percent or more has generally been considered an adequate level of profitability. We found that of these larger minority banks, Hispanic-American, Asian-American, Native American, and women-owned banks were close to, and in some cases exceeded, the profitability of their peers in 2005. Overall, small minority banks (those with assets of less than $100 million) had an average ROA of 0.4 percent, and their peers had an average ROA of 1 percent. Our analysis of FDIC data for 1995 and 2000 also indicated some similar patterns, with minority banks with assets greater than $100 million showing levels of profitability that were generally close to those of their peers, or ROAs of about 1 percent, while minority banks with assets of less than $100 million showed greater differences from their peers. Further, in 2000, the Chairman of FDIC discussed the agency’s finding that many small minority banks lagged in profitability. According to FDIC’s analysis, nearly 70 percent of small minority banks reported an ROA in 1999 of under 1 percent, and nearly 40 percent reported an ROA of less than half the industry average. Among small minority banks, African-American, Asian-American, and Hispanic-American banks had ROAs that were significantly lower than those of their peers, while the ROAs of small Native American and women-owned banks were closer to those of their peers (fig. 2). For example, the ROA for small Asian-American banks in 2005 was 0.10 percent, and Hispanic-American banks’ ROA was 0.65 percent, compared with their peers’ ROA of 1 percent. Our analysis of FDIC data for 1995 and 2000 showed similar results, with small African-American, Asian-American, and Hispanic-American banks in particular having significantly lower ROAs than their peers. The profitability of African-American banks has generally been below that of their peers in all size categories (fig. 3).
African-American banks with less than $100 million in assets—which constitute 61 percent of all African-American banks—had an average ROA of 0.16 percent, while their peers averaged 1.0 percent. Similarly, African-American banks with assets of between $100 million and $300 million—which constitute 26 percent of all African-American banks—had ROAs that were 75 percent lower than those of their peers. While profitability improved among larger categories, the profitability of African-American banks with assets of $300 million or more was lower than that of their peers. Our analysis of FDIC data for 2000 and 1995 also found that African-American banks of all sizes had lower ROAs than their peers. For example, in 2000, African-American banks with assets of between $100 million and $300 million had an average ROA that was about half of their peers’ average of 1.2 percent. Our analysis of 2005 FDIC data suggests some possible reasons for the differences in profitability between some minority banks and their peers. For example, our analysis of 2005 FDIC data showed that African-American banks with assets of less than $300 million—which constitute 87 percent of all African-American banks—had significantly higher loan loss reserves as a percentage of their total assets than the average for their peers (fig. 4). Although having higher loan loss reserves may be necessary for the safe and sound operation of any particular bank, because additions to loan loss reserves are counted as expenses, higher reserves lower bank profits. Most Asian-American, Hispanic-American, Native American, and women-owned banks had loan loss reserves that were closer to the average for their peer group in 2005. We also found some evidence that higher operating expenses may affect the profitability of some minority banks. Operating expenses—expenditures for items such as administrative expenses and salaries—are typically compared to an institution’s total earning assets, such as loans and investments, to indicate the proportion of earning assets banks spend on operating expenses. As figure 5 indicates, many minority banks with less than $100 million in assets had higher operating expenses than their peers in 2005. Specifically, the average ratio of minority banks’ operating expenses to earning assets was 4.88 percent, compared with an average 3.86 percent for the peer group, or a difference of 21 percent. Small African-American and Asian-American banks had higher operating expenses than their peers (41 and 20 percent higher, respectively), while operating expenses for small Hispanic-American banks were closer to their peers (7 percent higher). Small women-owned banks had lower operating expenses than their peers, while small Native American banks had higher operating expenses, although, as we have seen, both Native American and women-owned banks were the most profitable of small minority banks. Because larger African-American banks were relatively less profitable than their peers, we also reviewed FDIC data on their operating expenses in 2005. The FDIC data indicate that African-American banks with assets of between $100 million and $500 million had operating expense ratios that exceeded those of their respective peer groups by 20 percent or more. Other studies corroborated our findings that some minority banks operate in more challenging markets and may face higher operating costs.
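The comparisons above rest on a few simple ratios computed from Call and Thrift Report fields: ROA (net income as a share of total assets), loan loss reserves as a share of total assets, and operating expenses as a share of earning assets, with group figures weighted by asset size. The following is a minimal sketch of that arithmetic; the field names and sample values are hypothetical illustrations rather than actual FDIC data, and the asset-weighting convention and the use of the peer figure as the baseline for "percent higher" comparisons are our assumptions.

# Minimal sketch of the ratio comparisons described above. All figures
# below are hypothetical illustrations, not actual Call/Thrift Report data.

def roa(net_income, total_assets):
    """Return on assets: net income as a share of total assets."""
    return net_income / total_assets

def weighted_average_roa(banks):
    """Asset-weighted average ROA for a group of (net_income, total_assets) pairs."""
    total_income = sum(income for income, _ in banks)
    total_assets = sum(assets for _, assets in banks)
    return total_income / total_assets

def percent_above_peer(group_ratio, peer_ratio):
    """How much higher a group's ratio is than the peer figure, in percent
    (assumes the peer ratio is the baseline)."""
    return 100.0 * (group_ratio - peer_ratio) / peer_ratio

# Hypothetical small-bank group: (net income, total assets), in millions of dollars.
small_group = [(0.3, 80.0), (0.5, 60.0), (0.1, 40.0)]
print(f"weighted average ROA: {weighted_average_roa(small_group):.2%}")

# Hypothetical operating-expense ratios (operating expenses / earning assets).
group_expense_ratio = 0.050
peer_expense_ratio = 0.040
print(f"operating expenses {percent_above_peer(group_expense_ratio, peer_expense_ratio):.0f} percent higher than peers")

The same comparison can be repeated for each minority group and asset-size category against the composite peer-group statistics.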
Officials from several minority banks we contacted also described aspects of their operating environments and business practices, including a focus on customer service that could result in higher operating costs. In particular, the officials cited the costs associated with providing banking services in low-income urban areas or in communities with high immigrant populations. Bank officials also told us that they focus on fostering strong customer relationships, sometimes providing financial literacy services. Consequently, these banks spend more time and resources on their customers per transaction than other banks as part of their mission. Other minority bank officials said that their customers made relatively small deposits and preferred to do business in person at bank branch locations rather than through potentially lower-cost alternatives, such as over the phone or the Internet. Along with these considerations, minority bank officials we contacted cited other factors that could limit their profitability. First, many minority banks identified competition from larger banks, credit unions, and nonbanks as their institution’s greatest challenge. In particular, minority bank officials said that larger banks, in response to Community Reinvestment Act (CRA) incentives, were increasingly posing competitive challenges among the banks’ traditional customer base. The bank officials said that larger banks could offer loans and other financial products at more competitive prices because these banks could raise funds at lower rates and had advantageous operational efficiencies. Second, some African-American, Asian-American, and Hispanic-American banks cited attracting and retaining quality staff as a challenge to profitability. Officials from one Hispanic-American bank said that the difficulty of attracting qualified new staff restricted the bank’s growth. An Asian-American banker said that many Asian-American banks tended to focus on the Asian-American market, potentially limiting the pool of qualified applicants. Despite these challenges, officials from banks across minority groups were optimistic about the financial outlook for their institutions. When asked in our survey to rate their financial outlook compared with that of the past 3 to 5 years, 65 percent said it would be much or slightly better, 21 percent thought it would be about the same, and 11 percent thought it would be slightly or much worse, while 3 percent did not know. Officials from minority banks said that their institutions had advantages in serving minority communities. For example, officials from an Asian-American bank said that the staff’s ability to communicate in customers’ primary language provided a competitive advantage. FDIC has established the most comprehensive efforts among the bank regulators to support minority banks and also leads interagency efforts to coordinate agencies’ activities. OTS also has developed several specific initiatives to support minority banks. While not required to do so by Section 308 of FIRREA, OCC and the Federal Reserve have taken some steps to support minority banks, such as holding occasional conferences for Native American banks, and are planning additional efforts. Treasury, which FIRREA stipulates is to consult with FDIC and OTS on preserving minority banks, no longer does so on a routine basis, but Treasury officials told us that the agency does confer with the banking agencies on an as-needed basis.
Although FDIC has recently and proactively sought to assess the effectiveness of its efforts to support minority banks, none of the regulators routinely survey institutions they regulate to obtain comprehensive performance information on their minority bank efforts, nor have they established outcome-oriented performance measures to gauge results in relation to pre-established targets. As a result, the regulators are not well positioned to assess the results of their efforts to support minority banks or identify potential areas for improvement. Of the four banking regulators, FDIC—which supervises 109 of 195 minority banks—has developed the most extensive efforts to support such institutions (fig. 6). FDIC also has taken the lead in coordinating regulators’ efforts in support of minority banks, including leading a group of all the banking regulators that meets semiannually to discuss individual agency initiatives, training and outreach events, and each agency’s list of minority banks. FDIC and OTS have established national and regional coordinators to implement their policies to support minority banks and provide routine technical assistance and other outreach to the institutions that they regulate. OCC officials we contacted said that they believed that minority banks could play an important role in providing financial services to minorities and other groups, and Federal Reserve officials told us that they adhered to the spirit of Section 308 of FIRREA. While neither agency has developed support efforts designed specifically for all the minority institutions that they regulate, both agencies provide technical assistance and educational services to minority banks upon request, as they do for all of their supervised banks, and have undertaken efforts in support of some types of minority banks. Both agencies also told us that they were planning additional efforts to support minority institutions. The following briefly describes the regulators’ minority bank support programs, as listed in figure 6. FDIC, OTS, and OCC all have policy statements that outline the agencies’ efforts with respect to minority banks. The policy statements discuss how the regulators identify minority banks, participate in minority bank events, provide technical assistance, and work toward preserving the character of minority banks during the resolution process. OCC officials told us that they developed their policy statement in 2001 after an interagency meeting of the federal banking regulators on minority bank issues. Both FDIC and OTS issued policy statements in 2002. FDIC has a national coordinator in Washington, D.C., and coordinators in each regional office from its Division of Supervision and Consumer Protection to implement the agency’s minority bank program. Among other responsibilities, the national coordinator regularly contacts minority bank trade associations about participation in events and other issues, coordinates with other agencies, maintains FDIC’s list of all insured banks that are considered to be minority under the agency’s definition, and compiles quarterly reports for the FDIC chairman based on regional coordinators’ reports on their minority bank activities. Similarly, OTS has a national coordinator in its headquarters and supervisory and community affairs staff in each region who maintain contact with the minority banks that OTS regulates.
The national coordinator participates in the interagency coordination meetings with the other banking regulators and works with the regional community affairs staff to compile the agency’s annual report to Congress on minority bank issues. OCC and the Federal Reserve do not have similar structures in place. However, OCC does have an agency ombudsman who maintains contact with minority banks and a senior adviser for external outreach and minority affairs who participates in the interagency coordination meetings. Officials from the Federal Reserve—which directly supervises the fewest minority banks—told us that Federal Reserve staff at the district level maintain frequent contact with minority banks under their purview and that Federal Reserve staff participate in interagency coordination meetings. FDIC has a public Web page dedicated specifically to minority banking issues that includes FDIC’s list of all minority banks, staff contacts, links to trade associations and other relevant sites, and a link to provide feedback on FDIC’s minority banking efforts. FDIC officials told us that the feedback link has been on their Web page since 2002 but that the agency rarely receives feedback from minority banks. FDIC is planning to improve its Web page by adding a link to FDIC’s home page and additional resources, including research highlighting issues relevant to minority banks. OCC also has a Web page that contains some information on minority bank issues. The Web site containing this page, BankNet, is available to registered national banks. OCC’s Web site is not as extensive as FDIC’s but does contain a list of minority banks that OCC regulates, links to OCC’s minority bank policy statement, and a comparative analysis tool to compare the financial performance of minority banks with that of their peers. FDIC has taken the lead role in sponsoring and hosting events in support of minority banks and in coordinating these events with the other regulators. These events have included the following:

A national conference in 2001, which was attended by about 70 minority banks supervised by different banking regulators and in which all four banking regulators participated. Participants discussed challenges, shared best practices, and evaluated possible actions regulators could take to preserve minority banks. In August 2006, FDIC sponsored a national conference for minority banks in which representatives from OTS, OCC, and the Federal Reserve participated.

Regional forums and conferences, which were organized after 2002 to follow up on the national conference and implement initiatives set forth in FDIC’s 2002 policy statement. FDIC officials told us that these events are held annually by each of FDIC’s regional offices. The content of these events has varied among regions but has included issues relating to safety and soundness and compliance examinations, community affairs, deposit insurance, and FDIC’s minority banking program. Representatives from other banking agencies have participated in these events.

The Minority Bankers Roundtable (MBR) series, which FDIC officials told us was designed to provide insight into the regulatory relationship between minority banks and FDIC and to explore opportunities for partnerships between FDIC and these banks. In 2005, FDIC held six roundtables around the country for minority banks supervised by all of the regulators.

Other regulators have also held events in support of minority banks.
For example, in May 2006, the Director, Deputy Director, and the Northeast Regional Director of OTS held a meeting in New York in which all of the OTS-regulated minority banks in the region participated. The issues discussed included ways to strengthen community development and investment activities and partnerships with community-based organizations, and other issues of concern. In 2002, OCC held a forum with the North American Native Bankers Associations and a Native American bank and has created publications on banking in Native American communities. In February 2006, OCC held an event for several chief executive officers from African-American national banks to meet with OCC’s Executive Committee and the Comptroller of the Currency to discuss the challenges these banks faced. Federal Reserve banks have hosted workshops and other events for Native American banks, as well as produced publications on Native American banking. Outside of the customary training and educational programs that regulators make available to all banks, FDIC is the only regulator to convene training sessions exclusively for minority banks (including minority banks not regulated by FDIC) that the banks may attend free of charge. FDIC officials told us that the agency’s regional offices have held several such training sessions on an as-needed basis or when suggested at minority bank events. For example, FDIC’s Dallas regional office conducted 1-day seminars in 2004 and 2005 specifically for minority banks that included presentations on compliance, the Bank Secrecy Act and anti-money-laundering issues, and economic and banking conditions. All of the federal banking regulators told us that they provided their minority banks with technical assistance if requested, but only FDIC and OTS have specific procedures for offering this assistance. More specifically, FDIC and OTS officials told us that they proactively seek to make minority banks aware of such assistance through established outreach procedures outside of their customary examination and supervision processes. FDIC also has a policy that requires its regional coordinators to ensure that examination case managers contact minority banks 90 to 120 days after an examination to offer technical assistance in any problem areas that were identified during the examination. This policy is unique to minority banks. As part of their quarterly reports to headquarters, FDIC regional coordinators report on how many offers of technical assistance they have made to minority banks and how many banks requested the assistance. More generally, FDIC staff contact the minority banks they supervise at least once a year to offer to have a member of regional management meet with banks’ boards of directors and to familiarize the institutions with FDIC’s initiatives. OTS officials told us that technical assistance is the focus of their minority bank efforts. According to the agency’s policy statement, OTS monitors the financial condition of minority banks to identify those that might benefit from a program of increased support and technical assistance. OTS regional staff contact minority banks they supervise annually to make them aware of their minority bank efforts and to offer to meet with the banks’ boards of directors to discuss issues of interest and types of assistance OTS can provide.
Additionally, FDIC and OTS officials told us that they have taken proactive steps to assist individuals or groups that have filed applications for deposit insurance or to acquire a national thrift charter. FDIC officials said that they had developed a package of assistance to help smaller institutions, including many minority banks, overcome challenges associated with the FDIC insurance application process. OTS officials said that they had provided substantial assistance to a minority group that filed to acquire a national thrift charter and had extended established application deadlines to assist the group. FDIC officials said that the agency interprets FIRREA’s general goal to “promote and preserve” minority banks as a charge to support those minority banks already in existence or those that have filed deposit insurance applications rather than as a charge to actively seek out minority groups or individuals to form new banks. FDIC officials explained that the agency was an insurer, not a chartering authority, and that it would probably be inappropriate to encourage potential applicants to choose one banking charter over another. OTS officials told us that the agency currently does not promote the thrift charter to any groups but is considering the extent to which it might do so in the future. OCC and the Federal Reserve provide technical assistance to all of their banks, but they currently have not established outreach procedures for all their minority banks outside of the customary examination and supervision processes. However, OCC officials told us that the agency would be designing an outreach plan for all of OCC’s minority banks this fiscal year. Federal Reserve officials told us that Federal Reserve districts conduct informal outreach to their minority banks and consult with other districts on minority bank issues as needed. The officials said that four reserve banks had begun a pilot outreach program specifically tailored to minority banks that would include technical assistance, training, advisory visits, and ongoing analysis. Staff are in the process of conducting interviews with minority banks to obtain input on their draft program. OCC and Federal Reserve officials told us that, like FDIC and OTS, their agencies also provided assistance to minority groups during the application process and that they put forth extra effort in certain cases. For example, Federal Reserve officials told us that they had recently assisted 15 sovereign tribal nations in establishing a Native American bank. And like FDIC and OTS, neither OCC nor the Federal Reserve seeks out individuals to form either minority or nonminority banks. OCC agency officials said it would not be appropriate for their agency to do so, and Federal Reserve officials told us that it was not within their jurisdiction to do so, as they did not have authority to charter banks. The Federal Reserve, however, has conducted activities such as providing information to Native American, Muslim, and Asian-American communities on entering the banking business. FDIC has developed policies for failing banks that are consistent with FIRREA’s requirement that the agency work to preserve the minority character of minority banks in cases of mergers and acquisitions. For example, FDIC maintains a list of qualified minority banks or minority investors that may be invited to bid on the assets of troubled minority banks that are expected to fail. 
Officials from several minority banks we contacted said that FDIC had invited them to bid on failing minority banks. However, as we pointed out in our 1993 report, FDIC is required to accept the bids on failing banks that pose the lowest expected cost to the Deposit Insurance Fund. As a result, all bidders, including minorities, are subject to competition. FDIC provided us with a list of minority banks that had failed from 1990 to 2005. Of the 20 minority banks that failed during this period, 12 were acquired by nonminority banks and 5 by minority banks, while 3 were resolved through deposit payoffs. According to FDIC, the most recent failures of minority banks were two institutions in 2002, neither of which retained its minority status. OTS and OCC’s policy statements on minority banks describe how the agencies are to work with FDIC to identify qualified minority banks or minority investors to acquire minority banks that are failing. Federal Reserve officials told us that they do not have a similar written policy, given the small number of minority banks the agency supervises. However, agency officials said that they work with FDIC to identify qualified minority banks or investors to acquire failing minority banks. Officials from the four banking agencies said that they also tried to assist troubled minority banks to help improve their financial condition before a bank deteriorated to the point at which a resolution through FDIC was necessary. For example, officials from OCC, Federal Reserve, and OTS said that they provided technical assistance to such institutions or tried to identify other minority banks or investors that might be willing to acquire or merge with them. Section 308 of FIRREA required the Secretary of the Treasury to consult with FDIC and OTS to determine the best methods for meeting FIRREA’s goals in support of minority banks. In 1993, we reported that Treasury initially convened interagency meetings to facilitate communication among the federal banking regulators on minority banking issues. Treasury convened four such meetings between 1990 and 1993 at which regulators exchanged ideas, discussed policies regarding minority banks, and worked to coordinate their efforts. However, during our work for this report, Treasury officials said that the department no longer convened or participated regularly in interagency discussions on minority banking issues, although it still consulted with the federal banking regulators as issues arose. Treasury officials explained that while the nature of the FIRREA consulting requirement could be open to some interpretation, given that Treasury had discontinued formal consultations in 1993, the general view within the department is that ongoing consultations were not required. Further, Treasury officials said the department’s authority to assist the banking regulators in preserving the minority character of failing minority banks was limited by federal legislation that prohibits the Secretary of the Treasury from intervening in matters or proceedings that are before the Director of OTS or the Comptroller of the Currency, unless otherwise specifically provided by law. According to these officials, Section 308 of FIRREA does not override this prohibition, which is also consistent with Treasury’s policy not to intervene in case-specific matters before the banking agencies. 
While FDIC has recently been proactive in assessing its support efforts for minority banks, none of the regulators have routinely and comprehensively surveyed their minority banks on all issues affecting the institutions, nor have the regulators established outcome-oriented performance measures. Evaluating the effectiveness of federal programs is vitally important in order to manage programs successfully and improve program results. To this end, in 1993 Congress enacted the Government Performance and Results Act, which instituted a governmentwide requirement that agencies report on their results in achieving their agency and program goals. Agencies can evaluate the effectiveness of their efforts by establishing performance measures or through program evaluation. Performance measures are established in order to assess whether a program has achieved its objectives and are expressed as measurable, quantifiable indicators. Outcome-oriented performance measures assess a program activity by comparing it to its intended purpose or targets. Program evaluations are systematic studies that are conducted periodically to assess how well a program is working. In our 1993 report, we recommended that FDIC and OTS periodically survey minority banks that they regulate to help assess their support efforts. Surveys are an instrument by which agencies may assess their efforts and obtain feedback from the recipients of those efforts on areas for improvement. As part of its assessment methods, FDIC has recently conducted roundtables and surveyed minority banks on aspects of its minority bank efforts, as follows: In 2004, in response to an FDIC Corporate Performance Objective to enhance minority bank outreach efforts, FDIC completed a review of its minority bank outreach program that included a survey of 20 minority banks from different regulators. Seven banks responded. On the basis of the 2004 review, FDIC established the MBR program to gain insights into issues affecting minority banks and obtain feedback on its efforts. In 2005, FDIC requested feedback on its minority bank efforts from institutions that attended the agency’s six MBRs (which approximately one-third of minority banks attended). The agency also sent a survey letter to all minority banks to seek their feedback on several proposals to better serve such institutions, but only 24 minority banks responded. The proposals included holding another national minority bank conference, instituting a partnership program with universities, and developing a minority bank museum exhibition. FDIC officials said that they used the information gathered from the MBRs and the survey to develop recommendations for improving programs and developing new initiatives. According to FDIC officials, these recommendations, which have been approved and are expected to be implemented by the end of 2006, include enhancing the agency’s minority bank Web page by (1) adding a link to FDIC’s home page, (2) including a calendar of minority bank events, and (3) adding more resource links, such as links to research highlighting issues relevant to minority banks; hosting another national conference for minority banks—the conference was held in August 2006; continuing the MBR series and hosting six more roundtables in 2006; and instituting the University Partnership Program, through which FDIC and minority bank staff would advise and lecture at universities that have an emphasis on minority student enrollment.
The goals of the program include enhancing recruiting efforts for both minority banks and FDIC and increasing students’ knowledge of banking in general and of minority banks in particular. While FDIC has recently taken steps to assess the effectiveness of its minority bank support efforts, we identified some limitations in the agency’s approach. For example, in its surveys of minority banks, the agency did not solicit feedback on key aspects of its support efforts, such as the provision of technical assistance. Moreover, FDIC has not established outcome-oriented performance measures to gauge the effectiveness of its various support efforts. As discussed previously, in its quarterly reports FDIC has provided output measures that track the number of technical assistance offers it makes to minority banks and the number of banks making use of the assistance. FDIC also requires regional case managers to follow up with minority banks 90 to 120 days after examinations to offer technical assistance to address deficiencies that have been identified in examinations. However, FDIC does not report agencywide on the extent to which minority banks are able to resolve any deficiencies found during the examination process. FDIC officials told us that while the agency has not conducted surveys regarding technical assistance or developed related performance measures, technical issues may be resolved during the course of the examination process. Further, FDIC officials said that throughout the examination process and through other agency contacts, minority banks may informally provide feedback on the effectiveness of any assistance provided. However, without surveys or agencywide outcome-oriented performance measures, FDIC management may lack comprehensive and reliable information necessary to help ensure that agency staff provide effective technical assistance to minority banks to help them resolve problems identified in examinations or through other means. Further, the public and stakeholders, such as Congress, may not be informed as to the effectiveness of the agency’s technical assistance, as well as other efforts in support of minority banks. In 1994-1995, OTS interviewed the 40 minority banks that it regulated to obtain their views on the agency’s support efforts. The interviews covered topics such as the banks’ overall impressions of the agency’s efforts, technical assistance, and application issues and asked for suggestions for improving OTS’s efforts to support minority banks. However, OTS has not conducted a similar effort since that time. OTS officials told us that in 2003 and 2004 the agency conducted surveys of all OTS-regulated institutions and that a 2006 survey is in process. Because of restrictions imposed by the Office of Management and Budget on the amount of information that can be collected from institutions, OTS officials told us that they surveyed all of their banks at the same time. The surveys solicited feedback on OTS’s examination process and provided opportunities for banks to make suggestions for improving OTS’s operations. While OTS officials stated that the results from these surveys could be sorted by minority status and that the agency plans to do so and use the information for program enhancement, such analysis has not been conducted. As required under Section 308 of FIRREA, OTS provides annual reports to Congress that, among other things, track technical assistance offers made to minority banks.
However, OTS also has not established quantifiable outcome-oriented measures to gauge the quality and effectiveness of technical assistance. OCC and Federal Reserve officials told us that they had not surveyed the minority banks that they regulated to assess the effectiveness of their support efforts, and neither agency has established performance measures related to minority banking efforts. OCC officials explained that the agency did not survey minority banks because it did not treat these banks any differently from other banks. However, as described earlier, OCC has a written policy statement for minority banks, maintains information on a Web page for such institutions, and has held events on Native American banking. OCC officials also told us that they recently convened a forum for African-American bankers and were in the process of developing an outreach program specifically for the agency’s minority banks. By not periodically surveying and obtaining comprehensive feedback from a substantial number of minority banks or developing outcome-oriented performance measures for various support efforts (such as technical assistance), the regulators are not well positioned to assess their support efforts or identify areas for improvement. Further, the regulators cannot take corrective action as necessary to provide better support efforts to minority banks. Minority bank survey respondents identified potential limitations in the regulators’ efforts to support them and related regulatory issues, such as examiners’ understanding of issues affecting minority banks, which would likely be of significance to agency managers and warrant follow-up analysis. Minority banks regulated by FDIC were generally more positive about the agency’s efforts than other banks were about their regulators’ efforts. Still, only about half of FDIC-regulated banks gave their regulator very good or good marks, whereas about a quarter of banks regulated by other agencies gave the same ratings. Although some regulators emphasized technical assistance as a key component of their efforts to support minority banks, relatively few institutions used such assistance. Further, in our interviews and open-ended survey responses, banks reported some specific concerns about regulatory issues related to their minority status. In particular, survey respondents were concerned that (1) examiners, as was also noted in our 1993 report, did not always understand their operating environment or the challenges that minority banks faced in their communities and might need more training on the topic, and (2) a provision of CRA designed to facilitate relationships between minority banks and other banks has not produced the desired results. When minority bankers were asked to rate regulators’ overall efforts to support minority banks, responses varied. Some 36 percent of survey respondents described the efforts as very good or good, 26 percent described them as fair, and 13 percent described the efforts as poor or very poor (fig. 7). A relatively large percentage—25 percent—responded “don’t know” to this question. Banks’ responses varied by regulator, with 45 percent of banks regulated by FDIC giving very good or good responses, compared with about a quarter of banks regulated by other agencies.
However, more than half of FDIC-regulated banks and about three-quarters of the other minority banks responded that their regulator’s efforts were fair, poor, or very poor or responded “don’t know.” In particular, banks regulated by OTS gave the highest percentage of poor or very poor marks, while banks regulated by the Federal Reserve most often provided fair marks. Nearly half of minority banks reported that they attended FDIC roundtables and conferences designed for minority banks, and about half of the 65 respondents that attended these events found them to be extremely or very useful (fig. 8). Almost a third found them to be moderately useful, and 17 percent found them to be slightly or not at all useful. One participant commented, “The information provided was useful, as was the opportunity to meet the regulators.” Many banks also commented that the events provided a good opportunity to network and share ideas with other minority banks. We noted that minority banks frequently reported participating in training and education events and that they found these events extremely or very useful, even though most of these programs were not designed specifically for minority banks. About 58 percent reported participating in their regulator’s training and education activities—a higher percentage than had participated in FDIC roundtables and conferences. Of this group, 76 percent found training and education to be extremely or very useful, 15 percent found it to be moderately useful, 6 percent found it to be slightly useful, and 3 percent did not know. While FDIC and OTS emphasized technical assistance as a key component of their efforts to support minority banks, less than 30 percent of the institutions they regulate reported in our survey using such assistance within the last 3 years (fig. 9). Minority banks regulated by OCC and the Federal Reserve reported similarly low usage of the agencies’ technical assistance services. However, of the 41 banks that used technical assistance, the majority rated the assistance provided as extremely or very useful. Further, although small minority banks and African-American banks of all sizes have consistently faced financial challenges and may benefit from certain types of assistance, these banks also reported low rates of usage of the agencies’ technical assistance. In addition, both regulators and minority banks explained that minority banks often have difficulty attracting and retaining qualified staff, and given this fact, technical assistance could be particularly important in providing these banks with guidance tailored to their staff’s specific needs. Our survey did not address the reasons that relatively few minority banks appear to use the agencies’ technical assistance, and banking regulators cannot compel banks under their supervision to make use of offered assistance. Nevertheless, many such institutions may be missing opportunities to learn how to correct problems that limit their operational and financial performance. Over 80 percent of the minority banks we surveyed responded that their regulators did a very good or good job of administering examinations, and almost 90 percent felt that they had very good or good relationships with their regulator. However, as in our 1993 report, some minority bank officials said in both survey responses and interviews that examiners did not always understand the challenges the banks faced in providing services in their particular communities.
Twenty-one percent of survey respondents mentioned this issue when asked for suggestions about how regulators could improve their efforts to support minority banks, and several minority banks we spoke with in interviews elaborated on this topic. The bank officials said that examiners tended to treat minority banks like any other bank when conducting examinations, and the officials thought such comparisons were not appropriate. For example, some bank officials whose institutions serve immigrant communities said that their customers tended to do business in cash and carried a significant amount of cash because banking services were not widely available or trusted in the customers’ home countries. Bank officials said that examiners sometimes commented negatively on the practice of customers doing business in cash or placed the bank under increased scrutiny with respect to the Bank Secrecy Act’s requirements for cash transactions. While the bank officials said that they did not expect preferential treatment in the examination process, several suggested that examiners undergo additional training so that they could better understand minority banks and the communities that these institutions serve. FDIC has conducted such training for its examiners. In 2004, FDIC invited the president of a minority bank to speak to about 500 FDIC examiners on the uniqueness of minority banks and the examination process. FDIC officials later reported that the examiners found the discussion helpful. According to a Federal Reserve official, the organization is developing guidance to better educate examination staff about the various types of minority institutions and minority communities. Also, according to an OCC official, OCC has an initiative under consideration to provide training for its examiners on minority bank issues. Many survey respondents also said that a provision in CRA that was designed to assist their institutions was not effectively achieving this goal. CRA requires bank regulators to encourage institutions to help meet credit needs in all areas of the communities they serve. The act includes a provision allowing regulators conducting a CRA examination to give consideration to banks that assist minority banks through capital investment, loan participations, and other ventures that help meet the credit needs of local communities. Despite this provision, only about 18 percent of survey respondents said that CRA had—to a very great or great extent—encouraged other institutions to invest in or form partnerships with their institutions, while more than half said that CRA encouraged such activities to some, little, or no extent (fig. 10). Some minority bank officials said that current interagency guidance on the provision granting consideration for investments in minority banks should be clarified to assure banks that they will receive CRA consideration for such investments. Some minority banks believe that CRA does not provide incentives for nonminority banks to make investments in minority banks that operate in other parts of the country. A minority bank official said that the CRA provision does not clearly state that a bank making an investment in a minority bank that is outside of its CRA assessment area will receive consideration for such investments in its CRA compliance examinations. However, officials from each of the four regulators said that they had interpreted the provision in CRA as allowing consideration for such out-of-area investments in minority banks.
OCC recently published guidance clarifying this issue, and FDIC officials said that the agencies would clarify the guidance provided to all CRA examiners across agencies on such investments. This report does not contain all results from the survey. The survey and a more complete tabulation of the results can be viewed at GAO-07-7SP. Federal banking regulators have adopted differing approaches to support minority banks but generally have not assessed their efforts using regular and comprehensive surveys of minority banks or outcome-oriented performance measures. FDIC, which along with OTS is required by FIRREA to help preserve and promote minority banks, has established the most comprehensive support efforts and has taken the lead on interagency initiatives. In this regard, FDIC appears to be serving a coordination and facilitation role for the banking agencies’ efforts. OTS has also taken several steps to support minority banks, while OCC and the Federal Reserve, which are not subject to Section 308 of FIRREA, have, on their own initiative, taken some steps to support such institutions. Further, officials from OCC and the Federal Reserve, which collectively supervise about one-third of minority banks, stated that they recognize the importance of minority banks and are planning additional efforts to support them. While these efforts may help ensure that more minority banks receive support, it is important that when managing both existing and new programs, regulators assess their effectiveness. While FDIC has recently sought to evaluate its efforts through conducting surveys, these surveys have not addressed all key activities (including the provision of technical assistance), and the agency has not established outcome-oriented performance measures. None of the other agencies regularly or comprehensively surveys minority banks regarding its support efforts or has developed outcome-oriented performance measures. Consequently, the regulators are not well positioned to identify issues of concern to minority banks or to take corrective actions to improve their support efforts. Our work identified potential limitations in the regulators’ support efforts and related activities that would likely be of significance to agency managers and potentially warrant follow-up analysis and the initiation of corrective actions as necessary. For example, only about half of minority banks regulated by FDIC and only about a quarter regulated by the other agencies view their regulator’s support efforts as very good or good. We also found that some issues identified in our 1993 report may still be potential limitations to the regulators’ efforts. First, although regulators emphasize the provision of technical assistance services to minority banks, less than 30 percent of such banks have recently used such services. Small banks and African-American banks, which have struggled financially over the years and potentially stand to benefit most from additional technical assistance, are no more likely than other minority banks to use such assistance. While there may be a variety of reasons that minority banks do not take advantage of the regulators’ technical assistance services and regulators cannot compel banks to use this assistance, without soliciting further feedback from these banks, the regulators cannot identify these reasons, determine whether more banks would benefit from such assistance, or obtain suggestions for improvement.
Second, both our 1993 report and our current analysis found that some minority banks believe that regulators have not ensured that examiners fully understand the challenges that such institutions often face in, for example, providing financial services in areas with high concentrations of poverty or to immigrant communities. Again, without further analysis and solicitation of feedback from banks, regulators cannot identify possible areas where they can provide additional assistance or take corrective action. By establishing outcome-oriented performance measures to determine the extent to which they are achieving program goals, regulators could then measure the progress of their efforts and any results. Using existing interagency forums for coordination to assess minority bank support efforts and related regulatory activities could help ensure that all minority banks have access to the same opportunities while minimizing burdens on the regulators themselves. We recommend that the Chairman of the FDIC, the Director of OTS, the Comptroller of the Currency, and the Chairman of the Federal Reserve regularly review the effectiveness of their minority bank support efforts and related regulatory activities and, as appropriate, assess the need to make changes necessary to better serve such institutions. In conducting such reviews, the regulators should consider conducting periodic surveys of such institutions to determine how they view regulators’ minority support efforts and related activities, and/or developing outcome-oriented performance measures to assess the progress of their efforts in relation to program goals. As part of these regular program assessments, the regulators may wish to focus on such areas as minority banks’ overall views on support efforts, the usage and effectiveness of technical assistance services (particularly technical assistance provided to small minority banks and African-American banks), and the level of training provided to agency examiners regarding minority banks and their operating environments. Regulators may also wish to utilize existing interagency coordination processes in implementing this recommendation to help ensure consistent efforts and minimize burdens on agency staff. We provided a draft of this report to FDIC, OTS, OCC, and the Federal Reserve for comment, and they provided written comments that are reprinted in appendixes IV–VII. In their responses, the agencies further elaborated on their efforts to support minority banks and described planned initiatives. Further, FDIC, OTS, and OCC agreed to implement our recommendation, while the Federal Reserve commented that it would consider implementing the recommendation. The agencies also provided technical comments, which we have incorporated as appropriate. We also requested comments from the Department of the Treasury on the section of the draft report relevant to its activities under Section 308 of FIRREA. Treasury provided us with technical comments, which we have incorporated as appropriate. We will provide copies of this report to the Chairman of the FDIC, the Director of OTS, the Comptroller of the Currency, the Chairman of the Federal Reserve, the Secretary of the Treasury, and other interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. The objectives of this report were to (1) review the profitability of minority banks over time, (2) identify the federal banking regulators’ efforts to support minority banks and determine whether the regulators were evaluating the effectiveness of these efforts, and (3) obtain the views of minority banks on the federal regulators’ minority banking support efforts and related regulatory issues. To review the profitability of minority banks, in addition to undertaking a literature review, we analyzed financial data provided by the Federal Deposit Insurance Corporation (FDIC) for year-end 2005, 2000, and 1995. Each bank is required to file consolidated Reports of Condition and Income (Call Report) data, and each thrift institution is required to file Thrift Financial Reports (Thrift Report) quarterly. We obtained Call and Thrift Report data from FDIC listing each minority bank’s financial characteristics (such as return on assets, net income, and loan loss provisions), along with summary statistics for peer groups. Peer groups were formed by FDIC based on standard asset sizes used in FDIC reports (less than $100 million, $100 million-$300 million, $300 million-$500 million, $500 million-$1 billion, $1 billion-$10 billion, greater than $10 billion). The peer groups include minority and nonminority institutions. Using these data, we classified the minority banks by asset size and minority status. To classify the banks by minority status, we used the regulators’ designations and confirmed these classifications with a bank’s survey response (if the bank responded to our survey). FDIC provided summary statistics for peer groups based on asset size. The peer groups included all banks of a given asset size, including minority banks. To simplify the analysis, we did not attempt to remove minority banks from the peer groups; because minority banks are so few, it is unlikely that their inclusion would change the composite statistics for any peer group. We analyzed the profitability characteristics of each group and compared the summary statistics to the comparable statistics generated by FDIC for relevant peer groups. Because information on minority banks was not available for both 2000 and 1995 from all federal banking regulators, for these periods we analyzed data only for those minority banks that were still operating as minority banks in 2005. On the basis of the regulators’ lists, we were aware that not all of the banks that were operating in 2005 were operating in previous years. In 2000, 181 of these banks were operating, and 152 were operating in 1995. Minority banks that failed or merged with other institutions between 1995 and 2005 are not included in the analysis for those years. In addition, we did not obtain data on the minority status of banks operating in 1995 and 2000 and were unable to confirm that all 2005 minority banks were operating as minority banks in 1995 and 2000, although the change of ownership rate for minority banks is low. We chose to use Call and Thrift Report data because they were designed to provide information on all federally insured banks’ financial condition and have been collected and reported by FDIC in a standardized format. We have tested the reliability of FDIC’s Call and Thrift Report databases during previous studies and found the data to be reliable.
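To make the peer-group construction concrete, the following is a minimal sketch of how banks can be assigned to the standard FDIC asset-size groups listed above. It assumes total assets are expressed in millions of dollars; the function and the sample banks are illustrative and are not drawn from the FDIC data we analyzed.

# Sketch of the asset-size peer-group classification described above.
# Asset figures are assumed to be in millions of dollars; the group labels
# mirror the standard FDIC size groups listed in the text.

def asset_size_group(total_assets_millions):
    """Map a bank's total assets (in $ millions) to an FDIC-style size group."""
    if total_assets_millions < 100:
        return "less than $100 million"
    if total_assets_millions < 300:
        return "$100 million-$300 million"
    if total_assets_millions < 500:
        return "$300 million-$500 million"
    if total_assets_millions < 1000:
        return "$500 million-$1 billion"
    if total_assets_millions < 10000:
        return "$1 billion-$10 billion"
    return "greater than $10 billion"

# Hypothetical banks: (name, total assets in $ millions).
for name, assets in [("Bank A", 85), ("Bank B", 240), ("Bank C", 12500)]:
    print(name, "->", asset_size_group(assets))

Each minority bank's summary statistics can then be compared with the composite statistics FDIC generated for the peer group that shares the same size label.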
As with any self-reported financial information, however, the data are subject to change for a variety of reasons. We corroborated our analysis of the Call and Thrift Report data with other studies, which also found that minority banks lag in profitability and have high operating expenses. To address the second objective, we interviewed officials at the federal banking agencies and the Department of the Treasury and reviewed regulators’ documentation addressing their efforts to support minority banks and to assess the effectiveness of these efforts. We also reviewed publicly available documentation maintained by the regulators, such as policy statements, lists of minority banks, Web sites, and public statements. We reviewed the regulators’ minority banking support efforts across the different banking agencies and compared any program assessment efforts with our standards for program assessment and performance measures, and those established in the Government Performance and Results Act. We also interviewed officials from 19 minority banks throughout the United States, selected on the basis of type of minority ownership and primary regulator, as well as relevant trade associations, to discuss the business environment in which the banks operate, the regulators’ minority banking efforts, any assessment efforts undertaken by the regulators, and the banks’ knowledge of and experience with their regulators’ minority banking efforts. To obtain the views of minority banks on the federal regulators’ minority banking support efforts and related regulatory issues, we surveyed banks that were designated as minority institutions. We created a list of the population of minority banks by asking FDIC, Office of the Comptroller of the Currency (OCC), Federal Reserve, and Office of Thrift Supervision (OTS) for the names of all such institutions. The objective was to survey all minority banks that were officially recognized by regulators as such. Of the 204 institutions in our original population, 14 were women-owned institutions. We identified the total minority bank population by reviewing and compiling one list of these banks from FDIC and the Federal Reserve’s lists as of September 30, 2005; OCC’s list from December 31, 2005; the most recent list from the Department of the Treasury (December 2004); and OTS’s list as of January 2006. All institutions we originally identified as minority banks were asked to complete a Web-based questionnaire in March of 2006. We determined that 9 of the original 204 minority banks we identified were ineligible, either because their ownership was no longer minority or included only an insignificant minority interest, or because they had merged with other banks. Our final survey population therefore consisted of 195 institutions. When the survey closed in late April, 149 of the 195 banks ultimately determined to be eligible minority banks had provided usable responses, for a response rate of 76 percent. While developing our Web-based questionnaire, we asked all four banking regulators and minority banking associations to review a draft of the instrument and to offer comments. We also conducted four pretests of the draft questionnaire, each one using the software environment that actual respondents would experience. During the pretests, we observed respondents filling out the questionnaire and asked follow-up questions to clarify the respondents’ understanding of the questions. On the basis of these results, we made modifications as appropriate before finalizing the questionnaire.
The questionnaire also underwent a peer review by an independent survey specialist in our organization. The survey, which was implemented as an automated questionnaire on a secure Web site, was accessible only to specifically contacted bank officials and could be completed using a typical Web browser. However, the questionnaire, which contained 51 questions, was also reproduced as an electronic word-processing document that could be administered via e-mail, mail, or fax for those respondents who preferred those modes or who could not access the Internet. We began the survey in late February of 2006 by precontacting banks by telephone to verify their status and to obtain the names, titles, and e-mail addresses of the president or chief executive officer of each institution, who was designated as the respondent or was responsible for delegating the survey to another official. Prenotification e-mails were sent in early March to verify that the e-mail addresses were valid. The survey was opened and respondents were given user names and passwords to their institution's questionnaires on March 14. In late March and early April 2006, we sent two reminder e-mails to banks that had not yet responded and began to call nonrespondents after that. We also made appeals encouraging responses through the National Bankers Association's (NBA) e-mailings and events. In addition, we made available a paper copy of the questionnaire that respondents could receive and return via mail or fax. In a final set of telephone follow-ups, we gave reluctant respondents the opportunity to answer a reduced set of key questions to encourage participation. A final reminder e-mail was sent in late April, and the survey was closed on April 28. Not all surveyed members of the population returned questionnaires or answered every question. Two institutions explicitly refused to participate, and we were not able to obtain answers from the other 44 nonrespondents by the close of this review. This resulted in a response rate of 76 percent, calculated as the number of usable questionnaires returned divided by the final eligible population. The response rate to any one particular question varied, however, because some survey participants declined to provide answers to individual questions and because the 4 institutions agreeing to respond only to the final telephone follow-up attempt were asked only a limited number of key questions. Results from this type of survey are subject to several types of errors: failure to include all eligible members in the listing of the population, measurement errors when administering the questions, nonresponse error from failing to collect information on some or all questions from part of the surveyed population, and data-processing error. To limit the error from failing to list members of the population, we compared the regulators' lists of minority banks and discussed any discrepancies with each regulator. In accordance with our request, we included any bank considered by at least one regulator to be eligible to participate in its efforts. In some cases we surveyed minority banks that were not considered by their primary regulator to be minority institutions but were considered to have minority status or be eligible for participation in another regulator's efforts. We compared the survey results for questions reported in the text of the report with and without such banks to ascertain whether the results would have been significantly different had those banks been excluded.
We found no significant differences between the results when banks not considered minority institutions by their primary regulator were included and the results when such banks were excluded. Generally, removing the responses from such banks would have changed the results of key questions by 1 or 2 percentage points. In a few cases, the inclusion of banks not viewed by their regulators as minority institutions changed the survey results by 4 or 5 percentage points in a manner more favorable to the regulator. However, the inclusion of such banks did not have a material effect on the overall results. To limit measurement error, we obtained comments from experts, tested the questionnaire with bank officials, and attempted to improve the questionnaire before finalizing it. Although we chose to send our survey to all members of the population rather than to a sample, and thus the survey results are not technically subject to sampling error, bias from nonresponse may result because only 76 percent of the population provided usable responses. If the responses of those who did not respond would have differed from the responses of those who did on some survey questions, the estimates made solely from those who did respond would be biased from excluding parts of the population with different characteristics or views. To limit this kind of error, we made multiple attempts to gain the participation of as many banks as possible. To assess the likelihood of significant bias, we compared characteristics of nonrespondents and respondents, such as asset size, regulator, and minority type, which may be related to the substance of answers to our survey questions. We did not detect a significant difference between those who chose to respond and those who did not based on these characteristics. To further assess the potential extent of nonresponse bias, we compared the response rates of the subgroups defined by those characteristics in our population and determined that response rates did not differ markedly among the categories of these subgroups, suggesting that banks of certain types were not materially more likely to participate than others. Finally, we compared the answers of those who responded in the earlier part of the fieldwork period with the answers of those who responded only after repeated follow-up attempts, since it is possible that the latter group resembles nonrespondents. No significant difference in the answers between the groups was detected, which may suggest that actual nonrespondents would not have answered in a substantially different way from those who did. While the possibility exists that the true results for the entire population might be different from those we estimated in our report, we believe, on the basis of our analysis, that nonresponse bias is unlikely. To limit data-processing error, a second data analyst independently verified the analysis programming. In addition, the coding process of converting narrative answers into quantitative, categorical data was independently assessed to be reliable, and diagnostic checks were performed on the survey data to the extent possible. For example, one of our checks identified inconsistencies for four questionnaires that indicated a primary supervisor that did not match regulator records, allowing us to correct those responses. We did not otherwise verify the substance of respondents' answers to our questions.
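As a rough illustration of the kinds of checks described above, the Python sketch below computes the overall response rate and compares response rates across subgroups; the file name and column names are hypothetical, and the chi-square test shown is simply one conventional way to test whether responding is associated with a bank characteristic, not necessarily the procedure we used.

    import pandas as pd
    from scipy.stats import chi2_contingency

    # Hypothetical frame: one record per eligible bank in the survey population,
    # with a 0/1 flag indicating whether a usable questionnaire was returned.
    population = pd.read_csv("survey_population.csv")

    # Overall response rate: usable questionnaires divided by the eligible population
    # (149 of 195 banks, or about 76 percent, in the survey described above).
    print(f"Overall response rate: {population['responded'].mean():.0%}")

    # Response rates by subgroup, plus a chi-square test of whether responding is
    # independent of the characteristic; a large p-value is consistent with the
    # absence of a marked difference between respondents and nonrespondents.
    for characteristic in ["primary_regulator", "minority_type", "asset_size_group"]:
        print(population.groupby(characteristic)["responded"].mean())
        table = pd.crosstab(population[characteristic], population["responded"])
        chi2, p_value, dof, expected = chi2_contingency(table)
        print(f"{characteristic}: chi-square = {chi2:.2f}, p = {p_value:.3f}")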
We conducted our work in Washington, D.C., and New York from December 2005 to September 2006 in accordance with generally accepted government auditing standards. Banking regulators use different criteria for determining the types of institutions that can participate in their respective minority bank efforts, and all regulators maintain lists of minority banks based on these different criteria (fig. 11). Some regulators base their definition on Section 308 of the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA), and others base their definition on the criteria in a 1969 executive order that established the Department of the Treasury's Minority Bank Deposit Program (MBDP). The MBDP is a voluntary program that encourages federal agencies, state and local governments, and the private sector to use MBDP participants as depositaries and financial agents. Participants are certified by Treasury's Financial Management Service and included on an annual program roster. FDIC is subject to the "minority depository institution" definition set forth in Section 308 of FIRREA but has interpreted ownership by "socially and economically disadvantaged individuals" as requiring ownership by minorities as defined in Section 308. FDIC does not include women-owned banks in its minority bank definition. For stock institutions, FDIC determines minority ownership based on the proportion of the outstanding voting stock owned by minorities. In addition, FDIC has made its program available to public or privately held stock institutions and mutuals whose boards of directors and communities served are predominantly minority, without regard to the minority status of the institution's ownership or its account holders. OTS is also subject to the "minority depository institution" definition set forth in Section 308 of FIRREA. Like FDIC, OTS has interpreted ownership by "socially and economically disadvantaged individuals" as requiring ownership by minorities as defined in Section 308. OTS also determines minority ownership of stock institutions based on the proportion of the outstanding voting stock owned by minorities. OTS has also expanded the availability of its program to some constituencies that are not eligible for FDIC's program. For example, mutual institutions that have women CEOs and have a majority of women on their boards of directors are eligible to participate in OTS's minority bank efforts. In addition, public stock institutions and mutuals (but not private stock institutions) whose boards of directors, communities served, and account holders are predominantly minority may participate in OTS's efforts regardless of the minority status of the institution's ownership. Treasury's criteria—on which OCC and the Federal Reserve base their criteria—differ from those of Section 308 of FIRREA, FDIC, and OTS in several ways. First, the MBDP is available to both minority- and women-owned banks, stock savings and loans, and mutual savings and loans. In order to be included on Treasury's MBDP roster as a minority-owned bank or stock savings and loan, more than 50 percent of an institution's outstanding stock must be either owned or controlled for voting purposes by individuals of minority groups. A mutual savings and loan may qualify as minority-owned if a majority of the institution's board of directors are members of minority groups.
To qualify as a women-owned bank or stock savings and loan, more than 50 percent of the institution's outstanding stock must be owned by women and a significant percentage of senior management positions must be held by women. A women-owned mutual savings and loan is eligible for the MBDP if a majority of its board of directors are women and a significant percentage of senior management positions are held by women. OCC's definition is consistent with that established by Treasury's MBDP criteria. OCC is not covered by Section 308 of FIRREA. The Federal Reserve also bases its definition on Treasury's MBDP criteria. However, the Federal Reserve's Division of Supervision also compiles an internal list of minority banks that is based on the criteria in Section 308 of FIRREA. We identified several discrepancies in the regulators' lists of minority banks; these banks were all listed as minority banks by one regulator but not by another. When we spoke to officials from each of the agencies, they told us that these discrepancies were due to differences in criteria for minority banks. For example, five of these discrepancies were the result of FDIC's exclusion of women-owned banks—women-owned banks cannot participate in FDIC's programs, but they can participate in the MBDP. Another discrepancy resulted from a bank's primary regulator excluding a certain ethnicity (not named in FIRREA), while another regulator included it. This appendix provides, by response category, the number of minority banks that responded to each survey question discussed in the body of the report. In addition to the contact named above, Wesley M. Phillips, Assistant Director; Allison Abrams; Anna Bonelli; Stefanie Bzdusek; Emily Chalmers; Catherine Hurley; Marc Molino; Carl Ramirez; and Omyra Ramsingh made significant contributions to this report.
Minority banks can play an important role in serving the financial needs of historically underserved communities and growing populations of minorities. For this reason, the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA) established goals that the Federal Deposit Insurance Corporation (FDIC) and the Office of Thrift Supervision (OTS) must work toward to preserve and promote such institutions (support efforts). To evaluate their efforts, as well as those of the Office of the Comptroller of the Currency (OCC) and the Federal Reserve, GAO (1) reviewed the profitability of minority banks, (2) identified the regulators' support and assessment efforts, and (3) obtained the views of minority banks on the regulators' efforts. GAO reviewed financial data from FDIC, interviewed regulators, and surveyed all minority banks. The profitability of most large minority banks (assets greater than $100 million) was nearly equal to that of their peers (similarly sized banks) in 2005 and earlier years. However, many small minority banks and African-American banks of all sizes were less profitable than their peers. GAO's analysis and other studies identified some possible explanations for these differences, including relatively higher loan loss reserves and operating expenses and competition from larger banks. Bank regulators have adopted differing approaches to supporting minority banks, but no agency has regularly and comprehensively assessed the effectiveness of its efforts. FDIC--which supervises over half of all minority banks--has the most comprehensive support efforts and leads interagency efforts. OTS focuses on providing technical assistance to minority banks. While not required to do so by Section 308 of FIRREA, OCC and the Federal Reserve have taken some steps to support minority banks and are planning others. Although FDIC has recently sought to assess the effectiveness of its support efforts through various methods, none of the regulators comprehensively surveys minority banks to obtain their views or has developed outcome-oriented performance measures. Consequently, the regulators are not well positioned to assess their support efforts. GAO's survey of minority banks identified potential limitations in the regulators' support efforts that would likely be of significance to agency managers and warrant follow-up analysis. Only about one-third of survey respondents rated their regulators' efforts for minority banks as very good or good, while 26 percent rated the efforts as fair, 13 percent as poor or very poor, and 25 percent responded "don't know." Banks regulated by FDIC were more positive about their agency's efforts than banks regulated by other agencies. However, only about half of the FDIC-regulated banks and about a quarter of the banks regulated by other agencies rated their agency's efforts as very good or good. Although regulators may emphasize the provision of technical assistance to minority banks, less than 30 percent of such institutions have used such agency services within the last 3 years and therefore may be missing opportunities to address problems that limit their operations or financial performance.
Transparency tools are a way to make information on health care cost and quality transparent to consumers and others, and are a key part of HHS’s strategy to improve the quality and affordability of health care. There are multiple ways to assess the cost of health care. For example, cost can be measured based on the amount of money providers set as a “charge” for various individual services, but these charges typically do not represent the actual amounts paid by insurers or consumers. The cost that an insured consumer is responsible for paying to receive services is called an out-of-pocket cost, which depends on the consumer’s individual provider choices and insurance benefit design. In addition, any given episode of care usually involves payments to multiple providers (e.g., surgeons, anesthesiologists, pathologists, etc.), facility fees, and other ancillary fees, and any given cost figure may or may not represent the total costs for an episode of care by including all of these expenses. For a description of the quality measure categories, see Agency for Healthcare Research and Quality, National Quality Measures Clearinghouse, http://www.qualitymeasures.ahrq.gov/tutorial/varieties.aspx, accessed August 5, 2014. The National Quality Measures Clearinghouse also includes other types of clinical quality measures, such as access measures, which measure a patient’s ability to attain timely and appropriate care. In March 2011, HHS first published its National Strategy for Quality Improvement in Health Care (the National Quality Strategy), as required by the Patient Protection and Affordable Care Act (PPACA). The National Quality Strategy builds on priorities HHS previously identified in its strategic plan for fiscal years 2010-2015, which emphasize the need for transparent information to give consumers the means to make more informed choices about their health care. Two of the National Quality Strategy’s overarching goals—better care and affordable care—relate to health care cost and quality transparency. According to the strategy, to achieve better care, patients must be given access to understandable information and decision support tools that help them manage their health and navigate the health care delivery system. To achieve affordable care, systems must be created to make health care cost and quality more transparent to consumers and providers, so they can make better choices and decisions. The strategy also focuses on coordinating and aligning efforts across the public and private sectors, for example by establishing an aligned set of common cost and quality measures by which to assess how well providers and programs support effective care. As part of its efforts to foster greater transparency of information, HHS developed transparency tools for consumers focused on quality, some of which predate PPACA and the development of the National Quality Strategy. Between 1998 and 2010, HHS—through CMS—launched five Compare websites to publicly report certain information, including quality information, based on data submitted by different types of providers participating in the Medicare program (see table 2). For some, but not all, of these provider types, HHS is required to publicly report certain information about provider performance. HHS—primarily through AHRQ—also plays a role in research and dissemination of information on consumer-focused public reporting. 
AHRQ has supported the research and publication of numerous papers that lay out best practices for public reporting of cost and quality information to consumers and has widely shared the expertise contained in that research through various forums, such as conferences and webinars that support wider transparency efforts. A variety of private sector entities have developed transparency tools that provide cost and quality information to consumers, including health insurance carriers, third-party vendors, and regional collaboratives. These private-sector tools allow consumers to obtain personalized cost estimates, compare different providers, or estimate their out-of-pocket costs before receiving a service. For example, many health plans offer tools that provide cost estimates to enrolled members. Researchers calculated that approximately 70 percent of the privately insured population had access to cost transparency tools in 2013. In addition to tools provided directly by health plans, employers may contract with third-party vendors to provide transparency tools for their employees. Health plans and third-party vendors frequently offer some quality information along with cost estimates to their enrolled members. Often this information is derived from data reported by CMS on its Compare sites and may be combined with additional quality information obtained from a variety of sources that collect and report data on provider quality. Several states also have developed transparency tools. Unlike the tools offered by health plans and third-party vendors, state tools can be accessed by the general public and are not restricted to members who pay or enroll for services, giving the uninsured access to cost and quality information. Like the private-sector tools, state tools generally draw on quality information from CMS in combination with information from other sources. The two selected consumer transparency tools we reviewed show that some providers are paid thousands of dollars more than others for the same service in the same geographic area, regardless of the quality of such services. Specifically, we found this variation in cost to be present across multiple services, settings, and geographic areas in the information provided by the transparency tools of two different entities (a health insurer and Castlight) that we reviewed. For example, the health insurer reported that in the Denver area, the estimated total cost of a laparoscopic gallbladder surgery in selected ambulatory surgical centers (ASC) ranged between $3,281 and $18,770 (consumers would pay between $3,281 and $6,954 in estimated out-of-pocket costs) in July 2014. Meanwhile, for the same time period, the health insurer reported that the estimated total cost for the same service in selected hospital outpatient departments in the Denver area ranged from $17,791 to $40,626 (consumers would pay between $6,758 and $11,325 in estimated out-of-pocket costs). Similarly, Castlight reported, in May 2014, that for an MRI of the lower back at selected acute care hospitals in the Indianapolis, Indiana, area, the estimated total cost ranged between $277 and $5,184 depending on the provider (with consumers paying between $277 and $2,637 in estimated out-of-pocket costs). (See app. I for more examples of the variation in information we obtained from selected consumer transparency tools.) Information from the two selected transparency tools indicates that the observed cost variation is not tied to variations in quality, regardless of the treatment and geographic area.
For example, according to information obtained from Castlight, the estimated total cost of maternity care at selected acute care hospitals in the Boston area that rated more highly on several quality indicators ranged between $6,834 and $21,554 (consumers would pay between $2,967 and $5,000 in estimated out-of-pocket costs). (The quality indicators provided by Castlight for maternity care include the percent of pregnancies for which the baby is delivered at the appropriate developmental age, using an appropriate delivery method, and recommended processes of care.) Similarly, information on laparoscopic gallbladder surgery showed that a number of hospital outpatient departments in the Denver area that rated more highly on several quality indicators had lower total costs than other hospital outpatient departments that rated less highly on quality. These examples are particularly relevant because researchers have found that many consumers assume that all providers offer good quality care, while others have the misconception that higher-cost providers will provide higher quality of care than lower-cost providers. Consumers may also have difficulty obtaining such cost and quality information directly from providers without the assistance of consumer transparency tools. Of the hospitals we contacted to inquire about the cost of an inguinal hernia repair and diagnostic colonoscopy for an uninsured patient in the Portland, Oregon, and Minneapolis, Minnesota, health care markets—locations selected because they have initiatives to promote transparency—we received limited cost information from 54 percent (13) and quality information from 29 percent (7) (see table 3). Transparency tools are most effective if they both provide information relevant to consumers and convey that information in a way that consumers can readily understand. The information on cost and quality that consumers find relevant makes up just a portion of the information available for inclusion in transparency tools. For example, research shows that most consumers do not find useful the information derived from many specific process-of-care measures that clinicians have developed and used to guide quality improvement efforts. Rather, consumers are most likely to respond to information that applies to their personal circumstances, including, for example, information on the specific procedures consumers are considering, on providers who would be available to perform those procedures, and on cost estimates that take account of their particular insurance coverage. Consumers also seek information that helps them to make meaningful distinctions among providers in terms of cost and quality. The research we reviewed has found that consumers value quality measures that show differences in clinical outcomes and patient experiences, and cost measures that show differences in out-of-pocket expenses. In addition, consumers want to know the source of cost and quality information because this source information helps them determine their level of confidence in that information. Eight of the 15 characteristics of effective transparency tools we identified address the extent to which a tool provides substantive quality and cost information of relevance to consumers. Specifically, the research we reviewed shows that more effective transparency tools: 1. Review a broad range of services so that more consumers' particular needs are included. The more services that are covered by a transparency tool (or set of tools), the more likely it is that the tool will have information relevant to the particular services of interest to any given consumer.
It is especially important to include services that are predictable and non-urgent in a transparency tool, because these services are most likely to afford consumers the opportunity to evaluate cost and quality information before receiving the service. 2. Cover a broad range of providers. Transparency tools that provide information for all or most of the available providers in a given geographic area, regardless of network status or practice setting, give consumers more information about their full range of options. For example, for procedures that can be conducted in either a hospital outpatient department or ASC, it helps consumers when a tool provides comparable information for both settings, so that consumers can choose from a larger number of providers that offer those procedures. 3. Describe key differences in clinical quality of care, particularly patient-reported outcomes. Assessments of the clinical quality of care that have been shown to have particular relevance to consumers are those that relate to long-term outcomes of the care experienced by other patients. Often this is best addressed by patient-reported outcomes, which tell consumers the eventual outcome of treatments, as reported by previous patients of a particular provider. For example, patients receiving hip replacements can be asked, through such patient-reported outcomes, to rate their ability to climb stairs both before and after their procedures, which enables assessments of the procedures' effects on patients' mobility. 4. Describe key differences in patient experiences with providers. Another outcome that matters is patients' assessment of their interactions with providers. Effective transparency tools include information on how past patients evaluated providers in terms of dimensions such as how well nurses communicate with patients, or the responsiveness of clinicians to patients' needs. 5. Describe key differences in costs, particularly patient out-of-pocket costs. The cost information of greatest relevance to consumers is what they will have to pay for a given service before they receive it. For example, a tool that provides information on the average cost of a selected procedure in a given geographic location may be less relevant to consumers than one that takes into account the specific costs for that procedure and location given that consumer's specific health insurance coverage. 6. Describe other information related to quality, where appropriate. There may be other quality indicators that could have major significance to consumers for certain types of services. For example, facility inspection results and staffing levels are of particular relevance to nursing home care. 7. Provide timely information. More recent data are intrinsically more relevant than data that are several years old. Because consumer transparency tools necessarily rely on past data to assess likely cost and quality performance in the future, some lag in collecting, analyzing, and providing data is inevitable. Data that are no more than two years old are generally considered timely. 8. Describe key strengths and limitations of the data.
Although the research we reviewed shows that few consumers are inclined to delve into the many methodological issues that concern appropriate techniques for collecting, checking, and analyzing cost and quality data, transparency tools can provide both summary assessments of strengths and limitations for most consumers and links to more complete explanations for those wanting to pursue these issues in greater detail. Such information, along with identification of the organization responsible for the tool, provides consumers a basis to judge the credibility of the cost and quality information provided. Regardless of its potential relevance to them, consumers can respond to information about cost and quality only if they understand it. Although cost and quality information can be inherently difficult to understand, research suggests ways that transparency tools can make it easier for consumers to do so. One factor that increases the difficulty of understanding such information is the prevalence of misleading preconceptions about the cost and quality of care. For example, researchers find that many consumers incorrectly assume that all hospitals and physicians provide good quality care, while others assume that higher-cost providers will provide higher quality of care than lower-cost providers. As a result, transparency tools that address these misleading preconceptions can better help consumers to understand the information they present. Other challenges identified by researchers that consumers face include absorbing and evaluating large amounts of information about multiple providers across different measures of cost and quality, assessing a provider who does relatively well on some measures of quality and less well on others, and interpreting complex numerical information. Therefore, it is important that transparency tools limit the amount of information that consumers need to pay attention to and make it easy for them to discern overall patterns. One experiment demonstrated that improving the organization and presentation of quality information in transparency tools led to an increased proportion of consumers who could identify the best-performing providers for a given dimension of quality. Specifically, with the improvements, the proportion of consumers who could identify the best-performing providers increased from 18 percent to 76 percent. Seven of the 15 characteristics of effective consumer transparency tools we identified focus on the extent to which a tool presents its information in a way that enables the consumer to grasp and interpret it. Specifically, the research we reviewed shows that more effective transparency tools: 1. Use plain language with clear graphics. Effective consumer transparency tools use labels and descriptions that make sense to consumers who typically are unfamiliar with clinical terminology and who often have difficulty interpreting numerical information. Graphics, including symbols, can help to readily convey information on relative provider performance, especially when they are designed to display a summary assessment of that performance as part of the symbol itself, for example, one that incorporates the words "superior" or "poor." 2. Explain purpose and value of quality performance ratings to consumers. Effective consumer transparency tools address prevalent misleading preconceptions by providing consumers coherent explanations of how different quality measures relate to the aspects of quality that consumers find relevant.
These explanations work best when they link individual measures to overarching categories indicating what is being achieved, such as effectiveness of care, safety, or patient-focused care. 3. Summarize related information and organize data to highlight patterns and facilitate consumer interpretation. Two techniques that consumer tools can use to help consumers make sense of large amounts of information are (a) combining information from multiple related measures into summary or composite scores, and (b) structuring presentation of the data in ways that make patterns evident. For example, listing providers in rank order on selected cost and quality dimensions greatly simplifies identification of high and low performers. 4. Enable consumers to customize information selected for presentation to focus on what is most relevant to them. Consumers differ in the priority they assign to different aspects of quality. Tools that enable consumers to customize which quality information is presented help consumers filter out information of lesser consequence to them and home in on the information that they find most compelling. For example, one consumer may choose to focus on providers' capacity to communicate well with patients, while another may focus on providers' rates of complications and infections. 5. Enable consumers to compare quality performance of multiple providers in one view. Transparency tools are most effective when they present side-by-side assessments of providers' performance on a given aspect of cost or quality, so consumers can most easily compare providers. 6. Enable consumers to assess cost and quality information together. Consumers cannot make judgments about the value of the care offered by providers unless they can consider both cost and quality in relation to each other. For example, transparency tools can enable consumers to rank order available providers first on selected measures of quality, and then, within the high-quality group, show those with lower costs. 7. Enable easy use and navigation of the tool. Unless consumers can quickly find information of interest to them, they are likely to dismiss the potential utility of a consumer transparency tool and move on. Extensive testing with consumers can help public and private entities providing transparency tools to develop intuitive, user-friendly approaches to website navigation and for manipulating how the data are presented. The CMS transparency tools we evaluated are limited in the relevance and understandability of cost and quality information they provide to consumers. Based on the characteristics we identified concerning information provided to consumers by effective transparency tools, we found that CMS's tools demonstrate a number of the characteristics of effective transparency tools, such as timeliness of data; however, the tools lack relevant information on cost and key differences in quality of care. These limitations hinder the tools' relevance and usefulness for consumers, particularly consumers' ability to make meaningful distinctions among providers. (See app. II for our assessment of how CMS tools fare on all 15 characteristics of effective tools identified through our literature review and interviews with experts.) With respect to cost information, none of the tools contain information on the specific costs that patients would incur under Medicare—such as the out-of-pocket costs to a consumer for a full episode of care.
Therefore, they do not allow consumers to combine cost and quality information to assess the value of health care services, or anticipate and plan for expenses related to non-emergency procedures. For example, a consumer may wish to compare the costs of similar high-quality providers. Depending on the service a consumer is planning to receive, a consumer may be able to find some indications of provider quality in CMS transparency tools, but would not find relevant information on cost. One CMS official stated that CMS does not provide expected patient out-of-pocket costs because the agency does not have information on what beneficiaries would pay when they have coverage other than, or in addition to, traditional fee-for-service Medicare, as many beneficiaries have. However, CMS has the information necessary to create estimates of what Medicare beneficiaries likely would pay for different treatments and procedures, based on payment levels the program has set for each provider and the cost-sharing provisions that apply under the traditional fee-for-service Medicare program. For some Medicare beneficiaries, this information would provide an estimate of their out-of-pocket costs. For those with supplemental insurance coverage, it would not provide full out-of-pocket costs, but could be useful as an indicator of the maximum amounts they would have had to pay without supplemental coverage. We found that all of the CMS transparency tools provide some clinical quality information relevant to consumers, but they often lack condition-specific information for the type of non-urgent procedures that consumers can most readily plan for in advance. For example, some information on hip and knee replacements is included in Hospital Compare, but limited information is available in Hospital Compare for many other common medical procedures such as a colonoscopy. In addition, with the exception of Hospital Compare, none of CMS's transparency tools currently provide information on patient-reported outcomes, which have been shown to be particularly relevant to consumers considering common elective medical procedures, including hip and knee replacements. CMS officials stated that the department plans to expand over time the cost and quality information reported in the CMS transparency tools. For example, CMS officials described their efforts to develop patient-reported outcome measures to include in Hospital Compare, beginning with hip and knee replacements. They also plan to expand the information in Hospital Compare on costs to the Medicare program to cover a number of specific conditions, such as heart attacks, pneumonia, and stroke. However, CMS officials stated that the department has no plans to add estimated consumer out-of-pocket costs related to specific medical procedures to any of its transparency tools. Based on the characteristics we identified regarding how effective transparency tools make cost and quality information understandable to consumers, we found limitations in how CMS transparency tools present information about comparative provider performance. Specifically, the tools have substantial limitations in their use of clear language and symbols, in summarizing and organizing information to highlight patterns for consumers, and in enabling consumers to customize how information is presented.
Although the CMS tools demonstrate other characteristics of effective transparency tools, such as explaining the purpose and value of the performance information reported in the tools, and providing the capacity to compare up to three providers side by side, the limitations we identified may hinder consumers' ability to understand and use the information provided. (See app. II for our assessment of how CMS tools compared to the 15 characteristics of effective tools identified through our literature review and interviews with experts.) The tools generally do not use clear language and symbols. Although consumers can follow links to a separate page to obtain plain-language explanations of quality measures, the labels for the actual results of the tools often use fairly technical terms. For example, one quality measure for heart attack patients is labeled "heart attack patients given fibrinolytic medication within 30 minutes of arrival." In addition, many of the quality measures are reported in numerical form, most often as percentages. With the exception of the "five-star" ratings currently limited to Nursing Home Compare, none of the other CMS tools use symbols to help consumers interpret the meaning of the information provided. Additionally, the CMS tools generally do not summarize results for consumers—with the exception of Nursing Home Compare—or organize data to highlight patterns. The Nursing Home Compare tool's five-star rating system provides overall assessments of performance in three major categories—health inspections, quality measures, and staffing—each of which summarizes a set of individual measures in those categories that the tool also reports. CMS officials told us that the agency plans to expand this five-star rating system to its other tools by 2015. In addition, CMS generally has not structured the presentation of the information in its tools in a way that helps consumers detect patterns in provider performance. For example, consumers may choose up to three providers to compare, but must do so from lists in which the providers are sequenced in alphabetical order or by distance from a geographic location, not in rank order according to provider performance. As a result, consumers have to sort through the information themselves if they want to identify the top performing providers. In addition, the CMS tools provide consumers very limited ability to customize what information is presented, which may not be sufficient to allow consumers to focus on information most relevant to them. For example, the CMS tools typically allow consumers to filter providers for consideration based on their geographic location and whether the provider offers one or more of a particular set of services. Expanding the options available to consumers to customize the information presented on cost and quality according to their individual situations and priorities would allow them to reduce the amount of information that they must review to identify providers that meet their needs. (For the purposes of our review, customizing information refers to the ability of consumers to select, from the quality measures offered in the tool, those that are relevant to their personal situation, as distinct from organizing the tools' information in rank order according to provider performance.) CMS has taken limited steps in three key areas to expand information on cost and quality and increase transparency to consumers and others.
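Before turning to those steps, a minimal Python sketch can illustrate the kind of rank ordering, filtering, and customization discussed above: providers are filtered on the quality dimension a consumer selects and then listed with the highest-rated, lowest-cost options first. The provider records, star ratings, and cost figures are hypothetical illustrations, not data from the CMS Compare tools.

    from dataclasses import dataclass

    @dataclass
    class Provider:
        name: str
        overall_stars: int         # hypothetical 1-5 summary rating
        patient_experience: int    # hypothetical 1-5 rating
        estimated_oop_cost: float  # hypothetical out-of-pocket estimate, in dollars

    providers = [
        Provider("Hospital A", 5, 4, 3400.0),
        Provider("Hospital B", 3, 5, 2100.0),
        Provider("Hospital C", 5, 5, 2900.0),
    ]

    # Customization: the consumer chooses which quality dimension matters most to them.
    chosen_dimension = "patient_experience"

    # Keep only highly rated providers on that dimension, then list the lowest-cost
    # options first within each rating level, so the best-value choices appear at the top.
    ranked = sorted(
        (p for p in providers if getattr(p, chosen_dimension) >= 4),
        key=lambda p: (-getattr(p, chosen_dimension), p.estimated_oop_cost),
    )
    for p in ranked:
        print(p.name, getattr(p, chosen_dimension), p.estimated_oop_cost)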
Although both HHS, through its department-wide strategy, and CMS, through agency-specific plans, have clearly articulated the goal of enhancing cost and quality transparency, these plans lack the procedures and performance metrics needed to ensure that the particular needs of consumers will be met. CMS has taken steps to expand cost and quality information in three key areas, although we identified some limitations in each step based on the research we reviewed and experts we interviewed. Developing and selecting measures. CMS funds and directs, primarily through contracts, the development of new cost and quality measures that can then be reported in both CMS and state and private sector transparency tools. It also selects among available measures for reporting information in its own tools. Through these two activities CMS influences the extent to which relevant cost and quality information are made available to consumers. Although CMS's transparency tools are intended to provide consumers with the cost and quality information that they need to assess their health care options, the tools also serve a different role for providers. According to research conducted to support CMS public reporting efforts, CMS uses the tools to create an incentive for providers to improve the quality of care they deliver. By publicly reporting how providers compare to their peers, CMS intends providers to be able to identify shortcomings in their performance that can be improved. However, research we reviewed demonstrates that information used to motivate providers can be different from information needed by consumers. Consequently, researchers who have assessed CMS's tools have raised the question of whether CMS can ensure that tools intended to influence provider quality can also provide consumers with the cost and quality information they need. According to consumer transparency experts we interviewed and research we reviewed, CMS's processes for measure development and selection do not adequately address consumer needs. For example, according to a summary of CMS-sponsored consumer testing conducted between 2001 and 2013 for the development and support of the CMS Compare websites, consumer involvement generally comes late in the measure development process. Moreover, CMS's consumer testing has focused on assessing the ability of consumers to interpret measures developed for use by clinicians, rather than on developing or selecting measures that specifically address consumer needs. CMS officials report several ways that they have recently begun to incorporate consumer input into the agency's measure development and selection process, although it is not yet clear if these efforts will ensure that measures are focused on consumers. For example: In 2013, CMS updated its requirements for measure development contracts to address the needs of consumers. Specifically, contractors are required to include patient or caregiver participation on technical expert panels, which are a part of the measure development process. According to one CMS official, the requirements do not specifically dictate how contractors must involve these consumers but rather ask contractors to consider meaningful ways to include them. They also do not specify how contractors will be evaluated on their ability to meet consumer needs or specifically define expected results.
According to the CMS official, only two measure contracts have taken effect since the requirements were updated, so it will take time to determine the extent to which they may improve the development of cost and quality measures of particular relevance to consumers. CMS also has begun to solicit consumer input in the selection of measures included in its transparency tools. In 2011, HHS convened the Measure Applications Partnership (MAP), a consensus-based group that includes representatives of consumer as well as provider and other stakeholder organizations, to furnish CMS with input on the selection of measures to include in its transparency tools and payment programs. Although the inclusion of consumer organizations in the MAP gives consumers an opportunity to influence measure selection, both a CMS official and a MAP member said that the concerns of provider organizations, which command greater technical expertise on measurement methodology and therefore are better prepared to advocate for their own selection preferences, tend to take priority. Aligning cost and quality measurement. CMS, along with several other HHS agencies, collaborates with multiple consumer organizations and with state and private sector entities that also collect and report cost and quality data to work toward agreement on an aligned core set of cost and quality measures. According to HHS's National Quality Strategy, providers are often asked to submit quality and other information to multiple payers, including Medicare, and having a consistent, or aligned, set of measures can help facilitate the collection of this information. Transparency experts have indicated that such alignment could not only reduce the providers' reporting burden but could also lead to more consistent information on provider performance. However, according to a CMS official and an expert involved in the alignment efforts, the extent to which this promise is fulfilled remains to be seen, both in terms of the number and breadth of the measures for which an agreed alignment is reached, as well as how effectively the agreement ensures uniform application of the aligned measures. In addition, according to experts we interviewed, the extent to which this movement toward measure alignment benefits consumers will depend on the extent to which the agreed-upon set of measures is relevant to consumer concerns. Releases of claims data. In addition to the quality data published through the Compare sites, CMS has begun to publicly report information on provider costs based on Medicare fee-for-service claims data. Specifically, in 2013, CMS began releasing large data sets containing partially aggregated payment data on individual physicians and hospitals. According to experts we interviewed, the release of these data helps to promote the concept of transparency. However, these experts noted that the lack of patient-level information in the files prevents even rudimentary risk adjustment—adjustments based on differences in patients—which precludes using these data to supply cost and claims-data-based quality information for state and private sector transparency tools. CMS has released more detailed patient-level claims data to CMS-approved qualified entities (QE).
Established under PPACA, QEs are organizations that are given access to Medicare claims data to enable evaluation of the performance of providers and suppliers. According to CMS officials, by having access to more detailed patient-level data, QEs—selected based in part on their expertise in analyzing complex claims data—could help make available to consumers information on quality measures that use claims data, such as mortality and readmissions for specific conditions or procedures. HHS has begun to share these data with 13 organizations that have been approved to become QEs, according to CMS officials, but the organizations have only just begun to publicly share new cost or quality information based on these data. Therefore, it is too early to tell to what extent the QE program will expand the cost and quality information available to consumers. Although HHS has included transparency as part of its strategic plan, and set a specific goal to report on measures that meet consumer needs, it has not established specific procedures or performance metrics to ensure that the cost and quality information that it collects and publicly reports accomplishes this goal. A key part of HHS's strategic plan to ensure optimal health care is to give the public the means to make more informed health care choices by improving transparency and providing public access to HHS's data on provider performance, among other things. Furthermore, HHS has set a specific goal to report on measures that are important to consumers and other key audiences in ways that can be easily understood and readily acted upon. However, despite this goal, neither HHS, through its department-wide strategy, nor CMS, through its agency-specific strategic plans, has established specific procedures or performance metrics to clarify how they will ensure that the particular needs of consumers are met in the midst of competing demands from providers. In particular, they have not specified how the particular needs of consumers, as distinct from those of providers, should be determined, and how to assess progress in addressing those needs: In an analysis conducted for HHS of its public reporting efforts, researchers proposed that HHS develop and adopt processes and criteria to select measures for reporting to consumers as well as for reporting to more specialized audiences. HHS subsequently released a framework to guide its public reporting activities in December 2013. Although the framework acknowledged gaps in available measures relevant to consumers and made the task of addressing these gaps a priority, it did not outline a process for integrating consumer needs into developing or selecting measures, nor did it define criteria HHS planned to use in selecting measures for reporting to consumers. In November 2012, CMS also produced a draft strategic plan that outlines the agency's public reporting strategy. One of three goals contained in the plan is to better meet consumer and other audience needs, but the plan does not contain any procedures to address how consumer needs should be determined or metrics to assess whether identified needs are met. According to CMS officials, the draft plan remains under review and has not been finalized.
Similarly, CMS’s policies with respect to collaborating with other entities to align cost and quality measures and its programs to publicly report Medicare cost information, through its own public releases and via QEs, have not included procedures or performance metrics that address the particular needs of consumers. Establishing procedures to implement goals and performance metrics to monitor progress is important for accomplishing agency goals, according to our guidance to federal agencies on effectively implementing GPRAMA, which describes leading practices for how federal agencies should assess their performance. According to the guidance, procedures and metrics are particularly important when a government agency must respond to multiple priorities and competing demands, such as CMS faces in meeting the demands of both consumers and providers in the development and selection of cost and quality measures and in the reporting of information through its transparency tools. Without such specific procedures and performance metrics, CMS cannot ensure that its efforts to make cost and quality information publicly available will meet consumer needs or help them to make meaningful distinctions among providers. Health care costs vary widely without being related to quality. For some services, the differences in consumers’ out-of-pocket costs between providers can be thousands of dollars. However, as transparent information on both cost and quality is difficult for consumers to obtain— either through transparency tools or calling providers directly—consumers often do not realize they could obtain better value in their health care decisions, for example by utilizing high-quality, low-cost providers. With consumers paying greater shares of their health care costs, it is particularly important that they have access to relevant and understandable cost and quality information to make value-based decisions about services that can be planned in advance, as well as to better anticipate and plan for their expenses. Through HHS’s National Quality Strategy, which builds on HHS’s strategic plan, the department has made health care cost and quality transparency a priority for itself and its component agencies, including CMS. CMS’s transparency tools—five Compare websites—are one way the agency has provided cost and quality information to consumers. However, CMS addresses multiple strategic aims with these tools, including both informing consumers and incentivizing providers to improve the quality of their care. Partly as a result, the tools exhibit critical weaknesses when assessed in terms of the characteristics that would make them most effective for consumers. For example, the tools lack information on topics of considerable relevance to consumers, such as patient-reported outcome measures and patient out-of-pocket costs. Additionally, the tools do not organize cost and quality information in a way that may enable consumers to readily understand and compare provider performance, or customize how the information is presented to enable consumers to identify the best providers for aspects of care that they may find most relevant. While providing relevant and understandable information on cost and the ability to compare it with provider quality is inherently complex, consumers need this information to determine if different providers or settings provide a greater overall value for care they expect to receive and to determine what their health care expenses will be in advance. 
Although HHS has set goals to promote transparency for consumers, and, largely through CMS, has taken steps to expand available cost and quality information, CMS has not established procedures or performance metrics to ensure that this information is relevant and understandable to consumers. Furthermore, each of the steps CMS has taken has limitations that may prevent consumer needs from being met. In particular, CMS's process for developing and selecting cost and quality measures for its tools has been heavily influenced by the concerns of providers rather than consumers, which helps to account for the relative lack of cost and quality information in CMS's tools that consumers would find relevant. Especially in such situations where agencies are responding to multiple priorities, leading practices for strategic planning call for establishing specific procedures and performance metrics to ensure that each of those priorities receives due attention. Until CMS establishes such procedures and performance metrics with respect to implementing its goals for promoting health care transparency, the agency is likely to continue to have limited effectiveness in conveying relevant and understandable information on cost and quality to consumers. To improve consumers' access to relevant and understandable information on the cost and quality of health care services, we recommend that the Secretary of HHS direct the Administrator of CMS to take four actions: 1. Include in the CMS Compare websites, to the extent feasible, estimated out-of-pocket costs for Medicare beneficiaries for common treatments that can be planned in advance; 2. Organize cost and quality information in the CMS Compare websites to facilitate consumer identification of the highest-performing providers, such as by listing providers in order based on their performance; 3. Include in the CMS Compare websites the capability for consumers to customize the information presented, to better focus on information relevant to them; and 4. Develop specific procedures and performance metrics to ensure that CMS's efforts to promote the development and use of its own and others' transparency tools adequately address the needs of consumers. We provided a draft of this report to HHS for review, and HHS provided written comments, which are reprinted in appendix III. In its comments, HHS concurred with each of our recommendations and noted a number of activities under way to improve the transparency of cost and quality information for consumers. HHS stated its intention to ensure that its transparency tools, the Compare websites, fully address consumer priorities. For example, HHS stated that it was committed, to the extent feasible, to providing estimated out-of-pocket costs to Medicare beneficiaries for common procedures that can be planned in advance. However, HHS noted that, as mentioned in our draft report, there are challenges to collecting all of the relevant cost information. Similarly, HHS agreed to organize the cost and quality information provided in its Compare websites to facilitate consumer identification of the highest performing providers, such as by listing providers in order based on ratings of the quality of their performance. HHS noted its plans to expand the use of star ratings, including the ability to sort and filter provider listings based on those ratings. HHS also agreed with our recommendation to enable consumers to customize the information presented to them on the Compare websites.
Finally, HHS agreed to develop specific procedures and performance metrics to ensure that its transparency tools adequately address the needs of consumers. Although HHS stated that it had already developed many internal procedures and performance metrics, it did not identify or describe them in its comments, nor did we find any such procedures or performance metrics in the planning documents provided to us by HHS during the course of our work. HHS also provided us with technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of the Department of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix IV. We obtained examples of cost and quality information that were available to certain consumers in specific circumstances from selected transparency tools and by contacting providers, acting in the role of a consumer, to ask for cost and quality information. The examples we obtained from two selected transparency tools (a health insurer and Castlight) and from contacting selected providers in two health care markets by telephone illustrate the variation in cost and quality information that is available to consumers in specific situations, such as those who are members of a particular health plan or uninsured. Specifically, the examples showed variation in cost, but often did not include enough information on quality for it to be considered alongside cost (see tables 4-7 below). We assessed the extent to which the Centers for Medicare & Medicaid Services' transparency tools include the 15 characteristics of effective tools we identified through our literature review and interviews with experts. (See table 8.) In addition to the contact named above, Will Simerl, Assistant Director; N. Rotimi Adebonojo; Jennie Apter; Rebecca Hendrickson; Monica Perez-Nelson; Eric Peterson; and Elise Pressma made key contributions to this report.
The cost and quality of health care services can vary significantly, with high cost not necessarily indicating high quality. As consumers pay for a growing proportion of their care, they have an increased need for cost and quality information before they receive care, so they can plan and make informed decisions. Transparency tools can provide such information to consumers and others. GAO was asked to study cost and quality information for consumers. This report examines (1) information on cost and quality available to consumers from selected transparency tools, (2) characteristics of effective transparency tools, (3) limitations, if any, in the effectiveness of CMS transparency tools, and (4) CMS efforts to expand cost and quality information available through transparency tools. GAO analyzed information from two private tools—selected because they had both cost and quality information—and CMS's five transparency tools, reviewed research to identify best practices for conveying information to consumers, interviewed CMS and HHS officials and subject matter experts, and reviewed CMS and HHS planning documents and relevant criteria for effective planning in the federal government. Results obtained from two selected private consumer transparency tools GAO reviewed—websites with health cost or quality information comparing different health care providers—show that some providers are paid thousands of dollars more than others for the same service in the same geographic area, regardless of the quality of such services. For example, the cost for maternity care at selected acute care hospitals in Boston, all of which rated highly on several quality indicators, ranged from $6,834 to $21,554 in July 2014. Transparency tools are most effective if they provide information relevant to consumers and convey information in a way that consumers can readily understand. GAO identified key characteristics of effective transparency tools through a literature review and interviews with experts. The information that is most relevant to consumers relates directly to their personal circumstances, such as information on specific procedures they are considering, and allows them to make meaningful distinctions among providers based on their performance. Characteristics of such relevant information include describing key differences in quality of care and costs, particularly for what consumers are likely to pay out of pocket based on their specific circumstances. In addition, effective transparency tools must take specific steps to make the information they present understandable to consumers. For example, tools must enable consumers to discern patterns by summarizing related information and allowing consumers to customize information to focus on what is most relevant to them. The Centers for Medicare & Medicaid Services (CMS) operates five transparency tools—Nursing Home Compare, Dialysis Facility Compare, Home Health Compare, Hospital Compare, and Physician Compare—that are limited in their provision of relevant and understandable cost and quality information for consumers. In particular, GAO found that the tools lack relevant information on cost and provide limited information on key differences in quality of care, which hinders consumers' ability to make meaningful distinctions among providers based on their performance. 
Because none of the tools contain information on patients' out-of-pocket costs, they do not allow consumers to combine cost and quality information to assess the value of health care services or anticipate the cost of such services in advance. Additionally, GAO found substantial limitations in how the CMS tools present information, such as, in general, not using clear language and symbols, not summarizing and organizing information to highlight patterns, and not enabling consumers to customize how information is presented. CMS, part of the Department of Health and Human Services (HHS), has taken some steps to expand access to cost and quality information for consumers, but has not established procedures or metrics to ensure the information it collects and reports meets consumer needs. Both HHS and CMS have set goals to report on measures that meet consumer needs. However, CMS's process for developing and selecting cost and quality measures for its tools has been heavily influenced by the concerns of providers rather than consumers. Without procedures or metrics focusing on consumer needs, CMS cannot ensure that these efforts will produce cost and quality information that is relevant and understandable to consumers seeking to make meaningful distinctions. GAO recommends that HHS's CMS take steps to improve the information in its transparency tools and develop procedures and metrics to ensure that tools address consumers' needs. HHS concurred with the recommendations and provided technical comments that were incorporated as appropriate.
Although not specifically required to do so by statute, FRB considers the fair lending compliance of the entities under the holding companies involved in the merger and any substantive public comments about such compliance. FRB must act on a merger request within 90 days of receiving a complete application or the transaction will be deemed to have been approved. FRB also seeks comments from appropriate state and federal banking regulatory agencies, which have 30 days to respond. While the application is pending, public comment on the proposed merger is to be solicited through notices in newspapers and the Federal Register. The public is allowed 30 days to provide written comments. FRB is required to consider several factors when reviewing a merger application, including (1) the financial condition and managerial resources of the applicant, (2) the competitive effects of the merger, and (3) the convenience and needs of the community to be served. Fair lending oversight and enforcement responsibilities for entities within a bank holding company vary according to entity type (see fig. 1). Federal banking regulators are responsible for performing regularly scheduled examinations of insured depository institutions and their subsidiaries to assess compliance with fair lending laws. In contrast, nonbank subsidiaries of bank holding companies are not subject to regularly scheduled compliance examinations by any agency. However, the fair lending laws provide primary enforcement authority over nonbank mortgage subsidiaries to HUD and FTC. HUD has enforcement authority with respect to FHAct violations for all institutions, and FTC has ECOA enforcement responsibility with respect to all lenders that are not under the supervision of another federal agency. For example, FTC is responsible for the enforcement of ECOA with respect to nonbank mortgage subsidiaries of bank holding companies. FRB has general legal authority under the Bank Holding Company Act and other statutes to examine nonbank mortgage subsidiaries of bank holding companies. Appendix III contains information regarding the extent of mortgage lending performed by banks, thrifts, and independent mortgage companies, another major component of the mortgage lending market, which are not addressed in this study. It also provides data specific to the banking sector. Federal banking regulatory agencies are authorized under ECOA to use their full range of enforcement authority to address discriminatory lending practices by financial institutions under their jurisdictions. This includes the authority to seek prospective and retrospective relief and to impose civil money penalties. HUD, on the other hand, has enforcement authority with respect to FHAct violations for all institutions and HMDA compliance responsibilities for independent mortgage companies. Both ECOA and FHAct provide for civil suits by DOJ and private parties. Whenever the banking regulatory agencies or HUD have reason to believe that an institution has engaged in a “pattern or practice” of illegal discrimination, they must refer these cases to DOJ for possible civil action. Such cases include repeated, regular, or institutionalized discriminatory practices. Other types of cases also may be referred to DOJ. From 1996 through 1998, DOJ entered into four settlements and one consent decree involving fair lending compliance. In the same period, FTC entered into three consent decrees and issued one complaint that were based at least in part on ECOA compliance issues. 
FRB and OCC, respectively, took two and nine enforcement actions against regulated institutions for violations of the fair lending laws and regulations in this same time period. During this time period, FRB, OCC, and FTC also conducted various investigations of consumer complaints they received regarding alleged fair lending violations by institutions under their jurisdiction. For example, FRB conducted 32 investigations of consumer complaints it received in 1998 that alleged fair lending violations by state member banks. HUD can investigate fair lending complaints against various types of institutions, including bank holding companies, national banks, finance companies, mortgage companies, thrifts, real estate companies, and others. In processing fair lending complaints, HUD is to conduct an investigation and, if evidence suggests a violation of the law, issue a charge. HUD is required by law to attempt to conciliate such cases. From 1996 through 1998, HUD entered into 296 conciliation agreements. Of the 296, at least 108 involved banks, mortgage companies, or other entities related to bank holding companies. If conciliation is not achieved, HUD may pursue the case before an Administrative Law Judge. However, a complainant, respondent, or aggrieved person may elect to have the claims asserted in a federal district court instead of a hearing by an Administrative Law Judge. The Secretary of HUD may review any order issued by the Administrative Law Judge. Decisions of the Administrative Law Judge may be appealed to the federal court of appeals. Regulatory enforcement of ECOA and FHAct, enacted in 1974 and 1968, respectively, is supported by HMDA. As amended in 1989, HMDA requires lenders to collect and report data annually on the race, gender, and income characteristics of mortgage applicants and borrowers. Lenders who meet minimum reporting requirements submit HMDA data to their primary banking regulator or HUD in the case of independent mortgage companies. HMDA data are then processed and made available to the public through the reporting lenders, the Federal Financial Institutions Examination Council, and other sources. Such information is intended to be useful for identifying possible discriminatory lending patterns. As we noted in our 1996 report on fair lending, federal agencies with fair lending enforcement responsibilities face a difficult and time-consuming task in the detection of lending discrimination. Statistical analysis of loan data used by some federal agencies can aid in the search for possible discriminatory lending patterns or practices, but these methods have various limitations. For example, these statistical models cannot be used to detect illegal prescreening or other forms of discrimination that occur prior to the submission of an application. For these forms of discrimination, consumer complaints may be the best indicator of potential problems. We noted in the report that it is critical that the agencies continue to research and develop better detection methodologies in order to increase the likelihood of detecting illegal practices. In addition, we encouraged the agencies' efforts to broaden their knowledge and understanding of the credit search and lending processes in general because such knowledge is a prerequisite to improving detection and prevention of discriminatory lending practices. Nondepository lenders must report HMDA data, regardless of asset size, if they originated 100 or more home purchase loans (including refinancings) during the calendar year. 
Depository institutions are exempt from reporting HMDA data if they made no first-lien home purchase loans (including refinancings of home purchase loans) on one-to-four family dwellings in the preceding calendar year. Nondepository institutions are exempt if their home purchase loan originations (including refinancings of home purchase loans) in the preceding calendar year came to less than 10 percent of their total loan originations (measured in dollars). The six mergers we reviewed were NBD's acquisition of First Chicago in 1995, Fleet's acquisition of Shawmut in 1995, Chemical's acquisition of Chase in 1996, NationsBank's acquisition of Boatmen's in 1997, NationsBank's acquisition of BankAmerica in 1998, and BancOne's acquisition of First Chicago NBD in 1998. To verify the completeness of FRB's summaries of the comment letters, we developed a data collection instrument, reviewed a sample of comment letters submitted for two of the mergers, and compared our data with the FRB summaries. From our sampling of comment letters, we determined that FRB's internal summaries of the comment letters were accurate and that we could rely upon the other FRB summaries as accurate reflections of the public comments submitted. To assess FRB's consideration of the types of fair lending issues raised during the merger process for large bank holding companies, we reviewed FRB's internal memorandums and supporting documentation for the six selected mergers and FRB's orders approving the mergers in question. We also interviewed FRB staff involved in assessing the comments made by consumer and community groups for the six selected mergers. In addition, we obtained and analyzed fair lending enforcement actions taken by FRB, OCC, DOJ, FTC, and HUD to determine if they involved institutions that were part of the six selected mergers. We also conducted interviews with representatives of these agencies to discuss coordination policies and procedures related to the merger process for these large bank holding companies. We held discussions with representatives of the four bank holding companies that resulted from the six mergers, representatives of bank industry trade groups, and various consumer and community groups that commented on the six mergers to obtain their views regarding the federal regulatory response to fair lending issues raised during the merger process. We conducted our review from November 1998 to July 1999, in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from FRB, OCC, FTC, DOJ, and HUD. FRB, OCC, and HUD provided written comments that are included in appendixes IV through VI. A summary of the agencies' comments and our responses is presented at the end of this letter. Consumer and community groups submitted comment letters raising fair lending issues in each of the six mergers. The number of comment letters that FRB received on the mergers—which included letters supporting or opposing the merger—ranged from 17 to approximately 1,650. Table 1 lists the primary fair lending issues raised and the number of mergers in which each issue was raised. As shown in table 1, consumer and community groups raised the issue of perceived high denial and low lending rates to minorities in all six cases. The groups typically based these concerns on their analysis of HMDA data. For example, one of the community groups commenting on a proposed merger cited denial rates for minorities that were twice the rate for Whites in a particular geographic area. 
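To illustrate the kind of denial-rate comparison described above, the following minimal sketch (in Python, using invented application records rather than actual HMDA data) computes denial rates by applicant group for a single lender and for the market aggregate, and then takes the ratio that commenters typically cited:

```python
# Hypothetical sketch only: these records are invented and omit the loan-level
# detail (such as creditworthiness and loan terms) that HMDA data also lack.
from collections import defaultdict

# Each record: (lender, applicant_group, action), where action is "approved" or "denied".
applications = [
    ("Lender X", "minority", "denied"), ("Lender X", "minority", "denied"),
    ("Lender X", "minority", "approved"), ("Lender X", "minority", "approved"),
    ("Lender X", "white", "denied"), ("Lender X", "white", "approved"),
    ("Lender X", "white", "approved"), ("Lender X", "white", "approved"),
    ("Lender Y", "minority", "approved"), ("Lender Y", "white", "approved"),
]

def denial_rates(records):
    """Return the share of applications denied, by applicant group."""
    totals, denials = defaultdict(int), defaultdict(int)
    for _, group, action in records:
        totals[group] += 1
        denials[group] += action == "denied"
    return {group: denials[group] / totals[group] for group in totals}

lender = denial_rates([r for r in applications if r[0] == "Lender X"])
market = denial_rates(applications)  # aggregate of all reporting lenders
ratio = lender["minority"] / lender["white"]  # 0.50 / 0.25 = 2.0 in this example
print(lender, market, ratio)
# A ratio like this can flag a disparity, but, as discussed later, it cannot
# by itself establish illegal discrimination.
```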
In other cases, consumer and community groups cited HMDA data indicating that the number of loans made to minority groups by the institutions involved in the merger was not consistent with the demographics of a particular market. The groups claimed that the HMDA data provided evidence of a disparate impact in lending to minorities. The consumer and community groups were most often concerned about the lending record of the subsidiaries of the holding company that was the acquirer. However, a number of these groups raised issues with the lending records of both holding companies involved in the proposed merger. In a few cases, the lending record of the subsidiaries of the holding company that was to be acquired was identified as an issue. The consumer and community groups often did not identify the specific institution under the holding company in question but, instead, focused on the overall lending in specific geographic markets. Consumer and community groups raised fair lending concerns in five of the six mergers regarding the activities of nonbank mortgage subsidiaries. In four of the mergers, the concerns involved the nonbank mortgage subsidiaries of the holding companies. Nonbank mortgage subsidiaries of holding companies accounted for approximately one-fifth of the total mortgage lending of the bank sector, and they experienced steady growth in both the number and dollar value of mortgage loans originated from 1995 through 1997. Their growth in lending activity out-paced other bank sector entities in 1997. (See app. III, figs. III.2 to III.5.) The nonbank mortgage company in the fifth merger was a subsidiary of one of the lead banks involved in the merger. In five merger cases, consumer and community groups cited abusive or what they characterized as “predatory” sub-prime lending as a fair lending issue. Sub-prime lending itself is not illegal and is generally acknowledged as a means of widening consumer access to credit markets. However, as stated in a recent interagency document, the “higher fees and interest rates combined with compensation incentives can foster predatory pricing or discriminatory steering of borrowers to sub-prime products for reasons other than the borrower’s underlying creditworthiness.” The alleged abusive sub-prime lending activities cited by the consumer and community groups included such practices as undisclosed fees and aggressive collection practices that were more likely to affect the elderly, minorities, and low- to moderate-income individuals. Other concerns identified with sub-prime lending included the alleged targeting of minorities for the higher priced sub-prime loans even if they would qualify for loans at lower rates. The groups typically relied on anecdotal rather than statistical evidence to support their concerns. HMDA data cannot be used to analyze sub-prime lending because HMDA does not require lenders to identify which loans are sub-prime or report loan characteristics that can be used to identify sub-prime lending, such as the pricing and fees, and does not require the reporting of borrowers’ credit information. In three of the merger cases, consumer and community groups alleged that minorities were being directed or steered disproportionately to the holding company lender that offered the highest-priced loans or the least amount of service. In two of the mergers, the allegations focused on steering between the banks and the holding companies’ nonbank mortgage companies engaged in sub-prime lending. 
The steering issue raised in the third merger involved referral practices between a bank and its subsidiaries that allegedly resulted in minorities typically receiving a lower level of service. One of the consumer and community groups alleged that a holding company established the nonbank mortgage company as a bank holding company subsidiary rather than as a bank subsidiary to escape regulatory scrutiny. As noted earlier, nonbank subsidiaries of bank holding companies are not subject to regularly scheduled compliance examinations. The group stated that this created a "regulatory blindspot." Consumer and community groups raised prescreening and marketing issues in four mergers. In two of the four, the consumer and community groups were concerned about prescreening of applicants that resulted in the referral of only those applicants deemed qualified. The groups alleged that the prescreening programs violated the ECOA provision that requires lenders to provide applicants with written notification of a loan application denial stating the reason or basis for the denial. The community groups also raised issues with bank fee or marketing practices. According to these groups, some practices were intended to discourage minorities from applying for credit, and other practices disproportionately targeted minorities for loans with higher interest rates. In two of the merger cases, consumer and community groups raised issues related to lending to small businesses owned by minorities or located in minority communities. The primary support for these issues appeared to be analysis of HMDA data and Community Reinvestment Act (CRA) data. The consumer and community groups alleged that the holding companies involved in the two mergers were discriminating against or providing an inadequate level of funding to minority-owned small businesses or small businesses located in minority communities. Concerns about the discriminatory treatment of minority applicants were raised in two of the mergers. The basis for the complaint on one merger was the results of an independent testing program that used matched-pair testing. According to the complainant, Black applicants were kept waiting longer, were quoted higher closing costs and longer overall processing times, and overall were discouraged from applying for credit in comparison to White applicants. In another merger, FRB received several comment letters that objected to the acquiring bank holding company's customer call center's handling of fair lending complaints. Specifically, they asserted that the center's staff did not inform callers of their right to file a complaint and lacked expertise in fair lending and investigative techniques. Redlining of predominantly minority neighborhoods was alleged in one merger. A consumer/community group said that the acquiring bank holding company had redlined many of the low- and moderate-income, predominantly minority communities in a particular city. The group based its allegation on the lack of bank branches and minimal marketing of credit products in those communities. FRB analyzed HMDA data to help assess the validity of the fair lending concerns raised by the groups. FRB also obtained and reviewed additional information from the bank holding companies involved in the proposed merger. FRB staff stated that in assessing fair lending concerns, they relied primarily on current and past fair lending compliance examinations performed by the primary banking regulator(s). 
In each of the six mergers, FRB staff obtained and reviewed additional information provided by the bank holding companies to assess the fair lending issues raised by consumer and community groups. According to FRB officials, they forwarded the comments received from the consumer and community groups during the public comment period to the bank holding companies involved in the mergers. They explained that the bank holding companies were encouraged, but not required, to provide information or a response to the issues raised in the comment letters. In addition, FRB sometimes requested specific information from the bank holding companies in response to issues raised by the consumer and community groups. For example, FRB staff requested and assessed information from one holding company about the settlement of lawsuits involving consumer complaints. This request was made in response to a group’s concerns about the compliance of a nonbank mortgage subsidiary with fair lending and consumer protection laws. In response to consumer and community groups’ concerns about overall lending to minorities by the entities involved in the proposed holding company mergers, FRB staff obtained and analyzed HMDA data. Using these data, FRB compared the lending performance of the bank holding company subsidiary in question to the performance of other lenders in the aggregate for a particular community or geographic area. In addition, they looked at the holding company’s record of lending to minorities over the last several years to determine if there were any discernible patterns that could indicate discriminatory lending. In conducting their analysis, FRB staff identified lending rate disparities in some areas/markets that indicated that the holding company subsidiary was lagging behind the aggregate or not doing as well as could be expected. However, FRB staff noted that although HMDA data may indicate a need for further analysis or targeted reviews through examinations, HMDA data alone cannot provide conclusive evidence of illegal discrimination because of known limitations in the HMDA data. Bank regulators, bank officials we contacted, and some academics and community group representatives agreed that HMDA data are limited in their potential to demonstrate discrimination. Principal among the limitations associated with HMDA data is the lack of information on important variables used in the credit underwriting process. For example, HMDA data do not include information on the creditworthiness of the applicant, the appraised value of the home, or the credit terms of the loan. This information typically is maintained only in the lender’s loan files and is accessible to regulators conducting compliance examinations or investigations. FRB staff stated that they relied heavily on the primary regulator’s compliance examinations because on-site comprehensive reviews of actual bank practices and records are the best means to assess compliance with the fair lending laws. Moreover, time, access, and authority constraints limit the analysis of fair lending issues that FRB staff can perform during the application process for bank holding company mergers. FRB officials stated that the merger application review process is not a substitute for the fair lending examination process. Therefore, FRB relied on the past and current fair lending examination results of the primary banking regulator. 
In response to the fair lending concerns raised by the consumer and community groups, FRB staff said they obtained information on the scope of and conclusions reached on prior and ongoing fair lending compliance examinations performed by the primary banking regulator. The examinations FRB relied on ranged from more than 3 years old to recently completed or still ongoing. These examinations covered the fair lending compliance of the banks and their subsidiaries with the fair lending laws and regulations. The fair lending examination reports typically did not address all of the fair lending issues raised by the consumer and community groups during the merger process, such as abusive sub-prime lending, discriminatory prescreening/marketing, and steering. Moreover, nonbank mortgage subsidiaries of bank holding companies are not routinely examined for fair lending compliance by any federal regulatory or enforcement agency. On a case-by-case basis, FRB officials told us they have exercised their general authority granted under the Bank Holding Company Act and other statutes to conduct fair lending compliance investigations of a bank holding company's nonbank mortgage subsidiaries. In two cases, FRB had conducted prior investigations of nonbank mortgage subsidiaries involved in proposed mergers we studied. According to FRB officials, a long-standing FRB policy of not routinely conducting consumer compliance examinations of nonbank subsidiaries was formally adopted in January 1998. The policy is based on three primary considerations. First, ECOA and other major laws enforced under FRB's compliance program give primary enforcement responsibility for nonbank subsidiaries of bank holding companies to FTC. Second, routine examinations of the nonbank subsidiaries would be costly. Third, such examinations would, in the FRB officials' opinion, raise questions about "evenhandedness" given that similar entities, such as independent mortgage companies, that are not part of bank holding companies would not be subjected to examinations. FRB does not have specific criteria as to when it will conduct on-site investigations of these nonbank mortgage subsidiaries. According to FRB, on-site inspections of a holding company nonbank mortgage subsidiary are conducted when the factors present suggest that discriminatory practices are occurring and when it seems appropriate to do so because the matter may relate to relevant managerial factors. In contrast, FRB's policy is to conduct full, on-site examinations of the subsidiaries of the banks it regulates. Banks still account for a greater amount of lending than the other bank sector entities—bank subsidiaries and nonbank mortgage subsidiaries of holding companies. However, lending by nonbank mortgage subsidiaries has grown steadily since 1995 and outpaced other bank sector entities in 1997 (see app. III). In discussions with FTC officials, we confirmed that they do not examine or routinely investigate nonbank mortgage subsidiaries of holding companies. They emphasized that FTC is a law enforcement agency, not a regulator. FTC, they said, does not conduct compliance examinations but conducts investigations targeted at specific entities, most of which are agency-initiated. However, investigations can result from consumer complaints that indicate a pattern or practice or public interest problem to be explored. 
The officials noted that FTC’s jurisdiction is broad—generally covering any lending entity that is not a bank, thrift, or their holding companies—but FTC resources are limited. They said FTC’s current ECOA enforcement efforts have focused on independent mortgage or finance companies and discriminatory pricing issues. During the period of the six mergers that we reviewed, 1996 through 1998, FTC achieved three settlements and issued one complaint in ECOA enforcement actions; none involved bank holding company entities. In all six mergers, FRB noted that the primary banking regulator had found no evidence of illegal credit discrimination in its most recent fair lending compliance examinations. Of the two prior FRB investigations of nonbank mortgage subsidiaries, FRB found no evidence of illegal discrimination in one case. As discussed further in the next section, FRB made a referral to DOJ on the other case on the basis of the nonbank mortgage subsidiary’s use of discretionary loan pricing practices that resulted in disparate treatment based on race. FRB approved all six of the mergers, but one was approved with a condition related to a fair lending compliance issue. At the time of the merger application in question, DOJ was pursuing an investigation—on the basis of a FRB referral—of the holding company’s nonbank mortgage subsidiary. The focus of the investigation was on the nonbank mortgage subsidiary’s use of discretionary loan pricing—known as overaging— which allegedly resulted in minorities disproportionately paying higher loan prices than nonminorities. The nonbank mortgage subsidiary was under a commitment with FRB not to engage in overage practices. FRB approved the merger with the condition that the holding company not resume the overage practice without FRB’s approval. DOJ subsequently entered into a settlement agreement with the nonbank mortgage subsidiary in which it agreed to change its overage policies and pay $4 million into a settlement fund. In our review of the six merger cases, we found weaknesses in some of FRB’s practices that could limit the access of various government agencies to information about the fair lending compliance performance of bank holding company entities. Two weaknesses could limit FRB’s access to such information during consideration of bank holding company merger applications. Specifically, FRB did not routinely contact FTC or HUD to obtain information about any fair lending complaints or concerns related to the entities involved in the mergers. Moreover, FRB did not ensure that information about the structural organization of the bank holding companies was available to the public or DOJ, which could have limited the information provided to FRB by these sources. A third weakness could limit the access of other agencies with fair lending compliance responsibilities to information FRB obtained during consideration of merger applications. Specifically, FRB did not routinely provide the primary banking regulators, FTC, and HUD with the comment letters it received during the merger applications process regarding the fair lending compliance of the banks and nonbank mortgage subsidiaries of the holding companies involved in the six mergers. As discussed previously, the enforcement of fair lending laws is shared by a number of federal agencies. For example, there are four agencies (FRB, FTC, HUD, and DOJ) that have roles in fair lending enforcement with regard to nonbank mortgage subsidiaries of bank holding companies. 
Federal agencies involved in fair lending oversight and enforcement—including FRB, FTC, HUD, DOJ, and the other federal banking regulators—recognize the need for effective coordination in their Interagency Policy Statement on Discrimination in Lending. This policy states that they will seek to coordinate their actions to ensure that each agency's action is consistent and complementary. In keeping with the spirit of this policy, FRB routinely solicited input from the primary federal regulator for the banking subsidiaries of the holding companies involved in the merger. In addition, FRB and DOJ staff told us that they coordinated informally with each other during the merger application process regarding the fair lending compliance of the holding company subsidiaries involved in the mergers. However, FRB did not typically contact FTC or HUD to determine if they had ongoing investigations involving any of the bank holding company subsidiaries or other data, including consumer complaints, that could be useful in assessing the fair lending concerns raised by consumer and community groups during the merger process. In the five merger cases in which fair lending concerns about the nonbank mortgage subsidiaries were raised, FRB contacted FTC with regard to only one of the merger applications; FRB did not contact HUD in any of the cases. Without coordination with FTC and HUD, FRB cannot ensure that it has access to all relevant information about fair lending issues that may arise in its consideration of bank holding company merger applications. In three of the six merger cases, HUD had fair lending complaint investigations in process at the same time that FRB was considering the merger applications. There was one merger in which HUD had three ongoing investigations arising out of consumer complaints (complaint investigations) at the time of the merger application. For example, one of the cases that HUD was investigating during a merger involved alleged discrimination at the preapplication interview, such as minority applicants receiving less information about the bank's mortgage products and being quoted less favorable terms than similarly qualified White applicants. All six of the complaint investigations that were in process at the time of the mergers were the result of complaints by individuals. In five of the six cases, HUD entered into conciliation agreements that involved monetary payments to the complainants ranging from $350 to $46,000. In soliciting input on the proposed merger, FRB did not provide federal enforcement agencies or the public with structural information about bank holding companies that would identify the affiliated bank and nonbank lenders involved in the merger, nor did it direct them to sources of such information. As a result, federal enforcement agencies and the public may not have been able to provide all relevant information. For this reason, FRB may not have had current and complete fair lending information on bank holding companies to properly assess the fair lending activities of these companies during the merger application process. Ensuring knowledge of and access to structural information on bank holding companies, including the names and addresses of bank and nonbank lenders under the applicant, could enable the enforcement agencies to better complement FRB's efforts to assess the fair lending activities of bank holding company entities for the merger application process. 
A HUD official we interviewed stated that without information from FRB regarding the structural organization of a bank holding company, HUD may not be able to identify the entities within the holding company structure that were subject to ongoing or past complaint investigations. Officials from DOJ and FTC also indicated the need for such information. Access to information about the structural organization of the holding companies involved in proposed mergers could also help improve the quality of public comments that FRB receives during the merger process. FRB staff stated that the comments that they receive from consumer and community groups often exhibit a lack of understanding of the often complex structural organization of the holding companies involved in a proposed merger—particularly as it relates to mortgage lending activity. Outlines of the hierarchical structure of bank holding companies have been available since January 1997 through the FRB's National Information Center (NIC) on the Internet. However, not all the government agencies and consumer and community groups may be aware of the NIC source or have access to it. In addition, the structural information provided by NIC could be viewed as somewhat overwhelming and, in that sense, difficult to use. As noted on the NIC Web site itself, the information for large institutions "can be quite lengthy and complex." The structural information on the NIC Web site is also limited in that geographical information is provided for some, but not all, lenders within holding companies. Although the site offers the names and addresses of banking institutions' branch offices, it does not offer such information for nonbank lenders within a holding company. To determine the affiliation of a local lender's branch office, consumers are likely to find names and addresses necessary—especially in light of the many consolidations that are occurring in today's financial marketplace and the similarities that can exist in lenders' names. Because the enforcement of fair lending laws is shared by a number of federal agencies and fair lending problems may involve the interaction of entities overseen by differing federal agencies, coordinated information-sharing among the agencies can contribute to effective federal oversight. FRB staff told us they do not typically forward the fair lending-related comment letters received during the merger process to the appropriate primary banking regulator, FTC, or HUD for consideration in subsequent fair lending oversight activities. FRB staff stated that they do refer some of the fair lending-related comment letters if they identify problems or practices that give rise to supervisory concerns. They explained that their internal policies and, in the case of HUD, a Memorandum of Agreement between HUD and the banking regulators require FRB to forward consumer complaints by individuals to the appropriate federal agency. However, FRB staff stated that comment letters that raised general fair lending issues regarding lending patterns or policies would not have been routinely forwarded to other agencies. For example, FTC did not receive the comment letters from consumer and community groups that raised fair lending issues with the nonbank mortgage subsidiaries of the holding companies involved in four of the mergers. 
We believe that by forwarding the fair lending-related comment letters, FRB will provide the other agencies the opportunity to detect problems that arise from the interactions of entities under the holding company structure that may otherwise go undetected. The historical division of fair lending oversight responsibility and enforcement authority presents challenges and opportunities to agencies that have jurisdiction over the entities in large bank holding companies. Although large bank holding companies typically include entities overseen by different federal regulators, some types of fair lending abuses could involve operating relationships between such entities. An adequate federal awareness during the merger application process of fair lending compliance performance and federal response to any alleged fair lending abuses may well depend upon effective information-sharing among the various agencies and the ready availability to these agencies and the public of information identifying lenders under the holding company. Although the merger application process is not intended to substitute for fair lending examination or enforcement processes of individual agencies, it presents an opportunity to enhance the effectiveness of those processes. To take advantage of this opportunity, the FRB's merger application process for large bank holding companies should provide that relevant information, including consumer complaints or consumer complaint data, be obtained from all agencies with responsibility for compliance with fair lending laws. Further, the process should ensure that this information, as well as comment letters received from consumer and community groups, is shared among those agencies to assist in their continuing efforts to identify and oversee developments in mortgage lending that can affect lender compliance with fair lending laws. FRB, as regulator of bank holding companies, is uniquely situated to monitor developments in operating relationships among holding company entities that could affect fair lending. Its role could be especially valuable in monitoring the lending activity of nonbank mortgage subsidiaries. The FRB policy of not routinely examining nonbank mortgage subsidiaries for fair lending compliance and the FTC role as an enforcement agency rather than a regulator result in a lack of regulatory oversight of the fair lending performance of nonbank mortgage subsidiaries, whose growth in lending outpaced other bank sector entities in 1997. To enhance the consideration of fair lending issues during the bank holding company merger approval process, we recommend that the Board of Governors of the Federal Reserve System develop a policy statement and procedures to help ensure that (1) all parties asked to provide information or views about the fair lending performance of entities within the bank holding companies are given or directed to sources for structural information about the holding companies, and (2) all federal agencies responsible for helping to ensure the fair lending compliance of entities involved in the proposed merger are asked for consumer complaints and any other available data bearing on the fair lending performance of those entities. To aid in ongoing federal oversight efforts, we recommend that FRB develop a policy and procedures to ensure that it provides federal agencies relevant comment letters and any other information arising from the merger application process that pertains to lenders for which they have fair lending enforcement authority. 
For example, the other agencies may be interested in receiving FRB’s HMDA analysis as well as the other data obtained and analyzed by FRB in response to the fair lending allegations raised in the comment letters. In addition, we recommend that FRB monitor the lending activity of nonbank mortgage subsidiaries and consider examining these entities if patterns in lending performance, growth, or operating relationships with other holding company entities indicate the need to do so. We requested comments on a draft of this report from the Chairman of the Federal Reserve Board, the Comptroller of the Currency, the Secretary of Housing and Urban Development, the General Counsel of the Federal Trade Commission, and the Assistant Attorney General for Administration of the Department of Justice. Each agency provided technical comments, which we incorporated into the report where appropriate. In addition, we received other written comments from FRB, OCC, and HUD; these are reprinted in appendixes IV through VI of this report. With respect to the draft report’s recommendations, FRB sought clarification regarding the first recommendation, generally agreed with the next two recommendations, and disagreed with the last recommendation. OCC and HUD did not disagree with our recommendations and expressed their support for efficient and effective enforcement of the fair lending laws. Further, HUD suggested that a more formal arrangement be created for obtaining and considering agency input during FRB’s merger approval process. FRB sought clarification of our intent in the first recommendation—that, when soliciting comments on proposed bank holding company mergers, FRB provide structural information about those holding companies. FRB said that information about holding company structure is available to the public and federal agencies on the Internet at the Federal Reserve’s National Information Center (NIC) site and, upon request, from the Board and the Reserve Banks. FRB also said that the information is often in the application filed by the applicant bank holding company, for those who elect to review the application in full; and the information is widely available from publications and from other federal agencies. We added information to the text to clarify our intent. Our intent in recommending that FRB provide the structural information or a source or sources of such information is to enhance consideration of fair lending issues during the merger approval process. We believe that the provision of structural information, including names and addresses of branch offices of lenders, or directions about how to obtain that information, can help ensure that FRB receives from interested parties timely and complete fair lending information on lenders involved in the merger. Without being able to identify the bank and nonbank lenders in the holding companies involved in a merger, interested parties could be unable to determine if lenders whose actions have raised fair lending concerns are affiliated with those holding companies. We do not disagree that this information is sometimes available from a variety of sources. However, ready public access to that information depends upon public awareness of the availability of the information. 
We note that none of the Federal Register notices requesting public comment on bank holding company mergers in our sample that occurred after 1997, when NIC was created, mentioned the NIC Internet site or any other source of information about the structure of the applicant bank holding companies. Responding to the report’s statement that information provided on NIC can be quite lengthy and complex, FRB said that it believed the complexity is largely a reflection and a function of the size and scope of these large organizations. FRB also said it was not clear just how the information could be made simpler for the public. We agree that the complexity of the information about the largest bank holding companies on NIC is a function of the size and scope of these organizations. However, we also believe that the information could be narrowed, and in that way simplified, by a mechanism that could help interested parties focus on the relevant details of the holding company’s structure. A variety of entities are often affiliated with large holding companies, including, for example, investment, leasing, and real estate development companies. A NIC search mechanism to narrow the structural information to bank and nonbank lenders affiliated with a holding company would aid federal agencies and consumer organizations that may need such information to collect or sort through fair lending concerns about such institutions from field offices or member organizations nationwide. More focused information, including names and addresses of branch offices, would also benefit consumers attempting to determine the affiliation of a local lender’s office. As mentioned in the report, NIC provides a mechanism for obtaining lists of the names and addresses of banking institutions’ branch offices; however, it does not provide the addresses of nonbank lenders’ branch offices or list such branch offices. We believe that this is an important weakness in NIC as a tool to be used in the merger application process by agencies, consumer groups, and individuals, considering the prevalence of concerns about nonbanks’ fair lending performance in the merger cases we analyzed. FRB said that persons generally start out with the identity of the organization about which they have concerns, and it should be relatively simple to confirm whether that organization is affiliated with an applicant bank holding company. We agree that persons would generally use NIC to determine if an identified organization is affiliated with an applicant bank holding company. However, the ease of determining this through NIC could vary, depending upon whether the organization of concern is a banking institution or a nonbank subsidiary of the holding company. As of October 10, 1999, NIC users could determine the holding company affiliation of a banking institution (but not a nonbank holding company subsidiary) by entering the legal name of a banking institution (or even part of that name) and the city and state in which the institution is located. NIC also offered a function enabling users to obtain a listing of addresses of all branch offices of banking institutions (but not addresses of franchises or branch offices of lenders that are nonbank holding company subsidiaries). 
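As a rough illustration of the kind of search mechanism suggested above, the following sketch (in Python, with an invented holding company hierarchy and type labels; NIC itself provides no such function or data format) walks a holding company's structure and keeps only the bank and nonbank lender entities:

```python
# Hypothetical sketch only: the hierarchy, entity names, and type labels are
# invented for illustration; NIC does not provide data in this form.
example_hierarchy = {
    "name": "Example Bancorp", "type": "holding_company", "children": [
        {"name": "Example National Bank", "type": "bank", "children": [
            {"name": "Example Bank Mortgage Co.", "type": "nonbank_lender", "children": []},
        ]},
        {"name": "Example Leasing Corp.", "type": "leasing", "children": []},
        {"name": "Example Home Finance Inc.", "type": "nonbank_lender", "children": []},
    ],
}

def find_lenders(entity, lender_types=("bank", "nonbank_lender")):
    """Recursively collect every bank or nonbank lender in the hierarchy."""
    found = [entity["name"]] if entity["type"] in lender_types else []
    for child in entity.get("children", []):
        found.extend(find_lenders(child, lender_types))
    return found

print(find_lenders(example_hierarchy))
# ['Example National Bank', 'Example Bank Mortgage Co.', 'Example Home Finance Inc.']
```

A filter of this kind would spare interested parties from reading through listings of leasing, investment, and other nonlender affiliates to find the lenders relevant to a fair lending concern.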
To confirm a nonbank lenders’ affiliation with an applicant bank holding company, the interested party’s only option is to search for the nonbank lender’s legal name while reading through the multipage listings of entities that describe the entire hierarchial structure, starting with the parent holding company. The only geographical information provided for a nonbank holding company subsidiary in the listing is the city and state domicile of the head office—that is, no branch offices or franchises are identified in the listing. Referring to our mention of the absence of geographic information on NIC, FRB notes that a person’s concerns about a particular entity will likely relate to the geographic area in which the person resides, or to which the person has some link. We agree with this statement. We also believe that a person concerned about a particular local lender is likely to need to see the names and addresses of lenders affiliated with holding companies involved in a proposed merger to determine if his concern about the local lender is relevant for FRB’s consideration. With regard to our recommendation for greater information sharing between FRB, the other banking regulators, HUD, and FTC during the merger application process, FRB generally agreed and said it would explore ways to enhance the systematic exchange of relevant information. However, FRB did not agree that it should seek information about other agencies’ consumer complaints as part of the merger application review process. The reasons for this were: A 1992 Memorandum of Understanding between HUD and the banking regulators’ calls for HUD to refer allegations of fair lending violations to the appropriate banking regulator, which is to take these into account in examinations and supervisory activity. HUD cases involving individual or isolated grievances—and not a finding of a pattern or practice—would not likely represent the type of information that is particularly useful in FRB’s review of managerial resources for purposes of the Bank Holding Company Act. Although the 1992 Memorandum of Understanding between HUD and the banking regulators calls for the referral of allegations of fair lending violations to the appropriate banking regulator, it does not address the referral of these fair lending allegations to FRB for consideration during the bank holding company merger application process. The fair lending allegations received by HUD, FTC, and the other banking regulators could be useful to FRB in its consideration of the managerial resources factor during the merger process. We acknowledge that not all consumer complaints received by other agencies would be relevant for FRB to consider during the bank holding company merger process. However, an otherwise unobserved pattern or practice bearing on the managerial resources of a large and complex holding company could emerge from a review of widely collected consumer complaints. Moreover, consumer complaint letters can be a useful indicator of certain types of illegal credit discrimination, such as discriminatory treatment of applicants and illegal prescreening and marketing. FRB stated that the exchange of information between agencies should (1) ensure that the information is provided in a timely manner and (2) maximize the benefits of the exchange while minimizing the burden to all parties. 
We concur with FRB’s expectations regarding the exchange of information and acknowledge FRB’s initiative in planning to consult with the other federal agencies to identify possible ways to enhance the systematic exchange of relevant information. FRB stated that it planned to take action in response to our recommendation that it provide copies of relevant comment letters received during the merger application process to the other federal agencies involved with fair lending enforcement. Specifically, FRB indicated that it would consult with the other agencies and was prepared to establish whatever mechanism deemed appropriate to ensure that the agencies receive public comments that they would find helpful to ongoing supervisory oversight. FRB’s plans are a positive first step in responding to our recommendation. FRB disagreed with our recommendation as stated in the draft report that it monitor the lending activities of nonbank mortgage subsidiaries and consider reevaluating its policy of not routinely examining these entities if circumstances warranted. FRB stated that it had recently studied this issue at length and concluded that although it had the general legal authority to examine nonbank mortgage subsidiaries of bank holding companies, it lacked the clear enforcement jurisdiction and legal responsibility for engaging in routine examinations. We revised the wording of our recommendation to clarify that we were not necessarily recommending that FRB consider performing routine examinations of nonbank mortgage subsidiaries. We recognize that FTC has the primary fair lending enforcement authority for the fair lending compliance of nonbank mortgage subsidiaries. However, FRB is uniquely situated to monitor the activities of these nonbank mortgage subsidiaries by virtue of its role as the regulator of bank holding companies and its corresponding access to data that are not readily available to the public or other agencies, such as FTC. If patterns in growth, lending performance, or operating relationships with other holding company entities do not change dramatically, then there may be no reason to examine these entities. Monitoring the lending activities of the nonbank mortgage subsidiaries would help FRB determine when it would be beneficial to conduct targeted examinations of specific nonbank mortgage subsidiaries using size, extent of lending in predominately minority communities, involvement in sub-prime lending, or other factors as the basis for selection. In other cases, FRB may determine that the results of its monitoring efforts should be referred to those agencies responsible for enforcement of nonbank mortgage subsidiaries’ compliance with fair lending laws. OCC and HUD did not disagree with our recommendations. OCC stated that it was committed to working with all the agencies that have a role in providing efficient and effective oversight of compliance with fair lending laws. HUD stated that it stands committed to enhancing coordination among federal agencies to achieve fair lending. HUD noted its support for efforts to ensure greater compliance among nondepository lenders with the FHAct and other consumer protection laws. HUD suggested that a memorandum of understanding that would govern interagency coordination during the merger application process might be appropriate. Such a memorandum could be a useful tool to document each agency’s responsibility regarding information sharing and coordination during the merger application process for bank holding companies. 
As agreed with your offices, we are sending copies of this report to Representative Rick Lazio, Chairman, and Representative Barney Frank, Ranking Minority Member, of the House Subcommittee on Housing and Community Opportunities; Representative James Leach, Chairman, and Representative John LaFalce, Ranking Minority Member, of the House Committee on Banking and Financial Services; and Senator Phil Gramm, Chairman, and Senator Paul Sarbanes, Ranking Minority Member, of the Senate Committee on Banking, Housing, and Urban Affairs. We are also sending copies of the report to the Honorable Alan Greenspan, Chairman, Board of Governors of the Federal Reserve System; the Honorable John D. Hawke, Jr., Comptroller of the Currency; the Honorable Andrew Cuomo, Secretary, Department of Housing and Urban Development; the Honorable Stephen R. Colgate, Assistant Attorney General for Administration, Department of Justice; and the Honorable Deborah A. Valentine, General Counsel, Federal Trade Commission. Copies will also be made available to others on request. If you or your staff have any questions regarding this letter, please contact me or Kay Harris at (202) 512-8678. Key contributors to this report are acknowledged in appendix VII.

GAO recommendation: Remove the disincentives associated with self-testing. Responsible agency(ies): Federal Reserve Board (FRB) and Department of Housing and Urban Development (HUD). Action taken by agency(ies): Congress enacted legislation in September 1996; FRB and HUD issued implementing regulations in December 1997.

Responsible agency(ies): FRB, Office of the Comptroller of the Currency (OCC), Federal Deposit Insurance Corporation (FDIC), Office of Thrift Supervision (OTS), and National Credit Union Administration (NCUA). Action taken by agency(ies): The Federal Financial Institutions Examination Council approved Interagency Fair Lending Examination Procedures in December 1998.

GAO recommendation: Adopt guidelines and procedures for the use of preapplication discrimination testing. Responsible agency(ies): Department of Justice (DOJ). Action taken by agency(ies): DOJ issued updated guidance on pattern and practice of discrimination to the banking regulators and HUD in November 1996. NCUA is in the process of developing guidance to address preapplication testing.

In addition to the issues raised by consumer and community groups in the six mergers that we looked at, representatives of the regulatory and enforcement agencies and the bank holding companies we contacted identified various emerging fair lending issues. These issues involved (1) credit scoring, (2) automated loan underwriting, and (3) mortgage brokers. The fair lending concerns associated with these three issues are discussed below. We do not attempt to address all of the various and complex enforcement, compliance, and consumer protection issues associated with each of the three topics. Instead, we highlight some of the fair lending concerns that have been associated with each topic. The Federal Reserve Board (FRB) and the Department of Justice (DOJ) raised the issue of potential discrimination in credit scoring as an emerging fair lending concern. The Office of the Comptroller of the Currency (OCC) expressed the concern that some lenders may view credit scoring as a safe harbor from fair lending issues. This would ignore the possibility that differential treatment may occur in segmenting the applicant population during the development or input of the data, or in judgmental overrides of the credit-scoring system.
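The override concern can be illustrated with a toy example. The sketch below shows a simplified scorecard and a manual override path; the attributes, point values, and cutoff are invented for illustration only and do not represent any actual lender's or credit bureau's scoring system.

```python
# Toy credit-scoring sketch: a scorecard assigns points to application attributes,
# sums them into a single score, and compares the score to a cutoff. The manual
# override path is the step fair lending reviews often focus on, because it
# replaces the objective score with subjective judgment. All values are invented.

SCORECARD = {
    "years_at_current_job": {(0, 2): 10, (2, 5): 25, (5, 100): 40},
    "debt_to_income_pct":   {(0, 20): 45, (20, 36): 30, (36, 100): 5},
    "prior_delinquencies":  {(0, 1): 50, (1, 3): 20, (3, 100): 0},
}
CUTOFF = 100  # minimum score for approval in this toy example

def score_applicant(attributes):
    total = 0
    for name, bands in SCORECARD.items():
        value = attributes[name]
        for (low, high), points in bands.items():
            if low <= value < high:
                total += points
                break
    return total

def decide(attributes, override=None):
    """Return 'approve' or 'deny'. A supplied override replaces the score-based
    decision, which is the step examined for possible disparate treatment."""
    if override in ("approve", "deny"):
        return override
    return "approve" if score_applicant(attributes) >= CUTOFF else "deny"

applicant = {"years_at_current_job": 3, "debt_to_income_pct": 28, "prior_delinquencies": 0}
print(score_applicant(applicant), decide(applicant))   # 105 approve
print(decide(applicant, override="deny"))              # deny (manual override)
```

In a fair lending review, the question of interest is whether override rates or override outcomes differ systematically across groups protected by ECOA or the FHAct.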
According to credit reporting companies (credit bureaus), credit scoring is intended to be an objective method for predicting the future credit performance of borrowers. Credit scoring has gained wide usage among lenders who use it to make lending decisions on various types of loans, such as installment, personal finance, bankcard, and, most recently, mortgage loans. To develop a credit-scoring system, lenders generally use a risk-scoring process that examines consumer credit reports, assigns numerical values to specific pieces of information, puts those values through a series of mathematical calculations, and produces a single number called a risk score or credit score. Lenders generally offer credit to borrowers with higher scores. The premise is that higher scores indicate a better likelihood that the borrower will repay the loan. According to FRB, discrimination in credit scoring could be revealed in two ways, either through disparate treatment or disparate impact. Disparate treatment and disparate impact are methods of analyzing whether discrimination exists. The disparate treatment analysis determines whether a borrower is treated less favorably than his/her peers due to race, sex, or other characteristics protected by the Equal Credit Opportunity Act (ECOA) or the Fair Housing Act (FHAct). The disparate impact analysis determines whether a lender's seemingly neutral lending policy has a disproportionately adverse impact on a protected group, whether the policy is justified by business necessity, and whether a less adverse alternative to the policy or practice exists. OCC, DOJ, and the Federal Trade Commission (FTC) agree that fair lending concerns in credit scoring most often arise when lenders ignore the credit score (i.e., override the score) and use subjective judgment to make a lending decision. Fair lending concerns associated with credit scoring were not raised as an issue in any of the six bank holding company mergers in our study. Officials from all four of the bank holding companies we interviewed stated they used credit-scoring systems. However, they indicated that their credit-scoring systems were applied with safeguards designed to ensure compliance with fair lending laws and regulations. From 1990 through 1998, the regulators and enforcement agencies had few cases of discrimination in credit scoring. OCC referred a case in 1995 and another in 1998 to DOJ that dealt with alleged discrimination in credit scoring. An agreement was reached with OCC in the 1995 case, and the 1998 referral resulted in DOJ filing a lawsuit. In this particular case, DOJ alleged that the bank required a higher credit score for Hispanic applicants to be approved for loans/credit. FTC cited one case of credit discrimination in 1994, which resulted in a consent decree. In this case, the lender had used overrides of the credit-scoring system that discriminated against applicants on the basis of marital status. The fair lending issues raised regarding credit scoring are closely related to those associated with automated loan underwriting. According to the Federal National Mortgage Association (Fannie Mae), automated loan underwriting is a computer-based method that is intended to enable lenders to process loan applications in a quicker, more efficient, objective, and less costly manner. The lender enters information from the borrower's application into its own computer system.
This information is communicated to an automated loan underwriting system, such as those developed by Fannie Mae and the Federal Home Loan Mortgage Corporation (Freddie Mac). The lender then requests a credit report and credit score from a credit bureau. The automated loan underwriting system then evaluates the credit bureau data and other information to arrive at a recommendation about whether or not the loan meets the criteria for approval. Concerns about the effects of these systems on some applicants have been summarized as follows: "Currently, there is little known about the effects of automated underwriting systems on low- and moderate-income or minority applicants. Some informants believe these systems may prevent underwriters from taking full advantage of the increased levels of underwriting flexibility allowed by the GSEs. Lower income applicants are more likely to be required to produce documentation supporting their loan application, such as letters explaining past credit problems or statements from employers about expected salary increases. Automated systems may not have the ability to assess all of these kinds of data, and so may place lower income borrowers at a disadvantage. Informants also raised concerns that these systems may allow lenders to reduce their underwriting staff because automated systems increase the productivity of individual underwriters. Lenders, informants pointed out, could reduce staff and only process applications identified by automated systems as requiring minimal further review. As a result, automated systems may make it harder for marginal applicants to receive personalized attention from an underwriter." Representatives of the four holding companies that resulted from the mergers included in our study stated that they all used automated loan underwriting and credit-scoring systems to some degree. Three of the four holding companies said they have adopted a program in which loans that are not initially approved by their automated loan underwriting systems are subject to a secondary review by an experienced loan underwriter. Although the secondary review programs added costs and time to the process, the holding companies stated that it was necessary to guard against potential disparate impacts with respect to lending to minorities. Another concern raised by bank holding company officials we met with involved a lender's liability for the fair lending activities of mortgage brokers who are affiliated in some fashion with the lender. Although no standard definition of a mortgage broker exists, mortgage brokers are generally entities that provide mortgage origination or retail services and bring a borrower and a creditor together to obtain a loan from the lender (or funded by the lender). Typically, the lender decides whether to underwrite or fund the loan. HUD defines two categories of mortgage brokers. HUD's narrowly defined category consists of entities that may have an agency relationship with the borrower in shopping for a loan and therefore have a responsibility to the borrower because of this agency representation. HUD's broadly defined category consists of entities that do not represent the borrower but that may originate loans with borrowers utilizing funding sources with which the entity has a business relationship. The banking industry is concerned that lenders could be held liable for a fair lending violation resulting from the activity of a mortgage broker that provides origination or retail services for a lender.
When lenders use mortgage brokers in providing mortgage credit, it is not always clear whether the lender, the mortgage broker, or both are responsible for the credit approval decision. FRB officials noted differences between the federal enforcement agencies and FRB with respect to the criteria used to determine when lenders are responsible for lending transactions involving brokers. Of the four holding companies resulting from the mergers in our study, three indicated that they use mortgage brokers. Officials of one of the holding companies we contacted said they wanted additional clarification from bank regulators regarding the bank's liability for its lending decisions in transactions involving brokers because it used mortgage brokers extensively in making manufactured housing and automobile loans. ECOA, as implemented by FRB's Regulation B, defines a creditor as someone who "regularly participates" in credit-making decisions. Regulation B includes in the definition of creditor "a creditor's assignee, transferee, or subrogee who so participates." For purposes of determining if there is discrimination, the term creditor also includes "a person who, in the ordinary course of business, regularly refers applicants or prospective applicants to creditors, or selects or offers to select creditors to whom requests for credit may be made." Regulation B states that "a person is not a creditor regarding any violation of ECOA or regulation B committed by another creditor unless the person knew or had reasonable notice of the act, policy, or practice that constituted the violation before becoming involved in the credit transaction." This is referred to as the "reasonable notice" standard. On the basis of the definition of creditor contained in Regulation B and the specific facts, a mortgage broker can be considered a creditor, and a lender can also be considered a creditor even if the transaction involves a mortgage broker. FRB noted that lenders have increasingly asked for guidance regarding the definition of a creditor as they expand their products and services. In March 1998, FRB issued an Advance Notice of Proposed Rulemaking that solicited comments related to the definition of "creditor" and other issues as part of its review of Regulation B. Specifically, FRB solicited comments on whether (1) it was feasible for the regulation to provide more specific guidance on the definition of a creditor; (2) the reasonable notice standard regarding a creditor's liability should be modified; and (3) the regulation should address under what circumstances a creditor must monitor the pricing or other credit terms when another creditor (e.g., a loan broker) participates in the transactions and sets the terms. On August 4, 1999, FRB published proposed revisions to Regulation B that expand the definition of creditor to include a person who regularly participates in making credit decisions, including setting credit terms. In the Discussion of Proposed Revisions to the Official Staff Commentary (the Discussion), FRB stated that it believes that it is not possible to specify by regulation with any particularity the circumstances under which a creditor may or may not be liable for a violation committed by another creditor. Thus, FRB decided that Regulation B would retain the "reasonable notice" standard for when a creditor may be responsible for the discriminatory acts of other creditors.
In the Discussion, FRB further stated that it believes that the reasonable notice standard may carry with it the need for a creditor to exercise some degree of diligence with respect to third parties' involvement in credit transactions, such as brokers or the originators of loans. However, FRB believes that it is not feasible to specify by regulatory interpretation the degree of care that a court may find required in specific cases. Opinions vary among regulatory agencies in terms of a lender's liability in transactions that involve mortgage brokers. OCC and FRB share the view that a broker must be an agent of the lender, or the lender must have actual or imputed knowledge of a broker's discriminatory actions, for a lender to share liability for discrimination by a broker. DOJ has taken the position that lenders are liable for all of their lending decisions, including those transactions involving mortgage brokers. In 1996, DOJ took one enforcement action involving a mortgage broker. The case involved mortgage company employees and brokers charging African-American, Hispanic, female, and older borrowers higher fees than were charged to younger, White males. HUD officials told us their agency has not taken a position on this issue. FTC officials told us that FTC has not taken any action that reflects a position on this issue. From 1995 through 1997, Federal Reserve Board (FRB) data indicated that home mortgage lending activity by institution type within the financial sector generally increased as measured by the total number of loans originated. Figure III.1 provides an overview of mortgage lending activity by financial sector. It shows that the bank sector originated more loans than the thrift sector or independent finance companies over this period, when the large bank holding company mergers we studied occurred. As discussed previously, banking regulators (FRB, Office of the Comptroller of the Currency, the Federal Deposit Insurance Corporation, and the Office of Thrift Supervision) have the primary oversight responsibility for the bank and thrift sectors. The Federal Trade Commission (FTC) and the Department of Housing and Urban Development (HUD) are responsible for fair lending enforcement of independent finance companies, which are not addressed in this study. Figures III.2 and III.3 provide overviews of lending by components of the bank sector: banks, bank subsidiaries, and nonbank mortgage subsidiaries of bank holding companies. The home mortgage lending activity of the three components remained relatively stable from 1995 to 1997. Figure III.2 shows that banks originated the most home mortgage loans in this period, followed by bank subsidiaries and then nonbank mortgage subsidiaries of bank holding companies. Figure III.3 reveals the same pattern when dollar value of loans is considered. However, the data show that bank subsidiaries and bank holding company mortgage subsidiaries account for larger shares of the dollar value of home mortgage loan originations than of the number of loan originations. The banking regulators are responsible for the fair lending oversight of the banks and bank subsidiaries; FTC and HUD are responsible for fair lending enforcement of the nonbank mortgage subsidiaries of bank holding companies. Because the nonbank mortgage subsidiaries of bank holding companies are not routinely examined for fair lending compliance by any federal regulatory or enforcement agencies, we analyzed their rate of growth compared to other bank sector lenders.
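As a simplified illustration of the kind of growth comparison that underlies figures III.4 and III.5, the sketch below computes year-over-year percentage changes in originations by lender type. The origination counts are placeholders, not the HMDA figures used in the analysis.

```python
# Illustrative year-over-year growth comparison by lender type.
# The origination counts below are placeholder values, not actual HMDA data.

originations = {
    "banks":                         {1995: 1_000_000, 1996: 1_050_000, 1997: 1_100_000},
    "bank subsidiaries":             {1995: 400_000,   1996: 420_000,   1997: 450_000},
    "nonbank mortgage subsidiaries": {1995: 150_000,   1996: 180_000,   1997: 240_000},
}

def pct_change(series, year):
    """Percent change in originations from the prior year."""
    prior = series[year - 1]
    return 100.0 * (series[year] - prior) / prior

for lender_type, series in originations.items():
    print(f"{lender_type}: {pct_change(series, 1997):.1f}% change, 1996 to 1997")
# With these placeholder numbers, the nonbank mortgage subsidiaries show the
# largest percentage change, which is the general pattern figure III.4 depicts.
```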
Figure III.4 shows that in 1997, the percent change in loan originations by nonbank mortgage subsidiaries of bank holding companies was large in comparison to loan originations by banks and banking subsidiaries. Figure III.5 shows that the dollar value of mortgage loan originations has a pattern similar to the percentage change in loan originations. Figures III.4 and III.5 combined show an increasing presence in home mortgage lending by nonbank mortgage subsidiaries of bank holding companies. In addition to those named above, Harry Medina, Janet Fong, Christopher Henderson, Elizabeth Olivarez, and Desiree Whipple made key contributions to this report.
Pursuant to a congressional request, GAO reviewed large bank holding company mergers and regulatory enforcement of the Fair Housing Act and the Equal Credit Opportunity Act, focusing on the: (1) fair lending issues raised by consumer and community groups during the application process for six large bank holding company mergers; and (2) Federal Reserve Board's (FRB) consideration of those issues. GAO noted that: (1) in each of the six mergers, consumer and community groups raised the issue of perceived high loan denial and low lending rates to minorities by banks, bank subsidiaries, and nonbank mortgage subsidiaries involved in the mergers; (2) in four merger cases, community and consumer groups were concerned about alleged potential discriminatory practices of the holding companies' nonbank mortgage subsidiaries; (3) nonbank mortgage subsidiaries are not subject to routine examinations by federal regulators for compliance with fair lending and other consumer protection laws and regulations; (4) the fair lending laws generally confer enforcement authority for nonbanking companies on the Federal Trade Commission, Department of Housing and Urban Development, or Department of Justice and do not specifically authorize any federal agency to conduct examinations of nonbanking companies for compliance with these laws; (5) the consumer and community groups were concerned that: (a) sub-prime lending activities of the nonbank mortgage subsidiaries had resulted or could result in minorities being charged disproportionately higher rates and fees; and (b) minority loan applicants were being "steered" between the affiliated banking or nonbank subsidiaries of the holding company to the lender that charged the highest rates or offered the least amount of services; (6) other fair lending issues included alleged discriminatory prescreening and marketing, low lending rates to minority-owned small businesses, discriminatory treatment of applicants, and redlining; (7) FRB considered these fair lending issues in the six merger cases by analyzing information from various sources, including the bank holding companies involved in the mergers and other federal and state agencies; (8) FRB staff analyzed Home Mortgage Disclosure Act data provided annually by the banks and nonbank mortgage subsidiaries involved in the mergers; (9) FRB staff stated that they placed heavy emphasis on prior and on-going compliance examinations performed by the appropriate primary banking regulators for the banks involved in the merger; (10) examinations for nonbank mortgage subsidiaries were generally not available because these entities are not routinely examined by any federal agency; (11) in two of the six mergers in GAO's review, FRB had previously performed compliance investigations of nonbank mortgage subsidiaries involved in the mergers; and (12) according to FRB staff, FRB had used its general examination and supervisory authority for bank holding companies to conduct these particular investigations.
We reported that even though DCPS changed parts of its enrollment process in school year 1996-97 to address prior criticisms, the process remained flawed. Some of the changes, such as the use of an enrollment card to verify attendance, increased complexity and work effort but did little to improve the count’s credibility. Because DCPS counts enrollment by counting enrollment records—not actual students—accurate records are critical for an accurate count. Errors, including multiple enrollment records for a single student, remained in SIS, but DCPS had only limited mechanisms for correcting these errors. For example, although Management Information Services personnel maintained SIS, they had no authority to correct errors. In addition, DCPS’ enrollment procedures allowed multiple records to be entered into SIS for a single student, and its student transfer process may have allowed a single student to be enrolled in at least two schools simultaneously. Furthermore, DCPS’ practice of allowing principals to enroll unlimited out-of-boundary students increased the possibility of multiple enrollment records for one student. Nevertheless, DCPS did not routinely check for duplicate records. In addition, DCPS’ official enrollment count included categories of students usually excluded from enrollment counts in other districts when the counts are used for funding purposes. For example, DCPS included in its enrollment count students identified as tuition-paying nonresidents of the District of Columbia and students above and below the mandatory age for public education in the District of Columbia, including Head Start participants, prekindergarten students (age 4), preschool students (age 0 to 3), and some senior high and special education students aged 20 and older. In contrast, the three states that we visited reported that they exclude from enrollment counts used for funding purposes any student who is above or below mandatory school age or who is fully funded from other sources. Furthermore, even though the District of Columbia Auditor has suggested that students unable to document their residency be excluded from the official enrollment count, whether they pay tuition or not, DCPS included these students in its enrollment count for school year 1996-97. During school year 1996-97, District of Columbia schools had some attractive features. Elementary schools in the District had free all-day prekindergarten and kindergarten, and some elementary schools had before- and after-school programs at low cost. For example, one school we visited had before- and after-school care for $25 per week. This program extended the school day’s hours to accommodate working parents—the program began at 7 a.m. and ended at 6 p.m. In addition, several high schools had highly regarded academic and artistic programs; and some high schools had athletic programs that reportedly attracted scouts from highly rated colleges. Furthermore, students could participate in competitive athletic programs until age 19 in the District, compared with age 18 in some nearby jurisdictions. Problems persisted, however, in the critical area of residency verification. In school year 1996-97, schools did not always verify student residency as required by DCPS’ own procedures. Proofs of residency, when actually obtained, often fell short of DCPS’ standards. Moreover, central office staff did not consistently track failures to verify residency. 
Finally, school staff and parents rarely suffered sanctions for failure to comply with the residency verification requirements. In addition, the pupil accounting system failed to adequately track students. SIS allowed more than one school to count a single student when the student transferred from one school to another. Furthermore, schools did not always follow attendance rules, and SIS lacked the capability to track implementation of the rules. Finally, some attendance rules, if implemented, could have allowed counting of nonattending students. Other school districts report that they use several approaches to control errors, such as the ones we identified, and to improve the accuracy of their enrollment counts. These include using centralized enrollment and pupil accounting centers and a variety of automated SIS edits and procedures designed to prevent or disallow pupil accounting errors before they occur. The District of Columbia School Reform Act of 1995 required the Authority to provide for an independent audit of the enrollment count. The Authority decided, however, that the inadequacies that led to the restructuring of the public school system would make auditing the school year 1996-97 count counterproductive. In short, the Reform Act's audit requirement was not met. Because the enrollment count will become the basis for funding DCPS and is even now an important factor in developing DCPS' budget and allocating its resources, we recommended in our report that the Congress consider directing DCPS to report separately in its annual reporting of the enrollment count those students fully funded from other sources, such as Head Start participants and tuition-paying nonresidents; above and below the mandatory age for compulsory public education, such as those in prekindergarten or those aged 20 and above; and for whom District residency cannot be confirmed. We also recommended that the DCPS Chief Executive Officer/Superintendent do the following: Clarify, document, and enforce the responsibilities and sanctions for employees involved in the enrollment count process. Clarify, document, and enforce the residency verification requirements for students and their parents. Institute internal controls in the student information database, including database management practices and automatic procedures and edits to control database errors. Comply with the reporting requirement of the District of Columbia School Reform Act of 1995. We further recommended that the District of Columbia Financial Responsibility and Management Assistance Authority comply with the auditing requirements of the District of Columbia School Reform Act of 1995. In commenting on our report, a senior DCPS official acknowledged that few checks and balances, no aggressive central monitoring, and few routine reports were in place. In addition, he said that virtually no administrative sanctions were applied, indicating that the submitted reports were hardly reviewed. The Authority shared DCPS' view that many findings and recommendations in our report will help to correct what it characterized as a flawed student enrollment count process. Its comments did, however, express concerns about certain aspects of our report. The Authority was concerned that we did not discuss the effects of the Authority's overhaul of DCPS in November 1996. It also commented that our report did not note that the flawed student count was one of the issues prompting the Authority to change the governance structure and management of DCPS. In the report, we explained that we did not review the Authority's overhaul of DCPS or the events and concerns leading to the overhaul. DCPS has made some changes in response to our recommendations.
For example, it dropped the enrollment card. DCPS now relies upon other, more readily collected information, such as a child's grades or work, as proof that a child has been attending. DCPS has also strengthened some mechanisms for correcting SIS errors, such as multiple enrollment records for a single student. Staff reported that central office staff now conduct monthly duplicate record checks. These staff then work with the schools to resolve errors. In addition, central office staff now have the authority to correct SIS errors directly. Schools are also now required to prepare monthly enrollment reports, signed by the principal, throughout the school year. Central office staff review and track these reports. In addition, SIS can now track consecutive days of absence for students, which helps track the implementation of attendance rules. Finally, all principals are now required to enter into SIS the residency status of all continuing as well as new DCPS students. DCPS officials believe SIS' residency verification status field also serves as a safeguard against including both duplicate records and inactive students in the enrollment count. However, several enrollment and pupil accounting procedures continue to increase the possibility of multiple records for a single student, such as the practice of allowing principals to enroll students who live outside school attendance boundaries. School data entry staff may still manually override SIS safeguards against creating multiple records. In addition, SIS still lacks adequate safeguards to ensure that it accurately tracks students when they transfer from one school to another. SIS' new residency verification status field will not prevent the creation or maintenance of duplicate records. For example, a student might enroll in one school, filling out all necessary forms required by that school, including the residency verification form, and decide a few days later to switch to another school. Rather than officially transferring, the student might simply go to this second school and re-register, submitting another residency verification form as part of the routine registration paperwork. If the second school's data entry staff choose to manually override SIS safeguards, duplicate records could be created. Even if a student did not submit a residency verification form at the second school, the data entry staff could simply code the SIS residency field to show that no form had been returned, creating duplicate records. Regarding the critical area of residency verification, all principals must now issue and collect from all students a completed and signed residency verification form (as well as enter residency verification status information into SIS as discussed). Principals are also encouraged to obtain proofs of residency and attach these to the forms. DCPS considers the form alone, however, to be the only required proof of residency for the 1997-98 count. The school district encouraged but did not require supporting proofs to accompany this form. A signed form without proofs of residency is insufficient to prove residency in our opinion. Such proofs are necessary to establish that residency requirements have been met. Until DCPS students are required to provide substantial proofs of residency, doubts about this issue will remain. Furthermore, DCPS staff told us that the school district has not yet monitored and audited the schools' residency records but plans to do so shortly. DCPS has proposed modifications to the Board of Education's rules governing residency to strengthen these rules.
The proposed modifications would strengthen the residency rules in several ways by stating that at least three proofs of residency "must" be submitted, rather than "may be" submitted, as current rules state; specifying and limiting documents acceptable as proofs; eliminating membership in a church or other local organization operating in the District of Columbia as an acceptable proof; and strengthening penalties for students who do not comply. DCPS staff told us that these proposed changes are now under consideration by the Authority. Regarding our recommendation that the Congress consider directing DCPS to report separately the enrollment counts of certain groups of students, the Congress has not yet required that DCPS do this. DCPS continues to include these groups in its enrollment count. For school year 1997-98, DCPS reports an official count of 77,111 students. This number includes 5,156 preschool and prekindergarten students who are below mandatory school age in the District of Columbia. Some of these students are Head Start participants and are paid for by Head Start; nevertheless, DCPS counts Head Start participants as part of its elementary school population. The count also includes 18 tuition-paying nonresident students attending DCPS. In addition, DCPS staff told us that although the count excludes adult education students, they did not know whether it includes other students above the mandatory school age. Finally, as noted earlier, the count includes students who have not completed residency verification. In addition to talking to DCPS staff, we talked to staff at the Authority about whether the Authority has provided for an independent audit of the 1997-98 enrollment count. Staff said that the Authority is in the process of providing for an audit but has not yet awarded a contract. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or members of the Subcommittee may have.
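To illustrate the kind of monthly duplicate record check described in this statement, the sketch below flags students who appear to be carried as active at more than one school. The record fields and the matching rule (name plus date of birth) are assumptions for illustration; they do not represent the actual design of DCPS's Student Information System.

```python
# Hypothetical duplicate-enrollment check over student records.
# Field names and the matching key (name + date of birth) are illustrative
# assumptions, not the actual SIS data model.

from collections import defaultdict

records = [
    {"student": "Jane Doe", "dob": "1988-04-02", "school": "School A", "status": "active"},
    {"student": "Jane Doe", "dob": "1988-04-02", "school": "School B", "status": "active"},
    {"student": "John Roe", "dob": "1990-11-17", "school": "School C", "status": "active"},
]

def find_duplicates(records):
    """Group active records by (name, date of birth) and report students
    who appear as active at more than one school."""
    by_student = defaultdict(set)
    for r in records:
        if r["status"] == "active":
            by_student[(r["student"], r["dob"])].add(r["school"])
    return {key: schools for key, schools in by_student.items() if len(schools) > 1}

for (name, dob), schools in find_duplicates(records).items():
    print(f"{name} ({dob}) is active at: {sorted(schools)}")
# Jane Doe (1988-04-02) is active at: ['School A', 'School B']
```

A check of this kind can only surface candidate duplicates; as the discussion above notes, the flagged records still have to be resolved with the schools, and the check does not by itself address residency verification.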
Pursuant to a congressional request, GAO discussed its recent report on the enrollment count process that District of Columbia Public Schools (DCPS) used in school year 1996-97. GAO noted that: (1) in spite of some changes in DCPS' enrollment count process in response to criticisms, the 1996-97 count process remained flawed in several respects; (2) for example, the Student Information System (SIS) continued to have errors, such as multiple enrollment records for a single student and weaknesses in the system's ability to track students; (3) in addition, verification of student residency remained problematic; (4) although DCPS made some changes in its enrollment count process for the 1997-98 school year in response to GAO's recommendations and plans to make more, the larger systemic issues appear to remain mostly uncorrected; (5) consequently, fundamental weaknesses still remain in the enrollment count process, making it vulnerable to inaccuracy and weakening its credibility; (6) for example, DCPS staff report that although an important internal control--duplicate record checks--has been implemented for SIS, additional internal controls are still lacking; (7) several DCPS enrollment and pupil accounting procedures continue to increase the possibility of multiple enrollment records for a single student; (8) GAO is concerned that duplicate record checks alone may not be sufficient to protect the integrity of SIS, given the many possibilities for error; (9) furthermore, the enrollment count may still include nonresident students; (10) more than half of DCPS' students have either failed to provide the residency verification forms or have provided no proofs of residency to accompany their forms; (11) GAO questions the appropriateness of including students who have failed to prove residency in the official count, particularly students who have not even provided the basic form; (12) in addition, because DCPS has not yet monitored and audited residency verification at the school level, additional problems may exist that are not yet apparent; (13) proposed new rules governing residency will help DCPS deal with residency issues; (14) until these issues are fully addressed and resolved, however, the accuracy and credibility of the enrollment count will remain questionable; (15) in GAO's more recent discussions with DCPS officials, they acknowledge that more needs to be done to improve the enrollment count process, particularly in the areas of further strengthening DCPS' automated internal controls and addressing the nonresident issue; and (16) they have expressed concern, however, that GAO has failed to recognize fully the improvements DCPS made in the enrollment count process for school year 1997-98.
As a result of a 1995 Base Realignment and Closure (BRAC) Act decision, the San Antonio and Sacramento Air Logistics Centers, including their maintenance depots, are to close by the year 2001. To mitigate the impact of the closings on the local communities and employees, the administration announced its intention to maintain employment levels by privatizing the depots' workloads in place. The Air Force followed by announcing a strategy to privatize-in-place five prototype depot maintenance workloads at the two closing centers. Since then, there has been a continuing debate between Congress and DOD over where and by whom the workloads at the closing depots would be performed. Central to this debate are concerns about the excess facility capacity that exists at the Air Force's three remaining depots and the legislative requirement that workloads exceeding $3 million in value be subject to a public-private competition before being moved to the private sector. Appendix II provides a more detailed description of the closure history for the two logistics centers. In response to congressional concerns regarding the appropriateness of its privatization-in-place plans, the Air Force revised its strategy to allow the public depots to participate in public-private competitions for the closing depot workloads. Congress included provisions in the fiscal year 1998 Authorization Act that require us to review and report on the process, procedures, and results of these competitions. The C-5 aircraft depot maintenance workload was the subject of the first such competition. On February 11, 1997, the Air Force Aircraft Directorate at Kelly Air Force Base issued a request for proposals for the purpose of conducting a public-private competition for the C-5 aircraft business area workload being performed at the closing San Antonio Air Logistics Center. The Air Force received proposals from private sector offerors and from one public offeror—the Air Force's Warner Robins Air Logistics Center. Following technical and cost evaluations, the Air Force selected Warner Robins to perform the C-5 workload on the basis that its proposal represented the lowest total evaluated cost to the government. The Air Force's procedures provided an equal opportunity for public and private offerors to compete for the C-5 workload without regard to where the work could be performed. Both public and private offerors acknowledged that the solicitation contained no limitations on location of performance. Since the San Antonio facilities were designed to support C-5 depot maintenance, the private offerors stated that the site had a natural advantage that they found attractive in competing for the workload. Therefore, they proposed performing the C-5 workload at San Antonio. Because the competition placed no limitation on the location, the Warner Robins depot—the public offeror—was able to propose the use of its facilities. In assessing the C-5 competition's compliance with applicable laws and regulations, we reviewed the solicitation, proposal evaluation, and award. This review included examining documents, reviewing processes and procedures, and conducting discussions with cognizant Air Force and DOD officials. We also assessed several specific concerns raised by the participants. We found no reason to conclude that the procedures used in selecting the successful offeror deviated in a material way from applicable laws or relevant provisions of the FAR.
The Air Force issued a competitive solicitation that provided for the participation of a public sector depot. Pursuant to its Depot Competition Procedures, the Air Force issued the solicitation in accordance with FAR part 12, which prescribes the policies and procedures for the acquisition of commercial items, and FAR part 15, which sets forth the source selection procedures for competitively negotiated acquisitions. The solicitation called for proposals from public and private sector sources for the C-5 business area workload currently being performed at the closing San Antonio Air Logistics Center at Kelly Air Force Base. The solicitation provided for award to the public sector source if its proposal conformed to the solicitation requirements, showed it had the necessary technical capabilities, and represented the lowest total evaluated cost over the life of the requirement. Proposals were evaluated in accordance with management criteria, a risk assessment, cost criteria, and other general considerations. Several statutes govern the use of public-private competitions for the performance of depot workloads. In particular, 10 U.S.C. 2469 provides for the use of “competitive procedures for competitions among private and public sector entities” whenever DOD contemplates changing the performance of public depot workloads of $3 million or more to contractor performance. Neither 10 U.S.C. 2469 nor the other statutes governing public-private competitions for depot workloads prescribe the specific elements that constitute a competition. Because the Air Force’s Depot Competition Procedures use the competitive acquisition system, the standards in chapter 137 of title 10 of the United States Code (governing DOD acquisitions) and the FAR apply to the extent they are consistent with the basic public-private competition statutes. Among other things, these standards require that the requirements in a solicitation be stated clearly and unambiguously and that restrictive provisions be included only to the extent necessary to satisfy an agency’s needs. Further, under these standards, an agency must follow the criteria announced in the solicitation and exercise its judgment in a reasonable manner in determining which of the competing offers is to be selected. Based on our review of the C-5 competition, we found no basis to conclude that procedures used in selecting the successful offeror deviated in any material respect from the applicable laws or relevant provisions of the FAR. The Air Force issued a solicitation providing for the participation of a public sector depot consistent with the requirement for public-private competition, and the solicitation was issued competitively in accordance with the FAR. Overall, the evaluation appeared to be reasonable, fair, and consistent with the solicitation and the Air Force’s Depot Competition Procedures. The private sector offerors raised several concerns about the conduct of the C-5 competition. A summary of their concerns and our conclusions follows. Private sector sources believe there is an inherent inequity in public-private depot competitions that is created by the solicitation of offers on a fixed-price basis because the government often pays for any cost overruns incurred by a public sector source from public funds. 
Because public and private sector entities are fundamentally different in this regard, agencies conducting public-private competitions are required to make a reasoned judgment as to the actual cost the government will incur if work is to be performed by a public depot. We believe that the procedures used in the C-5 competition reasonably addressed the issue of public sector cost accountability. Among other things, the solicitation required the public depot to certify that its offer represented the full costs of performance, and the Air Force conducted an extensive realism analysis of Warner Robins' cost proposal. The Defense Contract Audit Agency reviewed Warner Robins' cost proposal and its accounting and estimating systems, as required by the Air Force's Depot Competition Procedures. Private sector participants in the C-5 competition believe that Warner Robins was unfairly advantaged when it was given a $153-million cost credit to reflect expected savings in overhead costs. We found that, although the amount was large and became the primary determining factor in the selection of Warner Robins, it was properly used in the Air Force evaluation. The overhead savings evaluation was provided for in the solicitation and the Depot Competition Procedures, and we found that the Air Force followed the evaluation scheme in calculating the savings proposed by Warner Robins. Private sector participants were concerned that the selection did not account for, or put a dollar value on, certain identified risks or weaknesses in the respective proposals. The solicitation provided that the calculation of an offeror's total evaluated cost would include the dollar impact of significant discriminators based on identified proposal strengths, weaknesses, and risks. The evaluation record shows that, for the highest priority management factors, transition and production operations, the lowest cost private sector offeror received a low-risk rating while the public offeror's approach was rated as a moderate risk. Overall, the private sector offeror was credited with more strengths under the management factors than was the public sector offeror. In the final selection decision, the source selection authority did not quantify the risk differences or all the strengths or weaknesses but only included adjustments representing discriminators based on reduction of flow days and a lack of capacity at one of Warner Robins' proposed facilities. Under the applicable legal standards, a procuring agency has broad discretion to decide whether it will include any particular feature of a proposal in its cost calculations. In our view, the dollar valuation approach the Air Force adopted represented a reasonable exercise of its discretion under the solicitation. The solicitation did not explain in any detail how dollar values were to be assigned, but left it to the Air Force to determine an appropriate approach. Further, our review of the evaluation record did not disclose that the Air Force's approach was uneven or unfair. Warner Robins officials stated that they were not permitted to include private sector firms as part of their proposed effort to perform the workload. Although Warner Robins won the competition, depot officials believe that the use of private sector support would have enhanced their competitiveness by providing a better way to perform the paint and depaint operations on the C-5 aircraft.
According to Air Force officials, significant use of private sector support as part of the public offeror's proposal would have been inherently inconsistent with a public-private competition. Consequently, Warner Robins developed an alternative approach to perform the C-5 aircraft paint and depaint workload at its depot facilities. This matter had no impact on the outcome of the C-5 competition, and it has been resolved for any future public-private competitions for workloads at the closing San Antonio and Sacramento depots by the 1998 Defense Authorization Act. The act added section 2469a to title 10, United States Code, which provides for special procedures for public-private competitions for the workloads at the two closing depots. The new section allows public sector offerors to use private sector firms as part of their proposed effort. The Warner Robins proposal, after cost comparability adjustments as provided for in the solicitation and the depot maintenance cost comparability handbook, was determined by the source selection authority to offer the lowest total evaluated cost to the government. Before the cost comparability adjustments, the Warner Robins proposal was higher than the lowest private sector proposal and was determined to represent a higher risk under the two most important management evaluation factors. Both the public and private competitors raised questions about the proposal cost evaluation and adjustments. As stated above, our review of these adjustments in the context of compliance with applicable laws and regulations found them to be consistent with the solicitation and reasonable. We further examined the accuracy and soundness of the data and assumptions supporting a number of these adjustments. Except for a large adjustment for overhead savings, the adjustments we reviewed would not have affected the selection decision. To determine whether adjustments made in the source selection evaluation were accurate and supported, we (1) discussed the selection process with cognizant Air Force and DOD officials, as well as the offerors; (2) reviewed the calculation methods for the various cost element estimates used in the cost evaluation; (3) compared the cost elements among offerors to test for reasonableness; (4) discussed the rationale for cost element treatment in the evaluation with the evaluation team members; and (5) discussed each offeror's assessment of cost element treatment in the evaluation and followed up on issues or concerns raised. Since several of these adjustments were presented to us as concerns, we briefly address each of these below. Our review of the proposal cost evaluation and adjustments showed that the award resulted in the lowest cost to the government given Air Force assumptions and conditions at the time of award. Warner Robins' total evaluated cost—after adjustments—for the 7-year period was $746,519,392. This amount included a $153-million downward evaluation adjustment to reflect expected savings in overhead costs. This adjustment made Warner Robins' total evaluated cost about $42 million less than the lowest private sector offeror's cost. Accordingly, the $153-million overhead adjustment became the primary determining factor of the competition.
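A rough reconstruction of the arithmetic helps show why the adjustment was decisive. The sketch below uses only the amounts cited in this report; the lowest private sector offeror's evaluated cost is not stated directly, so it is inferred here from the approximately $42 million margin, and the figures should be read as approximations.

```python
# Back-of-the-envelope reconstruction of the evaluated-cost comparison.
# Only the Warner Robins figures are stated in the report; the private
# offeror's evaluated cost is inferred from the roughly $42 million margin.

wr_evaluated_cost = 746_519_392          # Warner Robins, after all adjustments
overhead_savings_credit = 153_935_160    # downward overhead-savings adjustment
margin_vs_private = 42_000_000           # approximate margin cited in the report

lowest_private_cost = wr_evaluated_cost + margin_vs_private       # roughly $788.5 million (inferred)
wr_without_credit = wr_evaluated_cost + overhead_savings_credit   # roughly $900.5 million

print(f"Inferred lowest private offer: ${lowest_private_cost:,.0f}")
print(f"Warner Robins without the overhead credit: ${wr_without_credit:,.0f}")
print(f"Difference without the credit: ${wr_without_credit - lowest_private_cost:,.0f}")
# Without the roughly $154 million overhead-savings credit, Warner Robins'
# evaluated cost would have exceeded the inferred lowest private offer by about
# $112 million, which is why the adjustment was the primary determining factor.
```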
The adjustment was large because the evaluation showed that Warner Robins, due to its excess capacity, could absorb the additional C-5 workload with no significant increase in total overhead costs and that the overhead was primarily a fixed cost that would be incurred with or without the additional workload for the 7-year period. The savings were determined by calculating the reduced overhead charges to existing workloads resulting from adding the C-5 workload over the 7-year period. Industry officials questioned Warner Robins' ability to achieve these savings. According to one private sector offeror, the Air Force did not clearly explain how the public offeror could achieve such large savings relative to the proposed cost for performing the workload. Additionally, while stating that some savings may be achievable, contractors said the overhead savings estimate in the Air Force's cost evaluation was too high. Further, they considered this adjustment a reward for maintaining existing depot inefficiency. One private sector offeror characterized the overhead savings adjustment as the one factor that most favored the public depots, and said that unless this factor is changed, they may not participate in future public-private competitions. The evaluation records show that the cost evaluators questioned the overhead savings initially proposed by Warner Robins and made downward adjustments to that amount. For example, the evaluators deleted workload hours included in Warner Robins' savings calculations for work on the KC-135 aircraft because that requirement had not been committed to the Warner Robins facility. After extensive discussions with the offeror concerning the proposed overhead savings, the evaluators calculated that $153,935,160 in savings could be attributed to the other workloads to be performed at the public facility during the performance of the C-5 requirement. These savings were primarily due to the more efficient use of the existing workforce and facilities, which before the addition of the C-5 workload had been underused. We have reported that Air Force depots have significant excess capacity and that significant savings could be achieved by reducing this excess capacity. The excess capacity consists of both people and facilities. In addition to downsizing and streamlining depot operations, excess capacity can be reduced by bringing in additional workloads to achieve savings by spreading personnel and facility costs over a larger workload base. We also have reported that over $200 million in overhead savings could be achieved annually by consolidating the closing San Antonio and Sacramento depots' workloads into remaining DOD depots. In calculating the overhead savings, the Air Force assumed that the excess capacity and overhead cost condition at Warner Robins would not otherwise be significantly reduced over the 7-year period. We did not explore the cost effectiveness of other potential measures and opportunities to reduce Warner Robins' excess capacity. As a result, given the Air Force's assumption and the excess capacity condition at the time of award, the projected overhead savings appear reasonable. Private sector offerors questioned a $4.25-million upward adjustment made to their proposals. The adjustment was based on a $104-million interest free mortgage grant to the Greater Kelly Development Corporation by the federal government.
According to the Air Force, the interest free mortgage enabled the Corporation to subsidize the private offerors' lease below market levels. The private sector offeror stated that the lease costs charged by the local redevelopment authority were higher than those at comparable commercial locations. Further, the offeror stated that the high cost of operating at the privatized facility created a competitive disadvantage. Our review of the subsidy and the private sector offeror's lease cost data found that the Air Force's adjustment relating to the interest free loan was reasonable. Private industry raised concerns about the public depot's ability to accurately control costs for the C-5 workload. According to industry aircraft maintenance officials, if an Air Force depot overruns its proposed costs, the Air Force recovers the cost overruns by charging higher rates to other depot customers. According to these officials, this means a higher cost to DOD and, eventually, to the taxpayer. Documentation prepared by the Warner Robins Center as a part of its C-5 proposal indicated that private sector concerns about the Warner Robins depot not being able to accurately control costs for the C-5 workload were not supported by the depot's performance in earlier public-private competitions. Warner Robins officials note that they demonstrated cost control for the $64 million C-141 Center Wing Box Replacement Program, which was won in a public-private competition, by performing within 2 percent of the proposal cost. The Air Force considered this evidence as a part of its proposal evaluation. The Air Force plans to ensure that Warner Robins performs at its proposed cost, and that the anticipated savings are achieved. For example, the Air Force plans to involve the Defense Contract Audit Agency and Defense Contract Management Command in ensuring the cost performance of the C-5 depot maintenance workload and to develop special tracking procedures to monitor and report cost, schedule, and performance data. Warner Robins questioned the Air Force's treatment of a $20-million downward adjustment to its overhead costs. Warner Robins officials believe the adjustment may limit the Air Force's ability to accurately measure its cost performance. The Air Force concluded that the adjustment was necessary based on its evaluation of the proposal. However, Warner Robins officials state that the Air Force misinterpreted the proposal. In its initial proposal, Warner Robins noted the existence of overhead savings on the C-5 workload. The contracting officer questioned whether the overhead savings were already included in Warner Robins' proposed overhead rate. To preclude double counting the savings, the contracting officer requested that Warner Robins clarify its treatment of C-5 overhead savings. According to Air Force officials, Warner Robins' response did not adequately explain whether the C-5 overhead savings were included in its proposed rate. Consequently, the Air Force included a downward adjustment for the savings in its cost evaluation. Warner Robins officials maintained that they clearly communicated that all C-5 overhead savings had been included in Warner Robins' proposed overhead rate. The Air Force maintains that the C-5 contracting officer made a sound decision at the time given the information provided by Warner Robins during the selection process. The adjustment, had it not been made, would not have affected the selection decision.
If a corresponding postaward adjustment is finalized, Warner Robins could have problems meeting its cost objectives in performing the workload. The Air Force has not made a final determination as to how to resolve this dispute. Warner Robins officials stated that the Air Force required them to use a depreciation method that resulted in a higher charge than the depreciation methods the private sector was permitted to use. The Air Force required the public depots to depreciate proposed capital expenditures over the contract period, rather than over the longer depreciation periods allowed the private sector offerors. According to Air Logistics Center officials, the depreciation requirement created a disadvantage for their offer. Ultimately, the Warner Robins proposal did not include large capital expenditures and, consequently, the impact of the Air Force’s depreciation policy was not material to the selection. This matter was addressed in the special procedures for future public-private competitions added by the 1998 Defense Authorization Act for workloads at the closing San Antonio and Sacramento depots. The procedures at 10 U.S.C. 2469a provide that, to the maximum extent practicable, the cost standards used to determine depreciation of facilities and equipment should provide for identical treatment of public and private offerors. Air Force and Defense Contract Audit Agency officials reviewed a draft of this report and provided oral comments. They generally agreed with the report. The Air Force spokesperson stated that the report accurately characterized and reflected the process and procedures the Air Force used in conducting the C-5 aircraft depot workload competition. The spokesperson specifically noted agreement with our statement that the Air Force could have taken a more expansive approach to dollarization. The spokesperson added that, due to the lessons learned from the C-5 aircraft competition and legislative changes resulting from the fiscal year 1998 Defense Authorization Act, the processes and procedures that the Air Force will use in upcoming public-private depot competitions will be different. Both the Air Force and Audit Agency spokespersons suggested several technical changes for clarity and accuracy. We agreed on specific wording changes and incorporated them in the final report. We also provided a copy of the draft report to relevant public and private sector participants in the C-5 competition. Warner Robins, the public sector participant, said that the report accurately reflected its concerns with the competition. However, Warner Robins officials noted that while the report said the competition placed no limitation on performance location, they were not able to propose accomplishing the 7-year contract using the depot facilities at the closing San Antonio Air Logistics Center beyond the July 2001 date when the Kelly realignment will be completed. We believe this does not reflect a limitation on the competition but rather reflects the fact that the public depot at San Antonio is closing pursuant to the BRAC decision. Warner Robins also noted that despite its proposal being evaluated as moderate risk for transition and production, its C-5 transition and production operations are on schedule. Private sector officials stated that, given our reporting schedule, they did not have time to review and comment on the draft report.
In conducting our work, we obtained information from and interviewed officials at Air Force Headquarters, Washington, D.C.; Headquarters, Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio; the San Antonio Air Logistics Center, Kelly Air Force Base, Texas; and the Warner Robins Air Logistics Center, Robins Air Force Base, Georgia. We also discussed C-5 contracting issues with the two unsuccessful private sector offerors as well as with Defense Contract Audit Agency officials. To analyze the Air Force’s decision to award the C-5 aircraft’s programmed depot maintenance to the Warner Robins Air Logistics Center, we interviewed officials and collected relevant documents from Headquarters, Department of the Air Force; Headquarters, Air Force Materiel Command; Air Force source selection team members; representatives from the three competing offerors; and the Defense Contract Audit Agency. To verify compliance of the C-5 competition and award with applicable laws and regulations, we reviewed the solicitation, proposal evaluation, and award against those laws and regulations. To determine whether cost elements considered in the source selection evaluation were complete and reasonable, we discussed the selection structure with cognizant Air Force and DOD officials, as well as with the offerors determined to be within the competitive range. We also reviewed the calculation methods for the various cost element estimates used in the award evaluation for reasonableness and compared the cost elements between offerors to identify material drivers and to further test for reasonableness. We discussed with the evaluation team members their rationale for the treatment of cost elements in the evaluation. We also discussed with each competitive range offeror its assessment of the treatment of cost elements in the evaluation and followed up on any issues or concerns raised. A list of related reports we have issued is provided at the end of this report. We performed our review between October 1997 and January 1998 in accordance with generally accepted government auditing standards. Please contact me at (202) 512-8412 if you or your staff have questions concerning this report. The major contributors to this report are listed in appendix IV. The National Defense Authorization Act for Fiscal Year 1998 includes the following depot-related reporting requirements for our office. I. Report on DOD’s Compliance With 50-Percent Limitation (section 358) The act amends 10 U.S.C. 2466(a) by increasing the amount of depot-level maintenance and repair workload funds that the Department of Defense (DOD) can use for contractor performance from 40 to 50 percent and revises 10 U.S.C. 2466(e) by requiring the Secretary of Defense to submit a report to Congress identifying the percentage of funds expended for contractor performance by February 1 of each year. Within 90 days of DOD’s annual report to Congress, we must review the report and submit our views to Congress on whether DOD has complied with the 50-percent limitation. II. Reports Concerning Public-Private Competitions for the Depot Maintenance Workloads at the Closing San Antonio and Sacramento Depots (section 359) The act adds section 2469a to title 10 of the United States Code, which provides for special public-private competitions for workloads at these two closing depots. It also requires us to issue reports in four areas.
First, the Secretary of Defense is required to submit a determination to Congress if DOD finds it necessary to consolidate workloads into a single solicitation. We must report our views on the DOD determination within 30 days. Second, we are required to review all DOD solicitations for the workloads at the San Antonio and Sacramento centers and report to Congress within 45 days of the solicitations’ issuance regarding whether the solicitations provide a “substantially equal” opportunity to compete without regard to performance location and are otherwise in compliance with applicable laws and regulations. Third, we must review all DOD awards for the workloads at the two closing Air Logistics Centers and report to Congress within 45 days of the contract award on whether (1) the procedures used complied with applicable laws and regulations and provided a “substantially equal” opportunity to compete without regard to performance location; (2) “appropriate consideration was given to factors other than cost” in the selection; and (3) the selection resulted in the lowest total cost to DOD for performance of the workload. Fourth, within 60 days of its enactment, the 1998 Defense Authorization Act requires us to review the C-5 aircraft workload competition and the subsequent award to the Warner Robins Air Logistics Center and report to Congress on whether (1) the procedures used provided an equal opportunity for offerors to compete without regard to performance location, (2) the procedures were in compliance with applicable law and the Federal Acquisition Regulation (FAR), and (3) the award results in the lowest total cost to DOD. III. Report on the Navy’s Practice of Using Temporary Duty Assignments for Ship Maintenance and Repair (section 366) The act requires us to report by May 1, 1998, on the Navy’s use of temporary duty workers to perform ship maintenance and repair at homeports not having shipyards. The 1995 Base Realignment and Closure (BRAC) Commission recommended closing the Sacramento and San Antonio Air Logistics Centers and transferring their workloads to the remaining depots or to private sector commercial activities. In making these recommendations, the Commission considered the effects on the local communities, workload transfer costs, and potential effects on readiness and concluded that the savings and benefits outweighed the drawbacks. The Commission’s report noted that, given the significant amount of excess depot capacity and limited DOD resources, closure was a necessity. Further, closing these activities would improve the use of the remaining centers and substantially reduce DOD operating costs. The specific Commission recommendations were as follows: Realign Kelly Air Force Base, including the air logistics center; disestablish the defense distribution depot; consolidate the workloads to other DOD depots or to private sector commercial activities as determined by the Defense Depot Maintenance Council; and move the required equipment and personnel to the receiving locations. Close McClellan Air Force Base, including the air logistics center; disestablish the defense distribution depot; move the common-use ground communication electronics to Tobyhanna Army Depot, Pennsylvania; retain the radiation center and make it available for dual use and/or research, or close it as appropriate; consolidate the remaining workloads with other DOD depots or private sector commercial activities as determined by the Council; and move the required equipment and any required personnel to receiving locations.
All other activities and facilities at the base will close. In considering the BRAC recommendations to close the two centers, the President and the Secretary of Defense expressed concerns about the near-term costs and the potential effects on local communities and Air Force readiness. In response to these concerns, the administration, in forwarding the Commission’s recommendations to Congress, indicated that the air logistics centers’ work should be privatized in place or in the local communities. The President also directed the Secretary of Defense to retain 8,700 jobs at McClellan Air Force Base, which had been recommended for closure, and 16,000 jobs at Kelly Air Force Base, which had been recommended for realignment, until 2001 to further mitigate the closures’ impact on the local communities. Additionally, the size of the workforce remaining in the Sacramento and San Antonio areas through 2004 was expected to remain above 4,350 and 11,000, respectively. The Air Force initially focused on privatizing five prototype workloads—three at Sacramento (for hydraulics, electric accessories, and software) and two at San Antonio (for C-5 aircraft paint/depaint and fuel accessories). The Defense Depot Maintenance Council approved the Air Force’s plans for the five prototype workloads on February 1, 1996. The prototype workloads involved about 11 percent of the San Antonio depot’s maintenance personnel and about 27 percent of the Sacramento depot’s personnel. Shortly after the Council approved the prototype program, the appropriateness of the concept began to be questioned. Community and industry groups expressed an interest in having larger packages, and DOD officials were concerned about the cost of administering a large number of smaller contracts. Implementation of the prototype concept was put on hold in May 1996 as the Air Force considered various options. Further, in April 1996, we testified that privatizing depot maintenance activities, if not effectively managed (including downsizing the remaining DOD depot infrastructure), could exacerbate existing excess capacity problems and the inefficiencies inherent in underused depot maintenance capacity. Privatizing workloads in place at the two closing Air Force depots does not reduce the excess capacity in the remaining depots or the private sector and, consequently, is not a cost-effective approach to reducing depot infrastructure. Later that year, we reported that privatizing in place, rather than closing and transferring the depot maintenance workloads at the Sacramento and San Antonio air logistics centers, would leave a costly excess capacity situation at the remaining Air Force depots that a workload consolidation would have mitigated. Our analysis showed that transferring the depot maintenance workloads to other depots could yield additional economy and efficiency savings of over $200 million annually. We recommended that the Secretary of Defense require the Secretary of the Air Force to take the following actions: Before privatizing any Sacramento or San Antonio workloads, complete a cost analysis that considers the savings potential of consolidating the Sacramento and San Antonio depot maintenance workloads at other DOD depots, including the savings that could be achieved for existing workloads by reducing overhead rates through more efficient capacity utilization and by reducing the fixed overhead applied to each production unit at underused military depots that could receive this workload.
Use competitive procedures, where applicable, for determining the most cost-effective source of repair for workloads at the closing Air Force depots. In August 1996, the Air Force announced a revised strategy for allocating the depot workloads at the Sacramento and San Antonio centers, which involved several large consolidated work packages, essentially one at Sacramento and two at San Antonio (one for the C-5 aircraft and one for engines). In December 1996, the Air Force issued procedures to conduct public-private competitions for the workloads and to allow one of the remaining public depots to compete with the private sector for each of the three workload packages. The Air Force’s procedures included an evaluation adjustment to public and private sector proposals for overhead savings to other government workloads. In February 1997, the Air Force issued a request for proposals for the C-5 aircraft depot maintenance workload. In September 1997, the Air Force awarded the C-5 workload to the Warner Robins Air Logistics Center based on the lowest total evaluated cost. On February 11, 1997, the Air Force’s Aircraft Directorate at Kelly Air Force Base issued a request for proposals (RFP) for the purpose of conducting a public-private competition for the C-5 aircraft business area workload being performed at the closing San Antonio Air Logistics Center at Kelly. The Air Force received proposals from three private sector offerors and from one public offeror—the Air Force’s Warner Robins Air Logistics Center. Following technical and cost evaluations, the Air Force selected Warner Robins to perform the C-5 workload on the basis that its proposal represented the lowest total evaluated cost to the government. Section 359 of the National Defense Authorization Act for Fiscal Year 1998, Public Law 105-85, requires that we, among other things, determine whether the procedures used to conduct the competition for the C-5 aircraft workload were in compliance with applicable laws and the FAR. Based on our review of the procedures the Air Force used to conduct the C-5 competition, in the context of the concerns raised by the private sector, we found no basis to conclude that the procedures used in selecting the successful offeror deviated in any material respect from the applicable laws or the relevant provisions of the FAR. The following sections describe the legal standards applicable to the C-5 competition, the relevant aspects of the solicitation and evaluation procedures used by the Air Force, and our analysis of those procedures under the applicable legal standards. The basic authority for the C-5 competition is 10 U.S.C. 2469, which provides for the use of “competitive procedures for competitions among private and public sector entities” when DOD contemplates changing the performance of a depot workload, valued at $3 million or more, to contractor performance. In addition, section 8041 of the Department of Defense Appropriations Act for Fiscal Year 1997, Public Law 104-208, authorizes public-private competitions for depot workloads as long as the “successful bids” are certified to “include comparable estimates of all direct and indirect costs for both public and private bids.” Both provisions state that Office of Management and Budget Circular A-76 is not to apply to the competitions. Other than the reference in section 8041 of the act to the use of comparable estimates of all costs, neither provision prescribes the elements that constitute a competition. Further, 10 U.S.C.
2470 provides that DOD depot-level activities are eligible to compete for depot workloads. The Air Force implements these authorities through the Air Force Materiel Command, Procedures for Depot Level Public-Private Competition, December 20, 1996 (Depot Competition Procedures). Among other things, the procedures provide for issuing a solicitation calling for offers from public and private sector sources, and they establish the criteria for deciding how the Air Force will select a source for the performance of depot workloads from the private or public sector. According to these procedures, a competitive solicitation is to be issued in accordance with the applicable provisions of the FAR. The FAR sets forth uniform policies and procedures for the competitive acquisition system used by all executive agencies and implements the provisions of chapter 137 of title 10 of the United States Code, which govern DOD acquisitions. This use of the competitive acquisition system subjects a depot workload competition to the applicable provisions of chapter 137 and the FAR to the extent that they do not conflict with the public-private competition statutes cited above. (Newport News Shipbuilding and Dry Dock Company, B-221888, July 2, 1986, 86-2 CPD 23.) Further, aspects of a competition that fall outside the competitive acquisition system’s parameters as defined by chapter 137 and the FAR, such as the selection of the public depot offeror to participate in the competition, are governed by the statutes applicable to public-private depot competitions as implemented by the Depot Competition Procedures. In general, the standards in chapter 137 and the FAR (1) require that a solicitation clearly and unambiguously state what is required so that all offerors can compete on an equal basis and (2) allow restrictive provisions to be included only to the extent necessary to satisfy an agency’s needs. Further, under these standards, an agency must follow the criteria announced in the solicitation and exercise its judgment in a reasonable manner in determining which of the competing offers is to be selected. (Dimensions International/QSOFT, Inc., B-270966.2, May 28, 1996, 96-1 CPD 257.) The RFP for the C-5 workload contemplated the award of a fixed-price requirements-type contract, with economic price adjustment and award fee, for a 7-year term. The contract was to include a transition period; an assumption of work in process begun by the government at Kelly Air Force Base; and the scheduled workload for fiscal years 1998 through 2004, including any “over and above” work. According to the solicitation, the public-private competition was to be conducted pursuant to FAR part 12, which prescribes the policies and procedures for the acquisition of commercial items, and FAR part 15, which sets forth the source selection procedures for competitive negotiated acquisitions. Further, the solicitation provided that the selection would be governed in part by the Defense Depot Maintenance Cost Comparability Handbook (CCH), dated August 10, 1993, and its interim amendments dated December 21, 1995, and December 4, 1996, as well as by the Depot Competition Procedures. The RFP stated that the award would be made to the public offeror if its proposal conformed to the RFP, showed that it had the necessary technical capabilities, and represented the lowest total evaluated cost over the life of the requirement.
On the other hand, if one of the private offerors’ proposals represented the lowest evaluated cost and had these same technical characteristics, then that offeror would receive the award. Finally, if two or more private offerors’ proposals were acceptable and each represented a lower cost than the public offeror’s proposal, the award would be made to the private offeror judged to represent the best value to the government based upon a combined assessment of cost and other technical factors not considered discriminators (distinguishing factors) in the initial evaluation, as well as certain other factors. (A simplified sketch of this selection logic appears below.) The RFP evaluation criteria that were to be used for the selection consisted of management criteria, which relate to program characteristics; a risk assessment; cost criteria, which relate to the proposed cost; and general considerations. Management criteria were made up of five factors: transition, production operations, corporate operations, logistics support, and source of repair qualification. The risk assessment consisted of two parts: proposal risk and performance risk. Proposal risk was to measure the risk associated with an offeror’s proposed approach to accomplishing the solicitation requirements relating to each of the five management factors. Performance risk was to assess, based on an offeror’s present and past performance, the probability of the offeror successfully accomplishing its proposed effort. General considerations were to relate to matters such as the results of preaward surveys. Cost was to be evaluated by first conducting realism and reasonableness assessments of the cost proposals. Then each offeror’s total alternative cost was to be developed by applying numerous adjustments to the proposals in accordance with the CCH and the RFP. Next, each offeror’s total evaluated cost was to be determined by adjusting the total alternative cost to reflect the evaluators’ quantification or “dollarization” of the significant discriminators among the proposals based upon strengths, weaknesses, and risks identified in the proposals in accordance with the RFP evaluation criteria. Under the RFP evaluation scheme, in order for the technical merits or risks associated with an offeror, or an offeror’s particular approach to performing the workload, to affect whether a public or private source was to be selected—assuming the minimum standards were met by both—the merits or risks were to be “dollarized,” or quantified, and included in the calculation of the offeror’s total evaluated cost. The proposals were evaluated in the first instance by specialized teams, which reported to a Source Selection Evaluation Board (SSEB), which in turn reported its conclusions to a Source Selection Advisory Council (SSAC). The SSAC then advised the Source Selection Authority (SSA), who made the final selection decision on the merits of the proposals. Four offerors submitted proposals in response to the solicitation. The Warner Robins Air Logistics Center, the public depot chosen by the Air Force to submit the public sector offer, proposed to perform the work at Robins Air Force Base, Georgia. The three private sector offerors all proposed to perform the work at the facilities at the closing Air Logistics Center at Kelly Air Force Base, where the C-5 workload is currently being performed by government employees. Initially, the four proposals were evaluated to determine which were to be included in the competitive range in accordance with FAR 15.609 and considered for award.
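The award decision rule described above can be summarized in a short sketch. This is our simplified rendering of the rule as stated in the RFP summary; the function and field names are ours, and the best-value branch is crudely approximated by lowest evaluated cost, whereas the actual best-value judgment also weighed noncost technical factors.

```python
# Simplified rendering of the RFP's award rule; names and structure are ours.
# "Acceptable" offers are those that conform to the RFP and demonstrate the
# necessary technical capability.

def select_source(public_offer, acceptable_private_offers):
    """Each offer is a dict with 'name' and 'evaluated_cost' (total evaluated cost)."""
    cheaper_privates = [p for p in acceptable_private_offers
                        if p["evaluated_cost"] < public_offer["evaluated_cost"]]
    if not cheaper_privates:
        # Award to the public depot if no acceptable private offer has a lower evaluated cost.
        return public_offer["name"]
    if len(cheaper_privates) == 1:
        # A single lower-cost acceptable private offer receives the award.
        return cheaper_privates[0]["name"]
    # Two or more lower-cost private offers: a best-value choice among them
    # (approximated here by lowest evaluated cost).
    return min(cheaper_privates, key=lambda p: p["evaluated_cost"])["name"]

winner = select_source({"name": "Warner Robins", "evaluated_cost": 746_519_392},
                       [{"name": "offeror A", "evaluated_cost": 788_847_746}])
print(winner)  # Warner Robins
```

With the total evaluated costs reported later in this appendix, the rule selects the public depot, consistent with the actual selection decision.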
One of the proposals from the private sector was eliminated from the competitive range and not considered further because, in the SSA’s view, it failed to adequately address the solicitation requirements. Discussions were held with the three offerors remaining within the competitive range. As a result of the discussions, each offeror revised its proposal and submitted a best and final offer, which was the subject of the Air Force’s final cost adjustments and evaluation. Based on the results of the evaluations and cost adjustments, the advice of the SSAC, and the SSA’s own analysis in the context of the RFP evaluation criteria, the SSA decided that the Warner Robins Air Logistics Center’s proposal met all of the RFP requirements and represented the lowest total evaluated cost at $746,519,392 for the 7-year requirement. Consequently, the SSA selected Warner Robins to perform the C-5 workload. Of the three offerors within the competitive range, two of them, the winning public sector offeror—Warner Robins—and one of the private sector offerors (offeror A), had evaluated costs that were reasonably close. The other private sector offeror (offeror B) had considerably higher evaluated costs. As noted previously, the solicitation stated that five factors would be used to evaluate the offerors’ management approach. The priority of these factors—except for corporate operations and logistics support, which were co-equal—was (1) transition, (2) production operations, (3) corporate operations, (4) logistics support, and (5) source of repair qualification. The SSA rated the proposals under each of the five factors. The first factor, transition, was to measure the offeror’s approach to transferring program responsibility and accountability from Kelly Air Force Base to the new operation, including such tasks as manpower build-up, material procurement, and production ramp-up. Under this factor, the SSA concluded that offeror B had the best approach to transferring performance of the workload to its proposed facility, with an exceptional technical rating and low proposal risk. Offeror A was next, with an acceptable technical rating and a low proposal risk. Warner Robins followed with an acceptable rating and a moderate risk. The SSA noted that while Warner Robins’ approach posed a moderate risk, it did meet the minimum standard and, in fact, offered a strength in the availability of skilled workers. Overall, the SSA concluded that the strengths of the approaches of offerors A and B increased the chances of success but did not offer a specific cost impact that could be “dollarized” and, thus, have a positive impact on their total evaluated costs. On the other hand, according to the SSA, the cost impact of Warner Robins’ weaknesses could be ameliorated with close monitoring. Consequently, the SSA decided not to include this as a matter for dollarization. Under the production operations factor, an offeror was to provide its plan to perform the work, including the proposed sequence for all major tasks and the identification of facilities and shops to be used. The SSA concluded that offeror A “far exceeded” the RFP minimum standard and merited an exceptional rating by proposing a significant reduction in the “flow days” (i.e., time required to perform the work) needed to maintain the aircraft over the life of the requirement. In addition, the SSA assigned the offeror a low-risk rating for its approach to the performance of the workload. Offeror B received an acceptable rating with low risk for its approach.
The SSA assigned Warner Robins a marginal rating with moderate risk. The SSA was concerned that while Warner Robins proposed to reduce flow days, two of its facilities, the “programmed depot maintenance (PDM) facility” and the “paint/depaint facility,” might not have the capacity to handle the workload within the proposed time frame. The SSA concluded that while close monitoring could overcome these difficulties, the production schedule could be disrupted. Accordingly, the SSA determined that the potential for problems in the paint/depaint facility should be reflected or dollarized as an added cost in the evaluation. The cost associated with the potential problems related to the PDM facility was, according to the SSA, already reflected in Warner Robins’ cost proposal, and thus, was not added as a “dollarized” cost in the evaluation. In sum, the SSA concluded that the reduced flow days proposed by both offeror A and Warner Robins were significant discriminators and merited a downward dollarization cost adjustment in the determination of total evaluated cost. The evaluation of Warner Robins’ cost was, in addition, to include an estimated cost increase for the potential paint/depaint problems in its dollarization evaluation. Offeror B’s proposal did not merit a dollarization cost adjustment in either direction. Under the third factor (corporate operations), the SSA concluded that all offerors had extensive relevant experience and assigned each a rating of exceptional with a low proposal risk. Since all were equally rated, there was no discriminator and consequently no dollarization. All of the offerors were rated by the SSA as acceptable and low risk under the logistics support factor. Again, there was no discriminator or corresponding dollarization adjustment. Under the final factor (source of repair qualification), the SSA rated offerors A and B as exceptional with low risk. Warner Robins met the minimum standard and was assigned a rating of acceptable with low risk. There was, according to the SSA, no discriminator or dollarization. As for the more general category of performance risk, all three proposals were determined to represent low risk in the management and cost areas, with no discriminators. As noted previously, the cost evaluation consisted of (1) an assessment of the realism and reasonableness of the cost proposals; (2) a determination of the “total alternative cost” of each proposal, calculated through adjustments required by the CCH and RFP; and (3) a determination of the total evaluated cost of each proposal, calculated by taking the total alternative cost and adjusting it to reflect the dollarization of significant discriminators among the proposals. The evaluation results for each of these analyses are summarized below. The cost team evaluators initially reviewed each offeror’s cost proposal to determine its completeness, realism, and reasonableness. As a result of this review, the evaluators ultimately were satisfied that each cost proposal met these standards. In accordance with the Depot Competition Procedures, the Defense Contract Audit Agency (DCAA) audited the Warner Robins’ cost proposal and reviewed the public offeror’s disclosure statement and accounting and estimating systems. The disclosure statement was in the first instance found to be adequate. After discussions with Warner Robins and some adjustments to the cost proposal, the proposal was determined to be realistic. 
DCAA also reviewed Warner Robins’ accounting and estimating systems and found them deficient in certain respects. In an August 1, 1997, memorandum, DCAA noted that the deficiencies would not significantly affect Warner Robins’ cost proposal. In a subsequent audit report—issued on November 26, 1997, after the selection of Warner Robins—DCAA stated that although most of the deficiencies in Warner Robins’ accounting and estimating systems had been corrected, not all of them had been fully addressed. While Warner Robins had not met all the correction milestones at the time of its selection, the deficiencies remaining at that time were not significant enough to preclude its selection. The cost evaluators determined each offeror’s total alternative cost by first calculating the offeror’s “customer cost”—in essence, its proposed price for performing the requirement, excluding material—and then making upward and downward adjustments to this cost in accordance with the RFP and the CCH. Offeror A’s customer cost was calculated to be $409,042,577. Warner Robins’ customer cost was $434,378,781. Offeror B’s cost was considerably higher than either of these. Using the customer cost for each offeror as a base, the evaluators made the depot maintenance comparability adjustments called for in the CCH and the RFP. Two sets of adjustments were made to the public and the private sector offers. The first set, required by form number 1 of the CCH, encompassed adjustments to the public sector offer; and the second set, required by form number 2 of the CCH, governed adjustments applicable to both the public and private sector proposals. The comparability adjustments were identified either in the RFP directly or in the CCH. The CCH form number 1 adjustments made to Warner Robins’ proposal included upward and downward changes in a number of categories. The most significant were upward adjustments of $19,887,441 for unfunded civilian retirement, $18,872,571 for base operating support, and $12,467,409 for employee casualty insurance. The net result of the upward and downward adjustments was an upward adjustment to the public depot’s proposal of $57,812,033. This resulted in an adjusted cost of $492,189,454 for Warner Robins. The CCH form number 2 adjustments were made to the private and public sector proposals. Some adjustments were made to both types of proposals. For example, upward adjustments were made to both types for contract administration, additional overhead costs due to the new workload, reduction-in-force costs, and costs associated with the transition of government personnel (i.e., the costs of retaining, pending their separation, the current C-5 workers at Kelly who will be subject to a reduction in force and will not be rehired by the new source after the workload is transitioned). Most of these adjustments were similar in size for both proposals. The largest difference was in the transition of government personnel, resulting in a $10,956,997 increase to Warner Robins’ cost and a $5,663,324 increase for offeror A. Other form number 2 adjustments applied solely to the private sector proposals. For example, a downward adjustment of $12,271,277 was made to offeror A’s proposal to represent the amount the firm would have to pay in income taxes on the contract proceeds. An upward adjustment of $4,251,429 was made to reflect the firm’s planned use of the facilities transferred by the Air Force to the Greater Kelly Development Corp. (GKDC).
The transfer was under terms that were considered by the Air Force to result in a subsidy to GKDC, which GKDC passed on through advantageous lease rates to private firms, such as offeror A, proposing to perform work at the Kelly location. The most significant of all the adjustments was a downward adjustment to Warner Robins’ proposal of $153,935,160. This represented the evaluators’ estimate of the overhead savings that would be attributable to the other workloads performed by Warner Robins as a result of the addition of the C-5 workload to its currently underutilized facilities. The net result of the form number 2 comparability analysis was a downward adjustment of $57,229,727 to Warner Robins’ proposal and a final upward adjustment of $81,890,194 to offeror A’s proposal. A cost of $312,474,994, representing the cost of material, was added to each proposal. These adjustments resulted in a total alternative cost of $747,434,721 for Warner Robins and $803,407,765 for offeror A. The most significant single element contributing to the more than $55,000,000 cost advantage for Warner Robins was the cost reduction made to reflect the depot’s overhead cost savings. To arrive at the total evaluated cost of each proposal, the evaluators took the total alternative cost, as determined above, and applied the dollarization adjustments that were identified during the technical evaluation. The dollarization adjustments reflected the evaluators’ assessments of the benefit or detriment to the Air Force that would result from aspects of the proposed performance considered to be discriminators among the proposals. The one aspect of offeror A’s proposal that was considered to be a significant discriminator suitable for quantification was its offer to significantly reduce the flow days to complete the work under the production operations evaluation factor. This resulted in a downward adjustment of $14,560,019 in the evaluation of its total cost. Similarly, the evaluators concluded that Warner Robins’ proposal to reduce flow days merited dollarization. The downward adjustment was tempered, however, by the evaluators’ belief that the risk associated with the lack of capacity at Warner Robins’ paint/depaint facility also qualified as a significant discriminator and, thus, a dollarization candidate: this time, for an upward adjustment. As a result of an upward adjustment of $1,838,767 to represent the risk associated with the paint/depaint facility and a downward adjustment of $2,755,456 representing the benefit to the Air Force of the reduced flow days proposed, Warner Robins’ total cost was reduced by $916,689. The total evaluated costs for the two lowest offerors were $746,519,392 for Warner Robins and $788,847,746 for offeror A.
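The relationships among these reported figures can be tied together in a short roll-up. This is our recomputation from the amounts reported above, not the evaluators’ worksheet; the computed total alternative cost for Warner Robins differs from the reported $747,434,721 by roughly $1,400, which appears to stem from rounding in one of the intermediate reported figures, while the other totals match the reported amounts.

```python
# Recomputation of the reported cost evaluation figures; the amounts are those
# reported in the text above. Illustration only, not the evaluators' worksheet.

MATERIAL = 312_474_994                     # material cost added to each proposal

# Warner Robins (public offeror)
wr_customer   = 434_378_781                # proposed price excluding material
wr_form1_net  = 57_812_033                 # net CCH form 1 upward adjustment (public offer only)
wr_form2_net  = -57_229_727                # net CCH form 2 adjustment (includes the $153,935,160 overhead savings)
wr_dollarized = 1_838_767 - 2_755_456      # paint/depaint risk less reduced flow days
wr_alternative = wr_customer + wr_form1_net + wr_form2_net + MATERIAL
wr_evaluated   = wr_alternative + wr_dollarized

# Offeror A (private offeror)
a_customer   = 409_042_577
a_form2_net  = 81_890_194                  # net CCH form 2 upward adjustment
a_dollarized = -14_560_019                 # reduced flow days
a_alternative = a_customer + a_form2_net + MATERIAL
a_evaluated   = a_alternative + a_dollarized

print(f"Warner Robins: alternative {wr_alternative:,}, evaluated {wr_evaluated:,}")
print(f"Offeror A:     alternative {a_alternative:,}, evaluated {a_evaluated:,}")
# Reported: 747,434,721 / 746,519,392 and 803,407,765 / 788,847,746
```

The roll-up also makes the source of Warner Robins’ cost advantage visible: without the $153,935,160 overhead savings credit embedded in its form number 2 adjustment, its evaluated cost would have exceeded offeror A’s.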
Based on the evaluation results, the SSA concluded that all of the offers were technically acceptable and that all three offerors were responsible. The SSA found the prices proposed by each, as adjusted through the cost analysis, to be reasonable and realistic. The SSA selected Warner Robins to perform the C-5 workload on the basis that its proposal represented the lowest total evaluated cost over the life of the requirement. As discussed previously, several statutes govern the use of public-private competitions for the performance of depot workloads. In particular, 10 U.S.C. 2469 provides for the use of “competitive procedures for competitions among private and public sector entities” whenever DOD contemplates changing the performance of depot workloads of $3 million or more to contractor performance. Neither 10 U.S.C. 2469 nor the other statutes governing public-private competitions for depot workloads prescribe the specific elements that constitute a competition. However, because the Air Force’s Depot Competition Procedures use the competitive acquisition system, the standards in chapter 137 of title 10 of the United States Code and the FAR apply to the extent they are consistent with the basic public-private competition statutes. (See Newport News Shipbuilding and Dry Dock Co., cited above.) Reviewing the C-5 competition in this context, we found no basis to conclude that the procedures used in selecting the successful offeror deviated in any material respect from the applicable laws or relevant provisions of the FAR. The Air Force issued a competitive solicitation in accordance with FAR parts 12 and 15, which provided for the participation of a public sector depot. Overall, the evaluation process appeared to be reasonable, fair, and consistent with the evaluation scheme in the solicitation, the Depot Competition Procedures, and the CCH. The private sector has raised several specific concerns about the conduct of the C-5 competition. The concerns are that (1) there is an inherent inequity in public-private depot competitions created by the solicitation of offers on a fixed-price basis, since the government often pays for any cost overruns incurred by a public sector source from public funds; (2) Warner Robins was unfairly advantaged during the cost evaluation by the large cost credit representing projected overhead savings in its other workloads; and (3) the selection did not account for, or dollarize, identified risks and weaknesses in the proposals. Private sector representatives stated that the Air Force’s solicitation of offers on a fixed-price basis revealed the inequity inherent in the procedures used in the C-5 public-private competition. According to industry representatives, the fixed-price concept is only relevant to private sector offerors, who must keep their costs below the price offered or incur losses. On the other hand, public sector offerors are subject to no contractually enforceable cost risk, as any overruns will simply be paid for by the government from public funds. Public and private sector entities are fundamentally different; therefore, it is unlikely that a completely equal comparison of the projected costs of public and private sector performance of a particular function could be achieved. However, the cost comparison aspect of a public-private competition needs to be conducted in as fair a manner as practicable. To this end, our decisions involving public-private competitions have held that, to ascertain whether a public depot’s costs are fairly stated and reasonable, an agency must make a reasoned judgment of the actual cost the government will incur if the work is to be performed by the depot. (See Department of the Air Force; DCAA; Canadian Commercial Corporation/Heroux, Inc., B-253278, Apr. 7, 1994, 94-1 CPD 247.)
Recognizing the concerns now being raised in the context of the C-5 competition, we have observed that, because a public source’s cost overruns are paid for by the government, its arrangement to perform work is more closely analogous to a cost reimbursement type contract than to the fixed-price contract a private sector offeror is bound to perform. As a result, we have stated that in a solicitation for a fixed-price type contract, an agency must treat the public sector offer as if it were one for a cost type contract and subject it to a cost realism analysis in accordance with FAR 15.805-3. (See Newport News Shipbuilding and Dry Dock Co., cited above.) In our view, the procedures used in the C-5 competition reasonably addressed the issue of public sector accountability for costs. Both the solicitation and the Depot Competition Procedures contained a number of provisions designed to ensure that Warner Robins’ full costs were disclosed and supported. For example, the solicitation required the public sector offeror to certify that its offer represented the full cost of performance, subject to criminal penalties for false statements. Warner Robins included this certification in its proposal. Also, under the Depot Competition Procedures, DCAA is required to provide an opinion that a public depot’s proposal complies with the CCH, is not materially understated, and is acceptable for evaluation. These requirements are not applicable to private sector offers. In fact, the procedures state that proposals “submitted by private firms will generally not require auditing.” In addition, the procedures mandate that proposals from both public and private offerors be supported by systems and procedures maintained by the offeror that are in accord with generally accepted accounting principles. The evaluation record for the C-5 competition shows that the Air Force conducted an extensive realism analysis of the Warner Robins proposal, consistent with FAR 15.805-3 and the previously cited decisions. In this connection, the Air Force developed a pre-solicitation estimate of what it would cost to perform the requirement. The evaluators compared this figure with the overall cost proposed by the depot and reviewed individual cost elements. In addition, DCAA reviewed Warner Robins’ disclosure statement and the depot’s accounting and estimating systems. After extensive discussions, including a site visit, and a number of revisions to both the public offeror’s proposal—including a number of upward adjustments—and its accounting and estimating systems, the SSA concluded that Warner Robins’ proposal was realistic and reasonable. As required by section 8041 of the Department of Defense Appropriations Act for Fiscal Year 1997, Public Law 104-208, the SSA certified that the proposal included estimates of direct and indirect costs that were comparable to those in the other proposals. Private sector sources expressed concern about the large amount—$153,935,160—of projected overhead savings attributable to the public depot’s other existing workloads. Although the amount was large and became the primary determining factor in the selection of Warner Robins, it was properly used in the evaluation by the Air Force. The concept of assessing and evaluating the overhead savings attributable to an offeror’s other government workloads resulting from the addition of the C-5 workload was spelled out in the solicitation and the Depot Competition Procedures. 
Specifically, both sources provided for an adjustment to be made to a public or private sector proposal for “identified and reasonable” overhead savings to other government workloads performed by the offeror that would be realized during the 7-year period. The evaluation records show that the cost evaluators questioned the overhead savings initially proposed by Warner Robins and made downward adjustments to the amount originally proposed. For example, the evaluators deleted workload hours included in Warner Robins’ savings calculations for work on the KC-135 aircraft because that requirement had not been committed to the Warner Robins facility. After extensive discussions with the offeror concerning the proposed overhead savings, the evaluators calculated that $153,935,160 could be attributed to the other workloads to be performed at the public facility during the performance of the C-5 requirement. These savings were primarily due to the more efficient use of the existing workforce and facilities, which before the addition of the C-5 workload had been underused. The overhead savings credit was provided for in the solicitation and the Depot Competition Procedures, and the Air Force followed the evaluation scheme in calculating the savings proposed by Warner Robins. The size of the savings and their significance in Warner Robins’ selection were due to the excess capacity that existed in Warner Robins’ facilities and the underutilization of the existing workforce, to the fact that the savings were to be applied to a 7-year performance period, and to the fact that offeror A did not propose any similar overhead savings. Private sector firms raised concerns about the Air Force’s failure to consider some of the evaluated risks in the respective proposals in its final selection decision. The evaluation records show that under the two highest priority management factors, transition and production operations, offeror A received a low-risk rating while the public offeror’s approach was rated as representing a moderate risk. Overall, offeror A was credited with more strengths under the management factors than was the public offeror. In the final selection decision, the SSA did not dollarize or quantify the risk differences or all of the strengths or weaknesses but included only adjustments that represented discriminators based upon the reduction of flow days and a lack of capacity at one of Warner Robins’ proposed facilities. The result was that the superior risk and strength ratings given offeror A did not enter into the calculation of its total evaluated cost and were not a factor in the final selection. The RFP provided that the calculation of an offeror’s total evaluated cost would include “the dollarized impact of significant discriminators based on identified proposal strengths, weaknesses and risks.” Since the RFP stated that these elements would be considered for inclusion in the dollarization, the Air Force was required to consider them. However, an agency has discretion to decide whether any particular feature of an individual proposal should or should not enter into cost calculations, and such decisions generally will not be questioned as long as they are fair, reasonable, and consistent with the solicitation. (See Universal Shipping Co., Inc., B-223905.2, Apr. 20, 1987, 87-1 CPD 424.) In our view, the dollarization approach adopted by the Air Force represented a reasonable exercise of its discretion under the RFP.
As noted previously, some evaluated risks for the two highest priority management factors were dollarized and reflected in the final selection decision, and some were not. Under the transition factor, the SSA noted that Warner Robins met the minimum standard, but that its approach of operating dual locations—Kelly and Robins Air Force Bases—for work performed during the transition period posed a moderate risk because of the close monitoring that would be required to avoid schedule disruption and performance degradation. On the other hand, the SSA stated that offeror A merited a low-risk rating because it had experience managing the type of transition it proposed. The SSA noted that the strength of offeror A’s approach increased confidence that the transition would be successful, but did not offer “specific cost impacts or savings.” The SSA expressed concern regarding Warner Robins’ approach but concluded that close monitoring should ameliorate any cost impact. The SSA determined that while offeror A’s approach involved less risk than Warner Robins’ plan, the difference would not have an impact on the cost of performance. Under the production operations factor, the SSA concluded that offeror A far exceeded the minimum standard by proposing a low-risk approach that would significantly reduce the flow days needed to perform the work on the aircraft. The SSA considered this approach to be a significant discriminator offering a cost benefit to the Air Force. Thus, the offeror was given a dollarized cost credit in the evaluation. Similarly, Warner Robins was given an estimated cost reduction in the dollarization evaluation for its proposal to reduce flow days in processing the aircraft. This was tempered by a corresponding dollarization because the SSA concluded that the public offeror’s moderately risky approach could in one respect—potential schedule conflicts in the use of its paint/depaint facility—result in a negative cost impact during performance. A similar problem involving potential schedule conflicts in the use of another Warner Robins’ facility was not dollarized because the SSA concluded that the potential cost impact was already reflected in the proposed costs. As these facts demonstrate, the Air Force evaluators took a conservative approach in implementing the dollarization concept announced in the RFP. The evaluation record indicates that they included only those elements that were judged to be discriminators among the proposals and would offer “specific cost impacts or savings.” For example, if a proposal offered a plan to reduce a specific number of flow days, a feature that could be readily quantified, it was dollarized. On the other hand, more subjective elements of the evaluation, such as the overall risk of a particular approach, were not as susceptible to quantification and were not dollarized. In our view, while the Air Force could have taken a more expansive approach to dollarization, it was within its discretion under the RFP to take the approach that it did. The RFP did not explain in any detail how dollarization was to be accomplished; the determination of an appropriate approach was left to the Air Force. Further, our review of the evaluation record disclosed nothing to suggest that the Air Force applied the dollarization approach it selected unevenly or unfairly. For example, both offeror A and Warner Robins were given dollarization credit for their plans to reduce the flow days needed to maintain the aircraft. 
Considering the subjective nature of such evaluation judgments and the discretion procuring agencies have in this area, we believe that the Air Force’s approach to dollarization was consistent with the solicitation and reasonable. (See URS Consultants, B-275068.2, Jan. 21, 1997, 97-1 CPD 100.)
Public-Private Competitions: DOD’s Determination to Combine Depot Workloads Is Not Adequately Supported (GAO/NSIAD-98-76, Jan. 20, 1998).
DOD Depot Maintenance: Information on Public and Private Sector Workload Allocations (GAO/NSIAD-98-41, Jan. 20, 1998).
Air Force Privatization-in-Place: Analysis of Aircraft and Missile System Depot Repair Costs (GAO/NSIAD-98-35, Dec. 22, 1997).
Outsourcing DOD Logistics: Savings Achievable But Defense Science Board’s Projections Are Overstated (GAO/NSIAD-98-48, Dec. 8, 1997).
Air Force Depot Maintenance: Information on the Cost Effectiveness of B-1B and B-52 Support Options (GAO/NSIAD-97-210BR, Sept. 12, 1997).
Navy Depot Maintenance: Privatizing the Louisville Operations in Place Is Not Cost Effective (GAO/NSIAD-97-52, July 31, 1997).
Defense Depot Maintenance: Challenges Facing DOD in Managing Working Capital Funds (GAO/T-NSIAD/AIMD-97-152, May 7, 1997).
Depot Maintenance: Uncertainties and Challenges DOD Faces in Restructuring Its Depot Maintenance Program (GAO/T-NSIAD-97-111, Mar. 18, 1997) and (GAO/T-NSIAD-112, Apr. 10, 1997).
Defense Outsourcing: Challenges Facing DOD As It Attempts to Save Billions in Infrastructure Costs (GAO/T-NSIAD-97-110, Mar. 12, 1997).
Navy Ordnance: Analysis of Business Area Price Increases and Financial Losses (GAO/AIMD/NSIAD-97-74, Mar. 14, 1997).
High-Risk Series: Defense Infrastructure (GAO/HR-97-7, Feb. 1997).
Air Force Depot Maintenance: Privatization-in-Place Plans Are Costly While Excess Capacity Exists (GAO/NSIAD-97-13, Dec. 31, 1996).
Army Depot Maintenance: Privatization Without Further Downsizing Increases Costly Excess Capacity (GAO/NSIAD-96-201, Sept. 18, 1996).
Navy Depot Maintenance: Cost and Savings Issues Related to Privatizing-in-Place the Louisville, Kentucky Depot (GAO/NSIAD-96-202, Sept. 18, 1996).
Defense Depot Maintenance: Commission on Roles and Mission’s Privatization Assumptions Are Questionable (GAO/NSIAD-96-161, July 15, 1996).
Defense Depot Maintenance: DOD’s Policy Report Leaves Future Role of Depot System Uncertain (GAO/NSIAD-96-165, May 21, 1996).
Defense Depot Maintenance: More Comprehensive and Consistent Workload Data Needed for Decisionmakers (GAO/NSIAD-96-166, May 21, 1996).
Defense Depot Maintenance: Privatization and the Debate Over the Public-Private Mix (GAO/T-NSIAD-96-146, Apr. 16, 1996) and (GAO/T-NSIAD-96-148, Apr. 17, 1996).
Military Bases: Closure and Realignment Savings Are Significant, but Not Easily Quantified (GAO/NSIAD-96-67, Apr. 8, 1996).
Depot Maintenance: Opportunities to Privatize Repair of Military Engines (GAO/NSIAD-96-33, Mar. 5, 1996).
Closing Maintenance Depots: Savings, Personnel, and Workload Redistribution Issues (GAO/NSIAD-96-29, Mar. 4, 1996).
Navy Maintenance: Assessment of the Public-Private Competition Program for Aviation Maintenance (GAO/NSIAD-96-30, Jan. 22, 1996).
Depot Maintenance: The Navy’s Decision to Stop F/A-18 Repairs at Ogden Air Logistics Center (GAO/NSIAD-96-31, Dec. 15, 1995).
Military Bases: Case Studies on Selected Bases Closed in 1988 and 1991 (GAO/NSIAD-95-139, Aug. 15, 1995).
Military Base Closure: Analysis of DOD’s Process and Recommendations for 1995 (GAO/T-NSIAD-95-132, Apr. 17, 1995).
Military Bases: Analysis of DOD’s 1995 Process and Recommendations for Closure and Realignment (GAO/NSIAD-95-133, Apr. 14, 1995).
Aerospace Guidance and Metrology Center: Cost Growth and Other Factors Affect Closure and Privatization (GAO/NSIAD-95-60, Dec. 9, 1994).
Navy Maintenance: Assessment of the Public and Private Shipyard Competition Program (GAO/NSIAD-94-184, May 25, 1994).
Depot Maintenance: Issues in Allocating Workload Between the Public and Private Sectors (GAO/T-NSIAD-94-161, Apr. 12, 1994).
Depot Maintenance (GAO/NSIAD-93-292R, Sept. 30, 1993).
Depot Maintenance: Issues in Management and Restructuring to Support a Downsized Military (GAO/T-NSIAD-93-13, May 6, 1993).
Air Logistics Center Indicators (GAO/NSIAD-93-146R, Feb. 25, 1993).
Defense Force Management: Challenges Facing DOD as It Continues to Downsize Its Civilian Workforce (GAO/NSIAD-93-123, Feb. 12, 1993).
Pursuant to a legislative requirement, GAO reviewed the Air Force's solicitation and selection of a source for C-5 aircraft depot maintenance, focusing on whether the: (1) procedures used to conduct the C-5 competition provided substantially equal opportunity for the public and private offerors to compete for the workload without regard to work performance location; (2) procedures complied with the requirements of all applicable provisions of law and the Federal Acquisition Regulation (FAR); and (3) C-5 award results in the lowest total cost to the Department of Defense for performance of the workload. GAO noted that its assessment of the issues required under the 1998 Defense Authorization Act relating to the C-5 aircraft competition concluded that: (a) the C-5 competition procedures provided an equal opportunity for public and private offerors to compete without regard to where work could be performed; (b) the procedures did not appear to deviate in any material respect from the applicable laws or the FAR; and (c) based on Air Force assumptions and conditions at the time of award, the award resulted in the lowest total cost to the government.
A U.S. passport is not only a travel document but also an official verification of the bearer’s origin, identity, and nationality. Each day, Americans submit passports as identification to board international flights, obtain drivers’ licenses, cross the border from the United States into Canada and Mexico, apply for loans, and verify their employability. To acquire a U.S. passport for the first time, an applicant must provide evidence of citizenship, or non-citizen nationality, such as a certificate of birth in the United States or a naturalization certificate, and a valid government-issued identification document that includes a photograph or physical description of the holder (most commonly a state-issued driver’s license or identity card). Most passport applications are submitted by mail or in person at one of almost 9,400 passport application acceptance facilities nationwide. The passport acceptance agents at these facilities are responsible for, among other things, verifying whether an applicant’s identification document matches the applicant. Then, through adjudication, passport examiners determine whether State should issue each applicant a passport. Adjudication requires the examiner to scrutinize identification and citizenship documents presented by applicants to verify their identity and U.S. citizenship or non-citizen nationality. Since 2005, we have issued several reports on fraud vulnerabilities within the passport issuance process and the subsequent actions taken by State to prevent individuals from fraudulently securing passports. For example, we reported that identity theft was among the most common means used to commit passport fraud. In March 2009, we reported that our covert testing of State’s passport issuance process demonstrated how malicious individuals might use identity theft to obtain genuine U.S. passports. Through our work, we have identified two major areas of vulnerability in State’s passport issuance process. First, passport acceptance agents and passport examiners have accepted counterfeit or fraudulently acquired genuine documents as proof of identification and citizenship. We reported in March 2009 that State issued four genuine U.S. passports to GAO investigators, even though the applications that we submitted contained bogus information and were supported by counterfeit drivers’ licenses and birth certificates. The sheer variety of documents that are eligible to prove citizenship and identity also complicates State’s verification efforts. Second, State’s limited access to information from other federal and state agencies hampers its ability to ensure that supporting documents belong to the bearer. In 2005, we reported that the information State used from SSA to corroborate SSNs was limited and outdated. Although State and SSA had signed a memorandum in April 2004 giving State access to SSA’s main database, the memorandum had not been implemented. Moreover, the memorandum did not include access to SSA’s death records, though State officials said they were exploring the possibility of obtaining these records. Yet, in one case from our covert testing in 2009, we obtained a U.S. passport using the SSN of a man who died in 1965. In response to our prior findings, State officials said that the lack of an automated check against SSA death records was a long-standing vulnerability, but noted that Passport Services had recently purchased a subscription to the Death Master File, which included weekly updates of deaths recorded by SSA. 
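The automated death-record screening described above can be illustrated with a brief sketch. This is a minimal illustration only, assuming a hypothetical extract of death records keyed by SSN; the file layout, field names, and function names are our assumptions and do not represent State's or SSA's actual systems.

```python
import csv

def load_death_records(path):
    """Load a hypothetical death-record extract into a dict keyed by SSN.

    Assumes a CSV with 'ssn' and 'date_of_death' columns; the real Death
    Master File format and update cycle are not modeled here.
    """
    records = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            records[row["ssn"].replace("-", "")] = row["date_of_death"]
    return records

def deceased_ssn_indicator(applicant_ssn, death_records):
    """Return a fraud indicator if the applicant's SSN matches a death record."""
    ssn = applicant_ssn.replace("-", "")
    if ssn in death_records:
        return f"SSN matches a person recorded as deceased on {death_records[ssn]}"
    return None

# Fictitious example: an application reusing the SSN of a person who died in 1965.
deaths = {"000123456": "1965-03-02"}
print(deceased_ssn_indicator("000-12-3456", deaths))
```

A weekly refresh of such an extract, of the kind State described purchasing, could be run against every application before adjudication.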
State also indicated that federal agencies limit its access to records due to privacy concerns and the fact that State is not a law enforcement agency. For example, it could not conduct real-time authentication of the birth certificates presented by passport applicants. The agency added that these documents present an exceptional challenge to fraud detection efforts because of the thousands of different acceptable formats in which they can be presented. It further indicated that there are also difficulties with verifying the authenticity of drivers’ licenses. State’s passport issuance process continues to be vulnerable to fraud, as the agency issued five of the seven passports GAO attempted to fraudulently obtain. Despite multiple indicators of fraud and identity theft in each application, State identified only two as fraudulent during its adjudication process and mailed five genuine U.S. passports to undercover GAO mailboxes. GAO successfully obtained three of these passports, but State had two others recovered from the mail before they were delivered. According to State officials, the agency discovered—after its adjudication process—that the two passports were part of GAO testing when they were linked to one of the passport applications it initially denied. State officials told us that they used facial recognition technology—which State could also have used during the adjudication process—to identify our two remaining applications. According to State, one of our applications was denied in April 2010 during processing at the National Processing Center in New Hampshire by an examiner who was suspicious that the application, taken in its totality, likely represented an “imposter.” The examiner sent the file to a fraud manager in Florida who subsequently determined that the Florida birth certificate was counterfeit. State detected the second fraudulent application after the SSN used was flagged as recently issued by SSA. This application was then sent to the same fraud manager in Florida who processed the first application, since they both contained Florida birth certificates. State officials indicated that they then uncovered GAO’s undercover tests by crosschecking the fraudulent Florida birth certificate with Florida’s Bureau of Vital Statistics. After State discovered our undercover test, the agency used methods and resources not typically used to detect fraud during the normal passport adjudication process to identify our remaining tests. For example, according to State officials, they subsequently identified the two remaining GAO applications by using facial recognition technology to search for the photos of the applicants, who were our undercover investigators. State could have used the very same technology to detect fraud in the three applications for passports that we received, because all three passports contained the photo of the same GAO investigator. One of the passports that were recovered after issuance also included the photo of the same investigator. Our most recent tests show that State does not consistently use data verification and counterfeit detection techniques in its passport issuance process. Of the five passports issued, State failed to crosscheck the bogus citizenship and identity documents in the applications against the same databases that it later used to detect our other fraudulent applications. 
In addition, despite using facial recognition technology to identify the photos of our undercover investigators and to stop the subsequent delivery of two passports, State did not use the technology to detect fraud in the three applications for passports that we received, which all contained a passport photo of the same investigator. Table 1 and the text that follows provide more detail about each of our tests. State issued a genuine passport even though the application contained multiple indicators that should have raised suspicion of fraud, either independently or in aggregate. First, this application included a counterfeit Florida birth certificate and a counterfeit West Virginia driver’s license, both using the same fictitious name that was on the application. If State had confirmed the legitimacy of these documents, it would have easily discovered that they were bogus and thus not representative of the true identity of the bearer. Second, we used an SSN that was recently issued to us by SSA. If State had authenticated the SSN, it would have detected that the issue date did not closely coincide with the date of birth and age of the U.S. citizen represented in the application. Specifically, the applicant listed was a 62-year-old man born in 1948, while the SSN was issued by SSA in 2009. Finally, State did not question discrepancies between our addresses, which included a permanent home address in West Virginia and a mailing address in Seattle, Washington. According to State, these were fraud indicators that should have been questioned prior to the issuance of the passport. State denied this passport after identifying certain discrepancies and indicators of identity theft and fraud that we included in the application. According to State, this fraudulent application was first detected when the applicant’s identity information did not match SSA’s records. The application was then submitted to an examiner, who determined that our Florida birth certificate was fraudulent after checking it against Florida Bureau of Vital Statistics records. State also identified physical properties of the document that were inconsistent with an original. In addition, State checked our bogus West Virginia driver’s license against the National Law Enforcement Telecommunications System (NLETS), which showed that the license did not belong to the bearer. State issued a genuine passport even though the application contained multiple indicators and discrepancies that should have raised red flags for identity theft and fraud. Our investigator went to the U.S. Department of State Passport Office in Washington, D.C., which provides expedited passport services to applicants scheduled to travel out of the country within 14 days from the date of application. The State employee made a line-by-line examination of the application to make sure that the information coincided with what was provided to him on the bogus Florida birth certificate and District of Columbia driver’s license. Both documents contained the same fictitious name that was used on the application. However, if State had crosschecked the information from these two bogus documents against the same records that it did in the previous case, it could have discovered that neither was representative of the bearer. Further, if State officials had checked the SSN in the application, State would have concluded that it was recently issued and did not coincide with the date of birth represented in the application. 
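The issue-date inconsistency noted in these cases (for example, a 62-year-old applicant presenting an SSN issued in 2009) lends itself to a simple automated check. The sketch below is hypothetical and illustrative only; the threshold is an arbitrary assumption, not State's actual adjudication logic.

```python
from datetime import date

def ssn_issue_date_suspicious(birth_date, ssn_issue_year, max_age_at_issuance=30):
    """Flag applications whose SSN was issued long after the stated birth date.

    Rule of thumb (assumed, not an official standard): most U.S.-born citizens
    receive an SSN early in life, so an SSN issued decades later is a fraud
    indicator worth questioning rather than proof of fraud.
    """
    return (ssn_issue_year - birth_date.year) > max_age_at_issuance

# Applicant born in 1948 with an SSN issued in 2009 -> flagged.
print(ssn_issue_date_suspicious(date(1948, 6, 1), 2009))   # True
# Applicant whose SSN was issued in childhood -> not flagged.
print(ssn_issue_date_suspicious(date(1980, 3, 15), 1981))  # False
```

Such a screen would only raise a question for an examiner; legitimate late issuances (for example, to naturalized citizens) would still require human review.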
In addition, our application indicated that our applicant’s height was 5’ 10” while his bogus driver’s license showed a height of 6’. According to State, these were fraud indicators that should have been questioned prior to the issuance of the passport. The following day, our investigator returned to the same location and was issued a genuine U.S. passport. State again issued a genuine passport even though the application contained multiple indicators and discrepancies that should have raised red flags for identity theft and fraud. This application also included a counterfeit Florida birth certificate and a counterfeit West Virginia driver’s license, both in the same fictitious name that was used on the application. If State had adequately corroborated the information from these two bogus documents against the same records that it did in case number two, it could have discovered that the documents were counterfeit and not representative of the bearer. In addition, if State had adequately verified the SSN in the application, it would have found that the recent issue date did not coincide with the age or date of birth represented in the application. State also did not identify the approximately 10-year age difference between the applicant’s passport photo and the photo in his driver’s license. Finally, the application included suspicious addresses and contact information—a California mailing address, a permanent and driver’s license address from West Virginia, and a telephone number from the District of Columbia. According to State, these were fraud indicators that should have been questioned prior to the issuance of the passport. State identified the fraud indicators and discrepancies that we included in this test and did not issue a passport. In addition, the agency identified this application as a GAO undercover test. First, State identified a major discrepancy with the SSN in our application. When our investigator spoke with a State employee about the status of his application, he was told that the birth year in his application did not match SSA records. Our investigator offered a fabricated explanation that he had recently been a victim of identity theft and had been issued a new SSN. Second, the agency determined that our Florida birth certificate was fraudulent after its check against Florida Bureau of Vital Statistics records indicated that the document was counterfeit. State also identified physical properties of the document that were inconsistent with an original. Finally, State questioned why the application was filed in Illinois yet listed a mailing, permanent, and driver’s license address from West Virginia. State issued a passport for this application even though it contained multiple indicators of fraud. However, after discovering our testing through our fifth application, it subjected this application to further review and recovered the passport from the U.S. Postal Service before it was delivered. Before the application was discovered as part of a GAO test, State never identified any of the fraud indicators that we included in the application. Officials stated that facial recognition technology allowed them to discover that the photograph in this application was the same one used in previous applications. State then checked our bogus West Virginia driver’s license against NLETS, which showed that the license belonged to a person other than the bearer. 
State officials never questioned why the application was filed in Georgia yet listed a mailing, permanent, and driver’s license address from West Virginia and a phone number from the District of Columbia. State also failed to identify the misspelling of the city in our West Virginia license and discrepancies with the zip code information on our passport application. According to State, these were fraud indicators that should have been questioned prior to the issuance of the passport. As with our sixth test, State issued a passport for this application but prevented its delivery after using facial recognition technology to link the photo to one used in previous applications—again, after discovering our undercover testing. Only after discovering our testing did State check our bogus West Virginia driver’s license against NLETS, which showed that the license belonged to a person other than the bearer. If State had checked this license prior to issuing a passport, it would have discovered discrepancies in the information on the license, including the misspelling of the city. Further, State never questioned why the application was filed in New York yet listed a Maryland mailing address and a permanent and driver’s license address from West Virginia, prior to issuing the passport that it later revoked. According to State, these were fraud indicators that should have been questioned prior to the issuance of the passport. In conclusion, Mr. Chairman, the integrity of the U.S. passport is an essential component of State’s efforts to help protect U.S. citizens from those who would harm the United States. Over the past several years, we have reported that State has failed to effectively address the vulnerabilities in the passport issuance process. Our recent tests show that there was improvement in State’s adjudication process because State identified 2 of our 7 passport applications as fraudulent and halted the issuance of those passports. However, our testing also confirmed that State continues to have significant vulnerabilities and systemic issues in its passport issuance process. We look forward to continuing to work with this Subcommittee and State to improve passport fraud prevention controls. Mr. Chairman and Members of the Subcommittee, this concludes my statement. I would be pleased to answer any questions that you may have at this time. For further information regarding this testimony, please contact Greg Kutz at (202) 512-6722 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Andy O’Connell, Assistant Director; John Cooney, Assistant Director; Matthew Valenta, Assistant Director; Lerone Reid, Analyst-in-Charge; Jason Kelly; Robert Heilman; James Murphy; and Timothy Walker. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A U.S. passport is one of the most sought-after travel documents in the world, allowing its holder entrance into the United States and many other countries. People attempting to obtain a U.S. passport illegally often seek to use the guise of a U.S. citizen to conceal their involvement with more serious crimes, such as terrorism, drug trafficking, money laundering, or murder. In March 2009, GAO reported on weaknesses in State's passport issuance process that could allow a terrorist or criminal to fraudulently acquire a genuine U.S. passport. Specifically, GAO easily obtained four genuine passports from State using counterfeit documents. In April 2009, GAO suggested that State take 5 corrective actions based on these undercover tests, and State acknowledged those corrective actions. GAO was asked to perform additional proactive testing of State's passport issuance process to determine if it continues to be vulnerable to fraud. To do this work, GAO applied for seven U.S. passports using counterfeit or fraudulently obtained documents, such as driver's licenses and birth certificates, to simulate scenarios based on identity theft. GAO created documents for seven fictitious or deceased individuals using off-the-shelf, commercially available hardware, software, and materials. Undercover investigators applied for passports at six U.S. Postal Service locations and one State-run passport office. State's passport issuance process continues to be vulnerable to fraud, as the agency issued five of the seven passports GAO attempted to fraudulently obtain. While there were multiple indicators of fraud and identity theft in each application, State identified only two as fraudulent during its adjudication process and mailed five genuine U.S. passports to undercover GAO mailboxes. GAO successfully obtained three of these passports, but State had the remaining two recovered from the mail before they were delivered. According to State officials, the agency discovered--after its adjudication process--that the two passports were part of GAO testing when they were linked to one of the passport applications it initially denied. State officials told GAO that they used facial recognition technology--which they could have also used during the adjudication process--to identify the two remaining applications. GAO's tests show that State does not consistently use data verification and counterfeit detection techniques in its passport issuance process. Of the five passports it issued, State did not recognize discrepancies and suspicious indicators within each application. Some examples include: passport photos of the same investigator on multiple applications; a 62-year-old applicant using a Social Security number issued in 2009; passport and driver's license photos showing about a 10-year age difference; and the use of a California mailing address, a West Virginia permanent address and driver's license address, and a Washington, D.C. phone number in the same application. These were fraud indicators that should have been identified and questioned by State. State also failed to crosscheck the bogus citizenship and identity documents in the applications against the same databases that it later used to detect GAO's other fraudulent applications. State used facial recognition technology to identify the photos of GAO undercover investigators and to stop the subsequent delivery of two passports but not to detect fraud in the three applications that GAO received, which all contained a passport photo of the same investigator.
DOD issued a directive signed by the Deputy Secretary of Defense that provides DOD’s antiterrorism policy and assigns responsibilities to DOD organizations for implementing antiterrorism initiatives. This directive places responsibility for developing antiterrorism policy and guidance with the Office of the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict. In this capacity, the Assistant Secretary of Defense issued an instruction (DOD Instruction 2000.16, DOD Antiterrorism Standards, June 14, 2001) that established 31 antiterrorism standards that DOD organizations, including the services, are required to implement. These standards address antiterrorism planning, training requirements, physical security measures, and related issues. The office also issued a handbook containing additional detailed guidance on antiterrorism policies and practices, including guidance on assessment methodology. The Joint Staff has also issued an installation-planning template to help installations prepare their antiterrorism plans. Additionally, each of the services has issued regulations, orders, and instructions to implement the DOD guidance and establish its own specific policies and standards. DOD and the services have recently revised some of these key guidance documents, and others are now under revision. The services assign responsibility for protecting installations from terrorist attacks to installation commanders, who identify and prioritize antiterrorism requirements. Installation commanders are to compose a prioritized list of antiterrorism requirements from annual assessments of threat, vulnerability, and the criticality of assets, which they submit to their respective major commands. The major commands merge the antiterrorism requirements from all of their installations, prioritize them, and forward their integrated list to the service’s headquarters. Similarly, the services merge and prioritize the antiterrorism requirements of their major commands, and the consolidated list is then used as a basis for funding decisions. The required assessments of threat, vulnerability, and criticality of assets form the foundation of each installation’s antiterrorism plan and support a risk management approach to resource allocation. These three assessments are designed to assess (1) the threats to the installation, (2) the installation’s vulnerabilities, and (3) the installation’s critical assets. The threat assessment identifies and evaluates potential threats on the basis of such factors as the threats’ capabilities, intentions, and past activities. This assessment represents a systematic approach to identify potential threats before they materialize. However, this assessment might not adequately capture some emerging threats, even in cases where the assessment is frequently updated. The risk management approach therefore uses vulnerability and asset criticality assessments as additional inputs to the risk management decision-making process. A vulnerability assessment identifies weaknesses that may be exploited by identified threats and suggests options that address those weaknesses. For example, a vulnerability assessment might reveal weaknesses in an installation’s access control system, its antiterrorism awareness training, or how mission-critical assets such as fuel storage sites and communications centers are protected. 
Teams of multidisciplinary experts skilled in such areas as structural engineering, physical security, and installation preparedness conduct these assessments. A criticality assessment evaluates and prioritizes assets and functions to identify which assets and missions are relatively more important to protect from attack. For example, important communications facilities, utilities, or major weapons systems might be identified as critical to the execution of U.S. military war plans and therefore receive additional protection. Criticality assessments provide information to prioritize resources while reducing the potential application of resources to lower-priority assets. The critical elements of a results-oriented management framework are not being used by the services to guide their antiterrorism efforts. In results-based management, program effectiveness is measured in terms of outcomes or impact rather than outputs (i.e., activities and processes). Results-oriented principles and elements, which we have derived from the Government Performance and Results Act, are presented in table 1. Benefits from a results-based management approach depend upon the combined use of all eight of the critical elements that appear in the table. These elements, when combined with effective leadership, can provide a management framework to guide major programs and activities. The critical elements of a results-oriented management framework were largely absent in the antiterrorism efforts of three services’ headquarters and at six of the eight commands we examined. Specifically, the services have not published and disseminated unambiguous, results-based strategic and performance goals for their antiterrorism efforts. Some service antiterrorism officials did articulate broadly stated goals—such as protecting personnel and material assets against terrorist attack and defeating terrorism—but these goals have not been endorsed and disseminated by service headquarters as servicewide goals, nor have the services described how these goals will be achieved or how they intend to evaluate results in terms of the goals. The Air Force, however, has taken some steps toward a results-based management framework. For example, it has published long-term goals and established service-level working groups to evaluate the effectiveness of its antiterrorism program and identify the actions needed to address or revise any unmet goals. Although the Air Force has taken these positive steps, Air Force officials acknowledge that the elements may not have been effectively articulated servicewide so that installations can understand the “big picture” and how all elements fit together. In fact, officials we contacted from Air Combat Command and the Air National Guard were not aware of the service-level goals or performance-planning elements. At the command level, a results-oriented management framework was largely absent in the antiterrorism efforts of six of the eight major commands we reviewed. For example, the Air Combat Command did not have overarching antiterrorism goals for its 15 bases, although command officials said that they planned to develop them. Also, the Army National Guard has not issued antiterrorism goals for its 3,900 armories and 211 installations and has no plan to do so. 
Two of the commands—the Army’s Forces Command and the Navy’s Atlantic Fleet—adopted aspects of a results-oriented framework, and officials said that they did so on their own initiative and without direction from their parent service. The Army Forces Command management framework contained most of the critical management elements, such as quarterly reviews, long-term and annual goals, clear performance measures, and identification of resource requirements. Army Forces Command officials said that the results-based management approach enables the command’s senior officers to monitor its progress toward short- and long-term goals and make necessary adjustments to the strategy and resource allocation to accomplish these goals. Forces Command officials attributed their management approach’s success, in large part, to the involvement of senior command officials and their endorsement of the approach. According to Army headquarters antiterrorism officials, the Forces Command management framework has been an effective approach and may be useful as a model for other major commands. The Navy’s Atlantic Fleet Command also articulated long-term goals and strategies to accomplish its antiterrorism goals. For example, the fleet developed a plan of action to address security deficiencies that were identified through assessments by establishing a database to track deficiencies and identify trends. The fleet also linked resource requirements to accomplish these steps and developed metrics to measure results. According to the Atlantic Fleet officials we spoke with, however, these strategies are not currently being used by the fleet to shape its antiterrorism efforts because the fleet is waiting for the Navy to issue servicewide antiterrorism goals. Atlantic Fleet officials stated they wanted to avoid having separate and different strategic plans for each command. The services and their major commands cite two primary reasons for not employing a results-based management framework to guide and implement their antiterrorism efforts. First, the services do not want to adopt goals and strategies that might prove inconsistent with DOD’s forthcoming Department-wide antiterrorism strategy. As discussed earlier, the Department was in the process of developing an antiterrorism strategy but suspended its efforts after the attacks on the World Trade Center and the Pentagon because of the pressing needs of the war on terrorism. DOD officials have indicated that they have reinitiated their efforts to develop a strategy but have not set a target date for its completion. The second reason cited by service officials for not employing a results-oriented management framework was that the strategic planning and performance planning called for by the Results Act apply to agencies and not to specific efforts such as antiterrorism. We agree that the services and major commands are not required by the Results Act to prepare strategic plans and performance plans specific to their antiterrorism efforts. Nonetheless, the Results Act offers a model for developing an effective management framework to improve the likelihood of successfully implementing initiatives and assessing results. Without a results-based management approach to prioritize, integrate, and evaluate their efforts, it will be difficult for the services and their major commands to systematically plan and implement antiterrorism programs or assess their progress in reducing the likelihood and impact of terrorist attacks. 
It is crucial that the services identify and support those efforts that are most likely to achieve long-term antiterrorism goals because funding is not sufficient to eliminate or mitigate all identified vulnerabilities. The services and commands we reviewed are generally following prescribed guidance and regulations to use the DOD risk management approach in developing their installation antiterrorism requirements, but a significant weakness exists with the oversight of this process. Specifically, the services are not required to evaluate the thoroughness of all installations’ annual risk management assessments or whether installations used required methodologies to perform these assessments. As previously discussed, under DOD’s antiterrorism approach, three assessments (threat, vulnerability, and criticality) provide the installation commanders with the information necessary to manage the risk of a terrorist attack and develop an antiterrorism program for the installation. DOD also provides guidance for completing these assessments, and it requires the military Departments, through the services, to oversee the antiterrorism efforts at their installations. In their oversight role, the military Departments, through the services, are required to ensure that installation antiterrorism efforts adhere to the antiterrorism standards established by DOD. To implement DOD’s required risk management approach, the services have issued supplements to DOD’s guidance requiring installations to conduct the three risk management assessments and indicating how these assessments should be performed. The supplemental guidance of three of the services—the Army, the Air Force, and the Marine Corps—requires service-specific methodologies to be used for the assessments. The commands, to which the services have delegated some oversight responsibility for installations’ antiterrorism efforts, generally verified that installations completed annual threat, vulnerability, and asset criticality assessments. Command officials indicated that they verify whether installations’ annual risk assessments have been completed in one of two ways: (1) through the request for and receipt of copies of the written assessments or (2) through verbal verification from the installation commanders. The Navy, however, does not require that annual vulnerability assessments be documented and does not verify that these assessments are completed. To provide oversight of the risk management process, DOD’s antiterrorism standards require a higher headquarters review of subordinate installations’ antiterrorism programs once every 3 years for installations that meet specific criteria. These reviews are conducted by teams of specialists skilled in various disciplines (such as engineering, intelligence, and security) from the Joint Staff, service headquarters, or major commands. The reviews assess, among other things, an installation’s antiterrorism plans, physical security, vulnerabilities and solutions for enhanced protection, and incident response measures. These reviews, however, do not routinely evaluate the methodology used to develop the annual installation assessments. Moreover, there is no requirement to review the antiterrorism programs of installations that do not meet DOD’s criteria for higher headquarters assessments. 
Because the results of assessments form the foundation of installation antiterrorism plans, which drive servicewide requirements, it is critical that assessments be performed consistently across each service to ensure that assessment results are comparable. According to DOD officials, installations’ risk assessments were not evaluated for two reasons. First, DOD does not specifically require the services and their commands to evaluate installation assessments. Second, several command officials indicated that evaluating assessment methodologies would provide little or no added value to the process. The Air Force and the Navy have initiatives under way that will place greater emphasis on the results of installations’ risk management efforts. Both services are using, to varying degrees, an automated risk management program that should improve visibility over installation assessments and the resulting antiterrorism requirements. This program—the Vulnerability Assessment Management Program—will enable service and command officials to track assessment results and prioritize corrective actions servicewide. The program will contain information about installations’ antiterrorism requirements and the threat, vulnerability, and asset criticality assessments that support these requirements. It is also designed to allow service officials to conduct trend analyses, identify common vulnerabilities, and track corrective actions. Service officials stated that this program will also enable them to evaluate the risk assessment methodologies used at each installation, but it is unclear how this will be accomplished. If installations’ risk assessments are not periodically evaluated to ensure that assessments are complete and that a consistent or compatible methodology has been applied, then commands have no assurance that their installations’ antiterrorism requirements are comparable or based on the application of risk management principles. Consequently, when the services and commands consolidate their antiterrorism requirements (through the process of merging and reprioritizing the requirements of their multiple installations), the result may not accurately reflect the services’ most pressing needs. For example, if a standard methodology is not consistently applied, then vulnerabilities may not be identified and critical facilities may be overlooked. In the Navy’s case, the lack of assessment documentation further limits the command’s ability to perform its oversight responsibility. DOD has reported that $32.1 billion has been allocated or requested for combating terrorism activities from fiscal year 1999 through fiscal year 2003; however, these reported amounts may not present a clear picture of total combating terrorism costs. Each year, DOD is required to provide Congress with a report on the funds allocated to combating terrorism activities. DOD’s reported annual combating terrorism allocations have risen from $4.5 billion in fiscal year 1999 to $10 billion in the fiscal year 2003 budget request. Significant uncertainty exists, however, regarding the accuracy of these reported amounts because over half are associated with personnel who may or may not be engaged in combating terrorism activities full-time. The National Defense Authorization Act for Fiscal Year 2000 requires DOD to provide Congress with an annual consolidated budget justification display that includes all of its combating terrorism activities and programs and the associated funding. 
In response, DOD has submitted a separate budget report for fiscal years 2001, 2002, and 2003 that portrays its allocation of funds within the four categories of combating terrorism: antiterrorism/force protection, counterterrorism, consequence management, and intelligence support. The most recent budget report, submitted to Congress in March 2002, includes the following: the combating terrorism program descriptions and budget request estimates for fiscal year 2003, the estimated budget for fiscal year 2002, and the actual obligations for fiscal year 2001. It also reflects the funding provided by the Defense Emergency Response Fund for fiscal years 2001 and 2002. If Congress passes the fiscal year 2003 budget request as submitted, annual funding to combat terrorism will increase 122 percent from fiscal year 1999 through fiscal year 2003—rising from $4.5 billion (actual obligations) to $10 billion (budget request), including the Defense Emergency Response Fund request for fiscal year 2003. (See fig. 1.) In total, DOD reports that $32.1 billion has been allocated for combating terrorism activities during this 5-year period. The dollar amounts shown in figure 1 do not include funding for the current global war on terrorism, such as military operations in Afghanistan, because the report is not intended to cover those activities. Although not clearly identified in DOD’s budget reports, our analysis estimates that $19.4 billion (60 percent) of the $32.1 billion combating terrorism funding is for military personnel ($14.1 billion) and civilian personnel and personnel-related operating costs ($5.3 billion); however, this estimate may be overstated. (See fig. 2.) In accordance with DOD’s Financial Management Regulation, the Department’s combating terrorism costs include funding for personnel in designated specialties that have combating terrorism missions, such as military police, civilian police, and security guards. The military services’ accounting systems do not track the time that individuals in these specialties spend on activities related to combating terrorism; therefore, the total personnel costs are reported even if the individuals spend only a portion of their time performing combating terrorism activities. The actual division of these personnel’s time between combating terrorism and unrelated activities (such as counterdrug investigations) varies, although all of these personnel are available to perform combating terrorism duties when needed. The $19.4 billion of estimated combating terrorism personnel costs shown in figure 2 consists of military personnel costs of $14.1 billion and estimated operation and maintenance civilian personnel costs of $5.3 billion. Other components of the total $32.1 billion shown include $4.3 billion from the Defense Emergency Response Fund and $8.4 billion in other appropriations, including procurement, research and development, and military construction. Officials in the Office of the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict recognize that improvements could be made in the budget report for next year and plan to consider ways to restructure its contents to include more summary information. Funding for antiterrorism requirements has increased since fiscal year 1999, but it is widely recognized that vulnerabilities at military installations will continue to outpace available funding. 
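The composition and growth of the reported amounts can be reproduced with simple arithmetic. The short check below uses only the figures cited above (in billions of dollars); it adds nothing beyond the report's own numbers.

```python
# Reported combating terrorism funding, fiscal years 1999-2003 (billions of dollars).
military_personnel = 14.1
civilian_personnel_estimate = 5.3   # GAO estimate; includes some nonpersonnel funds
emergency_response_fund = 4.3
other_appropriations = 8.4          # procurement, R&D, and military construction

personnel = military_personnel + civilian_personnel_estimate
total = personnel + emergency_response_fund + other_appropriations
print(f"Personnel-related: ${personnel:.1f} billion "
      f"({personnel / total:.0%} of ${total:.1f} billion)")

# Growth in annual funding from fiscal year 1999 obligations to the fiscal year 2003 request.
fy1999, fy2003_request = 4.5, 10.0
print(f"Increase: {(fy2003_request - fy1999) / fy1999:.0%}")
```

Running this reproduces the $19.4 billion (60 percent) personnel estimate, the $32.1 billion total, and the 122 percent increase cited in the report.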
It is therefore essential that funds be spent efficiently and effectively if the services are to achieve the highest level of protection possible for military personnel, equipment, and critical facilities and operations. Our analysis indicates that the military services generally are not applying a results-oriented management framework to guide their antiterrorism efforts, in part, because DOD does not yet have a Department-wide antiterrorism strategy. Without a results-oriented management framework to implement antiterrorism efforts and monitor results, the services, military commanders, and Congress will not be able to determine if past and future resources—which have been significantly increased—are achieving their desired results in the most efficient and effective manner. The services and commands we reviewed are adhering to prescribed policies and procedures and taking significant steps to improve their capability to use a risk management approach. We identified a significant weakness in the services’ current risk management approach, however, which limits their ability to ensure that required assessment methodologies are consistently used. As a result, there is limited assurance that assessment results—which ultimately drive funding allocations—have been achieved through a consistent assessment process prescribed by DOD guidance. This creates the potential that limited resources could be misapplied and important opportunities to improve an installation’s force protection posture could be overlooked. The Department’s annual combating terrorism report to Congress provides a detailed description of DOD funds allocated for combating terrorism activities, but that report should be viewed with caution because over half of the reported amounts are estimates that do not reflect the actual share of personnel time dedicated to combating terrorism. Consequently, as Congress considers DOD’s budget requests and oversees DOD’s combating terrorism activities, it may not have a clear picture of total costs incurred by DOD for this purpose. Because of the magnitude of the funds being allocated for, and the importance of antiterrorism efforts within, DOD, we recommend that simultaneous steps be taken within the Department to improve the management framework guiding these efforts. Accordingly, to establish a foundation for the services’ antiterrorism efforts, we recommend that the Secretary of Defense (1) direct the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict to accelerate and set a target date to issue a Department-wide antiterrorism strategy that will underpin each service’s efforts, and (2) work with each service to ensure that its management framework is consistent with this Department-wide strategy. To improve the effectiveness of the services’ antiterrorism efforts, we recommend that the Secretary of Defense direct the Secretaries of the Army, Navy, and Air Force to adopt and effectively communicate a results-oriented management framework, consistent with DOD’s overall antiterrorism strategy, to guide each service’s antiterrorism efforts. This framework should include the following: long-term goals, approaches to achieve the goals, and key factors that might significantly affect achieving the goals; and an implementation approach that provides performance goals that are objective, quantifiable, and measurable; resources to achieve the goals; performance indicators to measure outputs; an evaluation plan to compare program results with established goals; and actions needed to address any unmet goals. 
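Purely to illustrate how the elements recommended above could fit together, the sketch below represents a results-oriented plan as a simple data structure. The field names and the sample goal are hypothetical; they are not drawn from DOD guidance or from any service's actual plan.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PerformanceGoal:
    description: str   # objective, quantifiable, and measurable
    indicator: str     # performance indicator used to measure output
    target: float
    actual: Optional[float] = None

    def unmet(self) -> bool:
        return self.actual is None or self.actual < self.target

@dataclass
class ResultsOrientedPlan:
    long_term_goal: str
    approaches: list[str]       # approaches to achieve the goals
    resources: list[str]        # resources needed to achieve the goals
    goals: list[PerformanceGoal] = field(default_factory=list)

    def evaluate(self) -> list[str]:
        """Compare results with established goals and list any unmet goals."""
        return [g.description for g in self.goals if g.unmet()]

# Hypothetical example, for illustration only.
plan = ResultsOrientedPlan(
    long_term_goal="Reduce installation vulnerability to terrorist attack",
    approaches=["annual risk assessments", "corrective-action tracking"],
    resources=["security forces", "physical security upgrades"],
    goals=[PerformanceGoal(
        description="Complete vulnerability assessments at all installations",
        indicator="percent of installations assessed",
        target=100.0,
        actual=85.0,
    )],
)
print(plan.evaluate())
```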
To improve their risk management approach for identifying antiterrorism requirements, we recommend that the Secretary of Defense direct the Secretaries of the Army, Navy, and Air Force to require installation commanders to document all threat, vulnerability, and asset criticality assessments and to require periodic higher headquarters evaluations of the methodologies used by installations to conduct these assessments. Such an evaluation may be incorporated into the existing service-level review process; however, for those installations that are not covered by this process, the services should develop an alternative approach. To clarify the annual consolidated budget justification display for combating terrorism reported to Congress, we recommend that the Secretary of Defense highlight the military and civilian personnel funding included in the report and clearly indicate that these total personnel funds are reported even though the individuals may spend only a portion of their time performing combating terrorism activities. DOD agreed with all of our recommendations and stated that it is accelerating the development of an antiterrorism strategy and working with the military services to ensure that a consistent approach is followed across the Department. In commenting on this report, DOD said that it would publish an antiterrorism strategic plan by January 2003 that articulates strategic goals, objectives, and an approach to achieve them. Moreover, DOD will require each service to develop its own antiterrorism strategic plan that complements and supports the Department’s plan. DOD also agreed to improve its risk management process for establishing antiterrorism requirements. In its comments, DOD said that it is revising guidance to require validation of the methodologies installations use to perform threat, vulnerability, and asset criticality assessments, as well as the thoroughness of these assessments, as part of regularly scheduled antiterrorism program reviews. DOD agreed with our recommendation to clarify how personnel costs that appear in the Department’s annual combating terrorism funding report to Congress were calculated. In its fiscal year 2004 combating terrorism funding report to Congress, DOD plans to highlight the personnel costs and the methodology used to determine them. DOD officials also provided technical comments that we have incorporated as appropriate. DOD’s written comments are reprinted in their entirety in appendix II. We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6020. Key contributors to this report are listed in appendix III. The scope of our study was limited to the antiterrorism preparedness of Department of Defense (DOD) installations in the continental United States. To perform our review, we contacted the antiterrorism offices for each of the four military services, as well as two commands within each service. We selected an active-duty command from each service that was responsible for a large number of installations and that had a key role in providing personnel and weapons systems for military operations. 
Additionally, we selected a reserve command from each service because reserve commands typically have smaller installations than active-duty commands do; consequently, a large number of their installations do not receive service-level reviews of their antiterrorism efforts. To determine whether the services use a results-oriented management framework to guide their antiterrorism efforts, we met with Office of the Secretary of Defense and service headquarters and command antiterrorism officials, and reviewed their strategic-planning documents for evidence of the critical elements of a strategic plan and performance plan—as embodied in the Government Performance and Results Act of 1993. We also reviewed service- and command-specific documents, such as campaign plans, operating orders, and briefing slides, which describe and communicate the management structure of the services’ and commands’ antiterrorism programs. We interviewed officials and gathered relevant documentation for our review primarily from the following DOD organizations located in the Washington, D.C., area: Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict. Headquarters, Department of the Army, Force Protection and Law Enforcement Division, Antiterrorism Branch. Headquarters, Department of the Navy, Interagency Support and Antiterrorism/Force Protection Division. Headquarters, Department of the Air Force, Force Protection Branch, Directorate of Security Forces. Headquarters, U.S. Marine Corps, Homeland Defense Branch, Security Division. We also spoke with officials from the following commands, who provided data on the number of domestic installations within their respective commands. Army Forces Command, Atlanta, Georgia (number of installations = 11). Navy Atlantic Fleet, Norfolk, Virginia (number of installations = 18). Air Combat Command, Hampton, Virginia (number of installations = 16). Marine Forces Atlantic, Norfolk, Virginia (number of installations = 7). Army National Guard, Arlington, Virginia (number of installations = 165). Naval Reserve Force, New Orleans, Louisiana (number of installations = 116). Air National Guard, Arlington, Virginia (number of installations = 69). Marine Force Reserve, New Orleans, Louisiana (number of installations = 42). To determine the extent to which the military services use risk management analysis to develop antiterrorism requirements, we obtained relevant documents and interviewed antiterrorism officials from the organizations and commands previously listed as well as the following organizations: Joint Staff Directorate for Combating Terrorism Programs and Requirements, Washington, D.C. Air Force Security Forces Center, Lackland Air Force Base, San Antonio, Texas. We reviewed DOD as well as Joint Staff-, service-, and command-specific regulations, orders, pamphlets, manuals, and other antiterrorism guidance to determine whether organizations were required to perform the three assessments (of threat, vulnerability, and asset criticality) that comprise risk management to identify and prioritize antiterrorism requirements. We also reviewed these documents for procedures and directions on how these assessments are to be performed. We spoke with headquarters and command officials about their involvement in overseeing how installations identify antiterrorism requirements and about their process for merging, reprioritizing, and funding these installation requirements. 
Additionally, we spoke with Air Force and Navy headquarters officials as well as officials from the Air Force Security Forces Center about the utility of the Vulnerability Assessment Management Program for prioritizing and tracking installation antiterrorism requirements servicewide. To identify funding trends and determine if DOD accurately and completely reports its combating terrorism funding to Congress, we obtained and analyzed the three annual combating terrorism activities budget reports that cover fiscal years 1999 through 2003. We did not independently verify the information contained in the funding reports, although we did examine the methodology and assumptions that were used to develop the information. We discussed how the budget report is reviewed and consolidated with officials from the DOD Comptroller’s Office, the Office for Special Operations and Low-Intensity Conflict, and the Program Analysis and Evaluation Directorate. To determine if the military services’ funding information is accurate and complete, we interviewed budget officials responsible for compiling the information for each service. To estimate the combating terrorism personnel funding that appears in figure 2, we analyzed 5 fiscal years of funding from the previously mentioned combating terrorism budget reports. The $14.1 billion of military personnel presented in the figure represents appropriations for military personnel for combating terrorism. We estimated civilian personnel funding by combining the four antiterrorism activities that contain most of the operation and maintenance funds for personnel: physical security management and planning, security forces and technicians, law enforcement, and security and investigative matters. DOD’s budget report does not distinguish civilian personnel funds from the other funds contained in these activities; therefore, our estimate of civilian personnel funds includes the nonpersonnel funds as well. However, we believe that the estimate is appropriate on the basis of our analysis of DOD’s budget report and discussions with DOD officials. We could not determine the civilian personnel funds embedded in other operation and maintenance activities and in research and development activities and, therefore, did not include them in our estimate of personnel funding. We conducted our review from February through August 2002 in accordance with generally accepted government auditing standards. In addition to those named above, Alan Byroade, J. Paul Newton, Marc Schwartz, Corinna Wengryn, R. K. Wild, Susan Woodward, and Richard Yeh made key contributions to this report. Combating Terrorism: Department of State Programs to Combat Terrorism Abroad. GAO-02-1021. Washington, D.C.: September 6, 2002. Port Security: Nation Faces Formidable Challenges in Making New Initiatives Successful. GAO-02-993T. Washington, D.C.: August 5, 2002. Combating Terrorism: Preliminary Observations on Weaknesses in Force Protection for DOD Deployments Through Domestic Seaports. GAO-02-955TNI. Washington, D.C.: July 23, 2002. Combating Terrorism: Critical Components of a National Strategy to Enhance State and Local Preparedness. GAO-02-548T. Washington, D.C.: March 25, 2002. Combating Terrorism: Key Aspects of a National Strategy to Enhance State and Local Preparedness. GAO-02-473T. Washington, D.C.: March 1, 2002. Homeland Security: Challenges and Strategies in Addressing Short- and Long-Term National Needs. GAO-02-160T. Washington, D.C.: November 7, 2001. 
Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts. GAO-02-208T. Washington, D.C.: October 31, 2001. Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness. GAO-01-162T. Washington, D.C.: October 17, 2001. Homeland Security: Key Elements of a Risk Management Approach. GAO-02-150T. Washington, D.C.: October 12, 2001. Homeland Security: A Framework for Addressing the Nation’s Issues. GAO-01-1158T. Washington, D.C.: September 21, 2001. Combating Terrorism: Selected Challenges and Related Recommendations. GAO-01-822. Washington, D.C.: September 20, 2001. Combating Terrorism: Actions Needed to Improve DOD’s Antiterrorism Program Implementation and Management. GAO-01-909. Washington, D.C.: September 19, 2001. Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy. GAO-01-556T. Washington, D.C.: March 27, 2001. Combating Terrorism: Linking Threats to Strategies and Resources. GAO/T-NSIAD-00-218. Washington, D.C.: July 26, 2000. Combating Terrorism: Action Taken but Considerable Risks Remain for Forces Overseas. GAO/NSIAD-00-181. Washington, D.C.: July 19, 2000. Chemical and Biological Defense: Program Planning and Evaluation Should Follow Results Act Framework. GAO/T-NSIAD-00-180. Washington, D.C.: May 24, 2000. Chemical and Biological Defense: Observations on Actions Taken to Protect Military Forces. GAO/T-NSIAD-00-49. Washington, D.C.: October 20, 1999. Critical Infrastructure Protection: Comprehensive Strategy Can Draw on Year 2000 Experiences. GAO/AIMD-00-1. Washington, D.C.: October 1, 1999. Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attacks. GAO/NSIAD-99-163. Washington, D.C.: September 7, 1999. Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency. GAO/NSIAD-99-3. Washington, D.C.: November 12, 1998. Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments. GAO/NSIAD-98-74. Washington, D.C.: April 9, 1998. Combating Terrorism: Efforts to Protect U.S. Forces in Turkey and the Middle East. GAO/T-NSIAD-98-44. Washington, D.C.: October 28, 1997. Combating Terrorism: Status of DOD Efforts to Protect Its Forces Overseas. GAO/NSIAD-97-207. Washington, D.C.: July 21, 1997.
After the September 11, 2001, terrorist attacks, domestic military installations increased their antiterrorism measures to their highest levels. These measures were reduced in the weeks following the attacks, but because of the persistent nature of the threat, the antiterrorism posture at domestic installations remains at a higher-than-normal level more than 1 year later. The Department of Defense's (DOD) budget request for fiscal year 2003 includes more than $10 billion for combating terrorism activities, which includes a substantial increase in funding for antiterrorism measures to safeguard personnel and address strategic issues. The service headquarters GAO reviewed did not use a comprehensive results-oriented management framework to guide their antiterrorism efforts. According to service officials, a comprehensive results-oriented management framework for antiterrorism efforts is not consistently used across all services and commands because DOD does not require it, and service officials indicated that they were reluctant to develop such an approach before the forthcoming DOD-wide antiterrorism strategy was issued. Although the Department has recently restarted its efforts toward developing this strategy, it has not set a specific time frame for its completion. The services and commands are following prescribed guidance and regulations to conduct risk management analyses to support their antiterrorism requirements, but significant weaknesses exist with the current approach. The commands do not always require documentation of the assessments, and they do not periodically evaluate the assessment methodology used at each installation to determine the thoroughness of the analyses or their consistency with the required assessment methodology. DOD has reported that $32.1 billion has been allocated or requested for combating terrorism activities from fiscal year 1999 through fiscal year 2003; however, these reported amounts may not present a clear picture of total combating terrorism costs. GAO's analysis indicates that $19.4 billion of this amount is for military and civilian personnel and personnel-related operating costs associated with individuals in designated specialties that have combating terrorism-related missions, such as military police, civilian police, and security guards.
DOD’s strategy for planning, executing, and funding its weapon system acquisition programs relies on three principal decision-making systems. First, the Joint Capabilities Integration and Development System (JCIDS) is a requirements system used to assess gaps in warfighting capabilities and recommend solutions to resolve those gaps. Second, the Defense Acquisition System is used to manage the development and procurement of weapon systems and other equipment. Third, the Planning, Programming, Budgeting, and Execution process is used to allocate resources and is intended to provide a framework from which the department can articulate its strategy; identify force size, structure, and needed equipment; set program priorities; allocate resources to individual programs; and assess program performance. All three of these systems can incur lengthy time frames. For example, the requirements system can take an average of up to 10 months to validate a need. The acquisition system involves large budgets and generally meets materiel warfighter needs in 2 or more years, with some systems taking decades to develop and procure. The budgeting process is calendar driven, taking nearly 2 years from planning to the beginning of budget execution. We have previously reported on challenges the department faces within each of these systems. Each of the military services has established processes to address urgent warfighter needs. Our review focuses primarily on the following: The Army established its Operational Needs Statement process in 1987 to provide a way for unit commanders to identify urgent needs for new materiel or new capabilities. The Office of the Deputy Chief of Staff G3/5/7 oversees the process. Prior to the wars in Afghanistan and Iraq, the Army received about 20 requests per year. From September 2006 to February 2010 the Army’s database shows 6,712 Operational Needs Statements containing 21,864 urgent needs requests that have been or are being processed to support operations in those two theaters. The Army’s process supports deployed units, deploying units, and units conducting their assigned missions, and responds to a variety of urgent needs, from new capabilities to shortfalls of existing equipment in theater, to requests for training equipment for mobilizing units in the United States. Operational field commanders also use the Army’s process to document the urgent need for a materiel solution to correct a deficiency or to improve a capability that impacts upon mission accomplishment. In September 2006, the Equipment Common Operating Picture, an automated processing tool for Army urgent needs, became operational. This data management tool is a classified, Web-based application for processing urgent needs from the unit submitting the request through all phases of the process. According to the user’s guide, the tool was designed to simplify requests, consolidate existing sources of information, and significantly speed the approval process while providing situational awareness to all involved in a request. The Marine Corps created its Urgent Universal Needs Statement process in November 2003 to meet the immediate operational needs of deployed forces or forces preparing to deploy. The Marine Corps Combat Development Command oversees this process. The command establishes guidance and direction, and provides oversight to ensure solutions are effectively and efficiently delivered to the warfighter. The Marine Corps received 574 requests through the process between December 2001 and November 2009. 
In August 2007, the Marine Corps’ Virtual Universal Urgent Needs Statement data management system for processing urgent needs requests became operational. The Corps developed this system as a result of a Lean Six Sigma continuous improvement initiative to replace the manually updated Combat Development Tracking System. In addition to the military services’ urgent needs processes, The Bob Stump National Defense Authorization Act for Fiscal Year 2003 (the Fiscal Year 2003 NDAA) directed the Secretary of Defense to create a process to rapidly meet the urgent needs of combatant commands and the Joint Chiefs of Staff. Specifically, Section 806 of the act required the Secretary of Defense to prescribe procedures for the rapid acquisition and deployment of items that are currently under development by DOD or available from the commercial sector, and that are urgently needed to react to an enemy threat or to respond to significant and urgent safety situations. According to the legislation, the procedures should include a process for demonstrating, rapidly acquiring, and deploying items that meet the needs communicated by the combatant commanders and the Joint Chiefs of Staff. In September 2004, the Deputy Secretary of Defense directed the Under Secretary of Defense (AT&L) and the Under Secretary of Defense (Comptroller) to create the Joint Rapid Action Cell (JRAC), later renamed the Joint Rapid Acquisition Cell, to facilitate meeting the urgent material and logistics requirements which combatant commanders certify as operationally critical. Subsequently, in November 2004, the Deputy Secretary of Defense provided guidance on the procedures, roles, and responsibilities of the JRAC and on the identification and validation of urgent operational needs. The Deputy Secretary’s memo defines urgent operational needs as urgent, combatant commander-prioritized operational needs that, if left unfilled, could result in loss of life and/or prevent the successful completion of a near-term military mission. The memo defines immediate warfighter needs as urgent operational needs requiring a timely materiel or nonmateriel solution in 120 days or less that, if left unfilled, could result in loss of life and/or prevent the successful completion of a near-term military mission. An executive director leads JRAC and reports to the Director, Rapid Fielding, within DDR&E and under the Office of the Under Secretary of Defense (AT&L). JRAC’s Core Group consists of full-time professional staff and part-time senior executives and military officers from the offices of the Under Secretary of Defense (Comptroller), DOD General Counsel, and Chairman of the Joint Chiefs of Staff. An Advisory Group supports the Core Group and includes pertinent Under or Assistant Secretaries based on the specific need. Just weeks before the Deputy Secretary issued the November 2004 guidance, the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (the Fiscal Year 2005 NDAA) was enacted. It amended section 806 of the Fiscal Year 2003 NDAA by providing the Secretary of Defense with a rapid acquisition authority to respond to combat emergencies. Under that authority, when the Secretary of Defense, without delegation, determines in writing that equipment is urgently needed to eliminate a combat capability deficiency that has resulted in combat fatalities, the Secretary is to use procedures developed under this section to accomplish the rapid acquisition and deployment of the needed equipment. 
The amendment states that whenever the Secretary makes the above determination, the Secretary shall designate a senior official to ensure that the needed equipment is acquired and deployed as quickly as possible, with a goal of awarding a contract within 15 days. Also, under the amendment, the Secretary is to authorize the senior official to waive certain provisions of law, policy, directive, or regulation that would unnecessarily impede the rapid acquisition and deployment of the needed equipment. The amendment also stated that the "authority of this section may not be used to acquire equipment in an amount aggregating more than $100,000,000 during any fiscal year." In addition, the amendment stated that "[f]or acquisitions of equipment under this section during the fiscal year in which the Secretary makes the determination [that equipment is urgently needed to eliminate a combat capability deficiency that has resulted in combat fatalities] with respect to such equipment, the Secretary may use any funds available to the Department of Defense for that fiscal year." The Chairman of the Joint Chiefs of Staff (CJCS) issued an instruction in July 2005 establishing policy and procedures to facilitate the assessment, validation, sourcing, resourcing, and fielding of operationally driven urgent combatant command needs during the same fiscal year that a request is made. According to the instruction, combatant commanders involved in ongoing operations identify joint urgent needs as life- or combat mission-threatening needs based on unforeseen military requirements that must be resolved in days, weeks, or months. Under the instruction, a joint urgent need must be considered inherently joint in nature; for example, the need is theaterwide and/or spans multiple military services. Joint urgent needs must also fall outside of DOD's established guidance for weapon systems acquisition and the military services' established urgent operational needs processes. JRAC has applied its guidance to process joint urgent needs meeting these criteria. The instruction delegates shared oversight responsibility of the process to the Joint Staff J-8 Director for Force Structure, Resources and Assessment; the Under Secretary of Defense (Comptroller) Deputy Comptroller for Program and Budget; and JRAC within the Office of the Under Secretary of Defense (AT&L). Data for joint urgent needs are managed through the CENTCOM Requirements Information Manager database system, which the command added to a preexisting Web site it managed in 2005. The database includes 283 joint urgent needs requests from August 2004 through February 2010. The Army, Marine Corps, and joint urgent needs processes have some distinctions in guidance, terminology, and data systems; however, they share similar decision points. Although each of these urgent needs processes is distinct, we identified seven broad phases that we used to track the progression of each request over time and to compare performance across the Army's Operational Needs Statement process, the Marine Corps' Urgent Universal Needs Statement process, and the Joint Urgent Operational Needs processes. These phases are: initiation, theater endorsement, command validation, headquarters approval, funding, contract award, and initial fielding.
Urgent needs requests that result in fielded solutions typically move through the process as follows:

Initiation: Any of the three urgent needs processes can begin when either a warfighter in the theater of operations or an official at the theaterwide or combatant command level identifies a need and an officer with a rank of Colonel or higher submits the request into the relevant Army, Marine Corps, or joint process. The request could be for either a known, specific piece of equipment or for an unknown materiel or nonmateriel solution based on a description of a capability gap.

Theater Endorsement: Theater command leadership reviews, endorses, and forwards a request for component or combatant command validation. For example, a joint urgent needs request from a warfighter in Iraq would be reviewed and endorsed by theater commands such as Multi-National Force-West, Multi-National Corps-Iraq, or Multi-National Force-Iraq.

Command Validation: Endorsed urgent needs requests from Iraq or Afghanistan are elevated to the appropriate commandwide leadership—U.S. Central Command, U.S. Army Forces Central, or U.S. Marine Corps Central Command—for validation or rejection.

Headquarters Approval: Validated Army urgent needs requests are sent to the Office of the Deputy Chief of Staff for the Army G3/5/7 directorate, while Marine Corps urgent needs are sent to the Marine Corps Requirements Oversight Council for its headquarters approval. The combatant commander sends joint urgent needs to the Joint Chiefs of Staff, who send the need to JRAC concurrently in order to alert it of the impending request. Upon headquarters approval, JRAC assigns the requests for capabilities related to countering improvised explosive devices to JIEDDO. For all other joint urgent needs, JRAC designates a military service to sponsor the procurement and fielding of a solution.

Funding: The military service or joint sponsor applies funds to the program office to begin the procurement of approved solutions. When funds are not already available, the services may obtain funding for an urgent need through the annual budget process, by reprogramming funds from other programs during the current fiscal year, or by requesting the Secretary of Defense to invoke the department's rapid acquisition authority. For joint urgent needs requests, JRAC may assist in identifying available funding as needed. In previous years, sponsors have also requested funding for urgent needs through the wartime supplemental appropriation.

Contract Award: The appropriate military service or joint program office develops and executes an acquisition strategy in order to procure the solution. Among other options, a new contract may be awarded using competitive procedures or as a sole source, as provided in the Federal Acquisition Regulation (FAR), or an existing contract could be amended or modified. The rapid acquisition authority may be available for the acquisition and deployment of some equipment.

Production and Initial Fielding: The program office manages the production and fielding of solutions to the theater. Some solutions may be readily available from current DOD inventory or from commercial vendors, while others may require modifications to existing equipment or substantial efforts to research, develop, and produce new technologies.
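Taken together, these phases amount to a sequence of dated milestones for each request, a point that matters for the later discussion of DOD's data systems. The following minimal sketch illustrates one way such milestone dates could be captured and compared; it is purely illustrative, and the class, field, and identifier names are hypothetical rather than drawn from DOD's actual systems.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# The seven phases described above, in order. Names here are illustrative only.
PHASES = [
    "initiation",
    "theater_endorsement",
    "command_validation",
    "headquarters_approval",
    "funding",
    "contract_award",
    "initial_fielding",
]

@dataclass
class UrgentNeedRequest:
    request_id: str
    process: str  # e.g., "Army ONS", "Marine Corps UUNS", or "joint UON"
    milestones: dict = field(default_factory=dict)  # phase name -> date achieved

    def record(self, phase: str, achieved: date) -> None:
        """Record the date a phase was actually achieved (not the data-entry date)."""
        if phase not in PHASES:
            raise ValueError(f"unknown phase: {phase}")
        self.milestones[phase] = achieved

    def days_between(self, start: str, end: str) -> Optional[int]:
        """Elapsed days between two recorded phases, or None if either is missing."""
        if start in self.milestones and end in self.milestones:
            return (self.milestones[end] - self.milestones[start]).days
        return None

# Example: a hypothetical request endorsed in theater in January and first fielded in June.
req = UrgentNeedRequest("ONS-0001", "Army ONS")
req.record("theater_endorsement", date(2009, 1, 10))
req.record("initial_fielding", date(2009, 6, 1))
print(req.days_between("theater_endorsement", "initial_fielding"))  # 142
```

A record along these lines would, for example, make the elapsed time from theater endorsement to initial fielding directly computable for any request.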
Combatant commanders have sometimes made strategic or tactical changes that eliminate the need for a solution. An urgent needs request could also be addressed by existing equipment that was previously unavailable until changes in the combatant commander's plans and priorities made the equipment available in the theater. In addition, a joint urgent needs request that meets the criteria of another urgent needs process may be rerouted; for example, counter-IED capability gaps may be redirected to JIEDDO for resolution. Beginning in fiscal year 2003, Congress began appropriating funds to the Iraq Freedom Fund. Over 4 years, from fiscal year 2005 to fiscal year 2008, OSD provided approximately $442.54 million from the Iraq Freedom Fund to DOD components seeking to fund solutions to joint urgent needs, as shown in table 1. In fiscal years 2005-2008, JRAC administered funds appropriated to the Iraq Freedom Fund to assist sponsors in funding solutions to 31, or about 30 percent, of an estimated 102 approved joint urgent needs not related to countering improvised explosive devices. Funding for joint urgent needs then declined to less than $34 million in fiscal year 2008 and to nothing in fiscal years 2009 and 2010. When funds are not provided by one of the services or other DOD components, after the department's annual budget has been approved, OSD can fund urgent needs, among other ways, by invoking the rapid acquisition authority granted by Congress and/or by utilizing the department's existing authority to reprogram or transfer. Although DOD has taken steps to create urgent needs processes that are more responsive to urgent warfighter requests than its traditional acquisition procedures, DOD is unable to fully assess how well the urgent needs processes are addressing critical deficiencies or to measure the effectiveness of solutions fielded in the theater because it has not established an effective management framework for those processes. GAO's Standards for Internal Control in the Federal Government provides a general framework for management control of agencies' operations. In implementing this framework, management is responsible for developing detailed policies, procedures, and practices to fit their agency's operations and to ensure that those controls are built into and are an integral part of operations. Internal control, which is synonymous with management control, helps government program managers achieve desired results. However, we found that DOD's guidance for its joint urgent needs processes is fragmented and outdated, in addition to lacking other important internal controls. As a result, the department does not have the tools it needs to fully assess how well its processes are working to address critical warfighter needs, to manage their performance, to ensure the efficient use of resources, and to make decisions regarding the long-term sustainment of a fielded combatant command capability. Existing guidance for the joint urgent needs process is fragmented among several documents and is outdated, which is inconsistent with federal internal control standards that prescribe the establishment of a clearly defined organizational structure that provides a framework to achieve agency objectives. We found that guidance for DOD's urgent needs processes is widely dispersed among several memoranda from the Secretary of Defense, the Deputy Secretary of Defense, and the Under Secretary of Defense (AT&L), and an instruction from the Chairman of the Joint Chiefs of Staff.
For example, OSD's guidance describing how joint urgent needs should be processed is contained in memoranda issued in September and November 2004, and March 2005. In addition, the Chairman of the Joint Chiefs of Staff issued an instruction in July 2005 establishing the policies and procedures for warfighters in the theater and combatant commanders to identify, review, and approve joint urgent needs. As a result, the guidance does not frame a cohesive common operating picture that explains how the process should function. Further, neither the November 2004 memo nor the Chairman's instruction has been updated since its creation, although significant changes in the urgent needs process have occurred since both were issued. Once received and approved by the Joint Chiefs of Staff, a joint urgent need is handed off to JRAC for disposition through additional phases of the process that address funding, acquisition, and fielding as outlined in Deputy Secretary of Defense memoranda. The Deputy Secretary of Defense memo from November 15, 2004, outlined procedures for JRAC to follow in facilitating joint urgent needs and included a provision directing that the guidance remain in effect for 3 years after it had been approved, at which time a determination would be made as to the continued existence of JRAC. However, OSD has not released additional guidance or amended the current guidance to address this provision, and JRAC continues to operate and facilitate the urgent needs process, more than 5 years after the guidance was issued. According to GAO's Internal Control Management and Evaluation Tool, one of the steps management can take to ensure consistency with internal controls is to periodically evaluate the organizational structure and make changes as necessary in response to changing conditions. Neither the November 2004 memo nor the Chairman's instruction has been updated to incorporate guidance regarding how the rapid acquisition authority is to be implemented. Among other things, the operational guidance for the joint urgent needs process could delineate for potential requestors the advantages of using the authority, the circumstances under which a request for the use of the authority should be contemplated, what factors might persuade the Secretary that a given request is a good candidate for the use of the authority, as well as how and when the use of waivers would be appropriate under the rapid acquisition authority. This kind of information could be useful to officials assigned the responsibility of processing urgent need requests and finding funds for those requests. In addition, the Deputy Secretary of Defense memo defines immediate warfighter needs as urgent operational needs requiring a timely solution within 120 days or less. According to JRAC officials, because they have found it difficult to complete all phases of the joint process and field a solution in 120 days, in practice, they have modified this time frame by extending it to between 120 days and 2 years. The modification of this time frame occurred informally and has not been documented in guidance. Also, it remains unclear whether OSD approval is required to change the time frame or whether authority is delegated to JRAC to make this change, which affects the standard for timeliness in meeting urgent warfighter needs. Additionally, the November 2004 Deputy Secretary of Defense memorandum defines the terms urgent operational need and immediate warfighter need differently.
Officials relate that, in practice, there is no longer a distinction between the two and both have been subsumed in the term joint urgent operational need, and are treated as one and the same. JRAC staff completed a Lean Six Sigma study of the joint urgent needs process. According to JRAC officials, they plan to use the findings of that study to guide improvements to the process. However, because this effort is still ongoing, it is unclear to what extent any actions taken as a result of this study will address the issues we have identified. As a result of its current organizational structure and lack of comprehensive, updated guidance, DOD cannot be assured that the objectives of the joint urgent needs process are being achieved as effectively as possible. Urgent needs guidance for the joint process does not clearly define the roles and responsibilities of OSD, the Joint Chiefs of Staff, and the military services in implementing, monitoring, and evaluating all phases. Federal internal control standards call for clearly established areas of authority, responsibility, and appropriate lines of reporting for federal programs. For example, the Chairman of the Joint Chiefs of Staff instruction directed the creation of the Budget Office Director’s Board within the Joint Staff to adjudicate funding during the same fiscal year that a request is made for solutions for joint urgent needs. According to the Chairman’s instruction, the board is responsible for reviewing and approving recommendations to fund joint urgent needs, and to direct the reprogramming of funding from military services’ or agencies’ budgets to do so. However, this board has never convened, and JRAC has assumed responsibility for identifying funding to procure solutions to joint needs. The November 2004 Deputy Secretary of Defense memorandum states that the JRAC is to assist in resolving issues impeding the resolution of joint urgent needs, but the memorandum does not give JRAC the authority or responsibility for identifying funding for solutions. Rather, the guidance states that the military services, defense agencies, and combatant commands are responsible for funding solutions. Further, the Chairman’s instruction and the November 2004 Deputy Secretary of Defense memorandum are inconsistent regarding the scope of solutions for joint urgent needs. For example, the Chairman’s instruction includes criteria for the scope of solutions to joint urgent needs, stipulating that they should not involve the development of a new technology or capability. The instruction further states that the acceleration of a new technology in progress or the minor modification of an existing system to adapt to a new or similar mission is within the scope of solutions to joint urgent needs. However, the November 2004 memorandum that governs the process after the Joint Chiefs of Staff approves the need does not prescribe such a limitation on the scope of solutions. According to JRAC officials, they have nonetheless received approved joint urgent needs where the proposed solutions are currently on hold due to their technological complexity. In the absence of clearly defined roles and responsibilities, the department faces difficulty in ensuring that the joint process is implemented efficiently and effectively and in identifying the appropriate personnel who are accountable for operations, stewardship of resources, and achieving results. 
With the approval of the Secretary of Defense, military services that sponsor solutions to joint urgent needs may use the rapid acquisition authority to expedite the acquisition and fielding of solutions. However, this authority is not defined or incorporated in DOD’s guidance for the joint urgent needs process. Internal control standards cite the importance of policies and procedures that enforce management’s directives, and become integral to an agency’s accountability for stewardship of government resources and achieving effective results. Once joint urgent needs are approved by the Joint Staff and passed on to JRAC for disposition, JRAC assigns military services to sponsor the acquisition and fielding of solutions to address those needs. Upon the Secretary of Defense’s approval, the military services may use the rapid acquisition authority created by the Fiscal Year 2005 NDAA. That legislation states that the Secretary of Defense is to use procedures developed under the authority of that legislation to rapidly acquire and deploy urgently needed equipment to eliminate a combat deficiency that has resulted in combat fatalities and, if necessary, to waive laws, policies, directives, or regulations addressing the solicitation and selection of sources and the award of the contract, in order to rapidly acquire and deploy the equipment. As a result of DOD not including the rapid acquisition authority in its guidance, program managers may not be aware of all procedures available to them for fielding solutions quickly to the theater. The online data management systems of the joint and Army urgent needs processes lack comprehensive, complete, and reliable information on the achievement of key process phases, as well as the ability to generate reports to track key dates and activities because DOD guidance has not established standards for the collection and management of urgent needs data. GAO’s Standards for Internal Control cites the significance of accurately documenting events and creating and maintaining records as evidence of the execution of agency activities. In addition, those standards call for the proper classification of transactions and events that includes appropriate organization and formatting of information from which reports and statements are prepared. Relevant, reliable, and timely communications and effective information technology management are critical to achieving useful, reliable, and continuous recording and communication of information. However, the milestone data located in the joint and Army databases are often incomplete and unreliable. Although both joint and Army systems generally contain documentation to support completion of milestones at the early phases of the processes such as theater command endorsement and headquarters leadership approval, once a request is delegated to the acquisition community for procurement and fielding, visibility into subsequent actions is largely lost. For example, the joint system rarely contains detailed information and support documentation regarding the funding, contract award, or production and fielding of solutions. Additionally, the Army database does not contain information regarding acquisition milestones following the approval of a funding strategy. As a result, data limitations can prevent managers and decision makers of the urgent needs processes from assessing the overall responsiveness and effectiveness of their processes. 
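To illustrate the kind of roll-up reporting these systems currently cannot produce, the sketch below computes, from hypothetical request records keyed by the seven phases described earlier, how many requests are missing each milestone and the median elapsed days between adjacent phases. It is illustrative only; the record layout, field names, and example dates are assumptions, not DOD's actual data structures.

```python
from datetime import date
from statistics import median

# Hypothetical summary report over request records of the form:
#   {request id: {phase name: date achieved}}
PHASE_ORDER = [
    "initiation", "theater_endorsement", "command_validation",
    "headquarters_approval", "funding", "contract_award", "initial_fielding",
]

def summarize(requests):
    """Count missing milestones and compute median days between adjacent phases."""
    missing = {phase: 0 for phase in PHASE_ORDER}
    spans = {f"{a}->{b}": [] for a, b in zip(PHASE_ORDER, PHASE_ORDER[1:])}
    for milestones in requests.values():
        for phase in PHASE_ORDER:
            if phase not in milestones:
                missing[phase] += 1
        for a, b in zip(PHASE_ORDER, PHASE_ORDER[1:]):
            if a in milestones and b in milestones:
                spans[f"{a}->{b}"].append((milestones[b] - milestones[a]).days)
    medians = {k: (median(v) if v else None) for k, v in spans.items()}
    return {"missing_milestones": missing, "median_days": medians}

# Example: one hypothetical request whose funding and later milestones were never entered,
# the kind of record gap described in the text.
example = {
    "JUON-0001": {
        "initiation": date(2008, 1, 5),
        "theater_endorsement": date(2008, 1, 20),
        "command_validation": date(2008, 2, 2),
        "headquarters_approval": date(2008, 2, 25),
    },
}
print(summarize(example))
```

A report of this kind would flag, for instance, that funding, contract award, and fielding dates are absent from a record, which is precisely the visibility the joint and Army systems lack once a request is delegated to the acquisition community.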
Further, Army policy instructs system managers to close out requests 120 days after the scheduled fielding date if no information regarding actual fielding is received. This may result in the closure of some requests without confirmation of whether solutions were actually fielded. Although the joint system contains the most detailed qualitative data of any of the three systems we reviewed, the dates cited for specific milestones do not reflect the dates on which those milestones were achieved, and instead reflect the dates the milestones were recorded in the joint system's electronic record. Consequently, the dates regarding the funding, acquisition, and fielding of solutions are vague or inaccurate, and the database usually lacks documentation to support the dates listed. Additionally, the joint and Army systems lack a capability to produce either management summary reports or complete historical information regarding the completion of phases, which prevents DOD from measuring responsiveness over time and initiating process improvements. Furthermore, managers of the Army, Marine Corps, and joint urgent needs processes lack visibility into other urgent needs data systems across the department, which limits their ability to determine if possible solutions to their urgent needs might have already been developed through other processes. Finally, none of the data systems we reviewed include information regarding the effectiveness of fielded solutions. As a result, DOD process managers are unable to identify potentially systemic problems that could otherwise be mitigated by process improvements and updates. The Joint Chiefs of Staff and JRAC are exploring information-sharing concepts and data exchange capabilities among DOD's urgent needs data systems through the modification of DOD's Knowledge Management Database System to improve visibility into urgent needs requests across the department. However, these improvements are in the very early stages, and according to DOD officials, it is uncertain when these efforts will be completed. Therefore, DOD's urgent needs database systems will continue to lack various characteristics and capabilities that would enable process managers to better assess the performance of their processes in responding to warfighter requests. The joint urgent needs process does not include a formal method for joint decision makers to receive feedback on how well fielded solutions have met the urgent needs for which they were requested. The Army assesses the performance of solutions that are fielded through its urgent needs process as well as those solutions from the joint process that the Army sponsors, and the Marine Corps is working to develop a similar performance assessment process. However, neither service's assessment process includes a mechanism for providing actionable performance feedback to joint decision makers. Internal control standards emphasize the importance of routine feedback and performance monitoring when assessing process effectiveness, and they direct agencies to assess the quality of performance over time. Such assessments can occur during normal operations and include regular management and supervisory activities.
While the Army makes information from its assessment process available to joint decision makers, the information is narrowly focused on issues specific to Army personnel and processes, and as such does not provide DOD, JRAC, or the Joint Chiefs of Staff with feedback assessing the extent to which those solutions met the joint urgent needs of the combatant command or whether given solutions should be sustained for the long term and acquired in the future through DOD's established requirements, budgeting, and acquisition process. We have previously reported that the department's established requirements process, JCIDS, has not met its objectives to identify and prioritize warfighting needs from a joint capability perspective. In 2008, we reported that capabilities continue to be driven primarily by the individual services and that DOD may be losing opportunities to improve joint warfighting capabilities. In responding to our 2008 report, DOD stated that identifying and prioritizing joint capabilities occurs through multiple processes within and outside JCIDS, including the joint urgent needs process. However, without a joint warfighter perspective on performance, there is not sufficient information to adequately assess whether a capability should transition to an acquisition program, particularly when the sponsoring service would like to phase out or terminate support of the capability. Joint Chiefs of Staff officials recognize the need for performance feedback on joint solutions; however, a previous attempt to establish a process for collecting performance feedback was unsuccessful. In 2007, the Joint Chiefs of Staff attempted to craft a feedback loop as part of an update to the Chairman's instruction for joint urgent needs. The draft revision failed to obtain DOD-wide approval and was canceled—in part due to disagreement over the feedback process outlined in the draft instruction. According to Joint Staff and JRAC officials, the combatant commands contended that their resources were focused on planning and managing contingency operations, and that providing feedback was a military service responsibility under Title 10. Conversely, according to officials, the military services believed that since solutions addressed joint urgent needs, feedback should be provided by the user, the combatant command. Nevertheless, in 2008, the Joint Staff reinitiated its effort to revise the Chairman's instruction and establish a feedback mechanism for joint urgent needs solutions. The draft revision was in coordination within the department at the time of our report. In May 2009, the Deputy Secretary of Defense directed U.S. Central Command to establish a joint requirements liaison office as a pilot program within its Afghanistan joint task force to assist in processing Army and joint urgent needs statements. At the time of our report, the program had not begun operations, and it was unclear whether it would collect performance feedback on joint solutions as part of its operations. Without adequate feedback information from the theater that addresses how well fielded solutions address the risks to warfighters and to their missions and whether solutions will be necessary for the future, DOD cannot assess the performance of the joint urgent needs process in meeting immediate and future warfighter needs.
Feedback provided by commanders in the field would better enable Joint Staff and military service officials to determine whether solutions are effective and whether they need to be sustained, adopted as a formal acquisition program, or suspended. In one case, DOD fielded a solution to a joint need for an airborne counter-improvised explosive device for more than 18 months, although it did not meet the warfighters' needs. Joint officials stated that the service did not track the operational effectiveness of the solution, called Angel Fire, and failed to provide feedback after initial fielding. The Angel Fire system provided a daytime-only solution, and did not meet the warfighter's request for a 24-hour surveillance capability. The warfighter then rescinded the urgent needs request in December 2008, and the Angel Fire aircraft were scheduled for removal from the theater to the United States. Internal controls prescribe that ongoing monitoring should occur in the course of operations to support timely actions when problems occur or require follow-up. Feedback information can help prevent the inefficient use of resources when participants spend time and funding on a solution that is ineffective. We also found that DOD's acquisition policy makes no reference to urgent needs or how program managers should respond to these needs. The department's acquisition policy is articulated in two principal documents: DoD Directive 5000.01, which describes management principles and mandatory policies and procedures for managing all acquisition programs, and DoD Instruction 5000.02, which describes the operation of the Defense Acquisition System. The Defense Acquisition Guidebook, published by DOD, complements these two policy documents and provides best business practices for the acquisition community. According to the Guidebook, the objective of the Defense Acquisition System is to rapidly acquire quality products that satisfy user needs with measurable improvements to mission capability at a fair and reasonable price; the fundamental principles and procedures that the department follows in achieving that objective are described in DoD Directive 5000.01 and DoD Instruction 5000.02. However, we reviewed these documents and found no discussions about or references to the joint urgent needs process. Because DOD's acquisition policy does not reference urgent needs guidance, program managers may be unaware of the range of options that may be available for responding to urgent warfighter needs and be unable to assess when use of the urgent needs process may be appropriate. Until very recently, Army Regulation 71-9, the guidance for force development and materiel requirements that governs the Army urgent needs process, had not been updated since before Operation Iraqi Freedom and Operation Enduring Freedom. To support these operations, the Army expanded the scope of its urgent needs process in late 2003 beyond providing solutions to address capability gaps identified by the warfighter as urgent needs to also include requests for items already available to units deploying for nonstandard missions. For example, an artillery unit deploying as an infantry unit will need fewer howitzers, but will need a greater number of armored vehicles. Other equipment may be necessary for counter-insurgency operations, but these items are not included in the unit's authorized list of equipment known as its Modified Table of Organization and Equipment.
Before the conflicts in Iraq and Afghanistan, Army headquarters staff processed fewer than 10 urgent needs requests per year, but this figure escalated significantly in the build-up to the invasion of Iraq and has continued to increase to about 290 per month in 2009. The volume of requests and the speed of change have strained the Army's urgent needs process. During our review, we found that Army Regulation 71-9 was ambiguous regarding time frames for approving urgent needs requests, did not sufficiently define roles and responsibilities, and did not sufficiently distinguish how urgent needs requests for new warfighter capabilities should be processed from more routine requests for equipment that is readily available. In a 2007 report, the Army Audit Agency also addressed these deficiencies and recommended corrective actions. The Army issued updated guidance for its urgent needs process on December 28, 2009, as we were completing our report. Headquarters staff now has a goal to provide an "initial response" within 14 days of receiving a request, and in total there is a 120-day goal for reviewing requests, but that goal can be changed to 30 days where "the urgency of warfighter needs dictate a more rapid response." While the updated guidance does provide more detail regarding roles and responsibilities, the Army still lacks standard operating procedures for Army headquarters officials to follow when processing urgent needs requests. Furthermore, while the updated guidance recognizes the dual use of the urgent needs process to address capability gaps and requests for items already available to units deploying for nonstandard missions, it does not distinguish how these different types of requests for solutions should be processed. Consequently, Army leadership continues to lack a means of assuring that its process is meeting warfighter needs as efficiently and effectively as possible and is consistent with internal control standards. During our field work in Iraq as well as our analysis of 23 urgent needs case studies, we found several challenges that hinder DOD's ability to respond to urgent warfighter needs as quickly as possible. We reviewed the joint, Army, and Marine Corps urgent needs processes across each of their seven phases and found that, with the exception of the Active Denial System, the urgent needs in all of our case studies were met by the initial fielding of solutions within 2 years of theater endorsement—which is within JRAC's modified time frame. The highest potential for extended response times occurred in the initiation and funding phases due to insufficient training, the lack of timely funding decisions, and other factors. Our case study analysis also demonstrated that attempts to meet urgent needs with immature technologies or with solutions that are technologically complex can lead to longer time frames for fielding solutions to urgent needs. Army personnel who utilize the joint and Army urgent needs processes do not receive adequate training on how to select which process to use to request a solution for an urgent need and how to submit and review requests. To acquire needed equipment, units may submit requests for theater-provided equipment or pursue new capabilities through the Army's rapid equipping force process, which equips operational commanders with commercial off-the-shelf and existing solutions, or the Army's Tank-automotive and Armaments Command's weapons loan program, in addition to one of the three urgent needs processes.
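The choice among these routes turns on a small number of criteria described in this report: whether the need is life- or mission-threatening, whether it is inherently joint, and whether the item is already available. The sketch below is a highly simplified, illustrative paraphrase of that routing logic; the function and category names are hypothetical, and real routing involves many considerations not modeled here.

```python
# Illustrative only: simplified routing logic paraphrased from the criteria described
# in this report; not an official DOD decision rule.
def suggest_route(is_life_or_mission_threatening: bool,
                  spans_multiple_services: bool,
                  item_already_available: bool,
                  requesting_service: str) -> str:
    if not is_life_or_mission_threatening:
        return "deliberate requirements and acquisition process (JCIDS/PPBE)"
    if item_already_available:
        return "theater-provided equipment or an equipment loan program"
    if spans_multiple_services:
        return "joint urgent operational need (combatant command and Joint Staff review)"
    if requesting_service == "Army":
        return "Army Operational Needs Statement"
    if requesting_service == "Marine Corps":
        return "Marine Corps Urgent Universal Needs Statement"
    return "consult the component force management office"

# Example: an Army-only capability gap that threatens the mission and is not in inventory.
print(suggest_route(True, False, False, "Army"))  # Army Operational Needs Statement
```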
According to Army theater command officials, some warfighters who need to request a critical capability do not know how to select the process most appropriate for their situation, and officers responsible for reviewing and processing the documentation in the theater do not receive adequate training on how the processes should function, which may result in inefficiencies and delays in fielding solutions to critical needs. In addition, the Army has expanded the scope of its urgent needs process beyond requests for new solutions to address capability gaps to also address equipment shortfalls resulting from units deploying in nonstandard roles. For example, an artillery unit may be deployed to perform a force protection mission, requiring a different mix of equipment than what is authorized to carry out its artillery mission. As we have previously reported, units are currently being deployed in nonstandard roles, and this has caused challenges across the force, in part because deploying units in nonstandard roles often encounter unanticipated equipment needs. According to Army requirements officials, the result has been a dramatic increase in the volume of urgent needs requests from 10 per year prior to September 11, 2001, to about 290 per month in 2009. However, the Army has not increased the number of staff available from prewar levels to provide support at headquarters despite the rising volume of requests. With the expansion of the scope of the urgent needs process, the Army found an increasing number of invalid requests because users do not understand what type of equipment can be requested through this process. According to Army requirements officials, about 97 percent of the Army urgent needs statements are requests for the reallocation of equipment already available. They estimated that over 557,000 pieces of equipment have been requested through the Army process alone. Further, theater command officials stated that the increased number of requests has also contributed to processing backlogs of urgent needs in theater, when the requests might have been more quickly addressed by theater-provided equipment or by the weapons loan program. Army officials stated that prior to deployment, replacement personnel are informed that a user's guide and help desk are available for the Army's Equipment Common Operating Picture data system used to process Army urgent needs requests. Theater command officials said uncertainty over how to address needs often results in officers submitting a larger number of urgent needs requests early in a unit's deployment. This uncertainty, combined with confusion regarding the different sources and processes available to address capability gaps or equipment shortfalls, can result in the inefficient use of resources and prolonged amounts of time needed to request and receive critical capabilities. According to DOD's strategic plan for transforming training, deploying personnel should receive priority for training and be responsive to the needs of the combatant commander across the full spectrum of operations. The prevailing principle of this strategic plan states that no one should experience a task in a real-world operation without having previously experienced that task in training or education. However, during our field work in Iraq, we found that the requirements officers who prepare urgent needs requests at the brigade level—where most urgent needs requests originate—are not well trained in the processes.
Marine Corps officials told us that they provide insufficient predeployment training on preparing and reviewing urgent needs documentation for their own and joint processes, and Army officials told us that Army requirements officers responsible for drafting and submitting urgent needs requests at the brigade level do not receive formal training on these processes prior to deployment. According to theater command officials, requirements officers deploying overseas must learn the process on the job. Frequent rotations of force management officers at the division level responsible for reviewing brigade-level requests further increase the likelihood of extended time frames for approving urgent needs and fielding solutions, as the already steep learning curve recurs each time a new reviewing official is deployed into the theater. We found that lack of knowledge about how and under what circumstances to prepare an urgent needs request, especially among recently deployed personnel, may cause reviewing officers to initially reject requests. In turn, some reviewing officers, who themselves have not received adequate training, may reject urgent needs applications based on personal preferences. As a result, reviewers may receive multiple resubmissions of requests related to the same urgent need, increasing the overall amount of time needed to field solutions to the theater. Although information that would have allowed us to determine what factors contributed to the time frames for processing urgent needs in the theater was unavailable, in the 13 case studies for which we were able to obtain documentation, we observed that the time between the creation of a joint urgent need document and theater command-level endorsement varied widely, from as few as 6 days to as many as 446 days. Moreover, senior force management officers in theater at the division level or higher who are responsible for reviewing and processing urgent needs requests may have received limited exposure to the urgent needs process as part of force management training. In some cases, force management officers in theater, who are trained in the organization and execution of requirements determination, force structuring, and combat development, are employed in the urgent needs review process either on a part-time or full-time basis. However, the formal urgent needs process training they receive is limited to an hour-and-a-half introductory segment within a 14-week course. In addition, officials responsible for the force management training course stated that the course focuses on duties performed in the United States, rather than those that will be required as part of a deployed task force. Further, the division-level training segment on the urgent needs process has only been included in the course since 2005, and officers who completed the 14-week course prior to 2005 are exempt from repeating it. According to Army training officials, no provision has been made to update force management officers on the urgent needs elements of the course or to train them on the joint urgent needs process. As a result, most force management officers arriving in theater to review and process urgent needs requests at the division level or higher, like their counterparts at the brigade level, must learn about reviewing and processing urgent needs on the job, and likewise this pattern tends to repeat itself with each rotation of new forces to the theater.
The previous commander of Multi-National Force-Iraq recognized in 2008 that warfighters in the theater needed assistance in requesting critical capabilities. On September 16, 2008, he wrote a memorandum to the Deputy Secretary of Defense that recommended the establishment of a joint requirements liaison office in theater to assist the warfighter in identifying capability or equipment shortfalls and in preparing Army and joint urgent needs statements. On April 20, 2009, the Deputy Secretary of Defense responded by directing the Commander, U.S. Central Command; in coordination with the Chairman, Joint Chiefs of Staff; Under Secretaries of Defense (for Personnel and Readiness, and Acquisition, Technology, and Logistics); and the Commander, U.S. Forces-Afghanistan, to create a pilot joint requirements liaison program in Afghanistan to assist in the identification of capability and equipment needs via the military services' and joint urgent needs processes. Officials in theater said that these liaison offices would function at the division level or higher; however, since most urgent needs requests are generated at lower levels, the joint requirements liaison office will not eliminate the need to address the lack of training at both the division and brigade levels. We have reported in the past that military personnel have received limited or no training on key operational functions—such as using and managing deployed contractors—as part of their predeployment training or professional military education. Similarly, improved training on the appropriate use of the urgent needs process and how to craft urgent needs documentation can improve the overall timeliness of addressing capability gaps and delivering solutions to help ensure that warfighters receive critical capabilities as quickly as possible. After urgent needs requests have been approved by service headquarters or by the Joint Chiefs of Staff, the funding needed to field solutions to those needs has not always been provided in a timely manner. Although urgent needs can be funded in a variety of ways, the funding phase for some urgent needs requests—through the joint process in particular—is often lengthy. This is partly because OSD has not designated any one organization with primary responsibility for determining when to implement the department's statutory rapid acquisition authority or for executing other timely funding decisions. In 11 of our 23 case studies—7 joint, 3 Army, and 1 Marine Corps—obtaining funding was a challenge that increased the amount of time needed to field solutions to the theater. In one example from our case studies (which comprised a nonprobability sample and thus are not representative of urgent needs requests as a whole), it took 474 days to field communications equipment to warfighters in Afghanistan after the request was endorsed by theater command. During that time, JRAC delayed assigning a sponsor for that joint urgent need for 131 days because it was unable to resolve which service would fund the solution. JRAC officials told us that, although the services and components assigned to sponsor joint urgent needs solutions have never refused to fill that role, assigned sponsors sometimes allow requests to wait—up to 2 years—until the next budget cycle. In one of the more extreme cases we found, it took 509 days for the Army to field a solution to a joint urgent need for mobile explosive scanning equipment.
Within that time, the Army took 293 days after the solution was approved by the Joint Chiefs of Staff to reprogram the necessary funding and an additional 4 months to award a contract for the equipment. In another joint case, it took almost a year after theater endorsement to field an aerial surveillance capability known as Angel Fire. Of that time, approximately 5 months was spent awaiting funding—in addition to 2 months the Marine Corps spent pursuing its own funding strategy prior to approval of the joint request. The Marine Corps began efforts to fund Angel Fire in July 2006, with the intent of seeking full funding from JIEDDO. However, a Deputy Secretary of Defense decision prevented JIEDDO from funding the purchase of platforms, such as vehicles or aircraft, so this urgent need request was split into two—$19.5 million for the development of surveillance sensors and platform integration submitted through the joint process and approximately $15 million for aircraft and services through the Marine Corps process. Funding of approximately $34.5 million was finally arranged in February 2007. The Deputy Secretary of Defense assigned JRAC the responsibility of helping to resolve issues that could prevent timely and effective warfighting support but did not give JRAC the authority to allocate funding for solutions. As a general rule, JRAC forwards approved solutions aimed at countering IEDs to JIEDDO, which receives funding through its own direct appropriation. According to JRAC officials, 123, or approximately 55 percent, of the estimated 225 joint urgent needs requests it has received since 2004 have been related to IEDs. JRAC delegates the other 45 percent of approved joint solutions for critical needs, such as intelligence, surveillance, and reconnaissance (ISR), biometrics, communications, and force protection, to the military services, geographic combatant commands such as U.S. Central Command, the U.S. Special Operations Command, or other DOD components that sponsor the funding and fielding of solutions. In addition to the department's annual budget process and congressional appropriations dedicated to efforts to counter IEDs, DOD may rapidly fund non-counter-IED joint urgent needs by invoking the rapid acquisition authority granted by Congress, by using the department's authority to reprogram funds except as otherwise precluded by law, or by using any applicable statutory authority to transfer funds from another appropriation. OSD has, however, allowed the military services or other DOD components to make most of the decisions about when to initiate these funding options. OSD has not frequently used the rapid acquisition authority that Congress made available specifically for rapidly fulfilling warfighters' operational needs. In amending the Fiscal Year 2003 NDAA, the Fiscal Year 2005 NDAA provided the Secretary of Defense with a rapid acquisition authority. Under this authority, OSD can use any funds available to the Department of Defense for that fiscal year to accomplish the rapid acquisition and deployment of equipment that is urgently needed to eliminate a combat capability deficiency that has resulted in combat fatalities. Our review of the Secretary of Defense's use of rapid acquisition authority over the past 5 years shows that DOD has used that authority four times to obligate $170 million for three projects, as shown in table 2.
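The statutory constraints on the rapid acquisition authority described earlier in this report (a written determination by the Secretary that equipment is urgently needed to eliminate a combat capability deficiency that has resulted in combat fatalities, a goal of awarding a contract within 15 days, and an aggregate limit of $100 million in acquisitions per fiscal year) lend themselves to a simple bookkeeping check. The sketch below is illustrative only; the names are hypothetical, and it is not an implementation of DOD procedure.

```python
# Illustrative check against the rapid acquisition authority constraints as described
# in this report; hypothetical names, not DOD procedure.
ANNUAL_AGGREGATE_CAP = 100_000_000  # dollars of acquisitions per fiscal year
CONTRACT_AWARD_GOAL_DAYS = 15       # goal for awarding a contract once invoked

def fits_rapid_acquisition_authority(secretary_written_determination: bool,
                                     deficiency_caused_combat_fatalities: bool,
                                     requested_amount: int,
                                     already_acquired_this_fy: int) -> bool:
    """Return True if a request appears to fit within the constraints described above."""
    if not (secretary_written_determination and deficiency_caused_combat_fatalities):
        return False
    return already_acquired_this_fy + requested_amount <= ANNUAL_AGGREGATE_CAP

# Example: a $40 million request in a year with $75 million already acquired under the
# authority would exceed the aggregate cap.
print(fits_rapid_acquisition_authority(True, True, 40_000_000, 75_000_000))  # False
```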
OSD plays a reactive, rather than proactive, role in the use of its rapid acquisition authority, while many approved urgent needs requests aimed specifically at preventing combat fatalities wait for funding. Rather than identifying cases eligible for funding through the rapid acquisition authority at a high level, the Office of the Secretary of Defense issued an implementing memorandum for its rapid acquisition authority that directed JRAC to recommend cases for the use of this authority to the Under Secretary of Defense (AT&L), based on requests from the military departments, Joint Chiefs of Staff, Combatant Commands, Under Secretaries of Defense, and other OSD directorates, agencies, and activities. Consequently, the services are in a position to limit the number of funding requests for urgent needs that reach OSD. Moreover, JRAC officials we spoke with said that the services have shown little interest in requesting the use of rapid acquisition authority to begin funding joint urgent needs because the acquisition strategy and funding of existing programs could be disrupted, preferring instead either to reprogram funds themselves or, in most cases, to await funding through DOD’s annual budget for overseas contingency operations. As a result, OSD is effectively taken out of the process of deciding which urgent needs request should be considered for funding through the rapid acquisition authority. As previously noted, obtaining initial funding was the primary challenge to rapidly fielding solutions for 11 of the 23 cases we studied. By not employing its rapid acquisition authority more frequently, OSD may not have enabled the acquisition of as many urgent needs solutions as it otherwise could have. In a December 2007 action memorandum requesting the support of OSD and the Joint Chiefs of Staff in stabilizing JRAC’s leadership, staffing, and funding, the Deputy Commander of U.S. Central Command noted that, at that time—over 2 months into fiscal year 2008—the Command was aware of 38 joint urgent needs from fiscal year 2007 that remained unresolved because of funding shortages. Further, the Deputy Commander predicted that JRAC would be unable to address urgent warfighting needs that had already been submitted or were emerging in fiscal year 2008. To help resolve funding shortages, the Deputy Commander recommended in 2007 that OSD and the Joint Staff provide JRAC with, among other things, executive leadership and funds to support the combatant commands and the warfighter. Our prior work has demonstrated that, given the long-standing and deeply entrenched nature of the department’s financial management challenges, combined with the numerous competing DOD organizations—each operating with varying, often parochial views and incentives—strong leadership from the Secretary of Defense over resource control is critical. Without greater high-level participation in the decision-making process over when to invoke, or not to invoke, its rapid acquisition authority, OSD will continue to play a reactive, rather than proactive, role in the timely use of DOD resources to meet urgent warfighter needs. Apart from the Secretary’s rapid acquisition authority, DOD has a reprogramming authority, but the military services are reluctant to reprogram funds from their respective budgets to fund solutions to joint urgent needs, and OSD has not exercised its authority to do so. 
The Secretary of Defense—and in some cases the military departments and defense agencies—has the authority to reprogram funds for purposes other than those originally specified by Congress without prior congressional approval as long as the reprogrammed amount remains below established dollar thresholds. Reprogrammed funds may be used to initiate a new procurement program, subprogram, or modification as long as the estimated cost is less than $20 million for the first 3 years. DOD may also use reprogrammed funds to start a new research, development, testing, and evaluation program, project, or subproject if the estimated cost for the first 3 years is less than $10 million. In cases where the amount of funding needed exceeds established thresholds, DOD may seek congressional approval. In fiscal year 2009, for example, JRAC—as facilitator of the urgent needs process, including funding—reviewed and worked with the Joint Staff, the military services, JIEDDO, and the combatant commands to prioritize urgent needs DOD-wide. This effort resulted in a congressionally approved end-of-year reprogramming action of $624 million from Army and Defense-Wide Operation and Maintenance accounts that could be reapplied to the Other Procurement, Army account to obtain force protection capabilities for warfighters in Afghanistan. However, in the absence of a high-level authority with primary responsibility to execute such reprogramming or transfer decisions, JRAC has faced challenges consistently securing cooperation from the services or other components to initiate other reprogramming actions to make funds needed to field joint urgent needs available in a timely manner. Military service officials we spoke with said that they are reluctant to use their own funds to initiate acquisition of a joint urgent need without first receiving assurance that the funding will be replaced during the next budget cycle. According to those officials, without such assurance, the acquisition strategy of existing programs could be disrupted. Our prior work on interagency collaboration has shown that top-level leadership—such as that provided by OSD and its Deputy or Under Secretaries—is a necessary element for sustaining collaboration among federal agencies, including among DOD components, particularly when effective interagency coordination is needed to better leverage resources. This work has also found that midlevel agencies, such as JRAC, cannot guide policies at a high enough level to promote effective interagency cooperation. Although JRAC was initially created with direct reporting responsibility to the Under Secretary of Defense (AT&L), the Under Secretary realigned JRAC in March 2008 to report to the Director of the Rapid Reaction Technology Office, within the Office of the Director for Defense Research and Engineering (DDR&E). In July 2009, JRAC and the Rapid Reaction Technology Office were both realigned under the Director of Rapid Transition to accomplish the responsibilities of DDR&E, which were expanded to include oversight of the Systems Engineering and Developmental Test and Evaluation functions. Currently, JRAC resides under the Director, Rapid Fielding. According to JRAC officials, the most recent realignment will help the department better anticipate emerging threats and ensure that the technology needed to counter urgent threats is mature before the threat fully materializes, as well as improve the synergy between the requirements, acquisition, and research communities.
However, JRAC's most difficult challenge, according to its Director, continues to be prioritizing needs and quickly identifying the resources needed to execute a solution, which is the responsibility of the DOD components. Referring to JRAC as "mission essential" for effective coordination with the services, JIEDDO, and other agencies addressing urgent warfighter needs, the Deputy Commander of U.S. Central Command has called for a permanent organizational structure led by a senior leader capable of coordinating, influencing, and directing actions. We and others have found that establishing a senior executive council is a best practice that can provide an implementation team—such as JRAC—access to senior leadership while reinforcing the team's accountability for successfully implementing the program. An executive council can set policies, ensure that decisions are made quickly, resolve conflicts that arise, review and approve plans, and monitor and report progress back to top leaders of the organization. Members of such a council, which could include both political and career executives within the organization, would work with the department Secretary, Deputy Secretary, and other high-level appointees to develop a leadership direction and communicate the leadership's position. Without a departmentwide approach to addressing its funding challenges, DOD will continue to struggle to field timely solutions to problems that put warfighter lives at risk or could lead to mission failure. Further, extended time frames in identifying and securing funding for solutions to joint urgent needs and challenges to JRAC's mission will persist. Conversely, a JRAC with support from an interagency executive council with the means to better leverage funding from across DOD, all under the oversight of top-level DOD officials, would be in an improved position to provide timely solutions to meet the urgent needs of warfighters while assuring effective use of DOD resources. In 14 of the 23 case studies we conducted (8 joint, 2 Army, and 4 Marine Corps), technological immaturity or complexity was a factor that led to longer time frames for fielding solutions to urgent needs. In the 8 technologically challenged joint urgent needs cases, solutions for 2 requests—both related to the Active Denial System—were never fielded because the capability was technologically immature and could not be adequately sized or adapted for operational use in a wartime environment and under changing theater conditions. Solutions for the remaining 6 technologically challenged joint urgent needs were eventually fielded, but the response time from theater endorsement to fielding ranged from 320 to 497 days, with an average of 393 days. In one of the more protracted cases, the Combined Joint Task Force-82 in Afghanistan endorsed a request on October 20, 2007, for an improvised explosive device detection system capable of detecting devices that were buried underground. However, following JRAC's request that JIEDDO accept responsibility for providing a solution, 497 days passed before JIEDDO began initially fielding a solution because additional time was required to develop the experimental Husky Mounted Detection System. A recent DOD Inspector General report determined that JIEDDO decided to produce the system in large numbers before determining the system's operational effectiveness and suitability.
Nevertheless, while these cases exceeded the original 120-day fielding target expressed in both Joint Chiefs of Staff and OSD guidance, they fell within the 2-year time frame used by JRAC and the Joint Chiefs of Staff. Guidance for the Army process does not address the technological complexity or maturity of a potential solution to an urgent need. Guidance for the Marine Corps process states that capability gaps and solutions to urgent needs are not restricted to commercially available equipment or technologies and may require the rapid development of new capabilities. Conversely, when Congress directed the Secretary of Defense to prescribe procedures for the rapid acquisition and deployment of urgently needed items in the Fiscal Year 2003 NDAA, it specified that those items should be either currently under development by DOD or already available from the commercial sector. Further, DOD guidance on the scope of its joint urgent needs process states that urgent operational solutions should not involve the development of a new technology or capability. However, the acceleration of an Advanced Concept Technology Demonstration or the minor modification of an existing system to adapt to a new or similar mission is within the scope of the joint process. According to JRAC and military service sponsors for solutions to joint urgent needs, requests are becoming increasingly complex technologically. As of June 2, 2009, JRAC indicated that approximately 20 joint urgent needs were sufficiently affected by technological development concerns that their projected fielding dates were uncertain. For example, one urgent need request asked for explosive ordnance disposal suits and helmets equipped with night vision capability. The Multi-National Force-Iraq submitted the request in May 2005. Initially, the Army worked with the Office of the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict to develop a prototype to meet the warfighter's need, but this effort proved unsuccessful. In March 2007, U.S. Central Command consolidated the initial urgent need with two additional urgent needs requests it had received from the theater for bomb suit helmets with night vision capability. In April 2007, the Joint Chiefs of Staff and JRAC validated and approved the urgent need request and assigned it to JIEDDO, which has thus far been unable to develop a successful prototype. We have reported on the department's success in fielding MRAPs in response to an urgent need and noted that among the factors contributing to the success of the program were that 1) DOD kept the requirements simple, clear, and flexible and did not dictate a single acceptable solution, and 2) the department made sure that only mature technologies and stable designs were used by setting a very short and inflexible schedule. In addition, the Defense Science Board recently reported that any rapid response to an urgent need must be based on proven technology and robust manufacturing processes because attempts to squeeze new technology development into an urgent time frame create risks for delays and ultimately may not adequately address an existing capability gap. The board stated in its report that, in order to achieve initial deployment of a solution in weeks or months, technology must be sufficiently mature, and the need is likely to be filled by commercial or government off-the-shelf products or foreign government sources.
Further, the board stated that needs that cannot be met with mature technology should be handed to the defense science and technology community as a high priority for further development. Sponsors for joint solutions we spoke with expressed concerns that the maturity of the technology associated with approved urgent needs solutions is often overstated, ultimately requiring further integration, development, and testing before the solutions can be successfully acquired and produced. The board advocated a triage process to differentiate among urgent needs and determine whether an urgent need should be addressed through expedited acquisition procedures or the department's traditional acquisition system. Both of the services' processes include procedures for reviewing whether a potential solution that requires the development of a new technology should be sustained for the long term, across the service, as a formal acquisition program. Army and Marine Corps officials involved in their respective urgent needs processes stated that they prefer urgent needs requests that cite capability gaps rather than specific solutions in order to provide the warfighter with the flexibility to use creative solutions that may be inexpensive and readily available but unknown to the warfighter. CJCSI 3470.01 is unclear about who should be responsible for applying the technological maturity criteria, and based on our case studies it remains unclear who is responsible, during the review, endorsement, and approval phases, for applying the criteria, removing those urgent needs that fall outside the scope of the process, and recommending a different approach. Based on the results of our case studies, we found that attempts to meet urgent needs with complex or immature technologies risk prolonging the fielding of solutions and could result in fielding a capability too late to effectively address rapidly changing theater conditions. As we stated earlier in this report, DOD lacks clearly defined roles and responsibilities for managing DOD's urgent needs processes in general. As a result, the department faces difficulty in ensuring that the joint process is implemented efficiently and effectively and in identifying the appropriate personnel who are accountable for operations, stewardship of resources, and achieving results. Due to rapidly changing battlefield threats in Iraq and Afghanistan, Congress has recognized DOD's need to be more nimble in its response to warfighter requests for urgently needed capabilities than the department's usual acquisition process allows. Similarly, DOD's leadership has recognized the importance of rapidly procuring solutions to meet warfighter needs during contingency operations. Although the establishment of the Army, Marine Corps, and joint urgent needs processes has improved the capabilities available to the warfighter, without improvements to the management framework to incorporate additional internal control standards, DOD risks fielding solutions that arrive too late to be effective or that do not successfully meet warfighter needs. In the absence of consolidated and updated departmentwide guidance permanently establishing its joint urgent needs process, and clearly delineated roles, responsibilities, and authorities of the various stakeholders, the department will continue to face challenges implementing the process, monitoring the process to ensure efficiency and effectiveness in each of its phases, and evaluating results.
In addition, unless DOD’s joint urgent needs guidance and acquisition policy clearly communicate the availability of the rapid acquisition authority that the services and the JRAC can use to meet urgent needs, the services could continue to miss opportunities to quickly field urgently needed solutions to the theater of operations and inadvertently increase costs by unnecessarily prolonging the acquisition process. Furthermore, without more comprehensive, complete, and reliable data that can be used to accurately track and document key process milestones, as well as to create reports for management review, DOD will continue to lack the ability to oversee and track the progress of individual requests or to determine which phases of the process, if any, might need adjustments to prevent unnecessary delays. Finally, a formal mechanism for soliciting and collecting feedback from servicemembers in theater is essential for determining how well fielded solutions are meeting warfighter requests as well as ensuring that the resources invested in the urgent needs process are achieving the desired results. For the Army, Marine Corps, and joint urgent needs processes, challenges in the initiation and funding phases, in particular, can significantly increase the number of days—or weeks, or months—that elapse between the time a warfighter submits an urgent request and the time a solution is fielded. When the personnel responsible for documenting and reviewing urgent needs requests do not receive needed training before arriving in the theater of operations, they can become quickly overwhelmed by the volume of requests, leading to backlogs, errors, and delays. Unless DOD takes steps to ensure that both unit requirements officers and senior force management officers responsible for processing urgent needs requests receive training on appropriate uses of the service and joint processes, as well as how to craft related documentation, before they arrive in theater, warfighter requests are likely to continue to face delays early in those processes. More consistent predeployment training would be an important step toward ensuring that warfighters receive critical capabilities as quickly as possible. Moreover, in the absence of OSD leadership on recommending when to use the rapid acquisition authority Congress provided the department specifically for the purpose of funding solutions to urgent needs, some requests that have been validated as urgent may continue to experience increasing time frames during the funding phase of the process. Until OSD begins to play a proactive, rather than a reactive role in the use of its rapid acquisition authority, urgent requests that have been assigned to one of the services or components for funding are likely to continue to compete with longer-term service programs and, in some cases, wait until the next annual budget process to be funded from the base budget for the next fiscal year. Similarly, without a means to secure cooperation from the services and other DOD components to reprogram and transfer funds to meet joint urgent needs, JRAC will continue to face challenges in providing timely solutions. 
We recommend that the Secretary of Defense take the following nine actions:

To improve the department's ability to fully assess how well the urgent needs processes are addressing critical warfighter deficiencies and to measure the effectiveness of solutions fielded in the theater, we recommend that the Secretary of Defense, in conjunction with the Chairman, Joint Chiefs of Staff, combatant commands, military services, and other DOD components, as necessary, take the following actions to permanently establish the joint urgent needs process and to improve consistency with federal internal control standards:

Clearly define the roles and responsibilities of the Office of the Secretary of Defense, the Joint Chiefs of Staff, the military services, and other DOD components, as necessary, through the issuance of new or updated OSD and Joint Chiefs of Staff guidance, to identify who is accountable for implementation, monitoring, and evaluation of all phases of the process—including applying the technological maturity criteria.

Include in that guidance the rapid acquisition authority procedures available to officials responsible for meeting joint urgent need requests.

Develop and implement standards for accurately tracing and documenting key process milestones such as funding, acquisition, fielding, and assessment, and for updating data management systems to create activity reports to facilitate management review and external oversight of the process.

Develop an established, formal feedback mechanism or channel for the military services to provide feedback to the Joint Chiefs of Staff and JRAC on how well fielded solutions met urgent needs.

To better inform DOD personnel of the options for acquiring capabilities to meet warfighters' needs, we recommend that the Secretary of Defense amend DOD Directive 5000.01 and DOD Instruction 5000.02 to reflect that officials responsible for the acquisition of urgently needed equipment may need to consider using the joint urgent needs processes, including rapid acquisition authority.

In addition, we recommend that the Secretary direct the Secretary of the Army to amend the urgent needs process guidance in Army Regulation 71-9 to include distinct performance standards that distinguish how different types of urgent needs, such as nonstandard mission equipment shortages and new capabilities, should be processed, and to develop and implement standard operating procedures for headquarters officials to use when processing urgent needs requests.

To better address training challenges the department faces in preventing process delays and improving its ability to more quickly field solutions to the theater, we recommend that the Secretary of Defense direct the Secretary of the Army to update training procedures to include instruction for unit requirements officers regarding the development of joint and Army urgent need statements in order to ensure that these personnel are prepared to effectively draft urgent requirement documents upon arrival in theater.

To more rapidly field urgent needs solutions aimed at eliminating deficiencies that have resulted in combat fatalities, we recommend that the Secretary of Defense amend the department's implementing memorandum for its rapid acquisition authority to designate an OSD entity, such as the Under Secretary of Defense for AT&L, with primary responsibility for recommending to the Secretary of Defense when to implement the department's statutory rapid acquisition authority—as provided in Pub. L. No. 108-375—as urgent needs are validated by the Joint Staff.
To expedite the funding needed to field approved solutions to joint urgent needs, we recommend that the Secretary of Defense create an executive council to include the Deputy Under Secretary of Defense (Comptroller), the Director of JRAC, the Comptrollers of each of the military services, and other stakeholders as needed, and appoint a chair for the purpose of making timely funding decisions as urgent needs are validated by the Joint Staff. In written comments on a draft of this report, DOD concurred with four of our recommendations and partially concurred with five other recommendations. Technical comments were provided separately and incorporated as appropriate. The department’s written comments are reprinted in appendix III. DOD concurred with our recommendation to clearly define roles, responsibilities, and accountability through the issuance of new or updated OSD and Joint Chiefs of Staff guidance. The department stated that it is developing new DOD policy and the Joint Chiefs of Staff is updating the Chairman of the Joint Chiefs of Staff Instruction (CJCSI) 3470.01 Rapid Validation and Resourcing of Joint Urgent Operational Needs (JUONS) in the Year of Execution, to clearly define roles and responsibilities of all DOD components. DOD concurred with our recommendation to include rapid acquisition authority procedures available to officials responsible for meeting joint urgent need requests in the issuance of new or updated OSD and Joint Chiefs of Staff guidance. In its response, the department noted that it is developing additional DOD policy to facilitate the use of rapid acquisition authority and has issued guidance to Service Acquisition Executives to ensure the use of rapid acquisition authority is considered when necessary to address urgent needs. While we agree that the proposed action is a good step towards addressing our recommendation, we also believe, as we recommended, that DOD should include these procedures in the new urgent needs policy it is also developing in order to better inform program managers of all procedures available to them for fielding solutions quickly to the theater and to follow internal control standards that cite the importance of policies and procedures that enforce management’s directives and integrate accountability for achieving effective results. DOD concurred with our recommendations to develop and implement standards for accurately tracing and documenting key process milestones and for updating data management systems; and to develop an established, formal feedback mechanism or channel for the military services to use. The department stated that it is developing new DOD policy and the Joint Chiefs of Staff is updating the Chairman’s instruction to establish requirements for oversight and management of the fulfillment of urgent needs from initiation, operational assessment, fielding, and ultimate disposition. DOD stated further that visibility of actions of the DOD components to fulfill urgent needs is expected to be incorporated into new DOD policy and should improve the ability for OSD to provide oversight of the fulfillment of urgent needs and satisfaction of the warfighter’s requirements. We agree that new and updated policy is a good first step to addressing these deficiencies. However, it is not clear from DOD’s response if the updated policies will directly establish standards for collecting accurate data and updating data systems, and include a method for obtaining feedback from the warfighter. 
Unless these components are part of DOD's revised policies, DOD will still fall short of being able to fully oversee and manage the urgent needs processes and will remain inconsistent with internal control standards. DOD partially agreed with our recommendation to amend DOD Directive 5000.01 and DOD Instruction 5000.02 to reflect that officials responsible for the acquisition of urgently needed equipment may need to consider using the joint urgent needs processes, including rapid acquisition authority. The department noted that it is developing new DOD policy to establish responsibilities for oversight and management of the fulfillment of urgent needs and the utilization of rapid acquisition authority. DOD stated further that this policy development is expected to result in a DOD directive that will be separate from DOD Directive 5000.01 and DOD Instruction 5000.02. While we agree that DOD's effort to develop new policy for the urgent needs process is a positive step, as stated in our report, the DOD acquisition directive and instruction represent the overarching guidance for the Defense Acquisition System. As such, we continue to believe that these documents should also be amended to better inform program managers of the range of options available to respond to urgent warfighter needs. DOD partially concurred with our recommendation to amend the Army's urgent needs process guidance in Army Regulation 71-9 to include distinct performance standards that distinguish how different types of urgent needs should be processed, and to develop and implement standard operating procedures. The department stated that, in December 2009, the Army updated its regulation and partially addressed our recommendations. DOD stated further that, upon issuance of additional DOD policy and an update to the Chairman's instruction, additional changes to the Army regulation and other DOD components' policies may be required. We are aware of the Army's update to its regulation and reviewed it prior to issuance of our draft to DOD. Based on our review, we found that the updated regulation did not address the lack of distinct performance standards and standard operating procedures. Therefore, we continue to support our recommendation to further amend the regulation to address these issues. DOD partially concurred with our recommendation to update the Army's training procedures regarding the development of joint and Army urgent need statements. The department noted that the proposed direction by the Secretary of Defense should be to all military department secretaries as well as the heads of other DOD components because our findings, which are based on an assessment of the Army's urgent needs processes, are applicable across the department. DOD acknowledged that training and improved instructions for all DOD component personnel involved in the generation of urgent needs requirements and their fulfillment would improve the department's ability to respond to the warfighter's urgent needs. The department stated further that it is developing additional DOD policy that will direct DOD components to develop procedures for urgent operational needs and that the implementation steps of these procedures will be monitored by OSD to ensure they are accomplished and include the training we recommended.
While our evaluation focused specifically on Army practices, we agree that if the Secretary has determined that deficiencies in training present a capability gap across DOD in the urgent needs process, updated training procedures for all department personnel involved in the process are appropriate. DOD partially concurred with our recommendation to amend its implementing memorandum for the department's rapid acquisition authority to designate an OSD entity with primary responsibility for recommending when the authority should be implemented. The department stated that it is developing additional DOD policy to facilitate the use of rapid acquisition authority and has issued guidance to Service Acquisition Executives to ensure that the use of rapid acquisition authority is considered when necessary to address urgent needs. DOD noted further that it is continuing to evaluate the need for legislative changes to enhance rapid acquisition authority. While we recognize DOD's efforts to develop additional policy, issue guidance, and evaluate potential legislative changes, we continue to support our recommendation that the Secretary designate an OSD entity to recommend when this authority should be implemented. During our evaluation, we found that unless OSD plays a proactive role in identifying cases eligible for this authority rather than a reactive role, requests for urgent needs may not be funded in a timely manner due to other competing service priorities. DOD partially concurred with our recommendation to create an executive council to make timely funding decisions as urgent needs are validated by the Joint Staff. The department noted that it is developing additional DOD policy that is expected to clarify processes for funding urgent needs, and it intends to use established senior governance councils to achieve the goal of the recommendation rather than establish a new council. We did not evaluate the roles and missions of these existing senior governance councils to determine the extent to which they constitute the appropriate body to address funding solutions for urgent needs. We agree in principle with the intent to use existing councils to make timely funding decisions for urgent needs as long as those councils have the authority to directly address our recommendation and their membership includes those offices we cited. The department also recommended that we change language in our report from “. . . as solutions are validated by the Joint Staff” to “. . . as needs are validated by the Joint Staff” because the Joint Staff does not validate solutions but rather the requirements, or needs. We incorporated this language in our final report. We are sending copies of this report to interested congressional committees and the Secretary of Defense. This report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8365 or by e-mail at [email protected]. Contact information for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who have made major contributions to this report are listed in appendix IV. To determine the extent to which the Department of Defense (DOD) has a means to assess the effectiveness of its urgent needs processes, we conducted site visits, reviewed key documentation, and interviewed relevant DOD, joint, and military service officials.
During this review, we focused on urgent wartime needs submitted through the joint, Army, and Marine Corps urgent needs processes because these are the processes most frequently used; commanders used the Air Force and Navy urgent needs processes much less frequently. Air Force officials stated that they had one active request under their urgent needs process when we began our review, and Navy officials stated that they had eight active requests under their urgent needs process when we began our review. We visited forces conducting operations in the U.S. Central Command area of responsibility and gathered information on how they identify, document, and submit urgent requests through the urgent needs processes, and on the fielding and assessment of solutions in the theater. We conducted site visits to joint, Army, and Marine Corps offices responsible for the respective urgent needs processes, as well as offices of officials who participate in reviewing urgent needs requests and developing funding strategies and solutions to be fielded. We reviewed existing policy and guidance applicable to the joint, Army, and Marine Corps urgent needs processes and compared them to our standards for internal control in the federal government. We also compared the actual practices, tools, and data systems used to manage the joint, Army, and Marine Corps urgent needs processes to our internal control standards. We assessed the reliability of the databases and information systems used to process urgent needs requests by 1) interviewing knowledgeable officials, 2) reviewing data system guidance and procedures when they were available, and 3) conducting limited electronic testing that included comparing values from source documentation with data elements in the data systems. While our assessment of the databases and systems used to process urgent needs requests showed that some data elements were accurate and supported by sufficient documentation, we found that other items for reporting specific urgent needs requests were incomplete and not sufficiently reliable for reporting specific results here or for supporting accurate, useful management reports on overall results. As a result, we determined that we would conduct case studies of selected urgent needs requests to provide insights related to this and the following objective. We used data elements from the information systems that we had determined were sufficiently reliable to support the selection of case study candidates from the universe of joint, Army, and Marine Corps urgent needs requests. To determine what challenges, if any, have affected the overall responsiveness of DOD's urgent needs processes, we analyzed joint, Army, and Marine Corps data management systems in order to review the data collected on the time frames between decision points and determine how timely and effective each process was in providing solutions to urgent warfighter needs. To conduct this analysis, we selected a nonprobability sample of cases to review from a universe of 49 Joint Urgent Operational Needs, 4,054 Army Operational Needs Statements, and 524 Marine Corps Urgent Universal Need Statements. Our selected cases included 11 joint, 6 Army, and 6 Marine Corps requests, for a total of 23 urgent needs cases reviewed. To ensure that the case studies reflected the current DOD urgent response processes as much as possible, we selected cases that were submitted after the latest iteration of updates in each process.
We considered urgent needs requests initiated in the Marine Corps process after September 1, 2006; initiated in the Army process after October 1, 2006; and initiated in the joint process after August 1, 2006. We then eliminated 1) requests for which solutions had not been fielded and 2) requests for items that the Army already procures. However, we retained some cases for which solutions have not been produced in order to explore aspects of the process, based on their visibility, cost, and scope. We selected cases in order to represent distinct types of needs such as: Command and Control; Force Protection; Intelligence, Surveillance, and Reconnaissance; Counter-Improvised Explosive Device; Logistical Support; and Miscellaneous (such as nonlethal weapons or other items not so easily categorized). We also selected cases where duplication of effort appeared possible, and urgent needs requests that commanders in Iraq or Afghanistan identified as high priority. Assessments of the selected cases were based on a comparison of the time required to achieve key objectives in completing the urgent needs process against stated goals and on interviews with knowledgeable officials regarding the relative ease or difficulty of accomplishing these objectives, as well as with end users in theater regarding the sufficiency of fielded solutions. In order to allow for comparison across the joint and service urgent needs processes, we constructed a chronology of each urgent need beginning with initiation of the urgent needs process and culminating with the initial fielding of a solution in theater, if applicable. Since each urgent needs process within DOD is distinct and uses differing terms and procedures, we used a consistent approach to demonstrate progression between key events and decision points across time lines from initiation of an urgent need request to initial fielding of a solution. However, in collecting data for our case studies, we found that documentation regarding the initial theater recognition of an urgent need was inconsistent and often unavailable. For further details and the results of our case studies, see appendix II. We interviewed officials from the Department of Defense; the Joint Chiefs of Staff; all four of the military services; two selected combatant commands; and military activities participating in ongoing military operations. The specific offices and military activities we interviewed and obtained information from include the following: Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics, Arlington, Va.; Office of the Assistant Deputy Under Secretary of Defense for Innovation & Technology Transition, Arlington, Va.; Joint Rapid Acquisition Cell, Arlington, Va. Rapid Reaction Technology Office, Arlington, Va. Defense Information Systems Agency, Falls Church, Va. U.S. Air Force, Secretary of the Air Force for Acquisition, Rosslyn, Va. U.S. Army Headquarters, Arlington, Va.; Deputy Chief of Staff, G-3/5/7, Operations Deputy Chief of Staff, G-8, Force Development U.S. Army, 224th Military Intelligence Battalion Office of the Assistant Secretary of the Army (Acquisition, Logistics, and Technology), Crystal City, Va. U.S. Army Tank-automotive and Armaments Command, Warren, Mich. U.S. Army, 1st Infantry Division, 2nd Brigade, Headquarters, Camp Liberty, Victory Base Complex, Baghdad, Iraq. U.S. Army, 18th Airborne Corps, 525th Battlefield Surveillance Brigade, Fort Bragg, N.C. U.S. Army, 15th Military Intelligence Battalion, Joint Base Balad, Iraq.
U.S. Army, Army Requirements and Resourcing Board Council of Colonels. U.S. Army, Program Executive Office for Ammunition, Picatinny Arsenal, N.J. U.S. Army, Communications Electronics Command, Fort Monmouth, N.J. U.S. Marine Corps, Marine Corps Central Command, Tampa, Fla. U.S. Marine Corps, Marine Corps Capability Development Command, Quantico, Va. U.S. Navy, Office of the Assistant Secretary of the Navy for Research, Development and Acquisition, Rapid Capability Development and Deployment, Arlington, Va. U.S. Navy, Office of the Chief of Naval Operations, Requirements Division, Arlington, Va. U.S. Navy, Naval Surface Warfare Center, Dahlgren, Va. Office of the Joint Chiefs of Staff, Force Structure, Resources, and Assessment Directorate (J8), Capabilities and Acquisition Division, Arlington, Va. Joint Improvised Explosive Device Defeat Organization, Crystal City, Va. Joint Non-Lethal Weapons Directorate, Quantico, Va. U.S. Central Command, Tampa, Fla. Multi-National Corps-Iraq, Camp Victory, Baghdad, Iraq. Commander, Multi-National Forces-West, Al Asad Air Base, Anbar Province, Iraq. Multi-National Division-Baghdad, Camp Victory, Baghdad, Iraq. Multi-National Division-Central, Camp Victory, Baghdad, Iraq. Multi-National Corps-Iraq, Science and Technology (MND S&T), Camp Victory, Baghdad, Iraq. U.S. Special Operations Command, MacDill Air Force Base, Tampa, Fla. We conducted this performance audit from June 2008 to March 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We selected 23 urgent need requests as case studies to illustrate issues that may affect the amount of time required to provide solutions to the warfighter. We reviewed 6 Army, 6 Marine Corps, and 11 joint requests. Although each of these urgent needs processes is distinct, we identified seven broad phases that we used to track the progression of each request over time and to compare performance across the Army's Operational Needs Statement process, the Marine Corps' Urgent Universal Need Statement process, and the Joint Urgent Operational Needs process. These phases are: initiation, theater endorsement, command validation, headquarters approval, funding, contract award, and initial fielding. Figure 1 illustrates these phases. For each of our 23 case studies, we tracked the progress of an urgent need request beginning with the initiation of an urgent needs process and culminating with the initial fielding of a solution, if any. Each of the figures that follow represents one of the case studies we selected and includes the identified need or capability gap, the proposed solution, a brief description of any challenges affecting the ability of the urgent needs process(es) in question to rapidly field a response to that request, and a photograph. Although each urgent needs request is unique, some of the requests we reviewed were closely related. Where appropriate, we combined these case studies in our discussion below. Challenges, if any, to providing a solution for an urgent needs request were identified in discussions with agency officials and supported by our review of the request's progress through each phase of the process.
Further information on our methodology can be found in appendix I. Issues associated with funding and technical complexity were the most frequent challenges affecting the response or causing delays. For further information please contact William Solis, (202) 512-8365 or [email protected]. In addition to the contact named above, Cary B. Russell (Assistant Director), Susan Ditto, Larry Junek, Ron La Due Lake, Lonnie McAllister, Jason Pogacnik, Paulina T. Reaves, Matthew Tabbert, and John E. Trubey made key contributions to this report.
Forces in Iraq and Afghanistan have faced rapidly changing threats that create the risk of mission failure or loss of life, highlighting the Department of Defense's (DOD) need to develop and field new capabilities more quickly than its usual acquisition procedures allow. Since 2006, Congress has provided nearly $16 billion to counter improvised explosive devices alone. GAO and others have reported funding, organizational, acquisition, and oversight issues involving DOD's processes for meeting warfighters' urgent needs. The Senate Armed Services Committee asked GAO to determine 1) the extent to which DOD has a means to assess the effectiveness of its urgent needs processes, and 2) what challenges, if any, have affected the overall responsiveness of DOD's urgent needs processes. To conduct this review, GAO examined three urgent needs processes--the joint, Army, and Marine Corps processes--visited forces overseas that submit urgent needs requests and receive solutions, and conducted 23 case studies. Although DOD has taken steps to create urgent needs processes that are more responsive to urgent warfighter requests than traditional acquisition procedures, DOD is unable to fully assess how well the processes address critical deficiencies or to measure the effectiveness of solutions fielded in the theater because it has not established an effective management framework for those processes. GAO found that DOD's guidance for its urgent needs processes is dispersed and outdated. Further, DOD guidance does not clearly define roles and responsibilities for implementing, monitoring, and evaluating all phases of those processes or incorporate all of the expedited acquisition authorities available to acquire joint urgent need solutions. Data systems for the processes lack comprehensive, reliable data for tracking overall results and do not have standards for collecting and managing data. In addition, the joint process does not include a formal method for feedback to inform joint leadership on the performance of solutions. In one case, a solution for a joint request was fielded for 18 months without meeting warfighter needs. In the absence of a management framework for its urgent needs processes, DOD lacks tools to fully assess how well its processes work, manage their performance, ensure efficient use of resources, and make decisions regarding the long-term sustainment of fielded capabilities. In conducting field work in Iraq as well as 23 case studies, GAO found several challenges that could hinder DOD's ability to rapidly respond to urgent warfighter needs. First, not all personnel involved in the initial development and review of urgent needs documentation receive adequate training. DOD policy states that deploying personnel should receive priority for training and be responsive to the needs of the combatant commander; however, officers responsible for drafting, submitting, and reviewing Army and joint urgent needs requests are not likely to receive such training. Hence, once in theater, they often face difficulties processing the large volume of requests in a timely manner. Second, in 11 of the 23 cases GAO studied, challenges obtaining funding were the primary factor that increased the amount of time needed to field solutions. Funding has not always been available for joint urgent needs in part because the Office of the Secretary of Defense (OSD) has not assigned primary responsibility for implementing the department's rapid acquisition authority.
Congress provided OSD with that authority to meet urgent warfighter needs, but OSD has played a reactive rather than proactive role in making decisions about when to invoke it. In addition, DOD can reprogram funds appropriated for other purposes to meet urgent needs requests, but authority for determining when and how to reprogram funds has been delegated to the services and combatant commands. Prior GAO work has shown that strong leadership from OSD over resource control is critical, and midlevel agencies such as the Joint Rapid Acquisition Cell, which is responsible for facilitating urgent needs requests, including funding, cannot guide other agencies at a high enough level to promote effective interagency coordination. Finally, GAO found that attempts to meet urgent needs with immature or complex technologies can result in significant delays.
During the past several years, service chiefs and commanders in chief (CINC) have expressed concerns about the effect on current and future readiness of (1) the level of current military operations, (2) contingency operations, (3) the shifting of funds to support these operations, and (4) personnel turbulence. Related to these concerns is a question about the ability of the Department of Defense’s (DOD) readiness reporting system to provide a comprehensive assessment of overall readiness. DOD’s current system for reporting readiness to the Joint Chiefs of Staff (JCS) is the Status of Resources and Training System (SORTS). This system measures the extent to which individual service units possess the required resources and are trained to undertake their wartime missions. SORTS was established to provide the current status of specific elements considered essential to readiness assessments, that is, personnel and equipment on hand, equipment condition, and the training of operating forces. SORTS’ elements of measure, “C” ratings that range from C-1 (best) to C-4 (worst), are probably the most frequently cited indicator of readiness in the military. According to JCS and DOD officials, the definition and measures of readiness that are currently available in SORTS are no longer adequate in today’s national security environment. Specifically, SORTS does not (1) address all the factors that JCS considers critical, (2) provide a warning of impending decreases in readiness, and (3) provide data on joint readiness. In addition, SORTS includes subjective assessments of training proficiency. Figure 1 shows those elements reported under SORTS and all the elements that JCS believes would make up a more comprehensive assessment. Information reported under SORTS is a snapshot in time and does not predict impending changes. Units report readiness monthly or, for some units, upon a change of status. These reports provide commanders and JCS with status information only for that point in time. Commanders have stated that in today’s environment of force reductions and increasing commitments, there is a need for indicators that can predict readiness changes. Some elements of SORTS are not based on objective data. The C-rating for training, for example, is based on a commander’s subjective assessment of the number of additional training days the unit needs to reach a C-1 status. This assessment may be based on any number of factors, including completion of required or scheduled training or personal observation. In the past, we have found that Army training assessments have not been reliable. For example, in 1991 we reported that training readiness assessments of active Army units may have been overstated. We reported that the information provided to higher commands and JCS was of limited value because the assessments (1) were based on training conducted primarily at home stations rather than on results of more realistic exercises conducted at combat training centers and (2) may not have adequately considered the effect that the loss of key personnel had on proficiency. Likewise, in our reviews pertaining to the Persian Gulf War, we noted that readiness reports for Army support forces and National Guard combat forces were often inflated or unreliable. 
For example, in a September 1991 report, we noted that when three Army National Guard combat brigades were mobilized for Operation Desert Shield, their commanders were reporting readiness at the C-2 and C-3 levels, which meant that no more than 40 days of post-mobilization training would be needed for the brigades to be fully combat ready. However, on the basis of their independent assessment of the brigades’ proficiency, active Army officials responsible for the brigades’ post-mobilization training developed training plans calling for over three times the number of days that the readiness reports stated were needed. Finally, SORTS does not provide data with which commanders can adequately assess joint readiness. There is no clear definition of areas of joint readiness that incorporates all essential elements, such as individual service unit readiness, the deployability of forces, or en route and theater infrastructure support. The need for joint readiness information was demonstrated by the Persian Gulf War and reaffirmed by contingency operations in Somalia and Bosnia. Officials at four joint commands told us that SORTS, the primary source of readiness data, was inadequate for assessing joint readiness. Although the Joint Staff recently developed its first list of joint mission tasks, it has not developed the training conditions for conducting joint exercises and criteria for evaluating them. It may be several years before JCS completes these efforts. Recognizing the limitations of SORTS and the need for more reliable readiness information, DOD and the services have initiated actions to improve readiness assessments. In June 1994 the Defense Science Board Readiness Task Force, which is composed of retired general officers, issued its report to the Secretary of Defense on how to maintain readiness. The Task Force identified major shortcomings in assessing joint readiness and noted that while the services have increased their commitment to joint and combined training since Operation Desert Storm, such training requires greater emphasis. The Task Force recommended improvements in the measurement of joint readiness, stating that “real readiness must be measured by a unit’s ability to operate as part of a joint or combined task force.” More recently, DOD created the Senior Readiness Oversight Council to evaluate and implement the recommendations of the Readiness Task Force and to develop new ways to measure combat readiness. The Council, whose membership includes high-level military and civilian officials, is focusing on three main ways to improve readiness: (1) developing better analytical tools for determining the relationship of resources to readiness and predicting the potential impact of budget cuts on readiness, (2) developing analytical tools for measuring joint readiness, and (3) taking advantage of computer simulation to improve readiness, especially joint readiness. The Army implemented its Readiness Management System in June 1993. This system allows the Army to project for 2 years the status of elements reported under SORTS. The system integrates the reported SORTS data with other databases that contain future resource acquisition and distribution information. The Army can, for example, compare a unit’s reported equipment shortages with planned acquisition and distribution schedules, and the system can then forecast when those shortages will be alleviated and the unit’s readiness posture improved. 
In September 1993, the Air Force began to develop a computer model, called ULTRA, to forecast readiness. ULTRA is intended to measure four major elements: (1) the ability to deploy the right forces in a timely manner to achieve national objectives; (2) the ability to sustain operations; (3) the end strength, quality, and training of personnel; and (4) the availability of facilities. If successful, the system will allow the Air Force to estimate the effect that various levels of funding have on readiness. The project is still under development, and the Air Force estimates it will be about 2 years before the system will provide credible, widely accepted forecasts. To supplement data currently reported in SORTS and facilitate readiness assessments at the unit level, the military commands in all four services independently monitor literally hundreds of additional indicators. These indicators are generally not reported to higher command levels. Military commanders and outside defense experts agreed that many of the indicators are not only critical to a comprehensive readiness assessment at the unit level but also have some degree of predictive value regarding readiness changes within the services. We compiled a list of over 650 indicators that 28 active and reserve service commands were monitoring in addition to SORTS. To further refine these indicators, we asked the commands to rate the indicators in three areas: (1) the importance of the indicator for assessing readiness, (2) the degree of value the indicator has as a predictor of readiness change, and (3) the quality of the information the indicator provides. Table 1 shows the readiness indicators that service officials told us were either critical or important to a more comprehensive assessment of readiness and that also have some predictive value. The indicators that are shaded are those rated highest by at least one-half of the commands visited. We asked the Defense Science Board Task Force on Readiness to examine the indicators presented in table 1. Task Force members agreed with the commands' ratings and said that the indicators are an excellent beginning for developing a more comprehensive readiness measurement system. The Task Force suggested four additional indicators: (1) the use of simulators to improve individual and crew proficiency on weapon systems; (2) the quality of recruits enlisted by the services; (3) equipment readiness based on fully mission capable rates rather than on mission capable rates, which permit a weapon system to be reported as mission capable even though it cannot fully perform its mission; and (4) the extent to which readiness-related information in DOD is automated. In commenting on a draft of this report, DOD pointed out that it is useful to know if a system having a multimission capability can perform parts of the mission; therefore, it believes that both fully mission capable and mission capable rates are useful indicators. Also, DOD said that the extent to which readiness-related information is automated is not an indicator of readiness but that it might be helpful in obtaining an understanding of automation requirements. We agree with DOD's position on these two issues. As table 1 shows, some indicators are supported more by commanders of one service than by the others.
For example, information on commitments and deployments (Training, item 15) and deployed equipment (Logistics, item 17) were assessed as critical by Marine Corps commanders because of the manner in which the Marine Corps' forces and equipment are deployed. They were not listed as critical by any of the commands from the other services. By examining a group or series of indicators, one may gain a broader insight than is possible from a single indicator. To illustrate, changes in the extent of borrowed manpower (Personnel, item 7) may be related to proficiency on weapon systems (Training, item 12) or crew turnover (Personnel, item 8). Also, table 1 identifies indicators that, because of restricted training time and opportunities, are especially critical to the reserve components.

Several of the indicators that commanders rated as critical to readiness assessments relate to major readiness concerns recently expressed by service chiefs and CINCs. For example, while in the midst of downsizing, U.S. military forces are being called upon for operational contingencies—delivering humanitarian aid in Iraq, Bosnia, Rwanda, and Somalia and enforcing "no-fly" zones in Bosnia and Iraq, to name just a few. Unusually high operating tempos required for these contingencies have exacerbated the turbulence inherent in a major downsizing of U.S. forces. Several senior service leaders have raised concerns about the impact of this situation on morale, retention, and the ability to maintain readiness for traditional warfighting missions. Among the indicators suggested by some of the command officials we interviewed were personnel tempo, a measure of the frequency and number of personnel deployed on assigned missions, and crew turnover, a measure of personnel turnover within weapon system crews. Similarly, the services reported that they were required to shift funds from operations and maintenance appropriations to support contingency operations, and, according to officials of each of the services, some scheduled training exercises were canceled and others were postponed. Several commanders suggested readiness indicators related to operating tempo, funding levels, and individual/unit proficiency.

Related to the feature of predictive capability is the ability to conduct trend analyses based on the most important indicators. Assuming that relevant data is available, the services can identify trends in the additional indicators over time. However, no criteria are currently available to assess the meaning of a trend in terms of its impact on readiness. During our visits to the military commands, we noted an unevenness in the availability of historical data, depending on the indicator being monitored. Also, the commands reported that there is unevenness in the quality of the data available for measurement. Some indicators that were rated high in importance were rated low in quality.

We recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to develop a more comprehensive readiness measurement system to be used DOD-wide.
We recommend that, as part of this effort, the Under Secretary (1) review the indicators we have identified as being critical to predicting readiness and select the specific indicators most relevant to a more comprehensive readiness assessment, (2) develop criteria to evaluate the selected indicators and prescribe how often the indicators should be reported to supplement SORTS data, and (3) ensure that comparable data is maintained by all services to allow the development of trends in the selected indicators.

In written comments on a draft of our report, DOD generally agreed with our findings and recommendation (see app. I). The Department said that it plans to address the issue of using readiness indicators not only to monitor force readiness but also to predict force readiness. In response to our recommendation, DOD said that it is developing a specification for a readiness prediction system and that it has already used the indicators presented in our report as input to that process. DOD did not agree with our assessment of the overall value of SORTS information and the reliability of training ratings contained in SORTS. First, DOD said that it did not agree that SORTS information provided to higher commands and JCS is of limited value. We agree that SORTS provides valuable information on readiness. Nevertheless, the system does have several limitations. The matters discussed in the report are not intended as criticisms of SORTS but rather as examples of limitations that are inherent in the system. For example, C-ratings represent a valuable snapshot of readiness at a point in time, but by design they do not address long-term readiness or signal impending changes in the status of resources. Second, DOD said that it did not agree that SORTS may not adequately consider the effect that the loss of key personnel has on proficiency. DOD may have misinterpreted our position on this issue. Although SORTS recognizes the loss of key personnel, it does not always consider the impact of replacing key personnel with less experienced personnel. Lastly, DOD cited a number of factors that it believes make it infeasible to base training readiness on the results of combat training center exercises. This report does not propose that DOD take this course of action. Our reference to the fact that training readiness is based primarily on training conducted at home stations, rather than on the results of more realistic exercises conducted at combat training centers, is intended only to illustrate how the reliability of SORTS training information can be affected.

To assess the adequacy of the current definition and indicators of readiness, we examined military service and JCS regulations, reviewed the literature, and interviewed officials from 39 DOD agencies, including active and reserve service commands, defense civilian agencies, unified commands, and the Joint Staff (see app. II). To identify indicators that are being monitored to supplement SORTS data, we asked the 39 agencies to identify all the indicators they use to assess readiness and operational effectiveness. After compiling and categorizing the indicators by type, that is, personnel, training, and logistics, we asked the commands to rate the indicators' significance, predictive value, and quality. Indicator significance was rated as either critical, important, or supplementary. The commands' opinions of predictive value were provided on a five-point scale ranging from little or none to very great. The quality of the indicator was rated on a three-point scale—low, medium, and high.
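The rating scheme described above lends itself to a simple tabulation. The sketch below shows one way such command ratings could be aggregated to identify indicators rated critical or important, with at least some predictive value, by at least one-half of the commands, which is the criterion used for shading in table 1. The command names, indicator ratings, and threshold handling are hypothetical examples, not GAO's actual survey data or analysis code.

```python
# Each entry: (command, indicator, significance, predictive value 1-5, quality).
# These ratings are invented for illustration only.
ratings = [
    ("command A", "crew turnover", "critical", 4, "high"),
    ("command B", "crew turnover", "important", 3, "medium"),
    ("command A", "borrowed manpower", "supplementary", 2, "low"),
    ("command B", "borrowed manpower", "critical", 5, "medium"),
]

def highest_rated(ratings, commands_visited):
    """Return indicators rated critical or important, with at least some
    predictive value, by at least one-half of the commands visited."""
    counts = {}
    for _command, indicator, significance, predictive_value, _quality in ratings:
        if significance in ("critical", "important") and predictive_value >= 2:
            counts[indicator] = counts.get(indicator, 0) + 1
    return [name for name, count in counts.items() if count >= commands_visited / 2]

print(highest_rated(ratings, commands_visited=2))  # ['crew turnover', 'borrowed manpower']
```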
We asked the Defense Science Board’s Task Force on Readiness to (1) review and comment on the indicators that the commands rated the highest in terms of their importance and predictive value and (2) identify additional indicators that, in their judgment, were also critical to a comprehensive readiness assessment. We conducted our review from May 1993 to June 1994 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce this report’s contents earlier, we plan no further distribution until 30 days from its issue date. At that time, we will send copies to the Chairmen of the Senate and House Committees on Armed Services and on Appropriations; the Subcommittee on Military Readiness and Defense Infrastructure, Senate Armed Services Committee; and the Subcommittee on Readiness, House Armed Services Committee; and to the Secretaries of Defense, the Army, the Navy, and the Air Force. Copies will also be made available to others on request. Please contact me at (202) 512-5140 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. Secretary of the Army Washington, D.C. 4th Infantry Division (Mechanized) Fort Carson, Colorado 18th Airborne Corps Fort Bragg, North Carolina 24th Infantry Division Fort Stewart, Georgia Corps Support Command 18th Airborne Corps Fort Bragg, North Carolina Headquarters, Forces Command Fort McPherson, Georgia Headquarters, Training and Doctrine Command Fort Monroe, Virginia National Guard Bureau Washington, D.C. Secretary of the Navy Washington, D.C. Secretary of the Air Force Washington, D.C. 1st Tactical Fighter Wing Langley Air Force Base, Virginia 375th Air Wing Scott Air Force Base, Illinois Air Combat Command Langley Air Force Base, Virginia Air Force Reserve Washington, D.C. Office of the Inspector General Washington, D.C. Office of the Joint Chiefs of Staff Washington, D.C. Ray S. Carroll, Jr., Evaluator-in-Charge James E. Lewis, Evaluator (Data Analyst) James K. Mahaffey, Site Senior Robert C. Mandigo, Jr., Site Senior Jeffrey C. McDowell, Evaluator Jeffrey L. Overton, Jr., Site Senior Susan J. Schildkret, Evaluator Lester L. Ward, Site Senior The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the effect of declining defense budgets on military readiness, focusing on whether: (1) the definition and indicators of readiness adequately reflect the many complex components that contribute to overall military readiness; and (2) there are readiness indicators that can predict positive or negative changes in readiness. GAO found that: (1) the Department of Defense's (DOD) system for measuring readiness, the Status of Resources and Training System (SORTS), has limitations; (2) SORTS was never intended to provide a comprehensive assessment of overall military readiness; (3) SORTS only measures individual service readiness because there are no indicators available to measure joint readiness; (4) SORTS does not assess operating tempo or troop morale; (5) to supplement SORTS data and facilitate readiness assessments at the unit level, the military commands independently monitor additional indicators that are critical to a comprehensive readiness assessment at the unit level and have some degree of predictive value; and (6) DOD can improve its comprehensive readiness assessments by incorporating unit level indicators, but these indicators will require further refinement to improve their usefulness.
Today, and in the foreseeable future, military operations require U.S. personnel, in particular Army and Marine Corps ground forces, to communicate and interact with multinational partners and local populations. DOD, and the Army and Marine Corps, have emphasized the need to build and sustain language and culture knowledge and skills in the general purpose forces. The Army and Marine Corps are providing language and culture predeployment training in support of ongoing operations. DOD relies on formal tests to measure service members' proficiency in a foreign language. Various training and personnel systems exist within DOD at the service and department level.

Departmentwide and service-level strategic plans and operating concepts emphasize the need to build and sustain language and culture knowledge and skills in the general purpose forces (see fig. 1). In particular, referring both to the near-term needs of current operations and the long-term efforts to prepare military forces for future conflicts, DOD concluded in the 2010 Quadrennial Defense Review that U.S. forces would be able to perform their missions more effectively with more and better key enabling capabilities, including language expertise. The Army and Marine Corps have also developed concepts to align headquarters and forces with geographic commands around the world and plan to provide them with specialized language and culture training prior to deployment to conduct security force assistance and irregular warfare missions, among others. In addition, the services are implementing strategies to build and reinforce language and culture knowledge and skills through training at various points of a service member's career through formal service institutions, such as professional military education schools, and during predeployment training. For example:

 Beginning in 2009, the Army Command and General Staff College began offering language courses to soldiers in targeted languages, such as Arabic, Chinese, and French, which consist of resident instruction, self-study, and distance learning in a modified year-long program. In addition, the Army updated its Captains Career Course in 2010 to include 13 hours of training in the areas of cross-cultural skill building and negotiations.

 The Marine Corps has begun implementing the Regional, Culture, and Language Familiarization career development program for all marines that begins when marines enter military service and continues throughout their career. As part of the program, marines are assigned to 1 of 17 regions around the world and will be assigned an associated language. The program is organized into three broad areas of training (culture general, culture specific, and language familiarization) and functionally organized within a block structure that builds and reinforces knowledge and skills over a marine's career.

As we have previously reported, the Army and the Marine Corps have established service-specific predeployment training requirements and are providing their respective general purpose forces with language and culture training that is focused on the particular area to which a unit will deploy. Given that over the past 10 years Army and Marine Corps forces have experienced continual operational deployments to Iraq and Afghanistan with limited time to prepare between deployments, most language and culture training efforts have focused on predeployment training for ongoing operations.
For example, since July 2010, the Army has required that all soldiers deploying to Afghanistan and Iraq complete a 4- to 6-hour online training program that provides basic language and culture training. In addition, commanders are required to designate at least one leader per platoon who will have regular contact with a local population to complete 16 weeks (at least 480 hours) of on-site training at one of five language training detachments on Army installations. If the designated leader does not have access to a language training detachment, that soldier is required to complete approximately 100 hours of computer-based training. Since February 2010, the Marine Corps has required that all deploying marines complete culture training which, for Afghanistan deployments, service officials reported typically consists of 1 day of training, and selected marines have been required to complete language training with the amount determined by a mission analysis. Selected marines can complete this training at one of two language training detachments on Marine Corps installations or through programs at a local community college and university. Language training detachments on Army and Marine Corps installations provide predeployment training that includes role playing, classroom instruction, and self-directed learning (see fig. 2). DOD relies on the Defense Language Proficiency Test system of tests to measure an individual’s proficiency in a foreign language. The test is administered in a Web-based format to measure proficiency in the listening and/or reading modalities. The speaking modality is tested in person or by telephone. Test scores are reported as Interagency Language Roundtable skill levels measured on a scale from 0 (no proficiency) to 5 (functionally native proficiency). DOD guidance also establishes broad regional proficiency skill level guidelines. These guidelines include culture knowledge and skills and are intended to provide DOD components with benchmarks for assessing regional proficiency needs, for developing initial and sustainment regional proficiency curricula at service and professional military education schools, and for assessing regional proficiency capabilities. Our prior work has found that DOD has not yet established a way to test or otherwise evaluate the culture knowledge and skills of service members in accordance with these guidelines. The Army and Marine Corps maintain a number of service-level training and personnel systems. At the department level, DOD maintains several additional information systems that draw upon or provide data to the services’ training and personnel systems. Table 1 provides information on key Army and Marine Corps training and personnel systems and other DOD information systems. The Army and Marine Corps have captured some information at the unit level for those service members who completed language and culture predeployment training for ongoing operations. DOD guidance requires that the services document all language and regional proficiency training, education, and experience, which includes culture, in service training and personnel systems and use this information in force management processes. Service documents also note that language and culture training completion and corresponding proficiency should be documented in service-level systems. 
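As an illustration of what documenting training completion and corresponding proficiency in service-level systems entails, the sketch below shows a minimal record combining completion status for each mandatory predeployment task with Interagency Language Roundtable levels by modality. The field names, task names, and values are assumptions for the example; they are not the schemas of the Digital Training Management System, the Army Training Requirements and Resources System, the Total Army Personnel Database, or any other DOD system.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class LanguageCultureRecord:
    service_member_id: str
    language: str
    # Completion status for each mandatory predeployment training task.
    tasks_completed: Dict[str, bool] = field(default_factory=dict)
    # Interagency Language Roundtable levels (0 to 5) by modality;
    # plus levels such as 0+ are represented here as .5 for simplicity.
    ilr_scores: Dict[str, float] = field(default_factory=dict)

record = LanguageCultureRecord(
    service_member_id="0001",
    language="Dari",
    tasks_completed={"online culture module": True, "language training detachment": True},
    ilr_scores={"listening": 0.5, "speaking": 0.5},
)

# Example query: has the member met a 0+ benchmark in the speaking modality?
print(record.ilr_scores.get("speaking", 0.0) >= 0.5)
```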
However, we identified several factors that limited the Army's and Marine Corps' ability to capture information within service-level training and personnel systems on service members' completion of language and culture training and their corresponding proficiency gained from this training. Officials with Army and Marine Corps units we spoke with who were preparing for deployments or who were deployed in Afghanistan at the time of our review reported that they documented which service members completed language and culture predeployment training on spreadsheets and paper rosters that were stored at the unit level. For example:

 Officials with an Army brigade deployed in Afghanistan in December 2010 reported that its subordinate battalions recorded soldiers who completed mandatory language and culture training tasks on unit attendance rosters.

 Officials from an Army brigade preparing for deployment to Afghanistan in March 2011 stated that companies and battalions within the brigade documented an individual soldier's completion of required language and culture predeployment training on manually completed computer spreadsheets. During predeployment training, companies and battalions reported summaries of the status and completion of critical training tasks, including language and culture tasks, on a weekly basis to the brigade headquarters.

 Officials from Marine Corps battalions preparing for deployment to Afghanistan in November and December 2010 stated that units used manually completed computer spreadsheets to document the number of marines who completed language and culture predeployment training requirements, and that unit training completion percentages were routinely reported to the regiment headquarters.

Army and Marine Corps training officials reported that the approaches used to capture information on the completion of predeployment training provided unit commanders with some visibility over the number of soldiers and marines who completed language and culture predeployment training. The Army requires that all of its units use the Digital Training Management System to document soldiers' completion of individual soldier training and collective training conducted at the unit level. Moreover, in July 2010, the Army released specific guidance that directed units to input language and culture predeployment training in the Digital Training Management System. According to an Army regulation and a Digital Training Management System information paper, the intent of capturing training information in electronic soldier records is to enable decision makers at the service level to track and monitor soldiers, ensure that training records are automatically transferred with a soldier when he or she is reassigned to another unit, and provide visibility to senior leaders that can inform force management decisions. Units we interviewed reported, however, that they did not record the completion of all mandatory language and culture predeployment training tasks within the Digital Training Management System. Although the system provides a single data field for units to record information for basic language and culture training, the Army has multiple mandatory language and culture predeployment training requirements. Because only one field exists, units we spoke with stated that inconsistent information was recorded in that field. In some cases, units recorded basic culture training in the field but did not record predeployment language training.
For example, officials with battalions preparing to deploy to Afghanistan in March 2011 reported they did not record information in this field for soldiers who completed mandatory language training at an on-site language training detachment. At the time of our review, the Army had not yet established data fields within the Digital Training Management System that would allow training officials to document soldiers' completion of all mandatory language and culture training tasks. When we brought this fact to the Army's attention, service headquarters officials stated that the Army had considered adding new data fields within the Digital Training Management System for all required language and culture predeployment training tasks but, as of July 2011, had not done so. Without data fields available that are clearly aligned with all mandatory training tasks, units have been unable to document which soldiers completed language and culture training.

We also found that the Army had not recorded language proficiency in its primary training systems, despite the fact that these systems have data fields to record this information. In December 2010, the Army reinforced its prior guidance that directed that units record training in the Digital Training Management System and also stated that units should record training within the Army Training Requirements and Resources System to enable tracking of cultural knowledge and foreign language proficiency. Service officials reported that, as of July 2011, nearly 100 percent of the more than 800 soldiers who completed training at a language training detachment met the Army standard for language proficiency in the speaking and listening modalities. However, information on the language proficiency of these soldiers was unavailable in either of these systems. Unit officials we spoke with reported that they did not record soldiers' language proficiency gained from training at a language training detachment within the Digital Training Management System, but rather tracked the number of soldiers who met the Army's language proficiency standard on unit spreadsheets. Training managers responsible for inputting data in the Army Training Requirements and Resources System also reported that they did not record language proficiency data for soldiers who completed this training. Officials stated that information on language proficiency is typically documented within the Army's personnel system. The Army's primary personnel system (the Total Army Personnel Database) has the capability to capture language proficiency. While the Army collects some language proficiency data within this system, the Army considers these data unreliable because of weaknesses in its approach to collecting them. For all soldiers, including those who complete training at a language training detachment, the Army uses a paper form to document soldiers' language proficiency. When soldiers complete training at a language training detachment, the Army provides them with a test to determine proficiency in the listening and speaking modalities, and a testing official records the corresponding proficiency on this form. The form should then be passed on to a soldier's local training manager and to the Army Human Resources Command. The Army Human Resources Command is required to ensure that language proficiency data are current and accessible to the Department of the Army staff and personnel managers.
According to Army officials, the command updates these data in soldiers' personnel records within the Total Army Personnel Database. However, Army officials described several weaknesses in this process that result in unreliable data. For example, the Army relies on hand-delivered hard copy forms, which introduce multiple opportunities for the forms to be lost or for human error in data entry. Depending on the type of language test, language proficiency data are also reported to the Defense Manpower Data Center, which maintains personnel and manpower data for all service members, including language test scores. For Web-based tests, test scores are automatically transferred to the Defense Manpower Data Center. For in-person or telephone tests, a testing official records the test score and sends the results to the Defense Manpower Data Center. Army officials explained that a data link does not currently exist to transfer data between the Defense Manpower Data Center and the Total Army Personnel Database and that, therefore, language proficiency data have not been routinely documented in soldiers' personnel records. Because the Total Army Personnel Database is also intended to provide data on soldiers' language proficiency for the department's Language Readiness Index, officials responsible for managing the Language Readiness Index reported that departmentwide visibility over service members' language proficiency is limited by the lack of accurate and timely service data. Figure 3 illustrates examples of limitations in the Army's ability to capture information within its training and personnel systems on the completion of language and culture predeployment training and corresponding language proficiency. In January 2011, the Army established a task force to improve the accuracy of information on service members' language proficiency available within the Total Army Personnel Database. At the time of our review, the Army Language Tracking Task Force had identified a number of key tasks and was at varying stages of completing its work. For example, the task force is working to establish a direct data link between the Defense Manpower Data Center, where language test scores are recorded, and the Total Army Personnel Database. According to a task force official, the Army plans to complete this link by early 2012.

According to Marine Corps Order 3502.6, units are required to track and report information about the status of predeployment training in accordance with guidance provided by the unit's chain of command. As discussed earlier in this report, Marine Corps units we spoke with reported that the completion of language and culture predeployment training for ongoing operations in Afghanistan was captured and tracked at the unit level using informal approaches, such as spreadsheets and paper rosters. Officials also explained that no Marine Corps service-level system is used to record the completion of predeployment training tasks. In its January 2011 strategy, the Marine Corps noted that no mechanism exists within the service to track regional and cultural skills obtained through operational experience on a servicewide basis, but that the timely identification of marines with these skills could assist the service in making force management decisions. The strategy also identifies the need for the service to develop a tracking mechanism to readily identify and leverage regional and cultural skills.
As presently structured, the Marine Corps Training Information Management System enables servicewide tracking of the completion of institutional training and professional military education. During our review, Marine Corps officials stated that the service was in the process of developing a new module within this system that, when fully implemented, would allow units to document individual and unit predeployment training. However, according to officials, the Marine Corps has not determined whether this new module or another system would be used to track language or culture predeployment training tasks.

We also found that the Marine Corps had not provided formal language tests to marines who completed significant language training for ongoing operations in Afghanistan and, therefore, had not documented their language proficiency within its primary personnel system (the Marine Corps Total Force System) or any other system. According to officials, most marines selected for Afghan language training (about 30 marines per battalion) received approximately 40 hours of training that primarily focused on basic rapport building and memorization of survival phrases. Because of the limited number of training hours, Marine Corps officials stated that these training programs were not designed to produce measurable language proficiency. In discussions with units preparing for deployments to Afghanistan and with training providers, we found that some marines completed more extensive language training. For example, Marine Corps officials estimated that about 15 percent of marines selected for language training completed an advanced language training program that consisted of 160 hours of live instruction at a language training detachment on Camp Lejeune or Camp Pendleton, which also included a minimum of an additional 72 hours of self-directed learning via computer-based language training. In addition, our analysis found that about 1,000 marines attended training programs at a local community college and university since 2009 that ranged from 160 to 320 hours of Afghan language training. In cases where service members complete a significant language training event as defined by DOD and service guidance, the Marine Corps is responsible for administering the Defense Language Proficiency Test system of tests to measure language proficiency. However, although several language training programs met the criteria established in DOD and service guidance, we found that the Marine Corps had not required marines who completed significant language training to take tests from the Defense Language Proficiency Test system to measure their language proficiency. Therefore, the Marine Corps does not have language proficiency data for these marines. Marine Corps officials told us that they are reviewing the potential applicability of using a new Defense Language Proficiency Test that has been specifically designed to assess lower levels of language proficiency, but formal decisions on whether to use this test for general purpose force marines who completed significant Afghan language training have not yet been made.

By not capturing information within service-level training and personnel systems on the training that general purpose forces have completed and the proficiency they gained from training, the Army and Marine Corps do not have the information they need to effectively leverage the language and culture knowledge and skills of these forces when making individual assignments and assessing future operational needs.
DOD and service guidance address the need to sustain language skills and the DOD strategic plan for language, regional, and culture skills calls for the services to build on existing language skills for future needs. The Army and Marine Corps have made considerable investments in time and resources to provide some service members with extensive predeployment language training, but have not developed plans to sustain language skills already acquired through this training. We found that the Army and Marine Corps had not yet determined which service members require follow-on training, the amount of training required, or appropriate mechanisms for delivering the training. DOD guidance instructs the services to develop sustainment language and regional proficiency training and education plans for language professionals and language-skilled personnel. Likewise, service documents reinforce the need to sustain language skills. For example, according to the Army’s December 2010 Culture and Foreign Language Strategy Execution Order, the Army will sustain the language skills of soldiers who achieve low levels of language proficiency. Additionally, the Marine Corps Language, Regional and Culture Strategy: 2011-2015 notes that without an effective sustainment program, the war-fighting benefits from language training will be lost, which minimizes the service’s return on investment for this training. Consequently, the strategy states that the Marine Corps must explore and leverage all cost-effective solutions to sustain language capabilities. Moreover, the Marine Corps has published guidance that states that mission accomplishment and efficiency can be enhanced if marines attain and maintain language proficiency, even at the lowest levels of proficiency. Additionally, a DOD strategy calls for the services to build on existing language skills for future needs. The Department of Defense Strategic Plan for Language Skills, Regional Expertise, and Cultural Capabilities (2011-2016) notes that in order to meet the requirements generated by an expanding global role, it is incumbent on the department to build on current language skills and invest in basic and continuing language, regional, and culture training and education. The strategy also states that by identifying language, regional, and cultural requirements and building these capabilities, DOD will be able to more effectively engage with not only partners and allies, but also with the indigenous populations in order to build rapport and establish trusting relationships. The Army and Marine Corps have made considerable investments in time and resources to provide some service members with extensive predeployment language training in order to prepare them for ongoing operations in Afghanistan. For example, according to Army documents, the Army spent about $12.3 million through August 2011 to establish and maintain language training detachment sites for Afghan language training. The Army estimated that it will spend an additional $31.6 million from fiscal year 2012 through fiscal year 2015 to maintain these sites. The Marine Corps has also funded Afghan language training courses at San Diego State University and Coastal Carolina Community College. Table 2 summarizes the number of soldiers and marines who completed selected language training programs since 2009, the length of the training, and the estimated cost of training. 
While informal language training programs exist, the Army and Marine Corps have not developed formal plans to sustain language skills acquired through predeployment training for ongoing operations. Officials with Army and Marine Corps units preparing for deployment and those deployed in Afghanistan reported that some informal follow-on training programs were available to service members to sustain language skills, for example, self-directed learning tools such as computer-based training programs. However, the use of informal training options to refresh and maintain language skills was voluntary and left to service members' personal initiative. The Defense Language Institute Foreign Language Center has reported that although personal initiative is necessary, it is almost never sufficient for maintenance of such a complex skill as foreign language proficiency.

We found that the Army and Marine Corps had not yet determined which service members require follow-on training to sustain language skills, the amount of training required, or appropriate mechanisms for delivering the training. Army officials stated that they recognized the need for a formal training program to sustain language skills acquired through predeployment training, particularly in light of the number of service members who have already received language training and will have multiple deployments to the same region. At the time of our review, the Army was evaluating various sustainment training options, but had not yet developed a formal plan or identified the resources required to provide the training. The Marine Corps is not planning to use a formal training program to sustain the Afghan language skills that marines acquired through predeployment training. Marine Corps officials cited several reasons for this approach, such as the turnover of personnel within the Marine Corps from one deployment to the next. Additionally, according to current plans, the service will provide language training for a variety of languages as part of its career development program. However, we found that this program is not intended to maintain or build upon language skills already acquired by some marines through extensive predeployment Afghan language training. In the absence of formal sustainment training to maintain and build upon service members' language skills acquired for ongoing operations at considerable expense in time and resources, the Army and Marine Corps may miss opportunities to capitalize on the investments they have already made to provide predeployment language training.

DOD has recognized that its ability to identify general purpose forces that have language and culture knowledge and skills will be critical to managing these forces in the future. However, by not capturing information within service-level training and personnel systems on the completion of language and culture training and corresponding proficiency gained from training, the Army and Marine Corps do not have the information they need to effectively leverage the language and culture knowledge and skills of these forces when making individual assignments and assessing future operational needs. Further, the Army and Marine Corps face competing demands for limited training time and resources and, in this context, not all service members who acquired skills through predeployment language training may require follow-on training.
Despite the fact that the Army and Marine Corps have made considerable investments to provide some service members with extensive predeployment language training, the services have not determined which service members require follow-on training to sustain language skills, the amount of training required, or appropriate mechanisms for delivering the training. As a result, the Army and Marine Corps may not maximize the return on investment already made for predeployment language training for current operations.

We recommend the Secretary of Defense take the following five actions. To provide decision makers with greater visibility on the language and culture knowledge and skills of Army and Marine Corps general purpose forces that could inform force management processes, we recommend that the Secretary of Defense direct the Secretary of the Army to:

 Establish clearly defined data fields for all mandatory language and culture training tasks within the Digital Training Management System and update Digital Training Management System records for soldiers who completed training prior to these fields being established.

 Document the language proficiency for soldiers completing predeployment language training within the Digital Training Management System and the Army Training Requirements and Resources System.

We further recommend that the Secretary of Defense direct the Secretary of the Navy, in consultation with the Commandant of the Marine Corps, to:

 Designate which training and/or personnel systems the Marine Corps should use to document the completion of marines' language and culture training.

 Administer formal tests to marines completing a significant language training event using DOD's agreed-upon method to measure proficiency, and ensure the results of these tests are documented in marines' personnel records within the Marine Corps Total Force System.

To capitalize on the investments in time and resources made in providing language training to service members, we recommend that the Secretary of Defense direct the Secretary of the Army and the Secretary of the Navy, in consultation with the Commandant of the Marine Corps, to:

 Determine which soldiers and marines with language skills require follow-on training, the amount of training required, and appropriate mechanisms for delivering the training, and make any adjustments to training programs that may be needed.

In written comments on a draft of this report, DOD concurred with two recommendations and partially concurred with three recommendations. DOD's comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we incorporated into the report as appropriate. In addition to providing detailed responses to our recommendations, DOD provided two general comments about our report.
First, DOD pointed out that our report noted the extent to which the Army and Marine Corps used service-level training and personnel systems to record service members' proficiency gained from predeployment training that meets DOD's definition of "significant language training." DOD stated that, since it believed the report may have taken the definition out of context, it would like to clarify what constitutes a "significant language training event," noting that DOD Instruction 5160.71 defines such an event as "at least 150 hours of immersion training or 6 consecutive weeks of 5-hours-a-day classroom training, or other significant event as defined by the Secretaries of the Military Departments and the Heads of Defense Agencies and DOD Field Activities." DOD stated that this definition was not intended to be associated with the initial acquisition of a language, but rather is associated with modifying the retesting interval for someone who has already achieved a measured proficiency. In a follow-up discussion, DOD officials clarified that language training offered during predeployment training falls into the category of initial acquisition of a language, and therefore, under the instruction, testing for proficiency is not required. These officials noted, however, that the military services are not precluded from testing for language proficiency at this stage, and therefore have the option of administering tests. As we noted in our report, the Army has decided to exercise this option and is in fact testing the proficiency of its service members upon their completion of extensive predeployment training. Given the considerable investments that the Marine Corps is making to provide some marines with extensive language training prior to deploying to Afghanistan, we continue to believe it is prudent for the Marine Corps to take a similar approach to testing. In the absence of such action, we continue to believe that DOD may be missing an opportunity to gain greater visibility of the language skills of its forces and therefore effectively leverage this capability when making individual assignments and assessing future operational needs.

Second, DOD acknowledged our recommendation to develop sustainment training programs to maintain and build upon service members' language skills. The department noted that DOD Instruction 5160.70 emphasizes the importance of sustainment language and regional proficiency training and education programs for language professionals and language-skilled personnel. DOD stated that with an increasing number of general purpose forces attending predeployment language training at language training detachments, the department will examine ways to capitalize on the investments already made to ensure that it builds, enhances, and sustains a total force with a mix of language skills, regional expertise, and cultural capabilities to meet existing and emerging needs.

DOD also provided detailed comments on each of our recommendations. DOD concurred with our recommendation that the Secretary of Defense direct the Secretary of the Army to establish clearly defined data fields for all mandatory language and culture training tasks within the Digital Training Management System and update Digital Training Management System records for soldiers who completed training prior to these fields being established.
DOD stated that deficiencies within the Digital Training Management System have been identified and that the Army, in a December 2010 order, had directed the development of solutions to address these deficiencies. As stated in our report, we recognize that the Army directed that units record training in the Digital Training Management System. However, its direction did not include requiring that adjustments be made in the system. Specifically, it did not call for action to be taken to add new data fields for all required language and culture predeployment training tasks that would allow training officials to document soldiers’ completion of these tasks. Therefore, because the Army has not directed this action, we continue to believe that our recommendation has merit. DOD concurred with our recommendation that the Secretary of Defense direct the Secretary of the Army to document the language proficiency for soldiers completing predeployment language training within the Digital Training Management System and the Army Training Requirements and Resources System. DOD stated that most predeployment language training is of such short duration that language proficiency will not be measurable and that the department’s emphasis will be to document language proficiency for general purpose forces completing predeployment foundational language training (usually 16 weeks or longer) conducted at language training detachments. DOD also noted that the Total Army Personnel Database will remain the primary system for recording language proficiency of Army personnel. DOD further noted that the Army Training Requirements and Resources System already facilitates the requirement for tracking and reporting certain language and culture training courses. For example, DOD noted that the Army has, within the system, assigned specific codes for all language and culture training courses; modified functions to require a proficiency score for these courses; and assigned codes to each of the courses for a specific language. However, in its comments, DOD did not state whether the Army plans to take any actions to document language proficiency within the Digital Training Management System, as we also recommended. We continue to believe this action is needed to provide decision makers with better information on the language and culture knowledge and skills of soldiers to make individual assignments and assess future operational needs. DOD partially concurred with our recommendation that the Secretary of Defense direct the Secretary of the Navy, in consultation with the Commandant of the Marine Corps, to designate which training and/or personnel systems the Marine Corps should use to document the completion of marines’ language and culture training. DOD stated that, as outlined in our report, current Marine Corps systems, such as the Marine Corps Training Information Management System, are designed to track the completion of institutional training and professional military education, not the completion of individual and unit-level training. 
DOD stated that although efforts are being pursued that may eventually allow for this capability, the Marine Corps believes that a comprehensive cost-benefit analysis needs to be conducted beforehand to accurately capture the costs in time, fiscal resources, and infrastructure enhancements associated with implementation and to determine whether the costs necessary to track the completion of language and culture training at the individual and unit levels are warranted, particularly when prioritized against other validated operational requirements in a fiscally constrained and time-constrained environment. We agree that the Marine Corps should consider the costs associated with documenting the completion of language and culture training beyond those already incurred at the unit level to record this information and determine whether the benefits are warranted. As part of its analysis, we would expect that the service would also consider the potential opportunity cost of not recording this information, such as how it might affect the ability of decision makers to make timely and informed decisions on assigning forces or assessing future operational needs if they do not have complete information on the knowledge and skills of their forces.

DOD partially concurred with our recommendation that the Secretary of Defense direct the Secretary of the Navy, in consultation with the Commandant of the Marine Corps, to administer formal tests to marines completing a significant language training event using DOD's agreed-upon method to measure proficiency, and ensure the results of these tests are documented in marines' personnel records within the Marine Corps Total Force System. DOD stated that the Marine Corps' predeployment language training programs are not specifically designed to produce a measurable language proficiency score using DOD's agreed-upon method for measuring it. Rather, the programs are focused on the military/tactical domain, and are designed to provide marines with the communication skills necessary to accomplish a specific mission-related task/skill. DOD stated, however, that the Marine Corps is assessing the feasibility of incorporating metrics into its predeployment language training programs that would produce a proficiency score, such as using the Very Low Range series of Defense Language Proficiency Tests and oral proficiency interviews. DOD also restated the need for clarification in our report over what constitutes "significant language training," noting that the current definition was not intended to represent initial acquisition of a language but rather is associated with modifying retesting intervals. As discussed previously, DOD officials clarified that the military services are not precluded from testing proficiency following the completion of courses that fall into the category of initial acquisition of a language, such as predeployment training. As we noted in our report, the Marine Corps has made considerable investments to provide some marines with extensive predeployment language training prior to deploying to Afghanistan. To date, the Marine Corps has not required these marines to take tests from the Defense Language Proficiency Test system to measure their language proficiency. Without this information, we continue to believe that DOD may be missing an opportunity to gain greater visibility of the language skills of its forces and therefore effectively leverage this capability when making individual assignments and assessing future operational needs.
DOD partially concurred with our recommendation that the Secretary of Defense direct the Secretary of the Army and the Secretary of the Navy, in consultation with the Commandant of the Marine Corps, to determine which soldiers and marines with language skills require follow-on training, the amount of training required, and appropriate mechanisms for delivering the training, and make any adjustments to training programs that may be needed. DOD stated that the Army is formulating a plan for sustainment of language skills acquired at Army language training detachments and that such a plan would rely heavily on existing distributed learning resources. We would expect that as the Army develops this plan, it would specifically address which soldiers require additional training, the amount of training required, appropriate mechanisms for delivering the training, and whether any adjustments to existing training programs would be made. DOD also stated that the Marine Corps has made a decision to formally build and sustain language, regional, and culture skills via the Regional, Culture, and Language Familiarization program for general purpose forces that specifically targets its officer corps and enlisted ranks starting at sergeant and above. DOD noted that, given high attrition rates for first-term enlisted marines, applying this program or other deliberate institutional programs designed to target the first-term enlisted population group has been deemed cost prohibitive. For these marines, language, regional, and culture skills are provided through predeployment training programs and common skills training, and sustained via informal mechanisms by providing access to language learning software and other computer-based technologies. We recognize that the Marine Corps has developed the Regional, Culture, and Language Familiarization program that is focused on its career force. However, as we stated in our report, the Marine Corps has made a considerable investment in time and resources to provide some marines with extensive predeployment language training in order to prepare them for ongoing operations in Afghanistan, but at this point, the Regional, Culture, and Language Familiarization program is not intended to maintain or build upon the language skills already acquired by these marines. In the absence of formal training to sustain these language skills, DOD may miss opportunities to capitalize on the investments already made to provide predeployment language training.

We are sending copies of this report to the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, the Secretary of the Army, the Secretary of the Navy, and the Commandant of the Marine Corps. This report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9619 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.

To address our objectives, we met with officials from the Office of the Secretary of Defense, the Army, and the Marine Corps.
To evaluate the extent to which the Army and Marine Corps captured information within service-level training and personnel systems on the completion of language and culture training and proficiency gained by personnel through training, we focused on Army and Marine Corps language and culture predeployment training programs administered since 2009 to prepare general purpose forces for ongoing operations in Afghanistan and Iraq. Therefore, for this review, we excluded service training programs for language and regional experts (e.g., foreign area officers and intelligence specialists) and special operations forces. We reviewed information available in service-level training and personnel systems and department-level information systems on service members’ completion of language and culture training and the corresponding acquisition of skills— specifically, the time frame when this training occurred and the proficiency service members had achieved. We defined “proficiency” using the Department of Defense (DOD) agreed-upon method for measuring it. We conducted interviews with Army and Marine Corps officials who are responsible for developing predeployment training programs and documenting information on training completion in service-level training and personnel systems. We also discussed the extent to which the services used these systems to record any proficiency gained from training, in particular the training that meets DOD’s definition of a significant language training event—at least 150 hours of immersion training or 6 consecutive weeks of 5-hour-a-day classroom training. We also interviewed officials with Army and Marine Corps units that were participating in predeployment training and units that were deployed in Afghanistan at the time of our review to discuss the extent to which they used service-level training and personnel systems and other processes to document the completion of language and culture predeployment training and proficiency gained from this training. In identifying Army and Marine Corps unit personnel to speak with, we selected an illustrative nongeneralizable sample of units that were deployed for contingency operations or preparing to deploy during the time frame of October 2010 through June 2011. We assessed the Army’s and Marine Corps’ efforts in light of DOD guidance that requires that the services document all language and regional proficiency training, education, and experience in training and personnel systems and Army and Marine Corps documents that state that language and culture training completion and corresponding proficiency should be documented in service-level systems. For our review, we focused on language and culture-related training, which DOD includes in its description of regional proficiency skills. We also discussed with Office of the Secretary of Defense and Army officials the content and status of ongoing departmental and Army efforts, such as the Army’s Language Tracking Task Force, which are intended to improve the accuracy of information on the language proficiency of service members available in personnel systems. To evaluate the extent to which the Army and Marine Corps have developed plans to sustain language skills acquired through predeployment training, we interviewed Army and Marine Corps training officials to discuss the extent to which the services had developed specific training programs for general purpose forces to sustain language skills. 
We interviewed officials with Army and Marine Corps units that were participating in predeployment training and units that were deployed in Afghanistan at the time of our review to discuss formal programs used by service members to sustain skills acquired through language training. We also discussed other informal training programs that were available to service members to sustain language skills. In identifying Army and Marine Corps unit personnel to speak with, we selected an illustrative nongeneralizable sample of units that were deployed for contingency operations or preparing to deploy during the time frame of October 2010 through June 2011. To gain an understanding of the investments associated with predeployment language training, we collected information from service training officials on the number of soldiers and marines completing training from January 2009 through July 2011, the amount of time spent in training, and the cost of these training programs. To ensure the reliability of our data, we interviewed knowledgeable officials about the data and internal controls on the systems that contain them. We determined that the data were sufficiently reliable for the purposes of this audit. We reviewed Army and Marine Corps training programs and plans in light of DOD and service guidance that emphasize the need to sustain language skills and the DOD strategic plan for language, regional, and culture skills that calls for the services to build on existing language skills for future needs. To gain insights on Army and Marine Corps units’ perspectives on capturing information on language and culture training in service-level training and personnel systems and discuss any steps taken to sustain skills acquired through language training, we interviewed officials with Army and Marine Corps units that were participating in predeployment training and that were deployed in Afghanistan at the time of our review. Specifically, we met with officials with one Army brigade combat team preparing for deployment and five subordinate combat arms and support battalions, as well as three Marine Corps combat arms battalions and one support battalion preparing for deployment; in addition, through formal requests for information from the United States Forces Afghanistan staff, we received written responses from three Army combat arms brigades and two Army support brigades deployed in Afghanistan. We focused on combat arms units because training guidance from the battlefield commander focused on language training for these units. We interviewed officials, and where appropriate obtained documentation, at the following locations:
Office of the Secretary of Defense
Office of the Under Secretary of Defense for Personnel and Readiness
Office of the Deputy Chief of Staff, G1
Office of the Deputy Chief of Staff, G2
Office of the Deputy Chief of Staff, G3/5/7
Assistant Secretary of the Army, Manpower and Reserve Affairs
Army Forces Command
Army Reserve Command
Army Training and Doctrine Command
Center for Army Lessons Learned
Combined Arms Center
Defense Language Institute Foreign Language Center
Training and Doctrine Command Culture Center
First United States Army
Marine Corps Training and Education Command
Center for Advanced Operational Culture Learning
Marine Corps Air-Ground Task Force Training Command
Marine Corps Center for Lessons Learned
Marine Corps Forces Command
Marine Corps Forces, Pacific
I Marine Expeditionary Force
II Marine Expeditionary Force
III Marine Expeditionary Force
U.S. Forces Afghanistan
We conducted this performance audit from June 2010 to October 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. GAO DRAFT REPORT DATED SEPTEMBER 15, 2011 GAO-12-50 (GAO CODE 351506) “LANGUAGE AND CULTURE TRAINING: OPPORTUNITIES EXIST TO IMPROVE VISIBILITY AND SUSTAINMENT OF KNOWLEDGE AND SKILLS IN ARMY AND MARINE CORPS GENERAL PURPOSE FORCES” RECOMMENDATION 1: The GAO recommends that the Secretary of Defense direct the Secretary of the Army to establish clearly defined data fields for all mandatory language and culture training tasks within the Digital Training Management System and update Digital Training Management System records for soldiers that completed training prior to these fields being established. DoD RESPONSE: Concur. The deficiencies identified within the Digital Training Management System (DTMS) have been identified and the development of solutions addressing these deficiencies has been directed by HQ Department of Army Execution Order of December 2010 regarding the implementation of the Army Culture Foreign Language Strategy. RECOMMENDATION 2: The GAO recommends that the Secretary of Defense direct the Secretary of the Army to document the language proficiency for soldiers completing predeployment language training within the Digital Training Management System and the Army Training Requirements and Resources System. DoD RESPONSE: Concur. Most predeployment language training is of such short duration that language proficiency will not be measurable. Rather, emphasis will be to document language proficiency for general purpose forces (GPF) completing predeployment foundational language training (usually sixteen weeks or longer) conducted at Language Training Detachments. The Total Army Personnel Database will remain the primary system for recording language proficiency of Army personnel. The Army Training Requirements and Resources System (ATRRS) already facilitates the requirement for tracking and reporting language and culture training through completion of ATRRS managed training courses. ATRRS has assigned a specific “select code” for all identified Language Culture Training Courses for reporting purposes. ATRRS has modified Input and Graduate functions to require a proficiency score for Language Culture Training Courses. Additionally, ATRRS has assigned a Language Identification Code to each of the Language Culture Training Courses for a specific language. Finally, reports can be requested within ATRRS to track/analyze the above actions. ATRRS routinely provides training completion transactions to the Total Army Personnel Database in support of its role as the Army’s authoritative source/system of record for personnel data. RECOMMENDATION 3: The GAO recommends that the Secretary of Defense direct the Secretary of the Navy, in consultation with the Commandant of the Marine Corps, to designate which training and/or personnel systems the Marine Corps should use to document the completion of marines’ language and culture training. DoD RESPONSE: Partially concur.
As outlined in the report, current Marine Corps systems such as the Marine Corps Training Information Management System (MCTIMS) are designed to track completion of institutional training and professional military education, not completion of individual/unit-level training. Though efforts are being pursued that may eventually allow for this capability, to include the possible addition of a module to MCTIMS and other efforts to track IW-related individual skills, the Marine Corps believes a comprehensive cost- benefit analysis needs to be conducted beforehand in order to: 1) accurately capture the “real costs” in time, fiscal resources, and infrastructure enhancements associated with implementation; and 2) determine whether those real costs/additional expenditures in time, resources, and funding necessary to implement tracking completion of language and culture training at the individual and unit-levels is worth the cost, particularly when prioritized against other validated operational requirements in a fiscally and time constrained environment. RECOMMENDATION 4: The GAO recommends that the Secretary of Defense direct the Secretary of the Navy, in consultation with the Commandant of the Marine Corps, to administer formal tests to marines completing a significant language training event using DOD’s agreed upon method to measure proficiency, and ensure the results of these tests are documented in marines’ personnel records with the Marine Corps Total Force System. DoD RESPONSE: Partially concur. The Marine Corps’ predeployment language training programs are not specifically designed to produce a measurable global proficiency score on the ILR scale. The program is focused on the military/tactical domain, and is designed to provide the Marine with the communication skills necessary to accomplish a specific mission related task/skill. Developing measures of effectiveness that target performance based requirements, vice global proficiency, is what is truly needed. This is accomplished by the Marine Corps during mission rehearsal exercises such as Enhanced Mojave Viper prior to deployment. With the introduction of the Very Low Range series of Defense Language Proficiency Tests (DLPT) and oral proficiency interviews, the Marine Corps is assessing the feasibility of incorporating these metrics into the predeployment language training programs. Additionally, clarification is required to determine what constitutes “significant language training.” There is concern that the current definition being utilized may have been taken out of context, and was not intended to represent initial acquisition of a language but rather is associated with modifying retesting intervals. RECOMMENDATION 5: The GAO recommends that the Secretary of Defense direct the Secretary of the Army and the Secretary of the Navy, in consultation with the Commandant of the Marine Corps, to determine which soldiers and marines with language skills require follow-on training, the amount of training required, and appropriate mechanisms for delivering the training, and make any adjustments to training programs that may be needed. DoD RESPONSE: Partially concur. The Marine Corps has made a decision to formally build and sustain language, regional, and culture skills via deliberate institutional programs for the GPF that specifically targets its Career Force. 
As outlined in the report, the Regional, Culture, and Language Familiarization (RCLF) Program is designed to build, enhance, and sustain these critical enablers in a focused, deliberate manner for its Career Force, comprised of its officer corps and enlisted ranks starting at sergeant and above. Given the very high first term enlisted attrition rates characteristics of the Marine Corps, robust application of the RCLF Program, or implementation of other deliberate institutional programs designed to target the first term enlisted population group, have been deemed cost prohibitive. At this level, language, regional, and culture skills are provided through predeployment training program and common skills training, and sustained via informal mechanisms by providing access to language learning software and other computer based technologies. As for the Army, it is formulating a plan for sustainment of language skills acquired at the Language Training Detachments. Such a plan would rely heavily on existing distributed learning resources. In addition to the contact named above, Patricia Lentini, Assistant Director; Nicole Harms; Mae Jones; Susan Langley; Michael Silver; Amie Steele; Matthew Ullengren; and Chris Watson made significant contributions to this report.
The Department of Defense (DOD) has emphasized the importance of developing language skills and knowledge of foreign cultures to meet current and future needs and is investing millions of dollars to provide language and culture predeployment training to its general purpose forces. DOD has also noted that such training should be viewed as a long-term investment and that training and personnel systems should better account for the knowledge and skills of service members acquired through training to help manage its forces. The committee report accompanying a proposed bill for the National Defense Authorization Act for Fiscal Year 2011 (H.R. 5136) directed GAO to review language and culture training for Army and Marine Corps general purpose forces. For this report, GAO evaluated the extent to which these services (1) captured information in training and personnel systems on the completion of language and culture predeployment training and proficiency gained from training and (2) developed plans to sustain language skills acquired through predeployment training. GAO analyzed service documents and interviewed cognizant officials. The Army and Marine Corps have documented some information at the unit level for service members who completed language and culture predeployment training, but the services have not fully captured information within service-level training and personnel systems on service members who completed training or their corresponding proficiency. DOD and service guidance require the services to document language and culture training completion and proficiency gained from training in service-level systems. However, GAO identified several factors that limited the services' ability to implement this guidance. For example, the Army's primary training system did not have data fields for all mandatory language and culture tasks and, as a result, units were unable to document the completion of this training. In addition, while the Army collects some language proficiency data within its primary personnel system, the Army considers these data unreliable because of weaknesses in its approach to collecting them. To improve the accuracy of information within this system, the Army established a task force in January 2011, which has identified a number of key tasks and is at varying stages of completing its work. The Marine Corps did not document language and culture predeployment training completion in any servicewide training or personnel system and a system has not been designated for this purpose. Further, the Marine Corps had not required marines who completed significant language training to take formal proficiency tests and, therefore, the service did not have language proficiency data for these marines. By not capturing information within service-level training and personnel systems on the training that general purpose forces have completed and the language proficiency gained from training, the Army and Marine Corps do not have the information they need to effectively leverage the language and culture knowledge and skills of these forces when making individual assignments and assessing future operational needs. The Army and Marine Corps have not developed plans to sustain language skills already acquired through predeployment training. The services have made considerable investments to provide some service members with extensive predeployment language training. 
For example, as of July 2011, over 800 soldiers have completed about 16 weeks of Afghan language training since 2010 at a cost of about $12 million. DOD and service guidance address the need to sustain language skills and the DOD strategic plan for language, regional, and culture skills calls for the services to build on existing language skills for future needs. However, we found that the services had not yet determined which service members require follow-on language training to sustain skills, the amount of training required, or appropriate mechanisms to deliver the training. Although informal follow-on training programs were available to sustain language skills, such as computer-based training, these programs were voluntary. In the absence of formal sustainment training programs to maintain and build upon service members' language skills, the Army and Marine Corps may miss opportunities to capitalize on the investments they have already made to provide predeployment language training for ongoing operations. GAO made recommendations intended to improve the availability of information on training completion and proficiency and help DOD plan for sustainment training. DOD generally agreed with the recommendations, but stated that the definition of significant language training was not intended to describe training for initial skills. However, DOD noted that current guidance does not preclude language proficiency testing at this stage.
IRS identifies unpaid tax liabilities through its program activities. The most common include IRS (1) identifying a taxpayer who files a tax return without fully paying the tax claimed to be owed, (2) adjusting tax liabilities while processing filed returns by checking for obvious errors, such as adding or subtracting numbers incorrectly or using the wrong social security number for a claimed dependent, (3) finding additional tax liabilities by auditing a filed tax return or computer matching it to third-party information on income paid to a taxpayer, (4) assessing a penalty for some taxpayer action or inaction, and (5) sending a tax bill to a taxpayer who did not file a required tax return after IRS estimates, based on available information, how much tax the person should have paid. IRS’s process for collecting identified unpaid debt has three phases. Debt goes through these phases until it is determined to be uncollectible, is collected, or is otherwise resolved:
Notice: IRS sends the taxpayer a series of notices of balances due, in part, to prompt a reply and payment by the taxpayer and handles responses to those notices.
Telephone: IRS uses telephone contacts with the taxpayer to prompt payment or takes enforcement action that may include levying financial assets or filing a lien against property.
In-person: IRS staff contact the taxpayer to prompt payment or to take enforcement action, including levies, liens, and seizures of property.
According to IRS officials, the phases and routing of tax debt collection cases reflect IRS’s design of the collection process to resolve taxpayer debt at the earliest possible time using the least costly resources. Although data are not readily available to determine the relative costs and benefits of the notice phase versus the other phases, given the relatively high automation of the notice process and the relatively low cost of postage compared with the more staff-intensive enforcement actions in later collection phases, the notice phase is likely the most cost-effective way for IRS to resolve unpaid debt cases. During the notice phase for individual taxpayers, IRS’s general practice is to send up to four notices at 5-week intervals to collect the balances due. Generally, 6 weeks after the fourth notice, IRS either determines that any additional collection actions should be deferred or sends cases to be worked potentially in the telephone or in-person contact phases. IRS has made an administrative decision to send separately, in the first and fourth notices respectively, two notifications in the notice phase that are required by law: (1) notification of the debt due and request for payment and (2) notification with regard to a levy of a state tax refund. According to IRS officials, IRS sends the discretionary second and third notices because they are successful in resolving some cases at relatively low cost. However, IRS has developed variations on the practice of sending four notices before deciding whether to send any uncollected tax debt to the next collection phase. Depending on specific debt-related characteristics, IRS may take such actions as (1) skipping the discretionary reminder notices to “accelerate” a debt to the next collection phase or (2) deferring further collection action on debts not resolved with notices. Based on these characteristics, IRS has established “business rules” that are embedded in IRS’s computer system.
These rules are to determine the number and types of notices as well as whether IRS defers collection action or sends the debt to other phases for further collection action. IRS officials said business rules were created in an attempt to make the most effective use of collection resources. As shown in figure 1, IRS annually has sent millions of notices (across the four types) to collect billions of dollars in unpaid debt from individual taxpayers. The total number of notices generally increased from fiscal years 2004 through 2008, reaching around 22 million in 2008, largely because of the number of “first” notices sent. Figure 2 shows that the total dollar value of these notices increased overall between fiscal years 2004 and 2008, although the total declined in 2008, when it reached around $129 billion. Each year, IRS’s notice phase also disposes of debt associated with millions of notices that have been sent. IRS dispositions arise in various ways, such as by resolving the debt (including collecting payment from the taxpayer, abating all or some of the debt, or offsetting the taxpayer’s refund with the debt owed), deferring further collection action, or sending the case to the next collection phase for potential enforcement action. (See app. I on the trends in volumes and dollar values of dispositions from the notice phase for fiscal years 1999 to 2008.) Internal control is a major part of managing an organization. It comprises the plans, methods, and procedures used to meet missions, goals, and objectives and, in doing so, supports performance-based management. Internal control also serves as the first line of defense in safeguarding assets and in preventing and detecting errors and fraud. In short, internal control, which is synonymous with management control, helps government program managers achieve desired results through effective stewardship of public resources. Internal control should provide reasonable assurance that the objectives of the agency are being achieved through, among other things, effective and efficient use of the agency’s resources. Federal law requires that we issue standards for internal control in government. The standards provide the overall framework for establishing and maintaining internal control and for identifying and addressing major performance and management challenges and areas at greatest risk of fraud, waste, abuse, and mismanagement. In 2001, we issued Internal Control Management and Evaluation Tool (GAO-01-1008G, August 2001), based upon Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999), to assist agencies in maintaining or implementing effective internal control. This tool is not an authoritative part of the standards for internal control, nor is it required to be used. Instead, it is intended as a supplemental guide that federal managers and others may use in assessing the effectiveness of internal control and identifying important aspects of control in need of improvement. When used with GAO’s standards for internal control and the Office of Management and Budget Circular A-123, Management’s Responsibility for Internal Control, the tool is to provide a systematic, organized, and structured approach to assessing an agency’s internal control structure. During the course of our review, in November 2008, IRS separately initiated a review of notices sent to taxpayers, including notices sent to taxpayers during the collection notice phase.
The review was led by an IRS team called the Taxpayer Communication Taskforce (TACT). TACT’s objectives included simplifying and clarifying notice language; instituting effective measures; streamlining and improving business processes; and eliminating unnecessary or duplicative notices, letters, reminders, and inserts. According to its director, TACT is to brief the Commissioner on a proposed “road map” for addressing issues identified by the team. IRS did not establish a date for the briefing before we concluded our work. Our objectives were to determine: How well has IRS established objectives, performance measures, and responsibility for reviewing notice-phase performance? How well do IRS’s business rules for sending notices to individuals help assure that the collection notice phase is achieving desired results at the lowest costs? To assess IRS’s objectives, performance measures, and responsibility for reviewing notice-phase performance, we reviewed information on how IRS has organized the work flow of the notice phase, including its objectives and related measures of performance as well as assignments of responsibility for notice activities such as reviewing the results. We reviewed collection program documents, including the Internal Revenue Manual (IRM), IRS’s collectionwide performance measures, and any related performance measures in the Wage and Investment Division’s (W&I) and Small Business/Self-Employed Division’s (SB/SE) periodic business performance review documents. We then analyzed whether these measures provided adequate information to assess whether the notice phase achieved the desired results laid out in any stated objectives. We also interviewed responsible IRS collection officials on any stated objectives for the notice phase and whether and how the officials use available data to assess the performance of the notice phase and develop performance measures. We reviewed available organization charts and the IRM sections on the managerial and reviewing responsibilities for the collection operations. We also obtained and reviewed the charter of the Collection Governance Council, which includes representatives of collection operations in W&I and SB/SE. We reviewed these documents to identify IRS officials responsible for managing aspects of the notice process and reviewing notice-phase performance and interviewed these officials. To determine how well IRS’s business rules help assure that the collection notice phase achieves desired results at the lowest costs, we asked IRS collection officials who were the most knowledgeable about the business rules to identify the key business rules that affect most individual taxpayers in the notice phase. The five rules IRS identified were based on certain dollar thresholds (minimum-, low-, and medium-dollar amounts) or “repeater” characteristics (taxpayers who recently had tax debt resolved or currently have other tax debt in collection status); app. II provides further descriptions of the five rules. For each of the five business rules, we interviewed these knowledgeable officials on how the rule operates—i.e., how the rule determines the number and types of notices sent—and asked IRS for documentation on how the rules operated. We asked for information and documentation on when the rule was established, the rationale or purpose for the rule, including supporting data that IRS considered in establishing the rule, and any evaluations of the rule since its establishment.
To answer both objectives, we used applicable internal control standards in our Standards for Internal Control in the Federal Government (GAO/AIMD-00-21.3.1, November 1999), Internal Control Management and Evaluation Tool (GAO-01-1008G, August 2001), and the Office of Management and Budget Circular A-123, Management’s Responsibility for Internal Control. To assess IRS’s objectives, measures, and performance review, we selected the applicable internal control standards based upon our report on IRS’s collection process, in which we addressed the complexity, organization, and performance measures of the collection process. The IRS information we relied on for our audit work was descriptive. The quantitative data in this report are from an IRS Collection Activity Report (CAR) and are for the purpose of providing background and context on the notice process. We interviewed IRS officials knowledgeable about CAR data regarding the steps taken to ensure data accuracy. We determined that the data we use in this report were sufficiently reliable for our purposes. We conducted this performance audit from July 2008 through September 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To assure that programs operate efficiently and effectively, agencies must assess the risks that the program will not achieve desired results at the least possible cost. A precondition to risk assessment is the establishment of clear, consistent goals and objectives at both the entity level and at the activity (program or mission) level. Without objectives, an agency cannot identify the risks that could impede the efficient and effective achievement of program or activity purposes. As shown below, guidance for the federal internal control standards states that objectives should be established for all key operational activities. Objectives have been established for all key operational activities and support activities. Activity-level (program or mission-level) objectives flow from and are linked with the agency’s entitywide objectives and strategic plans. The activity-level objectives are relevant to all significant agency processes. All levels of management are involved in establishing the activity-level objectives and are committed to their achievement. Although the notice phase affects millions of taxpayers and is a key part of IRS’s strategy for resolving billions of dollars of unpaid debt, IRS has not established written objectives for the notice phase. Guidance for the internal control standards is clear that such objectives should be written and be the product of a process in which all levels of management have been involved and have committed to the objectives. By not meeting these standards, IRS cannot be assured that relevant staff and management commonly understand and pursue the same outcomes that are linked to IRS-wide objectives and strategic plans. IRS officials said that written objectives are unnecessary because they have widespread agreement on the objectives.
However, our discussions with IRS W&I and SB/SE executives and managers responsible for all parts of the collection process show that they have different views about the notice phase in terms of what it is intended to do. For example, one official said that the purposes of the notice phase are to prompt a response from the taxpayer and get the taxpayer actively engaged in resolving the unpaid debt. Among other possible objectives, officials said that the notice phase is to resolve unpaid debt cases at the lowest possible cost or achieve the most economical resolution of the greatest number of debts. Furthermore, IRS officials referred to abating certain debts, achieving full payments for debts, or collecting as much unpaid debt as possible as other desired outcomes. These desired outcomes emphasize different things: getting the taxpayer involved, resolving as many debts as possible, minimizing costs, and collecting the unpaid amounts. For example, one could focus on minimizing costs or resolving more debt cases—such as through abatements or deferring any collection action—but not collect any debt in doing so. Without documenting how these different outcomes interact, IRS lacks assurance that its staff clearly understand what the notice phase should be producing, and staff may maximize one type of performance while adversely affecting other desired results. Without written objectives, IRS cannot move forward to the next step of establishing program performance measures that are tied to the desired results envisioned in the objectives. Without performance measures related to established objectives, agencies cannot be assured that a program is achieving desired results and, if possible, improving results. As shown below, guidance for the federal internal control standards states that performance measures should be established for government programs and, among other things, be linked to objectives. Activity-level objectives include measurement criteria. Performance measures and indicators have been established throughout the organization at the entitywide, activity, and individual levels. Performance measurement assessment factors are to be evaluated to ensure they are linked to mission, goals, and objectives and are balanced and set appropriate incentives for achieving goals while complying with law, regulations, and ethical standards. As with objectives, IRS has not established performance measures for the notice phase. IRS established the Collection Governance Council (CGC) in part to establish consistent performance measures that could be compared across the divisions and operating units that administer parts of the collection process and be rolled up to reflect IRS-wide collection performance. However, the three IRS-wide collection process measures do not provide information necessary to assess the notice phase, in part because two of the measures reflect telephone and in-person contact phase work in addition to notice phase work and the other measure is specific to the telephone contact phase. Nor do the W&I and SB/SE unit-level performance measures that IRS uses to hold managers accountable for performance in these units provide information necessary to assess the notice phase, in part because they reflect work done in addition to notice phase work.
For example, the measures for the Compliance Services Collection Operations (CSCO) units reflect performance not only on handling taxpayer written responses to the four collection notices but also on post-notice-phase work, such as taxpayer correspondence on defaulted installment agreements. Without performance measures, IRS cannot tell how well the notice phase works and helps achieve any objectives that are established. When we discussed this effect with IRS officials, they cited a percentage—total number of cases disposed by the entire notice phase divided by the total number of first notices issued in a given year—as evidence that the notice phase is working well. However, IRS has not established this percentage as a performance measure or defined how it is to be consistently calculated. For example, the official who calculated the percentage explained that more than one formula could be used to calculate the percentage, such as including or excluding deferred cases from the number of cases disposed. Furthermore, establishing this measure as the only measure of notice-phase performance would not meet IRS’s policy of having a set of balanced measures that are linked to IRS goals, such as goals for customer satisfaction, employee satisfaction, and business results. In addition to performance information, IRS would also benefit from using cost information to make data-based decisions about notice phase effectiveness. Officials in IRS’s Office of Chief Financial Officer have demonstrated the ability to determine full cost information on selected IRS programs through a series of cost studies, but the notice phase was not included in them. Even so, IRS has collected some information on the cost to produce or mail collection notices to taxpayers. For fiscal year 2006 alone and for just the fourth notice (which is sent by certified mail), IRS tracked and maintained postage cost data showing that it spent $22 million on sending that notice type. According to IRS’s comments on a draft of this report (see appendix III), IRS has an effort underway that is intended to address the lack of some types of cost information. IRS said that it awarded a contract in August 2009 to help develop the Correspondence Management Information System (CMIS) to provide, among other things, data on the costs of notice printing and postage, which will assist in determining the full cost of notices. The full costs also include such things as the labor cost of processing payments or handling taxpayers’ correspondence or telephone calls in response to notices. Related to the establishment of objectives and measurement of performance against those objectives, the internal control standards also envision top management reviews. To ensure that agencies are achieving desired program results, it is important to have a chain of performance reporting leading to top-level management reviews and accountability for program performance. As shown below, guidance for the federal internal control standards indicates that performance reporting up through higher levels of agency management should create a chain of accountability where performance is compared to targets. Top-Level Reviews—Management tracks major agency achievements in relation to its plans. Top-level management regularly reviews actual performance against budgets, forecasts, and prior period results. Management Reviews at the Functional or Activity Level—Agency managers review actual performance against targets.
Managers at all activity levels review performance reports, analyze trends, and measure results against targets. Without objectives and performance measures for the notice phase, IRS top management cannot review notice-phase performance. IRS’s primary system for periodically measuring and reviewing performance—the Business Performance Review System (BPRS)—focuses on business units rather than notice phase performance. According to IRS collection officials, the performance reporting creates a chain of accountability up through IRS for unit performance. These officials also said that other data that IRS could use to review notice-phase performance—data that include notice-phase results, such as full payments, that do not involve W&I and SB/SE subunits—are contained in the Collection Activity Report. Selected report data—such as the number of first notices and the number of notice cases disposed—are trended and reported to the CGC. However, the CGC is not required to report these data to higher levels of IRS management for its review as part of a mechanism for being accountable for notice performance. IRS faces a challenge in developing objectives, performance measures, and related top management reviews for the notice phase, in part because the phase spans IRS units or functions. As shown in figure 3, collection-notice-phase key activities are divided among five IRS units or functions. Factors that determine whether a given unit will be involved in handling a notice case include, among other things, whether the taxpayer is a W&I taxpayer or an SB/SE taxpayer and whether the taxpayer chooses to reply at all, pay the debt in full, or respond to a notice by telephone or by mail. Written documentation of processes is useful to managers in guiding an agency’s operations and to those overseeing and analyzing operations. As shown below, guidance for the federal internal control standards states that there should be complete and accurate documentation of transactions and significant events. Written documentation exists covering the agency’s internal control structure and for all significant transactions and events. The documentation is readily available for examination. Documentation, whether in paper or electronic form, is useful to managers in controlling their operations and to any others involved in evaluating or analyzing operations. Documentation of transactions and other significant events is complete and accurate and facilitates tracing the transaction or event and related information from authorization and initiation, through its processing, to after it is completed. As noted earlier, IRS officials said that generally business rules were established to make the best use of IRS resources. The five business rules IRS identified as affecting most taxpayers use certain dollar thresholds (minimum-, low- and medium-dollar amounts) or “repeater” characteristics (taxpayers who either recently had resolved a previous tax debt or currently have other debt in collection status) to determine whether certain notices are sent or whether IRS will defer further collection in a given case. Given the stated purposes of the business rules, the millions of taxpayers affected by them, and the billions of dollars in unpaid debt involved, the establishment of a business rule is a significant event that should be documented with enough detail to allow IRS managers and others to trace the basis for its establishment.
Such details could include the date the rule was established, the rationale for the rule, and any data supporting the rule. However, table 1 shows that IRS lacked such basic documentation for four of the five business rules and that for the one rule with documentation, IRS only had the information on the date it was established. IRS provided documentation that the repeater rule (for taxpayers that had previous tax debt resolved) was established in August 2002. Without documentation of when the other four rules were established, IRS lacked information for tracking and evaluating events over time that could affect how the rules operate. IRS officials’ recollections of when the four rules were established were imprecise and in one case depended on the official’s experiences in IRS. For example, an IRS official said the minimum dollar rule was established as early as 1978 because the rule was in effect when the official began working at IRS. Officials speculated that the low dollar rule was established around 1995. Officials said the medium dollar rule was established sometime in 1994 because an official had retained a dated training manual on a related program that officials recalled being established at the same time as the rule. IRS could not provide documentation on the rationales (such as the factors considered or reasons for adopting the rule) for any of the rules we reviewed. As with the dates that rules were established, we had to rely on the recollections of IRS officials to determine why IRS had adopted the rule and its purpose. Such sources were of limited use because of their reliance on staffs’ memories and because officials disagreed on some of the rules, thus making it unclear what IRS’s reasoning was for the rule. For example, officials said that the reason IRS established the medium dollar rule to defer sending certain cases to ACS was that using automatic offsets of future tax refunds to collect these debts was less costly than using ACS. However, another official said that the rule was established not so much because of the relative costs of ACS but instead to stem the flow of paper because, at the time IRS began deferring sending cases, IRS’s process for levying assets relied on paperwork, which overwhelmed the staffs. The lack of documentation for the rationale of the rule—the basic reasoning on why the rule exists—made it impossible to determine if the rule achieves its intended purpose. IRS also lacked data supporting the rationales for the rules. For example, for the minimum-, low-, and medium-dollar rules that include dollar thresholds for determining notices sent and whether further collection action would be deferred, IRS officials were unable to provide analyses on why a given dollar amount was chosen as opposed to a higher or lower amount. As a further example, for the repeater rule for previously resolved debts, IRS had no data to show why it selected the time period considered in determining whether a taxpayer was a “repeater.” IRS officials told us that documentation was lacking in most cases because the rules were established so long ago. Further, the officials said the current rules are not so much designed as a cohesive set of rules but instead are the result of numerous decisions made over the course of several years. 
Regardless, without documentation on when the rules were established, the rationale for the rules, and data supporting those rationales, IRS managers lack basic information to help assure that the business rules are working as originally intended. Without such documents, IRS has limited ability to determine whether the circumstances under which a rule was established have changed in such a way that the rationale for the rule is no longer valid and therefore it should be revised or abandoned. To exercise control over government operations, managers need to know how the programs they are responsible for operate. As shown below, guidance for the federal internal control standards states that managers should have pertinent information available in forms that are useful for them to exercise their responsibilities to ensure efficiency and effectiveness of operations. Pertinent information is identified, captured, and distributed to the right people in sufficient detail, in the right form, and at the appropriate time to carry out their duties and responsibilities efficiently and effectively. We found that collection officials lacked information to know what the notice-phase business rules are and how the rules operate. In some cases, the officials either did not know or misunderstood these key business rules. For example, in a meeting with us, IRS executives and managers cited three different amounts for the dollar threshold of the low dollar rule, which determines which notices are sent and whether further collection action will be deferred or the debt will potentially be offset by levying a state refund. For the medium dollar rule, IRS managers who we were told were most familiar with the rules originally said that a taxpayer under this rule would receive four notices. The managers said that sending four notices was to give the taxpayer the maximum opportunities to respond given that IRS would not be taking further collection action if payment or another response was not received. After further review, officials later told us that taxpayers receive only three notices in these cases. IRS collection officials were able to provide specifics on the operations of some of the rules and corrected some of their misunderstandings of the rules by using a document called the Functional Specifications Package (FSP) and consulting with Modernization and Information Technology Services (MITS) staff. The FSP contains some of the computer commands that control the sending of notices. Officials said the FSP and related system documents are the only documentation for the existence and operation of the rules. According to IRS collection officials, the language of the FSP is difficult to understand. Consequently, the managers had to consult with MITS staff to explain the rules. One of the roles of MITS is to translate operational requirements devised by the collection function into computer programming language. Furthermore, not all collection notice rules are in the FSP. Some rules are in at least one other information system—the program requirements package (PRP) of the master file, the information system that sends the first collection notice to taxpayers. With more than one system, IRS has limited assurance that the business rules are identical because changes in one system may not also be made in all the systems. For example, the FSP sends a notice for debts of $5 and above, but the PRP has a slightly higher dollar threshold for taking further action. 
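Keeping the rule parameters consistent between the FSP and the PRP is, in principle, a mechanical check once the parameters from each system are laid side by side. The sketch below, in Python, illustrates one way such a cross-check could work; the system names and the $5 figure come from this report, while the parameter names, the PRP value shown, and the reporting format are hypothetical placeholders rather than IRS data or IRS code.

# Hypothetical cross-check of notice business-rule parameters stored in two
# systems (the FSP and the master file's PRP). Only the $5 FSP minimum-dollar
# figure comes from the report; all other values and names are placeholders.
FSP_RULES = {"minimum_dollar_threshold": 5.00}   # FSP: notice sent for debts of $5 and above
PRP_RULES = {"minimum_dollar_threshold": 5.49}   # placeholder for the "slightly higher" PRP value

def find_mismatches(rules_a, rules_b, label_a, label_b):
    """Return descriptions of parameters whose values differ between two rule stores."""
    mismatches = []
    for name in sorted(set(rules_a) | set(rules_b)):
        value_a, value_b = rules_a.get(name), rules_b.get(name)
        if value_a != value_b:
            mismatches.append(f"{name}: {label_a}={value_a!r}, {label_b}={value_b!r}")
    return mismatches

if __name__ == "__main__":
    for line in find_mismatches(FSP_RULES, PRP_RULES, "FSP", "PRP"):
        print("MISMATCH -", line)

Run periodically, a comparison of this kind would give collection managers a simple way to confirm that a change made in one system has also been carried into the other.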
IRS officials were unaware of this inconsistency, and, after our further discussion with them, they agreed that the FSP rules should be amended to be consistent with the PRP. A critical part of management control is monitoring to ensure that programs are working as intended. Agency policies and procedures should generally be designed to assure that ongoing monitoring occurs in the course of normal operations and that periodic evaluations are part of management’s continuous monitoring of internal control. As shown below, guidance for the federal internal control standards shows that monitoring is important as a means to provide a reasonable assurance that the objectives of the agency continue to be achieved and to determine and improve the quality of the agency’s performance over time. If deficiencies are discovered, the agency can become quickly aware and take prompt action to remedy them. Management’s strategy provides for routine feedback and monitoring of performance and control. Management has a strategy to ensure that ongoing monitoring is effective and will trigger separate evaluations where problems are identified, and it is desirable that critical systems are periodically tested. Control activities are regularly evaluated to ensure that they are still appropriate and working as intended. The strategy includes a plan for periodic evaluation of control activities for critical operational and mission support systems. Appropriate portions of the internal controls are evaluated regularly. The agency takes appropriate follow-up actions with regard to findings and recommendations of audits and other reviews. Management and auditors follow up on audit and review findings, recommendations, and the actions decided upon to ensure that those actions are taken. Even though IRS officials estimated that the business rules had been established for years, IRS had documentation for an evaluation of only one of the five business rules we reviewed (the minimum dollar rule). For the remaining four rules we reviewed, IRS officials said that they had not evaluated three rules and had evaluated one rule (the medium dollar rule) but had no documentation of the evaluation. Without a process to evaluate the business rules, IRS officials cannot be assured that the rules work as intended. IRS collection managers were unaware of the evaluation of the minimum dollar rule and had no documentation on the follow-up action that was promised. IRS collection officials had originally told us that no evaluation had been done for the minimum dollar rule. During our work, we separately learned that IRS had done an evaluation in response to a 2004 Taxpayer Advocacy Panel (TAP) recommendation that the $5 threshold be raised to $25. The evaluation report showed that IRS had done analyses to consider such factors as the costs of sending and handling responses to notices and potential lost revenue. Although the report concluded that raising the threshold to the recommended level would not be cost-effective, it said that IRS would do further evaluation to determine if the threshold should be increased to some amount between $5 and $25. However, IRS officials could not provide any documentation that a follow-up evaluation had been done, nor did IRS have plans to conduct such an evaluation. Also, IRS officials could provide none of the data supporting the report conclusions, such as information on the costs of sending and handling responses to notices.
For the medium dollar rule evaluation that IRS officials said was done, the age of the evaluation and lack of documentation limited its usefulness. The officials said that IRS had evaluated the medium dollar threshold in the 1980s and concluded that it was appropriate. As we noted earlier, the intended purposes of all the rules we reviewed were undocumented. To the extent that IRS’s costs and collections may have been considered in establishing the medium dollar rule, with changes over time—including possible changes in taxpayer behavior and IRS’s processes, along with the certain change in the value of the dollar due to inflation—it is unclear how the results of an evaluation done more than 20 years ago would serve as adequate assurance that a current business rule is appropriate. Over time, various changes could affect the continued validity of any rationale. For example, according to IRS officials, the current medium dollar threshold was set in 1994 as part of a program for accelerating debts above a certain amount to ACS to make outbound telephone calls to attempt collection. According to IRS officials, the acceleration program was dropped after one year because future funding was eliminated. Even so, the medium dollar threshold has remained unchanged in the computer programming. IRS officials said that IRS has no requirement to periodically evaluate the business rules. Because the business rules are not being regularly evaluated, IRS risks that the rules will result in unnecessary costs or missed collections. For example, IRS could defer sending a case for collection action based on outdated assumptions about the costs of such action. To the extent that evaluations are done, the lack of a system to make managers aware of them or to maintain the evaluations and supporting data limits their usefulness in making program decisions to ensure that desired collection results are achieved at the least cost. According to its director, TACT also found that evaluations of business rules were lacking and was considering recommending certain actions to improve such evaluations. As with the project to build a database for measuring notice effectiveness and the proposal on management responsibility discussed above, documentation on TACT’s evaluation recommendation was not available for us to take into account in this report. Given the notice phase’s primary position in IRS’s collection process, its potential to collect or otherwise resolve debts at relatively low cost, and the significant revenue that it can generate, notice-phase performance could be a key indicator of the efficiency of IRS’s collection enforcement efforts overall. The notice phase may be operating well, but given the lack of objectives and performance measures for the process, its efficiency and effectiveness are not reasonably assured, and opportunities for improving performance may be missed. With the high volume of cases the process handles and the revenue it generates, a modest percentage improvement could result in significant cost savings or improvements in dollars collected or cases otherwise resolved.
To better ensure the notice phase is achieving desired results at the lowest costs, we recommend that the Commissioner of Internal Revenue: establish objectives and performance measures to reflect the desired results for the notice phase; establish responsibilities for reviewing the performance of the notice phase to help ensure accountability throughout IRS; document the rationales for the key notice-phase business rules in terms of efficiency, effectiveness, or other desired results; provide IRS collection managers and executives accessible, reliable information on what the business rules are; and periodically and regularly evaluate the business rules in terms of efficiency and effectiveness or other results and ensure the results are available to managers so the data and methodologies can be used or considered in future evaluations. The IRS Deputy Commissioner for Services and Enforcement provided written comments on a draft of this report in a September 16, 2009, letter, which is reprinted in appendix III. IRS staff also provided technical comments. We incorporated these written and other technical comments into the report as appropriate. IRS agreed with all five of our recommendations and to make related improvements. Specifically, IRS agreed to establish and document key objectives, measures, and responsibilities. IRS also agreed to periodically reevaluate its notice business rules and provide appropriate, reliable information to its managers. IRS also stated that it closes approximately 80 percent of its notice accounts without need for further contact with the taxpayer, and that it collected $23.6 billion through the notice process in fiscal year 2007. IRS did not clarify whether this percentage and this dollar amount involved all types of collection notices, closures, and taxpayers. In our report, we noted in figure 5 that IRS collected about $6 billion during fiscal year 2008. This total includes collections directly from notices sent to individual taxpayers only and comes from data that IRS provided to us. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Every year, IRS disposes of millions of tax debt collection cases through its notice phase in various ways. For example, IRS closes cases when taxpayers pay the debt in full or enter an installment agreement. In other cases, IRS defers further collection action and suspends active collection until some later date, which may be triggered by some taxpayer action. For example, taxpayers in active combat duty who have unpaid tax debts will be put in a deferred status. Also, IRS sends some unpaid tax debt cases that have gone through the notice phase to the other collection phases for potential enforcement action. As shown in figure 4, from fiscal year 1999 through 2008, the most frequent way that IRS disposed of tax debt cases that went through the notice phase was by the taxpayer’s paying the debt in full. 
Specifically, in fiscal year 2008, IRS disposed of 5.8 million unpaid tax debt cases (or 46 percent) through the notice phase as fully paid. Another 4.1 million unpaid tax debt cases were deferred or sent to other collection phases. On the other hand, when looking at the dollar values of the dispositions from notices, IRS sent much more of the unpaid debt amounts to other collection phases for potential collection than it collected. For example, in fiscal year 2008, IRS sent $40.6 billion in unpaid tax debt to other collection phases for potential collection enforcement action but collected $5.6 billion during the notice phase, as shown in figure 5. The remaining dollar amounts disposed through notices in fiscal year 2008 involved $4.4 billion in tax refund claims from the taxpayers that were used to offset the debt and $4.6 billion that IRS had to abate based on new information. Below are descriptions of the five business rules IRS officials identified in response to our request for the key rules that affect the most individual taxpayers. With the exception of the minimum dollar rule, the descriptions do not provide specifics on the case characteristics that the business rules consider because IRS considers that information to be sensitive. Such information could potentially be used by purposefully noncompliant taxpayers to avoid paying taxes due and to evade IRS's collection actions.

Minimum dollar rule—if the unpaid debt is below $5, no notice is sent and the debt is abated.

Low dollar rule—if the debt is below a certain threshold, three notices are sent, and if the debt remains unresolved, further collection action is deferred.

Medium dollar rule—if the debt is above the low dollar rule threshold but below a certain higher amount, three notices are sent, and if the debt remains unresolved and there is no known levy source, further collection action is deferred.

Repeater rule for previously resolved debts—if a taxpayer had previous tax debt resolved in the telephone contact phase, in-person contact phase, or in the waiting queue for assignment to a revenue officer, then for any new debt identified within a selected number of weeks, two notices are sent, and the case is sent to the phase that resolved the previous debt.

Repeater rule for current unresolved debts—if the taxpayer has a current debt assigned to the telephone contact phase, in-person contact phase, or in the waiting queue for assignment to a revenue officer, then for any new debt, two notices are sent, and the case is sent to the phase handling the current debt.

In addition to the contact named above, Tom Short, Assistant Director; Susan Baker; Ray Bush; Bill Cordrey; George Guttman; Ronald W. Jones; Veronica Mayhand; Ed Nannenhorn; Karen O'Conor; Cheryl Peterson; Steve Sebastian; Jay Smale; and A.J. Stephens made key contributions to this report.
According to the Internal Revenue Service (IRS), $23 billion in unpaid individual income tax debt existed in 2001, its most recent estimate. The notice phase is the first of IRS's three-phase process to collect unpaid debt. IRS annually sends notices to millions of individual taxpayers about billions of dollars of unpaid tax debt. Congress and others have questioned the effectiveness of IRS's collection process. As requested, GAO is reporting on (1) how well IRS has established objectives, performance measures, and responsibility for reviewing notice-phase performance, and (2) how well IRS's business rules for sending notices to individuals help assure that the collection notice phase is achieving desired results at the lowest costs. To address these objectives, GAO compared the evidence obtained from IRS documents and responsible IRS collection officials to applicable guidance for internal control standards. Although the notice phase is a key part of IRS's approach and strategy for resolving billions of dollars of individuals' unpaid tax debt, IRS lacks certain internal controls to assure that notices to individuals are achieving the most benefits, such as debt collected or unpaid debt cases otherwise resolved, with the resources being used. Management controls like clearly defined objectives, performance measures, and clear responsibility for reviewing program performance help provide reasonable assurance that the objectives of an agency are being achieved effectively and efficiently. However, IRS has no documented objectives for the notice phase and no performance measures to indicate how well the phase is performing in resolving debt cases or achieving other potential desired results. Further, IRS has not established responsibility for reviewing the performance of the complete notice phase. IRS lacks documentation for and evaluations of its business rules for notices to individuals to assure that the collection notice phase is achieving desired results. According to IRS officials, to make the best use of collection resources, IRS uses its business rules, based on certain dollar thresholds and individual tax debt case characteristics, to vary the number and types of notices sent to taxpayers and to determine whether unresolved cases will be sent for further collection action or whether further action will be deferred. However, as shown in the table, for the five business rules that IRS identified as affecting the most taxpayers, IRS in almost all cases did not have information on the date the rules were established, the rationale for the rule, or data supporting the rationale. IRS collection officials also lacked documentation describing the business rules and how they operate. Further, even though IRS officials estimated that the business rules had been established for years, IRS had documentation for an evaluation of only one of the five business rules. Without relevant evaluations, IRS lacks assurance that the notice phase achieves desired collection results at the least cost.
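The dollar-threshold and repeater rules described above amount to a small decision table that routes each new debt case. The following is a minimal sketch of how such routing logic might look; the threshold values (other than the $5 minimum stated above), the lookback window, the ordering of the rules, and the phase names are illustrative placeholders, not IRS's actual parameters, which IRS treats as sensitive.

```python
# Illustrative sketch of threshold-based notice routing rules.
# All values except MINIMUM_DOLLAR are hypothetical placeholders;
# IRS treats its actual business rule parameters as sensitive.

MINIMUM_DOLLAR = 5          # stated in the report: debts under $5 are abated
LOW_DOLLAR = 100            # hypothetical low dollar threshold
MEDIUM_DOLLAR = 1_000       # hypothetical medium dollar threshold
REPEATER_WINDOW_WEEKS = 52  # hypothetical lookback window for the repeater rule


def route_new_debt(amount, has_levy_source, prior_phase=None,
                   weeks_since_prior=None, open_case_phase=None):
    """Return (number_of_notices, disposition) for a new individual tax debt."""
    if amount < MINIMUM_DOLLAR:
        return 0, "abate debt, send no notice"
    if open_case_phase:                      # repeater rule, current unresolved debt
        return 2, f"send to phase handling current debt ({open_case_phase})"
    if prior_phase and weeks_since_prior is not None \
            and weeks_since_prior <= REPEATER_WINDOW_WEEKS:
        return 2, f"send to phase that resolved prior debt ({prior_phase})"
    if amount < LOW_DOLLAR:                  # low dollar rule
        return 3, "defer further collection action if unresolved"
    if amount < MEDIUM_DOLLAR and not has_levy_source:   # medium dollar rule
        return 3, "defer further collection action if unresolved"
    return 3, "send unresolved case to next collection phase"


# Example: a hypothetical $250 debt with a known levy source and no prior cases
print(route_new_debt(250, has_levy_source=True))
```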
In addition to CBP, various agencies have responsibilities for facilitating trade at land ports of entry and conducting inspections of commercial vehicles. GSA oversees design, construction, and maintenance for all ports of entry in consultation with CBP. In consultation with GSA, CBP develops an investment plan to manage the modernization of the land ports of entry inventory. Within DOT, FHWA provides funding for highway and road construction. In addition, the Federal Motor Carrier Safety Administration and state DOTs in some states—Arizona, Texas, and California on the southwest border—may conduct commercial vehicle inspections at or adjacent to land ports of entry to ensure compliance with federal and state-specific safety standards. In executing its mission, CBP operates 168 land border crossings, which vary in size, location, and commercial traffic volume. Of these, 46 crossings are located on the southwest border, and 24 of these crossings process commercial vehicle traffic. The four largest land border ports of entry on the southwest border by commercial vehicle traffic volume are Laredo, Texas; Otay Mesa, California; El Paso, Texas; and Calexico East, California. See figure 1 for a picture of commercial vehicles in line to enter the United States at the Otay Mesa border crossing near San Diego, California. Processing commercial vehicles into the United States at land ports of entry involves various steps and requirements. First, CBP requires carriers to submit electronic lists describing their shipments, known as e-Manifests, prior to a shipment's arrival at the border. Second, CBP reviews the e-Manifest using its Automated Commercial Environment database, among others, and assigns a risk level to the shipment. Next, the commercial vehicle proceeds into the United States and to a primary inspection booth at the U.S. port of entry, where a CBP officer reviews documentation on the exporter, importer, and goods being transported. If the documentation is consistent with CBP requirements and no further inspections are required, the truck is allowed to pass through the port. Depending on the port of entry, goods imported, or law enforcement requirements, CBP may direct the commercial truck to secondary inspection. According to CBP, trucks are referred to secondary inspection for numerous reasons, such as officer's initiative, targeted inspection, or random inspection. Secondary inspection involves more detailed document processing and examinations using other methods including gamma ray imaging systems and advanced radiation portal monitors or unloading and physical inspection. Trucks that require secondary inspection may be inspected by more than one federal agency, depending on their cargo. See figure 2 for an illustration of the steps in the commercial vehicle inspection process at land ports of entry. To facilitate the travel of low-risk screened shipments across the border, CBP created the FAST program, which is intended to secure and facilitate legitimate trade by providing expedited processing of participants' merchandise in designated traffic lanes at select border crossings, fewer referrals to secondary inspections, "front-of-the-line" processing for CBP secondary inspections, and enhanced security.
To be eligible to receive the benefits of the FAST program, every link in the supply chain—the carrier, the importer, and the manufacturer—is required to be certified under the Customs and Trade Partnership Against Terrorism (C-TPAT) program and the driver must be preapproved for participation in the FAST program. CBP defines border wait time as the time it takes for a vehicle to travel from the end of the queue—which may be in Mexico or the United States, depending on the length of the line—to the CBP primary inspection point in the United States. See figure 2 for an illustration of these points in the border-crossing process. As a service to the traveling public and the trade community, CBP began publicly reporting hourly wait time data through a web page on CBP.gov in early 2004, and currently reports these data for 42 of 46 crossings on the southwest border. CBP began formally collecting commercial and private passenger vehicle wait times on a daily basis in late September 2001 in response to the delays experienced immediately after September 11, 2001, when heightened enforcement efforts resulted in significant delays at many land border ports of entry. Over time, the collection of wait time data evolved as additional crossings were added and the amount of information collected was expanded. CBP reported that it is important that the trade community have current and consistent wait times on the CBP web site, noting that the web site is the only source of wait time information at many locations. Some border stakeholders, such as those in the private sector, find “total crossing time” to be a more useful measure than CBP’s definition of wait time. Unlike CBP’s narrower “wait time” measure, which captures the time it takes for a vehicle to travel from the end of the queue to the CBP primary inspection point, total crossing time is generally defined as the total time elapsed from entering the line in Mexico leading to Mexican export inspection through exit from U.S. inspection facilities, including any U.S. state-conducted inspections. See figure 2 for an illustration of the differences between these two measurements. CBP has developed a workload staffing model to determine the optimum number of CBP officers that each port of entry needs to accomplish its mission responsibilities at its land, air, and sea ports of entry. This model existed in different versions, beginning in fiscal year 2006. The conference report for the fiscal year 2007 DHS appropriations act, expressing concern regarding CBP’s ability to align staffing resources to mission requirements, directed CBP to submit a resource allocation model for staffing requirements that would explain CBP’s methodology for aligning staffing levels with threats, vulnerabilities, and workload across all mission areas. In April 2013, CBP submitted the most recent version of its workload staffing model to Congress in response to language in the conference and committee reports for the fiscal year 2012 DHS appropriations. DHS has received appropriations to support increased staffing levels for CBP officers on the southwest border over the last 5 fiscal years. For example, the conference report accompanying the fiscal year 2009 supplemental appropriation indicated that it included $30 million to fund the hiring of up to 125 CBP officers for the southwest border, and the fiscal year 2010 emergency supplemental appropriation for border security included $29 million for hiring additional CBP officers for southwest border land ports of entry. 
CBP and GSA have assessed infrastructure needs at all land border crossings over the last 9 fiscal years. From fiscal years 2004 to 2006, CBP assessed its complete portfolio of land port of entry facilities and identified infrastructure investment needs through its SRA process. The SRA includes architectural and analytical assessments of land port of entry inspection facilities’ condition and operations as well as relevant regional planning data and studies. Appendix III provides more information regarding CBP’s SRA process. GSA has also assessed land port of entry infrastructure needs when planning and designing land port of entry renovation projects. For example, before undertaking construction, GSA evaluates the design of projects to renovate, expand, or construct a new land port of entry using its BorderWizardTM program—a program used to simulate projected traffic flow through the proposed facility to help identify potential deficiencies, such as insufficient primary inspection lanes.information on completed, ongoing, or planned infrastructure improvement projects at southwest border land ports of entry for fiscal years 2008 through 2012. CBP policy identifies two methodologies to be used by ports of entry for manually calculating wait times for commercial vehicles; however, challenges in implementing these methodologies contribute to CBP wait time data being of limited usefulness for public reporting and management decision making across border crossings. Specifically, CBP policy provides port directors two options for manually calculating wait times at the border crossings they oversee: (1) line-of-sight and (2) driver survey. Port directors for each crossing are to choose which methodology to use based primarily on a consideration of the infrastructure layout of each crossing. CBP officers at border crossings are to use the first methodology when the end of the line is visible via the naked eye or camera. In accordance with this methodology, the CBP supervisor at the crossing is to estimate wait time based on traffic volume, number of lanes open, and where the end of the queue occurs relative to landmarks (i.e., foot of bridge, building, or intersection). When the end of the line is not visible, CBP policy recommends that officials estimate wait times using the second methodology—asking at least five drivers how long they have been waiting in the queue, dropping the highest and lowest responses, and averaging the rest. CBP’s October 2007 interim guidance, which prescribes these two methodologies to calculate wait times, states that “it is critically important that all locally posted wait times for ports or crossings are reasonably accurate and are uniformly reported by all stakeholders.” In addition, CBP’s May 2008 memorandum on land border wait time measurement states that “the importance of accurate land border wait time measures cannot be understated. Efficient and reliable land border wait time measures help to facilitate the movement of people and goods across our border and directly impact the economic health of border communities and the nation as a whole.” Among the six crossings we visited, Mariposa used driver surveys, and the remaining five crossings used line of sight to estimate wait times. 
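As a rough illustration of the arithmetic behind the driver survey methodology described above, the sketch below drops the highest and lowest of at least five reported waits and averages the remainder; the sample values are hypothetical.

```python
def driver_survey_wait(reported_minutes):
    """Estimate wait time from at least five driver responses (in minutes):
    drop the single highest and lowest responses and average the rest."""
    if len(reported_minutes) < 5:
        raise ValueError("CBP guidance calls for surveying at least five drivers")
    trimmed = sorted(reported_minutes)[1:-1]   # drop lowest and highest responses
    return sum(trimmed) / len(trimmed)


# Hypothetical responses from five drivers, in minutes
print(driver_survey_wait([35, 50, 55, 60, 110]))   # -> 55.0
```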
However, CBP's wait time data are of limited usefulness for public reporting and management decision making across border crossings because of three key factors: (1) CBP officers inconsistently implemented the line-of-sight methodology; (2) the other CBP-approved methodology, driver surveys, is inherently unreliable; and (3) CBP officials use different methodologies to calculate wait times across land border crossings. Line-of-sight methodology inconsistently implemented: CBP officials at three of the five crossings that reported primarily using the line-of-sight methodology also reported using the driver survey methodology at times, such as during construction or to routinely check their line-of-sight methodology. Officials also described conditions in the lanes leading up to the primary inspection booths for commercial traffic that complicate estimates of the queue. According to these officials, these factors hinder officers' ability to accurately determine the full duration of wait times, as gaps or cars between commercial vehicles may make wait times appear to be longer or shorter than they actually are. At one crossing—Otay Mesa—CBP officers used the line-of-sight methodology but did not consider the number of primary inspection lanes open, as required by CBP policy. The number of lanes open to commercial vehicle traffic influences the rate at which traffic moves through primary inspection. CBP's fiscal year 2008 WHTI study on the reliability of CBP's methods for calculating commercial vehicle wait times notes that the number of lanes open greatly affects wait time, so not considering the number of lanes open limits CBP's ability to accurately estimate wait times using the line-of-sight methodology. Driver survey methodology is unreliable: CBP's fiscal year 2008 WHTI study on the reliability of CBP's methods for calculating commercial vehicle wait times stated that driver surveys have been shown to be consistently inaccurate when measuring wait time, in part because they measure the wait time of travelers currently at the front of the queue, not the expected wait time of travelers currently at the end of the queue. As a result, if queuing conditions quickly change, the wait times collected using this methodology become inaccurate. In addition, port officials at Mariposa used driver surveys as the crossing's primary method of estimating wait times but noted that the methodology produced unreliable wait time data. Senior CBP officials at this crossing reported that officers had to use driver surveys to estimate wait times because a curve in the road leading up to the crossing obstructs officers' view of the queue, thereby preventing the crossing from using the line-of-sight approach. Senior CBP officials at this crossing stated that driver survey is an unreliable methodology because of survey bias—drivers may be inclined to report longer or shorter wait times than they actually experienced. Different methodologies across land border crossings: Port directors choose between the two CBP-approved methodologies to estimate wait times in accordance with CBP policy; however, OFO and OA headquarters officials stated that the use of different methodologies at crossings precludes comparison of data across locations in making management decisions. Although officials at each crossing determine which of the two methodologies to use based on the layout of each crossing and other local characteristics, the use of different methodologies at crossings makes CBP's wait time data unreliable for comparison across southwest border crossings, as they may produce different results.
OFO and OA headquarters officials told us that because of the different methodologies used at different crossings, the wait time data are not comparable across crossings and therefore are of limited use in making resource allocation decisions. In light of these challenges in implementing CBP's approved methodologies for estimating wait times, CBP's wait time data do not allow for reliable trend analysis to show the extent of wait times within or across southwest border crossings. Industry representatives at two of the six crossings we visited reported that, in their view, the actual wait times commercial vehicle drivers experienced were often longer than those CBP publicly reported. For example, industry representatives at the roundtable we convened in Nogales reported their view that wait times, as defined by CBP, were at times up to 2 hours longer than those CBP publicly reported. Industry representatives at two other crossings reported that CBP's wait time data were generally accurate. In addition, three organizations that commissioned studies to quantify the economic impact of wait times at southwest border crossings did not use CBP's wait time data as the basis for their studies but rather collected original wait time data by, for example, using cameras to photograph trucks' license plates at various points along the border-crossing routes and then matching these photographs to identify the wait time of each vehicle. (See appendix I for the results of these studies.) Because of these various limitations, we and others cannot use CBP's wait time data to analyze the extent of current wait times across border crossings on the southwest border or determine historical trends. Wait time data currently reported on CBP's public website are of limited usefulness to inform industry and the public because of the data limitations we identified and because they do not reflect the total border-crossing time. None of the industry stakeholders representing 21 companies and associations we met with over the course of our study reported using CBP's wait time data because they questioned the accuracy of the data. Industry representatives at the roundtables we convened in Nogales, San Diego, and Laredo said that more reliable wait time data would be useful to, for example, help businesses improve the efficiency of their operations and to make informed decisions including where to build new facilities, how much inventory to maintain, when and how frequently to send shipments across the border, and when to schedule truckers' or manufacturing plant employees' shifts. In addition, industry representatives at our roundtables in El Paso and San Diego noted that they did not use CBP's wait time data because the data did not provide information on the duration of the complete border-crossing experience—total crossing time—a more comprehensive measure that would be helpful in making business decisions. A 2008 study commissioned by the Department of Commerce also found it was important to use a measure of total crossing time to capture the border-crossing system as a whole, and to account for the fact that wait time associated with U.S. primary inspection was not the sole driver of total wait time for commercial vehicles. Instead, the study reported that delays were due to several factors, including many outside U.S. federal control. FHWA officials acknowledge the value of total crossing time and are piloting projects to automate such data collection.
In addition, according to CBP headquarters officials, these wait time data are also not sufficiently reliable to inform CBP management decisions—more specifically, decisions on staffing and infrastructure investments—and officers at the six crossings we visited told us that they use the wait time data in limited ways. At the headquarters level, OFO officials stated that because of data limitations, CBP's wait time data are not useful for comparison across crossings and explained that they do not use the data as a basis for determining staffing needs or allocating staff across field offices but rather rely on CBP's traffic volume data as a proxy. A senior OFO human capital official explained that the wait time data are not systematically compared across ports, but ports with known chronic wait time problems do get consideration in staff allocation decisions. Similarly, OA headquarters officials stated that they do not use wait time data to prioritize infrastructure improvement projects because of concerns about the reliability of CBP's wait time data. However, CBP field office and port officials reported using their existing wait time data to a limited extent to inform management decisions in the field. Specifically, senior CBP officials at the six crossings we visited reported using wait time data as one of various factors considered when, for example, allocating staff across crossings and shifts, making overtime decisions, and supporting white papers sent to headquarters requesting funding for infrastructure improvement projects. CBP officials at the six crossings we visited reported that more reliable wait time data would be useful to them in making such decisions. For example, CBP officials at each of these crossings stated that more reliable wait time data would help them in making staffing decisions. CBP does not have efforts underway or planned to help port officials overcome challenges to consistent implementation of existing wait time estimation methodologies. For example, CBP has not fully implemented recommendations from a fiscal year 2008 CBP study that could help the agency implement its current wait time estimation methodologies more reliably. In fiscal year 2008, CBP's WHTI program office studied the reliability of CBP's methods for calculating commercial vehicle wait times and identified six recommendations, three of which could, in part, help address the limitations discussed above. The recommendations directed CBP to, among other things, (1) use closed-circuit television cameras to measure wait time in real time; (2) provide a standardized measurement and validation tool, such as a useful and well-documented benchmarking system; and (3) continue to monitor and evaluate applications of transportation technologies at the border that allow for better measurement and reporting of wait times. However, CBP officials from three offices—the office that sponsored the report (Land Border Integration), the office in charge of cargo operations (Cargo Conveyance and Security), and the office that maintains the agency's wait time data (Planning, Program Analysis, and Evaluation)—were all unclear as to the steps, if any, that had been taken to address the first two recommendations and which office was responsible for implementing them.
With regard to the first recommendation, an official we met with in Cargo Conveyance and Security said that some crossings had access to cameras that helped them view the end of the line, but this official did not know how many crossings on the southern border had cameras for this purpose and further stated that there were no plans to expand camera availability to improve wait time data reliability. With regard to the second recommendation, this Cargo Conveyance and Security official stated that CBP had not taken steps to develop a standardized wait time measurement and validation tool and had no plans to do so. However, CBP officials with Land Border Integration and Planning, Program Analysis, and Evaluation stated that CBP had implemented the third recommendation by continuing to monitor and evaluate applications of transportation technologies in its work with FHWA to pilot projects for automating data collection. CBP guidance identifies the importance of reliable wait time measurement to facilitate the movement of people and goods across the border. Further, Standards for Internal Control in the Federal Government calls for agencies to establish controls, such as those provided through policies and procedures, to ensure the accuracy and timeliness of data. Control activities that ensure the prompt, complete, and accurate recording of data help to maintain their relevance and value to management in controlling operations and making decisions. In the near term, identifying and carrying out steps that can be taken to help CBP officials overcome challenges to consistent implementation of existing wait time estimation methodologies—such as implementing past CBP recommendations to expand the use of cameras to see the ends of queues and providing standardized wait time measurement and validation tools—could improve the reliability and usefulness of CBP's current wait time data. In February 2008, FHWA, in coordination with state DOTs and CBP, initiated pilot projects to develop automated wait time data collection methods at select southwest border crossings. Automation of wait time data collection relies on Radio-Frequency Identification readers to read the unique signals from passing vehicles at several points along the border-crossing route. These data points are then automatically matched and analyzed to estimate the current wait time at that crossing. As of March 2013, FHWA and state DOTs in Arizona, California, and Texas had eight pilot projects under way or completed to automate and standardize calculation of both wait time and total crossing time at eight crossings on the southwest border, including projects at each of the six crossings we visited. Wait time data resulting from some of these pilots are currently shared on a publicly available website with updates every 10 minutes. These eight projects were initiated on a crossing-by-crossing basis and are in various stages of implementation—one completed and seven ongoing. Two additional projects are planned, so senior FHWA officials expect automated wait time data to be available at 10 crossings by 2015, at which point current federal funding commitments for these projects end. CBP headquarters and field officials, as well as FHWA and a Texas Department of Transportation official, cited a range of potential benefits that could result from automating border wait time measurement. CBP's fiscal year 2008 WHTI report found that the long-term solution to standardize wait time measurement is to take advantage of automation technology.
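A minimal sketch of the matching step behind such automation appears below: reads of the same vehicle tag at an upstream reader near the end of the queue and at primary inspection are paired, and recent elapsed times are averaged to estimate the current wait. The reader names, tag IDs, and timestamps are hypothetical, and the actual pilots use more elaborate filtering and algorithms.

```python
from datetime import datetime
from statistics import mean

# Hypothetical RFID reads: (tag_id, reader, timestamp)
reads = [
    ("TAG1", "queue_entry", datetime(2013, 3, 1, 9, 0)),
    ("TAG1", "primary",     datetime(2013, 3, 1, 9, 42)),
    ("TAG2", "queue_entry", datetime(2013, 3, 1, 9, 5)),
    ("TAG2", "primary",     datetime(2013, 3, 1, 9, 50)),
]

def estimate_wait_minutes(reads):
    """Pair each tag's queue-entry and primary-inspection reads and
    average the elapsed times of recently completed crossings."""
    entries, waits = {}, []
    for tag, reader, ts in sorted(reads, key=lambda r: r[2]):
        if reader == "queue_entry":
            entries[tag] = ts
        elif reader == "primary" and tag in entries:
            waits.append((ts - entries.pop(tag)).total_seconds() / 60)
    return mean(waits) if waits else None

print(estimate_wait_minutes(reads))   # -> 43.5 minutes
```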
CBP headquarters, field office, and port officials told us that automation would reduce the burden on staff of manually collecting wait time data and increase staff availability for security efforts and other tasks. OFO headquarters officials also stated that automation would increase the accuracy, reliability, and timeliness of the wait time data that are collected and disseminated. Moreover, they stated that automated data would come from a more independent source, and thus the data may be perceived by industry organizations as more accurate than CBP’s current data. This would reduce the burden on CBP officials to respond to queries about their wait time data, according to CBP officials. OFO headquarters officials and senior CBP officials at the six crossings we visited reported that accurate wait time data would facilitate CBP management decisions such as staffing needs, infrastructure investment, performance management (such as evaluating efforts to mitigate wait times), and operations planning at land ports of entry. In addition, CBP officials at four of the six crossings reported automation could provide data on CBP’s definition of wait time as well as total border-crossing time. This could provide CBP with more holistic information on the complete border-crossing experience, thereby improving CBP’s ability to identify and address bottlenecks and providing industry stakeholders with more useful data to inform their business processes. At the same time, CBP officials reported limitations of the current automation pilot projects. In 2011, CBP commissioned a study to review the quality of the data resulting from the Texas-based pilot projects and found the automated wait time data were not yet sufficiently accurate for CBP’s purposes. In response to these findings, CBP worked with pilot project officials to modify the algorithm used to calculate the wait times, with the intention to improve the accuracy of the data. Another concern raised by CBP officials is that none of the pilots are yet able to consistently differentiate between wait times for FAST and non-FAST traffic. Not capturing separate wait time data for FAST and non-FAST traffic could limit the usefulness for key industry stakeholders and limit CBP’s ability to measure whether FAST participants are experiencing reduced wait times, as set forth in FAST program goals. FHWA officials reported that the technology solutions used in the current pilot projects are flexible enough to enable adding more readers to differentiate results for FAST and non-FAST traffic, but none of the current pilot projects are gathering data for this purpose, and FHWA officials reported that they have no plans to conduct additional research on solutions that differentiate between FAST and non-FAST traffic. In addition, CBP officials note that there are no pilot programs to automate wait time data collection at 34 of the 42 southwest border crossings where CBP currently reports hourly wait times. CBP, as the lead agency in collecting and reporting wait time data and the sole source of wait time data across the southwest border, does not have plans to oversee or manage these automation projects, although FHWA and others are anticipating an expanded CBP role once the pilot phases conclude. FHWA officials have led the research phase of these projects but expect their role to decline as the pilot phases end, and they are looking to others to manage these efforts in the longer term. 
FHWA has taken a lead role in the research, testing, and evaluation of wait time automation technology, including fully funding the pilot projects at the Bridge of the Americas and Otay Mesa and providing limited financial support for others. However, FHWA officials stated that they do not plan to fund these projects after the pilot phases end. CBP has coordinated with FHWA by, for example, consulting on the algorithms used to project wait times, but CBP has not provided funding for the projects on the southwest border. CBP officials reported that they do not intend to fund, adopt, or otherwise oversee these wait time automation projects once the pilot phases supported by FHWA and state DOTs conclude because they want another entity, such as FHWA or state DOTs, to do so. Texas Department of Transportation officials report that they are committed to continuing the Texas-based pilot projects in the short term but are looking for another source of funding, possibly CBP or others, to support the projects in the future. There are no other such commitments for the pilots in other states. CBP officials report that they are in discussion with FHWA about collaborative approaches to continuing these efforts, such as public-private partnerships. CBP officials stated that the agency has not taken action to improve or modify its current methods for collecting and reporting wait time data in the short term because officials believe that automated collection of wait time data is the most effective way to obtain reliable, standardized data, and the current automation projects are still in development. However, CBP has not assessed the feasibility of replacing or supplementing current methods of manually calculating wait times with the automated methods piloted by DOT or other means. Such an assessment could include all of the associated costs and benefits, options for how the agency will use and publicly report the results of automated data collection, the potential trade-offs associated with moving to this new system, and other factors such as those influencing the possible expansion of automation efforts to the 34 other locations that currently report wait times but have no automation project under way. OFO officials stated that CBP has not considered assessing the feasibility of automating wait time data collection and does not have estimates of potential costs or time frames because the pilot projects are still in development and CBP management has not committed to automating wait time data collection. However, standards for program management call for the feasibility of programs to be assessed early on. Given that CBP officials have stated that automated data collection is the most effective method for obtaining standardized and reliable wait time data, conducting an assessment of the feasibility of the methods piloted by FHWA or other automation methods, in consultation with FHWA and state DOTs, could help CBP determine how to best achieve its goal of improving the reliability of its publicly reported wait time data. CBP analyses and port officials identified needs for additional infrastructure—such as more lanes—at some border crossings, and our analysis of CBP data on lane use generally supported agency views on the extent to which CBP opens lanes at the six crossings we visited.
Further, our analysis supports CBP officials' statements that they generally open and close primary inspection lanes in response to fluctuations in commercial traffic volume, but some port officials cited constraints to opening more lanes during times of peak traffic. CBP and GSA assessments and officials identified current infrastructure limitations affecting commercial vehicle processing at three of the six crossings we visited. Specifically, CBP and GSA assessments and CBP officials cited infrastructure limitations related to an insufficient number of primary lanes at Otay Mesa, insufficient space for secondary inspections at Otay Mesa and World Trade Bridge, and poor facility layout as well as an insufficient number of exit gates at Bridge of the Americas. CBP port of entry officials for two of the three remaining crossings we visited stated that current infrastructure was sufficient to process commercial traffic at Columbia Solidarity Bridge and Ysleta. At the last crossing, Mariposa, CBP port officials reported that infrastructure would be sufficient once GSA's ongoing project to replace and expand the port is completed in the fall of 2014. Table 1 summarizes the infrastructure needs identified in CBP or GSA assessments as well as those identified by CBP port officials at the six crossings we visited. Further, our analysis of CBP data on lane use generally supported CBP officials' statements regarding the extent to which CBP officials open existing primary inspection lanes at the six crossings we visited. The number of primary inspection lanes available and open at each crossing was frequently cited by CBP and industry officials as a critical variable affecting wait times for commercial vehicles and, further, as evidence of whether a crossing's primary lane infrastructure was sufficient to process current traffic volumes. For example, at all the locations we visited, industry representatives expressed concern that CBP had an insufficient number of primary inspection lanes to process current traffic volumes or was not fully utilizing existing lanes. To determine the extent to which CBP was opening its existing primary inspection lanes, we analyzed CBP data on the average hourly percentage of primary inspection lanes open per month during operating hours over the last 5 fiscal years (October 2007-September 2012). This analysis showed the following: In fiscal year 2012, lane use data for two of the six crossings we visited suggest that these crossings—Otay Mesa and Mariposa—were at times operating at or near full capacity, as reported by agency officials. In fiscal year 2012, Otay Mesa opened an hourly average of 82 to 89 percent of its primary inspection lanes per month. At Mariposa, our analysis of lane use data for the first half of fiscal year 2012, prior to the addition of four new primary inspection lanes in April 2012, showed that during months of peak traffic, port officials opened an hourly average of between 80 and 84 percent of Mariposa's primary lanes per month. The average hourly percentages of primary lanes open per month at the remaining four crossings we visited—Bridge of the Americas, Ysleta, Columbia Solidarity Bridge, and World Trade Bridge—were all lower. This generally supported CBP officials' statements that they have the capacity to open more primary inspection lanes at these crossings.
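The lane-use metric described above can be reproduced with a straightforward aggregation. The sketch below, using hypothetical hourly records, computes the average hourly percentage of primary inspection lanes open per month; it is an illustration of the calculation, not GAO's actual analysis code.

```python
from collections import defaultdict

# Hypothetical hourly records: (month, lanes_open, lanes_available)
hourly_records = [
    ("2012-06", 8, 10), ("2012-06", 9, 10), ("2012-06", 10, 10),
    ("2012-07", 6, 10), ("2012-07", 7, 10),
]

def avg_pct_lanes_open_by_month(records):
    """Average hourly percentage of primary inspection lanes open, by month."""
    totals = defaultdict(list)
    for month, open_lanes, available in records:
        totals[month].append(100.0 * open_lanes / available)
    return {month: sum(pcts) / len(pcts) for month, pcts in totals.items()}

print(avg_pct_lanes_open_by_month(hourly_records))
# -> {'2012-06': 90.0, '2012-07': 65.0}
```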
Our analysis does not indicate whether CBP is maximizing use of its lanes, but rather allows us to observe how closely the average hourly traffic volume per month corresponds to the average number of hourly lanes open per month. Port officials also described instances in which additional lanes were opened, causing congestion throughout the facility because of limited space at secondary inspection. The one crossing where the lane use and commercial traffic volume did not appear to track as closely was Columbia Solidarity Bridge. However, CBP officials at Columbia Solidarity Bridge explained that traffic volumes and wait times there were so low that they generally did not need to open or close lanes in response. Figure 3 summarizes our analysis and includes additional information for each of the six crossings we selected. (For a printer-friendly version, please see appendix V.) CBP officials at headquarters and in the field cited various ways they are working to address infrastructure limitations given challenges caused by budgetary and geographic constraints, among others. In regard to budgetary constraints, CBP and GSA officials stated that GSA has not received funding to conduct additional expansion projects in the last 2 fiscal years, and as a result, they have not been able to execute new projects to address infrastructure needs at land ports of entry on the southwest border. GSA officials reported that the agency has used alternative funding sources to pay for prioritized infrastructure projects. For example, GSA and CBP officials reported using funds from the city of Laredo to support the expansion of primary inspection lanes at the World Trade Bridge crossing in 2011. In regard to geographic constraints, port officials at the Bridge of the Americas stated that the urban area around that crossing limits opportunities to expand the crossing's footprint. Officials with the city of El Paso told us that they are promoting a plan to divert all commercial traffic to the nearby Ysleta crossing because it has greater capacity to process commercial traffic and a larger footprint that can accommodate future expansion. CBP field office and port officials stated that they support this plan. In another example, CBP officials at headquarters and in the field reported participating in binational working groups in an effort to address the infrastructure limitations of ports of entry along the southwest border. For example, senior CBP officials reported participating in the U.S.-Mexico Joint Working Committee to develop regional master plans to better ensure the development of a well-coordinated land transportation and infrastructure planning process along the border. CBP's workload staffing model—CBP's primary tool for determining the number of CBP officers needed at the nation's air, land, and sea ports—found that additional CBP officers are needed to meet CBP's mission requirements. CBP submitted the most recent version of its workload staffing model to Congress in response to language in the conference and committee reports for the fiscal year 2012 DHS appropriations. According to CBP documents submitted to Congress, the workload staffing model found that 3,811 additional CBP officers are needed to meet CBP's mission requirements in fiscal year 2014. In addition, CBP field and port officials at three of the six crossings we visited reported having insufficient staff to process commercial traffic.
Specifically, CBP field office and port officials reported insufficient staff at the World Trade Bridge, Columbia Solidarity Bridge, and Mariposa crossings and noted that insufficient staff at these crossings contributed to commercial vehicle wait times and reduced their ability to conduct secondary inspections, among other effects. Officials at the remaining three crossings—Otay Mesa, Bridge of the Americas, and Ysleta—reported having a sufficient number of staff to process commercial traffic. However, senior OFO headquarters officials reported that all southwest border land ports of entry require additional staff to perform at optimal levels. CBP headquarters and field office officials cited efforts to mitigate the effect of reported staffing shortages on ports' ability to process commercial vehicle traffic. CBP officials reported that these staffing shortages, caused in part by budget constraints and the time needed to train and assign new CBP officers, challenged their ability to increase the numbers of officers at the ports of entry. Specifically, CBP officials reported that since fiscal year 2009, CBP has not received sufficient funding to hire the number of CBP officers that it requires at land ports of entry. In response to budgetary constraints, CBP headquarters officials reported working to identify alternative funding strategies as well as reviewing user fees to ensure they effectively support operations. For example, DHS's fiscal year 2014 congressional budget request included a proposed increase of 1,877 fee-funded full-time-equivalent positions in addition to a funding increase of approximately $210 million for 1,600 additional CBP officers. In response to staffing shortages related to the length of time it takes for new CBP officers to complete required training and to be available for duty at their assigned ports of entry, CBP headquarters officials reported considering the extent to which new CBP officers have completed their training and are available for duty when allocating staff. They further reported actively working to adjust staff allocations across locations to better ensure that staffing levels are matched to areas of greatest need. For example, a senior OFO official reported prioritizing allocations to field offices with the highest discrepancy between current staffing levels and workload staffing model results when developing the fiscal year 2012 annual staffing allocation. Finally, port officials at all six crossings we visited reported using overtime to mitigate the effect of any staffing shortages on ports' capacity to process commercial traffic. For example, port officials in El Paso said that using overtime pay was an effective and efficient solution to provide increased coverage to process commercial traffic during peak times on weekdays and on weekends. However, officials at the Otay Mesa port of entry noted that the availability of overtime funds has decreased because of budget constraints in recent years. In fiscal year 2013, CBP revised its process to allocate available CBP officers to its field offices, ports of entry, and border crossings. However, CBP has not yet documented this process or its methodology, including the factors and underlying rationale considered in making staff allocation decisions.
A senior official in CBP's Human Capital Division reported that CBP's most recent staff allocation process consisted of the following six steps: (1) OFO's Human Capital Division obtained the workload staffing model's findings to determine the number of officers ideally needed to meet the expected workload; (2) Human Capital Division staff conducted a "gap analysis" by comparing the model's findings to current staff levels to identify the locations with the greatest gap between current staff levels and the staff levels identified by the workload staffing model; (3) Human Capital Division staff drafted a proposed staff allocation that realigned staff to those field offices with the greatest gap; (4) OFO leadership made adjustments to the proposal based on institutional priorities including mission, priorities, and threats before approving the allocation; (5) on receiving approval from leadership, OFO staff communicated the authorized staffing levels to each field office; and (6) the field offices then allocated their authorized staff to the individual ports of entry under their purview. However, this official explained that this process is not documented and there is no guidance clearly defining this methodology, the factors considered, or the rationale for making staff allocation decisions. OFO Human Capital officials acknowledged the need to document this process and stated that they had not yet done so because, historically, such decisions were made informally and the current, more formalized process is still evolving. In addition, these officials noted that the last fiscal year was the first time OFO used the process described above, and they planned to make further changes to the process within the next 2 fiscal years. Best practices for strategic workforce planning identified by GAO emphasize the importance of ensuring that the methodology underlying staffing decisions is well documented. Standards for Internal Control in the Federal Government also calls for clear documentation of policies and procedures that are readily available for examination. These standards state that such control activities are an integral part of an entity's accountability for stewardship of government resources and achieving effective results. Without documented policies and procedures, including the rationale and factors considered in allocating staff, OFO's staff allocation process lacks transparency and is therefore difficult for CBP officials or others to review and validate. As a result, CBP and its stakeholders do not have reasonable assurance that its staffing processes most effectively and efficiently allocate scarce resources to fulfill mission needs across ports. In fiscal year 2013, CBP identified 28 performance measures to assess and report on progress toward CBP's security and trade facilitation goals. Nine measures were selected by DHS as Government Performance and Results Act (GPRA) measures (these are also called strategic measures within the department); 15 management measures are used to inform agency decisions on program priorities and resource allocation, and to monitor progress and performance; and 4 operational measures are maintained by OFO to capture former GPRA measures that OFO continues to use internally. (CBP's fiscal year 2013 performance measures are listed in an appendix.)
The percent of cargo by value imported to the United States by participants in CBP trade partnership programs is a GPRA or strategic measure, and the percent increase in travelers to the United States enrolled in a Trusted Traveler program is a CBP operational measure. CBP's Trusted Traveler programs provide expedited travel for preapproved, low-risk travelers through dedicated lanes and kiosks. FAST is one of several such programs. DHS has stated that it would improve measurement of desired mission outcomes and the contribution of programs, activities, and resources to them. OFO and OA officials stated that CBP's existing performance measures imply that trade will be facilitated through increased participation in trade partnership programs rather than by directly measuring the desired outcomes. More specifically, OFO and OA officials stated that the measure percent of cargo by value imported to the United States by participants in CBP trade partnership programs implies that trade will be facilitated through participation in the programs, rather than directly measuring the desired outcomes of shorter wait times, for example. Similarly, OFO and OA officials told us that the measure percent increase in travelers to the United States enrolled in a Trusted Traveler program is not intended to capture the benefits to the program participants or trade facilitation, but, rather, is primarily an internal program measure that captures progress toward CBP's goal of growing enrollment in Trusted Traveler programs, including FAST. DHS and CBP officials stated that they have not developed more performance measures for trade facilitation primarily because key stakeholders, including DHS leadership and Congress, have not pushed for this and because trade facilitation measures are difficult to develop. DHS and CBP officials reported that they have more performance measures focused on security and enforcement because these have been more of a focus for stakeholders than trade facilitation. In addition, CBP officials report that they have not created outcome-oriented measures for trade facilitation because the results of their trade facilitation efforts are difficult to capture in one or two measures. OFO and OA officials told us that it can be hard to articulate trade facilitation to external stakeholders because trade facilitation means different things to different stakeholders, each with its own interests. However, these same concerns could apply to outcome-oriented measures for CBP's security and enforcement efforts, and CBP has developed an outcome-oriented measure in that area—the land border interdiction rate for major violations. OFO and OA officials told us that this measure is the single best outcome measure for security, though they note that it is limited to passenger vehicles. In addition, OMB guidance states that proxy measures that are closely tied to the desired outcome can be used to indirectly measure program outcomes when programs are difficult to measure because data are not available. Potential outcome-oriented measures or proxy measures for trade facilitation could include, for example, measures to determine the extent to which CBP trusted shipper programs have met their goals, such as the percentage of time FAST traffic waits a certain percentage less time than regular commercial traffic or the ratio of FAST to non-FAST referrals to secondary inspection.
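To make the proxy-measure idea concrete, the sketch below computes two of the candidate measures mentioned above from hypothetical crossing records: the share of observations in which FAST traffic waits at least some fixed percentage less than regular commercial traffic, and the ratio of FAST to non-FAST secondary-referral rates. The data values and the 25 percent benchmark are illustrative assumptions only.

```python
# Hypothetical hourly observations: (fast_wait_min, regular_wait_min)
wait_obs = [(20, 45), (30, 35), (15, 40), (25, 60)]

# Hypothetical referral and crossing counts
fast_referrals, fast_crossings = 40, 2_000
non_fast_referrals, non_fast_crossings = 500, 10_000

def pct_hours_fast_faster(observations, benchmark=0.25):
    """Share of observations in which FAST waits are at least `benchmark`
    (e.g., 25 percent) shorter than regular commercial waits."""
    hits = sum(1 for fast, reg in observations if fast <= reg * (1 - benchmark))
    return 100.0 * hits / len(observations)

def referral_rate_ratio():
    """Ratio of FAST to non-FAST secondary-referral rates (lower favors FAST)."""
    return (fast_referrals / fast_crossings) / (non_fast_referrals / non_fast_crossings)

print(pct_hours_fast_faster(wait_obs))   # -> 75.0
print(round(referral_rate_ratio(), 2))   # -> 0.4
```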
In the absence of outcome-oriented or proxy measures, CBP's ability to identify and publicly report the impact of the agency's trade facilitation programs is limited. OFO and OA officials reported that as a result of not having more outcome-oriented measures for trade facilitation, the agency is less prepared to identify and report the positive impact of its trade facilitation efforts to the public, and industry representatives we met with noted a lack of information on the impact of CBP's trade facilitation efforts. CBP officials at headquarters and in the field have stated that participation in the FAST program has resulted in shorter wait times for program participants, but Border Trade Alliance officials and industry representatives at two of the roundtables we held raised concerns that FAST program participants were not receiving these benefits and were unclear about the impact of this particular trade facilitation program. OMB and our guidance recommend the use of outcome-oriented performance measures to promote accountability for results. Our guidance states that leading organizations promote accountability by establishing results-oriented outcome goals and corresponding measures by which to gauge progress. This guidance further states that measuring performance allows organizations to track the progress they are making toward their goals and gives managers critical information on which to base decisions for improving their progress. More specifically, we identified establishing performance goals and measures to better translate activities into results as a useful practice to enhance performance management and measurement processes, and we have previously issued guidance that agencies should identify and use outcome goals wherever possible to reflect the results of their activities. In addition, OMB guidance encourages the use of outcome measures because they are much more meaningful to the public. In the absence of meaningful outcome-oriented performance measures, or proxy measures, for trade facilitation—such as measures capturing whether FAST participants are receiving their intended benefits of quicker processing time and fewer inspections—it is difficult for CBP, decision makers, and other stakeholders to gauge CBP's progress in achieving the agency's stated trade facilitation goals. Trade between Mexico and the United States is important to the United States' economic health, and the value of goods imported into the United States from Mexico is on the rise. The length of time commercial vehicles wait in line at the border affects this trade activity. However, CBP's current wait time data are unreliable, limiting the extent to which CBP can use wait time data across border crossings to inform management decisions about infrastructure investment and staffing allocation and the extent to which industry stakeholders can rely on publicly reported data. Taking steps to help CBP port officials implement CBP's existing mechanisms for collecting wait time data, consistent with agency guidance, could improve data reliability and usefulness for these purposes. Moreover, assessing the feasibility of options for automating wait time data consistent with program management standards could help CBP consider ways to reduce port officials' current burden in manually collecting the data and provide CBP with more reliable and comprehensive data it can use to identify and address challenges to trade facilitation.
CBP’s ability to meet its mission goals—including both security and trade facilitation—are affected by its allocation of staff across the southwest border, among other things. In the absence of transparency about the methodology and process by which CBP allocates staff resources across ports of entry, it is difficult for CBP and others to evaluate whether existing staff have been allocated to most effectively address CBP’s mission needs. Documenting CBP’s staff allocation methodology in accordance with best practices for strategic workforce planning could help better position CBP to ensure that it is allocating its staff efficiently and effectively across ports of entry and border crossings. In addition, it is difficult for CBP or others to gauge the agency’s progress in meeting its trade facilitation goal because CBP does not have outcome-oriented measures for its trade facilitation efforts. Developing outcome-oriented, or proxy, performance measures that capture the impact of CBP’s trade facilitation efforts, consistent with OMB and our guidance, could help CBP officials, Congress, and other stakeholders better assess the effectiveness of CBP’s trade facilitation programs in supporting the agency’s overall mission and goals. We recommend that the Commissioner of CBP take four actions. To improve the usefulness of southwest border crossing wait time data for informing public and management decisions, the Commissioner of CBP should take the following two actions: Identify and carry out steps that can be taken to help CBP port officials overcome challenges to consistent implementation of existing wait time estimation methodologies. Steps for ensuring consistent implementation of these methodologies could include, for example, implementing the fiscal year 2008 WHTI report recommendations to use closed-circuit television cameras to measure wait time in real time and provide a standardized measurement and validation tool. In consultation with FHWA and state DOTs, assess the feasibility of replacing current methods of manually calculating wait times with automated methods, which could include assessing all of the associated costs and benefits, options for how the agency will use and publicly report the results of automated data collection, the potential trade-offs associated with moving to this new system, and other factors such as those influencing the possible expansion of existing automation efforts to the 34 other locations that currently report wait times but have no automation projects under way. To better ensure that CBP’s OFO’s staffing processes are transparent and to help ensure CBP can demonstrate that these resource decisions have effectively addressed CBP’s mission needs, we recommend that the Commissioner of CBP document the methodology and process OFO uses to allocate staff to land ports of entry on the southwest border, including the rationales and factors considered in making these decisions. To facilitate transparency and performance accountability for its trade facilitation programs and meeting CBP’s goal of balancing its trade and security missions, we recommend that the Commissioner of CBP develop outcome-oriented performance measures or proxy measures to capture the impact of CBP’s trade facilitation efforts, such as measures to determine the extent to which CBP trusted shipper programs have met their goals. We provided a draft of this report to DHS, GSA, DOT, the Department of Commerce, and the Department of Health and Human Services for their review and comment. 
GSA, DOT, the Department of Commerce, and the Department of Health and Human Services did not have any comments on the draft of the report. DHS provided written comments, which are summarized below and reproduced in full in appendix VII. In the written comments, DHS concurred with our four recommendations and discussed actions to address them. However, the actions DHS identified will not address the intent of one of these recommendations. DHS also provided technical comments, which we incorporated, as appropriate. DHS agreed with our first recommendation that CBP identify and carry out steps to help CBP port officials overcome challenges to consistent implementation of existing wait time estimation methodologies. In written comments, DHS officials explained that, if funding is available, CBP has a goal to automate the estimation and reporting of border wait times. To this end, they plan to establish an internal and external stakeholder group and identify the best candidate technologies to pilot. These steps will help further CBP's longer-term plans to automate wait time data collection, but they do not address the intent of our recommendation that CBP take steps to help port officials more consistently implement existing manual wait time estimation methodologies. DHS agreed with our second recommendation that CBP assess the feasibility of replacing current methods of manually calculating wait times with automated methods. In commenting on a draft of this report, DHS officials noted that CBP has taken some steps to assess options for automating wait time data collection at northern and southern land border crossings and provided us with supplemental documents that included rough cost estimates for piloting, deploying, and maintaining automation technology. Based on this information, DHS requested that we consider this recommendation closed. While DHS has taken positive initial steps to address this recommendation, DHS should complete additional feasibility analysis to fully address the intent of our recommendation and better position the agency to decide whether and how to automate data collection. For example, DHS's written comments stated that the feasibility of financing, funding, and operating automation technology is "reduced." More detailed and comprehensive cost analysis—such as estimating and comparing the costs of different technology solutions and analyzing potential funding sources—could help CBP assess the feasibility of wait time automation. In addition, DHS officials noted in their written comments that CBP has not yet identified the best technologies to pilot. Determining the best technology, if any, for use at each border crossing could influence the overall feasibility of planned automation across southwest border land ports of entry. With regard to our third recommendation that CBP document the methodology and process OFO uses to allocate staff to land ports of entry, DHS agreed and stated that CBP will develop and document a standardized process for allocating CBP officers that includes assumptions, factors, and concerns to guide the decision-making process. If implemented effectively, these actions should meet the intent of our recommendation.
With regard to our fourth recommendation that CBP develop outcome-oriented performance measures or proxy measures to capture the impact of CBP's trade facilitation efforts, DHS concurred and stated that it plans to create a team of subject matter experts from OFO trade-related programs to identify at least two outcome measures or acceptable proxy measures for trade facilitation. DHS also noted plans to collaborate with private sector entities in order to identify the metrics of greatest concern. If implemented effectively, these actions should meet the intent of our recommendation. We are sending copies of this report to the Departments of Homeland Security, Commerce, Transportation, and Health and Human Services; and the General Services Administration. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8777, or at [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VIII. To determine what is known about the economic impact of wait time on cross-border commerce, we identified and analyzed relevant studies. We searched academic, government, and other literature published from January 1, 2000, to June 30, 2012, to capture a wide array of recently published literature, and asked all relevant interviewees—including officials with the Department of Homeland Security's (DHS) U.S. Customs and Border Protection (CBP), the Department of Commerce, trade associations, private industry, and academics—whether they were aware of any such studies. We reviewed over 100 identified studies and narrowed our focus to the 6 studies that determined an economic impact of commercial vehicle wait times on the southwest border. We interviewed officials at the organizations that sponsored each of the qualifying studies to better understand the studies' methodologies and limitations. We then analyzed the studies by comparing their methodologies with best practices for economic impact studies, including cost-benefit criteria in Office of Management and Budget (OMB) Circular A-94, and by comparing and contrasting the studies' scopes, methodologies, and findings. An April 2013 CBP-commissioned study found that reduced waits at select border crossings would result in benefits to the U.S. economy in terms of increased gross domestic product (GDP) and jobs. This study, conducted by the National Center for Risk and Economic Analysis of Terrorism, estimates the benefit of adding one CBP officer at select land and air border crossings—assuming these added staff would each open one additional primary inspection lane—in terms of reduced waits and resulting benefits to the U.S. economy. The study found, for example, that at seven of the biggest southwest border commercial vehicle crossings, having one additional staff member open one additional primary inspection lane during the 8 most congested hours of the day would result in wait time reductions ranging from 1.5 minutes to 7.2 minutes for commercial vehicle traffic during those hours. The study then estimated that over the course of a year, these wait time reductions for commercial vehicles at these seven crossings would lead to direct economic benefits of $915,000 in GDP (in 2011 dollars) and 9.3 additional jobs.
CBP officials report that they plan to use the results of this study to demonstrate the benefit of adding CBP officers. These officials report that CBP has typically demonstrated its benefit in terms of number of seizures and arrests, for example, but this study will permit CBP to show an officer’s trickledown effect on the U.S. economy. However, we identified three limitations to consider regarding the reported economic benefits. First, this study estimated the benefits of this change but not the costs. CBP officials state that the study was not intended to be a cost-benefit analysis and noted that the types of costs that would have to be considered in a cost-benefit analysis include staff salaries, inspection booth and lane maintenance, and equipment. Second, the study assumes that one additional primary processing lane is available to be opened during the busiest 8 hours of the day. However, CBP officials report that at some crossings they already open all primary inspection lanes during peak hours. Therefore, this assumption is unrealistic or would require CBP investment in additional primary inspection lanes. Third, the study used CBP’s reported wait time data for fiscal year 2012, which, as described earlier in this report, we determined are not sufficiently reliable for analysis across crossings, among other things. Officials who conducted this study told us that they did not test the reliability of CBP’s wait time data but found the basic data pattern plausible and therefore determined that the data were sufficiently reliable for their analysis. Five other studies, one of which was commissioned by DHS, have quantified the effects of commercial vehicle wait times on cross-border commerce and also found evidence of lost revenue and jobs. The studies’ findings are not comparable because of their differing scopes and methodologies, but they estimate direct impacts ranging from $452 million in the San Diego area to $1.9 billion across five cities with major border crossings. All five studies have limitations that may have led to an overstatement of the economic impacts of wait times. In particular, four of these studies used economic multipliers to quantify the effect of wait time delays on the U.S. economy. As stated in OMB Circular A-94, these secondary effects should not be used when measuring social benefits or costs. Rather, the reported effects should be limited to direct effects only. Therefore, we included only the direct impacts in our summary of these studies. The five studies’ findings and limitations are summarized in table 2. This report addresses the following questions: To what extent are CBP wait time data reliable for public reporting and informing CBP decisions on staffing and infrastructure investments? To what extent has CBP identified infrastructure and staffing needed to process current commercial traffic volume at southwest border crossings with high traffic volume? To what extent do CBP performance measures address progress toward its goal of facilitating trade? This report also presents information on the results of studies that have quantified the economic impact of commercial vehicle wait times on cross-border commerce. This information, including the methodology used to identify these studies, is presented in appendix I. 
To inform our analysis of the first and second objectives, we visited six crossings at four land ports of entry: Bridge of the Americas and Ysleta at El Paso, Texas; World Trade Bridge and Columbia Solidarity Bridge at Laredo, Texas; Mariposa at Nogales, Arizona; and Otay Mesa near San Diego, California. We selected these crossings based on their commercial traffic volume and geographic diversity and to include crossings with a mix of recent or ongoing infrastructure modernization projects. At each location, we interviewed CBP management, toured the facility, and convened a roundtable of local industry representatives and local government officials. To obtain a range of perspectives on commercial vehicle traffic at southwest border crossings, we met with representatives of 21 companies and associations (who were identified to us as knowledgeable stakeholders) representing industries that rely on cross-border commerce (including customs brokers, trucking companies, and distributors), as well as bridge directors and representatives of four local government entities (the Mayors of El Paso and San Diego, the Laredo City Manager, and representatives of the San Diego Association of Governments) in all four cities we visited or by teleconference. Because we focused on four land ports of entry with six commercial vehicle crossings, our findings are not generalizable to the entire southwest border. However, the ports we visited accounted for, in total, approximately 70 percent of the commercial vehicle crossings into the United States from Mexico from fiscal year 2008 through July 2012. Over the course of our work, we also interviewed officials from agencies involved in securing the border and facilitating trade at land ports of entry, including officials from CBP's Office of Administration and Office of Field Operations, the General Services Administration (GSA), the Department of Transportation's (DOT) Federal Highway Administration and Federal Motor Carrier Safety Administration, the Department of Commerce (Commerce), the Department of Health and Human Services' Food and Drug Administration, and the Department of State. We also interviewed other stakeholders, including officials from the Mexican Foreign Ministry, academics, and representatives of national trade associations, including the American Trucking Associations and the Border Trade Alliance, to obtain a broader range of perspectives on commercial vehicle traffic at southwest border crossings. To address the first objective, we reviewed and analyzed CBP's policies and guidance for calculating and reporting wait times to determine the source of these data and the agency's control over these data. We interviewed CBP headquarters officials about the wait time data, including data quality, data entry protocols, quality assurance procedures, and any steps taken to improve the reliability of these data. We also interviewed officials at the six crossings we visited about how they collect and report wait time data. We reviewed CBP documents evaluating the quality of CBP's wait time data on the southwest border, including a fiscal year 2008 CBP Commercial Wait Times Analysis Report. We compared documentary and testimonial evidence of how wait times are currently being calculated by officials at land ports of entry on the southwest border against CBP policies and guidance to identify any discrepancies. We reviewed CBP's data and reports on wait times for the six crossings for fiscal year 2012.
In addition, to obtain non-CBP perspectives on CBP's methods for calculating wait times and the quality and usefulness of CBP's wait time data, we interviewed DOT officials, local officials, industry groups, and a Mexican official. We compared CBP's policies and procedures for collecting and maintaining wait time data with criteria in Standards for Internal Control in the Federal Government. Based on this assessment, we determined that the usefulness of the wait time data is limited and that the data are not sufficiently reliable for certain purposes, such as comparisons across ports. To determine how CBP officials use the agency's wait time data to inform management decisions, we analyzed CBP guidance, policy, and other documents as well as interviewed CBP officials from headquarters and the six crossings to determine the extent to which wait times are a factor in CBP staff allocation decisions and infrastructure investment requests and decisions. To determine the status of DOT's pilot projects to automate wait time data at the southwest border, we interviewed officials at DOT's Federal Highway Administration, the Texas Department of Transportation, and Texas A&M University and reviewed documentary evidence they provided. We compared evidence of CBP's stated plans to automate wait times with criteria on standards for program management. To address the second objective, we reviewed and analyzed CBP and GSA assessments of land port of entry condition and capacity, such as CBP's Strategic Resource Assessments and GSA's BorderWizard™ reports. We also interviewed CBP and GSA officials about infrastructure needs at land border crossings and how these needs are identified and prioritized. We reviewed documentation of CBP's workload staffing model, which is used to determine staff needs at land ports of entry, and interviewed CBP officials about the agency's staff allocation policies and processes and compared these with criteria in our previous work on human capital management and Standards for Internal Control in the Federal Government. In addition, we conducted an analysis of CBP's hourly data on traffic volume and number of primary lanes open at the six selected crossings to determine the extent to which CBP has utilized primary lanes for commercial vehicle traffic from fiscal years 2008 through 2012. We selected this 5-year period to provide a sufficiently long time period for trend analysis. As our analysis focused on identifying trends in routine commercial vehicle traffic by crossing, we included both Free and Secure Trade (FAST) and non-FAST traffic volume and lanes but excluded hazardous materials traffic. To ensure data reliability, we did not include any records on traffic volume or lanes open that fell outside CBP's reported hours of operation. In addition, within the reported hours of operation, we included the data in our analysis for any given hour if CBP provided records for both traffic volume and lanes open. We conducted this analysis for the six crossings we visited; thus our findings are not generalizable to the entire southwest border. However, these six crossings processed approximately 70 percent of commercial vehicle traffic coming into the United States from Mexico from fiscal year 2008 through July 2012.
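As an illustration of the data screening rules described above, the following is a minimal sketch in Python, assuming hypothetical hourly records for a single crossing; the hours of operation, field names, and sample values are illustrative assumptions rather than CBP's actual data layout.

# Illustrative sketch only: applying the inclusion rules described above to
# hypothetical hourly records for one crossing.
OPEN_HOUR, CLOSE_HOUR = 6, 22  # assumed reported hours of operation

# Each record: hour of day, trucks processed (None if missing), lanes open (None if missing)
records = [
    {"hour": 5,  "trucks": 30,   "lanes": 2},    # outside hours of operation: exclude
    {"hour": 9,  "trucks": 250,  "lanes": 4},    # complete record within hours: keep
    {"hour": 10, "trucks": None, "lanes": 4},    # missing traffic volume: exclude
    {"hour": 14, "trucks": 180,  "lanes": 0},    # traffic with no lanes open: flag as error
]

def within_hours(record):
    return OPEN_HOUR <= record["hour"] < CLOSE_HOUR

# Keep an hour only if it falls within reported hours of operation and has
# records for both traffic volume and lanes open.
complete = [r for r in records
            if within_hours(r) and r["trucks"] is not None and r["lanes"] is not None]

# Flag obvious errors, such as traffic reportedly processed when no lanes were open.
errors = [r for r in complete if r["trucks"] > 0 and r["lanes"] == 0]
usable = [r for r in complete if r not in errors]

print(f"Usable hourly records: {len(usable)}; flagged as obvious errors: {len(errors)}")

Scaled up to 5 years of hourly records for six crossings, checks of this kind correspond to the electronic tests for missing data, out-of-hours records, and obvious errors described below.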
To assess the reliability of these data, we reviewed relevant documentation; interviewed knowledgeable agency officials; and electronically tested for missing data during hours of operation, outlier records outside of hours of operation, and obvious errors (such as data records showing traffic being processed when no lanes were reportedly open). We also reviewed related internal controls and traced a selection of data to source files. We determined that the data were sufficiently reliable for the purposes of our report. In addition, to address the second objective, we asked CBP officials at headquarters, field offices, and ports of entry about (1) the sufficiency of staffing levels and infrastructure capacity to process the current volume of commercial traffic at the six crossings we visited, (2) CBP assessment and consideration of any staffing or infrastructure gaps when making resource allocation decisions, (3) CBP actions and plans to address any of these gaps, and (4) any challenges to effectively responding to any gaps that CBP identified. We also discussed CBP processes for determining staff needs at land ports of entry and allocating staff to the ports of entry. We then compared CBP’s staffing policies and processes with criteria in our previous work on human capital management and Standards for Internal Control in the Federal Government. In addition, we discussed CBP’s workload staffing model and how it has been used to inform staffing processes with CBP officials responsible for the model. In addition, we interviewed relevant GSA, state, and local officials, as well as nongovernmental stakeholders regarding any coordinated efforts to identify, prioritize, and implement infrastructure improvements at land ports of entry on the southwest border. To address the third objective, we reviewed documentation of CBP’s fiscal year 2013 performance goals, measures, and reports. We then assessed CBP’s measures against criteria in OMB Circular No. A-11 and useful practices GAO previously identified to enhance performance management and measurement processes to determine the extent to which CBP’s existing performance measures capture progress toward goals and incorporate successful practices. We also interviewed relevant DHS and CBP officials about CBP’s current performance measures, the adequacy of these measures, their perspectives on the balance between the agency’s security and trade facilitation goals, and the extent to which CBP uses its wait time data to measure progress. We also identified studies that quantified the economic impact of commercial vehicle wait times on cross-border commerce by searching literature and asking relevant interviewees whether they were aware of any such studies. We reviewed over 100 identified studies and analyzed the six studies that determined an economic impact of commercial vehicle wait times on the southwest border. A more detailed description of our methodology and the results of these studies are presented in appendix I. We conducted this performance audit from July 2012 to July 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
This appendix describes CBP's reported process for identifying and prioritizing its infrastructure investment needs at land ports of entry on the northern and southwest borders. According to CBP documents, CBP identifies and prioritizes the infrastructure needs of land ports of entry through a six-part process that culminates in a 5-year plan. The Department of Homeland Security Appropriations Act for fiscal year 2009 required, beginning in fiscal year 2010 and every year thereafter, that CBP's annual budget submission for construction include, in consultation with GSA, a detailed 5-year plan for all federal land port of entry projects with a yearly update of total projected future funding needs. This process, known as the capital investment plan (CIP), includes gathering data through Strategic Resource Assessments (SRA), scoring identified needs at each land port of entry using data and information gathered from the SRA, conducting a sensitivity analysis on the initial ranking of needs, assessing projects' feasibility and risk, using the information gathered from the previous steps in the process to develop and issue CBP's 5-year capital investment plan, and assessing the CIP process. Each step is described in further detail below. 1. Strategic Resource Assessments According to CBP, the first stage in the CIP is to conduct SRAs, which are infrastructure needs assessments intended to gather and present data to support the prioritization of CBP's facility projects on a national level. The SRA includes internal and external stakeholder input, workload and personnel forecasts, space capacity analyses, architectural evaluation of port facilities, and recommended options to meet current and future space needs. 2. Capital Project Scoring Once CBP has completed the SRAs, the agency scores the infrastructure needs of each land port of entry by the criticality of its need for modernization using the data collected by the SRA. This score is calculated by combining the data collected in the SRA with 60 distinct criteria within four predefined categories (see table 3), adjusted to reflect the relative weight of each category. For example, factors CBP considers under the Personnel and Workload Growth category include current and projected commercial vehicle traffic volume as well as the current peak and projected number of inspections personnel over the next 10-year period. Table 4 summarizes the priority rank assigned to the SRA-identified infrastructure needs at land ports of entry on the southwest border that process commercial vehicle traffic. The crossings are listed below in order of their ranking relative to CBP's entire portfolio of land ports of entry on the northern and southwest borders, including facilities that process bus, commercial, passenger, pedestrian, and rail traffic. 3. Sensitivity Analysis CBP applies a sensitivity analysis to the initial ranking to determine if the results should consider factors unaccounted for through the standard SRA process, such as any unique regional conditions; bilateral planning and international partner interests; or interests of other U.S. federal, state, or local agencies.
According to CBP officials, recent examples of factors CBP has considered include the identification of new manufacturing developments immediately adjacent to an existing land port of entry facility that would increase the demand for commercial processing capacity, facility damage resulting from floods that occurred after the SRA was completed in 2006, and the development of new land port of entry facility proposals in the same transportation region as an existing facility. CBP officials report that this information helps CBP identify additional drivers, constraints, and legislative mandates that may change the critical needs ranking. 4. Risk and Feasibility Assessments In this phase, CBP coordinates with key project stakeholders such as GSA to evaluate the feasibility and risk associated with project implementation, including environmental, cultural, and historic preservation requirements as well as land acquisition requirements. Additionally, according to senior CBP officials, CBP considers the likelihood of obtaining the necessary resources to fund the proposed project. 5. 5-Year Capital Investment Plan Once CBP has taken the previous steps, it uses the information and analyses to develop its capital investment plan, in coordination with GSA. CBP and GSA update the capital investment plan annually, taking into account any changes in DHS's mission and strategy, the changing conditions at land ports of entry, and any other factors discovered in the course of projects already under way. With each update, CBP identifies which projects are of highest priority. GSA then works with CBP to identify which projects may be considered for near-term design and construction funding, which require an initial or updated feasibility study, or which require further evaluation to account for issues such as environmental and local community concerns. 6. Assessment of the CIP Methodology In response to expected budget constraints and as a general revalidation of its existing planning cycle, according to CBP Office of Administration officials and CBP documents, CBP is revisiting the process it uses to develop the 5-year plan. Although the assessment is still in development, CBP aims to better incorporate up-front stakeholder involvement, place additional emphasis on state and local government-driven master planning fed by regional trend analyses, adopt a consistent and comprehensive communications approach, evaluate alternative funding mechanisms, assess broader programmatic needs, and target high-impact and lower-cost investments. The tables in this appendix summarize infrastructure improvement projects that CBP and GSA officials reported were completed from fiscal years 2008 through 2012 at southwest border land ports of entry that process commercial traffic, as well as infrastructure improvement projects GSA and CBP reported to be ongoing or in planning or design phases as of May 2013. GSA's Federal Buildings Fund included $564 million for land port of entry infrastructure improvement projects in fiscal years 2008 through 2010 and none in fiscal years 2011 and 2012. In addition, the American Recovery and Reinvestment Act (ARRA) of 2009 allocated $300 million for GSA-owned land ports of entry, which is being used to provide design or construction funds to seven new or ongoing capital projects.
CBP officials reported that the completed projects presented in table 5 cost a total of approximately $115 million and estimated that the ongoing and planned projects to renovate these land ports of entry, presented in tables 6 and 7, will cost approximately $370 million. Table 5 summarizes the five infrastructure improvement projects GSA completed at southwest border land ports of entry that process commercial traffic in fiscal years 2008 through 2012. Three of these projects were at crossings we visited—World Trade Bridge in Laredo, Texas; Ysleta in El Paso, Texas; and Otay Mesa near San Diego, California. Table 6 summarizes the three ongoing GSA infrastructure improvement projects at southwest border land ports of entry that process commercial traffic as of May 2013. Table 7 summarizes the one planned infrastructure improvement project at a southwest border land port of entry that processes commercial traffic. This appendix provides additional information on the average hourly traffic volume and average hourly percentage of lanes open per month at selected crossings for fiscal years 2008 through 2012. Table 8 describes, for each of the six selected land border crossings on the southwest border that process commercial vehicle traffic, (1) the year the crossing was built and last renovated and (2) the number of primary inspection lanes for commercial vehicles in fiscal years 2008 through 2012. Figures 4 to 9 illustrate the layout of five of the six selected crossings and the primary inspection lanes of the remaining crossing, for which CBP was not able to provide an aerial photo. Tables 9 through 14 provide the average hourly traffic volume per month and the average hourly percentage of lanes open per month at each of the six selected crossings that process commercial vehicle traffic on the southwest border for fiscal years 2008 through 2012. Figures 10 through 15 graphically depict the average hourly traffic volume and average hourly percentage of lanes open per month for each of the six selected crossings. Table 15 lists the 28 performance measures DHS and CBP are using in fiscal year 2013 to assess and report on CBP progress toward the agency's security and trade facilitation goals. These CBP-focused performance measures include the following: Nine measures selected by DHS as Government Performance and Results Act (GPRA) measures. (These are also called strategic measures within the department.) These measures are aligned with the goals and objectives in DHS's Quadrennial Homeland Security Review Report and publicly reported to communicate achievement of these strategic goals and objectives. Fifteen management measures that are not reported publicly but rather inform internal CBP decisions on program priorities and resource allocation and are used to monitor progress and performance. CBP officials report, for example, that these measures are used in crafting the department's budget justification. Four CBP Office of Field Operations (OFO) operational measures that capture former GPRA measures that OFO uses internally, for example, to evaluate senior officials' performance. In addition to the contact named above, Lacinda Ayers, Assistant Director; Claudia Becker; Sarah Kaczmarek; and Michael Lenington made key contributions to this report. Also contributing to this report were Pedro Almoguera, Frances Cook, Juan Gobel, Eric Hauswirth, Phil Herr, Stan Kostyla, Jessica Orr, Minette Richardson, and Loren Yager.
Trade with Mexico is important to the United States' economy. Most of this trade crosses the border by truck, and studies have shown that long waits at border crossings can negatively affect the U.S. economy. CBP is responsible for securing U.S. borders at ports of entry to prevent illegal entry of persons and contraband while also facilitating legitimate trade and travel. GAO was asked to examine CBP data on and actions taken to address wait times at southwest border crossings. This report addresses the extent to which (1) CBP wait time data are reliable for public reporting and informing CBP decisions, (2) CBP has identified infrastructure and staffing needed to process current commercial traffic volumes, and (3) CBP performance measures assess progress toward its trade facilitation goal. GAO assessed the reliability of CBP's wait time data; visited six land border crossings (not generalizable, but selected largely for high traffic volume); analyzed CBP documentation, including needs assessments; and interviewed stakeholders and CBP officials. Within the Department of Homeland Security (DHS), U.S. Customs and Border Protection's (CBP) data on commercial vehicle wait times--the time it takes to travel from the end of the queue to the CBP primary inspection point at land border crossings--are unreliable for public reporting and CBP management decisions across border crossings. These data--which are collected manually by CBP officers--are unreliable because CBP officers inconsistently implement an approved data collection methodology, and the methodologies used vary by crossing. For example, five of the six crossings GAO visited require observation of the end of the queue to estimate wait times, but officials at these crossings reported the lines extended beyond their view at times. As a result, these data are generally not used by the private sector and are of limited usefulness for CBP management decisions on staffing and infrastructure investments. Determining and taking steps to help CBP officials overcome challenges to consistent implementation of existing methodologies could improve the reliability and usefulness of CBP's current wait time data. CBP officials have identified automated wait time data collection technology as the best way to improve data reliability. The Department of Transportation (DOT), in coordination with state DOTs and CBP, has ongoing pilot projects to use technology to gather more reliable wait time data at some border crossings. However, CBP has not assessed the feasibility of replacing current methods with automated data collection. Doing so, consistent with program management standards, could help CBP determine how to best improve data reliability. CBP officials report and analyses indicate infrastructure and staff needs, but documenting CBP's staff allocation process could improve transparency and facilitate review and validation by CBP and others. CBP officials and analyses identify needs for additional infrastructure--such as more lanes--at some crossings, and GAO analysis of CBP data on lane use generally supported agency views on the extent to which CBP opens lanes at the six crossings GAO visited. Further, GAO analysis of lane use and traffic volume data generally supported CBP officials' statements that they open and close primary inspection lanes in response to fluctuations in commercial traffic volume. 
CBP analyses identified a need for 3,811 additional officers, and CBP headquarters officials told GAO all southwest border ports require additional staff, but CBP field and port managers at three of six crossings GAO visited reported having sufficient staff. CBP human capital officials reported that they adjust staff allocations across locations to better ensure that staff levels match areas of greatest need, but CBP has not documented this process, and there is no guidance defining the methodology used or factors considered when allocating staff across ports. Documenting this process, consistent with internal control standards, could improve transparency, helping CBP and others to better ensure that scarce staff resources are effectively allocated to fulfill mission needs across ports. CBP does not have outcome-oriented performance measures to determine the extent to which the agency is facilitating trade. The Office of Management and Budget and GAO guidance recommend using outcome-oriented measures to promote accountability for results. In the absence of such measures, it is difficult for the agency or others to gauge CBP's progress in meeting its stated goal of facilitating trade. GAO recommends that CBP (1) determine and take steps to help ensure consistent implementation of existing wait time data collection methodologies, (2) assess the feasibility of replacing current methodologies with automated methods, (3) document its staff allocation process and rationale, and (4) develop outcome-oriented performance measures. DHS agreed with these four recommendations and identified steps to address them, although the planned actions will not address the intent of one.
SEC consists of a five-member Commission that oversees the agency's operations and provides final approval of staff interpretations of federal securities laws, proposals for new or amended rules to govern securities markets, and enforcement activities. The Commission, which is headed by the SEC Chair, oversees 5 divisions, 23 offices, and 11 regional offices. Figure 1 illustrates SEC's organizational structure. SEC's divisions and offices are organized by functional responsibility. Table 1 summarizes the roles and responsibilities of the one office and five divisions that primarily implement SEC's mission: the Office of Compliance Inspections and Examinations and the Divisions of Corporation Finance, Enforcement, Investment Management, Economic and Risk Analysis, and Trading and Markets. The mission-critical office and divisions are supported by other offices, such as the Office of Financial Management, the Office of Information Technology, and the Office of Human Resources. The Office of Information Technology supports SEC and its employees in all aspects of information technology (IT) and has overall management responsibility for SEC's IT program, including application development, infrastructure operations and engineering, user support, IT program management, capital planning, security, and enterprise architecture. SEC's Office of Human Resources has overall responsibility for the strategic management of SEC's personnel and assesses compliance with federal regulations for areas such as recruitment, retention, leadership and staff development, and performance management. However, certain divisions have internal human resource coordinators who liaise between the Office of Human Resources and their respective division heads. The Office of Information Technology and the Office of Human Resources report to SEC's Office of the Chief Operating Officer (COO), which in turn reports to the Office of the Chair. To carry out its mission, SEC employs staff with a range of skills and backgrounds, including attorneys, accountants, and economists. As of February 2016, SEC employed 4,674 staff. Of these, approximately 40 percent were attorneys, 21 percent were accountants or financial analysts, and 6 percent were examiners. The remaining 33 percent were other professional, technical, administrative, and clerical staff. From fiscal years 2013 through 2015, SEC hired 1,310 employees. To help SEC attract and retain qualified employees, Congress enacted the Investor and Capital Markets Fee Relief Act (Pay Parity Act) in 2002, which allowed SEC to implement a new compensation system with unique pay scales comparable to those of other federal financial regulators. SEC staff are represented by the National Treasury Employees Union (which we refer to in this report as the SEC employees' union). Effectively carrying out its regulatory responsibilities requires that SEC attract and retain a high-quality workforce. However, we and others have previously reported on the personnel management challenges SEC has faced in building and retaining such a workforce. These challenges have included establishing a constructive organizational culture and developing effective personnel management practices.
For example, a 2011 SEC Inspector General (IG) report found that the level of communication between the Office of Compliance Inspections and Examinations (OCIE) and the Division of Enforcement after a referral—that is, the extent to which noteworthy information from an examination was passed on to the Division of Enforcement for further investigation or action—was not always consistent in the regional offices, which the IG noted can hinder SEC’s ability to achieve its mission. In addition, a Boston Consulting Group report also noted in 2011 that SEC’s culture impaired communication and collaboration between divisions. According to the report, each division’s internal structure was tailored to division-specific needs, and SEC historically placed limited emphasis on using formalized mechanisms for cross-divisional collaboration. More recently, in July 2013, we found that SEC’s organizational culture hindered its ability to effectively fulfill its mission and identified a number of personnel management deficiencies. We also noted that organizations with more constructive cultures generally perform better and are more effective. Within constructive cultures, employees exhibit a stronger commitment to mission focus, accountability, coordination, and adaptability. We found a number of deficiencies in four areas related to SEC’s personnel management and made seven recommendations to help SEC address personnel management challenges: Workforce planning: We found that SEC had not developed a comprehensive workforce plan. In addition, we found that SEC had not developed a comprehensive management succession plan to fill agency supervisory positions. As a result, we recommended that the Chairman of SEC direct the COO and Office of Human Resources to (1) prioritize efforts to expeditiously develop a comprehensive workforce plan, including a succession plan, and establish time frames for implementation and mechanisms to help ensure that the plans are regularly updated; and (2) incorporate OPM guidance as they develop the workforce and succession plans by developing a formal action plan to identify and close competency gaps and fill supervisory positions and institute a fair and transparent process for identifying high-potential leaders from within the agency. Performance management: We found that while SEC had performance standards related to supervisors’ use of the performance management system, we did not identify specific mechanisms to monitor supervisors’ use of the system. In addition, we found no evidence that SEC had validated the system with its staff to help ensure its credibility. As a result, we recommended that the Chairman of SEC direct the COO and Office of Human Resources to (1) create mechanisms to monitor how supervisors use the performance management system to recognize and reward performance, provide meaningful feedback to staff, and effectively address unacceptable performance, for example, by requiring ongoing feedback discussions with higher-level supervisors; and (2) conduct periodic validations (with staff input) of the performance management system and make changes, as appropriate, based on these validations. Communication and collaboration: We found that although SEC had taken steps to improve intra-agency communication and collaboration, barriers still existed. 
In addition, we found that staff continued to identify barriers to effective communication and collaboration among the divisions, within the divisions, and between staff and management, contrary to collaborative best practices. As a result, we recommended that the Chairman of SEC direct the COO to (1) identify and implement incentives for all staff to support an environment of open communication and collaboration, such as setting formal expectations for its supervisors to foster such an environment, and recognizing and awarding exceptional teamwork efforts; and (2) explore communication and collaboration best practices and implement those that could benefit SEC. Human capital accountability: We found that SEC had not developed an accountability system to monitor and evaluate its personnel management programs and systems. As a result we recommended that the Chairman of SEC direct the COO and Office of Human Resources to prioritize and expedite efforts to develop and implement a system to monitor and evaluate personnel management activities, policies, and programs, including establishing and documenting the steps necessary to ensure completion of the system. SEC agreed with our recommendations and acknowledged that improvements could be made in its personnel management. We discuss the progress SEC has made toward addressing these recommendations throughout this report. In addition, appendix I summarizes the status of our 2013 recommendations, as of December 2016. OPM advocates the use of HCAAF, which is a set of tools and strategies available to federal agencies that assist officials in achieving results in personnel management programs. HCAAF is designed to guide the assessment of agency efforts, while allowing enough flexibility for agencies to tailor these efforts to their missions, plans, and budgets. The framework uses five standards for success, lists key questions to consider, and suggests performance indicators for measuring progress and results. The five standards for success are as follows: Strategic alignment: Agency strategies for human capital management are aligned with mission, goals, and organizational objectives and are integrated into its strategic plan and performance budget. Leadership and knowledge management: Agency leaders and managers effectively manage people, ensure continuity of leadership, sustain a learning environment that drives continuous improvement in performance, and provide a means to share critical knowledge across the organization. Results-oriented performance culture: The agency has a diverse, results-oriented, high-performing workforce and a performance management system that effectively differentiates between high and low levels of performance and links individual/team/unit performance to organizational goals and desired results. Talent management: The agency has closed gaps or deficiencies in skills, knowledge, and competencies for mission-critical occupations and made meaningful progress toward closing such gaps or deficiencies in all occupations used in the agency. Accountability: A data-driven, results-oriented planning and accountability system guides the agency’s decisions on human capital management. OPM has provided some updates to the HCAAF model to federal agencies and recently revised its personnel management regulations (the basis for HCAAF). 
According to OPM, the revised regulations, which are scheduled to go into effect April 11, 2017, should reinforce existing content and streamline the systems to make the framework more practical to use. The new framework, called the Human Capital Framework, will replace HCAAF and reduce the number of systems from five to four (strategic planning and alignment, talent management, performance culture, and evaluation). Based on responses to our surveys of all SEC employees, we determined that views of the agency’s organizational culture have generally improved since 2013. Organizational culture is the underlying assumptions, beliefs, values, attitudes, and expectations shared by an organization’s members that affect their behavior and the behavior of the organization as a whole. In July 2013, we reported that SEC’s organizational culture was not constructive and could hinder its ability to effectively fulfill its mission. We previously found that organizations with more constructive cultures generally perform better and are more effective at fulfilling their mission; within constructive cultures, employees also exhibit a stronger commitment to mission focus, accountability, coordination, and adaptability. Although we determined that employee views of SEC’s organizational culture have generally improved, employee perceptions about management’s efforts to improve cross-divisional collaboration remain low and have not changed since 2013. While some staff continue to raise concerns, generally employees’ views related to morale, trust, hierarchy, and risk aversion have improved since 2013. Our survey and other evidence indicate that both nonsupervisors’ and supervisors’ views of morale have improved since 2013 (see fig. 2). Based on our 2016 survey results, about 43 percent of nonsupervisors responded positively (strongly or somewhat agree) when asked whether employee morale is generally high most of the time, compared with about 30 percent in 2013. Also, about 51 percent of supervisors responded positively to this question in 2016, up from about 39 percent in 2013. Similarly, in response to a related question about morale, around 17 percent of nonsupervisors who completed our 2016 survey said that senior officers in their division or office worked (to a great extent) to make improvements in workforce morale, up from about 10 percent in 2013. Also in 2016, about 28 percent of supervisors responded to our survey that senior officers in their division or office worked to a great extent to make improvements in workforce morale, about a 10 percentage point increase from 18 percent in 2013. While our survey results suggest that morale has improved, many SEC employees we spoke with cited concerns related to favoritism and a lack of workplace diversity and promotion opportunities that resulted in low morale among some employees. Additionally, SEC employees from the mission-critical office and divisions provided 369 written responses to our survey questions that addressed challenges related to morale at SEC. For example, one employee described a work environment that promoted staff based on favoritism and an unwillingness by senior officers to make the necessary changes (including addressing low performing staff) to improve employee morale. (We discuss our assessment of SEC’s policies and practices related to promotions and addressing unacceptable performance later in this report.) However, our positive survey findings are generally consistent with the 2016 OPM Federal Employee Viewpoint Survey (FEVS). 
OPM estimated that SEC’s overall Global Satisfaction Index score—which measures employee satisfaction with job, pay, and their organization and is calculated based on FEVS results—increased from about 59 percent in 2012 to 77 percent in 2016. Although this index score may not directly correlate to employee perceptions of morale, it is an important indicator of employee views about whether the agency sufficiently values its staff. Our survey indicates that nonsupervisors’ views about an atmosphere of trust at SEC have improved, increasing from approximately 45 percent in 2013 to 55 percent in 2016. However, SEC employees from the mission- critical office and divisions provided 112 written responses to our survey, raising concerns about SEC’s atmosphere of distrust. In addition, 1 former employee described SEC’s promotion process as lacking transparency and favoring certain employees. This perceived lack of transparency and favoritism can erode trust between staff and management because it raises questions about the fairness of SEC’s promotion process. The views of supervisors on this issue improved slightly between 2013 and 2016, and positive responses from senior officers increased from about 81 percent in 2013 to 83 percent in 2016, as illustrated in figure 3. Similar to our findings, OPM recently found a slight increase in employee trust at SEC. OPM estimated that the engagement index for supervisors calculated based on FEVS results increased from an estimated 72 percent in 2012 to 73 percent in 2016. SEC scores are similar to those of the National Credit Union Administration (NCUA), which had an estimated score of 79 percent in both 2012 and 2015 according to OPM and a score identical to SEC’s in 2016 (73 percent). According to OPM, the government-wide average score was 65 percent in 2016. Our 2016 survey and other evidence indicate that both nonsupervisors’ and supervisors’ views about a hierarchical culture have improved. We reported in 2013 that some SEC staff described the agency’s culture as “hierarchical” (that is, decisions are made from the top with little if any solicitation of input from staff). In 2013, about 38 percent of nonsupervisory staff who responded to our survey strongly or somewhat agreed that they had a voice in decisions that affected them; in contrast, about 50 percent of nonsupervisory staff strongly or somewhat agreed with this statement in 2016 (see fig. 4). Supervisors’ positive responses to this statement also increased slightly—about 64 percent strongly or somewhat agreed in 2013, compared to about 67 percent in 2016. Similar to our survey results, when OPM surveyed SEC employees about whether they have a feeling of personal empowerment with respect to work processes, OPM estimated that 33 percent of staff had a positive attitude (agreed and strongly agreed) in 2012, which increased to 51 percent of staff in 2016. Despite significantly more positive survey results from nonsupervisory staff, senior officers held less positive views than they did in 2013; about 100 percent strongly or somewhat agreed in 2013, compared to about 84 percent in 2016. Regarding excessive risk aversion—that is, the condition in which an agency’s ability to function effectively is hindered by the fear of taking on risk—our survey and other evidence indicate that nonsupervisors’ and supervisors’ views have improved significantly since 2013. 
The percentage of survey respondents who agreed that fear of public scandals had made SEC overly cautious and risk averse fell from about 55 percent in 2013 to about 46 percent in 2016 for nonsupervisory staff, and from about 58 percent in 2013 to about 49 percent in 2016 for supervisory staff (see fig. 5). While our survey results show improvements, SEC employees from the mission-critical office and divisions provided 125 written survey comments related to concerns about risk-averse leaders. A few staff who provided written comments stated that some supervisors and peers fear bad publicity and are still risk-averse, which results in a refusal to admit wrongs or a dislike of being questioned by subordinates. One employee noted that the fear of the appearance of impropriety limits SEC’s ability to bring in industry experts. In addition, one former nonsupervisory employee described a work environment that did not encourage change or innovation. This employee stated that she would have been reprimanded for presenting new ideas. Furthermore, the percentage of staff responding to our survey who agreed that the fear of being wrong had made some senior officers reluctant to take a stand on important issues fell from about 47 percent in 2013 to about 38 percent in 2016 for nonsupervisory staff; from about 44 percent in 2013 to about 39 percent in 2016 for supervisors; and from about 23 percent to about 19 percent for senior officers (see fig. 6). Similarly, OPM estimated that employees’ views of SEC leaders improved from 2012 to 2016. OPM created an engagement index based on the FEVS results that measures employees’ views about the integrity of their leaders, including their perceptions of their leaders’ behavior related to communication and workforce motivation. OPM estimated that the engagement index score for SEC in this area increased from 49 percent in 2012 to 63 percent in 2016. While this index captures elements of leadership behaviors beyond top-down decision making and risk aversion, it reflects employees’ perceptions of how well senior leaders communicate the goals and priorities of the organization, among other things, and, as such, captures staff attitudes toward the perceived levels of hierarchy. When compared with the average government-wide score in 2016 of 53 percent, SEC’s estimated score of 63 percent is 10 percentage points higher. Our 2016 survey results indicate that SEC continues to operate in a compartmentalized manner. In 2013, we reported that such an environment can hinder SEC’s ability to effectively carry out its mission by limiting communication and collaboration among the divisions. For example, consistent with our 2013 findings, with the exception of the Division of Enforcement, at least one-third of nonsupervisory staff responding to our 2016 survey never contacted staff in other divisions or offices in headquarters in the past 12 months for work-related issues (see fig. 7). According to a number of staff who provided written comments to our 2016 survey, SEC is comprised of “silos”—that is, work is compartmentalized in each division or office, and little communication or collaboration occurs between the divisions. Several current and one former SEC employees we spoke with expressed similar views. For example, some employees cited a culture that was not supportive of cross-divisional communication. 
Of the 187 employees we interviewed, 78 identified siloed communication, which includes communication and collaboration between and within units and divisions, as an area where SEC needs to improve. Further, one former employee stated that communication between offices was only encouraged at the most senior levels in the agency. This employee also said that although the Commission required the breaking up of silos after the collapse of Bernard L. Madoff Investment Securities, LLC, this requirement was never implemented at the staff level. Additionally, we received 597 written responses to our survey questions (of a total of 1,947 responses) citing various issues and challenges related to communication and collaboration at SEC. Finally, supervisors we interviewed said that it is sometimes difficult to know whom to contact when they need to collaborate with a particular individual or group with whom they do not normally work. (We discuss our assessment of SEC's efforts to improve communication and collaboration later in this report.) OPM found improvement at SEC related to cross-divisional communication and collaboration. In OPM's 2016 FEVS, an estimated 60 percent of SEC employees responded positively (agree and strongly agree) when asked if managers promote communication among different work units (for example, about projects, goals, needed resources), compared to an estimated 47 percent in 2012. In addition, one former employee who had been a senior officer at SEC described a substantial improvement in communication after the 2007–2009 financial crisis.

SEC has developed mechanisms to monitor supervisors' use of its performance management system and developed and implemented a system to monitor and evaluate personnel management activities, consistent with our 2013 recommendations in these areas, but progress to improve personnel management in other areas has been limited. Since our prior review, SEC has developed mechanisms to monitor supervisors' use of the performance management system to provide performance feedback, reward strong performance, and address unacceptable performance. In addition, SEC has implemented an accountability system to evaluate personnel management activities, policies, and programs. However, SEC's actions to address personnel management practices in the areas of workforce and succession planning, performance management, and cross-divisional communication and collaboration have not been sufficient to address our 2013 recommendations. Further, we found that SEC has not consistently adhered to its controls over some aspects of hiring and promotions.

We found that SEC has implemented mechanisms to monitor how supervisors use the performance management system. According to OPM guidance, an effective performance management system provides mechanisms to monitor how supervisors use that system and provide feedback to staff. OPM does not provide specific requirements on the structure of these mechanisms, allowing agency discretion. In our July 2013 report, we recommended that SEC create mechanisms to monitor how supervisors use the performance management system to provide meaningful feedback to staff, recognize and reward performance, and effectively address unacceptable performance. Based on our review, SEC's efforts to implement mechanisms to monitor supervisors' use of the system in these areas are sufficient to address our 2013 recommendation. SEC has taken steps to monitor the performance feedback supervisors provide to employees.
While we did not independently assess the quality of feedback provided to employees, we examined SEC's process for monitoring feedback as well as our survey results that relate to the performance feedback employees receive. Consistent with OPM guidance, SEC monitors whether supervisors are providing the required feedback by reviewing a random sample of 5 percent of performance work plans each fiscal year; these work plans contain documentation that the supervisor provided the interim and final performance feedback to the employee. SEC's review of the random sample of performance work plans involves assessing the documentation to determine whether employees and supervisors have completed the formal performance appraisal process and whether the supervisor provided feedback to the employee. We found that while SEC's random selection of 5 percent of performance work plans produces results that are not generalizable, the methodology is sufficient to gauge general compliance with SEC policies. SEC's review of performance work plans for fiscal year 2014, the most recent review available, found that 96 percent of the sampled employees had discussed performance expectations with their supervisors, 92 percent had a midyear performance review, and 98 percent had received end-of-year feedback. While the fiscal year 2014 review did not make any new recommendations for improvement, SEC's fiscal year 2013 review of performance work plans, the first time SEC had conducted these reviews, made recommendations that included providing additional online resources to supervisors and uploading copies of performance work plans to an electronic OPM database. In response to the 2013 review, SEC provided supervisors with more online resources about the performance management process in SEC's shared database and continued to upload performance work plans into employees' electronic official personnel folders. In addition, SEC plans to continue these reviews annually, according to SEC officials.

Overall, survey responses by nonsupervisors related to feedback improved modestly since 2013 (see fig. 8). The percentage of nonsupervisory staff who agreed that their current direct supervisor provided useful and constructive feedback increased from about 65 percent to about 70 percent from 2013 to 2016. A few employees who provided written responses to our survey noted that supervisors in their workgroup provided constructive feedback. In addition, SEC's level of positive responses on performance feedback was similar to that of other agencies in OPM's 2016 FEVS. Specifically, when employees were asked whether they agreed with the statement "Discussions with my supervisor about my performance are worthwhile," OPM estimated that 66 percent of SEC respondents agreed or strongly agreed with the statement, compared to 63 percent of respondents government-wide.

While employees' views related to the quality of feedback provided have generally improved since 2013, some supervisors and staff we met with identified some areas of concern that are common across the government. For example, we interviewed two groups of supervisors, and both groups told us that the quality of feedback an employee receives can be inconsistent and often depends on the particular supervisor. In addition, supervisors in one group stated that they are sometimes reluctant to provide negative feedback to staff for fear of retaliation by the SEC employees' union.
Finally, in our conversations with SEC employees and in comments on our survey, some employees told us that feedback was not consistently substantive and timely.

In response to our July 2013 recommendation, SEC has also implemented mechanisms to monitor how supervisors recognize and reward performance. For example, during the course of our review, the accountability group in the Office of Human Resources issued a report on its review of SEC's awards program. The purpose of the review was to help ensure that SEC's Employee Recognition Program and performance-based cash awards were in compliance with applicable federal laws, rules, and regulations and SEC policies and procedures, and to determine how awards were distributed across the agency. According to the report, SEC's accountability group took a number of steps to monitor how awards were being distributed to SEC employees. First, the group reviewed the criteria and justification for the incentive awards. Second, it analyzed demographic data to determine the distribution of incentive awards throughout SEC. Third, it interviewed key staff responsible for the SEC awards program to help ensure those staff understood how to administer the program in compliance with relevant SEC policies and procedures and federal laws and regulations. We found that SEC's actions to review the program are consistent with OPM guidance on monitoring supervisors' use of the performance management system.

Overall, SEC's accountability group found that the awards program had improved over time, that information about the program was well communicated and highly visible to staff, and that the program was automated and sufficiently funded. However, the group recommended that supervisors ensure that nominating staff document the justification for all awards that are not based on a rating of record, as required by regulation. SEC officials responsible for the awards program agreed with the recommendation and, according to the accountability group's report, are taking actions to review submitted awards to ensure all program requirements are met, including the requirement for supervisors to ensure that each award recommendation is justified. According to the accountability group's planning document, the group plans to evaluate the awards program every 3 years through 2027. In addition, SEC officials told us that they plan to make additional program improvements based on a 2014 SEC Office of Equal Employment Opportunity initial review of SEC's awards data. In response to this initial review, SEC is conducting an analysis to determine whether SEC's policies, practices, or procedures are creating any barriers in recognition and awards. SEC expects to complete this analysis in March 2017.

Similar to SEC's findings, we found that SEC's awards program is designed and implemented consistent with OPM's HCAAF, which notes, among other things, that an agency's award system should have clear criteria and include a variety of types of awards. We performed our own independent analysis of SEC's awards program by reviewing a nongeneralizable sample of 71 award packages and found that SEC is implementing the program consistent with its policies and procedures. We reviewed these packages to determine whether they were consistent with awards criteria, including whether the awards had written justifications and the required signatures from the staff submitting award recommendations and the staff reviewing the awards, and whether the awards were accurately reflected in the employees' personnel records.
We found that all award packages we reviewed had a written justification describing what the employee or group of employees had accomplished to receive consideration for the award and had the requisite signatures from the division or office, as well as from the Office of Human Resources, indicating that the relevant officials had reviewed the awards package and approved it. Finally, we found that for all award packages we reviewed, the approved cash award or time-off award was accurately reflected in all the employees’ personnel records. In response to our 2013 recommendation, SEC has implemented mechanisms to monitor supervisor practices to address unacceptable performance. Consistent with OPM guidance and federal regulations, SEC supervisors are required to gather relevant information, such as examples of work products that do not meet performance standards and any relevant e-mails discussing the individual’s performance. They must also document the unacceptable performance prior to putting a permanent employee on a performance improvement plan or terminating employment for a probationary employee (generally employees who have been on the job for less than 1 year). According to SEC officials, the Office of General Counsel, which is now responsible for coordinating SEC’s practices related to addressing unacceptable performance, tracks employees who receive an annual performance rating of “unacceptable” (which would generally precipitate a performance improvement plan). It also follows up with the employee’s supervisors to ensure the supervisors are taking the required steps to address the performance issue. As documented in the performance improvement plans, the Office of General Counsel is to ensure that supervisors (1) describe the unacceptable performance, (2) describe what actions the employee needs to take to address the unacceptable performance, (3) specify the amount of time the employee will have to improve his or her performance, and (4) describe the consequences if the employee’s performance fails to improve. Based on our review of performance improvement plans for permanent employees and actions taken against probationary employees, we found that SEC has implemented its practices related to addressing unacceptable performance consistent with its policies and procedures. Specifically, we reviewed all 16 performance improvement plans SEC issued in fiscal years 2013 through 2015 and found that these plans contained all the required information. We also reviewed all files related to terminations of employees during probationary periods for fiscal years 2013 through 2015 (20 in all) and found that they all contained the required information under the regulations. Specifically, these files described the unacceptable performance and the effective date of the termination, which in all cases was within the 1-year probationary period. In response to our 2013 recommendation, SEC has designed and implemented a human capital accountability system (that is, a system designed to facilitate regular assessments of an agency’s personnel management programs). In our July 2013 report, we found that SEC had not implemented a way to monitor and evaluate its personnel management. As a result, we recommended that SEC prioritize and expedite efforts to develop and implement a system to monitor and evaluate personnel management activities, policies, and programs, including establishing and documenting the steps necessary to ensure completion of the system. 
Since that time, SEC has taken steps to address the recommendation. In 2015, SEC designed a human capital accountability system, including an underlying plan and standard operating procedures. According to OPM's HCAAF, a human capital accountability system should evaluate results and provide consistent means for an agency to monitor and analyze its performance on all aspects of its human capital management policies, programs, and systems. OPM guidance also states that the accountability system should contribute to an agency's performance by identifying and monitoring necessary improvements. An accountability system should also provide for annual assessments of an agency's progress and results related to human capital management. SEC's accountability system requires that staff in the Office of Human Resources review programs, recommend corrective actions and track the status of those actions, and provide an annual assessment of the progress. Steps SEC has taken to implement the system include the following:

• SEC evaluated its Student Loan Repayment Program in October 2015 and found weaknesses in internal controls—for example, controls related to documentation of the decision to accept or reject an application for the program—and made recommendations for strengthening these controls.

• Since September 2014, SEC has also conducted quarterly reviews of personnel actions and recruitment case files and identified weaknesses such as incorrect offer letters and missing evidence of rankings of job candidates, which we discuss in more detail later. SEC's quarterly reviews also identified some positive findings, including that SEC's job opportunity announcements had few significant errors.

• In April 2016, SEC provided its first annual human capital accountability report to the Chief Human Capital Officer. This report summarizes the actions SEC took to review its human capital programs in fiscal year 2015 and lists the remaining steps necessary to fully implement the system.

In addition, the results of SEC's human capital accountability system have informed the agency's human capital goals and spending priorities. According to HCAAF, the results of the human capital accountability system should inform an agency's human capital goals and objectives as well as its spending priorities. For example, SEC relied on the results of its review of the Student Loan Repayment Program to set goals related to attracting and retaining talent. Specifically, SEC found that the program lacked a process to document why some employees' applications were denied and therefore was unable to ensure that qualified and talented employees benefited from the program. According to SEC officials, in response to the results of the review, SEC broadened its goal of attracting and retaining talented employees by incorporating goals related to improvements to the management and oversight of the program. Likewise, officials said that SEC used the results of its quarterly reviews of its recruitment case files, which found improvements in aspects of the recruitment and hiring process, to set more challenging goals to hire larger numbers of staff in a more efficient manner. Based on the results of these reviews, SEC requested additional Office of Human Resources staff in its 2017 budget justification request, according to SEC officials. These linkages are important for providing assurance that SEC's human capital accountability system is contributing to its human capital goals and priorities.
As a result of these findings, we have concluded that SEC addressed our 2013 recommendation to develop and implement a system to monitor and evaluate personnel management activities, policies, and programs. In addition, although SEC completed only about half of its planned reviews of human capital programs for fiscal years 2015 and 2016, it is taking steps to address this issue. SEC staff told us that because this was their first human capital accountability system, they had not developed specific criteria for selecting programs to review, other than those they were required to review by regulation. Following discussions we held with relevant SEC officials throughout the course of our review, SEC established criteria in January 2016 for programs to be reviewed through fiscal year 2027. SEC staff in the Office of Human Resources now assign a priority level to each program, function, or activity they plan to review based on its regulatory review requirement, the necessary implementation costs, and the number of employees affected. The higher the priority, the more often the program, function, or activity is to be evaluated. SEC staff also told us that they are in the process of updating the standard operating procedures for the accountability system and that these procedures would be finalized by early calendar year 2017. In addition, when SEC staff planned the fiscal years 2015 and 2016 reviews, they said that they did not anticipate the resources required to complete them. As such, they now plan in a way that takes into account available resources, which will limit the number of reviews in the future. By applying these newly established priorities and planning procedures, SEC should be in a better position to complete key program reviews that are an essential component of its human capital accountability system.

While SEC has taken some actions to address our 2013 recommendations on workforce and succession planning, performance management, and cross-divisional communication and collaboration, we found that these actions were insufficient to fully address those recommendations. In addition, we found that while SEC's hiring and promotion policies and procedures are generally consistent with OPM and other relevant criteria, SEC has not consistently adhered to its controls over some aspects of hiring and promotions.

In line with our 2013 recommendation that SEC prioritize efforts to create a workforce and succession plan consistent with OPM guidance, SEC has recently developed plans, but they do not include some key components of strategic workforce and succession planning identified by OPM and our previous work. In our July 2013 report, we found that SEC had not yet developed a comprehensive workforce and succession plan. We recommended that the COO and the Office of Human Resources (1) develop a comprehensive workforce and succession plan and (2) incorporate relevant OPM guidance as they develop this plan. SEC has not yet fully addressed these recommendations. In July 2016, SEC finalized its workforce plan for fiscal years 2016 through 2018, which included some elements of OPM guidance and best practices we have previously identified. For example, OPM guidance states that effective workforce planning aligns workforce requirements with agency strategic plans. Furthermore, key principles for effective workforce planning we have identified call for agencies to include plans to monitor and evaluate the agency's progress toward meeting its human capital goals.
SEC’s workforce plan is aligned with its strategic plans, references the goals outlined in those plans, and includes performance measures to monitor and evaluate SEC’s progress toward its goals. In addition, key principles for effective workforce planning we have identified also call for agencies to involve top management, staff, and other stakeholders in the workforce planning process. SEC’s workforce plan involves relevant stakeholders, including division and office leadership, SEC University (SEC’s lead office for training), and focus groups of SEC employees. However, the workforce plan does not meet all the key principles for effective workforce planning: Skills gap analysis. SEC’s workforce plan lacks a comprehensive skills gap analysis. OPM has stated, and our past work has shown, that an agency should identify the critical skills its workforce needs, develop a comprehensive assessment of the gaps in those skills, and develop strategies to address those gaps. In 2015, SEC entered into a contract with OPM to conduct a skills gap analysis of mission-critical occupations, but the contract did not include an analysis of all occupations at SEC because the agency chose to prioritize select occupations in the mission-critical office and divisions and the Office of Information Technology. As a result, the skills gap analysis did not include an assessment of the competency of 33 percent of SEC’s workforce, including mission-support staff, such as staff in the Office of Human Resources, and supervisors. Without assessing the skills of these key positions, SEC does not have assurance that its personnel across the agency, including those responsible for carrying out critical personnel management functions, have the skills necessary to fulfill SEC’s mission. Workforce structure. SEC’s workforce plan does not inform decision making about the structure of the workforce. OPM guidance states that an agency’s workforce plan should inform decision making about how best to structure the organization and deploy the workforce. However, the plan does not identify the optimal number of attorneys (key staff responsible for carrying out SEC’s mission) SEC should employ or the percentage of the workforce that should be located in the regional offices. It also lacked information on the type of skills needed by, for example, attorneys. Links to budget. SEC’s workforce plan is not clearly linked to its budget formulation, which we and OPM have previously identified as a best practice. For example, the workforce plan does not identify the personnel costs of the current workforce, nor does it identify the number of employees SEC intends to hire and their associated cost. When linked to the budgeting process, workforce planning provides information that agencies need to help ensure that their annual budget requests include adequate funds to implement their human capital strategies. In addition, the component of SEC’s workforce plan that addresses succession planning lacks information on workforce attrition and lacks a process for identifying future leaders. OPM guidance states that agencies should have a leadership succession planning and management system that is based on accurate data on the current workforce and accurate projections of attrition at all leadership levels. OPM guidance also states that agencies should develop a fair and accurate process to identify a diverse pool of high-potential leaders. 
SEC’s succession plan describes the various levels of leadership at SEC and what is required for successful performance at each level. It also includes the leadership competencies for all leadership positions and senior officers and the courses and services available to develop those competencies. However, it does not include data on SEC’s current workforce and attrition projections for SEC leaders, which are important for determining current and future workforce needs. In addition, the succession plan does not identify a fair and accurate process for identifying and selecting leaders, which may prevent the process from being transparent to employees. Developing a clearer process for selecting leaders could help to address employee concerns related to the promotion process. For example, only 15 percent of the nonsupervisory staff who responded to our 2016 survey agreed that the criteria for promoting staff are clearly defined, a modest improvement from our 2013 survey but still a relatively small percentage (see fig. 9). Since our 2013 report, SEC has provided us with various documents and plans to demonstrate their response to our recommendations. However, as we previously discussed, SEC’s recently developed workforce plan lacks a comprehensive skills gap analysis plan, does not inform decision making about the structure of the workforce, and is not clearly linked to its budget formulation. As a result, SEC has not fully addressed the recommendations from our July 2013 report related to workforce planning, and we maintain that these 2013 recommendations are still valid. In 2014, SEC decided to redesign its performance management system without formally assessing it, which is inconsistent with our previous recommendation that SEC periodically validate the system in order to enhance its credibility. In our July 2013 report, we found that the design of SEC’s performance management system reflected many elements of OPM guidance, but SEC staff expressed concerns about implementation of the system. Consistent with best practices, we recommended that SEC’s COO and Office of Human Resources conduct periodic validations (with staff input) of the performance management system and make changes, as appropriate, based on these validations. At the time of this review, SEC had not conducted periodic validations of its performance management system as we recommended—nor are any planned, according to SEC staff—and therefore the recommendation is still unaddressed. While SEC’s policies state that the Office of Human Resources is to perform an assessment of the system on an annual basis, Office of Human Resources officials told us that SEC has not conducted a formal assessment of the performance management system because the agency is in the process of developing a new system. Office of Human Resources officials stated that they decided to develop a new performance management system in 2014 due to continued criticism of the current system by the SEC employees’ union. In developing its new performance management system, SEC did not follow best practices that we and OPM have identified. For example, OPM’s HCAAF states that agencies should base their human capital management decisions (including those related to changes to the performance management system) on the results of data and planning. 
Additionally, key practices for effective performance management we have identified call for agencies to involve employees and other stakeholders when they design and periodically evaluate their performance management systems. However, since our 2013 report, SEC has not reviewed the effectiveness of its existing system and has had limited stakeholder involvement in the development of the new performance management system. SEC management did not assess the existing system to determine whether the issues raised by employees were related to the system's design or its implementation. As a result, SEC lacked information on whether changes were needed, what those changes should be, and how best to make them. Instead, SEC developed a new performance management system with some limited consultations with the union in 2015 and conducted a pilot of the new system with non-bargaining staff in May 2016. We maintain that our prior recommendation should be implemented and that SEC should conduct periodic validations of any performance management system it has in place by, for example, obtaining staff input and general agreement on the competencies, rating procedures, and other aspects of the system. Only then should SEC make changes, as appropriate, based on these validations. Without evaluating its performance management system to identify problems and potential solutions, SEC may not have assurance that the new system will perform better than the current system. Furthermore, SEC's planned changes to its performance management system will require additional resources that could be targeted toward its other personnel management challenges.

SEC has not addressed our previous recommendations targeted at improving collaboration and communication across SEC. While SEC has created some incentives to support communication and collaboration across divisions, barriers to cross-divisional communication and collaboration remain. In our July 2013 report, we found that SEC faced barriers to communication and collaboration, especially among the various divisions and offices. We recommended that the SEC COO (1) identify and implement incentives for all staff to support an environment of open communication and collaboration and (2) explore communication and collaboration best practices and implement those that could benefit SEC. SEC has not yet addressed these recommendations.

Since our 2013 report, SEC has demonstrated some improvement in communication and collaboration within divisions and offices. For example, in group interviews, supervisors from five of the six largest divisions and offices at SEC agreed that there is sufficient communication and collaboration within their divisions. Furthermore, our 2016 survey results showed some improvements related to communication and collaboration. For example, 44 percent of nonsupervisory staff agreed that information is adequately shared across groups in their division or office, compared to 34 percent in our 2013 survey (see fig. 10). SEC has also implemented some incentives and procedures for staff to communicate and collaborate. For example, SEC's annual agency-wide awards program includes awards that recognize outstanding teams, including cross-divisional teams. SEC has also implemented tools and procedures to facilitate collaboration. For example, SEC developed a tracking system that facilitates collaboration on interdivisional memorandums, and the Division of Economic and Risk Analysis developed an electronic system that allows other divisions to request data it collects.
In addition, the Division of Enforcement created formal liaisons that other divisions and offices can contact. Managers in four of the five largest divisions and offices told us that these liaisons help to facilitate cross-divisional communication and collaboration with the Division of Enforcement.

However, incentives to support an environment of open communication and collaboration are not in place for all staff across SEC. According to OPM guidance, supervisors and managers should foster an environment of communication and collaboration. SEC has added performance expectations for 53 percent of supervisors to encourage communication and collaboration, including intra-agency communication and collaboration. For example, the Office of Compliance Inspections and Examinations sets expectations for its Assistant Directors (SK-17 level) to "promote and maintain an environment of cooperation and create a high level of team cohesion by empowering all staff" and "work with other Program areas and Offices, especially by pro-actively sharing relevant information." In addition, the Division of Corporation Finance sets expectations for its accountants to "engage in appropriate internal and external communications to resolve issues" and to "provide relevant technical information and work-related knowledge, skills, and lessons learned within and/or beyond the work unit." However, we found that such expectations were not in place for the remaining 47 percent of supervisors across divisions and occupations. As a result, SEC has not fully addressed our 2013 recommendation to identify and implement incentives for all staff to support an environment of open communication and collaboration.

In addition, SEC has not demonstrated the use of best practices to improve communication and collaboration within and across SEC divisions and offices. We have previously identified best practices related to collaboration, including supervisors fostering an environment of open communication, promoting frequent communication among collaborating divisions, and establishing compatible policies and procedures to operate across agency boundaries. When we asked officials from the COO's office whether they had researched best practices for improving communication and collaboration across SEC, they provided two examples. First, SEC officials told us that they reached out to officials at the Federal Deposit Insurance Corporation (FDIC) to discuss how FDIC had obtained high survey scores related to communication and collaboration. This outreach resulted in the creation of SEC's "All Invested" initiative, which SEC described as an initiative to encourage collaboration and community to help the agency achieve its mission and make SEC the best place in government to work. Second, SEC officials cited the launch of a "values campaign" to promote important values, including teamwork, as part of the "All Invested" initiative. However, many of the supervisors and staff we spoke with told us that the "All Invested" initiative was more of a marketing campaign than a substantive change. SEC has also established a number of working groups to improve communication and collaboration, but these working groups are often focused on specific topics and do not provide a means for divisions and offices to collaborate on the full range of their day-to-day work activities.
As a result of SEC’s limited action, we maintain that SEC has not taken sufficient steps to fully address our 2013 recommendation to explore and implement best practices to improve communication and collaboration within and across SEC divisions and offices. Of the seven recommendations that we made in 2013, SEC has made the least progress on the recommendations related to enhancing intra-agency communication and collaboration. One reason for this may be that, other than the Office of the Chair, there is no senior-level office or official that has authority over the daily operations of all SEC divisions and offices (see previous fig. 1). For example, the COO is responsible for approving budget requests, staffing levels, and reorganization requests for the SEC as a whole. However, each mission-critical office and division has its own director that is responsible for policies and programs that affect the operations of each individual office and division. For example, the Director of Enforcement and his staff facilitate communications with other divisions and offices to conduct investigations and coordinate on policy or legislative briefings. According to the March 2011 Boston Consulting Group report, the function of the SEC COO has historically focused on the annual congressional appropriation cycle, internal budgeting process, and administrative duties. Based on a number of interviews with relevant staff in the Office of Human Resources, we found that this structure and the limited authority of the COO may help to explain in part the inability of the COO to explore and implement best practices that could affect the daily operations throughout SEC. For example, these staff told us that the divisions and offices play the key roles in exploring and implementing best practices that could affect daily operations throughout SEC, not the COO. Key principles we have identified for organizational transformation call for agencies to create a position such as a COO or Chief Management Officer with authority over all operations of the agency; such a position is one approach to help agencies address long-standing management challenges. For example, there needs to be a single point within the agency with the responsibility and authority to ensure successful implementation of functional management and transformational change efforts. Further, it is not practical to expect an official like the Chair’s Chief of Staff to undertake this vital responsibility due to competing demands on their time in helping to execute the Chair’s policy and program agendas. The lack of a central position or office with authority over the daily operations of all SEC divisions and offices makes it difficult to lead SEC- wide changes to address long-standing management challenges related to communication and collaboration. Because of the COO’s limited authority and the absence of another SEC official, other than the SEC Chair, with the authority over the divisions and offices to take action to facilitate efforts to improve cross-divisional communication and collaboration, progress in this area will likely continue to be limited. An environment of limited intra-agency communication may continue to increase the risk of inefficiencies and less-than-optimal decision making, which may affect SEC’s ability to achieve its mission, as was the case with SEC’s actions related to the Bernard Madoff ponzi scheme and other enforcement failures. 
SEC’s hiring and promotion policies and procedures are generally consistent with OPM and other relevant criteria, but SEC lacks assurance that staff, particularly hiring specialists, know how to implement the policies and procedures correctly. OPM’s HCAAF specifies, among other things, that agencies’ hiring processes should help ensure that positions are developed and validated by appropriate staff, that position descriptions are established, and that appropriate assessment tools (e.g., processes for comparing application packages to qualifications and conducting panel interviews) are developed prior to initiating a hiring request. In addition, agencies should follow merit system principles and must observe prohibited personnel practices to ensure a fair process and may have to follow veterans’ preference requirements. Key principles for hiring that we have identified call for agencies to use vacancy announcements that are clear, user friendly, and comprehensive. Finally, federal internal control standards note that agencies should have procedures to determine whether a particular candidate fits the organizational needs and has the competence for the proposed role. Consistent with these criteria, SEC’s hiring and promotion policies and procedures require hiring specialists from the Office of Human Resources and hiring managers from the divisions and offices to follow specific steps and document these steps in recruitment case files. These steps include the following: documenting consultations between SEC hiring specialists and division hiring managers over the position to be filled by a hiring or promotion action; including descriptions of the position, job analysis, and the vacancy announcement in the case file for the position; documenting the review of applications to help ensure they meet issuing a certificate of eligibles, which lists all the applicants who are determined to be best qualified for the position posted; providing evidence of whether hiring managers reviewed each certificate of eligibles within established time frames and made a selection from the list of certified eligibles; and obtaining signed offer letters and supporting documentation, including starting salary. In addition, in 2015, SEC finalized standard operating procedures related to the review and maintenance of case files. These procedures require hiring specialists to complete checklists at various stages in the hiring and promotion process to help ensure that documents are uploaded appropriately. Supervisors are responsible for reviewing and approving the checklists before moving to the next stage in the process. In addition, they are also responsible for conducting periodic compliance reviews to ensure adherence to these procedures. However, although these policies and procedures meet relevant criteria, we found a large number of deficiencies when we tested the policies’ implementation. We reviewed a random sample of cases files from fiscal years 2013 through 2015 to determine if SEC was following its hiring and promotion policies and procedures. 
Based on our analysis of this sample, we estimate that for 94 percent of the case files for fiscal years 2013 through 2015, SEC staff did not consistently follow at least one policy or procedure for hiring and promotion actions, including the following examples:

• Documentation missing: Based on our analysis of the sample, we estimate that 16 percent of case files during these fiscal years had no evidence that applicants were reviewed to ensure they met the minimum job requirements. Further, we estimate that 16 percent of the case files had no certificate of eligibles, which makes it difficult to determine how officials selected the best qualified applicants. We also found deficiencies once candidates were selected for the position. For example, for 23 job offers made, we found no documents that showed how the initial salary was determined.

• Supervisory approvals missing: Based on our analysis of the sample, we estimate that 18 percent of case files for fiscal years 2013 through 2015 contained documents describing the consultation between the hiring specialist and the hiring manager about the position that were not completed or signed. As a result, it is difficult to determine whether the hiring specialist and hiring manager reviewed and developed the documentation that is meant to support and defend the hiring decision. In addition, we estimate that 16 percent of case files had certificates of eligibles that were not signed by the responsible officials. The selecting officials are responsible for returning the certificate with their selection indicated, and their signature serves to assure the Office of Human Resources that they have provided their approval to extend an offer of employment, according to SEC staff.

• Time frames not observed: Based on our analysis of the sample, we estimate that 29 percent of case files for fiscal years 2013 through 2015 had certificates of eligibles that were not returned on time, nor was there documentation on why they were not returned on time. This is particularly important because some SEC employees told us that SEC cannot always hire the most qualified people due to slow processing times. In addition, for 20 job offers, the offer letter was sent before the initial salary determination was made, which is against SEC policy.

SEC has also conducted reviews of its case files and identified deficiencies similar to those we found during our review. As discussed previously, as part of SEC's implementation of its human capital accountability system, SEC has implemented an internal quality control process to help ensure that case files are accurate and complete, but this process occurs after hiring and promotion decisions are made. The purpose of this quality control process is to identify common deficiencies in case files in order to improve the hiring and promotion process. SEC has conducted four quarterly reviews of case files since September 2014 and continues to identify weaknesses similar to those we found, such as missing evidence of ranking of job candidates and missing documentation to support the initial salary of the candidate. SEC categorized the deficiencies into five levels of severity and found that the frequency of the minor (e.g., missing checklists) and moderate (e.g., missing descriptions of the position) deficiencies has slowly decreased over time, with minor deficiencies decreasing from 48 percent of case files reviewed during its September 2015 quarterly review to 41 percent in the most recent March 2016 review.
Likewise, according to SEC, the moderate deficiencies decreased from 16 percent to 9 percent of case files reviewed over this same time period. However, the frequency of significant (e.g., missing reviews of whether applicants meet minimum qualifications) and major (e.g., missing audit of certificate of eligibles) deficiencies has slowly increased over time. Significant deficiencies have increased from 24 percent of case files reviewed during the September 2015 quarterly review to 33 percent in the most recent March 2016 review. Likewise, the major deficiencies increased from 12 percent to 17 percent of case files reviewed over the same time period. SEC had no critical deficiencies (e.g., violation of veterans' preference) over this period.

OPM, NCUA, and an outside consultant also reviewed SEC's hiring and promotion practices and identified similar deficiencies in a sample of case files they reviewed, including the following examples:

• OPM found that although SEC had various options for staff to document their rationale for deeming an applicant who did not meet minimum qualifications as "best qualified," in some cases staff did not provide sufficient documentation. As a result, OPM had difficulty reconstructing some minimum qualifications to assess whether a candidate met the minimum qualifications for the job posting. However, OPM did find that the qualifications were accurate.

• NCUA found that applicants were not consistently notified of the status of their application at key stages. OPM has noted that it is a good practice to keep applicants notified of their status.

• An outside consultant that reviewed SEC's internal promotion actions from fiscal years 2011 through 2014 found, for example, that SEC's lack of full adherence to uniform personnel practices and guidelines appeared to result in a loss of promotional opportunities and unequal treatment in the selection stage for certain groups. Specifically, the consultant found that disparities existed in the way human resources specialists processed promotion actions and that they failed to apply processes and procedures established by OPM to promote fair and equal opportunities for all employees. These practices included restricting vacancy announcements to specific offices within SEC and early closing of vacancy announcements. As a result of these practices, the consultant reported that well-qualified applicants who perform similar functions in another area of SEC may not be selected or applicants may not have sufficient time to apply.

SEC has taken a number of actions to address deficiencies identified by OPM, NCUA, and the consultant. For example, in December 2015, during our review, SEC began to mandate that all case file documents be uploaded to an electronic case file. Previously, SEC case files were maintained in a paper format. According to SEC staff, the electronic case files allow for easier access and monitoring than paper files, allow for controls over what documents are stored in the case files, and avoid problems with documents being misplaced or lost. To address issues that NCUA found, SEC now notifies applicants at key stages of the application process. SEC's actions in response to recommendations in the consultant's report include providing general training to all staff on how SEC promotes staff, publicizing the promotion process to all staff, and providing resources to staff interested in promotions, including guidance regarding writing resumes and preparing for interviews.
SEC has recently taken some steps to improve its hiring and promotion practices, which may help to address the types of errors that we found in our review of files from fiscal years 2013 through 2015. As discussed previously, we estimated that SEC staff did not follow at least one required policy or procedure, such as including required documentation, for 94 percent of the case files we reviewed. The presence of errors in a large percentage of case files indicates that Office of Human Resources supervisors, who are responsible for overseeing the work of the hiring specialists, did not identify and resolve these issues as required by SEC policy. According to federal internal control standards, managers should perform ongoing monitoring as part of the normal course of operations. Ongoing monitoring includes regular supervisory activities and reconciliations, and may include automated tools, which can increase objectivity and efficiency. SEC officials told us that the electronic case files they began to create in December 2015, during the course of our review, should allow them to more easily monitor and audit whether all documentation is complete and properly uploaded at every stage of the hiring and promotion process. These actions are consistent with federal internal control standards. Few completed electronic case files were available during our review, and therefore it is too early to evaluate the effectiveness of the new process. However, SEC's steps may help to ensure that staff adhere to policies and procedures and may help to address the types of errors that we found in our file review.

We also found issues related to the training of SEC's hiring specialists, which may be another factor contributing to the high rate of errors in the case files we reviewed. When we spoke to SEC hiring managers, they expressed some concern over the competence of hiring specialists in the Office of Human Resources. Hiring specialists also told us that they received only limited training. Based on our review of training offered by SEC University, we did not find any specific training on hiring and promotions targeted at hiring specialists. Further, SEC's 2015 training plan for the Office of Human Resources has a course on adjudicating veterans' preference, but no courses are specifically targeted at how to implement each stage of the hiring and promotion process. A key principle of an effective control environment states that managers should demonstrate a commitment to recruit, develop, and retain competent individuals. For example, managers should establish expectations of competence for key roles to help the entity achieve its objectives. This competence is gained largely from professional experience, training, and certifications. In addition, as previously discussed, we found that SEC has not assessed whether some of its mission-support staff, including key hiring specialists, have the necessary skills to conduct their work. OPM has stated that agencies should conduct a learning needs analysis to identify skills gaps across their entire workforce, and we found that SEC has yet to fully address our 2013 recommendation to conduct such an analysis. Rather, SEC conducted a skills gap analysis only of staff in the mission-critical office and divisions and in the Office of Information Technology.
Without providing necessary training that is informed by a comprehensive skills gap analysis, SEC may lack assurance that hiring specialists have the skills required to conduct their work effectively and that it is hiring and promoting the most qualified applicants.

A high-performing workforce is critical to SEC effectively carrying out its regulatory responsibilities in increasingly complex markets. While SEC has taken steps to address our 2013 recommendations, its progress has been limited, and five of the seven recommendations are not fully addressed. We maintain that these five recommendations—in the areas of workforce and succession planning, the performance management system, and communication and collaboration—should be addressed in order for SEC to fulfill its mission effectively. One of the most protracted personnel management challenges at SEC remains communication and collaboration, and SEC's limited progress in addressing our 2013 recommendations in this area points to a lack of leadership in breaking down silos that prevent divisions and offices from working more efficiently and effectively with each other. Apart from the Office of the Chair, which has broader responsibilities both within and outside the agency, heads of each division and office are responsible for their daily operations and are not accountable to any senior-level official, such as the Chief Operating Officer. Our prior work has found that having a senior-level position within the agency that has the responsibility and authority to ensure that changes are implemented can help address protracted personnel management challenges such as communication and collaboration. Finally, our review found that SEC's training related to hiring and promotion practices may be inadequate. According to federal internal control standards, managers should demonstrate a commitment to recruit, develop, and retain competent individuals, which depends in part on adequate training. Because SEC has not conducted a skills gap analysis across its entire workforce, including its hiring personnel, as we previously recommended in 2013, it lacks the information needed to develop an effective training program. Training for hiring specialists that is informed by a comprehensive skills gap analysis should better enable SEC management to hire and promote the most qualified applicants.

To help SEC address identified personnel management challenges, the Chair should take the following two actions:

• Enhance or expand the responsibilities and authority of the COO or other official or office so they can help ensure that improvements to communication and collaboration across SEC are made. For instance, if the duties of the COO were expanded, the COO could establish liaisons in each mission-critical office and division for SEC employees to contact or develop procedures to help facilitate communication and collaboration among the mission-critical office and divisions.

• Develop and implement training for hiring specialists that is informed by a skills gap analysis.

We provided SEC a draft of this report for its review and comment. SEC provided written comments that are reprinted in appendix IX. In its written comments, SEC agreed with the majority of our findings and one of our two recommendations, but it disagreed with the other. In its letter, SEC stated that it has made a number of improvements in its personnel management since our 2013 report.
The letter also highlighted that the rankings of the Best Places to Work in the Federal Government show that SEC's workforce is among the most engaged in the government, with SEC now ranking 6th out of 27 mid-sized agencies; SEC has climbed nine places in the rankings since our 2013 report. According to the letter, this improvement, among other indicators, illustrates the agency's success in building, sustaining, and growing an organization that fosters and values innovation, communication, collaboration, and transparency. In its letter, SEC also acknowledged that further improvements could be made, and it noted that our report contained useful information to strengthen personnel management at the Commission.

SEC agreed with our second recommendation in this report, to develop and implement training for its hiring specialists. Specifically, the letter stated that SEC University, in coordination with the Talent Acquisition Group in SEC's Office of Human Resources, will prioritize its competency assessment of its human resources specialists (which include hiring specialists) and develop training plans to address any skill gaps identified. SEC also agreed that it still needs to conduct periodic validations of its performance management system. According to the letter, SEC recently worked with OPM to validate the new system that it is piloting, and the Commission will continue to work with OPM to conduct additional surveys of supervisors and employees regarding the efficiency and effectiveness of SEC's performance management program. SEC also stated in its letter that improvements can be made to its workforce and succession planning. In its letter, SEC stated that it had already begun the planning process to conduct a competency skills gap analysis on the non-mission-critical workforce in fiscal year 2017 and will develop appropriate action plans to address the skill gaps that are identified. In addition, SEC stated that the Office of Human Resources is in the process of enhancing its current succession planning program, and it will work with all SEC division directors and office heads to institute additional fair and transparent processes for identifying high-potential employees and communicating to them opportunities for leadership development.

SEC disagreed with our characterization of the current state of its intra-agency communication and collaboration. In its letter, SEC stated that it believes that significantly more progress has been made by the Commission to resolve the recommendations from our 2013 report (addressing interdivisional communication and collaboration) than our report recognizes. SEC also stated that interactions at both the staff and senior leadership levels are continuous, and that it has instituted both formal and informal mechanisms for additional coordination where it is required, which have proven to be successful. SEC also stated that cross-divisional interaction may not be necessary, or even appropriate, for some nonsupervisory staff, and it noted concern with our reliance on anecdotal accounts from one former employee.

We acknowledged the improvement SEC has made, for example, by noting that the percentage of nonsupervisory and supervisory staff responding that information is adequately shared across groups in their division or office increased from 2013 to 2016. However, we found substantial evidence that siloed communication remains a challenge at SEC.
For instance, 78 of the 187 employees we interviewed (over 40 percent) cited issues around siloed communication as an area where SEC needs to improve. Additionally, of the 1,947 written responses we received to our survey questions, 597 of them cited various challenges related to communication and collaboration. We provided examples from several current and one former employee to illustrate the siloed communication at SEC. Further, we recognize that not all staff at SEC may need to coordinate and collaborate for work-related issues. However, staff in mission-critical offices and divisions should be enabled to collaborate and communicate with staff in other offices and divisions. As acknowledged in our report, the Division of Enforcement created formal liaisons that other divisions and offices can contact, and these liaisons help to facilitate cross-divisional communication and collaboration within the division. Based on our survey results, staff in the Division of Enforcement more frequently interacted with staff from other mission-critical offices and divisions. As SEC acknowledged in its response, the Division of Economic and Risk Analysis is similar to the Division of Enforcement in that staff from other offices and divisions should be routinely communicating and collaborating with it. However, unlike the Division of Enforcement, the Division of Economic and Risk Analysis lacks a mechanism to easily facilitate cross-divisional communication and collaboration. Our survey results show that interaction between Division of Economic and Risk Analysis staff and staff from other mission-critical offices and divisions is limited. SEC also disagreed with our recommendation related to enhancing the responsibilities and authority of the COO or other official or office to help ensure that improvements to communication and collaboration across SEC are made. In its letter, SEC stated that given the current legal and management structure of SEC, as well as the requirements of its mission, SEC does not believe that a position of that description would improve the ability of SEC to discharge its obligations to protect investors. SEC also stated that the agency’s structure and the authority of the Chief of Staff and the Deputy Chiefs of Staff enable them, in close consultation with the Chair, to effectively pursue changes to enhance coordination and collaboration throughout SEC. We are not suggesting that an additional layer of management is needed to help improve the ability of SEC to discharge its obligations to protect investors. Rather, we are recommending that the authority of the COO or some other official be enhanced in order to ensure that each mission-critical office and division establish a mechanism or develop procedures to facilitate communication and collaboration. SEC provided no evidence to illustrate why the relevant best practices that GAO has identified, which have worked in other federal agencies with varied structures, cannot benefit the Commission. The best practices we have identified call for institutionalized accountability for addressing management issues and leading transformational change because the management weaknesses in some agencies are deeply entrenched and long-standing, and it can take at least 5 to 7 years of sustained attention and continuity to fully implement transformations and change management initiatives. The typical tenure of an SEC Chair is shorter than the time needed to effect such change. Since 2001, SEC has had 6 Chairs, none of whom had a tenure that lasted 4 years. 
SEC also noted that our conclusions would have been better informed with the full perspectives of the agency’s Chief of Staff and Deputy Chiefs of Staff. We met with the Chief of Staff and Deputy Chief of Staff during our review and they discussed efforts by SEC to address cross-divisional communication and collaboration changes. While we acknowledge efforts SEC has made to improve collaboration and communication across the agency, the evidence we present indicates that SEC should do more to identify a single point of contact with the responsibility and authority to ensure the successful implementation of a communication and collaboration process. As a result, we maintain that the recommendation would help ensure such change. We are sending copies of this report to the appropriate congressional committees, the Chair of the Securities and Exchange Commission, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X. Table 2 provides the status of recommendations we made to the Securities and Exchange Commission (SEC) in 2013. As the table shows, five of the seven recommendations remain open, as of December 2016. This report examines (1) employees’ views of the Securities and Exchange Commission’s (SEC) organizational culture and personnel management, and the extent to which these views have changed since our 2013 report and (2) the extent to which selected SEC personnel management practices have been implemented consistent with relevant standards. To examine employees’ views of SEC’s organizational culture and the extent to which they have changed since 2013, we conducted surveys of all SEC staff, a content analysis of open-ended responses to our survey, individual interviews with SEC staff, and structured group interviews with SEC supervisors: Surveys: To examine employees’ views of SEC’s organizational culture and the extent to which these views have changed since 2013, we implemented three web-based surveys of all 4,236 nonsupervisory and supervisory staff and 148 senior officers. The three surveys were administered to the following number of staff during the following time periods: 1. the survey to the mission-critical office and divisions was administered to 2,627 staff from October 2015 to March 2016; 2. the survey to all other offices and divisions was administered to 1,609 staff from May 2016 to September 2016; and 3. the survey to all 148 senior officers was administered from April 2016 to July 2016. The surveys were administered to the different groups at various timeframes to, for example, allow for the maximum response rate given the competing demands of SEC staff at different times of the year. We chose to survey all staff at SEC instead of a sample to obtain information from the largest feasible number of SEC employees. We analyzed the results of our 2016 survey of all supervisory and nonsupervisory staff and senior officers and also compared the results from 2016 surveys to the mission-critical office and divisions and the senior officers to the results from the 2013 surveys. 
In addition, we reviewed the Office of Personnel Management’s (OPM) 2016 Federal Employee Viewpoint Survey (FEVS) results to obtain additional perspectives from SEC staff on the agency’s personnel-management-related issues. Each GAO survey to SEC staff included questions on (1) personnel management issues related to recruitment, training, staff development, and resources; (2) communication between and within divisions and offices; (3) leadership and management; (4) performance management and promotions; and (5) organizational culture and climate. The separate survey of all SEC senior officers (those at the SO-1, SO-2, and SO-3 pay grades) covered the same topic areas but omitted questions not relevant for senior officers and included additional questions specifically relevant for senior officers. The survey to all other offices and divisions also covered the same topic areas, but had some questions omitted, such as the question related to the number of times respondents had interacted with the mission-critical office and divisions over the past year. Our survey included both multiple-choice and open-ended questions. For our survey of the mission-critical office and divisions, 1,819 nonsupervisors and supervisors responded to our survey for a response rate of 69 percent; for our survey of all other offices and divisions, 969 nonsupervisors and supervisors responded for a response rate of 60 percent. A total of 104 of the 148 senior officers responded to our survey of all senior officers for a response rate of 70 percent. For all surveys, except the one for senior officers, we carried out a statistical nonresponse bias analysis using available administrative data and determined that we could not assume that the nonrespondents were missing at random. For this reason, the results of the staff survey are presented as tabulations from a census survey, and we do not attempt to extrapolate the findings to the 31 percent of mission-critical employees and 40 percent of all other employees who chose not to complete our survey. To minimize certain types of errors, commonly referred to as nonsampling errors, and enhance data quality, we employed recognized survey design practices in the development of the questionnaires and the collection, processing, and analysis of the survey data. To develop our survey questions, we drew on information from one-on-one interviews, focus group sessions held during our 2013 review, and prior GAO SEC personnel management surveys. For the surveys of the mission-critical office and divisions and senior officers, we took steps to ensure our survey questions from 2013 were still relevant and to determine if new issues warranted new questions. To do this, we sent draft survey questions to SEC officials in the mission-critical office and divisions and senior officers who volunteered to review our draft questions to obtain their feedback on the survey questions. For the survey of the other offices and divisions within SEC, we pretested the questionnaire with SEC employees to validate the survey questionnaire and to ensure that we were not omitting relevant questions from the survey. We met with six SEC staff for the pretest of the survey. During survey development, we reviewed the survey to ensure the ordering of survey sections was appropriate and that questions in each section were clearly stated and easy to comprehend. A GAO survey expert reviewed and provided feedback on our survey instrument. 
To reduce nonresponse, another source of nonsampling error, we undertook an intensive follow-up effort that included multiple e-mail reminders to encourage SEC employees to complete the questionnaire and a series of phone call reminders to nonrespondents to encourage participation and to troubleshoot any potential logistical issues with access to the questionnaire. We minimized processing errors by having a second independent data analyst conduct an accuracy check of the computer programs used for data analysis. We also had respondents complete questionnaires online to eliminate errors associated with manual data entry. On the basis of our application of these practices and follow-up procedures, we determined that the data were of sufficient quality for our purposes. Content analysis: To analyze the information we obtained from the open-ended survey responses, we conducted a content analysis on the 1,947 responses to the six open-ended survey questions from the survey of the mission-critical office and divisions using a text analytics tool. Two analysts developed coding categories based on requirements in section 962 of the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act), our researchable objectives, and information collected during individual interviews, as well as findings from our July 2013 report. Coding categories were as follows: (1) communication and collaboration, (2) hierarchy, (3) risk aversion, (4) atmosphere of trust, (5) morale, (6) performance management, (7) training and hiring, and (8) awards. The team provided a “lexicon” of key words and phrases associated with each of the eight coding categories to the text analytics subject matter expert. For example, the key words and phrases associated with communication and collaboration included “communication,” “transparency,” and “working together.” For each of the six open-ended questions, the subject matter expert developed a computer program using the lexicon and provided the team with the categorized open-ended responses. The goal of this analysis was to determine the number of respondents who mentioned at least one challenge in the respective coding category. After obtaining the categorized open-ended responses, two GAO analysts collaboratively reviewed the output and revised or “calibrated” the lexicon based on each result. This review involved verifying the coding of a judgmental sample of responses. The subject matter expert then reran the program with the updated lexicon. This iterative process allowed the subject matter expert to refine the program to isolate comments focused on challenges associated with the coding categories. While human correction and evaluation of the content can help improve the quality of the machine-generated coding, some undetermined amount of error remains. To minimize error and improve accuracy, the calibration process continued for three iterations. Comments that were flagged by the text analytics tool as capturing a challenge within a particular coding category were coded with a “1.” Once coding was completed and the discrepancies were resolved, an analyst tallied the total number of responses with a “1” in each of the coding categories. This tally indicated the number of respondents who expressed concern in their open-ended responses for each category. This process was repeated for each of the eight challenge categories. We do not make any attempt to extrapolate the findings to the eligible staff who chose not to complete our surveys. 
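To illustrate the kind of lexicon-based flagging and tallying described above, the sketch below codes open-ended responses against keyword lists and counts how many responses mention at least one challenge in each category. This is a minimal illustration, not GAO's actual text analytics tool; aside from the three phrases quoted above for the communication and collaboration category, the keyword lists, function name, and sample responses are hypothetical.

```python
# Minimal sketch of lexicon-based coding of open-ended survey responses.
# Category names come from the report; keywords beyond the quoted examples
# and the sample responses are assumptions for illustration only.

from typing import Dict, List

def code_responses(responses: List[str], lexicon: Dict[str, List[str]]) -> Dict[str, int]:
    """Count, per category, the responses containing at least one keyword."""
    tallies = {category: 0 for category in lexicon}
    for text in responses:
        lowered = text.lower()
        for category, keywords in lexicon.items():
            # A response is coded "1" for a category if any keyword appears in it.
            if any(keyword in lowered for keyword in keywords):
                tallies[category] += 1
    return tallies

if __name__ == "__main__":
    lexicon = {
        "communication and collaboration": ["communication", "transparency", "working together"],
        "training and hiring": ["training", "hiring"],  # hypothetical keywords
    }
    responses = [
        "Divisions rarely share information; communication is siloed.",
        "New staff need more training before taking on complex cases.",
    ]
    print(code_responses(responses, lexicon))
    # {'communication and collaboration': 1, 'training and hiring': 1}
```

In practice, the iterative calibration the report describes would amount to reviewing a sample of the flagged and unflagged responses, adjusting the keyword lists, and rerunning a program of this general shape.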
Individual interviews: We interviewed or obtained written responses from 185 employees (144 nonsupervisory staff, 17 supervisory staff, and 24 staff who chose not to disclose their supervisory status)—in person for those at SEC headquarters and by telephone or e-mail for those in headquarters and regional offices—during 2 weeks in September 2015 and 1 week in February 2016 to obtain their views on personnel management challenges at the agency. We coordinated with SEC to send a broadcast message over its internal system to all staff to invite them to meet with us either in person or to call a GAO toll-free number or use a GAO e-mail account to provide their views on SEC’s organizational culture. At headquarters, we established office hours during which employees could speak with GAO analysts. During the same period, we set up a GAO toll-free phone number and e-mail account to communicate with employees in the regional offices or headquarters who could not attend the office hours. We asked certain key questions of every person we interviewed and interjected additional questions as appropriate. We also explained that our review was initiated due to a provision in section 962 of the Dodd-Frank Act, and provided a description of this section when asked by SEC employees. We then asked them about (1) what personnel management practices were working well, (2) what challenges existed in personnel management, (3) what initiatives, if any, SEC had taken to address these challenges, and (4) whether these individuals had any recommendations to address such challenges. For those staff who were not familiar with what areas encompassed personnel management, we presented them with a list of areas for them to think about. Employees were encouraged to talk openly and freely. To maintain the confidentiality of individual responses, we did not record individual names in our transcripts. Instead, we collected and analyzed the information by division and rank only, and we aggregated our findings so that no individual comments could be identified. We conducted a separate analysis to summarize key themes that emerged from these individual interviews. GAO analysts independently reviewed notes taken from these interviews and made a judgment about appropriate codes that described the themes. The analysts compared their decisions and reconciled any disagreements, resulting in the following set of coding categories: (1) staff competence; (2) appropriate levels of supervisory and non-supervisory staff; (3) effectiveness of supervisors; (4) promoting staff criteria; (5) siloed communication issues; and (6) other cross-cutting challenges. Once the coding structure was finalized, one GAO analyst separately reviewed and coded each response and tabulated the frequency of statements expressing certain themes, while a second analyst verified the information to ensure the tabulation was correct and that the analyst concurred with the results. Where there were discrepancies, a third analyst was asked to review the statements and to make a final determination about how a specific statement would be coded. We also interviewed former SEC employees, officials from the SEC Office of Inspector General (IG), and self-selected representatives and members of the SEC employees’ union. We selected a nonprobability sample of former employees to interview that reflected the diversity of former employees in terms of pay grade, occupational category, and tenure, among other factors. 
The results of these interviews with SEC employees, former SEC employees, and members and representatives of the SEC employees’ union are not generalizable, but provide views and experiences. Structured group interviews: We also conducted structured group interviews with supervisors from the mission-critical office and divisions at SEC, including supervisors in regional offices. The purpose of these structured group interviews was to obtain their views on personnel management practices related to hiring, promotions, supervision, strong and unacceptable performance, training, and communication and collaboration. Our universe of supervisors consisted of staff at the SK-15 or SK-17 level, with the exception of staff at the SK-15 level in the Division of Enforcement, who are not considered management. We obtained the e-mail addresses of all supervisors in these divisions and sent them an invitation to participate in our structured group interviews. We held eight meetings with the supervisors who agreed to participate in our structured group interviews. We held a meeting with each of the mission-critical office and divisions on communication and collaboration. At two of these meetings, we had both SK-15s and SK-17s, including staff from regional offices. In addition, we had two meetings that covered the noncommunication topics. One of the meetings consisted of SK-15s and the other consisted of SK-17s from each of the mission-critical office and divisions. These meetings also included regional office staff. The results of these meetings are not generalizable, but provide views on selected personnel management practices. To determine the extent to which selected SEC personnel management practices have been implemented consistent with relevant standards, we first assessed whether these practices were designed consistent with OPM’s Human Capital Assessment and Accountability Framework (HCAAF), key human capital practices we have identified in prior work, and federal internal control standards. As part of this effort, we reviewed actions SEC had taken to address the seven recommendations from our 2013 report related to four personnel management areas: (1) workforce and succession planning, (2) performance management, including performance feedback, (3) communication and collaboration, and (4) human capital accountability. In addition to these four personnel management areas, we also assessed the design and implementation of four personnel management practices based on information we obtained from SEC staff. Specifically, SEC staff identified these as potential areas of concern (or areas for improvement) during individual interviews with us as well as in responses to our surveys—actions to recognize and reward performance, actions to address unacceptable performance, hiring and promotions, and training. Workforce and succession planning: To determine whether SEC’s workforce and succession planning practices were designed consistent with relevant criteria, we obtained a copy of SEC’s 2016 strategic workforce plan. We reviewed this strategic plan and compared it to OPM’s HCAAF and our prior work on strategic workforce planning. For example, we reviewed the skills gap analysis from the plan and compared it to OPM’s HCAAF related to identifying critical skills for an agency’s workforce. As part of this effort, we worked with our Director of Workforce Planning to review SEC’s strategic workforce plan and determine whether the plan was sufficient to address our 2013 recommendations. 
In addition, we met with SEC’s Workforce Policy Group, Office of Human Resources, and senior leaders from the divisions to obtain information on what actions they had taken to address our recommendations related to workforce and succession planning. Performance feedback: To determine what steps SEC had taken to address our 2013 recommendation, in conjunction with staff from our Human Capital Office, we examined SEC’s process for monitoring feedback as well as our survey results that relate to the performance feedback employees receive. We obtained information on SEC’s process for monitoring feedback by reviewing documentation and interviewing staff from the Office of Human Resources. As part of this effort, we obtained a description of their audit of a sample of performance work plans. Performance management: In our prior review (GAO-13-621), we determined that the design of SEC’s performance management system reflected many elements of OPM guidance, but SEC staff expressed concerns about implementation of the system. However, since our prior review, SEC decided to make changes to its performance management system. Therefore, in this review, we set out to determine the extent to which SEC’s changes to its performance management system were consistent with relevant standards. To do this, we met with SEC staff from the Office of Human Resources and union officials to inquire about what actions SEC had taken to address our 2013 recommendations. Upon learning that SEC decided to redesign its performance management system in 2014, we assessed the actions SEC took to redesign its system and compared these actions to OPM’s HCAAF, our previous work on performance management systems, and federal internal control standards. We also worked with staff from our Human Capital Office to assess the actions SEC had taken to redesign its system against the relevant criteria. Communication and collaboration: To determine whether SEC’s communication and collaboration practices were designed consistent with relevant criteria, we assessed SEC’s actions to address our 2013 recommendations against OPM’s HCAAF, our previous work, and federal internal control standards. Specifically, we reviewed SEC’s award program to determine if the program provided incentives to support an environment of open communication and collaboration, including determining whether SEC provided awards for teamwork. In addition, we reviewed 79 performance expectations, known as performance work plans; all 28 supervisory performance work plans (including 5 plans under the pilot performance management system), and 51 nonsupervisory performance work plans for the mission-critical office and divisions. For the nonsupervisors, we selected the performance work plans for the “mission-critical” staff we identified in our 2013 review—accountants, attorneys, economists, examiners, and financial analysts at the SK-12 through SK-16 levels. The results of our review of performance work plans are not generalizable, but provide us with information on how communication and collaboration are addressed as part of an employee’s performance expectations. 
To determine whether the performance work plans contained expectations that addressed our 2013 recommendation, one GAO analyst read through each performance work plan and determined whether it contained all four of the following categories that were derived from the recommendation: (1) communication, (2) collaboration, (3) intra-agency communication and collaboration, and (4) fostering an environment of open communication and collaboration—for example, encouraging acceptance, exchanging of information, and sharing of diverse points of view regardless of individual differences. A second GAO analyst independently conducted a similar analysis of each performance work plan. A third GAO analyst reviewed the results of both sets of reviews to reconcile any differences. In cases where the reviews differed, the third GAO analyst reviewed each analyst’s assessment and made a judgment on which one was correct. In addition, we met with SEC staff in the Office of Human Resources and senior officials from the divisions to discuss what actions they had taken to address our 2013 recommendations. Human capital accountability: To determine whether SEC’s human capital accountability system was designed consistent with relevant criteria, we obtained and reviewed SEC documentation on its recently developed human capital accountability system, including policies establishing the system, standard operating procedures, and evidence of the system’s implementation, such as a review of SEC’s student loan repayment program. We compared SEC’s documentation on its human capital accountability system to OPM’s HCAAF and federal internal control standards to determine whether this system addresses the recommendation from our 2013 review. For example, we reviewed SEC’s schedule of human capital program reviews, the reports generated from reviews of specific human capital programs, and actions taken by SEC to address any identified weaknesses in internal controls and compared them to OPM’s HCAAF related to human capital accountability. We also met with staff from SEC’s Human Capital Strategy Group in the Office of Human Resources to determine what criteria they used to determine which human capital programs to review. Actions to recognize and reward performance: To determine whether SEC’s policies on awards have been designed consistent with relevant criteria, we compared these policies to OPM’s HCAAF and our prior work on linking individual performance to organizational success. Specifically, we reviewed the following policies and documents related to awards: SEC’s administrative regulations that govern its recognition programs; SEC’s collective bargaining agreement; and SEC’s guidance for cash and time-off awards. We worked with staff from our Human Capital Office to assess these policies against OPM’s HCAAF and our prior work, and determined that they were consistent with relevant criteria. For example, we reviewed SEC’s processes for monitoring how supervisors recognize and reward performance and compared them to OPM’s HCAAF related to awarding staff. We next determined whether SEC’s practices related to awards were being implemented consistent with the agency’s policies. To do this, we obtained a list of all incentive awards (cash and time-off) for fiscal years 2013 through 2015. We randomly selected 96 award packages from fiscal years 2013 through 2015. 
Of these, 6 were from fiscal year 2012 and thus out of the scope of our review, 19 were performance-based compensation amounts, which were also out of the scope of our review, and 1 was later found by SEC to be out of our scope; as a result, we analyzed 71 award packages. Because the database we used to draw our sample contained data that was out of the scope of our review (such as some packages from fiscal year 2012 and others that were not incentive awards), we did not attempt to generalize the results of the sample. We were, however, able to use the 71 award packages to assess SEC’s implementation of its awards program. Each package should have contained the following information: written justification for the award; the dollar amount or hours off for the award; evidence of approval (i.e., signatures) by the recommending official, reviewing official (both usually from the divisions or office that nominated the person for the award), and the Office of Human Resources staff in the awards program area; evidence of the Chief Operating Officer’s approval for certain high-dollar-amount or time-off awards; and a copy of the award recipient’s official personnel record (SF-50) with the correct dollar amount or time-off hours noted. We then had one GAO analyst review each of the 71 award packages and complete a checklist noting whether the package contained the information previously mentioned. A second GAO analyst then reviewed each of the 71 packages and reviewed how the first analyst coded the checklist. If the second analyst did not agree with the coding of the first analyst, that information was noted and both analysts met to discuss any disagreements. During this meeting, the two analysts were able to discuss their disagreements and reach consensus on the proper coding of the award packages. We also analyzed the award data for fiscal years 2013 through 2015, which we present in appendix IV. In addition, we met with staff from SEC’s Office of Human Resources who are responsible for the awards program to obtain a better understanding of SEC’s awards program. Actions to address unacceptable performance: To determine whether SEC’s policies to address unacceptable performance have been designed consistent with relevant criteria, we compared these policies to OPM’s HCAAF and our prior work on linking individual performance to organizational success. Specifically, we reviewed the following policies and documents related to addressing unacceptable performance: SEC’s overview of its employee misconduct and nonperformance support program, SEC’s collective bargaining agreement, the performance management standard operating procedures for non-bargaining-unit employees, and SEC’s senior officer performance management administrative regulation. We worked with staff from our Human Capital Office to assess these policies against OPM’s HCAAF and our prior work, and determined that they were consistent with relevant criteria. For example, we examined the processes SEC has to monitor supervisors’ practices to address unacceptable performance and compared them to OPM guidance and federal regulations on addressing such performance. We next determined whether SEC’s practices related to addressing unacceptable performance were being implemented consistent with the agency’s policies. To do this, we obtained all performance improvement plans for fiscal years 2013 through 2015 (16 in all) and compared the information in these plans against what SEC’s policies require. 
We first had one GAO analyst review each performance improvement plan, compare the information in these plans and supporting documents to what SEC requires for these plans, and record the findings in a spreadsheet. We then had a second GAO analyst review the work of the first analyst to determine if the spreadsheet was coded correctly. The two analysts conferred on any differences in the coding and were able to reach consensus on the proper coding. We also reviewed documentation associated with probationary employees who are terminated due to performance issues. SEC’s policies related to addressing unacceptable performance do not apply to probationary employees. The actions SEC can take against these employees are governed by federal regulations related to actions taken against probationary employees for unsatisfactory performance or conduct. Similar to our approach with the performance improvement plans, we obtained all files related to probationary employees terminated due to unacceptable performance for fiscal years 2013 through 2015 (20 in all). We again had one GAO analyst review each file and compare the information in these files and any supporting documents to what OPM regulations require, such as a description of the unacceptable performance, and record the findings in a spreadsheet. We then had a second GAO analyst review the work of the first analyst to determine if the spreadsheet was coded correctly. The two analysts conferred on any differences in the coding and were able to reach consensus on the proper coding. We also reviewed our recent work on the federal government’s actions to address unacceptable performance to compare actions SEC had taken with actions taken across the federal government. In addition, we met with the Office of General Counsel at SEC to obtain an understanding of the agency’s policies to address unacceptable performance. Hiring and promotions: To determine whether SEC’s hiring and promotion policies have been designed consistent with relevant criteria, we compared these policies to OPM’s HCAAF, the President’s 2011 memorandum on improving federal recruitment and hiring, our prior work on best practices in human capital management, and federal internal control standards. Specifically, we reviewed the following policies and documents related to hiring and promotions: SEC’s hiring program overview, which describes the entire hiring process, including specific responsibilities of staff in the Office of Human Resources and the divisions; a description of the various hiring authorities available to SEC; a description of how SEC establishes initial pay for new hires; SEC policies on promotions for non-bargaining-unit positions; and SEC’s collective bargaining agreement. We worked with staff from our Human Capital Office to assess these policies against OPM’s HCAAF and our prior work, and determined that they were consistent with relevant criteria. For example, we assessed SEC’s processes for hiring and promotions and compared them to OPM’s HCAAF related to hiring. We next determined whether SEC’s hiring and promotion policies were being implemented consistent with the agency’s policies. To do this, we reviewed recruitment case files, which are the documentation that supports a hiring or promotion announcement, to determine if SEC was following its hiring and promotion policies. We randomly selected 102 recruitment case files for review. 
Of these, 3 were duplicates and 18 recruitment case files could not be analyzed because they had already been audited by OPM or SEC. This left us with a final sample of 81 recruitment case files. We express our confidence in the precision of estimates derived from the sample of recruitment case files as 95 percent confidence intervals. This is the interval that would contain the actual population values for 95 percent of the samples that we could have drawn. We then developed a checklist that contained steps SEC should take during various stages of the hiring and promotion process. We provided this checklist to staff from our Human Capital Office for their review and also ensured that our checklist was consistent with the checklists that SEC had developed to assess its hiring and promotion processes. We then assembled a group of GAO analysts to review the case files and complete the checklists. In order to ensure consistent completion of the checklist, each GAO analyst’s first few checklists were independently reviewed by another analyst. After this stage, the two GAO analysts compared the results of the checklists. Any discrepancies were discussed among the two analysts as well as shared with the entire group of GAO analysts. Once the entire group reached consensus on how to address the discrepancies identified during this stage, all 81 case files were reviewed against the checklist. The next step in the process involved an independent second review of the completed 81 case file checklists. All 81 case files were checked by a GAO analyst who was not involved in the initial review. Any discrepancies found during this stage were discussed among the analysts, and a consensus was reached on how to address the discrepancy. Once this process was completed, the information from the checklists was tallied to identify any deficiencies in the hiring and promotion process. To obtain the views of the key SEC staff involved in the hiring and promotion process, we developed a set of structured interview questions and conducted interviews with 18 of the 23 hiring specialists in the Office of Human Resources and 16 hiring managers from the divisions. We provided the set of structured interview questions to staff from our Human Capital Office for review and revised the questions based on their expertise. We attempted to interview all 23 hiring specialists, but 5 did not respond to our request. We chose the 16 hiring managers to interview by obtaining the list of names, titles, e-mail addresses, and phone numbers of all hiring managers from the six mission-critical office and divisions. For divisions that had more than two hiring managers, we randomly selected two managers, except for the Division of Enforcement and the Office of Compliance Inspections and Examinations. For this division and office, we selected a judgmental sample of hiring managers, based on input from SEC on what types of staff or regions would more likely be involved in the hiring process. We used this approach for the Division of Enforcement and the Office of Compliance Inspections and Examinations because they have a large regional presence, and we interviewed 2 managers in headquarters and 2 in the regional offices for each. The responses to the structured interview questions are not representative of the views of all SEC staff involved in the hiring process but provide useful information on the types of views and concerns held by these staff. 
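As a point of reference for the 95 percent confidence intervals described above for estimates derived from the recruitment case file sample, a standard large-sample interval for an estimated proportion can be written as shown below. This is a generic formula offered for illustration only; the report does not specify the exact estimator GAO used, and a finite population correction or an exact method may be more appropriate when the reviewed files make up a large share of the frame.

```latex
% Generic 95 percent confidence interval for a sample proportion \hat{p}
% estimated from n reviewed case files (illustrative; not necessarily GAO's estimator).
\hat{p} \pm 1.96\sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}}

% With a finite population correction when n is a large fraction of the N files in the frame:
\hat{p} \pm 1.96\sqrt{\frac{\hat{p}\,(1-\hat{p})}{n}\cdot\frac{N-n}{N-1}}
```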
We also reviewed OPM’s 2014 review and the National Credit Union Administration’s 2015 review of SEC’s hiring and promotion practices, as well as an outside consultant group review of SEC’s internal promotion actions from fiscal years 2011 through 2014. We assessed the methods used in these reviews and determined that they were reliable for our purposes. Finally, we reviewed SEC’s internal quality reviews of its hiring and promotion practices that were implemented as part of its human capital accountability system. Training: To determine whether SEC’s policies on training for staff who work in the mission-critical office and divisions were designed consistent with relevant criteria, we compared these policies to OPM’s HCAAF and our prior work on assessing strategic training and development efforts in the federal government. Specifically, we reviewed SEC’s training policy—SEC Administrative Regulation 6-28: Training and Development—and the collective bargaining agreement. We also met with our Chief Learning Officer to help us determine how to assess these policies against OPM’s HCAAF and our prior work, and determined that they were designed consistent with relevant criteria. In order to assess the implementation of SEC’s training practices, we, in conjunction with our Chief Learning Officer, determined that the best measure of a training program is the views of the supervisors because they are in the best position to determine if their staff have the necessary skills to accomplish their work. As a result, we asked selected SEC supervisors about the effectiveness of training during structured group interviews of SEC supervisors. We selected these supervisors based on whether they worked in the mission-critical office and divisions, pay grade and occupation, geographic location, and willingness to meet with us. We also analyzed survey results on training collected from our 2016 surveys of mission-critical office and divisions and senior officers and compared them to the results of our 2013 survey. We also compared SEC responses on training from the 2016 OPM FEVS to that of other agencies. Finally, for all of the personnel management practices we assessed, we reviewed responses from our surveys and from our individual and structured group interviews, and we included relevant responses to provide SEC staffs’ perspectives on these practices. We assessed the reliability of all of the data we used during this review and determined they were sufficiently reliable for our purposes, which include describing trends and views on personnel management practices at SEC. To assess the reliability of the FEVS data, we examined descriptive statistics, data distribution, and reviewed missing data. We also reviewed FEVS technical documentation as well as the statistical code OPM uses to generate the index and variance estimates, and we interviewed officials responsible for collecting, processing, and analyzing the data. We used SEC data derived from the Department of the Interior’s Federal Personnel/Payroll System to construct our sample frames for the three surveys, test the implementation of various personnel management practices, and develop summary tables in our appendixes. To determine the reliability of these data, we interviewed SEC staff responsible for these data to determine how data were collected, what controls existed over the data, and any limitations on the data. 
In addition, where possible, we compared data elements to the original source documents to corroborate the accuracy of the data where available. We conducted this performance audit from July 2015 to December 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Office of Personnel Management (OPM) conducts an annual survey of federal employees to obtain their views about their work experiences, agencies, and leaders. The following tables provide information on Securities and Exchange Commission (SEC) employee responses to selected survey questions in fiscal year 2015. Tables 3 through 9 provide responses by employee group: race, ethnicity, gender, and tenure. For the demographic variables we tabulate survey responses by, the rate of missing data ranges from 5 percent to 10 percent of SEC employees. SEC employees with missing data for a particular demographic variable are not included in that tabulation. The Securities and Exchange Commission (SEC), among other things, provides cash and time-off incentive awards to motivate staff and recognize their contributions to the agency. The following tables provide information on all cash awards for all SEC staff for fiscal years 2013 through 2015. Table 10 provides information on individual cash awards broken out by supervisory status. Tables 11 through 13 provide information on individual cash awards broken out by age, gender, and race, respectively. The following tables provide additional information on time-off awards for fiscal years 2013 through 2015. Table 14 provides information on individual time-off awards broken out by supervisory status, including a separate breakout by senior officers. Tables 15 through 17 provide information on individual time-off awards broken out by age, gender, and race, respectively. As part of our review of the Securities and Exchange Commission’s (SEC) workforce planning practices, we reviewed SEC’s practices related to employee training for staff who are primarily responsible for implementing the agency’s mission: the Office of Compliance Inspections and Examinations; and the Divisions of Corporation Finance, Enforcement, Investment Management, Economic and Risk Analysis, and Trading and Markets (hereinafter, mission-critical office and divisions). We determined that SEC’s practices related to training employees have been designed and implemented consistent with relevant criteria. Our prior work notes that one of the core characteristics of a strategic training and development process is strategic alignment. Our prior work also notes that other core characteristics of strategic training and development include communication from agency leadership, involvement of stakeholders, a system of accountability, and effective use of resources. In addition, the Office of Personnel Management’s (OPM) Human Capital Assessment and Accountability Framework notes, among other things, that agency leaders and supervisors should sustain a learning environment that drives continuous improvement, invest in training to help employees build mission-critical competencies, and use a variety of learning methods. 
SEC’s policies on training and development specify that the intent of SEC University and its programs is to support the mission of SEC, its strategic plan, and performance objectives and to enable employees to perform their current functions at the maximum level of proficiency. SEC’s training policies also provide specific responsibilities for senior officers, supervisors, and nonsupervisory staff, including requiring senior officers to provide fair opportunities for training. In addition, SEC supervisors and managers are responsible for supporting fair selection for training, ensuring the training meets the definition of mission-related training, and ensuring the availability of funds for a variety of internal and external training. SEC’s collective bargaining agreement specifies the purpose of training, which is to enable employees to perform their official duties at the maximum level of proficiency. The collective bargaining agreement also specifies the responsibilities of the agency, including determining the training needs, ensuring consideration of employee requests for training that supports the agency’s strategic plan, and supporting attorney and accountant opportunities to obtain mandatory continuing education credits. According to SEC staff in SEC University, they serve as liaisons with stakeholders from across SEC on the design, implementation, and evaluation of a variety of training methods to ensure that training is meeting the needs of the various offices and divisions within the agency. SEC supervisors we met with and staff we surveyed noted that training has improved since 2013. When asked to what extent their employees received training that was applicable and sufficient for them to perform their jobs, the supervisors we interviewed told us that training for staff and supervisors equipped staff with the necessary skills, and had improved in recent years. They also said the training they received as supervisors was applicable and sufficient for them to do their jobs. In addition, staff surveyed in 2016 had more positive views on training than in 2013. The percentage of staff surveyed who agreed that new staff were given enough guidance and training increased for nonsupervisors from about 38 percent in 2013 to 48 percent in 2016 and for supervisors from approximately 66 percent in 2013 to 72 percent in 2016, as shown in figure 11. In addition, the percentage of staff who agreed (to a moderate or great extent) that senior officers work to make improvements in training focused on specific competencies increased for nonsupervisors from about 42 percent in 2013 to 46 percent in 2016, and for supervisors from approximately 60 percent in 2013 to 62 percent in 2016. SEC staff also had more positive views on training than staff in other government agencies. In OPM’s 2016 Federal Employee Viewpoint Survey, an estimated 66 percent of SEC employees surveyed were satisfied with the training they received for their present job, which was higher than that of other federal agencies, with an estimated 53 percent of all respondents satisfied with training. Section 962 of the Dodd-Frank Wall Street Reform and Consumer Protection Act included a provision for us to review whether there is an “excessive number of low-level, mid-level, or senior-level managers” at the Securities and Exchange Commission (SEC). We did not find any standards that have been established for evaluating excessive numbers of supervisors. 
Therefore, we are reporting on the ratio of SEC employees at the various levels for fiscal years 2008 through 2015. Table 18 illustrates the ratio of nonsupervisors to supervisors at SEC. Table 19 illustrates the ratio of nonsupervisors to senior officers, and table 20 illustrates the ratio of supervisors to senior officers. Among its provisions, Section 962 of the Dodd-Frank Wall Street Reform and Consumer Protection Act included a provision for us to review turnover rates within Securities and Exchange Commission (SEC) subunits. While staff turnover rates could be used to identify potential areas for improvement and further develop current supervisors, officials from the Merit Systems Protection Board noted that turnover was not a good indicator of poor supervision for several reasons. For example, they said that staff may leave to pursue opportunities with a different employer or a different career path, or for personal reasons. SEC officials also indicated that staff facing potential removal or termination often would resign or retire, rather than going through removal or termination. Tables 21 and 22 show the percentage of staff who left SEC from fiscal years 2008 through 2015 from headquarters and the 11 regional offices, respectively. Table 23 shows the total number of staff who left SEC during the same period. The Securities and Exchange Commission’s (SEC) current performance appraisal system is designed to rate employees on a numerical scale from 1 to 5. However, due to an agreement with the SEC employees union, bargaining-unit employees are officially rated as either meets expectations (that is, needs improvement, meets expectations, exceeds expectations, or greatly exceeds expectations) or unacceptable. Table 24 shows the distribution of performance ratings for fiscal years 2013 through 2015. The initial rating for bargaining-unit staff is on the five-point scale. The final rating translates that initial rating to either meets expectations or unacceptable, based on the agreement reached between SEC and the union. 1. We address these issues in our responses to comments 3 and 4. 2. According to officials from the SEC union and the Office of Human Resources, the pilot has only been implemented for non- bargaining-unit staff. Moreover, SEC did not provide us with information on its agency-wide pilot of its new performance management system, nor did it provide an implementation plan that identified key milestone dates or schedules to pilot or fully implement the new performance management system to all employees in fiscal year 2017. 3. We recognize that not all staff at SEC may need to coordinate and collaborate for work-related issues. However, staff in mission- critical offices and divisions should be enabled to collaborate and communicate with staff in other offices and divisions. As acknowledged in our report, the Division of Enforcement created formal liaisons that other divisions and offices can contact, and these liaisons help to facilitate cross-divisional communication and collaboration within the division. Based on our survey results, staff in the Division of Enforcement more frequently interacted with staff from other mission-critical offices and divisions. As SEC acknowledged in its response, the Division of Economic and Risk Analysis is similar to the Division of Enforcement in that staff should be routinely communicating and collaborating with them. 
However, unlike the Division of Enforcement, the Division of Economic and Risk Analysis lacks a mechanism to easily facilitate cross-divisional communication and collaboration. Our survey results show that interaction between Division of Economic and Risk Analysis staff and staff from other mission-critical offices and divisions is limited. SEC also expressed concern that our report cited an anecdotal account from one former employee. However, we found substantial evidence that siloed communication remains a challenge at SEC. For instance, 78 of the 187 employees we interviewed (over 40 percent) cited issues around siloed communication as an area where SEC needs to improve. Additionally, of the 1,947 written responses we received to our survey questions, 597 of them cited various challenges related to communication and collaboration. We provided examples from several current and one former employee to illustrate the siloed communication at SEC. 4. SEC expressed concern with our recommendation to expand the responsibilities and authority of the COO or other official or office. We are not suggesting that an additional layer of management is needed. Rather, we are recommending that the authority of the COO or some other official be enhanced in order to ensure that each mission-critical office and division establish a mechanism or develop procedures to facilitate communication and collaboration. In addition to the contact above, Triana McNeil (Assistant Director), José R. Peña (Analyst-in-Charge), Carl Barden, Michelle Batie, Bethany Benitez, Laura Chase, Pamela Davidson, Tom Gilbert, Jill Lacey, Wati Kadzai, Steven Lozano, Marc Molino, Janice Morrison, Alexander Ray, Ginelle Sanchez-Leos, Jerome Sandau, Jennifer Schwartz, Ebonye Watson, and William White made key contributions to this report.
The Dodd-Frank Wall Street Reform and Consumer Protection Act contains a provision for GAO to report triennially on SEC's personnel management. GAO's first report in 2013 (GAO-13-621) identified a number of challenges, such as SEC's lack of a mechanism to monitor supervisors' use of its performance management system, and included seven recommendations. This report examines (1) employee views on SEC's organizational culture since 2013 and (2) SEC's current personnel management practices. GAO surveyed all SEC employees (staff in its six key divisions and offices, staff in all other offices and divisions, and all senior officers, with response rates of 69, 60, and 70 percent, respectively); evaluated SEC policies and procedures against relevant criteria; and analyzed information on SEC's practices. Employee views on the Securities and Exchange Commission's (SEC) organizational culture have generally improved since 2013. Employees GAO surveyed cited improved levels of morale and trust within the agency compared to 2013 and noted that SEC was less hierarchical and risk-averse. However, GAO's survey indicated that SEC still operates in a compartmentalized way and that there is little communication and collaboration between divisions. SEC made limited progress on improving personnel management. SEC has addressed two of seven recommendations from GAO's 2013 report, but it faces added challenges in cross-divisional collaboration and hiring and promotion. Mechanisms to monitor supervisors' use of performance management system. Recently, SEC began to monitor how supervisors (1) provide feedback to staff, (2) recognize and reward staff, and (3) address poor performance. SEC's efforts address the related 2013 recommendation. Accountability system. SEC implemented a system to monitor its human capital programs and inform its human capital goals consistent with Office of Personnel Management (OPM) guidance. SEC's efforts address the related 2013 recommendation. Workforce and succession planning. SEC has developed a workforce and succession plan in response to two of GAO's recommendations, but the plan does not include some elements of OPM guidance, such as a skills gap analysis for all SEC staff. As a result, SEC continues to lack assurance that all staff have the necessary skills. Performance management. Although GAO found in 2013 that SEC's performance management system was generally consistent with relevant criteria, SEC redesigned it in 2014 without first examining its effectiveness—a recommendation GAO made in 2013. SEC officials stated they do not plan any future reviews because they are piloting a new system. As a result, SEC lacks assurance that the new system will perform better than the current one. Communication and collaboration. SEC has made little progress to address GAO's two recommendations related to improving cross-divisional collaboration. While SEC has recognized some staff for collaborating, it has yet to set expectations for all staff to collaborate across divisions as needed or implement relevant best practices to break down existing silos. As a result, SEC staff still report that divisions operate in isolation. Other than the SEC Chair's Office, which has competing demands on its time, no official has authority to affect the daily operations of the entire agency. Other organizations rely on their Chief Operating Officer (COO) to make such changes, but because SEC's COO lacks such authority, the agency will likely continue to face challenges. 
In addition, GAO found that because SEC has not identified skills gaps among its hiring specialists, its training of these staff is limited. As a result, SEC lacks assurance that its hiring specialists have the necessary skills to hire and promote the most qualified applicants, in accordance with key principles of an effective control system. SEC should (1) provide authority to the COO or other official to enhance cross-divisional collaboration and (2) develop and implement training for hiring specialists that is informed by a skills gap analysis. GAO also reiterates the need to address the five prior recommendations on workforce planning, performance management, and intra-agency collaboration that remain unaddressed. SEC agreed with the second recommendation but disagreed with the first one. In particular, SEC disagreed that enhancing the role of the COO would be the optimal means of achieving further improvements. GAO maintains that this recommendation will help improve cross-divisional communication and collaboration, as discussed in the report.
The mission of State's Bureau of Consular Affairs (Consular Affairs) is to protect the lives and interests of U.S. citizens overseas and to strengthen U.S. border security through the vigilant adjudication of U.S. passports and visas. Consular officers abroad have sole legal authority to adjudicate visa applications, and they receive overseas and domestic support to help identify visa fraud. Consular officers at overseas posts issue nonimmigrant visas to temporary visitors and immigrant visas to people who intend to immigrate to the United States. The adjudication processes for both nonimmigrant and immigrant visa applications contain steps to check for fraud. Consular Affairs has more than 11,000 officers, local staff, and contractors working in over 300 locations around the world, including domestic visa centers and passport facilities. Within each consular section at overseas posts, consular officers adjudicate visa applications, serving as fraud detection officers on the first line of defense for border security. Consular officers are charged with facilitating legitimate travel while preventing ineligible aliens, including potential terrorists, from gaining admission to the United States. To help detect and prevent fraud, consular officers work with members of a Fraud Prevention Unit located in the consular section. In large posts, the Fraud Prevention Unit may be led by a Fraud Prevention Manager, and may be augmented at certain high-fraud posts by a Diplomatic Security Assistant Regional Security Officer Investigator. In smaller posts, the Fraud Prevention Manager may be a consular officer who has other responsibilities, depending on the workload volume and prevalence of fraud at the post. Consular officers may also coordinate with a Department of Homeland Security (DHS) Visa Security Officer and an external anti-fraud working group. Fraud Prevention Manager. Under the Bureau of Consular Affairs, fraud prevention efforts at the 222 visa-issuing posts are led by a Fraud Prevention Manager, a Foreign Service Officer assigned by consular management to investigate fraud cases, conduct fraud training, and provide information on fraud trends to the entire consular section. As of April 30, 2012, 81 percent of all visa-issuing posts (180 of 222) had Fraud Prevention Manager positions filled by an entry-level officer or an officer of unspecified grade, and 84 percent of visa-issuing posts (186 of 222) had Fraud Prevention Manager positions designated as part-time or rotational. As of April 30, 2012, 36 posts had full-time mid-level Fraud Prevention Managers serving for 2 years. An additional 40 posts had full-time entry-level Fraud Prevention Managers serving for 2 years in positions originally designated for mid-level officers. Officers assigned to part-time Fraud Prevention Manager positions have other consular-related duties in addition to preventing fraud. An officer filling the position on a rotational basis serves as the Fraud Prevention Manager for a designated period of time, typically 6 months, before moving on to other duties. State officials told us that most Fraud Prevention Manager positions are part-time or rotational in order to give consular managers more flexibility in how they use consular staff and to give officers more opportunities to work on different activities. Fraud Prevention Unit.
At 94 percent of visa-issuing posts (208 of 222), Fraud Prevention Managers have locally employed staff to assist them in fraud investigations, forming a Fraud Prevention Unit. Out of the 3,700 locally employed staff working at consular posts, 417 are assigned to Fraud Prevention Units. These staff generally have special expertise in host country culture and language, as well as a network of local contacts to help develop leads on possible fraud. The Fraud Prevention Unit collects and verifies data for use in identifying fraud trends, analyzes individual fraud cases, and drafts and disseminates fraud reports. Tools utilized in individual fraud investigations vary from post to post, but may include physical document examination, visa record searches, facial-recognition review, phone calls to verify data, Internet searches, and site visits. Once all of the data have been collected, verified, and assessed, the Fraud Prevention Manager reviews the results and provides a final fraud finding to consular officers, who use the information to determine whether to issue a visa to the applicant. If the Fraud Prevention Manager determines that the visa fraud may involve criminal activity, the case may be referred to Diplomatic Security agents at post for further investigation. Assistant Regional Security Officer Investigator. Under the Bureau of Diplomatic Security, 84 Assistant Regional Security Officer Investigators (ARSO-Is) are assigned to 75 high-fraud posts to protect the integrity of the visa system and disrupt criminal networks and terrorist mobility. ARSO-Is are Diplomatic Security Special Agents who specialize in criminal investigations of visa fraud. Diplomatic Security recommends that ARSO-Is spend 80 percent of their time working on visa fraud, and 20 percent of their time supporting other Diplomatic Security responsibilities, such as providing security to high-level visitors at post. ARSO-Is often work with local law enforcement and judicial officials to arrest and prosecute violators of local laws related to visa fraud, such as the fraudulent production of local identification documents used in applications for visas. Some investigations are connected to large-scale alien smuggling or human trafficking cases. DHS's U.S. Immigration and Customs Enforcement (ICE) Visa Security Program. ICE deploys Visa Security Officers to assist the consular section at designated high-risk posts by providing advice and training to consular officers regarding specific security threats, reviewing visa applications, and conducting investigations with respect to consular matters under the jurisdiction of the Secretary of Homeland Security. External Anti-Fraud Working Group. At some posts, members of the Fraud Prevention Unit may coordinate with officials from other countries' embassies and consulates to share fraud trends in an anti-fraud working group. Domestically, both State and DHS play a role in fraud prevention and detection. While the Secretary of State has the lead role with respect to foreign policy-related visa issues, DHS is responsible for reviewing implementation of the policy and providing additional direction. State's Bureau of Consular Affairs Visa Office has direct responsibility for visa policy and oversight for the operations of the Kentucky Consular Center (KCC) and the National Visa Center in New Hampshire. These two centers prescreen visa applications for fraud and provide other support for visa adjudication worldwide.
State’s Bureau of Consular Affairs Office of Fraud Prevention Programs advises posts on visa and passport fraud questions, develops training material to manage fraud prevention programs, produces publications on fraud issues and trends, and coordinates with other U.S. agencies involved in preventing visa fraud. State’s Diplomatic Security Office of Overseas Criminal Investigations Branch provides managerial oversight, guidance, and support to ARSO-Is at overseas posts. Diplomatic Security domestic field offices support overseas investigations by investigating visa fraud that is connected to criminal networks within the United States. DHS’s U.S. Citizenship and Immigration Service (USCIS), Fraud Detection National Security Directorate is responsible for detecting, pursuing, and deterring immigration benefit fraud, and identifying persons seeking benefits who pose a threat to national security and public safety. In addition, Fraud Detection National Security Directorate staff conduct site visits and administrative inquiries within the United States on persons or entities suspected of immigration fraud and follow up with ICE investigators, law enforcement, and intelligence agencies on potential national security risks identified during background checks on immigration benefit applications. DHS’s ICE Document and Benefit Fraud Task Forces work with federal, state, and local partners to detect, deter, investigate, and present instances of benefit fraud for criminal prosecution. DHS’s Customs and Border Protection agents serve as the last line of defense in protecting American borders. Customs and Border Protection agents verify that visitors have proper travel documents and valid visas, and have the discretion to not admit travelers with valid visas into the United States if the agent suspects the traveler intends to violate the terms of his or her visa. After the September 11, 2001 terrorist attacks, the number of visas issued initially declined, but has generally increased steadily since 2003, and State anticipates demand for visas to continue to rise. As seen in figure 1, in 2001, the United States issued almost 8 million nonimmigrant and immigrant visas, based on data from the Consular Consolidated Database. From 2001 to 2003, visa issuances declined by 34 percent. However, since then, the number of immigrant and nonimmigrant visas issued has generally trended upward. In 2011, consular officers issued more than 7.5 million nonimmigrant visas, up 54 percent from 2003 levels. Approximately 75 percent of the 7.5 million nonimmigrant visas issued in 2011 were processed for temporary visits to the United States More than half (52 for business or personal reasons, such as tourism.percent) of the visas for temporary visits were issued to visitors from Brazil, China, India, and Mexico. According to the Deputy Assistant Secretary for Visa Services, State continues to see increases in visa demand for individuals residing in some of the world’s fastest-growing economies. While visa issuances have generally increased since 2003, visa refusals have fluctuated since 2006. In fiscal year 2011, more than 2.1 million nonimmigrant visa applicants worldwide were denied visas for entry into the United States. As seen in table 1, adjusted refusal rates for tourist visas in Brazil, China, India, and Mexico fluctuated between fiscal years 2006 and 2011. While refusals of visitors from Brazil, China, and Mexico have generally decreased in the last 6 years, refusals of visitors from India have increased. 
Visas may be refused for a number of reasons other than a suspicion of fraud, such as insufficient documentation or suspected immigration intent. When a consular officer suspects that the applicant's travel or financial documents are counterfeit, the consular officer may deny the applicant's request for a visa or may refer the case to the Fraud Prevention Unit for an additional fraud assessment. The U.S. travel and tourism industry benefits from foreign visitors, and the U.S. government is working to accommodate an increase in demand for tourist travel. For example, State reported in 2010 that international tourists contributed $134 billion to the U.S. economy and supported over 1.1 million jobs. The Administration has encouraged State to increase visa processing capacity and reduce wait times for receiving a visa. In January 2012, President Obama issued an Executive Order requesting that the Secretaries of State and Homeland Security, in consultation with the Assistant to the President for Homeland Security and Counterterrorism and the Director of the Office of Management and Budget, develop an implementation plan to (1) increase nonimmigrant visa processing capacity in China and Brazil by 40 percent over the coming year and (2) ensure that 80 percent of nonimmigrant visa applicants are interviewed within 3 weeks of receipt of visa applications. Almost all nonimmigrant visa applicants submit an online visa application, the DS-160, through State's web-based portal, the Consular Electronic Application Center, and then schedule a visa interview at a local U.S. embassy or consulate. Consular officers interview visa applicants, review the application and supporting documents, such as birth certificates, and make an initial decision to issue or deny the visa application. A consular officer may temporarily deny the visa in order to scrutinize the application for suspected fraud or to process it further administratively (see fig. 2). Obtaining an immigrant visa is one part of a four-part process for aliens outside of the United States to become a permanent resident of the United States. First, an eligible U.S. citizen or lawful permanent resident, called a petitioner, must file a petition (a paper form) with USCIS on behalf of the alien applying for lawful permanent resident status, who is called the beneficiary. Generally, the petitioner can be either a relative or employer, although there are visa categories in which the applicant can self-petition, such as the diversity visa. USCIS has the sole authority to approve or deny the petition. Second, once a petition is approved and a visa number is available in the appropriate category, the beneficiary prepares for a visa interview by gathering required documentation and undergoing a medical exam. Third, the beneficiary, now called the applicant, submits an online visa application, either the DS-260 or the DS-230, along with evidence supporting the applicant's eligibility, such as a birth certificate or diploma. All aliens outside the United States apply for an immigrant visa at the U.S. consulate in their current country of residence. As in the nonimmigrant visa process, the alien schedules a visa interview, submits fingerprints, and pays a visa application processing fee.
During the interview, a consular officer reviews submitted documentation as well as biometric and other security and fraud checks, determines if the alien is subject to any ineligibilities, and confirms that the applicant has the required legal relationship with the petitioner. The consular officer then either approves or denies the visa application. If the visa application is approved, a visa is printed and placed in the applicant's passport. Fourth, the applicant travels to the United States with his or her immigrant visa and packet of supporting documentation. Upon admission by a Customs and Border Protection officer at the port of entry, the alien becomes a lawful permanent resident. Certain countries, such as Brazil, China, the Dominican Republic, India, and Mexico, had high numbers of suspected fraud cases in fiscal year 2010, and certain visa categories, such as work visas, student visas, and diversity visas, had high levels of fraud. Visa fraud has become more sophisticated over time with increased globalization, advanced technology, and ease of travel. State requires Fraud Prevention Managers to classify fraud levels at each post. Fraud Prevention Managers are required to submit a fraud assessment twice a year as part of the post's semi-annual fraud summary. Fraud assessments rank a post's fraud conditions as high, medium, or low, based on the ratio of visa applications referred to the Fraud Prevention Unit to the total number of visa applications. The fraud assessment also addresses the prevalence of corruption in the local environment, the reliability of country documents, and cooperation with local law enforcement. Additionally, ARSO-Is provide input into fraud assessments regarding the nature of criminal activity involving visas. According to State, a country with high numbers of suspected fraud cases may not necessarily be designated as a high-fraud country if its proportion of suspected fraud to visa applications is low. Recently, State has taken steps to improve its ability to compare fraud levels across posts. In the past, according to State officials, self-reported fraud levels had not been used to assess posts' fraud conditions relative to other posts because posts had different methods for referring cases to Fraud Prevention Units. Referrals to Fraud Prevention Units are considered an accurate portrayal of the volume of fraud cases handled at individual posts because Fraud Prevention Managers must make a fraud assessment for all cases that are referred. In July 2012, State distributed new guidance that clarifies when consular officers should refer visa applications to Fraud Prevention Units. For example, the new guidance instructs consular officers to refer applications to Fraud Prevention Units whenever the unit is expected to expend resources to verify some aspect of an applicant's case or when consular officers cannot perform a needed task, such as verifying the employment of an applicant. The volume of visas processed and the number of fraudulent applications vary from country to country. In general, fraudulent activity is found in a very small percentage of overall visas granted. Based on State's Consular Consolidated Database, 6.9 million nonimmigrant and immigrant visas were issued worldwide in 2010. That same year, approximately 74,000 visa applications were referred to Fraud Prevention Units for additional scrutiny. Of these, about 16,000, or 22 percent, were confirmed as fraudulent in fiscal year 2010.
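To make the arithmetic behind these figures concrete, the sketch below recomputes the worldwide referral and confirmation rates cited above and shows how a post's fraud conditions might be ranked by its referral ratio. The high/medium/low cutoffs and the example post's numbers are hypothetical; the text does not specify the thresholds State uses.

```python
# Illustrative sketch only: recomputes the worldwide 2010 figures cited above
# and ranks a hypothetical post by its referral ratio. The cutoffs are invented
# for the example; State does not publish the thresholds it uses.
visas_issued_2010 = 6_900_000   # worldwide nonimmigrant and immigrant issuances
referred_to_fpu = 74_000        # applications referred to Fraud Prevention Units
confirmed_fraud = 16_000        # referrals confirmed as fraudulent

print(f"Referrals per visa issued:    {referred_to_fpu / visas_issued_2010:.2%}")  # ~1.1%
print(f"Share of referrals confirmed: {confirmed_fraud / referred_to_fpu:.2%}")    # ~21.6%

def rank_fraud_conditions(referrals, applications, high_cutoff=0.02, medium_cutoff=0.005):
    """Rank a post's fraud conditions by referral ratio (hypothetical cutoffs)."""
    ratio = referrals / applications
    if ratio >= high_cutoff:
        return "high", ratio
    if ratio >= medium_cutoff:
        return "medium", ratio
    return "low", ratio

# Hypothetical post: 5,200 referrals out of 600,000 visa applications.
level, ratio = rank_fraud_conditions(5_200, 600_000)
print(f"Post referral ratio {ratio:.2%} -> {level} fraud conditions")
```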
Some countries may experience high visa demand but low numbers of suspected fraud cases, while other countries may experience high visa demand and high numbers of suspected fraud cases. For example, in 2010, consular officers throughout Brazil issued approximately 556,000 visas and referred about 3,000 visa applications to their Fraud Prevention Units, of which 750 (or 24 percent of visa applications suspected of fraud) were confirmed as fraudulent. Meanwhile, consular officers throughout India issued about 528,000 visas in 2010, referred about 5,200 visa applications to their Fraud Prevention Units, and confirmed about 2,600 (or 50 percent of visa applications suspected of fraud) as fraudulent. See figure 3 for the top 10 posts for referrals to Fraud Prevention Units among total nonimmigrant and immigrant visa applications in 2010. Almost 60 percent of confirmed fraud cases (9,200 out of 16,000) were referred to Fraud Prevention Units in Brazil, China, Dominican Republic, India, and Mexico in fiscal year 2010. State’s Office of Fraud Prevention Programs reports that a majority of visa fraud involves nonimmigrant visa applicants who submit false documents or make false statements to obtain a tourist or business visitor visa. According to State officials, some visitor visa applicants provide fraudulent statements or documents, such as a false bank statement, to demonstrate strong ties with their home country and therefore overcome the presumption that they intend to use their temporary visitor visa to illegally immigrate to the United States. Denied visitor visa applications are not usually referred to the Fraud Prevention Unit unless the officer suspects the case could be linked to organized crime. Other kinds of fraud can be found in temporary worker visas, student visas, exchange visitor visas, immigrant visas, and diversity visas. Temporary Worker Visas: While State officials said that most work visas facilitate legitimate travel, fraud has been found among petitioners and applicants for both skilled worker and temporary agricultural worker visas. Some petitioners in the United States create phony companies and petition for workers to join them in the United States, usually with the applicants’ knowledge and participation in the fraudulent activity, according to State officials. Other examples of fraud include cases in which educational degrees were found to be fraudulent, signatures were forged on supporting documents, and workers performed duties or received payments significantly different from those described in the applications. A recent DHS study reported that 21 percent of skilled worker petitions they examined involved fraud or technical violations. In 2005, DHS began collecting an additional $500 fee on certain work visas to be used for fraud prevention and detection purposes. Student Visas: Foreign students interested in studying in the United States must first be admitted to a school or university before applying for a visa at a U.S. embassy or consulate overseas. The process for determining who will be issued or refused a visa contains several steps, including documentation reviews, in-person interviews, collection of applicants’ fingerprints, and cross-references against multiple databases of information. In 2011, State issued over 486,000 student visas, of which 71 percent were issued to students from Asia. According to the Fraud Prevention Coordinator for India, fraud among student visa applications is common throughout India. 
For example, some exam centers that offer the Test of English as a Foreign Language (TOEFL) are suspected of being complacent when students cheat on the exam in order to achieve high scores. If a student applicant cannot answer a consular officer's questions in English, yet scored 105 out of 120 on the TOEFL, fraud may be present, according to a consular officer in New Delhi. Exchange Visitor Visas: Summer Work and Travel (SWT) visas are a subset of the Exchange Visitor Program, and are susceptible to fraud. The Exchange Visitor Program fosters global understanding through educational and cultural exchanges. All exchange visitors are expected to return to their home country upon completion of their program in order to share their experiences. In 2011, 35 percent (108,717) of the total exchange visitors (306,429) were granted entry into the United States for the purpose of "Summer Work and Travel." According to State officials, many SWT visa applicants misrepresent their status as students or their intentions for using the SWT visa. Additionally, many U.S. sponsors falsely represent their businesses and how they intend to employ SWT applicants. U.S. sponsors have been found to exploit SWT visa holders for financial gain. On May 10, 2012, State's Bureau of Educational and Cultural Affairs issued rules that are expected to protect the health, safety, and welfare of SWT program participants. The rules provide a cap on the number of annual SWT program participants that may be granted visas. Immigrant Visas: In 2010, State issued 482,052 immigrant visas. That same year, 21,013 immigrant visa applications were referred to Fraud Prevention Units and 4,984 (or about 24 percent) were confirmed as fraudulent. Immigrant visa fraud can take many forms. At several posts we visited, State officials said a common problem involved applicants who pay American citizens to marry them or who falsely represent their intentions to citizens and deceive them into marriage in order to obtain lawful permanent residence in the United States. In a typical immigrant visa fraud case, an individual divorces his or her spouse in a foreign country, marries an American citizen, and, after living in the United States for a certain period of time, obtains U.S. citizenship or legal permanent resident status, divorces the U.S. spouse, and remarries the original spouse so that they can reunite in the United States. Diversity Visas: The Diversity Visa Program was established through the Immigration Act of 1990 and provides up to 55,000 immigrant visas annually to aliens from countries with low rates of immigration to the United States. Aliens register for the diversity visa lottery online for free, and applicants are randomly selected for interviews through a lottery process. Upon being selected, a winner must apply for a visa, be interviewed, and be found eligible for the diversity visa. All countries are eligible for the Diversity Visa Program except those from which more than 50,000 immigrants have come to the United States over the preceding 5 years. In 2011, approximately 16.5 million people applied for the program and about 107,000 (roughly 0.6 percent) were selected for further processing. Of those selected, 75,000 were interviewed at posts for a diversity visa, and approximately 50,000 received visas. Because the program does not require a U.S.-based petitioner, it is particularly susceptible to fraud.
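The selection figures above trace a simple funnel from lottery entries to issued visas. The sketch below merely restates that arithmetic using the approximate 2011 counts from the text; the stage-to-stage rates are computed here and are not figures reported by State.

```python
# Diversity visa "funnel" implied by the approximate 2011 figures cited above.
stages = [
    ("lottery entries", 16_500_000),
    ("selected for further processing", 107_000),
    ("interviewed at posts", 75_000),
    ("visas issued", 50_000),
]

for (name, count), (_, prev) in zip(stages[1:], stages[:-1]):
    print(f"{name}: {count:,} ({count / prev:.2%} of the previous stage)")
# selected for further processing: 107,000 (0.65% of the previous stage)
# interviewed at posts: 75,000 (70.09% of the previous stage)
# visas issued: 50,000 (66.67% of the previous stage)
```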
Diversity visa fraud is rampant in parts of South Asia, Africa, and Eastern Europe, and is particularly acute in areas where few individuals have independent access to the Internet. A typical scenario includes visa facilitators, travel agents, or Internet café operators who help would-be applicants submit an entry for a fee. Many of these facilitators withhold the confirmation information that the entrant must use to retrieve his or her selection status. To access the lottery notification, the facilitators may require winning applicants to either pay an additional exorbitant fee or agree to enter into a marriage with another of the facilitator's paying clients solely for the purpose of extending immigration benefits. Visa fraud has evolved and become more sophisticated over time due to unscrupulous visa applicants who adapt to State's efforts to combat fraud, increased globalization, advanced technology, and ease of travel. Fraud schemes are no longer centralized in individual countries. Criminal fraud rings, human smuggling networks, and trafficking rings work across multiple countries to circumvent State's visa process. For example, in 2009, a typical route for traffickers from India who sought religious asylum in the United States originated in New Delhi and transited through Moscow, Dubai, Sao Paulo, and Mexico before reaching Texas, according to the Assistant Regional Security Officer Investigator in New Delhi. In addition, new technologies have helped individuals and organizations adapt to State's visa security features and develop increasingly sophisticated fraud schemes. For example, high-quality micro-printing and new assembly methods have allowed imposters to duplicate State's visa security threads and serial numbers. With global access to the Internet, fraud scams used in one country or continent have quickly made their way to others, and therefore high-fraud countries and posts have shifted from year to year. For example, only 4 countries were among the top 10 countries for visa fraud in both 2005 and 2010: the Dominican Republic, Ghana, Jamaica, and Peru. Consular officers rely on State's advanced information technology, fraud reports, and domestic and overseas fraud prevention resources to improve their ability to detect and deter fraud. However, State does not have a policy to systematically utilize its domestic anti-fraud resources to offset fraud workload overseas. The Consular Affairs Office of Consular Systems and Technology has deployed several new tools to counter fraud in the visa process, including the following: Online Nonimmigrant Visa Application Form, DS-160: In the spring of 2010, State implemented the DS-160 online nonimmigrant visa application system, which requires applicants to submit all information electronically. With the collection of electronic information prior to the scheduled visa interview, State is able to research and analyze applicants' data for indicators of fraud prior to an interview with a consular officer. Overseas, State encourages Fraud Prevention Units to conduct pre-screening checks of applicants' visa history to identify aliases or discrepancies between current and previous applications. Enterprise Case Assessment Service (eCAS): In April 2011, State released eCAS, the first centralized system for Fraud Prevention Units to track and manage their nonimmigrant and immigrant visa fraud cases. eCAS is based in the Consular Consolidated Database, and Fraud Prevention Units use it to create, develop, and resolve fraud assessments.
Previously, fraud cases were either designated as “fraud confirmed” or “fraud not confirmed.” With eCAS, fraud cases are now designated as “fraud confirmed,” “no fraud,” or “inconclusive,” which allows Fraud Prevention units more flexibility to designate that some cases have a suspicion of fraud but not enough evidence to confirm fraud. In the first 5 months of the system’s use, May 2011 through September 2011, 188 posts worldwide used eCAS to process over 43,000 fraud cases. According to State, in 2012 State released an eCAS module domestically that can also be used to process passport fraud cases, and State plans to extend this module overseas in 2013. MATRIX: In 2011, State released a new fraud prevention tool known as MATRIX that is accessible to Fraud Prevention Units and Diplomatic Security agents through State’s Consular Consolidated Database. MATRIX is a search tool that makes associations between information on a visa application and other records and data sources. MATRIX links information in the Consular Consolidated Database to other State records, USCIS records, and INTERPOL data. Fraud Prevention Managers and ARSO-Is can use MATRIX to link information contained on previous visa applications and to reveal similarities across multiple applications as an indicator of fraud. For example, according to Consular Affairs officials, MATRIX found that one applicant’s contact phone number in the United States matched the phone numbers used by 17 other applicants, a possible indication of fraud. Diversity Visa Entry Status Check: In 2010, State began an online verification system called the Entry Status Check that allowed all entrants of the 2010 Diversity Visa Program to electronically, individually, and privately check the status of their online submissions through a State website. This system eliminated the need for direct mailing of Diversity Visa correspondence and enhanced State’s ability to combat fraud. Prior to the electronic system, notification letters were physically mailed to the address listed on the application. Unscrupulous visa agents listed their own addresses so that the notification letters were delivered to them instead of the people selected in the lottery. The agent could demand thousands of dollars from an applicant in exchange for the letter. Consular Consolidated Database Search Rules to Identify Fraud Indicators: State is currently developing a new anti-fraud tool that will automatically search visa applications for fraud indicators and alert consular officials when fraud indicators are found. For example, State may find higher rates of fraud among visa applicants who rely on services provided by a particular local visa company. Consular officials in that country can request that State flag future visa applications listing that visa company’s name. State shares information between consular posts and headquarters regarding the latest fraud trends through reporting mechanisms such as validation studies, semi-annual fraud summaries, fraud digests, fraud notices, or reporting cables, Diplomatic Security monthly status reports, and Diplomatic Security program reviews. Validation Studies: State considers validation studies to be one of the best fraud-prevention tools available to consular officers. Posts conduct validation studies on visas that have been issued to determine the extent to which the visas have been misused, and posts send summaries of fraud risks to headquarters twice a year. 
Posts are required to conduct at least two validation studies per year, one on a visa category of the post's choosing and one on visa referrals. Generally, consular officers select a sample of visa issuances and determine how many of the visa recipients departed the United States within the terms of their visas, how many remained in the United States longer than their visas allowed, and how many never traveled to the United States. These studies help posts gauge the accuracy of adjudication decisions, and allow Consular Affairs officials to share emerging fraud trends across posts. Semi-Annual Fraud Summaries: State guidance calls for validation studies to be incorporated into posts' Semi-Annual Fraud Summaries, reports submitted twice annually that provide input for improvements in fraud prevention guidance, training, and resources. State guidance notes that the summaries should discuss current country conditions that may contribute to fraud risks, such as the presence of organized crime networks. According to the guidance, the summaries should discuss fraud trends for nonimmigrant visas, immigrant visas, diversity visas, passports, and coordination with Diplomatic Security personnel, among other topics. These studies should discuss new information that may be used to establish new fraud indicators. Fraud Digests: Since September 1996, State has published a monthly newsletter called the Fraud Digest that profiles worldwide fraud trends, fraud prevention techniques, and advances in areas such as fraud prevention technology and immigration document design. The digests are accessible on the web and are shared government-wide with approximately 3,600 subscribers, as of April 2012. The main audience for the digest is domestic and overseas consular personnel and Diplomatic Security agents. Reporting Cables: State headquarters gathers and analyzes information from posts, and distributes guidance to posts through monthly reporting cables in order to update consular officers on evolving fraud trends. Diplomatic Security Monthly Status Reports: According to Diplomatic Security officials, ARSO-Is worldwide submit monthly status reports that delineate the number of hours spent on criminal investigations and training of foreign personnel. The status report also describes progress on the post's visa cases, including preliminary queries for information and arrests. The information supplements data entered into the Diplomatic Security primary case management system known as the Investigative Management System, according to Diplomatic Security officials. Diplomatic Security Program Reviews: Diplomatic Security officials told us that Diplomatic Security program reviews are internal reports that highlight best practices at posts and make recommendations for improvements. To complete the program reviews, officials from Diplomatic Security's Office of Criminal Investigations told us that they spend 2 days at each post answering standardized questions about training, pending cases, arrests, budget, and information systems, among other topics. Diplomatic Security aims to visit all posts with an ARSO-I presence once every 2 years, according to Diplomatic Security officials. KCC, located in Williamsburg, Kentucky, has become an important anti-fraud resource for State. State opened KCC in October 2000 to process worldwide diversity visa applications and reduce the workload on adjudicating officers at overseas posts.
According to KCC officials, the number of local employees at KCC has increased from 40 to 273, including 54 staff working within a Fraud Prevention Unit. In August 2001, KCC began a pilot project to screen all nonimmigrant visa applications with facial recognition software. According to KCC officials, after the September 11, 2001 attack, State was required to store visa applications for 7 years. As a result, KCC officials began scanning old visa applications and uploading all biographic information and evidence of visa ineligibilities. All visa applicants’ biographic information, including both fingerprints and digitized photographs, is checked through State’s Consular Lookout and Support System database and facial recognition software. State describes KCC as an incubator for new consular projects, and KCC is in the process of expanding anti-fraud services to posts overseas, according to KCC officials. Currently, KCC provides prescreening services for selected posts overseas. Any post may request KCC assistance in conducting research and analysis on visa applications, either on an ad-hoc basis for individual cases, or on a pilot basis for larger-scale projects. For example, since over 50 percent of all skilled worker (H-1B) and intracompany transfers (L) visas are processed in India, KCC initiated a process to verify all petitioner information contained on these types of visa applications from posts in India, according to State officials. Globally, KCC screened 81,862 H-1B and L-1 applications for fraud in calendar year 2011. In addition to H and L visas, KCC conducts prescreening on several other visa classifications that are susceptible to fraud. According to State officials, KCC screeners and fraud analysts conduct basic checks, such as verifying the legal name of the business as well as more complex research including data mining, evaluation of the petitioning organization’s business viability, and phone calls to petitioning employers. Additionally, KCC fraud analysts may also refer the case to onsite Fraud Detection and National Security officers to request a visit to the proposed employment site. If derogatory information, such as a revocation of a prior petition, exists on a petitioning company, screeners enter all comments into the applicant’s online DS-160 form, for access by the consular officer, who makes the ultimate decision to issue or deny the visa. In fiscal year 2012, State intends to prescreen 15 percent of all worldwide nonimmigrant and immigrant visa applications prior to the visa interview, increasing to 50 percent by fiscal year 2013. To prescreen visa applications, KCC reviews and processes all sets of documents and data received from petitioners and beneficiaries. KCC employees conduct research on visa applicants and petitioners, and provide this information to consular officers overseas so that they have access to the information prior to interviewing a visa applicant in person. For example, we observed a KCC analyst conducting research on a summer work and travel visa application that listed the sponsor business as a restaurant. However, the KCC analyst determined that the physical address listed on the visa application was an adult entertainment venue, a business prohibited by the program. The KCC analyst notified the interviewing post of this finding so that the adjudicating officer would have knowledge of it before the interview. 
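Both the MATRIX example described earlier, in which a single U.S. contact phone number matched across 18 applications, and the derogatory notes that KCC screeners attach to an application rest on the same basic operation: linking a field on one application to the same value on other applications, or to information recorded previously, and surfacing the match to the adjudicating officer. The sketch below illustrates that operation in a deliberately simplified form; the records, field names, and threshold are hypothetical, and it does not represent the actual design of MATRIX, eCAS, or KCC's screening tools.

```python
# Simplified illustration of cross-application matching. All records, field
# names, and the threshold are hypothetical; this is not the design of
# MATRIX, eCAS, or KCC's screening tools.
from collections import defaultdict

applications = [
    {"case_id": "A-001", "us_contact_phone": "555-0101", "sponsor": "Acme Summer Jobs"},
    {"case_id": "A-002", "us_contact_phone": "555-0101", "sponsor": "Lakeside Diner"},
    {"case_id": "A-003", "us_contact_phone": "555-0101", "sponsor": "Lakeside Diner"},
    {"case_id": "A-004", "us_contact_phone": "555-0199", "sponsor": "Acme Summer Jobs"},
]

# Sponsors for which a screener has already recorded derogatory information.
derogatory_sponsors = {"Acme Summer Jobs": "listed address resolves to a prohibited business"}

def flag_shared_values(records, field, threshold=2):
    """Group applications by a shared field value; flag groups at or above the threshold."""
    groups = defaultdict(list)
    for record in records:
        groups[record[field]].append(record["case_id"])
    return {value: cases for value, cases in groups.items() if len(cases) >= threshold}

def flag_derogatory(records, notes):
    """Attach any previously recorded sponsor note to the matching cases."""
    return {r["case_id"]: notes[r["sponsor"]] for r in records if r["sponsor"] in notes}

print(flag_shared_values(applications, "us_contact_phone"))  # {'555-0101': ['A-001', 'A-002', 'A-003']}
print(flag_derogatory(applications, derogatory_sponsors))    # flags cases A-001 and A-004 with the note
```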
KCC now prescreens the vast majority of certain visa categories that have been associated with high rates of fraud, such as summer work and travel visas. From March 2011 to April 2012, KCC analysts researched over 9,000 companies participating in the Summer Work and Travel Program and found that 13 percent of them had fraud indicators. For example, some companies did not exist, the company's phone number was invalid, or the company reported that it never expected a summer work and travel participant. Anti-fraud staffing levels in Fraud Prevention Units vary widely across overseas posts, causing disproportionate workloads. State assigns personnel to Fraud Prevention Units based on input from post management and Consular Affairs management at headquarters. Personnel from State's Office of Fraud Prevention Programs said resource decisions for Fraud Prevention Units are driven by visa workload and other factors at posts, not by the number of fraud cases. While statistics on the number of fraud cases confirmed, unconfirmed, or inconclusive are used by posts to direct anti-fraud strategies, Consular Affairs does not use these statistics to determine the appropriate distribution of personnel to Fraud Prevention Units. The posts with the highest numbers of suspected fraud cases in 2011 were not assigned a number of Fraud Prevention Unit staff proportionate to the number of fraud cases, as seen in table 2. For example, one entry-level officer and one mid-level officer in Santo Domingo, who were assigned to Fraud Prevention Manager positions, joined five locally employed staff in the Embassy's Fraud Prevention Unit to combat the entire country's visa fraud. With approximately 7,879 cases suspected of fraud in 2011, each member of Santo Domingo's Fraud Prevention Unit investigated an average of 1,126 cases and each member of Guangzhou's Fraud Prevention Unit investigated an average of 239 cases that year. According to State officials, while State plans to expand the use of KCC anti-fraud resources, there is no systematic process for overseas posts to formally request KCC prescreening assistance. State's Program Evaluation Policy notes that program evaluation is essential for planning decisions, and evaluation findings should be integrated into program strategies and policies. However, State officials told us that anti-fraud pilot programs conducted at KCC are not formally evaluated and there is no established policy for posts to access domestic anti-fraud resources. Rather, KCC provides anti-fraud assistance to overseas posts on an ad hoc basis based on informal communication. According to KCC's Director, most posts have not requested KCC's assistance because they are not familiar with all of the anti-fraud services that KCC can provide or how to request services. For example, a Fraud Prevention Manager in a high-fraud post that we visited told us that the post would like additional KCC prescreening of certain visa categories, but was unaware of how to request KCC assistance. Multiple State officials told us that most KCC prescreening initiatives have been due to institutional knowledge at the management level in the field. For example, all India-specific services provided by KCC were a direct result of a Consular Manager in India who was aware of the prescreening services KCC could provide. According to the KCC Director, there are clear benefits to utilizing KCC for fraud investigations. KCC staff are fully vetted U.S.
citizens with secret clearances and access to all restricted databases used in visa adjudications. In addition, both Diplomatic Security and USCIS are represented at KCC, and are available to assist in fraud investigations. The majority of KCC staff are provided through a contractor, and the contract allows staffing to be adjusted as demand for services changes. Although State offers anti-fraud courses in a classroom setting and online, State does not require Fraud Prevention Managers to take them. In addition, State does not track Fraud Prevention Manager enrollment in anti-fraud courses, and therefore State does not know whether the large number of entry-level officers filling Fraud Prevention Manager positions have taken the anti-fraud courses. The Foreign Service Institute has expanded the number of courses it offers Foreign Service Officers in fraud prevention and detection, covering topics such as advanced name checking, analytic interviewing, and emotional content analysis. The institute's anti-fraud training courses include the following: Basic Consular Course (PC530): Commonly known as ConGen, PC530 is a 6-week course that all Foreign Service officers are required to take prior to their first consular tour. PC530 is also required for any officer heading to a consular tour who has neither done consular work nor taken ConGen in the preceding 5 years. The course contains a module covering security, accountability, fraud, and ethics, which includes training in detecting and preventing fraud. Fraud Prevention for Consular Managers (PC541): This course is designed for Fraud Prevention Managers who are currently serving in the field, emphasizing anti-fraud and counterterrorism tools for consular officers. State also offers consular officers distance learning or online courses in detecting and preventing fraud. These courses are either prescheduled live courses, prerecorded and accessible 24 hours a day, or offered on demand. Online consular training in fraud includes the following: Detecting Imposters (PC128): This course teaches students procedures for identifying imposters either at the interview window or in photographs. Detecting Fraudulent Documents (PC544): This course teaches consular officers how to determine whether a document has been altered or is counterfeit. New Consular Technologies (eCAS and MATRIX): This course trains consular officers in how to use eCAS and MATRIX to combat visa fraud. State officials from the Office of Consular Systems and Technology said that the office's training programs are updated as soon as new features are rolled out, and the training typically focuses on new technology features. Consular officers are provided with manuals that explain the new software tools about a month in advance and can attend live and prerecorded training courses. While some consular courses can take about 2 hours, training in MATRIX is a prerecorded session that takes approximately 30 minutes. Although training on new technological tools is available and encouraged by State, a survey of Fraud Prevention Managers revealed a lack of knowledge of some key anti-fraud tools, indicating that updates were not uniformly reaching officers. While State encourages Fraud Prevention Managers to take updated fraud prevention training, the training is not required.
Entry-level officers are required to complete 6 weeks of basic consular training prior to their arrival at post but are not required to take other advanced anti-fraud courses offered at the Foreign Service Institute or online, such as eCAS or MATRIX. For example, four of the five Fraud Prevention Managers we met with had not been trained in MATRIX. Advanced fraud training courses are targeted to mid-level officers, but the majority of Fraud Prevention Manager positions (180 of 222) were filled by an entry-level officer or an officer of unspecified grade. In 2011, a little more than half of the students enrolled in PC541 were entry-level officers, and State could not determine whether Fraud Prevention Managers were among them. Additionally, between October 2009 and July 2012, entry-level officers made up approximately 22 percent (489 of 2,252) of the total number of students who registered for Detecting Imposters (PC128) and 21 percent (486 of 2,246) of the total number of students who registered for Detecting Fraudulent Documents (PC544). Without advanced fraud training courses, Fraud Prevention Managers may not know about the roles and responsibilities of KCC, or how to use the Consular Lookout and Support System name-check database and biometric systems. For example, two of the five Fraud Prevention Managers with whom we met were unfamiliar with the anti-fraud services available at KCC. According to the Office of Fraud Prevention Program’s country desk officers, the level of anti-fraud training offered to Foreign Service Officers largely depends on the officer’s experience level, years in the Foreign Service, and available time. Desk officers offer a 1-hour briefing of country-specific fraud issues and resources to all Foreign Service Officers prior to their deployment, but not all of the officers take advantage of the briefing, according to officials from the Office of Fraud Prevention Programs. In addition, a significant period of time may pass between an entry-level officer’s completion of the basic consular course and the time when he or she assumes the role of Fraud Prevention Manager. Entry-level officers are required to take only limited fraud prevention training that does not include new anti-fraud technologies. For example, officers may not arrive at a post until they complete required language training, which can take 6 months to a year. Additionally, entry-level officers who are not on the consular affairs career track may serve a rotation in a different specialty area before serving a rotation in consular affairs. Finally, many entry-level officers are not assigned to the Fraud Prevention Manager position until after they arrive at post. While State offers these anti-fraud training courses, both in Washington D.C., and online, it does not track whether Fraud Prevention Managers are taking them. In 2012, four of the five Fraud Prevention Managers with whom we met had not been formally trained in MATRIX. Since its rollout, State has not tracked the number of Fraud Prevention Managers that have been trained in eCAS and MATRIX. In addition, State was unable to differentiate enrollment data by position and therefore could not confirm that Fraud Prevention Managers had enrolled in any fraud prevention course. State’s fraud prevention efforts protect the integrity of the visa process and help prevent people from exploiting the visa process to commit crimes or threaten the security of the United States. 
Fraud trends evolve over time as criminal networks and unscrupulous visa applicants seek to circumvent State's visa application process. Meanwhile, the number of visas issued has risen steadily since 2003 and consular officers face increased pressure to expedite visa processing. The evolving nature of fraud and increases in the volume of visas adjudicated require State to continuously update its anti-fraud efforts. In recent years, State has taken steps to enhance the tools and services available to combat visa fraud, including the deployment of new anti-fraud technologies and resources that improve State's ability to prescreen applications for indicators of fraud and to readily access information from prior visa applications. However, these technologies and resources are only useful if consular officers know that they exist and know how to use them. Currently, the majority of Fraud Prevention Manager positions are filled by entry-level officers, who are not the targeted audience for advanced anti-fraud training, and State does not require them to be trained in all anti-fraud technologies. As a result, Fraud Prevention Managers may not be fully equipped to detect and combat fraud. Furthermore, posts increasingly rely on KCC to prescreen certain visa applications for fraud, and State intends to prescreen 50 percent of all visa applications worldwide prior to consular interviews. However, State does not have a policy that specifies how to systematically utilize the center's resources, based on post workload and fraud trends. Therefore, State cannot be assured that a valuable tool to combat fraud is being strategically utilized. Absent effective support and training, Fraud Prevention Units may make uninformed decisions, thus enabling ineligible aliens, including potential terrorists, to gain admission to the United States. To further improve the visa fraud prevention process, we recommend that the Secretary of State take the following two actions: (1) Formulate a policy to systematically utilize anti-fraud resources available at the Kentucky Consular Center, based on post workload and fraud trends, as determined by the department; and (2) Establish standardized training requirements for Fraud Prevention Managers, to include training in advanced anti-fraud technologies, taking advantage of distance learning technologies, and establishing methods to track the extent to which requirements are met. We provided a draft of our report to State and DHS. State and DHS provided technical comments, which we incorporated as appropriate. State also provided written comments, which are reproduced in appendix IV. State concurred with our recommendations. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 14 days from the report date. At that time, we will send copies of this report to the Secretaries of Homeland Security and State, as well as interested members of Congress. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8980 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
This report examines (1) countries and visa categories subject to the most visa fraud; (2) technologies and resources to combat fraud; and (3) training requirements of State officials responsible for fraud prevention. This report focuses on visa fraud, and not passport fraud. To determine the countries and visa categories subject to the most visa fraud, and the evolution of fraud over time, we reviewed nonimmigrant and immigrant visa issuance data from the Consular Consolidated Database from 1992 to 2011. While we did not analyze State data on the number of visa applications during this time period, we reviewed visa refusal percentages by country. We did not review the reliability of these data because they were for background purposes only. We also reviewed State Fraud Digest reports from September 1996 through May 2012, semi-annual fraud summaries for some of the posts with the highest numbers of suspected fraud cases, and Diplomatic Security Monthly Status reports from fiscal year 2011. We also analyzed fiscal year 2010 and 2011 data on the number of visa applications referred to Fraud Prevention Units. We found 2010 data more reliable because State transitioned to a new fraud management system in the middle of fiscal year 2011, and formal guidance on how and when to refer cases to Fraud Prevention Units was not released until July 2012. We compared the 2005 Country Fraud Ranking of Posts to fiscal year 2010 data on the countries with the highest numbers of suspected fraud cases. We found these data to be sufficiently reliable for the purposes of indicating the countries that reported the highest volumes of reported fraud cases and made the most referrals. However, we found that these data may not accurately reflect the relative levels of actual fraud in each country due to possible differences in reporting by posts. We used fiscal year 2010 data for our analysis because State introduced a new data system in 2011, and we noted some potential problems with the 2011 data that arose due to the transition. State officials in the Office of Fraud Prevention Programs provided qualitative information on the types of visa categories that are subject to fraud. Lastly, we interviewed State officials at headquarters and abroad to discuss recent fraud trends. To assess State’s use of technologies and resources to combat fraud, we met with State’s Bureau of Consular Affairs Office of Consular Systems and Technology to review State’s major data systems as well as the latest technological tools available to consular officers and Fraud Prevention Managers. Specifically, we received demonstrations on State’s newly deployed Enterprise Case Assessment Service (eCAS) used for tracking fraud cases, and MATRIX, a search tool used in fraud prevention. We visited the Kentucky Consular Center, an anti-fraud resource available to posts, and observed its activities. We interviewed State officials at posts regarding their usage of these tools and resources. To determine the reliability of data captured by eCAS on the number of cases referred to Fraud Prevention Units and the number of confirmed cases, we met with consular officers and Fraud Prevention Managers in five posts to determine how information was entered into eCAS. We determined that the eCAS system was widely used across posts and was sufficiently reliable to determine the general volume of fraud referrals. 
To understand the training required of State officials responsible for combating fraud, we gathered information about training requirements and course enrollment from the Foreign Service Institute's Student Training Management System. We interviewed Foreign Service Institute personnel regarding controls, strengths, and limitations of the course enrollment data and determined the data were sufficiently reliable for our purposes. We also analyzed data on the number of direct hire and local staff working in all 222 visa-issuing consular posts as of December 2011 and reviewed data gathered by State liaisons from the Consular Workload Statistics System on the number and grade of officers assigned to Fraud Prevention Units at consular posts as of April 2012. We obtained staffing data from State's GEMS database. We tested the data on direct hires and local staff working at visa-issuing posts for completeness, confirmed the general accuracy of the data with select overseas posts, and interviewed knowledgeable officials from the Office of Resource Management and Organizational Analysis concerning the reliability of the data. We assessed data on the number and grade of officers assigned to Fraud Prevention Units for reliability, and interviewed Consular Affairs officials regarding how the data were collected and entered into the database, the controls and reviews of the data collection, and the major strengths and limitations of the data. We found the data to be sufficiently reliable for our purposes. Lastly, we conducted interviews with visa chiefs, Fraud Prevention Managers, and DS Assistant Regional Security Officers working in five overseas posts on issues related to consular staffing and resources, among other topics. We visited U.S. consular posts in five countries: Brazil, the Dominican Republic, India, Jordan, and Ukraine. During these visits, we observed visa operations and interviewed consular staff and embassy management about visa adjudication policies, procedures, and resources. In addition, we spoke with officials from other U.S. agencies that assist consular officers in the visa adjudication process. We chose Brazil, the Dominican Republic, India, and Ukraine because each of the fraud prevention teams in these countries investigated 500 or more fraud cases in fiscal year 2010. We chose Jordan because of the nature of the fraud cases investigated in that country, which included security concerns. We conducted our work from August 2011 through September 2012, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Temporary worker visa classifications include workers in "specialty occupations"; Chile and Singapore Free Trade Agreement aliens; nurses under the Nursing Relief for Disadvantaged Areas Act of 1999; spouses and children of H-1, H-2, or H-3 visa holders; and workers with extraordinary ability or achievement in the sciences, arts, education, business, athletics, TV, or film. The following five case studies provide examples of the types of activities carried out by Fraud Prevention Units and Assistant Regional Security Officer Investigators (ARSO-Is) overseas. In Ukraine, we observed consular officers adjudicating visas for the Summer Work and Travel program.
A consular officer suspected that an applicant’s student identification was fraudulent, and he told the applicant to wait while he asked the Fraud Prevention Unit for assistance. The senior Locally Employed Staff (LES) person inspected the student ID and said it was most likely a fake. The consular officer asked the LES to assist in questioning the applicant. The LES reviewed the applicant’s school transcripts that were submitted with the visa application and asked the applicant to provide the name of the school’s chancellor. The applicant could not provide the name. The Fraud Prevention Unit called the school to attempt to verify whether the applicant was currently enrolled, but the school would not verify the applicant’s status. The Fraud Prevention Unit told the consular officer that it believed the applicant was committing fraud on her application, and the consular officer denied the visa.
U.S. Major League Baseball (MLB) teams award large signing bonuses to younger prospective players in the Dominican Republic, creating a significant economic incentive for prospects to appear younger. As a result, MLB prospects often falsify their ages, and sometimes their identities, on visa applications to make themselves appear younger than they truly are. To date, ARSO-I Santo Domingo has facilitated the arrests of two MLB Dominican talent scouts, an MLB investigator, and an MLB pitcher, among others, for participating in identity fraud. The pitcher, a Dominican citizen, assumed the identity of a younger person and obtained a contract to play professional baseball in the United States in 1999. The pitcher has since illegally obtained at least 10 nonimmigrant visas in his assumed identity. ARSO-I Santo Domingo confirmed the pitcher’s true identity and coordinated with the Dominican prosecutors’ office to obtain a Dominican arrest warrant. The pitcher later returned to the Dominican Republic, obtained travel documents and a new nonimmigrant visa petition from the MLB in his true identity, and applied for a nonimmigrant visa. The interviewing consular officer found the pitcher ineligible for the nonimmigrant visa due to identity fraud, and ARSO-I Santo Domingo subsequently facilitated his arrest by the Dominican National Police, based on his outstanding Dominican arrest warrant.
In December 2010, Sao Paulo Civil Police arrested a Brazilian who presented false documents in support of his U.S. visa application. This was the ninth arrest of an applicant who had named the same individual, known to be a smuggler and fraudulent document vendor, on a visa application. Further investigation identified approximately 70 persons who had used false documents provided by the document vendor since 2009. ARSO-I Sao Paulo received information that the document vendor and his accomplices were also producing false documents in support of Canadian visa applications, as well as Italian and Brazilian passports. In March 2011, the Brazilian Federal Police, the State of Santa Catarina Civil Police, and the State of Santa Catarina Prosecutor’s office arrested the document vendor and three of his accomplices.
Immigration and Customs Enforcement (ICE) contacted the Deputy Assistant Regional Security Officer Investigator in New Delhi after receiving information that a private translator was extorting money from U.S. Citizenship and Immigration Services (USCIS) refugee/asylum applicants. The translator had access to protected information from USCIS files.
ICE spoke with an informant whom the translator had threatened with denial of her application unless she paid significant sums. Preliminary investigations determined that the translator would “cold-call” asylum applicants. ARSO-I New Delhi and ICE interviewed the translator, three USCIS local employees, and three local guards. The translator denied obtaining personally identifiable information from embassy staff. As a result of this investigation, the translator was arrested upon departing the Embassy, one local guard was terminated for accepting money from the translator, and one USCIS LES employee was put on administrative leave for divulging personally identifiable information.
The ICE Attaché contacted ARSO-I Kyiv for assistance with an individual present in Kyiv with an active INTERPOL Red Warrant for human trafficking. The fugitive was wanted in the Eastern District of Michigan for forced labor, money laundering, immigration and visa fraud, and witness tampering. ARSO-I Kyiv coordinated assistance with the Ukrainian Ministry of Internal Affairs Organized Crime Department, whose agents arrested the fugitive at his residence on immigration overstay charges. ARSO-I Kyiv and ICE Attaché Frankfurt escorted the fugitive from Kyiv to New York, where the fugitive was arrested by ICE agents.
In addition to the individual named above, Anthony Moran (Assistant Director), Jon Fremont, Julia Ann Roberts, Katie Bernet, Karen Deans, Martin De Alteriis, Etana Finkler, Mary Moutsos, Mark Speight, and Maria Stattel made key contributions to this report. Others providing technical assistance include Claude Adrien and Emily Biskup.
Foreign nationals may apply for entry into the United States under dozens of different visa categories, depending on circumstances. The Department of State’s (State) Bureaus of Consular Affairs and Diplomatic Security share responsibility for the prevention of visa fraud, which is a serious problem that threatens the integrity of the visa process. Some applicants attempt to obtain visas through illegal means, such as using counterfeit identity documents or making false claims to an adjudicating officer. Visa fraud may facilitate illegal activities in the United States, including crimes of violence, human trafficking, and terrorism. This report examines (1) countries and visa categories that are subject to the most fraud; (2) State’s use of technologies and resources to combat fraud; and (3) training requirements of State officials responsible for fraud prevention. GAO examined State’s reports and data on fraud trends and statistics, examined resources and technologies to counter fraud, and observed visa operations and fraud prevention efforts overseas and domestically.
Certain countries and visa categories are subject to higher levels of fraud. In fiscal year 2010, almost 60 percent of confirmed fraud cases (9,200 out of 16,000) involved applicants from Brazil, China, the Dominican Republic, India, and Mexico. State officials told GAO that fraud most commonly involves applicants for temporary visits to the United States who submit false documentation to overcome the presumption that they intend to illegally immigrate. Fraud is also perpetrated for immigrant visas and nonimmigrant visa categories such as temporary worker visas and student visas. In response to State efforts to combat visa fraud, unscrupulous visa applicants adapt their strategies, and as a result, fraud trends evolve over time.
State has a variety of technological tools and resources to assist consular officers in combating fraud, but it does not have a policy for their systematic use. For example, State recently implemented fraud prevention technologies such as a fraud case management system that establishes connections among multiple visa applications, calling attention to potentially fraudulent activity. Overseas posts have Fraud Prevention Units that consist of a Fraud Prevention Manager (FPM) and locally employed staff who analyze individual fraud cases. In 2011, the ratio of Fraud Prevention Unit staff to fraud cases varied widely across overseas posts, causing disproportionate workloads. The Kentucky Consular Center (KCC) is a domestic resource available to posts that verifies information on certain visa applications. However, KCC services are provided only on an ad hoc basis, and State does not have a policy for posts to systematically utilize its resources. For example, an FPM at a high-fraud post told GAO that the post would like to utilize KCC anti-fraud services for screening certain visa categories but did not know how to request KCC assistance.
Although State offers anti-fraud training courses at the Foreign Service Institute and online, it does not require FPMs to take them and does not track FPMs’ enrollment. Consular officers receive limited fraud training as part of the initial consular course, and FPMs are not required to take advanced fraud training in new technologies. In addition, GAO found that 81 percent of FPM positions were filled by entry-level officers and 84 percent were designated as either part-time or rotational.
Between October 2009 and July 2012, entry-level officers made up about 21 percent of all students who registered for a course on detecting fraudulent documents, and State could not guarantee that FPMs were among them. Four out of the five FPMs with whom GAO spoke had not been trained in State’s new fraud case management system.
GAO recommends that State (1) formulate a policy to systematically utilize anti-fraud resources available at KCC, based on post workload and fraud trends as determined by the Department, and (2) establish requirements for FPM training in advanced anti-fraud technologies, taking advantage of distance learning technologies, and establish methods to track the extent to which the requirements are met. State concurred with these recommendations.
Foster care laws and regulations have historically emphasized the importance of both reunifying families and achieving permanency for children in a timely manner. Permanency outcomes from foster care include family reunification, adoption, and legal guardianship. The Congress recently enacted legislation that places a greater emphasis on adoption when foster children cannot be safely returned to their parents in a timely manner. Failing to secure a safe, permanent home for foster children before they reach age 18—sometimes referred to as aging out of the foster care system—can have damaging consequences for their emotional stability and future self-sufficiency. Although federal law requires states to make “reasonable efforts” to reunify foster children with their parents, neither federal laws nor regulations clearly define “reasonable efforts.” At a minimum, the law does require states to develop a case plan with a permanency goal. When family reunification is the goal, the case plan must describe the services—such as drug or alcohol treatment, counseling, or parenting classes—that will be provided to help parents rectify the problems or conditions that led to their children entering foster care. In order to evaluate the progress that parents have made in complying with their case plan requirements, states are required to hold court or administrative reviews every 6 months. They also must hold permanency planning hearings at which the judge must determine whether to continue family reunification efforts or begin to pursue some other permanency goal, such as adoption or guardianship. In determining if and when to end efforts to reunify the family, foster care agencies and the courts must balance the goals of reunifying children with their parents and meeting children’s need for timely permanency. The Adoption and Safe Families Act of 1997 (P.L. 105-89) emphasizes that a child’s health and safety are of paramount concern by specifying situations in which states do not have to make reasonable efforts to reunify the family before parental rights can be terminated. This law also stresses the importance of securing safe, permanent homes for children in a timely manner by (1) requiring states to file a petition to terminate parental rights (TPR) if the child has been in foster care for at least 15 of the most recent 22 months, (2) shortening from 18 to 12 months the time period within which a permanency planning hearing must be held, and (3) providing incentive payments to states for increasing the number of foster children who are adopted. This law also authorizes funds for time-limited family reunification efforts. Currently, data on foster care outcomes—including family reunification rates—and length of stay in foster care are limited. A longitudinal study of foster care outcomes in California found that, while 44 percent of children who entered foster care in 1990 as infants were reunified with their families within 4 years, 37 percent were still in care after 4 years. This study also showed that foster care outcomes vary by placement type, age at entry, and ethnicity. Data on how parental substance abuse may affect the length of time children spend in foster care and their outcomes are particularly limited.
However, Illinois reported that the percentage of foster children who were reunified with their families dropped between 1990 and 1995, which foster care agency officials attribute to the “epidemic level of parental drug abuse.” Parental substance abuse may also result in children re-entering foster care. The California study cited above found that among those who were reunified with their families, 28 percent re-entered foster care within 3 years. This study found that parental substance abuse was particularly common among cases in which children had re-entered foster care. Research suggests that children who spend long periods of time in foster care, or age out of the system before a permanency outcome has been achieved, may have emotional, behavioral, or educational problems that can adversely affect their future well-being and self-sufficiency. A study of the title IV-E foster care independent living program, which assists children in their transition from foster care to self-sufficiency, found that about 2-1/2 to 4 years after aging out of the system, 46 percent of foster children had not completed high school; 38 percent had not held a job for longer than 1 year; 25 percent had been homeless for at least 1 night; and 60 percent of those who were female had given birth to a child. Furthermore, 40 percent had been on public assistance, incarcerated, or a cost to the community in some other way. The Department of Health and Human Services (HHS) is responsible for the management and oversight of federal programs providing services to foster children. HHS issues federal foster care regulations, monitors states’ compliance with them, and administers federal funding. Federal foster care funds are authorized under title IV-E of the Social Security Act of 1935. Title IV-E is an uncapped entitlement program that reimburses states for a portion of the maintenance cost for foster children whose parents meet federal eligibility criteria related to their income level. Federal expenditures for the administration and maintenance of children eligible for title IV-E funding increased from about $546 million in 1985 to an estimated $3.3 billion in 1997. States and counties must bear the full cost for maintaining foster children who are not eligible for title IV-E funding. Children are exiting foster care at a slower rate than they are entering. As a result, the foster care population nationwide has nearly doubled since the mid-1980s, increasing from about 276,000 in 1985 to about 500,000 in 1997. Following the advent of crack-cocaine in the mid-1980s, cocaine use increased dramatically and reached alarming proportions by the end of the 1980s. Research indicates that the “crack epidemic” may have contributed to the increase in foster care caseloads. We reported that, in 1991, nearly two-thirds of foster children 36 months of age or younger in Los Angeles County, New York City, and Philadelphia County combined were known to have been prenatally exposed to drugs or alcohol. Most of them were exposed to cocaine. Although research indicates that the number of new crack-cocaine users is declining, chronic use among parents of foster children is still common. While crack-cocaine use is declining, the use of other hard drugs is on the rise. Methamphetamine use has been growing, particularly in the West and Southwest, and there is a resurgence of heroin use throughout much of the country. 
Heroin’s growing popularity may stem from its sharply increased availability, decreased cost, and higher purity, which allows it to be used in forms that do not need to be injected. Both crystallized methamphetamines and crack-cocaine are inexpensive, smokable drugs that produce immediate and intense highs and increased alertness. In March 1998, we reported that major studies have shown that drug treatment is beneficial, although concerns about the validity of self-reported data suggest that the degree of success may be overstated. Nonetheless, substantial numbers of clients do report reductions in drug use and criminal activity following treatment. Research also indicates that those who remain in treatment for longer periods generally have better treatment outcomes. Methadone maintenance has been shown to be the most effective approach for treating heroin abuse. Research on the best treatment approach or setting for other groups of drug abusers, however, is less definitive. To date, there is no effective pharmacological treatment for cocaine abuse, but studies have shown that several cognitive-behavioral treatment approaches show promise for treating cocaine addiction. Little is known about the effectiveness of treating methamphetamine addiction. According to our survey, most children in foster care in California and Illinois had at least one parent with a serious and long-standing substance abuse problem that makes recovery extremely difficult. Most of these parents had been abusing drugs or alcohol for 5 years or more. About two-thirds of these parents had used one or more hard drugs such as cocaine, heroin, or methamphetamines. These hard drugs are highly addictive and debilitating and can greatly diminish the ability to parent. These substance-abusing parents often neglect their children because their primary focus is obtaining and using drugs. In addition, substance abusers often engage in criminal activity that can threaten the safety and well-being of their children. Recovery from drug and alcohol addiction depends on many factors, such as the substance abuser’s readiness for recovery, and relapse is common. On the basis of the results of our survey, we estimate that about 65 percent of the foster children in California and 74 percent in Illinois, or about 84,600 children combined, had at least one parent who was required to undergo drug or alcohol treatment as part of the case plan for family reunification. (See fig. 1.) In about 40 percent of these cases in each state, the father was required to undergo drug or alcohol treatment, while the mother was required to undergo treatment in over 90 percent of these cases in each state. In about one-third of the cases involving parental substance abuse in each state, either the father was deceased or his whereabouts were unknown. As a result, the mother was usually the focus of the foster care agency’s family reunification efforts. Caseworkers in our case study locations explained that fathers whose whereabouts are unknown may not even be aware they have children in the foster care system; even if they are aware, they may never have been involved in the care of their children. In both California and Illinois, at least two-thirds of the substance-abusing parents of foster children in our survey used cocaine, methamphetamines, or heroin—hard drugs that are highly addictive and debilitating. In each state, about 50 percent of the mothers who abused drugs or alcohol used more than one substance.
Alcohol was often used in combination with one or more of the hard drugs mentioned above, although alcohol abuse alone was much less common in both states. Less than 10 percent of the substance-abusing mothers in each state used only alcohol. In some instances, substance-abusing parents in each state were using marijuana. According to our survey, substance-abusing parents of foster children not only abused hard drugs, but most had also been doing so for a long time. In each state, over 80 percent of the substance-abusing mothers of foster children in our survey had been abusing drugs or alcohol for at least 5 years, many of them for more than 10 years. (See fig. 2.) Cocaine, methamphetamines, and heroin—the hard drugs used by most substance-abusing parents of foster children in our survey—are highly addictive and can greatly diminish the ability to parent. Cocaine was most often the drug of choice among substance-abusing mothers of foster children in each state. We identified some variation in other drugs of choice by state. Methamphetamines were often the drug of choice among the substance-abusing mothers of foster children in California but were seldom used by mothers in Illinois. Heroin was the drug of choice for about 10 percent of substance-abusing mothers in each state. (See fig. 3.) Foster care agency officials and drug treatment providers in all three of our case study locations believed that heroin use was on the rise among parents of foster children within their jurisdictions. Parents who use hard drugs may be unable to meet even the basic needs of their children. Their use of hard drugs can lead to erratic behavior that places the safety and well-being of their children at risk. For example, the immediate effects of both crack-cocaine and crystallized methamphetamines include hyperstimulation and an amplified sense of euphoria. Crack-cocaine users may also experience feelings of depression, restlessness, irritability, and anxiety, and prolonged use can lead to paranoid behavior. The high produced by crystallized methamphetamines can last between 8 and 24 hours, and when the effects wear off, users go into a deep sleep that can last for several days. Users are sometimes susceptible to psychological problems, including depression, paranoia, and hallucinations. In extreme cases, methamphetamine use may also lead to suicidal tendencies and violent outbursts. Heroin and other opiates tend to relax the user, but users may also experience restlessness, nausea, and vomiting. Heroin causes users to alternate between feeling alert and feeling drowsy. With very large doses of heroin, users can become unconscious and, in some cases, may die. A foster care case we reviewed illustrates the extreme effect drug abuse can have on parents’ ability to care for their children. A mother with a long history of abusing crack-cocaine and other hard drugs reportedly pointed a gun at her two daughters and threatened to kill them and herself. The child in this case had marks on her body from physical abuse she had suffered at the hands of her mother. She was removed from her mother’s custody and never reunified with her. This child was quoted in the case file as saying that “cocaine took over her mind—she used to be a good mother.” A more detailed description of this case, and the other cases we reviewed, is contained in appendix IV. Most children with substance-abusing parents enter foster care because their parents fail to meet their basic physical and emotional needs.
In both California and Illinois, neglect was the primary reason for entry into foster care in over 80 percent of the foster care cases in our survey involving parental substance abuse. Physical and sexual abuse were far less often the reason for entry, together accounting for only about 14 percent of the cases involving parental substance abuse in California and 7 percent in Illinois. Because of the nature of addiction, obtaining and using drugs or alcohol are the most important focus in the lives of substance abusers. As a consequence, the safety and well-being of their children are often secondary to their addiction. Research suggests that substance-abusing parents of children in foster care do not always form healthy emotional attachments with their children and may have limited parenting skills. These parents may abandon their children at birth or sometime later in their lives, be periodically absent from the home, or leave their children in unsafe environments. According to our survey, in both California and Illinois, over 80 percent of the foster children with substance-abusing parents had at least one other sibling who was also in foster care as of September 15, 1997. We found many examples of neglect associated with drug abuse in the cases we reviewed. In one case, the mother’s crack-cocaine use caused her to leave her children for the night with unrelated adults after telling them she would return in only a few minutes. In another case, the mother left her children with her brother, who also abused drugs, while she went out to sell diapers, cigarettes, bus tokens, and food stamps in order to buy cocaine. In a third case, after the family was evicted from its apartment, the mother left her three children with a friend. They had not seen their mother for about 2 weeks when the friend contacted the foster care agency. Finally, when parents abuse illicit drugs, they also expose their children to crime. In addition to purchasing illicit drugs, substance abusers sometimes engage in criminal activity such as theft, prostitution, and drug sales to support their habits. In both California and Illinois, over one-third of the foster care cases in our survey that involved parental substance abuse also involved some type of criminal activity by at least one of the parents around the time of the child’s foster care episode. Children whose parents abuse illicit drugs also sometimes witness, or are the victims of, violence. For example, in a case we reviewed, the mother, who was pregnant and abusing cocaine, was attacked by drug dealers for allegedly stealing drugs. The attack exposed her unborn child to considerable physical harm, and the infant had to be delivered by emergency cesarean section. According to research on drug and alcohol treatment, the potential for recovery depends on many factors, including the types of substances used, the length of time they are used, readiness for recovery, access to appropriate treatment, and the length of time in treatment. In addition, other problems, such as mental illness, medical conditions, and a criminal lifestyle, can greatly complicate the recovery process. Treatment providers we spoke with said that some drug addicts or alcoholics may not be ready to recover until they “hit bottom” or recognize that they can no longer continue their drug- or alcohol-abusing lifestyle.
According to HHS officials, placement of their children in foster care is often the “bottoming out” experience needed to get parents into treatment for their substance abuse problems. Some treatment providers believe that, regardless of whether or not a parent has hit bottom, effectively engaging the addict in treatment is key to recovery. Many experts believe that a successful course of drug treatment involves a continuum of treatment approaches and services. Women with children often need intensive treatment because their fear of losing custody of their children often prevents them from seeking treatment on their own. As a consequence, by the time they come to the attention of the child welfare system their addiction is usually far advanced. In addition, according to HHS, informed sources generally believe that treatment for women must address issues unique to women, such as sexual abuse, domestic violence, child care, and health problems. Recovery from drug and alcohol addiction is generally characterized, by drug treatment professionals, as a difficult and lifelong process that frequently involves periods of relapse. According to some treatment experts, relapse is a stage in the recovery process that indicates progress toward recovery when it is accompanied by increasing periods of abstinence from drugs or alcohol. Brief relapses may enable recovering addicts to understand what triggers their return to drugs and help them develop ways to prevent future relapses. Among substance-abusing mothers in our survey whose children had been in foster care for at least 1 year, about 40 percent of these mothers in each state had entered treatment programs but failed to complete them, usually because of relapse. In some instances, mental illness, incarceration, or medical conditions were cited as the reasons these mothers had failed to complete treatment. The following case we reviewed illustrates how difficult the recovery process is for parents who abuse drugs. This case involved one of six children. He and most of his siblings were known to have been prenatally exposed to cocaine. As a result of neglect related to his mother’s crack-cocaine and alcohol abuse, he entered foster care shortly after birth. His mother also had a criminal record, having been convicted of felony theft and misdemeanor drug possession, and had been incarcerated for probation violations. The identity of the father was unknown. His mother successfully complied with most of the requirements in the case plan for reunification—including visitation, a parenting class, and family therapy. However, about 2 years after this child entered foster care, his mother was dropped from a drug treatment program for lack of attendance. About that time, the permanency goal was changed from family reunification to long-term foster care. Over the next few years, the mother entered treatment several additional times but failed to complete any of these programs. About 3 months prior to the birth of his youngest sibling, the mother entered a 12-month residential treatment program, which she successfully completed. Because of her success in treatment, the child who was the focus of this case was returned to his mother for several trial visits after spending about 7 years in foster care. However, the mother subsequently failed several drug tests, indicating she had relapsed. At the time we reviewed the case, this child was still in foster care after almost 8 years. 
Although many parents, like the mother in this example, are unable to make sufficient progress toward recovery to regain custody of their children after many years, caseworkers and drug treatment providers told us that some parents, even those with long histories of substance abuse, do recover and are able to provide a safe home for their children. Another case we reviewed involved the third oldest of five children. He entered foster care when he was 6 years old after his mother gave birth to her youngest and third prenatally cocaine-exposed child. The mother had a 14-year history of substance abuse and had previously come to the attention of the child welfare agency in the mid-1980s for medical neglect of one of her older children. She was unemployed, and the father was incarcerated at the time the children were placed in foster care. Despite the complicated family situation, the mother successfully complied with all of the case plan requirements during this child’s foster care episode. She spent about 1 month in a women’s residential treatment program and another month in an outpatient program and participated in follow-up drug treatment support groups. She visited this child as prescribed in the case plan, attended parenting classes and counseling sessions, and obtained subsidized housing. The child was returned to his mother on a trial basis about 16 months after he entered foster care. About 21 months after this child entered foster care, his mother was granted permanent custody, and this case was closed. In cases involving parental substance abuse, foster care agencies face several challenges when attempting to secure permanent homes for foster children in a timely manner. Foster care agencies face difficulties in helping parents enter drug or alcohol treatment programs. Links between foster care agencies and treatment providers may not always be adequate; and as a consequence, close monitoring of parents’ progress in treatment does not always occur. Finally, agencies also face several barriers to quickly achieving adoption or guardianship in these cases when family reunification efforts fail. Foster care agencies face the challenge of motivating parents to get into treatment. We learned at our case study locations that many parents who are substance abusers resist entering treatment. To parents, caseworkers represent the agency that took their children from them. As a result, many parents feel considerable anger toward caseworkers and anxiety about interacting with them, which can deter parents from entering treatment and delay their progress in fulfilling case plan requirements. Furthermore, according to drug and alcohol treatment providers and attorneys, some caseworkers lack sufficient understanding of the nature of drug and alcohol addiction, its role in individual foster care cases, and what they can do to help motivate parents to address their substance abuse problems. Among substance-abusing mothers in our survey whose children had been in foster care for at least 1 year, less than 20 percent in each state had either completed treatment or were currently in a treatment program. In California, about half of the remaining mothers had never entered treatment and about half had failed to complete it; in Illinois, a greater portion of the remaining mothers had failed to complete treatment than had never entered treatment. (See fig. 4.) Many factors influence whether an individual enters and completes treatment, including individual readiness for recovery. 
High caseloads and turnover among caseworkers make it even harder for caseworkers to help substance-abusing parents comply with their case plans. Several caseworkers we spoke with said it is an ongoing challenge to meet the needs of these families, particularly because foster care caseworkers operate in a crisis-management mode. We learned at the locations we visited that foster care agencies may have limited familiarity with treatment resources in the community, which can delay parents’ entry into drug or alcohol treatment programs. Experts believe that if entry into treatment is delayed, parents may lose the motivation to recover that the loss of custody of their children provided. Caseworkers said that they do not always know what treatment programs exist in the community, or whether there are slots available in these programs. As a result, parents are sometimes provided with a referral list that contains treatment programs that are no longer in operation or do not have immediate openings. A parent’s entry into treatment and progress toward recovery can also be delayed while various treatment programs are contacted to find an opening or place the parent on a waiting list. Caseworkers and judges alike told us that a full array of alcohol and drug treatment settings is not available in some communities. Many parents either are referred to or find it much easier to access less costly outpatient treatment programs because funding for residential treatment programs is limited. Although research has shown that outpatient treatment can be as effective as extended residential care, some treatment providers said that many mothers whose children are in foster care require some period of residential treatment to stabilize before being referred for outpatient care. Experts on drug treatment generally believe that, following either residential or outpatient treatment, recovering parents need after-care services. According to treatment providers in our case study locations, after-care services related to drug and alcohol treatment are particularly important in foster care cases in which timely permanency decisions are being emphasized. These services, however, are not always provided to parents with children in foster care. After-care services for these parents might include ongoing caseworker visits to follow up with parents after they have been reunified with their children, to ensure their participation in self-help groups, and to provide referrals for additional social services. According to some agency officials, if after-care is not provided to parents who have completed drug treatment, judges may delay reunifying them with their children. These families often live in drug-infested neighborhoods. Without after-care services, these parents may be more likely to relapse, and their children may be more likely to re-enter foster care. Another challenge facing foster care agencies arises from the problems in monitoring parents’ progress in drug or alcohol treatment. Detailed information on parents’ progress in treatment is not always available to judges when determining whether a family should be reunified, reunification efforts should continue, or some other permanency goal should be pursued. This information may not always be provided to judges because foster care agencies do not always communicate regularly with treatment providers. 
Judges told us about instances in which permanency decisions were delayed because attorneys did not have access to the treatment provider’s records of the parent’s participation or because reports from the caseworkers did not include sufficient information about the parent’s progress in treatment. Again, high caseloads and turnover among both caseworkers and attorneys exacerbate the problem. Caseworkers may have limited time to discuss in detail a parent’s progress with the treatment provider, just as attorneys may have limited time to review reports on parents’ progress in treatment in advance of a permanency hearing. Confidentiality requirements to protect the privacy of clients in drug or alcohol treatment may also interfere with obtaining information about parents’ progress in treatment. Some foster care agencies ask parents, before they enter treatment, for their written consent to obtain information on their progress in treatment. If agencies do not obtain written consent, a court order may be needed to access this information. When information on parents’ progress in treatment is not sufficiently detailed or not provided on a timely basis, permanency decisionmaking may be delayed because the judge does not know if it is safe to return children to the custody of their parents. Because relapse is common, judges also need information about the significance of any relapses in terms of the parents’ overall progress toward recovery. For example, providing results of periodic, random drug tests may indicate a brief relapse followed by a long period of abstinence, indicating overall reduced drug use. In addition, without this information, parents are able to manipulate or “game” the system, and judges may not be able to determine when laws on permanency decisionmaking for cases involving parental substance abuse apply. Furthermore, judges may have difficulty determining if agencies have made reasonable efforts to help reunify the family. Manipulative behavior was described by child welfare officials and treatment providers as often characteristic of addicts who are consumed by their need to use drugs and alcohol. When parents are aware that their progress in treatment is not being closely monitored, they may falsely claim to be in treatment and making progress in an attempt to prevent the court from moving toward terminating their parental rights. Caseworkers also told us that parents sometimes try to manipulate the system to extend the period during which the permanency goal is family reunification by entering treatment just before hearings, only to drop out of treatment immediately after. A treatment provider characterized this behavior as a negative consequence of how permanency decisions have historically been made. These parents are often aware that, in the past, years have elapsed before some permanency decisions were made because the period of family reunification was extended, thereby providing parents with additional opportunities to recover from their addictions and regain custody of their children. Judges also need information about parents’ progress in drug treatment, as well as their drug abuse and treatment history, to determine when existing state laws governing permanency decisionmaking in these cases apply. Thirty states have laws specifying that parental substance abuse is either a consideration in or grounds for terminating parental rights, and a number of states are very specific in how they address permanency decisionmaking for cases involving parental substance abuse. 
For example, California law does not require foster care agencies to offer reunification services if the parent has a serious and long-standing substance abuse problem and has resisted treatment during the previous 3 years or has failed or refused treatment at least twice. Illinois law does not require the foster care agency to make efforts to reunify the family if the foster child is at least the second child of that parent to have been prenatally substance-exposed and the mother had been given the opportunity to participate in treatment when the first child was prenatally exposed. State laws on permanency decisionmaking for foster care cases involving parental substance abuse are discussed further in appendix V. Given the lack of consensus as to what constitutes reasonable efforts to help reunify families, judges also need detailed information about what foster care agencies have done to help parents recover from their drug or alcohol addictions in order to determine whether reasonable efforts have been made. According to some officials, if judges do not have sufficient information to determine whether reasonable efforts have been made, they may extend the family reunification period. When drug and alcohol treatment resources are limited within a community and this delays a parent’s entry into drug treatment, foster care agencies may also hesitate to begin proceedings to terminate parental rights. When family reunification efforts fail, foster care agencies face several barriers to quickly achieving adoption or guardianship in cases involving parental substance abuse. Before parental rights can be terminated, foster care agencies are required to attempt to locate any parents whose whereabouts are unknown, notify parents of the court’s intent to terminate their parental rights, and provide reunification services to parents who are located and interested in regaining custody. The whereabouts of substance-abusing parents—particularly fathers—are often unknown, perhaps because they lack a stable residence, are involved in drug-related activity themselves, or are incarcerated. In addition, mothers sometimes try to delay proceedings to terminate their parental rights by identifying the probable father just before a TPR hearing. Consequently, foster care agencies often overlook fathers and their extended families as potential adoptive resources, according to one judge, because the whereabouts of fathers are so often unknown. Termination of parental rights also may be delayed when a parent for whom reunification services must be provided is incarcerated or repeatedly disappears, which is common among foster care cases involving parental substance abuse. This can disrupt the provision of reunification services, and parents may then appeal a decision to terminate parental rights on the grounds that the agency failed to make reasonable efforts to reunify the family. Health problems of foster children can be another barrier to adoption. In a prior study we found that over half of the young foster children in selected locations in 1991 had serious health problems—such as fetal alcohol syndrome, developmental delays, and HIV—which may have been caused or compounded by prenatal substance exposure.
However, some experts believe that caution should be used when predicting adverse developmental outcomes on the basis of prenatal substance exposure because these outcomes are greatly affected by the quality of health care and developmental supports the child receives and by the social environment to which the child is exposed. Other barriers to adoption include the age of the child and behavioral and emotional problems that many children have as a result of abuse or neglect. Placement of foster children with relatives may also present a barrier to adoption in cases involving parental substance abuse. In both California and Illinois, we found that over half of the foster children in our survey with substance-abusing parents were placed with relatives. In these cases, when reunification efforts were discontinued and the permanency goal was changed to something other than adoption, the reason often given for not pursuing adoption was that the relatives with whom the child was placed did not want to adopt the child. There are many different reasons why relatives, in general, might not want to adopt these children. According to some foster care caseworkers and agency officials, relatives may fear that if they adopt these children, the parents will no longer be motivated to recover. Relatives may also fear the damage that terminating parental rights will have on their own relationship with the parents of these children. Relatives may also be reluctant to assume legal guardianship of the children placed with them without financial assistance to help support them. Adoption staff at our case study locations also raised concerns regarding the limited number of adoptive homes that may be available in these cases, although they believed current outreach and recruitment efforts might help increase the number of potential adoptive homes, particularly if children were freed for adoption when they were younger. Demand for adoptive resources, however, is likely to increase because many foster care cases involving parental substance abuse have been in the system for long periods of time, and states are now required to begin legal proceedings to terminate parental rights in large numbers of these cases. Among the cases in our survey involving parental substance abuse in which family reunification was no longer the goal, children had been in foster care for an average of about 5-1/2 years in California and over 4 years in Illinois. (See table III.5 in app. III.) On the basis of our survey, we estimate that about 61,700 children in California and 43,100 in Illinois had been in foster care for at least 17 months as of September 15, 1997, and in each state, over 60 percent had parents who were substance abusers. (See fig. 5.) As such, these cases could fall under the new federal requirement to terminate parental rights. Parental rights had already been terminated as of September 15, 1997, for at least one of the parents in 19 percent of the cases in California and 27 percent of the cases in Illinois that involved parental substance abuse. Some locations have launched initiatives that seek to improve the prospects for recovery and family reunification when parental substance abuse is involved. These initiatives involve linkages between foster care agencies, drug treatment providers, and sometimes the courts and other organizations. Some locations are undertaking other efforts to more quickly achieve other permanency outcomes for children when the decision is made to end family reunification efforts.
Some locations are also implementing programs to encourage more relatives of children in foster care to adopt or assume legal guardianship of them. While these efforts to more quickly achieve other permanency outcomes for children are not specific to cases involving parental substance abuse, they may be useful in achieving timely permanency outcomes in these foster care cases. Some locations have launched initiatives to improve the prospects for family reunification when parental substance abuse is involved. These initiatives are highly collaborative, call upon the expertise of drug treatment professionals to get parents into treatment as quickly as possible, and involve close monitoring of parents’ progress to help judges make more timely permanency decisions. Although these initiatives show promise, they are too new to show definitive results. The Illinois Expansion Initiative is a collaborative effort between the state child welfare and substance abuse treatment agencies to help substance abusing parents recover in order to be reunified with their children. A joint steering committee developed procedures to better enable the child welfare agency to screen for substance abuse problems and make referrals to a drug treatment provider. This screening tool helps caseworkers identify substance-abusing parents even when they lack training or experience in substance abuse. Using this tool, the caseworker can determine—on the basis of visual observation (such as signs of intravenous drug use or poor personal hygiene), statements made by the client (such as whether the parent has missed work because of a hangover), and facts associated with the case (such as drug-related criminal charges)—whether the parent should be referred for an assessment by a qualified substance abuse counselor. If a substance abuse problem is indicated, referrals to a treatment provider for a full assessment must be made by the child welfare agency within 1 working day. The treatment provider is required to begin treating the parent within 3 working days after the assessment. Through cross-training, caseworkers learn about the nature of alcohol and drug addiction, and drug treatment providers are trained in child welfare issues. Outreach workers, who are drug treatment professionals, visit each parent referred by the child welfare agency at their home, help motivate the parent to get into treatment, and provide ongoing support to help the parent apply the lessons learned in treatment to day-to-day life. Any parent referred for an assessment must sign a written consent form that gives the foster care agency access to information regarding the parent’s attendance and progress in treatment. Joint strategies to expand treatment services to meet the needs of mothers in high-risk communities, including a range of treatment settings, are also part of this initiative. Through this initiative, the two state agencies are working to develop a full range of treatment settings—including detoxification, residential, and intensive outpatient programs. Parents are referred to the appropriate treatment program based on the nature of their addiction, whether residential treatment is necessary because the home environment is not conducive to recovery, and the availability of treatment settings within that community. 
Similarly, services to address the multiple needs of substance-abusing parents and their families are being explored, including a parenting program that provides parents opportunities for ongoing interaction with their children and thereby better enables treatment providers to provide meaningful information to the courts on mothers’ ability to parent. Reports on clients’ progress in treatment are routinely submitted to the child welfare agency by treatment providers and are used by the courts for permanency decisionmaking. The Reno Family Drug Court is a court-driven effort to facilitate the recovery of substance-abusing parents of foster children in order to reunify these families. Collaborating agencies include the family court, the child welfare agency, local drug treatment providers, corrections agencies, state and county welfare agencies, and a private foundation. The family drug court serves parents whose children either may be or already have been removed from their custody and placed in foster care because of the parents’ substance abuse problems. Some of these parents also face criminal prosecution related to their involvement in substance abuse. Within 72 hours of the child’s removal, these cases are brought before the family drug court and a decision is made as to whether or not the parent is a good candidate for this program primarily on the basis of the parent’s personal motivation to recover and willingness to provide written consent to share information regarding progress in treatment. The caseworker develops an individualized case plan based on a comprehensive assessment of the family’s needs with input from collaborating agencies and all attorneys involved. The parent is typically referred to either a residential or an intensive outpatient program. Some parents are referred to intensive outpatient treatment programs, but “hard-core” addicts are often referred to residential treatment programs. After a minimum of 3 months in residential treatment, these parents may be placed in halfway houses or transitional housing for an additional 6 to 9 months. In addition to the foster care agency caseworker, the parent is also assisted by an “integrated service” case manager, funded through the Tru Vista Foundation. This case manager facilitates collaboration between the agencies and works to obtain the community resources needed to support the parent’s individualized case plan, including counseling, domestic violence support groups, parenting training, transportation services, vocational and educational training, and self-help groups to help the parent remain drug-free. To facilitate timely permanency decisionmaking, the family drug court convenes biweekly to review parents’ progress in these cases. Before each hearing, a multidisciplinary drug court team confers on the parent’s progress over the previous 2 weeks. The team comprises the caseworker, treatment provider, judge, district attorney, defense attorney, and Court Appointed Special Advocate (CASA). The latter is a volunteer who serves as an ombudsman for the court and advocates for the child’s interests. Frequent random drug testing is imposed, and parents receive positive feedback for any progress achieved. Sanctions, such as short jail sentences or community service time, are imposed if parents test positive for drugs or have unexcused absences from treatment programs. 
If the parent fails to exhibit commitment to treatment, the case reverts to the usual court review process for child welfare cases, or to the adult offenders’ court when criminal charges are involved. The target for graduating from the program is 1 year, although the program may include a period of after-care of up to 6 months, during which time the court continues to monitor the case. Another initiative, Delaware’s Multi-Disciplinary Treatment Team, is a 3-year demonstration project, accepted by HHS in January 1996, featuring teams comprising caseworkers from the child welfare agency and substance abuse counselors from local treatment providers. Several substance abuse counselors are co-located with caseworkers in three county child welfare offices in the state. When parents come to the attention of the child welfare agency, substance abuse counselors help caseworkers assess the severity of the addiction, confront the family’s denial of the problem, and make referrals to the most appropriate treatment providers. These substance abuse counselors accompany parents to treatment programs and work closely with parents to help keep them engaged in treatment. Because the state has a managed care system of Medicaid services, substance abuse counselors also help the families navigate the system to help ensure that parents receive appropriate treatment services. Many parents receive only outpatient drug treatment because of difficulties in getting authorization for parents to enter residential treatment under the managed care system. Substance abuse counselors monitor parents’ progress in treatment through the linkages they maintain with local drug treatment providers and assist caseworkers in communicating information about treatment progress to the courts to help judges make decisions about reunifying parents with their children. Because these initiatives are relatively new, there are only preliminary results to date. However, the initial results from internal evaluations of these initiatives are promising in terms of both improving prospects for family reunification in cases involving parental substance abuse and helping agencies make more timely decisions about when to end family reunification efforts in order to pursue some other permanency outcome. For example, preliminary results of the Illinois Expansion Initiative indicate that participants reduced their drug and alcohol use more than those who did not receive enhanced services through the initiative. Nearly 50 percent of 132 parents in the Reno Family Drug Court initiative graduated from the program. Many parents who graduated from the program were reunified with their children, while some parents chose to relinquish their parental rights. According to a court official, the latter are also success stories because the program helped these parents understand that, because of their inability to recover from their drug addictions, their children would not be safe in their custody. The preliminary results of the Delaware Multi-Disciplinary Treatment Teams show that the proportion of total foster care costs expended on substance abuse cases decreased in two of the three child welfare offices using multidisciplinary teams and increased in the three offices (designated as the control group) not using multidisciplinary teams. A number of state and local efforts also seek to speed up permanency decisionmaking or encourage relatives of children in foster care to adopt or assume legal guardianship.
While these efforts are not specific to cases involving parental substance abuse, they may be particularly useful in such cases because many of these parents may not be able to recover in a timely manner. As a result, a significant number of adoptive parents or legal guardians may be needed for these children. Concurrent planning is a strategy that allows caseworkers to work toward reunifying families while at the same time developing an alternate permanency plan for the child in case family reunification cannot be achieved in a timely manner. Caseworkers emphasize to the parents that if they do not adhere to the requirements set forth in the case plan, their parental rights can be terminated. As a result, family reunification might be achieved more quickly for some children if parents make a more concerted effort early on to recover from their addictions and make other changes needed for their children to be safely returned to their custody. If not, concurrent planning enables caseworkers to more quickly achieve an alternate permanency outcome when the decision is made to end family reunification efforts. Some foster care agencies are being encouraged, as part of concurrent planning, to develop tools to assess the prognosis for family reunification. A wide range of indicators may be considered in assessing the prognosis for reunification, some of which may apply in cases involving parental substance abuse. For example, local foster care agencies may consider factors such as the parent's history of abusing his or her own children or the parent having grown up in foster care. Some indicators associated with a poor prognosis for family reunification are relevant to cases involving parental substance abuse, such as when the parent's "only visible support system and only visible means of financial support is found in illegal drugs, prostitution, and street life." When a poor prognosis for family reunification is indicated, foster care agencies in California should now try to place children as early as possible in foster homes in which the caregiver is willing not only to support the agency's efforts to reunify the child with his or her parents but also to provide a permanent home if reunification efforts fail. Through the use of concurrent planning, some states are beginning to achieve reductions in the length of time that children spend in foster care. For example, in Colorado, the state legislature passed an expedited permanency bill in 1994 requiring that any child under 6 years of age be placed in a permanent home no later than 12 months after entering foster care. Several counties have since reported that permanency is being achieved earlier for these children compared with children who came into foster care prior to the implementation of the expedited permanency law. However, one county official in Colorado told us that, because of the difficulties the county faces in reunifying families when parental substance abuse is involved, priority is given to finding relatives and other foster care placements that can provide permanent homes for these children as soon as possible. Given the difficulties encountered in reunifying families when parental substance abuse is involved, many of these children may need adoptive parents or legal guardians. To improve the prospects of achieving permanency for more foster children, some locations have implemented programs to encourage individuals to adopt or assume legal guardianship. 
These programs are particularly applicable when children are placed with relatives, as is the case for many children in foster care. When the relatives of foster children are willing to make a long-term commitment to them but do not wish to have the relationship between the parents and children legally severed, permanency can be achieved through open adoption. For relatives who do not wish to adopt and are also in need of financial assistance to help support the children placed with them, subsidized legal guardianship may be a viable permanency option. Open adoption programs, in which parents retain visitation rights, have been implemented in some locations to make adoption more appealing to relatives. For example, California recently enacted legislation that allows open adoptions with relatives. Under this program, biological parents or other relatives of the child can enter into a written agreement for continued contact or sharing of information between all parties involved. To encourage individuals to assume legal guardianship of children in foster care, many states provide subsidies to those who need financial assistance. Subsidized guardianship programs in California, Delaware, Illinois, Maryland, and North Carolina are authorized under title IV-E foster care waivers. HHS approved these subsidized guardianship programs in 1996 and 1997 as child welfare demonstration projects. The caregiver's need for a subsidy to support the placement is assessed as part of the eligibility determination. In a recent study, Illinois projected that about 5,700 children would be placed in subsidized legal guardianships in the first 2 years under its program. The Adoption and Safe Families Act of 1997 establishes rigorous new requirements governing state legal proceedings to terminate parental rights for children who have been in foster care for at least 15 of the most recent 22 months. These requirements impose on foster care agencies the difficult tasks of attempting to reunify these families within shorter time frames than have been allowed historically and finding adoptive homes for children when family reunification efforts fail. To accomplish these tasks, foster care agencies will need to overcome a number of administrative challenges, such as inadequate links with drug and alcohol treatment providers and inadequate monitoring of parents' progress in treatment. Information about parents' progress in treatment is essential for judges to make informed permanency decisions within the time frames specified by the law, whether they decide to reunify these children with their parents or pursue some other permanency outcome. To collect this information, foster care agencies must closely monitor parents' progress in treatment. If a parent's progress in treatment is not adequate to ensure the child's safety should the child be returned to the family, this information can help support the judge's decision to end family reunification efforts and terminate parental rights in order to pursue adoption for that child. If agencies wish to maximize prospects for family reunification in these cases, they must maintain strong linkages with drug treatment providers. In addition to making it easier for foster care agencies to monitor parents' progress, these linkages could help parents obtain appropriate treatment quickly. Some locations are experimenting with cooperative approaches to case management, involving foster care agencies, drug treatment providers, and the courts. 
These cooperative approaches may respond to some of the problems we identified in our case studies that can impede recovery and, ultimately, family reunification. Foster care agencies could work to develop stronger links with drug treatment providers, despite the difficulties involved. Some factors associated with drug and alcohol addiction are outside the control of foster care agencies, but agencies must deal with them nonetheless. Even when provided with treatment opportunities, some parents will not break free of drug dependency. Thus, some foster care agencies are developing strategies to quickly achieve other permanency outcomes for children when family reunification efforts fail. Concurrently planning for both family reunification and an alternate permanency outcome may help ensure that children are placed in safe, permanent homes in a timely manner. This may reduce the time it takes to identify an adoptive parent and terminate parental rights. To the extent possible, children should be placed with foster parents who are willing to adopt them, thus preventing children from languishing in foster care. Pursuing ways to encourage foster parents to assume legal guardianship if they are unwilling to adopt may also help achieve timely permanency outcomes for more children in foster care. We provided HHS, as well as the appropriate state social services agencies in California and Illinois, with the opportunity to comment on a draft of this report. HHS, the California Department of Social Services, and the Illinois Department of Children and Family Services generally agreed with our findings and believed we had described issues that are critical to the child welfare system. Each of the agencies provided technical comments that we incorporated into our report where appropriate. Appendix VI contains HHS’ comments on the draft of this report. We will send copies of this report to the Secretary of Health and Human Services and program officials in the states and localities reviewed. We will also send copies to all state child welfare program directors and make copies available to others upon request. Please contact me at (202) 512-7215 if you or your staff have any questions. Other GAO contacts and contributors are listed in appendix VII. To obtain information about the extent and characteristics of parental substance abuse among foster care cases, as well as information about the drug and alcohol treatment parents receive and the length of time their children spend in foster care, we conducted a survey of open foster care cases in California and Illinois. The foster care caseloads for these two states combined account for about one-quarter of the entire foster care population nationwide. In each state, a simple random sample of open foster care cases was selected to represent the general population of foster care cases statewide. These cases were in the system on June 1, 1997, and had been there continuously since March 1, 1997. These are referred to as “point-in-time” or cross-sectional samples. They are intended to represent the entire population of open foster care cases in each state during the time period specified. They allow us to make statements about the experiences of all foster children in the foster care caseload during that time. Cross-sectional samples, however, do not capture the experiences of all foster children that enter the system. 
Foster children who spend relatively short periods of time in the system may be under-represented in cross-sectional samples, while children who spend more time in foster care may be over-represented. Furthermore, while survey results based on these samples can be generalized to the population of open foster care cases during the specified time frame in each state, these samples are not meant to represent the foster care population nationally or in any other state. Subsequent to drawing our samples, we learned that 22 of the sampled cases from California and 2 from Illinois had not actually been in foster care continuously from March 1, 1997, through June 1, 1997. We excluded these cases from our samples. An additional 57 cases in the California sample and 17 in the Illinois sample were excluded from our survey because information provided in the questionnaire indicated that they did not remain in the foster care system continuously from June 1, 1997, through September 15, 1997. We used the proportions of each of these types of cases in each of our samples to estimate the number of cases in each state's foster care population that would have fallen into these two categories. The initial and adjusted population and sample sizes and survey response rates are shown by state in table I.1. The adjusted populations are our best estimates of the number of foster care cases in each state that were in the system continuously from March 1, 1997, through September 15, 1997. We designed a mail questionnaire to obtain information about individual foster care cases as of September 15, 1997. We pretested the questionnaire with a number of foster care caseworkers in California and Illinois and revised it on the basis of pretest results. Appendix II contains a copy of the final questionnaire. We mailed a questionnaire for each case in our samples to the manager in the office handling that case, who, in turn, passed it on to the assigned caseworker to complete. We conducted multiple follow-ups with office managers and caseworkers, both by mail and telephone, encouraging them to respond. In addition to using a mail questionnaire to collect information about the foster care cases in our samples, we obtained an automated file from each state that contained administrative data on each of the sample cases from that state. We calculated basic descriptive statistics for each variable in the questionnaire. Our analysis focused primarily on cases that involved parental drug or alcohol abuse. We classified a case as involving parental drug or alcohol abuse if one or both parents were required to undergo drug or alcohol treatment as part of the case plan for family reunification. Most of the percentage estimates we report were calculated using the number of cases for which there was a response to that item (other than "don't know") as the base. The results of our survey for each state are summarized in appendix III. For analyses that involved a child's date of entry into foster care, we used the entry date contained in the state's administrative data file for the child rather than the date the caseworker indicated in the questionnaire. Thus, we used administrative rather than survey data to calculate the average length of time our cross-section of foster children had spent in foster care through September 15, 1997. 
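A minimal sketch of the kind of population adjustment described above may help illustrate the arithmetic: the share of sampled cases found to be out of scope is applied to the initial population count to estimate how many cases statewide were continuously in care over the full period. This is not the report's actual computation, and all figures and names in the example are hypothetical placeholders; the actual values appear in table I.1.

```python
# Illustrative sketch only: adjusting an initial population count using the
# proportion of sampled cases found to be out of scope. All values below are
# hypothetical; see table I.1 for the figures used in the report.

def adjusted_population(initial_population, sample_size, excluded_in_sample):
    """Estimate the number of cases continuously in foster care over the full
    study period by removing the estimated share of out-of-scope cases
    observed in the sample."""
    out_of_scope_rate = excluded_in_sample / sample_size
    return round(initial_population * (1 - out_of_scope_rate))

# Hypothetical example: 90,000 open cases, a sample of 400, and 79 sampled
# cases later found to be out of scope (for example, 22 + 57 exclusions).
print(adjusted_population(initial_population=90_000,
                          sample_size=400,
                          excluded_in_sample=79))
```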
We also estimated the number of foster care cases in each state that would be subject to the requirement in the Adoption and Safe Families Act of 1997 to file a petition to terminate parental rights (TPR). These estimates were based on the number of cases in which the child had been in foster care for at least 17 months as of September 15, 1997. We used 17 months, rather than 15 months as specified in the law, because the clock for determining whether a case is subject to the TPR requirement actually begins on the date the case was adjudicated and the child was determined to have been abused or neglected, or 60 days after the date the child was actually removed from the parents’ custody, whichever comes first. Because the estimates we report are based on samples of foster care cases, a margin of error or imprecision surrounds each one. This imprecision is usually expressed as a sampling error at a given confidence level. Sampling errors for estimates based on our survey are calculated at the 95-percent confidence level. The sampling errors for the percentage estimates we cite in the letter and appendix III vary but do not exceed plus or minus 12 percentage points in the letter and plus or minus 10 percentage points in appendix III. This means that if we drew 100 independent samples from each of our populations—samples with the same specifications as those we used in this study—in 95 of them, the actual value in the population would fall within no more than plus or minus 12 percentage points of our estimates in the letter and plus or minus 10 percentage points of our estimates in appendix III. The sampling errors for the mean length of time in foster care (cited in table III.5 in app. III) and the mean length of time that family reunification was the goal (cited in footnote 40 of the letter) do not exceed plus or minus 7 months. The sampling errors for the estimates concerning the number of cases in which the child had been in foster care for at least 17 months (cited in fig. 5 of the letter and table III.6 of app. III) and the number of foster care cases that involved parental substance abuse (cited in the letter) do not exceed plus or minus 5,010 cases. In general, there were comparatively few responses to survey questions concerning a foster child’s father. Because estimates based on so few responses would be very imprecise, no population estimates were made with respect to most of the questions concerning fathers in either state. To provide information on the difficulties that foster care agencies and the courts face in making timely permanency decisions for foster children with substance abusing parents, we conducted case studies of foster care systems in three counties: Los Angeles County, California; Cook County, Illinois; and Orleans Parish, Louisiana. We focused on urban areas—two of which are in the states in which we conducted our survey of foster care cases—primarily because they have large foster care caseloads and large populations of substance abusers. In addition, we selected these particular counties because they provide a geographic mix of locations and have foster care laws and initiatives that address the issues of parental substance abuse and permanency decisionmaking. In each of our case study locations, we conducted interviews with foster care program and policy officials, caseworkers, dependency court judges and attorneys, and drug treatment providers. 
Through these interviews, we obtained information on the extent and characteristics of parental substance abuse among foster care cases within these jurisdictions; local policies and practices for permanency decisionmaking and outcomes; how cases involving parental substance abuse typically navigate the foster care and court systems; and how the characteristics of these cases and existing laws, regulations, and policies may affect the progress of these cases toward family reunification or other permanency outcomes. We also reviewed the case files from 10 foster care cases in each of our three case study locations to better understand and be able to illustrate the effect parental substance abuse has on permanency outcomes from foster care. See appendix IV for a description of selected foster care cases reviewed. We asked foster care officials in each of the three case study locations to select cases for our review on the basis of a number of criteria. We reviewed only case files from foster care cases in which the parents were required to undergo drug treatment as part of the case plan requirements for family reunification. To make sure that the information obtained reflected the current foster care environment and more recent substance abuse trends, we requested cases in which the child had entered foster care for the first time in 1990 or later and had been in foster care for at least 6 months. At each of our case study locations, we reviewed the files for two cases with each of the following outcomes: (1) family reunification, (2) adoption, (3) guardianship, (4) currently in foster care, and (5) aged out of the foster care system after reaching age 18. We limited our review of cases that fell into the first three categories to those that had closed since January 1, 1996. We also limited our review of cases in the last two categories to those that had been open for about 3 years or more. Foster care officials were not always able to locate cases that fit all of our criteria. Consequently, our case file review included some cases that deviated somewhat from our criteria. We developed a standardized data collection instrument on which to record information from the case files we reviewed. We collected information about the foster child, such as age, date of and reasons for removal from the parents' custody, health conditions or behavioral problems, and the number and types of placements. We also collected information about the parents, such as the types of substances abused, the length of time they abused drugs or alcohol, criminal activities, mental and physical health problems or conditions, their compliance with case plan requirements, the types of drug or alcohol treatment programs they entered, reasons for not completing treatment programs, and the number of times they relapsed. We also collected information about permanency decisionmaking in the case, such as when the goal changed from family reunification to an alternate permanency goal, if applicable; if and when parental rights were terminated; and the permanency goal or outcome for this child at the time of our review. Although we also collected information in the file about the foster child's siblings, our focus in collecting data was on the foster child to whom the case pertained. 
To provide information on existing laws that address reunifying families or achieving alternate permanency outcomes in a timely manner for foster children whose parents are substance abusers, we reviewed foster care statutes on ending family reunification efforts and terminating parental rights for each of the 50 states and the District of Columbia. We collected information on whether and how parental substance abuse is addressed in these statutes. We contacted states to verify that our findings were complete in instances in which we discovered other legal research that indicated different findings. The majority of our findings reflect the status of state foster care laws as of January 1, 1998; however, some of this work was conducted as early as April 1997, when we began our fieldwork. This appendix displays the frequency distributions of caseworkers’ responses to our survey questions concerning substance abuse among parents of foster children in California and Illinois. The percentage given for each response category constitutes our estimate of the proportion of that state’s open foster care cases for which that response applied. For each item with a response rate lower than 70 percent, we note the percentage of cases for which there was either no response or the response was “don’t know.” The sampling errors for these percentage estimates vary; however, no sampling error for any estimate in this appendix exceeds plus or minus 10 percentage points. The sampling errors for the mean number of months that foster care cases had been open (cited in table III.5) do not exceed plus or minus 7 months. None of the sampling errors for the numbers of foster care cases (cited in table III.6) exceed plus or minus 5,010 cases. Because there were comparatively fewer fathers for whom information was available, no population estimates were made for most questions concerning a foster child’s father. Most of the tables in this appendix that show responses to these questions present only the number of sample cases for which each response was given. Table III.9 is the only instance in this appendix in which we do not provide a population estimate with respect to mothers of foster children. The subgroup of mothers who entered a drug or alcohol treatment program but failed to complete it was too small to estimate the proportion of mothers in the population who did not complete treatment for specific reasons. 
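The report does not spell out the formula behind the sampling errors cited above. The sketch below uses the standard approximation for a percentage estimate from a simple random sample at the 95-percent confidence level, with a finite population correction; it may not match the exact method used for this report, and the function name and example figures are illustrative only.

```python
# Illustrative sketch only: an approximate 95-percent-confidence sampling
# error, in percentage points, for a percentage estimate from a simple
# random sample, with a finite population correction.
import math

def sampling_error_95(pct_estimate, sample_size, population_size):
    """Return the half-width of an approximate 95-percent confidence
    interval, in percentage points."""
    p = pct_estimate / 100.0
    fpc = (population_size - sample_size) / (population_size - 1)
    std_err = math.sqrt(p * (1 - p) / sample_size * fpc)
    return 1.96 * std_err * 100

# Hypothetical example: a 50-percent estimate based on 227 responses drawn
# from a population of roughly 90,000 open cases (prints about 6.5).
print(round(sampling_error_95(pct_estimate=50, sample_size=227,
                              population_size=90_000), 1))
```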
[The detailed tables of this appendix, which report caseworkers' responses separately for the California sample (n = 227) and the Illinois sample (n = 292) and for smaller subgroups of mothers and fathers, are not reproduced here.] Case 1 involved the youngest of four children. He and one of his siblings had been prenatally exposed to cocaine. As a result of neglect related to his mother's cocaine abuse, he entered foster care at birth. He and all three of his siblings, who were also in foster care, were placed with their maternal grandmother. When he was removed from his mother's custody, she lacked a stable residency, was unemployed, and had been convicted of felony drug possession and prostitution. His father's whereabouts were unknown at the time. His father had not been located by the time this child returned to his mother's care, but it was learned that he also had been convicted of felony drug possession and sales. Despite the mother's long history of drug use and related criminal activity, she met all of the case plan requirements to be reunified with this child. She completed about 1 year of drug treatment, including residential and outpatient programs, and participated in follow-up drug treatment support groups. She visited this child as prescribed in the plan, attended parenting classes, and obtained suitable housing. This child was returned to the mother's custody on a trial basis about 18 months after he entered foster care. At that point, she had tested "clean" in random drug tests for over 6 months. He remained with his mother for about 1 year on a trial basis, during which time family maintenance services were provided. About 2-1/2 years after this child entered foster care, his mother was granted permanent custody, and this case was closed. Case 2 involved the third oldest of five children. He entered foster care when he was 6 years old after his mother gave birth to her youngest, and third prenatally cocaine-exposed, child. As a result of neglect and risk of physical injury related to the mother's cocaine abuse, he and his four siblings were removed from their mother's custody and placed with their maternal grandmother. This mother had a 14-year history of substance abuse and had previously come to the attention of the child welfare agency in the mid-1980s for medical neglect of one of her older children. She was unemployed, and the father was incarcerated at the time the children were placed in foster care. Despite the complicated family situation, the mother met all of the case plan requirements to be reunified with this child. 
She spent about 1 month in a women’s residential treatment program and another month in an outpatient program and participated in follow-up drug treatment support groups. She visited this child as prescribed in the plan, attended parenting classes and counseling sessions, and obtained suitable housing. This child was returned to his mother on a trial basis about 16 months after he entered foster care. He remained with his mother for 5 months on a trial basis. About 21 months after the child entered foster care, his mother was granted permanent custody, and this case was closed. Case 3 involved the middle child of three. She was 9 years old when all three siblings were placed in foster care as a result of neglect related to the mother’s alcohol abuse. Her parents failed to keep hospital appointments for her 2-year-old brother, who was born premature (weighing only a little more than 3 pounds) and had been diagnosed as a “failure to thrive” infant with fetal alcohol syndrome (FAS) and developmental delays. Also, a home health aide, who had been visiting the home since the younger brother’s birth, reported that this girl and her older brother were not being fed regularly. The younger brother was placed in a specialized family foster home for the developmentally delayed, and this girl and her older brother were placed with a foster family. The mother had a 15-year history of alcohol abuse and was mildly mentally retarded. In addition, the caseworker suspected that the father verbally abused the mother. The mother met many of the case plan requirements despite the complicated family situation. For 9 months, she participated in an outpatient treatment program for her alcohol abuse problem, and she continued her treatment through follow-up support groups. She also tested clean in random alcohol tests. Although initially resistant, the mother began cooperating with home visits to assess her housekeeping skills and the safety of the home. Both of this girl’s parents also participated in parenting classes. However, they initially visited their children only irregularly, and neither parent demonstrated any affection toward the younger brother during these scheduled visits. Her parents relinquished their parental rights of the younger brother who remained in the specialized family foster home where he was initially placed. However, after spending about 16 months in foster care, this girl and her older brother were reunified with their parents. Case 4 involved the older of two children. She was 1-1/2 years old when she entered foster care, following the birth of her sibling who had been prenatally exposed to cocaine, opiates, and methamphetamines. She was placed in the care of her maternal grandmother. Her younger sibling, who had a different father, was also placed in foster care but not with the maternal grandmother—who said she was unable to take care of both children. The mother, who disappeared shortly after the birth of the second child, had a long history of abusing multiple substances; she had been abusing cocaine and heroin for almost a decade. She had a criminal record for felony drug possession, had been incarcerated several times, and lacked a stable residency. The identity of the older child’s father was unknown, and the mother claimed to have never known his identity. The mother refused to comply with any of the case plan requirements to be reunified with her children: she failed to enter a drug treatment program or submit to drug testing, and she visited her children only irregularly. 
It was not clear from the case file when the permanency goal was changed from family reunification to some other goal. Although the maternal grandmother said she could not assume custody of both children, she adopted this child at age 5 and received financial assistance under the title IV-E adoption assistance program. The younger sibling was adopted about the same time by the foster family with whom this child had been placed. When these adoptions were finalized, both she and her sibling had been in foster care for nearly 4 years. Case 5 involved one of two siblings. Both she and her sibling had been prenatally exposed to cocaine and were placed in foster care as a result of neglect related to the mother’s cocaine abuse. She was about 1 year old at the time of removal. Three older children by a different father were already in the informal care of relatives. This child was placed with two different foster families during the time she was in foster care. She was developmentally delayed, had vision problems, and was receiving counseling for emotional problems. Her mother abused both alcohol and cocaine, and her cocaine abuse dated back almost 20 years. Her father also had a substance abuse problem, although he claimed to have stopped abusing cocaine after his children were removed from the mother’s custody and placed in foster care. This child and her sibling were the result of an affair he had with the mother, but he was unwilling to assume custody of the children because his wife did not want them living with her and her husband. Meanwhile, reunification services were offered to the mother. Although the mother completed 6 months of residential treatment, she relapsed, as she had several times before this foster care episode. She was dropped from another treatment program during this foster care episode for lack of attendance. She demonstrated a “pattern of manipulation and dishonesty” with caseworkers and was said to continue to deny the seriousness of her substance abuse problem. She also failed to comply with other requirements in her case plan. For example, she tested positive on some of the random drug tests and did not regularly visit this child. The permanency goal was changed from family reunification to adoption about 21 months after this child entered foster care. The child was adopted, when she was 4 years old, by the foster family with whom she was placed during most of her time in foster care. When the adoption was finalized, this child had been in foster care for more than 3 years. Case 6 involved an infant who had been prenatally exposed to cocaine and had to be closely monitored because of serious medical complications. She was placed in foster care because of the mother’s medical neglect. She weighed only about 2 pounds at birth, suffered from respiratory distress syndrome, had neurological abnormalities, and later developed cerebral palsy. Her mother, after being released from the hospital, visited the child infrequently and made no plans to provide for her special medical needs. The identity of the child’s father was unknown at the time of birth. The child remained in the hospital for several months and was then placed with a foster family that was also certified to adopt. This mother had a very complicated family history. She had been in foster care herself in the custody of her own grandparents. Both of the mother’s parents had criminal records and were currently incarcerated for drug convictions. 
A criminal records review of the mother identified several warrants and prior arrests. She had a history of abusing multiple substances and had previously given birth to a child with a different father; that child was in the father's care. She failed to complete any of the requirements in the case plan for family reunification; she did not even begin a drug and alcohol treatment program, although the caseworker had located a slot for her. Within months of this child's entering foster care, the mother relinquished her parental rights. About the same time, the father came forward and relinquished his parental rights. The child's foster parents adopted the child and received financial assistance under the title IV-E adoption assistance program because of the special needs of this child. When the adoption was finalized, she was 3 years old and had been in foster care since birth. Case 7 involved a 1-year-old child. She was abandoned in a hospital where her mother, who appeared to be under the influence at the time, had taken her. She was placed in foster care as a result of abandonment and neglect related to her mother's cocaine and alcohol abuse. She was blind in one eye, a condition attributed to the mother's neglect. She was also developmentally delayed, had a compromised immune system, and had behavioral problems. This same child had been placed in foster care for a brief period of time prior to this episode while her mother was incarcerated on drug-related charges. During this foster care episode, which began with the abandonment in the hospital, she was placed with a foster family. Her mother was subsequently incarcerated again for 2 years. The mother's history of criminal convictions and incarcerations included a felony conviction for drug possession. She had two other children by different fathers; one was in the care of a relative, and the other was already an adult. The whereabouts of this child's father were initially unknown; when he was located, he indicated that he did not want custody of her. The mother participated in a drug treatment program while in prison. However, she apparently relapsed after she was released from prison. Although she attended parenting classes, she rarely visited her daughter. About 15 months after this child entered foster care, the permanency goal of family reunification was changed to long-term foster care and, later, adoption. Her foster parents indicated a preference for assuming guardianship instead of adopting her because of the high costs associated with her medical needs. These foster parents assumed legal guardianship of this child when she was 4 years old. When the case was formally closed, this child had been in foster care for nearly 3 years. Case 8 involved one of seven children. He was almost 1 year old at the time he was placed in foster care as a result of neglect related to the substance abuse problems of both parents. He had tested positive at birth for PCP (phencyclidine hydrochloride). Several of his siblings, some by a different father, had also been prenatally drug-exposed and had been diagnosed with developmental delays. This child and his siblings were placed with a paternal aunt of the two oldest siblings. His mother lacked a stable residency, and family members believed that she was a prostitute. Her substance abuse problems continued to escalate during this foster care episode to a point at which, according to a family member, she was reportedly using cocaine every day. 
His father was uninvolved and failed to maintain any contact with the child welfare agency. An assessment of the paternal aunt’s home several years after he had been placed there found that the home was in “poor condition.” This relative had been unemployed for several years and was reportedly under stress because of the drug-related problems of her adult siblings. Further, the child’s oldest brother was experiencing behavioral and emotional problems in this placement that were serious enough to warrant placing him temporarily in a residential treatment center when he threatened to commit suicide. This older brother was later returned to the home of the paternal aunt. In the meantime, his mother failed to comply with any of the requirements for family reunification. Although records indicate that she entered several residential and outpatient treatment programs, she did not stay in any program for a sustained period of time. Her visits with the children were described by the paternal aunt as typically unannounced, disruptive, and upsetting to the children. The permanency goal of family reunification was changed about 18 months after this child first entered foster care. Although his permanency goal became adoption by the relative with whom he was placed, the relative assumed a special form of guardianship—referred to as Delegated Relative Authority (DRA). By then, he had been in foster care for 5 years, at which time the case was formally closed. Case 9 involved the oldest of five children. She and her siblings were placed in foster care because of neglect related to the father’s substance abuse problem and domestic violence between the mother and father. She was 8 years old at the time she was removed and, with her siblings, placed with her maternal grandmother. While she had some speech and behavioral problems, some of the other children had more serious problems. Two of her siblings were diagnosed with attention deficit disorder, and one sibling had been hospitalized because of post-traumatic stress disorder. Her father, who used both crack-cocaine and alcohol, physically and emotionally abused the mother, who was his common-law spouse. After the children entered foster care, he was incarcerated for several months for committing forgery. He was also only sporadically employed, and he lacked a stable residency. While her mother did not have a substance abuse problem, she had other problems associated with being a victim of domestic violence: she had been sexually abused by her own father and brother, and she suffered from depression. Although the mother entered a confidential program for victims of domestic violence and complied with the visitation requirements with her children, the father failed to complete any of the case plan requirements for family reunification. The father entered drug treatment programs numerous times over a period of several years but dropped out each time after very short time periods in treatment. He was terminated from one program because of his abusive behavior toward the common-law spouse and suspicions that he had stolen money. The case file indicated that he “harassed and intimidated” staff at the child welfare agency. It is unclear from the case file when the permanency goal was changed from family reunification to some other goal. However, the maternal grandmother assumed legal guardianship of this child when she was 13 years old. This child had spent nearly 5 years in foster care when this case was formally closed. Case 10 involved one of six children. 
He and most of his siblings were known to have been prenatally exposed to cocaine. As a result of neglect related to his mother's crack-cocaine and alcohol abuse, he entered foster care shortly after birth. Two of his siblings—one older and one younger—reportedly died of sudden infant death syndrome (SIDS). His mother had repeatedly left her children with unrelated adults for the night after telling them she would return in only a few minutes, which contributed to the decision to remove all of her children. This child was placed briefly with two different foster families, and then several months later he was placed with his maternal great-aunt and uncle. His mother had a sporadic employment history and a criminal record for felony theft and misdemeanor drug possession. In addition, she had been incarcerated for probation violations. The identity of the father was unknown. His mother successfully complied with most of the requirements in the case plan for reunification—including visitation, a parenting class, and family therapy. However, about 2 years after this child entered foster care, his mother was dropped from a drug treatment program for lack of attendance. About that time, the permanency goal was changed from family reunification to long-term foster care. Over the next few years, she entered treatment several additional times but failed to complete any of these programs. About 3 months prior to the birth of his youngest sibling, she entered a 12-month residential drug treatment program, and this time she successfully completed the program. Because of her success in treatment, he was returned to the mother for several trial visits after spending about 7 years in foster care. However, his mother subsequently failed several drug tests, indicating that she had relapsed. He was returned to the care of the relatives with whom he had previously been placed, where he remains in foster care almost 8 years after he entered. Case 11 involved one of four children. One of her younger siblings had been prenatally exposed to cocaine and had been delivered by emergency cesarean section after her mother had been beaten by drug dealers for allegedly stealing drugs. Her removal from her mother's custody was ultimately triggered, when she was 6 years old, by the mother leaving her and her two siblings in the home of the mother's substance-abusing brother while she went out to sell diapers, cigarettes, tokens, and food stamps to buy cocaine. Another child of this mother had previously died of SIDS. This child was initially placed with a foster family and then in the home of her aunt. The placement with her aunt lasted about 1 year before it was terminated at the aunt's request because of the child's behavioral problems. She was later placed with another foster family. She not only had behavioral problems but was also developmentally delayed. She also had emotional problems associated with separation issues and prior sexual abuse, allegedly by her father. One of her other siblings also had behavioral and emotional problems and was a chronic runaway. When this foster care episode began, the mother had been abusing both cocaine and marijuana for more than a decade. In addition to being absent at the time the children were removed, she was periodically absent throughout the foster care episode. The father, who had never been married to the mother, also had a substance abuse problem. He had a criminal record for carrying a concealed firearm and had been arrested on a number of different charges. 
While the mother entered drug treatment a number of times during this foster care episode, it is unclear whether she completed any of these programs. Her behavior, as described in the case file, suggests she was manipulative, having "learned many ways to cover up her continued abuse of drugs." The father had never entered any treatment program, and the only drug test performed on him revealed that he was still using drugs. Two years and 4 months after this child entered foster care, the permanency goal was changed from family reunification to long-term foster care. At 10 years of age, she remains in foster care, over 4 years after she entered. Case 12 involved the older of two children. He and his brother entered foster care when he was 3 years old after being abandoned by their substance-abusing mother. His mother left the house, reportedly to escape beatings by the father of his younger brother, and the two children were later discovered by a friend of the mother's. He had numerous physical problems associated with his prenatal exposure to cocaine, and emotional problems associated with severe physical and sexual abuse, allegedly by the younger brother's father. He was diagnosed with post-traumatic stress disorder related to witnessing his mother being beaten. Both brothers also had developmental disabilities, including a diagnosis of attention deficit disorder. They were placed together with a foster family. His mother, in addition to her history of crack-cocaine and alcohol abuse, had a criminal record and had been incarcerated for selling controlled substances. Moreover, she was diagnosed with serious mental illness, including schizophrenia and depression, and had been hospitalized several times for attempted suicide since the foster care episode began. The father of this older child was interested in assuming custody of the child but had a history of alcohol abuse and sporadic employment. He also failed to comply with any of the requirements in the case plan for family reunification. The mother participated in several different drug treatment programs, but her psychiatric problems and multiple admissions to hospitals for suicide attempts interfered with progress in drug treatment. In addition, the caseworker reported difficulties in finding facilities that could treat her dual diagnosis of mental health and substance abuse problems. She was also inconsistent in taking medications for her psychiatric problems. About 19 months after he entered foster care, the foster care agency began to explore whether the maternal grandmother could assume custody should the mother continue to fail to make progress in meeting case plan requirements. However, the grandmother was later found to be an unsuitable placement, and the permanency goal became adoption almost 3 years after this child entered foster care. This child remains with his brother in the care of the same foster family with whom they were initially placed. At 6 years of age, he is still in foster care, over 3 years since he entered. Case 13 involved the oldest of three children. At age 14, she and her two sisters entered foster care because of neglect related to the mother's abuse of cocaine, marijuana, and alcohol. Her mother left all three children with a friend after the family was evicted from their apartment. They had not seen their mother for about 2 weeks when the friend contacted the foster care agency. 
She was initially placed with a foster family, whereas her two younger sisters were placed with their mother in a residential drug treatment program. Prior to this foster care episode, one of her younger sisters had been in foster care after having been physically and sexually abused while informally in the care of a relative, but her sister was subsequently reunified with her family. In addition to the mother’s prior involvement with the child welfare system, she had psychiatric problems and a prior conviction for intent to distribute marijuana, and she was also homeless. The father of these children also lacked a stable residency and had no interest in assuming custody of the children. After completing 2 months of residential treatment, the mother was provided transitional housing for an additional 2 months. However, she soon began to miss appointments for treatment as prescribed by her after-care program, and 6 months after her children were removed from her custody, her whereabouts became unknown. The caseworker believed that she was using drugs again. Meanwhile, this child was experiencing serious emotional and behavioral problems. She had been separated from her sisters, who were now with another foster family, since the beginning of the foster care episode. On at least four different occasions, she was admitted to hospitals for psychiatric problems, including attempted suicide. She was placed with four different foster families, and at least two of these placements were disrupted because of her emotional and behavioral problems. She was also briefly returned to her mother when she was nearly 15 years old, only to re-enter foster care a few months later at her mother’s request. The mother told the agency she could not adequately care for her, and the mother claimed to have become suicidal. Two years and 4 months after this child entered foster care, the permanency goal was changed from family reunification to long-term foster care. Her final placement was with foster parents who could meet her special needs. When she reached age 18, she had been in foster care almost 4 years. She continued to receive services through the title IV-E independent living program for several months until this case was formally closed. Case 14 involved one of three children. At 7 years of age she and her two siblings were placed in foster care because of neglect related to their mother’s abuse of alcohol and PCP. The children were left at home alone without electricity or sufficient food. She was placed with her maternal grandmother. Placement information on her two siblings was not available. She and her siblings had no major health or behavioral problems other than asthma. Her mother, who lacked a stable residency and regular employment, failed to have any contact with the agency over the many years her children were in foster care. Consequently, there was little information in the case file about her problems. Although her father did not have a substance abuse problem, he said he was already caring for his parents with whom he lived and, because they had health problems, he was not interested in assuming custody of her. About 15 months after she entered foster care, the permanency goal became long-term foster care because the grandmother with whom she was placed for the duration of the foster care episode refused to assume legal guardianship. She did well academically and participated in extracurricular activities throughout high school. 
When she reached age 18, she had been in foster care for almost 11 years. Title IV-E independent living services were provided to this child for several additional years because of some academic difficulties she experienced while attending college. Case 15 involved the older of two sisters. At age 15, she and her sister were placed in foster care because of physical abuse related to the mother’s abuse of crack-cocaine. Her mother reportedly pointed a gun at her two daughters and threatened to kill them and herself. This child had marks on her body from physical abuse she had suffered at the hands of her mother; she also suffered from chronic headaches. She and her sister were placed together in the home of the maternal grandmother. The mother had a long history of substance abuse and chronic health problems associated with her overdosing on LSD (lysergic acid diethylamide) several times as a teenager. She was unemployed and had been convicted for possessing an unregistered firearm. The father, who lived in another state at the time this child and her sister were removed, was not interested in assuming custody of them. His whereabouts later became unknown to the agency. The mother initially participated in an outpatient drug treatment program, maintained adequate housing, and obtained employment. However, she dropped out of the treatment program within the first year her children were in foster care because she said the program interfered with her work schedule. Although she participated in a follow-up drug treatment support program after she dropped out of outpatient treatment, she allegedly began using drugs again. About 18 months after this child entered foster care, the permanency goal changed from family reunification to independence. She remained with her maternal grandmother a little over 2 years, then moved, at age 18, into her own apartment while attending college. She reportedly was considering applying to law school and was preparing for the test that is required for admission. She continued to receive title IV-E independent living services until she turned 21 years of age. Thirty states and the District of Columbia have foster care laws that specify that parental substance abuse is either a consideration in or grounds for terminating parental rights. These states are Alabama, Arizona, California, Colorado, Georgia, Hawaii, Illinois, Iowa, Kansas, Louisiana, Maine, Massachusetts, Minnesota, Mississippi, Missouri, Montana, Nebraska, Nevada, New York, North Carolina, Ohio, Oregon, Rhode Island, South Carolina, Tennessee, Texas, Utah, Virginia, Washington, and West Virginia. Some state laws on terminating parental rights include more detailed provisions regarding permanency decisionmaking for foster care cases involving parental substance abuse. Other state laws include provisions that, while not specifically addressing the issue of parental substance abuse, are also relevant for decisionmaking in these cases. The following are examples of various state laws. 
Arizona law allows the termination of parental rights when a child has been in an out-of-home placement for a cumulative total of 9 months pursuant to a court order and the parent has substantially neglected or willfully refused to remedy the circumstances which caused the child to be in an out-of-home placement, or, at 18 months, if the parent has been unable to remedy the circumstances which caused the child to be in an out-of-home placement and there is substantial likelihood that the parent will not be capable of exercising proper and effective parental care and control in the near future. Ariz. Rev. Stat. §8-533B.7(a),(b). California law does not require the foster care agency to provide a period of family reunification efforts before beginning proceedings to terminate parental rights if the parent has a history of extensive, abusive, and chronic use of drugs or alcohol and has resisted prior treatment during a 3-year period immediately prior to the filing of the petition that brought the minor to the court’s attention or has failed or refused to comply with a program of drug or alcohol treatment described in the case plan on at least two prior occasions, even though the programs identified were available and accessible. Cal. Welf. & Inst. Code §361.5(b)(12). Illinois law does not require the foster care agency to provide a period of family reunification efforts before beginning proceedings to terminate parental rights if the foster child was prenatally substance-exposed and (1) the mother had prenatally substance-exposed at least one other child who was legally determined to have been neglected and (2) the mother had the opportunity to participate in a drug counseling, treatment, and rehabilitation program during that child’s foster care episode. Parental rights may also be terminated if the parent has failed to make reasonable efforts to correct the conditions that were the basis for the removal of the child from the parent or to make reasonable progress toward the return of the child to the parent within 9 months after the child was legally determined to have been neglected. Failure to make reasonable progress toward return of the child includes the parent’s failure to substantially fulfill his or her obligations under the service plan and correct the conditions that brought the child into foster care within 9 months after adjudication. 750 Ill. Comp. Stat. 50/1(1)(m),(r). Louisiana law allows the termination of parental rights if (1) at least 1 year has elapsed since the foster child was removed from the parent’s custody; (2) there has been no substantial parental compliance with a case plan for services necessary for the safe return of the child; and (3) despite earlier intervention, there is no reasonable expectation of significant improvement in the parent’s condition or conduct in the near future, considering the child’s age and needs for a stable and permanent home. La. Ch. Code art. 1015(5). Lack of parental compliance with a case plan includes the parent’s repeated failure to comply with the required program of treatment and rehabilitation services provided in the case plan, the lack of substantial improvement in redressing the problems preventing reunification, and the persistence of conditions that led to removal or similar potentially harmful conditions. La. Ch. Code art. 1036C(5)-(7). 
Lack of reasonable expectation of significant improvement in the parent’s conduct in the near future may be evidenced by substance abuse or chemical dependency that renders the parent unable to exercise or incapable of exercising parental responsibilities without exposing the child to a substantial risk of serious harm, according to expert opinion or an established pattern of behavior. La. Ch. Code art. 1036D(1). Under Minnesota law, the court may terminate parental rights if it finds that reasonable efforts, under court direction, have failed to correct the conditions that led to a determination of neglect or dependency or of a child’s need for protective services. The law creates a presumption that such reasonable efforts have failed if the parent has been diagnosed as chemically dependent by a professional certified to make the diagnosis; has been required by a case plan to participate in a culturally, linguistically, and clinically appropriate chemical dependency program; has either failed two or more times to successfully complete a treatment program or has refused at two or more separate meetings with a caseworker to participate in a treatment program; and continues to abuse chemicals. Minn. Stat. §260.221(a)(5). Under New York law, the court may terminate parental rights (a prerequisite to an adoption order) if the parent has failed, for a period of more than 1 year after the child came into the system, to substantially and continuously or repeatedly maintain contact with or plan for the future of the child, unless unable to do so. A parent is not deemed unable to maintain contact with or plan for the future of the child by reason of the use of alcohol or drugs, except while actually hospitalized or institutionalized. N.Y. Soc. Serv. Law §384-b-4(d),7(a),(d). Also, New York law defines a “neglected child” as one whose physical, mental, or emotional condition is impaired as a result of a parent misusing drugs or alcohol to the extent of the loss of self-control, unless the parent is voluntarily and regularly participating in a rehabilitative program. N.Y. Family Ct. Act §1012(f)(i)(B). North Carolina law allows the termination of parental rights if the parent has willfully left the child in foster care for more than 12 months without showing, to the satisfaction of the court, that reasonable progress under the circumstances has been made within 12 months in correcting those conditions that led to the removal of the child. In addition, parental rights may be terminated if the parent is incapable of providing for the proper care and supervision of the child, and there is a reasonable probability that such incapability, which may be the result of substance abuse, will continue for the foreseeable future. N.C. Gen. Stat. §7A-289.32. Under South Carolina law, if it is in the best interest of the child, parental rights can be terminated if the parent has a diagnosable condition, including drug or alcohol addiction, and the condition makes the parent unlikely to provide minimally acceptable care for the child. It is presumed that the parent’s condition is unlikely to change within a reasonable time upon proof that the parent has been required by the court to participate in a treatment program for alcohol or drug addiction and has either failed two or more times to complete the program successfully or refused, at two or more meetings with the foster care department, to participate in a treatment program. S.C. Code Ann. §20-7-1572 (Law. Co-op. Supp. 1996). 
Under Texas law, parental substance abuse constitutes grounds for termination of parental rights if it endangered the health and safety of the child and the parent failed to complete a court-ordered substance abuse treatment program; or if the parent used a controlled substance repeatedly, after completion of a court-ordered substance abuse treatment program, in a manner that endangered the health and safety of the child. This excludes alcohol, tobacco, drugs obtained by lawful prescription, and over-the-counter medications. Tex. Fam. Code §161.001(1)(P). Under Washington law, a petition seeking termination of parental rights must allege that there is little likelihood that conditions will be remedied so that the child can be returned to the parent in the near future. In determining whether conditions will be remedied, the court may consider whether there is use of intoxicating or controlled substances that renders the parent incapable of providing proper care for the child for extended periods of time, together with documented unwillingness of the parent to receive and complete treatment or documented multiple failed treatment attempts. Wash. Rev. Code Ann. §13.34.180(5)(a) (West 1993). Also, if the court has ordered a child removed from the home, the court may order that a petition seeking termination of the parent and child relationship be filed if doing so is in the best interests of the child and it is not reasonable to provide further services to reunify the family because the existence of aggravated circumstances makes it unlikely that services will effectuate the return of the child to its parents in the near future. In determining whether such circumstances exist, the court is to consider whether the parent has failed to complete court-ordered treatment where such failure has resulted in a prior termination of parental rights to another child and the parent has failed to effect significant change in the interim. Wash. Rev. Code Ann. §13.34.130(2)(f) (West 1993). Under West Virginia law, the court may terminate parental rights upon a finding that the parents have habitually abused or are addicted to alcohol, controlled substances, or drugs, to the extent that proper parenting skills have been seriously impaired, and such persons have not responded to or followed through with the recommended and appropriate treatment that could have improved the capacity for adequate parental functioning. W. Va. Code §49-6-5(a)(6),(b)(1). In addition to those named above, John G. Smale, Jr., and Joel I. Grossman conducted the statistical analysis of the questionnaire data, Ann T. Walker led the development and administration of the questionnaire, Karen Doris Wright assisted with the administration of the survey, and Jonathan H. Barker conducted the research and analysis of state laws. Drug Abuse: Research Shows Treatment Is Effective, but Benefits May Be Overstated (GAO/HEHS-98-72, Mar. 27, 1998). Parental Substance Abuse: Implications for Children, the Child Welfare System, and Foster Care Outcomes (GAO/T-HEHS-98-40, Oct. 28, 1997). Child Protective Services: Complex Challenges Require New Strategies (GAO/HEHS-97-115, July 21, 1997). Foster Care: State Efforts to Improve the Permanency Planning Process Show Some Promise (GAO/HEHS-97-73, May 7, 1997). Drug and Alcohol Abuse: Billions Spent Annually for Treatment and Prevention Activities (GAO/HEHS-97-12, Oct. 8, 1996). Cocaine Treatment: Early Results From Various Approaches (GAO/HEHS-96-80, June 7, 1996). Child Welfare: Complex Needs Strain Capacity to Provide Services (GAO/HEHS-95-208, Sept. 26, 1995). 
Foster Care: Health Needs of Many Young Children Are Unknown and Unmet (GAO/HEHS-95-114, May 26, 1995). Foster Care: Parental Drug Abuse Has Alarming Impact on Young Children (GAO/HEHS-94-89, Apr. 4, 1994). Drug Abuse: The Crack Cocaine Epidemic: Health Consequences and Treatment (GAO/HRD-91-55FS, Jan. 30, 1991). Drug-Exposed Infants: A Generation at Risk (GAO/HRD-90-138, June 28, 1990).
Pursuant to a congressional request, GAO provided information on: (1) the extent and characteristics of parental substance abuse among foster care cases; (2) the difficulties foster agencies face in making timely permanency decisions for foster children with substance abusing parents; and (3) initiatives that address reunifying families or achieving other permanency outcomes in a timely manner for foster children whose parents are substance abusers. GAO noted that: (1) on the basis of GAO's survey, it estimated that about two-thirds of all foster children in both California and Illinois, or about 84,600 children combined, had at least one parent who abused drugs or alcohol, and most had been doing so for at least 5 years; (2) most of these parents abused one or more hard drugs such as cocaine, methamphetamines, and heroin; (3) substance abusers often abandon or neglect their children because their primary focus is obtaining and using drugs or alcohol; (4) they also place their children's safety and well-being at risk when they buy drugs or engage in other criminal activity to support their drug habit; (5) recovery from drug and alcohol addiction is generally a difficult and lifelong process that may involve periods of relapse; (6) parental substance abuse makes it more difficult to make timely decisions that protect foster children and provide them with stable homes; (7) foster care agencies face difficulties in helping parents enter drug or alcohol treatment programs; (8) in addition, foster care agencies and treatment providers may not always be adequately linked, and as a consequence, close monitoring of parents' progress in treatment does not always occur; (9) foster care agencies also face several challenges when trying to quickly achieve adoption or guardianship in these cases after family reunification efforts have failed; (10) to accommodate children's need for timely permanency decisions, some locations have launched highly collaborative initiatives, involving drug treatment providers and sometimes the courts and other organizations, to help parents obtain treatment; (11) in addition to maximizing the prospects for reunification, these initiatives may produce the detailed information about parents' progress in treatment that judges need to make timely permanency decisions; (12) some locations are undertaking other efforts to better enable foster care agencies to quickly achieve other permanency outcomes for children who cannot be safely returned to their parents in a timely manner; (13) while not designed specifically for foster care cases involving parental substance abuse, such efforts may be useful in these cases; (14) for example, concurrent planning is being used to reduce the time it takes to achieve permanency by simultaneously working to reunify the family and planning for some other permanency outcome should family reunification efforts fail; and (15) some locations are also implementing programs to encourage relatives of children in foster care to adopt or become the legal guardians of these children.
Various types of providers may perform chronic pain management procedures, and each provider type is subject to certain education, training, certification, and licensure requirements. The range of chronic pain management providers and their practice requirements includes the following: Pain physicians. These physicians have completed a subspecialty fellowship training program in pain medicine recognized by ABMS. Following medical school and a residency program in a primary specialty, pain physician candidates must complete an accredited 1-year fellowship and pass an examination to receive board certification. Other physicians. Physicians—including those in specialty and primary care—without certification in pain medicine gain comprehensive medical knowledge through medical school, as well as residency training that can last from at least 3 to 7 years. Although board certification is optional, most physicians take an exam to become certified in a medical specialty. CRNAs. According to the AANA, registered nurses with a Bachelor of Science in Nursing and at least 1 year of experience in an acute setting can pursue certification in nurse anesthesia. Graduates of accredited schools of nurse anesthesia, which provide 24 to 36 months of training, must pass a national examination to receive their certification as CRNAs. Other nonphysician providers, such as NPs or PAs. To become NPs, registered nurses undertake advanced clinical training and complete a master’s program—lasting 1½ to 3 years—or a doctoral program. PAs typically undertake roughly 2 years of master’s-level training. Both NPs and PAs must be nationally certified. Providers must be licensed by the states in which they practice and adhere to state requirements. Physician and PA licensure is administered by state boards of medicine, while nursing licensure is administered by state boards of nursing. Furthermore, all providers are governed by state laws. For example, NPs may or may not be allowed to practice independently or prescribe medications depending on the state in which they practice, and PAs are generally allowed to prescribe medications but must practice under the supervision of a physician. These laws can take precedence over other location- or payer-specific policies, such as hospital-based privileging. Although CMS has not issued a national coverage determination (NCD) for chronic pain management, some MACs have established local coverage determinations (LCD) for chronic pain procedures. Typically, these LCDs do not address which types of providers may bill Medicare for the services, but rather stipulate certain coverage or billing rules. For example, for procedures that may be given to a beneficiary in a series, an LCD may limit payment to no more than three procedures within a year. LCDs also contain instructions for providers on how to bill Medicare using various CPT codes and modifiers. A given procedure may have several CPT codes that indicate where on the body the procedure takes place or whether additional levels of a procedure were performed. For instance, paravertebral facet joint injections would be billed using one CPT code to indicate a cervical or thoracic location, and another CPT code to indicate a lumbar or sacral location—as well as “add-on” codes to specify when injections occurred on multiple levels of the spine. CMS uses a physician fee schedule to determine the amounts paid to providers for each CPT code billed. 
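To make the CPT coding and fee schedule mechanics described above concrete, the following is a minimal Python sketch. The code names, dollar amounts, and lookup structure are illustrative assumptions rather than actual Medicare codes or rates; the provider-type percentages reflect the general split (100 percent for CRNAs, 85 percent for NPs and PAs) discussed in the next paragraph.

```python
# Hypothetical fee schedule: placeholder code names mapped to placeholder amounts.
FEE_SCHEDULE = {
    "FACET_FIRST_LEVEL_CERVICAL": 120.00,  # stands in for the cervical/thoracic code
    "FACET_ADDON_LEVEL_CERVICAL": 60.00,   # stands in for the add-on code per extra level
    "FACET_FIRST_LEVEL_LUMBAR": 110.00,    # stands in for the lumbar/sacral code
}

# Percentage of the physician fee schedule paid, by billing provider type.
PROVIDER_PERCENT = {"physician": 1.00, "CRNA": 1.00, "NP": 0.85, "PA": 0.85}

def allowed_payment(billed_codes, provider_type):
    """Sum the fee schedule amounts for the billed codes, scaled by provider type."""
    base = sum(FEE_SCHEDULE[code] for code in billed_codes)
    return round(base * PROVIDER_PERCENT[provider_type], 2)

# A two-level cervical facet joint injection billed by an NP:
print(allowed_payment(
    ["FACET_FIRST_LEVEL_CERVICAL", "FACET_ADDON_LEVEL_CERVICAL"], "NP"))  # 153.0
```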
Nonphysician providers of chronic pain procedures vary in the percentage of the physician fee schedule they receive. For example, while CRNAs are generally paid 100 percent of the amount a physician is paid for a given procedure under the physician fee schedule, NPs and PAs are generally paid 85 percent of the physician fee schedule. From 2009 through 2012, CRNAs billed Medicare FFS for a small share of our selected chronic pain procedures, while pain physicians and other physicians billed for the largest shares. Of the procedures billed by CRNAs, most were billed by CRNAs in rural areas until 2012. Overall, the various providers’ shares of chronic pain procedures billed to Medicare did not change much over the study period. (See table 2.) The share billed by NPs, PAs, and CRNAs combined grew from 1.4 to 2.6 percent, with CRNAs billing for less than ½ of 1 percent of all chronic pain procedures in each year. Pain physicians billed for over 40 percent of the selected procedures over the 4-year period, while other physicians billed for over half of the selected services each year. From 2009 through 2012, the trends in shares of Medicare payments by provider type were largely parallel to the trends in their shares of procedures billed. (See table 2.) Combined, CRNAs, NPs, and PAs received less than 3 percent of all payments for these services throughout the period. The pattern of CRNA billing by type of chronic pain procedure shows that CRNAs consistently billed for less than 1 percent of the total. (See table 3.) CRNAs had their largest market share in epidural injections, where they accounted for 0.9 percent of providers’ billings in 2009; this share dropped to 0.6 percent in 2012. CRNAs billed for even smaller shares of facet neurolytic destruction injections, autonomic nerve blocks, and trigger point injections. The mix of procedures that CRNAs billed to Medicare changed somewhat from 2009 through 2012. (See fig. 1.) Epidural injections represented the largest share—roughly two-thirds—of CRNA-billed procedures, but that share decreased to less than half over the period. The share of paravertebral facet joint injections doubled between 2011 and 2012. Autonomic nerve blocks, facet neurolytic destruction, and trigger point injections billed by CRNAs held relatively small but growing shares of CRNA billing. By state, the trend among CRNAs’ billing for selected chronic pain procedures was largely stable over the 4-year period. (See app. III for state-by-state data.) CRNAs’ share increased by more than 1 percentage point in 2 states, declined by more than 1 percentage point in 6 states, and remained largely unchanged in 42 states and the District of Columbia. States that experienced the most growth in CRNA market share were New Hampshire and Tennessee, increasing 4.4 and 2.5 percentage points, respectively. By 2012, the CRNA share of selected chronic pain procedures was highest in New Hampshire (5.5 percent), Iowa (4.3 percent), and Kansas (4.0 percent). In 43 states and the District of Columbia, the CRNA share remained under 1 percent. Although the number of selected chronic pain procedures billed by all rural providers increased somewhat from 2009 through 2012, the number of procedures billed by CRNAs in rural areas declined over the period. (See fig. 2.) 
Of all CRNA claims for selected procedures, the share submitted by providers in rural areas fell from 66 percent in 2009 to 39 percent in 2012; meanwhile, the share of selected procedures nationwide billed by all rural provider types was roughly 11 percent in both 2009 and 2012. In rural markets, provider shares followed the national trends. (See fig. 3.) Of the chronic pain procedures billed by rural providers, CRNA claims were a small percentage. Physicians without board certification in pain medicine billed for the majority of claims from rural providers; however, this share declined over time, while pain physicians billed for an increasing share—almost a third of rural claims in 2012. In mid-2011, Noridian began denying certain chronic pain management services that were billed by CRNAs and maintained this policy through 2012. The denial policy, among other factors, had the potential to affect beneficiary utilization in the Noridian states where CRNAs billed for chronic pain management services. We compared all providers’ billing of selected chronic pain procedures in 2010—the year prior to the denial policy—with that in 2012—the full year in which the denial policy was in place—in the Noridian states with the highest share of CRNAs previously billing for these procedures. In 2009, CRNAs billed for 8 percent of the selected procedures in Montana, 1.7 percent in Wyoming, and 1.4 percent in South Dakota. During that same year, CRNAs accounted for 19 percent of selected chronic pain procedures in rural areas of Montana, 4 percent in rural Wyoming, and less than 1 percent in rural South Dakota. The change in chronic pain procedures billed between 2010 and 2012 was minimal for the three states overall, but varied by state. In 2010, providers in those states billed for 28,238 selected chronic pain procedures—of which CRNAs accounted for 2.6 percent—and they billed for 28,155 procedures in 2012, when Noridian denied CRNA claims for these procedures. By state, the number of procedures billed by South Dakota and Wyoming providers declined by 9.2 percent and 6.7 percent, respectively, while Montana provider claims grew by 14 percent over the 2-year period. (See fig. 4.) Nearly all MACs allowed Medicare payment to CRNAs for some, or all, selected chronic pain procedures. As of April 2013, six of the nine MACs had uniform payment policies for CRNA-provided chronic pain procedures across all states within a jurisdiction. The remaining three MACs varied their policies for one state within a jurisdiction. At the state level, MACs reported the following payment policies regarding chronic pain procedures (see fig. 6): allowed payment to CRNAs for all selected procedures in 19 states, allowed payment to CRNAs for a subset of selected procedures in 30 states and the District of Columbia, and denied payment to CRNAs for all selected procedures in the remaining state. In the 30 states and the District of Columbia where MACs allowed payment to CRNAs for only certain chronic pain procedures, MAC payment policies indicated substantial variation in the specific procedures that can and cannot be billed. MACs most commonly denied CRNA payment for trigger point injections and facet neurolytic destruction, allowing CRNA payment for these procedures in only two states. Conversely, MACs allowed CRNAs payment for somatic nerve blocks in 20 states and epidural injections in 16 states. 
Furthermore, because each procedure can have multiple CPT codes associated with it, MACs may choose to only allow CRNAs payment for some of the CPT codes associated with the procedure and not others. This was the case for epidural injections, transforaminal epidural injections, autonomic nerve blocks, and somatic nerve blocks. Figure 7 illustrates the variation in MAC payment policies for selected CRNA-provided chronic pain procedures in Florida, Nevada, and Pennsylvania. In contrast to their policies on chronic pain procedures, MACs were generally more restrictive regarding payment for CRNA-billed E/M services. As of April 2013, they reported the following payment policies for E/M services (see fig. 8): allowed payment to CRNAs for E/M services in 24 states, and denied payment to CRNAs for E/M services in 26 states and the District of Columbia. Payment policies for CRNA-provided chronic pain procedures did not always align with payment policies for E/M services. In the 19 states where MACs reported that they allowed payment to CRNAs for all chronic pain procedures, they also allowed payment to CRNAs for E/M services. However, among the 30 states for which MACs told us that they allowed payment to CRNAs for only certain procedures, MACs indicated that they allowed payment for E/M services in 5—California, Florida, Hawaii, Kentucky, and Nevada—while denying payment for E/M services in the remaining states and the District of Columbia. The MACs did not implement CMS’s CRNA payment policy consistently; three MACs took steps to apply the policy in 2013, while the remaining six MACs did not. MACs pointed to a number of challenges, including vagueness in state scope of practice laws, that affected their ability to implement the policy. Three MACs took steps to implement CMS’s 2013 rule on CRNA payment and updated their CRNA payment policies, when necessary. CMS officials told us that they rely on MACs to determine whether CRNAs are allowed to provide specific services by reviewing each state’s CRNA scope of practice laws. Two MACs made an effort to determine which services CRNAs are allowed to perform under each state’s scope of practice laws. One of the two MACs directly reviewed the laws of the states in its jurisdiction, while the other MAC contacted each state to ask for its interpretation of the laws. Instead of attempting to interpret state scope of practice laws, a third MAC posted a new educational article on its website notifying CRNAs that they are responsible for knowing which services are allowable under their state laws. The remaining six MACs did not take steps to revisit their CRNA payment policies for 2013. Three of the six MACs reviewed the scope of practice laws for a state in their jurisdictions prior to CMS’s 2013 ruling at the request of the state or provider groups. For instance, one MAC stated that when it began its contract in 2009, the CRNA association for one of its states asked the MAC to consider allowing its CRNAs to be paid for chronic pain services, citing a long-standing history of providing these services. At that time, the MAC reviewed the state’s CRNA scope of practice laws and determined that they did not preclude CRNAs from providing chronic pain services. It then extended this affirmative payment policy across all states within its jurisdiction without reviewing further state laws. When asked about its implementation of CMS’s 2013 CRNA payment policy, this MAC told us that it had not revisited any state scope of practice laws. 
Two of the six MACs reported that they have overarching policies in place to determine coverage for all nonphysician provider types and, therefore, have not taken any steps to implement this latest CRNA payment policy. For example, one of these MACs noted that nonphysician providers must submit a request to the MAC to receive payment for a specific CPT code. The MAC will then review the relevant state scope of practice law and determine whether to allow payment for that code. The remaining MAC reported that it was waiting for further instructions from CMS before implementing the policy. MACs discussed a variety of challenges that affected their implementation of CMS’s CRNA payment policy. Most MACs reported challenges interpreting state scope of practice laws to make determinations about which services CRNAs are allowed to provide, noting that state scope of practice laws are generally vague and lack details about which specific services CRNAs can perform. MACs that asked states to provide an interpretation of the scope of practice laws reported that the states generally were unable to provide definitive responses. For instance, one MAC that looked into a state’s CRNA scope of practice in 2010 told us that the process to determine whether the state law allowed CRNAs to perform chronic pain services was convoluted; the MAC was directed back and forth between many state and federal officials and provider groups. Another MAC said that when a determination could not be made about a state’s scope of practice, it defaulted to allowing payment for all services approved by the AANA. A few MACs discussed the difficulty of differentiating between acute and chronic pain services for payment purposes. Because CPT codes for services used to treat chronic pain are also used to bill for acute pain care in the peri-operative setting, one MAC told us that the only definitive way to determine whether the service was for chronic or acute pain is to review the medical record. However, some MACs explained that since chronic pain procedures are typically provided in an outpatient setting, they can rely on the place of service listed on the claim to make a best guess at whether the procedure was used to treat chronic pain. In addition, a few MACs noted that the frequency of the service can also be an indicator, with multiple injections billed for a patient by the same provider over a period of time indicating that the procedure was likely used to treat chronic pain. Two MACs assumed that Medicare’s rule requiring physician supervision of anesthesia services provided by CRNAs in hospital and ASC settings applied to chronic pain services in all settings; this assumption has potential implications for CRNA billing of chronic pain services. Under this rule, CRNA-provided anesthesia services furnished in hospital and ASC settings must be performed under the direction of a physician unless a state’s governor has opted out of this requirement. CMS guidance clarifies that this requirement applies to anesthesia services and not to analgesia services, which are defined to include services used to dull or alleviate pain without other effects, such as the loss of consciousness; the guidance does not expressly use the term “chronic pain management.” These two MACs took the view that they would have needed to review CRNA scope of practice laws only in states that had opted-out of the supervision requirement, implying that they considered chronic pain management services to be anesthesia services. 
Regardless of the validity of this interpretation, the supervision rule only applies in hospital and ASC settings, not office settings. By applying this rule to office settings, these MACs may have unnecessarily restricted the services for which CRNAs in 10 of the states under these MACs’ jurisdictions are allowed to bill. Use of state scope of practice laws to govern Medicare coverage of CRNA-provided chronic pain care continues to be an area of uncertainty and confusion for many MACs. Similarly, certain MACs have interpreted the CRNA supervision rule, as it relates to CRNA-provided chronic pain management services, in a way that may inappropriately limit CRNA billing for such services when furnished in office settings. As a result, MACs have not implemented CMS’s 2013 payment rule in a consistent manner that ensures appropriate beneficiary coverage and provider payment. Although CRNAs do not bill for a significant share of the chronic pain procedures we reviewed, if a MAC improperly denies payment to CRNAs in a state that allows CRNAs to independently furnish such services, beneficiary access to these services may be unnecessarily affected. In order to ensure consistent implementation of CRNA payment policy, we recommend that the Administrator of CMS (1) provide specific instructions to MACs on how to determine coverage with reference to a state’s scope of practice laws, including instructions on how to proceed if the state scope of practice laws are not explicit, and (2) clarify the applicability of the CRNA supervision rule to payment for CRNA-provided chronic pain management services. We provided a draft of this report to HHS for comment. In its written response, reproduced in appendix IV, HHS concurred with our two recommendations. Regarding our recommendation to provide specific instructions to MACs on how to determine coverage with reference to a state’s scope of practice laws, HHS stated that CMS plans to send a letter directing all MACs to seek clarification from appropriate state officials or entities if state scope of practice laws are not explicit. Regarding our recommendation to clarify the applicability of the CRNA supervision rule to payment for CRNA-provided chronic pain management services, HHS stated that CMS will clarify that the supervision rule governs only anesthesia services furnished in hospitals or ASCs. HHS also provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its date. At that time, we will send copies to the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. This appendix describes our methodology for analyzing recent trends in billing for selected chronic pain procedures, as well as changes in the number of procedures billed during the period of Noridian Healthcare Solutions (Noridian) denials. It also describes our efforts to ensure the reliability of the data. 
We focused our review on a set of seven categories of chronic pain procedures and corresponding current procedural terminology (CPT) codes. To select procedures, we obtained an American Association of Nurse Anesthetists (AANA) list of CPT codes that are billed by certified registered nurse anesthetists (CRNA). We categorized the CPT codes by procedure, in consultation with nurse and physician pain experts. We narrowed this list to procedures that were either reported by a Medicare Administrative Contractor (MAC) billing specialist as most likely to be used in treating chronic pain (as opposed to acute pain), or commonly mentioned across payer resources—such as local coverage determinations (LCD)—as pain management options. We excluded other types of chronic pain management services, such as evaluation and management (E/M) and pharmacological services, because of the unavailability of reliable data. The seven categories of procedures we selected were autonomic nerve blocks, epidural injections, transforaminal epidural injections, paravertebral facet joint injections, facet neurolytic destruction, somatic nerve blocks, and trigger point injections. The selected codes exclude codes used for emerging technologies, services, or procedures. To determine trends in billing for the selected procedures, we analyzed 100 percent of Medicare fee-for-service (FFS) paid claims from 2009 through 2012. We calculated the number of procedures billed to Medicare FFS from the carrier/physician file, excluding claims billed by some critical access hospitals. We considered procedures administered on more than one vertebral level of the spine to be separate procedures. We derived overall expenditures for selected chronic pain procedures from both provider payments (through the physician/carrier file) and outpatient facility payments (through the outpatient file). We took additional steps in an effort to narrow our focus to procedures used to treat chronic pain. First, we excluded claims for procedures billed as distinct procedural services during the same encounter as another procedural service, where normally both would not be billable. We confirmed with several Medicare or chronic pain billing experts that, while not exclusively, these procedures are more likely than not to be for acute pain occurring in conjunction with another procedure. Additionally, we excluded claims in the carrier/physician file for procedures performed in the hospital inpatient setting. Several MACs told us that chronic pain is treated almost exclusively in outpatient settings. While these steps mitigated overcounting the number of chronic pain procedures, our analysis may still include some acute procedures and may exclude some chronic procedures. We disaggregated biller (provider) type based largely on the provider specialty indicated on the claim. The exception to this is the pain physician biller type. To identify physicians that are board certified in pain medicine—pain physicians—we cross-referenced the names of physicians provided by the American Board of Medical Specialties (ABMS) as board certified in pain medicine to a list of all Medicare providers, as maintained by the Centers for Medicare & Medicaid Services (CMS) through the National Plan and Provider Enumeration System (NPPES). Using a name-matching strategy, we were able to match 91 percent of the pain physicians to NPPES records. We then used the provider identifier from the NPPES data to identify pain physicians on the claims. Our count of procedures provided by nonphysicians may be conservative. 
For example, physicians and certain other providers may bill “incident to”—whereby, for example, a physician might bill for a supervised service or procedure furnished by a nurse practitioner (NP), physician assistant (PA), or CRNA. There is no way on the claim to determine when a service is billed “incident to,” rather than provided completely by the billing professional. Claims for services billed “incident to” indicate the specialty of the billing professional rather than that of the professional who furnished the service. To the extent that nonphysician professionals provided services in whole or in part that were billed “incident to” another professional, we may have undercounted procedures provided by CRNAs, NPs, or PAs. In addition, providers, including pain physicians, can reassign their billing so that their employer may bill on their behalf. In this case, the provider identifier on the claim would be that of the employer, and we would not capture the provider as a pain physician based on our name-matching strategy. We also disaggregated the data by geographic location. To analyze urban and rural biller (provider) status, we used the CMS Core-Based Statistical Area crosswalk, identifying rural providers as those with a zip code that is not associated with a Core-Based Statistical Area. To determine the extent to which the number of selected chronic pain procedures billed to Medicare FFS changed during the period of denials, we analyzed Medicare FFS claims from 2009 through 2012 in those states under Noridian’s jurisdiction that were subject to CRNA denials and under Noridian’s jurisdiction for all 4 years of the study period. At the time of our analysis, Noridian’s jurisdiction included Alaska, Arizona, Idaho, Montana, North Dakota, Oregon, South Dakota, Utah, Washington, and Wyoming. We excluded Washington state, where CRNAs are dually trained as NPs, and thus not subject to the denials. We also excluded Idaho because it was under another contract until 2011. We then limited our analysis to those states where CRNAs constituted at least 1 percent of the chronic pain provider market in 2009: Montana, South Dakota, and Wyoming. We measured the overall number of procedures billed to Noridian for the same set of selected chronic pain procedures, using the same methodology as in the broader trend analysis. We compared billing for selected chronic pain procedures prior to the MAC denials—which began in 2011—to billing in 2012 when the MAC denial policy was fully implemented. We assessed the trend both state-wide and in rural areas. We ensured the reliability of the Medicare claims data, ABMS pain physician data, and NPPES data used in this report by performing appropriate electronic data checks, reviewing relevant documentation, and interviewing officials and representatives knowledgeable about the data. We found the data were sufficiently reliable for the purpose of our analyses. We conducted this performance audit from March 2013 through February 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
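As a rough illustration of the disaggregation steps described in this appendix (name-matching pain physicians to NPPES, classifying rural billers with the zip-to-CBSA crosswalk, and limiting claims to selected outpatient procedures), the following Python sketch uses hypothetical file names, column names, and placeholder procedure codes; it is not the actual analysis code.

```python
import pandas as pd

SELECTED_CODES = {"CODE_A", "CODE_B", "CODE_C"}        # placeholder procedure codes
claims = pd.read_csv("carrier_claims.csv")             # hypothetical claims extract
nppes = pd.read_csv("nppes_providers.csv")             # hypothetical NPPES extract
abms_pain = pd.read_csv("abms_pain_physicians.csv")    # hypothetical ABMS list
cbsa_xwalk = pd.read_csv("zip_to_cbsa.csv")            # hypothetical zip-to-CBSA crosswalk

# Keep only the selected procedures and drop the hospital inpatient setting.
claims = claims[claims["cpt_code"].isin(SELECTED_CODES)]
claims = claims[claims["place_of_service"] != "inpatient_hospital"]

# Name-match ABMS pain physicians to NPPES, then flag their claims as pain-physician claims.
abms_names = set(zip(abms_pain["last_name"].str.upper(), abms_pain["first_name"].str.upper()))
nppes["is_pain_physician"] = [
    (last, first) in abms_names
    for last, first in zip(nppes["last_name"].str.upper(), nppes["first_name"].str.upper())
]
claims = claims.merge(nppes[["npi", "is_pain_physician"]], on="npi", how="left")

# A biller is classified as rural if its zip code maps to no Core-Based Statistical Area.
claims = claims.merge(cbsa_xwalk, on="zip_code", how="left")
claims["rural"] = claims["cbsa_code"].isna()

print(claims.groupby(["is_pain_physician", "rural"]).size())
```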
From 2009 through 2012, the number of selected chronic pain procedures billed to Medicare fee-for-service (FFS) grew by about 5.7 percent annually, while Medicare spending on these procedures grew at a slightly higher rate. (See tables 4 and 5.) Growth rates varied across procedures; for example, claims for somatic nerve blocks and paravertebral facet joint injections rose more rapidly at about 11 percent and 9 percent per year, respectively, while claims for epidural injections rose by about 2 percent annually. Overall, Medicare payments for selected chronic pain procedures increased somewhat faster than the number billed, rising 6.5 percent annually between 2009 and 2012. (See table 5.) This rate of growth is above the average growth rate of 5.3 percent per year in overall Medicare Part B spending over the same period. As with the trend in the number billed, average annual growth in expenditures varied across selected chronic pain procedures; expenditures grew most rapidly for somatic nerve blocks (20 percent annually) and facet neurolytic destruction (13 percent annually), while expenditures for epidural injections grew least rapidly (5 percent annually). This appendix provides further detail on how certified registered nurse anesthetists’ (CRNA) market share of selected chronic pain procedures changed between 2009 and 2012, by state. In addition to the contact named above, Rosamond Katz, Assistant Director; Sandra C. George; Richard Lipinski; Kate Nast; and Kathryn Richter made key contributions to this report.
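The average annual growth rates cited in this appendix (about 5.7 percent for the number of procedures billed and 6.5 percent for payments from 2009 through 2012) can be read as compound annual rates over three year-to-year steps; a minimal sketch of that arithmetic, using hypothetical endpoint values and assuming the compound-rate interpretation, follows.

```python
# Compound annual growth over 2009-2012 (three year-over-year steps).
# The endpoint values are hypothetical; only the formula is the point.
def average_annual_growth(start_value, end_value, years=3):
    return (end_value / start_value) ** (1 / years) - 1

# A series growing from 1,000,000 to 1,181,000 procedures over three years
# works out to roughly 5.7 percent average annual growth.
print(round(average_annual_growth(1_000_000, 1_181_000) * 100, 1))  # 5.7
```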
Chronic pain costs the nation about $600 billion each year, a quarter of which is borne by Medicare. One MAC, Noridian Healthcare Solutions (Noridian), began denying CRNA claims for certain chronic pain services in 2011, citing patient safety concerns. CMS issued a rule, effective January 2013, clarifying that CRNAs can bill Medicare for “any services that a [CRNA] is legally authorized to perform in the state in which the services are furnished,” including chronic pain management services. GAO was asked to review Medicare's payment policy regarding the provision of chronic pain management services by CRNAs. This report examines, among other things, (1) trends in Medicare provider billing for selected chronic pain procedures; (2) in which states MACs allowed payment for selected procedures billed by CRNAs as of early 2013; and (3) how MACs implemented the payment policy. To do this, GAO selected seven categories of chronic pain procedures, in consultation with pain care experts. GAO analyzed Medicare claims data from 2009 through 2012, by provider type and geography. To determine which MACs allow CRNA payments and how MACs implemented CMS's policy, GAO interviewed medical directors at all nine MACs. From 2009 through 2012, certified registered nurse anesthetists (CRNA)—a type of advanced-practice nurse specializing in anesthesia care—billed Medicare fee-for-service (FFS) for a minimal share of selected chronic pain procedures, less than ½ of 1 percent of these procedures in each year. Physicians without board certification in pain medicine billed for the majority of selected procedures each year, while pain physicians consistently billed for roughly 40 percent of selected procedures. Furthermore, although the number of chronic pain procedures billed by all rural providers increased from 2009 through 2012, the number of procedures billed by rural CRNAs declined over the period. Of all CRNA claims for selected procedures, the share billed by CRNAs in rural areas fell from 66 percent in 2009 to 39 percent in 2012. As of early 2013, Medicare Administrative Contractors (MAC)—entities that pay medical claims on behalf of Medicare—allowed payment to CRNAs for all selected procedures in 19 states, allowed payment for a subset of selected procedures in 30 states and the District of Columbia, and denied payments for all selected procedures in 1 state. Where MACs allowed payment to CRNAs for only certain procedures, payment policies indicated substantial variation in the specific allowed procedures. Three of the nine MACs took steps to implement a Department of Health and Human Services' (HHS) Centers for Medicare & Medicaid Services (CMS) rule, effective January 2013, that defers to state scope of practice laws to inform coverage for CRNAs. CMS relies on MACs to review each state's CRNA scope of practice laws. However, most MACs reported difficulty interpreting state scope of practice laws regarding the services that CRNAs are allowed to provide; MACs noted that state scope of practice laws generally lack detail on which specific services CRNAs can perform. In addition, two MACs assumed that Medicare's rule requiring physician supervision for anesthesia services provided by CRNAs in hospital and ambulatory surgical center settings applied to chronic pain management services provided in all settings; this may have unnecessarily restricted the services for which CRNAs are allowed to bill in certain states. 
GAO recommends that CMS provide specific instructions to MACs on (1) how to determine coverage with reference to a state's scope of practice laws, and (2) the application of the CRNA supervision rule. HHS concurred with these recommendations.
In DB plans, formulas set by the employer determine employee benefits. DB plan formulas vary widely, but benefits are frequently based on participant pay and years of service, and they are typically paid upon retirement as a lifetime annuity, that is, periodic payments until death. Because DB plans promise to make payments in the future and because tax-qualified DB plans must be funded, employers must use present value calculations to estimate the current value of promised benefits. The calculations require making assumptions about factors that affect the amount and timing of benefit payments, such as an employee’s retirement age and expected mortality, and about the expected return on plan assets, expressed in the form of an interest rate. The present value of accrued benefits calculated using mandated assumptions is known as a plan’s “current liability.” Current liability provides an estimate of the amount of assets a plan needs today to pay for accrued benefits. The Employee Retirement Income Security Act of 1974 (ERISA), and several amendments to the law since its passage, established minimum funding requirements for sponsors of pension plans in order to try to ensure that plans have enough assets to pay promised benefits. Compliance with the minimum funding requirements is recorded through the plan’s funding standard account (FSA). The FSA tracks events that affect the financial health of a plan during the plan year: credits, which reflect improvements to the plan’s assets, such as contributions, amortized experience gains, and interest; and charges, which reflect an increase in the plan’s financial requirements, such as the plan’s normal cost and amortized amounts for the initial actuarial liability, experience losses, and increases in a plan’s benefit formula. ERISA and the Internal Revenue Code (IRC) prescribe rules regarding the assumptions that sponsors must use to measure plan liabilities and assets. For example, for plan years 2004 and 2005, the IRC specifies that the interest rate used to calculate a plan’s current liability must fall within 90 to 100 percent of the weighted average of the rate on an index of long-term investment-grade corporate bonds during the 4-year period ending on the last day before the beginning of the plan year. Similarly, rules dictate that sponsors report an “actuarial” value of assets that must be based on reasonable assumptions and must take into account the assets’ market value. This value may differ in any given year, within a specified range, from the current market value of plan assets, which plans also report. While different methodologies and assumptions will change a plan’s reported assets and liabilities, sponsors eventually must pay the amount of benefits promised; if the assumptions used to compute current liability differ from the plan’s actual experience, current liability will differ from the amount of assets actually needed to pay benefits. Funding rules generally presume that the plan and the sponsor are ongoing entities, and plans do not necessarily have to maintain an asset level equal to current liabilities every year. However, the funding rules include certain mechanisms that are intended to keep plans from becoming too underfunded. One such mechanism is the AFC, introduced by the Omnibus Budget Reconciliation Act of 1987 (OBRA ‘87). 
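As a hedged illustration of the present value mechanics described above, the following sketch discounts an assumed level benefit stream at two assumed interest rates; it is a simplification (a fixed payment term rather than a life annuity, and entirely hypothetical amounts and rates), not the statutory current liability calculation.

```python
# Present value of a level annual benefit paid for a fixed number of years,
# beginning after a deferral period. All inputs are hypothetical.
def present_value(annual_benefit, years_deferred, years_of_payments, rate):
    return sum(
        annual_benefit / (1 + rate) ** (years_deferred + t)
        for t in range(1, years_of_payments + 1)
    )

benefit = 20_000  # hypothetical annual benefit at retirement
pv_at_5_percent = present_value(benefit, years_deferred=10, years_of_payments=20, rate=0.05)
pv_at_6_percent = present_value(benefit, years_deferred=10, years_of_payments=20, rate=0.06)

# The same promised benefits produce a smaller estimated liability at the higher rate.
print(round(pv_at_5_percent), round(pv_at_6_percent))
```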
The AFC requires sponsors of plans with more than 100 participants that have become underfunded to a prescribed level to make additional plan contributions in order to prevent funding levels from falling too low. With some exceptions, plans with an actuarial value of assets below 90 percent of current liabilities are affected by the AFC rules. In addition to setting funding rules, ERISA established PBGC to guarantee the payment of the pension benefits of participants, subject to certain limits, in the event that the plan could not. Under ERISA, the termination of a single-employer DB plan may result in an insurance claim with the single-employer program if the plan has insufficient assets to pay all benefits accrued under the plan up to the date of plan termination. PBGC may pay only a portion of a participant’s accrued benefit because ERISA places limits on the PBGC benefit guarantee. For example, PBGC generally does not guarantee benefits above a certain amount, currently $45,614 annually per participant at age 65. Additionally, benefit increases arising from plan amendments in the 5 years immediately preceding plan termination are not fully guaranteed, although PBGC will pay a portion of these increases. Further, PBGC’s benefit guarantee amount is limited to the monthly straight life annuity benefit the participant would receive if she were to commence the annuity at the plan’s normal retirement age. Sponsors of PBGC-insured DB plans pay annual premiums to PBGC for their coverage. Premiums have two components: a per participant charge paid by all sponsors (currently $19 per participant) and a “variable-rate” premium that some underfunded plans pay based on the level of unfunded benefits. The recent decline of PBGC’s single-employer program has occurred in the context of the long-term stagnation of the DB system. The number of PBGC-insured plans has decreased steadily from approximately 110,000 in 1987 to about 29,000 in 2004. While the number of total participants in PBGC-insured single-employer plans has grown approximately 25 percent since 1980, the percentage of participants who are active workers has declined from 78 percent in 1980 to 50 percent in 2002. Unless something reverses these trends, PBGC may have a shrinking plan and participant base to support the program in the future. From 1995 to 2002, while most of the 100 largest plans had sufficient assets to cover their plan liabilities, many did not. Furthermore, because of leeway in the actuarial methodology and assumptions sponsors can use to measure plan assets and liabilities, underfunding may actually have been more severe and widespread than reported at the end of the period. Because of flexible funding rules permitting the use of accounting credits other than cash contributions to satisfy minimum funding obligations, on average 62.5 of the 100 largest plans each year received no cash contributions from their sponsors. Although as a group, funding levels among the 100 largest plans were reasonably stable and strong from 1996 to 2000, by 2002, more than half of the largest plans were underfunded (see fig. 1). Two factors in the deterioration of many plans’ finances were the decline in stock prices and prevailing interest rates. From 2000 to 2002, stock prices declined sharply each year, causing a decline in the value of many plans’ pension assets. 
In addition, over the sample period, 30-year Treasury bond rates, which served as the benchmark for the rate used by plans to calculate pension liabilities, generally fell steadily, raising current liabilities. The combination of lower asset values and higher pension liabilities had a serious, adverse effect on overall DB plan funding levels. Accurate measurement of a plan’s liabilities and assets is central to the sponsor’s ability to maintain assets sufficient to pay promised benefits, as well as to the transparency of a plan’s financial health. Because many plans chose allowable actuarial assumptions and asset valuation methods that may have altered their reported liabilities and assets relative to market levels, it is possible that funding over our sample period was actually worse than reported for a number of reasons. These include the use of above-market rates to calculate current liabilities and actuarial measurement of plan assets that differs from market values. Reported current liabilities are calculated using a weighted average of rates from the 4-year period before the plan year. While this allows sponsors to smooth fluctuations in liabilities that sharp swings in interest rates would cause, thereby reducing volatility in minimum funding requirements, it also reduces the accuracy of liability measurement because the rate anchoring reported liabilities is likely to differ from current market values. To the extent that the smoothed rate used to calculate current liabilities exceeds current rates, the 4-year smoothing could reduce reported liabilities relative to those calculated at current market values. Further, rules allowed sponsors to measure liabilities using a rate above the 4-year weighted average. The 4-year weighted average of the reference 30-year Treasury bond rate exceeded the current market rate in 76 percent of the months between 1995 and 2002, and the highest allowable rate for calculating current liabilities exceeded the current rate in 98 percent of those months. Sponsors of the plans in our sample chose the highest allowable interest rate to value their current liabilities 62 percent of the time from 1995 to 2002. For example, an interest rate 1 percentage point higher than the statutorily required interest rate would decrease the reported value of a typical plan’s current liability by around 10 percent. As with liabilities, the actuarial value of assets used for funding may also differ from current market values. Under the IRC, actuarial asset values cannot be consistently above or below market, but in a given year may be anywhere from 80 to 120 percent of market asset levels. Among the plans we examined, on average each year, 86 percent reported a different value for actuarial and market assets. On average, using the market value instead of the actuarial value of assets would have raised reported funding levels by 6.5 percent each year. However, while the market value exceeded the actuarial value of assets during the late 1990s, when plan funding was generally strong, in the weaker funding year of 2002 market assets dipped below actuarial assets. In 2001 and 2002, calculating plan funding levels using market assets would have greatly increased the number of plans below 90 percent funded each year. A similar calculation for 2002 would have drastically increased the number of large plans below 80 percent funded, from 6 to 24. 
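A minimal sketch of the actuarial-versus-market asset effect just described, using hypothetical figures and the 80 to 120 percent corridor noted above, shows how the choice of asset measure can move a plan's reported funded percentage.

```python
# All dollar figures are hypothetical.
def clamp_to_corridor(smoothed_value, market_value):
    """The actuarial value may not fall outside 80 to 120 percent of market value."""
    return min(max(smoothed_value, 0.8 * market_value), 1.2 * market_value)

current_liability = 5_000_000_000   # hypothetical current liability
market_assets     = 4_300_000_000   # hypothetical market value after a downturn
smoothed_assets   = 5_100_000_000   # hypothetical smoothed (pre-corridor) value

actuarial_assets = clamp_to_corridor(smoothed_assets, market_assets)
print(round(100 * actuarial_assets / current_liability, 1))  # reported: 102.0 percent funded
print(round(100 * market_assets / current_liability, 1))     # at market: 86.0 percent funded
```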
Thus, we see some evidence that using actuarial asset values lowered the volatility of reported funding levels relative to those using market asset values. However, the use of the actuarial value of assets also may have disguised plans’ funded status as their financial condition worsened. Two large plans that terminated in 2002 illustrate the potential effects of discrepancies between reported and actual funding. The Bethlehem Steel Corporation in 2002 reported that its plan was 85.2 percent funded on a current liability basis; yet the plan terminated later that year with assets of less than half of the value of promised benefits. The PBGC single-employer program suffered a $3.7 billion loss as a result of that termination, its largest ever at the time. Similarly, LTV Steel Company reported that its pension plan for hourly employees was over 80 percent funded on its Form 5500 filing for plan year 2001. When this plan terminated in March 2002, it had assets equal to 52 percent of benefits, a shortfall of $1.6 billion. For the 1995 to 2002 period, the sponsors of the 100 largest plans each year on average made relatively small cash contributions to their plans. Annual cash contributions for the top 100 plans averaged approximately $97 million on plans averaging $5.3 billion in current liabilities, with figures in 2002 dollars. This average contribution level masks a large difference in contributions between 1995 and 2001, during which period annual contributions averaged $62 million, and in 2002, when contributions increased significantly to almost $395 million per plan. Further, in 6 of the 8 years in our sample, a majority of the largest plans made no cash contribution to their plan (see fig. 2). On average each year, 62.5 plans received no cash contribution, including an annual average of 41 plans that were less than 100 percent funded. The funding rules allow sponsors to meet their plans’ funding obligations through means other than cash contributions. If a plan has sufficient FSA credits from other sources, such as an existing credit balance or large interest or amortization credits, to at least match its FSA charges, then the plan does not have to make a cash contribution in that year. Because meeting minimum funding requirements depends on reconciling total annual credits and charges, and not specifically on cash contributions, these other credits can substitute for cash contributions. From 1995 to 2002, it appears that many of the largest plan sponsors relied more heavily on other FSA credits than on cash contributions to meet minimum funding obligations. The average plan’s credit balance carried over from a prior plan year totaled about $572 million (2002 dollars) each year, and 88 percent of plans on average carried forward a prior credit balance into the next plan year from 1995 to 2002. Not only could these accumulated credit balances help a plan to meet minimum funding obligations in future years, but they also accrue interest that augments a plan’s FSA credits and further helps meet minimum funding requirements. In contrast, annual cash contributions averaged only $97 million, in 2002 dollars. On average each year, cash contributions represented 90 percent of the minimum required annual funding (from cash and credits). However, this average figure was elevated by high levels of contributions by some plans in 1995, 1996, and 2002. From 1997 to 2000, when funding levels were generally strong, cash contributions averaged only 42 percent of minimum required annual funding. 
During those years, 1997 to 2000, a majority of plans in our sample received no cash contribution. Cash contributions represented a smaller percentage of annual minimum required funding during years when plans were generally well funded, indicating that in these years plans relied more heavily on credits to meet minimum funding obligations. In addition to large credit balances brought forward from prior years, sponsors were able to apply funding credits from other sources, such as net interest credits ($42 million per plan per year, on average) and credits from the excess of a plan's calculated minimum funding obligation above the plan's full funding limitation ($47 million). Other plan events result in FSA charges, which reflect increases in the plan's obligations. For example, plans reported annual amortization losses, which could result from actual investment rates of return on plan assets below assumed rates of return (including outright losses) or increases in the generosity of plan benefits; these net amortization charges averaged almost $28 million in our sample. Funding credits, offset by charges, may help satisfy a plan's minimum funding obligation, substituting for cash contributions, and may explain why a significant number of sponsors made zero cash contributions to their plans in many years. The FSA credit accounting system provides some advantages to DB plan sponsors. Amortization rules require the sponsor to smooth certain events that affect plan finances over several years, and accumulated credit balances act as a buffer against swings in future funding requirements. These features often allow sponsors to better regulate their annual level of contributions, compared with the annual fluctuations that would occur if funding were based strictly on yearly differences between the market value of plan assets and current liabilities. Similarly, current-law measurement and funding rules provide a plan with some ability to dampen volatility in required funding caused by economic events that may sharply change a plan's liabilities or assets. Pension experts told us that this predictability and flexibility make DB sponsorship more attractive to employers. However, the FSA accounting system, by smoothing annual contributions and liabilities, may distort a plan's funding level. For example, suppose a sponsor accrues a $1 million credit balance by making a contribution above the required minimum in a year. Suppose then that this $1 million purchases assets that lose all of their value by the following year. Even though the plan no longer holds this $1 million in assets, the sponsor could still use that credit balance (plus interest on the balance) to reduce the following year's contribution to the plan. Because of amortization rules, the sponsor would have to report only a portion of that lost $1 million in asset value as a plan charge the following year. Similarly, sponsors are required to amortize the financial effect of a change in a plan's benefit formula, which might result in increased benefits and therefore a higher funding obligation, over a 30-year period. Thus, even though higher benefits would immediately raise a plan's obligation to fund, the sponsor must spread this effect in the plan's FSA over 30 years. This disconnection between the reported and current market condition of plan finances raises the risk that plans will not react quickly enough to deteriorating plan conditions. Further, it reduces the transparency of plan financial information to stakeholders, such as participants and investors.
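Carrying the hypothetical above through one more step shows how the smoothing works in the sponsor's favor. The interest rate and the 5-year amortization period assumed for the investment loss are illustrative only; the 30-year period for benefit-formula changes is the one noted above:

```python
# Illustrative sketch of the smoothing effects described above (in $ millions).

INTEREST = 0.06  # assumed rate credited on the FSA balance

def annual_amortization_charge(amount, years):
    """Straight-line sketch: only amount / years hits the FSA each year."""
    return amount / years

# Year 1: a $1 million contribution above the minimum creates a credit balance.
credit_balance = 1.0
# Year 2: the purchased assets lose all value, yet the balance (plus interest)
# is still available to offset that year's required contribution...
credit_available = credit_balance * (1 + INTEREST)       # 1.06
# ...while only a fraction of the loss shows up as a charge (assumed 5-year period).
loss_charge = annual_amortization_charge(1.0, 5)          # 0.2 per year
# A benefit increase raising obligations by $30 million is spread over 30 years.
benefit_charge = annual_amortization_charge(30.0, 30)     # 1.0 per year
print(credit_available, loss_charge, benefit_charge)
```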
The experience of two large plans that terminated in a severely underfunded state helps illustrate the potential disconnection between FSA accounting and the plan's true funded status. As stated earlier, the Bethlehem Steel Corporation and LTV Steel Company both had plans terminate in 2002, each with assets approximately equal to 50 percent of the value of benefits. Yet each plan was able to forgo a cash contribution each year from 2000 to 2002 by using credits to satisfy minimum funding obligations, primarily from large accumulated credit balances from prior years. Despite being severely underfunded, each plan reported an existing credit balance in 2002, the year of termination. Another possible explanation for the many instances in which sponsors made no annual cash contribution involves the full funding limitation (FFL). The FFL is a cap on minimum required contributions to plans that reach a certain funding level in a plan year. However, the FFL does not represent the contribution that would raise plan assets to the level of current liability. The FFL represents a "maximum minimum" contribution for a sponsor in a given year—a ceiling on the sponsor's minimum funding obligation for the plan. Between 1995 and 2002, rules permitted some plans with assets as low as 90 percent of current liability to reach the FFL, meaning that a plan could be considered fully funded without assets sufficient to cover all accrued benefits. The FFL is also distinct from the plan's annual maximum tax-deductible contribution. Because sponsors may be subject to an excise tax on contributions above the maximum deductible amount, the annual maximum contribution can act as a real constraint on cash contributions. Flexibility in the FFL rule has allowed many plan sponsors to take steps to minimize their contributions. In our sample, from 1995 to 2002 approximately two-thirds of the sponsors in each year made an annual plan contribution at least as large as the plan's FFL. However, in 65 percent of these instances, the sponsor had chosen the highest allowable rate to calculate current liability; using a lower rate to calculate current liability may have resulted in a higher FFL and, therefore, may have required a higher contribution. Further, the FFL was equal to zero for 60 percent of plans each year, on average. This means that these plans were permitted to forgo cash contributions as a result of the FFL rule. This reflects the fact that if a plan's FFL equaled zero, that plan had assets at least equal to 90 percent of current liabilities that year and would not be required to make an additional contribution. The interaction between the FFL rule and the annual maximum tax-deductible contribution also has implications for the amount that plan sponsors can contribute. In some years, the maximum deductible contribution rules effectively prevented some sponsors from making any cash contribution. In 1998, 50 of the 60 plans that contributed the maximum deductible amount had a maximum deductible contribution of zero (see fig. 3). This meant that any cash contribution into those plans that year would generally subject the sponsor to an excise tax. For 37 of these plans, this was the case even if the sponsor had chosen the lowest statutorily allowed interest rate for plan funding purposes, which would have produced the highest calculated current liabilities. This constraint did not apply to as many plans in some other years. For example, in 1996, 52 plans contributed the maximum deductible amount.
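A short sketch of the FFL's 90 percent floor, described above, shows why so many plans had an FFL of zero. The statutory limitation has additional components that are omitted here, and the figures are hypothetical:

```python
# Minimal sketch of the 90-percent-of-current-liability floor on the full
# funding limitation (FFL). The actual FFL also depends on other limits
# (omitted here); only the floor discussed above is shown.

def ffl_floor(actuarial_assets, current_liability):
    """Floor on the FFL: the shortfall, if any, below 90% of current liability."""
    return max(0.0, 0.9 * current_liability - actuarial_assets)

# A plan with assets at 92 percent of current liability ($ millions) hits a
# floor of zero, so the FFL rule alone would not compel any cash contribution,
# even though assets do not cover all accrued benefits.
print(ffl_floor(actuarial_assets=920.0, current_liability=1000.0))  # 0.0
print(ffl_floor(actuarial_assets=850.0, current_liability=1000.0))  # 50.0
```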
Thirty of those 52 plans had a maximum deductible contribution of zero. Fourteen of the plans in this situation could not have made any additional contributions. However, the other 16 could have made at least some contributions by choosing a lower interest rate to raise their maximum deductible contribution level. Funding rules dictate that a sponsor of a plan with more than 100 participants whose actuarial value of assets falls below 90 percent of liabilities, measured using the highest allowable interest rate, may be liable for an AFC in that year. More specifically, a plan that is between 80 and 90 percent funded is subject to an AFC unless the plan was at least 90 percent funded in at least 2 consecutive of the 3 previous plan years. A plan with assets below 80 percent of liabilities, calculated using the highest allowable rate, is assessed an AFC regardless of its funding history. Despite the statutory threshold of a 90 percent funding level for some plans to owe an AFC, in practice a plan needed to be much more poorly funded to become subject to an AFC. While about 10 plans in our sample each year had funding below 90 percent on a current liability basis, on average fewer than 3 plans each year owed an AFC (see fig. 4). From 1995 to 2002, only 6 of the 187 unique plans that composed the 100 largest plans each year were ever assessed an AFC, and these plans owed an AFC a total of 23 times in years in which they were among the 100 largest plans. By the time a sponsor owed an AFC, its plan had an average funding level of 75 percent, suggesting that the plan's financial condition was already weak when the AFC was triggered. Further, while we observed 60 instances between 1995 and 2002 in which a plan had funding levels between 80 and 90 percent, only 5 times was a plan in this funding range subject to an AFC. This suggests that, in practice, 80 percent was the realistic funding threshold for owing or avoiding the AFC. Even for plans subject to an AFC, other FSA credits could help satisfy minimum funding obligations. Among plans in our sample assessed an AFC, the average annual AFC owed was $234 million, but annual contributions among this group averaged $186 million, with both figures in 2002 dollars. In addition, 61 percent of the time a plan was subject to an AFC, the sponsor used an existing credit balance to help satisfy its funding obligation. Over 30 percent of the time a plan was assessed an AFC, the funding rules allowed the sponsor to forgo a cash contribution altogether that year. Sponsors that owed an AFC had mixed success at improving their plans' financial conditions in subsequent years, and most of these plans remained significantly underfunded. Among the 6 plans that owed the AFC, funding levels rose only slightly, from an average of 75 percent when a plan was first assessed an AFC to an average of 76 percent across all subsequent years. All of these plans were assessed an AFC more than once. Again, terminated plans provide a stark illustration of weaknesses in the rules' ability to ensure sufficient funding. Bethlehem Steel's plan was assessed an AFC of $181 million in 2002, but the company made no cash contribution that year, just as it had not in 2000 or 2001, years in which the plan was not assessed an AFC. When the plan terminated in late 2002, its assets covered less than half of the $7 billion in promised benefits.
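The AFC eligibility test described above can be restated directly as a small decision function. The funded ratios in the examples are hypothetical, and liabilities are assumed to be measured at the highest allowable rate, as the rule requires:

```python
# Sketch of the additional funding charge (AFC) trigger for plans with more
# than 100 participants, following the rule described above.

def owes_afc(funded_ratio, prior_ratios):
    """funded_ratio: actuarial assets / current liability for the plan year.
    prior_ratios: funded ratios for the 3 preceding plan years, most recent first."""
    if funded_ratio < 0.80:
        return True   # below 80 percent funded: AFC applies regardless of history
    if funded_ratio < 0.90:
        # 80-90 percent funded: exempt only if at least 90 percent funded
        # in 2 consecutive of the 3 previous plan years
        exempt = any(a >= 0.90 and b >= 0.90
                     for a, b in zip(prior_ratios, prior_ratios[1:]))
        return not exempt
    return False      # 90 percent or better: no AFC

print(owes_afc(0.75, [0.95, 0.93, 0.88]))  # True  -- below 80 percent
print(owes_afc(0.85, [0.95, 0.93, 0.88]))  # False -- 90%+ in 2 consecutive prior years
print(owes_afc(0.85, [0.95, 0.85, 0.92]))  # True  -- no 2 consecutive prior years at 90%+
```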
LTV Steel, which terminated its pension plan for hourly employees in 2002 with assets $1.6 billion below the value of benefits, was assessed an AFC each year from 2000 to 2002, but only for $2 million, $73 million, and $79 million, respectively, or no more than 5 percent of the eventual funding shortfall. Despite these AFC assessments, LTV Steel made no cash contributions to this plan from 2000 to 2002. Both plans were able to apply existing credits instead of cash to fully satisfy minimum funding requirements. The recent funding experiences of large plans, especially those sponsored by financially weak firms, illustrate the limited effectiveness of certain current funding rules and represent a potentially large implicit financial risk to PBGC. The financial health of a plan sponsor may be key to plan funding decisions because sponsors must make funding and contribution decisions in the context of overall business operations. From 1995 to 2002, on average, 9 percent of the largest 100 plans were sponsored by a firm with a speculative grade credit rating, suggesting financial weakness and poor creditworthiness. The financial strength of plan sponsors' business operations has been a key determinant of risk to PBGC. Financially weak sponsors of large, underfunded plans are, by the nature of the insurance offered by PBGC, likely to impose the greatest financial burden on PBGC and other premium payers. For instance, PBGC typically becomes trustee of a plan when a covered sponsor is unable to support the plan financially, such as in the event of bankruptcy or insolvency. Current funding rules, coupled with the presence of PBGC insurance, may create incentives for financially distressed plan sponsors to avoid or postpone contributions and increase benefits. Many of the minimum funding rules are designed so that sponsors of ongoing plans may smooth contributions over a number of years. Sponsors that are in financial distress, however, may have a more limited time horizon and place other financial priorities above "funding up" their pension plans. To the extent that the presence of PBGC insurance causes financially troubled sponsors to alter their funding behavior, PBGC's potential exposure increases. Underfunded plans sponsored by financially weak firms pose the greatest immediate threat to PBGC's single-employer program. PBGC's best estimate of the total underfunding of plans sponsored by companies with credit ratings below investment grade and classified by PBGC as "reasonably possible" to terminate was $96 billion as of September 30, 2004 (see fig. 5). From 1995 to 2002, we observed that plans sponsored by speculative grade-rated firms had lower levels of average funding compared with the average for the 100 largest plans. For instance, the average funding level of these plans was 12 percentage points lower than the funding level for all plans from 1995 to 2002. Plans sponsored by speculative grade-rated firms were also more likely to be underfunded. From 1995 to 2002, each year, on average, 18 percent of plans sponsored by speculative grade-rated firms had assets that were below 90 percent of current liability. Plans sponsored by nonspeculative grade-rated firms had just over half this incidence, or an average of 10 percent of plans funded below 90 percent of current liability. Large plans sponsored by firms with a speculative grade rating were also more likely to incur an AFC.
While plans sponsored by speculative grade-rated firms accounted for only 9 percent of all plans that we examined over the 1995 to 2002 period, they accounted for just over one-third of all instances in which a sponsor was required to pay an AFC. In contrast, no high investment grade sponsors (those rated AAA or AA) were required to pay an AFC during this period. While the AFC is intended to be a backstop for underfunded plans, to the extent that plans sponsored by speculative grade-rated firms are considered to pose a significant risk of near-term termination, it may not be an effective mechanism for improving a plan's funding level. Firms that are in financial distress are, by definition, having difficulty paying their debts and may be ill equipped to afford increased contributions to their plans. That is, the AFC itself may be a symptom of plan distress rather than a solution to improve a plan's funding level. Large plans with sponsors rated as speculative grade were also generally more likely to use the highest allowable interest rate to compute their current liability under the minimum funding rules. While a majority of sponsors from all credit rating categories used the highest allowable interest rate, over the entire 1995 to 2002 period speculative grade-rated sponsors did so 23 percentage points more often than all other plans in the sample. The use of higher interest rates likely lowers a plan's reported current liability and minimum funding requirement. To the extent that this depresses cash contributions, such plans may have a higher chance of underfunding, thus creating additional financial risk to PBGC. PBGC's claims experience shows that plans with financially weak sponsors have been a source of substantial claims. Of the 41 largest claims in PBGC history for which a sponsor credit rating was known, 39 involved sponsors rated as speculative grade at least 3 years prior to termination (see fig. 6). These claims account for 67 percent of the value of total gross claims on the single-employer program from 1975 to 2004. Most of the plan sponsors involved in these claims had held speculative grade ratings for many more years prior to their eventual termination. Even 10 years prior to plan termination, 33 of these 41 claims involved sponsors rated as speculative grade. Widely reported recent large plan terminations by bankrupt sponsors and the financial consequences for PBGC have pushed pension reform into the spotlight of national concern. Our analysis here suggests that certain aspects of the funding rules have contributed to the general underfunding of pensions and, indirectly, to PBGC's recent financial difficulties. The persistence of a large number of underfunded plans, even during the strong economic period of the late 1990s, implies that current funding rules are not stringent enough to ensure that sponsors fund their pensions adequately. Further, the rules appear to lack strong mechanisms to compel sponsors to make regular contributions to their plans, even those that are underfunded or subject to an AFC. Perhaps most troubling is that current rules for measuring and reporting plan assets and liabilities may not reflect true current values and often understate the true degree of underfunding. The current rules have the reasonable and important goals of long-term funding adequacy and short-term funding flexibility.
However, our work shows that although the current system permits flexibility, it also permits reported plan funding to be inadequate, misleading, and opaque; even so, funding and contributions for some plans can still swing wildly from year to year. This arrangement does not appear to serve the interests of any DB pension stakeholder effectively. The challenge is determining how to achieve a balance of interests: how to temper the need for funding flexibility with accurate measurement, adequate funding, and appropriate transparency. Despite flaws in the funding rules, our work here shows that most of the largest plans appear to be adequately funded. Rules should acknowledge that funding will vary with cyclical economic conditions and that even sponsors who make regular contributions may find their plans underfunded on occasion. Periodic and mild underfunding is not usually a major concern, but it becomes a threat to workers' and retirees' economic security in retirement and to PBGC when the sponsor becomes financially weak and the risk of bankruptcy and plan termination rises. This suggests that perhaps the stringency of certain funding rules should be adjusted depending on the financial strength of the sponsor, with stronger sponsors being allowed greater latitude in funding and contributions than weaker sponsors that might present a near-term bankruptcy risk. However, focusing more stringent funding obligations on weak plans and sponsors alone may not be adequate, because strong companies and industries can quickly become risky ones, and, once sponsors and plans become too weak, it may be difficult for them to make larger contributions and still recover. It should also be noted that while funding rule changes are an essential piece of the overall reform puzzle, they are certainly not the only piece. Indeed, pension reform is a challenge precisely because of the necessity of fusing together so many complex, and sometimes competing, elements into a comprehensive proposal. Ideally, effective reform would improve the accuracy of plan asset and liability measurement while minimizing complexity and maintaining contribution flexibility; develop a PBGC insurance premium structure that charges sponsors fairly, based on the risk their plans pose to PBGC, and provides incentives for sponsors to fund plans adequately; address the issue of severely underfunded plans making lump-sum distributions; resolve outstanding controversies concerning cash balance and other hybrid plans by safeguarding the benefits of workers regardless of age; and improve plan information transparency for PBGC, plan participants, unions, and investors in a manner that does not add considerable burden to plan sponsors. As deliberations on reform move forward, it will be important that each of these individual elements be designed so that all work in concert toward well-defined goals. Even with meaningful, carefully crafted reform, it is possible that some DB plan sponsors may choose to freeze or terminate their plans. While these are serious concerns, the overarching goal of balanced pension reform should be to protect the retirement benefits of American workers and retirees by providing employers reasonable funding flexibility while also holding those employers accountable for the promises they make to their employees. As I noted in my opening remarks, PBGC's challenges parallel the challenges facing our Social Security system.
While both programs have adequate current revenues and assets to pay promised benefits today, both face large and growing accumulated deficits on an accrual basis. Further, timely action to address both private pension and Social Security reform is needed. However, consideration must be given to the interactive effects of any such reforms and how they contribute to addressing our nation's large and growing fiscal challenge; key demographic, economic, and workforce trends; and the economic security of Americans in their retirement years. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions you or other Members of the Committee may have. For further information, please contact Barbara Bovbjerg at (202) 512-7215. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other individuals making key contributions to this testimony included Charlie Jeszeck, Mark Glickman, and Chuck Ford. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses our recent report on the rules that govern the funding of defined benefit (DB) plans and the implications of those rules for the problems facing the Pension Benefit Guaranty Corporation (PBGC) and the DB pension system generally. In recent years, the PBGC has encountered serious financial difficulties. Prominent companies, such as Bethlehem Steel, U.S. Airways, and United Airlines, have terminated their pension plans with severe gaps between the assets these plans held and the pension promises these plan sponsors made to their employees and retirees. These terminations, and other unfavorable market conditions, have created large losses for PBGC's single-employer insurance program--the federal program that insures certain benefits of the more than 34 million participants in over 29,000 plans. The single-employer program has gone from a $9.7 billion accumulated surplus at the end of fiscal year 2000 to a $23.3 billion accumulated deficit as of September 2004, including a $12.1 billion loss for fiscal year 2004. In addition, financially weak companies sponsored DB plans with a combined $96 billion of underfunding as of September 2004, up from $35 billion 2 years earlier. Because PBGC guarantees participant benefits, there is concern that the expected continued termination of large plans by bankrupt sponsors will push the program more quickly into insolvency, generating pressure on the Congress, and ultimately the taxpayers, to provide financial assistance to PBGC and pension participants. Given these concerns, we placed the PBGC's single-employer program on GAO's high-risk list of agencies and programs that need broad-based transformations to address major challenges. In past reports, we identified several categories of reform that the Congress might consider to strengthen the program over the long term. We concluded that the Congress should consider comprehensive reform measures to reduce the risks to the program's long-term financial viability and thus enhance the retirement income security of American workers and retirees. More broadly, pension reform represents a real opportunity to address part of our long-term fiscal problems and reconfigure our retirement security systems to bring them into the 21st century. This opportunity has many related pieces: addressing our nation's large and growing long-term fiscal gap; deciding on the appropriate role and size of the federal government--and how to finance that government; and bringing the wide array of federal activities into line with today's world. Continuing on our current unsustainable fiscal path will gradually erode, if not suddenly damage, our economy, our standard of living, and ultimately our national security. We therefore must fundamentally reexamine major spending and tax policies and priorities in an effort to recapture our fiscal flexibility and ensure that our government can respond to a range of current and emerging security, social, economic, and environmental changes and challenges. The PBGC's situation is an excellent example of the need for the Congress to reconsider the role of government organizations, programs, and policies in light of changes that have occurred since PBGC's establishment in 1974. Our recent work on DB pension funding rules provides important insights into understanding the problems facing PBGC and the DB system.
To summarize our findings, while pension funding rules are intended to ensure that plans have sufficient assets to pay promised benefits to plan participants, significant vulnerabilities exist. Although from 1995 to 2002 most of the 100 largest DB plans annually had assets that exceeded their current liabilities, by 2002 over half of the 100 largest plans were underfunded, and almost one-fourth of plans were less than 90 percent funded. Further, because of leeway in the actuarial methodology and assumptions that sponsors may use to measure plan assets and liabilities, underfunding may actually have been more severe and widespread than reported. Additionally, on average, over 60 percent of the sponsors of these plans made no annual cash contribution to their plans. One key reason for this is that the funding rules allow a sponsor to satisfy minimum funding requirements without necessarily making a cash contribution each year, even though the plan may be underfunded. Further, very few sponsors of underfunded plans were required to pay an additional funding charge (AFC), a funding mechanism designed to reduce severe plan underfunding. Finally, our analysis confirms the notion that plans sponsored by financially weak firms pose a particular risk to PBGC, as these plans were generally more likely than plans sponsored by stronger firms to be underfunded, to be subject to an additional funding charge, and to use assumptions that minimize or avoid cash contributions.
As part of its mandate to guide the nation's civil space program, NASA is to preserve U.S. preeminence in critical aspects of space science, technology, and applications. The goal of life and microgravity sciences is to study gravity-dependent physical phenomena and those phenomena obscured by the effects of gravity in biological, chemical, and physical systems. Research is conducted in biotechnology (e.g., protein crystal growth), combustion science, fluid physics, life and biomedical sciences, and materials science. Life science research in space biology studies the effects of gravity on living systems by using acceleration environments across the "gravity continuum"—micro, earth-normal, and hypergravity. NASA's Office of Life and Microgravity Sciences and Applications, which was formed in 1993, funds this type of research. Between fiscal years 1989 and 1994, the annual budget authority for life and microgravity research increased by 114 percent, from $222.4 million to $476.3 million. Not all aspects of life and microgravity sciences research require a space-based environment. Short-duration, low-acceleration environments can be created in drop towers (2 to 5 seconds of free fall), aircraft flying a distinctively curved flight path (up to 23 seconds of low gravity), and suborbital rockets (over 300 seconds). Hypergravity can be created by a centrifuge. Space-based research is principally conducted in pressurized and nonpressurized facilities on the Space Shuttle. The centerpiece for this research is a 23-foot by 16-foot pressurized module—Spacelab—that fits in the Space Shuttle payload bay. Spacelab was developed by the European Space Agency and contains utilities, computers, work areas, and instrument racks for experiments. An exterior cutaway view of Spacelab is shown in figure 1. The most recent Spacelab flight was the second International Microgravity Laboratory (IML-2), which ended a 15-day mission on the Space Shuttle on July 23, 1994. IML-2 was a collaborative effort by NASA; the European Space Agency; and the national space agencies of Canada, France, Germany, and Japan. According to NASA officials, IML-2 provided a preview of the science operations to come on the space station. The IML flights, which began in January 1992, gave the U.S. scientific community access to foreign-developed flight hardware while providing the international research community with access to the Space Shuttle/Spacelab. Approximately 80 investigations were performed on IML-2, including 15 U.S. experiments—11 in biotechnology, fluid physics, and materials science and 4 in the life sciences. IML-2 was the last flight of this international series of spacelabs before the station era begins in 1997. An interior view of IML-2 is shown in figure 2. NASA publicly solicits research proposals from investigators in the life and microgravity research communities. The funding decision is principally based on an evaluation of the project's scientific merit by a peer review panel. NASA's peer review and other quality assurance procedures are outlined in appendix I. NASA intends to build a space station-era research community from the ground up. To do so, a larger cadre of ground-based researchers than is currently available will be needed to adequately support U.S. research on the station. A NASA official estimates that the number of ground-based microgravity researchers needs to increase from 73 to 240 between fiscal years 1992 and 1998. NASA officials have not made comparable estimates for life science researchers.
To accomplish this goal, NASA has abandoned its tradition—principally associated with life science research—of soliciting research proposals for general and specific space flight opportunities. Although this approach appears reasonable, the planned funding levels do not match the program's objective, and funding priorities may need to be reassessed if the number of life and microgravity ground-based investigators is to be significantly increased. In recent years, NASA has used two approaches for developing life and microgravity science research communities—"select for flight" or "select for science." In the select-for-flight approach, all of the U.S. life science and most of the U.S. microgravity investigators on IML-2 were selected from proposals submitted in response to flight-related announcements. In the select-for-science approach, two IML-2 microgravity investigators were selected from researchers who submitted proposals in response to two 1991 discipline-related ("fundamental science" and "biotechnology") research announcements. A NASA program scientist considers the IML-2 flight to have been a programmatic success and, in some respects, a model for the international space station. According to a NASA official, one indication of the flight's success was the amount of good research generated from the many proposals submitted in response to a mix of science and flight-related research announcements. Additionally, two of NASA's recent research announcements were in the select-for-flight tradition: its July 1993 announcement soliciting proposals for research on a 1998 space life sciences flight (Neurolab) and its February 1994 announcement soliciting proposals for life science research on the Russian space station Mir from 1995 to 1997. Presumably then, one effective way to develop a research community would be to solicit specific proposals for research that are directly related to the space station. NASA, however, has chosen to move toward exclusive use of "select for science," as discussed below. Although NASA recently solicited proposals specifically for research on Shuttle flights to Mir, NASA's life science office moved away from the widespread use of the select-for-flight approach in December 1993. At that time, it solicited proposals for ground-based research in space biology focused on the hypergravity effects that can be induced by NASA's centrifuges. NASA's shift to ground-based research did not stifle competition for funding: it received 650 responses to the December 1993 announcement. Although "select for science" is relatively new to life science research, all microgravity research announcements since 1990 have focused on research opportunities in one or more science disciplines. And, as if to emphasize the independence of microgravity research from space station development, NASA changed the fiscal year 1992 goals for the microgravity program. The previous goals referred to developing and using the space station, whereas the current, more general goal is to "enable research . . . by choosing the carrier most appropriate for the experiment." Physical events, unlike biological processes, can be meaningfully observed under the short-duration microgravity conditions afforded by ground-based facilities, aircraft, and suborbital rockets. Consequently, a ground-based microgravity research investigator does not always have to conduct experiments in a space environment, and many do not.
For example, of the 51 principal investigators who conducted such research at NASA's Lewis Research Center from fiscal years 1989 through 1993, only 7 have been principal investigators on space-based experiments, including a microgravity Spacelab flight in September 1995. "NASA establish a vigorous ground based research program focussing on gravitational biology in which centrifuge facilities at NASA centers are utilized for exploring science programs aimed at forces greater than 1g." NASA's strategy for using the select-for-science approach to further develop a life science research community in the station era appears reasonable based on the experience of the microgravity sciences community. First, the microgravity sciences research community has been growing. Principal investigators funded for microgravity sciences research increased by 120 percent—from 89 in calendar year 1989 to 196 in fiscal year 1993. The budget authority for microgravity sciences increased by 130 percent, from $75.6 million to $173.9 million, during this period. The number of proposals submitted in response to research announcements also generally increased during this period. For example, although proposals submitted in response to materials, fluids, and fundamental (benchmark) physics research announcements decreased from 397 in 1991 to 217 in 1993, those responding to combustion physics announcements increased from 65 in 1989 to 98; those responding to biotechnology research announcements increased from 94 in 1991 to 141; and those responding to materials and fluids research announcements increased from 69 in 1990 to 346 in 1991. Second, the microgravity research community is stable but not stagnant. Fifty-five percent of all microgravity sciences investigators who were funded in 1989 were also funded in fiscal year 1993. This core group represents 25 percent of the investigators funded in fiscal year 1993. On the other hand, 44 percent of the investigators funded in 1993 were not funded in 1992. Third, NASA is attracting new investigators to its microgravity sciences program. The decline in proposals (from 397 to 217) submitted in response to the 1993 materials, fluids, and fundamental physics announcement may have been partly caused by NASA's stated purpose of encouraging new investigators, and most of the 55 investigators funded from this announcement were new to the program. Only 15 of them had been previously funded by NASA. In June 1994, NASA's life sciences advisory subcommittee recommended that NASA use this approach and establish appropriate categories within life science research announcements that recognize and encourage new investigators. Finally, for those proposals we reviewed, the select-for-science approach produced relatively fewer low peer review scores than the select-for-flight approach. Figure 3 shows that 8 percent of the select-for-science proposals received peer review scores in the bottom category, while 32 percent of the select-for-flight proposals received scores in the bottom category. Similarly, as figure 4 shows, of all the proposals in the bottom category, only 16 percent were in the select-for-science tradition, while 84 percent were in the select-for-flight tradition. Many of the proposals submitted in response to NASA's research announcements were not considered scientifically meritorious. For example, peer review panels gave 129, or 44 percent, of the 290 proposals we reviewed relatively low scientific merit scores. NASA's plans to expand its ground-based research program are not realistic based on planned funding.
A NASA microgravity research official estimates that NASA will need to fund about 240 ground-based investigators to support a station-based microgravity sciences research program. In fiscal year 1992, NASA funded 73 ground-based investigators in microgravity sciences, only about 30 percent of the future need. Ground-based research is funded from NASA's research and analysis budget. However, NASA does not anticipate that this budget will increase for fiscal years 1995 through 1999. Annual research and analysis appropriations are estimated to be about $51 million for the life sciences and $21.7 million for the microgravity sciences. To deal with this potential mismatch between plans and resources, NASA's microgravity sciences office has proposed that the research and analysis budget be augmented by research and development funds used to support NASA's space-based research program. The proposed amounts are $4.7 million for fiscal year 1996, $12.2 million for 1997, and $22.2 million for 1998. According to NASA, shifting resources in this way would not increase its overall budget authority. "where they are proven to be inadequate to support the intellectual underpinning of the flight program, even if this means a transfer from the budget so as to comply with overall budget constraints." NASA's quality assurance procedures start with a series of external and internal reviews designed to evaluate the merits of research proposals. Peer review is a crucial part of this consensus-building process. The process starts with individual reviewers independently evaluating each proposal assigned to peer review panels. The reviewers then resolve any differences by consensus within the peer review panel. The panel's final determinations are not binding on NASA's selection officials, and NASA can choose proposals other than those highly recommended by the panel. In June 1994, we reported that the peer review processes at the National Institutes of Health (NIH), National Science Foundation, and National Endowment for the Humanities appear to be working well and that intrinsic qualities of a proposal (e.g., research design), and not characteristics of reviewers or applicants (e.g., applicant's region, academic rank, or employing academic department's prestige), were important factors in reviewers' scoring. In 1993, the Senate Committee on Appropriations directed NASA to model its peer review standards after NIH's. Based on the Committee's direction, NASA requires that all research proposals be reviewed by peers for scientific merit and relevance (previously, some life science research conducted by NASA scientists was not subject to peer review); all research be reviewed by peers at least every 3 years; all research be reviewed for progress annually and for the performance of its objectives at least every 3 years; peer review be performed by the best-qualified individuals available in the field; and peer review scores provided by external peer review groups be critical factors determining the priority for initial and continued funding of research projects and programs. The logic of peer review, in our opinion, rests, in part, on the assumption that two or more peers can independently agree on a research experiment's scientific merits. For example, they should agree on the testability of the proposed hypothesis and the relevance and appropriateness of the experimental design. As such, peers' scores for the scientific merit of any given proposal ought to be the same or similar.
Peers agreed on the scientific merit of 73 percent of the proposals that we reviewed, including all but 1 of the 15 U.S. experiments selected for IML-2. Table 1 shows the distribution of the reviewers' scores. Peer reviewers were better able to agree on proposals having top scientific merit scores than on proposals having middle or bottom scientific merit scores. Peers gave only 11 percent (11 of 99) of the top proposals dissimilar scores; in contrast, they gave dissimilar scores to 35 percent (44 of 126) of the proposals in the middle category and to 38 percent (23 of 60) of those in the bottom category. Table 2 shows that NASA's selection officials' funding decisions were generally congruent with the findings of the peer review panel. Of the 84 proposals funded, 73, or 87 percent, were in the top category for scientific merit scores, and the other 11 proposals funded were in the middle category. Determinations of peer review panels are not binding on NASA's selection officials. For example, NASA selected four proposals that received mid-level scores from the peer review panel. Based on an average of peers' individual scores, three of them would have been in the top category. However, in subsequent deliberations, the peer review panel members placed these three proposals in the middle category because the need for the microgravity environment of space was not compelling, experiment-related issues could be resolved using ground-based facilities, or the appropriateness of analytical techniques was questionable. No peer review panel was convened for the fourth proposal because the number of proposals in the specific area of investigation was too small. Generally, the peer reviewers found the proposal to be of high quality, but they also noted that the research objectives, although compatible with the life science program, were inconsistent with the microgravity science program. In this case, the investigator did not propose to use microgravity to study phenomena whose understanding is obscured on earth by the presence of gravity. After the peer review panel completed its deliberations, a NASA categorization committee made category assignments that were forwarded to a steering committee. The categorization committee determined that the four proposals were, in the words of a NASA official, "sound but not exceptional science"—the second highest of four possible categories. The steering committee assessed these categorizations and recommended funding the proposals, but committee members noted that one investigation resembled a "fishing expedition," another had "similar weaknesses" to proposals that were rejected, a third would require too much time to conduct on a Spacelab mission, and a fourth should only be partly funded. Despite these views, these four proposals were funded for IML-2. NASA's efforts to develop a research community are not likely to be adversely affected by the February 1994 cancellation of SLS-3. The U.S. principal investigators on the Spacelab flight stated that they will be able to meet their experiment objectives on other missions, including multiple Shuttle flights to Mir. They plan to submit proposals for future NASA research opportunities. NASA planned to fly a collaborative U.S.-French SLS-3 mission in February 1996. The purpose of the mission was to study the effects of microgravity on the musculoskeletal system of humans, Rhesus monkeys, and rats. The French were responsible for developing the Rhesus research facility. Planning for the mission began in the late 1970s.
On February 18, 1994, however, the NASA Administrator notified his French counterpart that the flight was canceled because of budget limitations and NASA's commitment to the international space station. In November 1993, as part of the agreement between NASA and the Russian Space Agency to bring Russia into the space station program, the United States and Russia agreed to fly up to 10 Space Shuttle flights to the Russian space station Mir. Concerns were raised that "cancellation of mission or substitution of middeck experiments for a dedicated Spacelab mission would have serious consequences for . . . the continued participation of the mainstream life sciences community that NASA seeks to attract." NASA responded to these concerns in April 1994, stating that all experiments from SLS-3 had been accommodated on other missions, including Shuttle flights to Mir. NASA also noted that although Mir is not a substitute for Spacelab, it will augment and enhance on-orbit science capabilities because experiments requiring more than 30 days of microgravity cannot be performed on Spacelab. In late 1994, NASA announced a new life and microgravity sciences Spacelab mission for July or August 1996. This mission will provide a flight opportunity for some experiments that were scheduled to fly on SLS-3. We discussed the cancellation of SLS-3 with 13 of the 15 U.S. investigators who were scheduled to fly experiments on this flight. Their views are summarized as follows: Nine investigators were generally satisfied with the way NASA handled the cancellation. NASA never formally notified investigators about its decision to cancel the flight; consequently, most investigators learned of the cancellation from rumors or other informal communication. One investigator said investigators should have been consulted before NASA canceled the mission. One investigator questioned why SLS-3, a relatively near-term mission, was canceled rather than a later one such as the Neurolab flight in 1998. Eleven investigators said that their experiments will be accommodated on other missions, including the Russian Biosatellite, another Spacelab mission, or Space Shuttle flights to Mir. Two investigators said they have not been assigned to specific missions. Ten investigators currently scheduled for other missions said they will be able to meet their basic experiment objectives. However, three of them said they will not necessarily be able to obtain the same amount of information as they would have on SLS-3; their experiments involve the use of Rhesus monkeys, and even though they will fly on the Biosatellite, in-flight biological measurements cannot be done. Three other investigators said that their experiments on a substitute mission would be adversely affected by hardware limitations or the loss of opportunities to collaborate efficiently with other investigators. All 13 investigators said they will continue to submit proposals for future NASA research opportunities, and at least 6 have already done so. To accomplish our objectives, we obtained documents from and interviewed officials at NASA headquarters in Washington, D.C., and at NASA's Lewis, Johnson, and Marshall field centers in Cleveland, Ohio; Houston, Texas; and Huntsville, Alabama, respectively. In May 1994, we attended the IML-2 mission simulation and science review conference and observed crew training exercises prior to launch.
To review the further development of NASA's life and microgravity sciences research community, we obtained information on research announcements issued between 1988 and 1994 and on the principal investigators who conducted ground- and space-based experiments. We examined peer review-related information on 319 proposals submitted in response to the 4 research announcements related to IML-2. We categorized the scores of all proposals that peer review panels considered responsive to the objectives of the announcements, as shown in appendix II. To assess the possible impact of the cancellation of the SLS-3 Spacelab flight on the further development of NASA's research community, we interviewed 13 of the 15 U.S. principal investigators on that mission. We performed our work between November 1993 and September 1994 in accordance with generally accepted government auditing standards. As requested, we did not obtain agency comments on this report. However, we discussed the issues in this report with NASA officials and incorporated their comments where appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days from its issue date. At that time, we will send copies to the NASA Administrator and other appropriate congressional committees. Copies will also be made available to other interested parties on request. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. The National Aeronautics and Space Administration's (NASA) guiding principle for quality assurance is periodic review over the lifetime of an experiment. Figure I.1 depicts the major science and engineering review milestones. The steps or actions involved throughout this process are outlined below. NASA's Discipline Working Group(s) evaluates the research program's strengths and weaknesses and makes recommendations to the program scientist, who defines areas of investigation for the forthcoming announcement. NASA conducts workshops for prospective investigators from the scientific community to develop interest in the forthcoming announcement. NASA solicits research proposals by issuing the announcement. Peers are selected by a contractor (life sciences) or NASA (microgravity sciences) to evaluate the proposals' scientific merit. Peers should be leading researchers in their field, free from conflicts of interest (e.g., a current or recent professional collaboration with an applicant), and not currently receiving research funds from NASA. For proposals receiving strong science reviews, the appropriate NASA field center assesses a proposal's estimated cost and engineering feasibility. For example, Lewis Research Center is a "center of excellence" for two microgravity science disciplines: combustion science and fluid physics. The NASA program scientist recommends principal investigators' proposals for funding to senior NASA management. The principal investigator and project scientist describe the science scope and feasibility for evaluation by a Science Review Board. The project manager describes the conceptual design of experiment-related hardware for evaluation by an Engineering Panel. The project manager describes cost and schedule estimates. The engineering panel assesses the design of the hardware. The science panel assesses the compatibility of science requirements with the design of the hardware.
The preliminary design review assesses the compatibility of science requirements with a preliminary engineering model ("breadboard") of the hardware. The critical design review assesses the complete engineering model of the hardware. The preshipment review consists of experiment simulations, integration with hardware, and testing prior to sending the hardware to the launch site. To determine the similarity or dissimilarity of peers' perceptions of a proposal's scientific merit, we defined similar scores on the five-point scales as same or adjacent scores (for example, "3" and "3", or "3" and "4") and on the nine-point scales as same, adjacent, or next scores (for example, "3" and "3", or "3" and "4", or "3", "4", and "5"). Major contributors to this report were Dave Warren, Frank Degnan, Thomas Mills, James Berry, Kimberly Carson, Jeffrey Knott, and Larry Kiser.
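The score-similarity definition above can be stated as a small check. The example scores are hypothetical; only the adjacency rules come from the definition in appendix II:

```python
# Sketch of the appendix II definition of similar peer review scores.

def scores_similar(score_a, score_b, scale_points):
    """Similar means same or adjacent on a 5-point scale,
    and within two steps on a 9-point scale."""
    max_gap = 1 if scale_points == 5 else 2
    return abs(score_a - score_b) <= max_gap

print(scores_similar(3, 4, scale_points=5))  # True  (adjacent)
print(scores_similar(3, 5, scale_points=5))  # False (two steps apart on a 5-point scale)
print(scores_similar(3, 5, scale_points=9))  # True  (same, adjacent, or next)
```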
Pursuant to a congressional request, GAO reviewed the National Aeronautics and Space Administration's (NASA) efforts to develop a robust life and microgravity sciences research community for the space station, focusing on: (1) what NASA is doing to assess the required size of the research community needed for the space station and to ensure that such a community will be available; (2) how NASA will ensure that the research selected for the space station will be the best possible; and (3) whether the recently cancelled shuttle research flight adversely affected NASA efforts to develop a research community for the space station. GAO found that: (1) NASA is focusing on developing a comprehensive research program that emphasizes more ground-based research and uses space flight only for research efforts that require a microgravity environment in space; (2) NASA wants to greatly increase the number of ground-based investigators to accomplish this program; (3) the science-oriented approach is reasonable, but planned funding levels could jeopardize it unless NASA adjusts its funding priorities; to achieve its goal, NASA will need to increase funding for life and microgravity sciences research and analysis from fiscal years 1995 through 1999; (4) if NASA funding remains at expected levels, a smaller than desired number of investigators will be selected for the ground-based research program; (5) although peer review panels and NASA sometimes disagree on the scientific merit and relevance of NASA funding proposals, NASA funding decisions were generally consistent with the recommendations of the peer review panels; and (6) NASA efforts to increase the size of its life and microgravity sciences research community are not likely to be adversely affected by the cancellation of the third Spacelab Life Sciences flight, since most of the principal investigators have been accommodated on other space flights and generally will be able to meet their experiment objectives.