achers to try different approaches. On the other hand, the researchers stated that teachers at those schools meeting the standards would not have the same incentives to change their instructional practices. Research shows that using a standards-based curriculum that is aligned with corresponding instructional guidelines can positively influence teaching practices. Specifically, some studies reported changes by teachers who facilitated their students developing higher-order thinking skills, such as interpret
ing meaning, understanding implied reasoning, and developing conceptual knowledge, through practices such as multiple answer problem solving, less lecture and more small group work. Additionally, a few researchers we interviewed stated that a positive effect of NCLBA’s accountability provisions has been a renewed focus on standards and curriculum. However, some studies indicated that teachers’ practices did not always reflect the principles of standards-based instruction and that current accountability poli
cies help contribute to the difficulty in aligning practice with standards. Some research shows that, while teachers may be changing their instructional practices in response to standards-based reform, these changes may not be fully aligned with the principles of the reform. That research also notes that the reliability in implementing standards in the classroom varied in accordance with teachers’ different beliefs in and support for standards-based reform as well as the limitations in their instructional c
apabilities. For example, one observational study of math teachers showed that, while teachers implemented practices envisioned by standards-based reform, such as getting students to work in small groups or using manipulatives (e.g., cubes or tiles), their approaches did not go far enough in that students were not engaged in conversations about mathematical or scientific concepts and ideas. To overcome these challenges, studies point to the need for teachers to have opportunities to learn, practice, and ref
lect on instructional practices that incorporate the standards, and then to observe their effects on student learning. However, some researchers have raised concerns that current accountability systems’ focus on test scores and mandated timelines for achieving proficiency levels for students do not give teachers enough time to learn, practice, and reflect on instructional practices and may discourage some teachers from trying ambitious teaching practices envisioned by standards-based reform. Another key ele
ment of a standards-based accountability system is assessments, which help measure the extent to which schools are improving student learning through assessing student performance against the standards. Some researchers note that assessments are powerful tools for managing and improving the learning process by providing information for monitoring student progress, making instructional decisions, evaluating student achievement, and evaluating programs. In addition, assessments can also influence instructiona
l content and help teachers use or adjust specific classroom practices. As one synthesis concluded, assessments can influence whether teachers broaden or narrow the curriculum, focus on concepts and problem solving—or emphasize test preparation over subject matter content. In contrast, some of the research and a few experts we interviewed raised concerns about testing formats that do not encourage challenging teaching practices and instructional practices that narrow the curriculum as a result of current as
sessment practices. For example, depending on the test used, research has shown that teachers may be influenced to use teaching approaches that reflect the skills and knowledge to be tested. Multiple choice tests tend to focus on recognizing facts and information, while open-ended formats are more likely to require students to apply critical thinking skills. Conclusions from a literature synthesis conducted by the Department of Education stated that “teachers respond to assessment formats used, so testing p
rograms must be designed and administered with this influence in mind. Tests that emphasize inquiry, provide extended writing opportunities, and use open-ended response formats or a portfolio approach tend to influence instruction in ways quite different from tests that use closed-ended response formats and which emphasize procedures.” We recently reported that states have most often chosen multiple choice items over other item types of assessments because they are cost effective and can be scored within ti
ght time frames. While multiple choice tests provide cost- and time-saving benefits to states, the use of multiple choice items makes it difficult, if not impossible, to measure highly complex content. Other research has raised concerns that, to avoid potential consequences from low-scoring assessment results under NCLBA, teachers are narrowing the curriculum being taught—sometimes referred to as “teaching to the test”—either by spending more classroom time on tested subjects at the expense of other non-teste
d subjects, restricting the breadth of content covered to focus only on the content covered by the test, or focusing more time on test-taking strategies than on subject content. Our literature review found some studies that pointed to instructional practices that appear to be effective in raising student achievement. But, in discussing the broader implications of these studies with the experts that we interviewed, many commented that, taken overall, the research is not conclusive about which specific instru
ctional practices improve student learning and achievement. Some researchers stated that this was due to methodological issues in conducting the research. For example, one researcher explained that, while smaller research studies on very specific strategies in reading and math have sometimes shown powerful relationships between the strategy used and positive changes in student achievement, results from meta-analyses of smaller studies have been inconclusive in pointing to similar patterns in the aggregate.
A few other researchers stated that the lack of empirical data about how instruction unfolds in the classroom hampers the understanding about what works in raising student performance. A few researchers also noted that conducting research in a way that would yield more conclusive results is difficult. One of the main difficulties, as explained by one researcher, is the number of variables a study may need to examine or control for in order to understand the effectiveness of a particular strategy, especiall
y given the number of interactions these variables could have with each other. One researcher mentioned cost as a challenge when attempting to gather empirical data at the classroom level, stating “teaching takes place in the classroom, but the expense of conducting classroom-specific evaluations is a serious barrier to collecting this type of data.” Finally, even when research supports the efficacy of a strategy, it may not work with different students or under varying conditions. In raising this point, on
e researcher stated that “educating a child is not like making a car” whereby a production process is developed and can simply be repeated again and again. Each child learns differently, creating a challenge for teachers in determining the instructional practices that will work best for each student. Some of the practices identified by both the studies and a few experts as those with potential for improving student achievement were: Differentiated instruction. In this type of instruction, teaching practices
and plans are adjusted to accommodate each student’s skill level for the task at hand. Differentiated instruction requires teachers to be flexible in their teaching approach by adjusting the curriculum and presentation of information for students, thereby providing multiple options for students to take in and process information. As one researcher described it, effective teachers understand the strategies and practices that work for each student and in this way can move all students forward in their learni
ng and achievement. More guiding, less telling. Researchers have identified two general approaches to teaching: didactic and interactive. Didactic instruction relies more on lecturing and demonstrations, asking short answer questions, and assessing whether answers are correct. Interactive instruction focuses more on listening and guiding students, asking questions with more than one correct answer, and giving students choices during learning. As one researcher explained, both teaching approaches are importa
nt, but some research has shown that giving students more guidance and less direction helps students become critical and independent thinkers, learn how to work independently, and assess several potential solutions and apply the best one. These kinds of learning processes are important for higher-order thinking. However, implementing “less instruction” techniques requires a high level of skill and creativity on the part of the teacher. Promoting effective discourse. An important corollary to the teacher pra
ctice of guiding students versus directing them is effective classroom discussion. Research highlights the importance of developing students’ understanding not only of the basic concepts of a subject, but higher-order thinking and skills as well. To help students achieve understanding, it is necessary to have effective classroom discussion in which students test and revise their ideas, and elaborate on and clarify their thinking. In guiding students to an effective classroom discussion, teachers must ask en
gaging and challenging questions, be able to get all students to participate, and know when to provide information or allow students to discover it for themselves. Additionally, one synthesis of several experimental studies examining practices in elementary math classrooms identified two instructional approaches that showed positive effects on student learning. The first was cooperative learning in which students work in pairs or small teams and are rewarded based on how well the group learns. The other app
roach included programs that helped teachers introduce math concepts and improve skills in classroom management, time management, and motivation. This analysis also found that using computer-assisted instruction had moderate to substantial effects on student learning, although this type of instruction was always supplementary to other approaches or programs being used. We found through our literature review and interviews with researchers that the issue of effective instructional practices is intertwined wi
th professional development. To enable all students to achieve the high standards of learning envisioned by standards-based accountability systems, teachers need extensive skills and knowledge in order to use effective teaching practices in the classroom. Given this, professional development is critical to supporting teachers’ learning of new skills and their application. Specifically, the research concludes that professional development will more likely have positive impacts on both teacher learning and st
udent achievement if it: focuses on a content area with direct links to the curriculum; challenges teachers intellectually through reflection and critical problem solving; aligns with goals and standards for student learning; lasts long enough so that teachers can practice and revise their strategies; occurs collaboratively within a teacher learning community—ongoing teams of teachers that meet regularly for the purposes of learning, joint lesson planning, and problem solving; involves all the teachers within a school or department; provides active learning opportunities with direct applications to the classroom; and is based on teachers’ input regarding their learning needs. Some researchers have raised concerns about the quality and intensity of professional development currently received by many teachers nationwide. One researcher summarized these issues by stating that professional development training for teachers is often too short, provides no classroom follow-up, and models more “telling than guiding” practices. Given the decentralized
nature of the U.S. education system, the support and opportunity for professional development services for teachers varies among states and school districts, and there are notable examples of states that have focused resources on various aspects of professional development. Nevertheless, shortcomings in teachers’ professional development experiences overall are especially evident when compared to professional development requirements for teachers in countries whose students perform well on international tes
ts, such as the Trends in International Mathematics and Science Study and the Program for International Student Assessment. For example, one study showed that fewer than 10 percent of U.S. math teachers in school year 2003-04 experienced more than 24 hours of professional development in mathematics content or pedagogy during the year; conversely, teachers in Sweden, Singapore, and the Netherlands are required to complete 100 hours of professional development per year. We provided a copy of our draft report
to the Secretary of Education for review and comment. Education’s written comments, which are contained in appendix V, expressed support for the important questions that the report addresses and noted that the American Recovery and Reinvestment Act of 2009 included $250 million to improve assessment and accountability systems. The department specifically stated that the money is for statewide data systems to provide information on individual student outcomes that could help enable schools to strengthen inst
ructional practices and improve student achievement. However, the department raised several issues about the report’s approach. Specifically, the department commented that we (1) did not provide the specific research citations throughout the report for each of our findings or clearly explain how we selected our studies; (2) mixed the opinions of education experts with our findings gleaned from the review of the literature; (3) did not present data on the extent to which test formats had changed or on the re
lationship between test format and teaching practices when discussing our assessment findings; and (4) did not provide complete information from an Education survey regarding increases and decreases in instructional time. As stated in the beginning of our report, the list of studies we reviewed and used for our findings is contained in appendix IV. We provide a description in appendix I of our criteria, the types of databases searched, the types of studies examined (e.g., experimental and nonexperimental)
and the process by which we evaluated them. We relied heavily on two literature syntheses conducted by the Department of Education—Standards in Classroom Practice: Research Synthesis and The Influence of Standards on K-12 Teaching and Student Learning: A Research Synthesis, which are included in the list. These two syntheses covered, in a more comprehensive way than many of the other studies that we reviewed, the breadth of the topics that we were interested in and included numerous research studies in the
ir reviews. Many of the findings in this report about the research are taken from the conclusions reached in these syntheses. However, to make this fact clearer and more prominent, we added this explanation to our abbreviated scope and methodology section on page 5 of the report. Regarding the use of expert opinion, we determined that obtaining the views of experts about the research we were reviewing would be critical to our understanding its broader implications. This was particularly important given the
breadth and scope of our objectives. The experts we interviewed, whose names and affiliations are listed in appendix III, are prominent researchers who conduct, review, and reflect on the current research in the field, and whose work is included in some of the studies we reviewed, including the two literature syntheses written by the Department of Education and used by us in this study. We did not consider their opinions “conjecture” but grounded in and informed by their many years of respected work on the
topic. We have been clear in the report as to when we are citing expert opinion, the research studies, or both. Regarding the report section discussing the research on assessments, it was our intent to highlight that, according to the research, assessments have both positive and negative influences on classroom teaching practices, not to conclude that NCLBA was the cause of either. Our findings in this section of the report are, in large part, based on conclusions from the department’s syntheses mentioned e
arlier. For example, The Influence of Standards on K-12 Teaching and Student Learning: A Research Synthesis states “… tests matter—the content covered, the format used, and the application of their results—all influence teacher behavior.” Furthermore, we previously reported that states most often have chosen multiple choice assessments over other types because they can be scored inexpensively and their scores can be released prior to the next school year as required by NCLBA. That report also notes that sta
te officials and alignment experts said that multiple choice assessments have limited the content of what can be tested, stating that highly complex content is “difficult if not impossible to include with multiple choice items.” However, we have revised this paragraph to clarify our point and provide additional information. Concerning the topic of narrowing the curriculum, we agree with the Department of Education that this report should include a fuller description of the data results from the cited Educat
ion survey in order to help the reader put the data in an appropriate context. Hence, we have added information to that section of the report. However, one limitation of the survey data we cite is that it covers changes in instructional time for a short time period—from school year 2004-05 to 2006-07. In its technical comments, the Department refers to its recent report, Title I Implementation: Update on Recent Evaluation Findings, for a fuller discussion of this issue. The Title I report, while noting that most elementary teachers reported no change from 2004–05 to 2006–07 in the amount of instructional time that they spent on various subjects, also provides data over a longer, albeit earlier, time period, from 1987–88 to 2003–04, from the National Center for Education Statistics’ Schools and Staffing Survey. In analyzing these data, the report states that elementary teachers had increased instructional time on reading and mathematics and decreased the amount of time spent on science and social studies
during this period. We have added this information as well. Taken together, we believe these data further reinforce our point that assessments under current accountability systems can have, in addition to positive influences on teaching, some negative ones as well, such as the curriculum changes noted in the report, even if the extent of these changes is not fully known. Education also provided technical comments that we incorporated as appropriate. We are sending copies of this report to the Secretary of
Education, relevant congressional committees, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. To address the objectives of this
study, we used a variety of methods. To determine the types of instructional practices schools and teachers are using to help students achieve state academic standards and whether those practices differ by school characteristics, we used two recent surveys of principals and teachers. The first survey, a nationally representative survey from the Department of Education’s (Education) National Longitudinal Study of No Child Left Behind (NLS-NCLB) conducted by the RAND Corporation (RAND), asked principals the
extent to which their schools were focusing on certain strategies in their voluntary school improvement efforts. Education’s State and Local Implementation of the No Child Left Behind Act Volume III—Accountability Under NCLB: Interim Report included information about the strategies emphasized by principals as a whole, and we obtained from Education the NLS-NCLB database to determine the extent to which principals’ responses differed by school characteristic variables. We conducted this analysis on school y
ear 2006-2007 data by controlling for four school characteristic variables: (1) the percentage of a school’s students receiving free or reduced price lunch (poverty); (2) the percentage of students who are a racial minority (minority); (3) whether the school is in an urban, urban fringe (suburban), or rural area (school location); and (4) the school’s adequate yearly progress (AYP) status. We analyzed data from a second RAND survey, which was a three-state survey sponsored by the National Science Foundat
ion that asked math teachers in California, Georgia, and Pennsylvania how their classroom teaching strategies differed due to a state math test. RAND selected these states to represent a range of approaches to standards-based accountability and to provide some geographic and demographic diversity; the survey data is representative only for those three states individually. RAND’s report on the three-state survey data included information about how teachers within each of the three states had changed their te
aching practices due to a state accountability test. RAND provided us with descriptive data tables based on its school year 2005-2006 survey data; we analyzed the data to measure associations between the strategies used and the school characteristic variables. We requested tables that showed this information for teachers in all schools, and separately for teachers in different categories of schools (elementary and middle schools) and by the school characteristics of poverty, minority, school location and AY
P status. We obtained from RAND standard error information associated with the estimates from the different types of schools and thus were able to test the statistical significance of differences in likelihood between what teachers from different types of schools reported. As part of our analyses for both surveys, we reviewed documentation and performed electronic testing of the data obtained through the surveys. We also conducted interviews with several researchers responsible for the data collecti
on and analyses and obtained information about the measures they took to ensure data reliability. On the basis of our efforts to determine the reliability of the data, we determined the data from each of these surveys were sufficiently reliable for the purposes of our study. We reviewed existing literature to determine what researchers have found regarding the effect of standards-based accountability systems on instructional practices, and practices that work in raising student achievement. To identify exis
ting studies, we conducted searches of various databases, such as the Education Resources Information Center, Proquest, Dialog EDUCAT, and Education Abstracts. We also asked all of the education researchers that we interviewed to recommend additional studies. From these sources, we identified 251 studies that were relevant to our study objectives about the effect of standards-based accountability systems on instructional practices and instructional practices that are effective in raising student achievemen
t. We selected them according to the following criteria: the studies covered the years 2001 through 2008 and were experimental or quasi-experimental studies, literature syntheses, or multiple-site studies. We selected the studies for our review based on their methodological strength, given the limitations of the methods used, and not necessarily on whether the results could be generalized. We performed our searches from August 2008 to January 2009. To assess the methodological quality of the selected studies, w
e developed a data collection instrument to obtain information systematically about each study being evaluated and about the features of the evaluation methodology. We based our data collection and assessments on generally accepted social science standards. We examined factors related to the use of comparison and control groups; the appropriateness of sampling and data collection methods; and for syntheses, the process and criteria used to identify studies. A senior social scientist with training and experi
ence in evaluation research and methodology read and coded the methodological discussion for each evaluation. A second senior social scientist reviewed each completed data collection instrument and the relevant documentation to verify the accuracy of every coded item. This review identified 20 selected studies that met GAO’s criteria for methodological quality. We supplemented our synthesis by interviewing prominent education researchers identified in frequently cited articles and through discussions with k
nowledgeable individuals. We also conducted interviews with officials at the U.S. Department of Education, including the Center on Innovation and Improvement, and the Institute of Education Sciences’ National Center for Education Evaluation and Regional Assistance, as well as other educational organizations. We also reviewed relevant federal laws and regulations. In order to analyze the National Longitudinal Study of No Child Left Behind (NLS-NCLB) principal survey conducted by the RAND Corporation, we anal
yzed strategies on which principals most often focused, taking into account the percentage of a school’s students receiving free or reduced price lunch (poverty), the percentage of students who are a racial minority (minority), whether the school is in an urban, suburban, or rural area (school location), and the school’s adequate yearly progress (AYP) status (see table 1). Our analyses used “odds ratios,” generally defined as the ratio of the odds of an event occurring in one group compared to the odds o
f it occurring in another group, to express differences in the likelihoods of schools with different characteristics using these strategies. We used odds ratios rather than percentages because they are more appropriate for statistical modeling and multivariate analysis. Odds ratios indicate how much higher (when they are greater than 1.0) or lower (when they are less than 1.0) the odds were that principals would respond that a given strategy was a major or moderate focus. We included a reference category fo
r the school characteristics (low minority, low poverty, and central city) in the top row of table 1, and put comparison groups beneath those reference categories, as indicated by the column heading in the second row (high-minority, high-poverty, or rural schools). As an example, the third cell in the “high-minority schools” column indicates that principals in high-minority schools were 2.65 times more likely to make “implementing new instructional approaches or curricula in reading/language arts/English”
a focus of their school improvement efforts. In another example, the odds that principals would “restructure the school day to teach core content areas in greater depth (e.g., establishing a literacy block)” were 2.8 times higher for high-poverty schools than low poverty schools, as seen in the sixth cell under “high-poverty schools.” Those cells with an asterisk indicate statistically significant results; that is, we have a high degree of confidence that the differences we see are not just due to chance b
ut show an actual difference in the survey responses. See appendix I for further explanation of our methodology. [Fragment of a table listing reviewed studies and their methodologies: “Strong States, Weak Schools: The Benefits and Dilemmas of Centralized Accountability” (quasi-experimental design with matched groups; multiple regressions used with data); literature review using a best-evidence synthesis (related to a meta-analysis).] Cornelia M. Ashby (202) 512-7215 or [email protected]. Janet Mascia (Assistant Director), Bryon Gordon (Assistant Director), and Andrew Nelson (Analyst-
in-Charge) managed all aspects of the assignment. Linda Stokes and Caitlin Tobin made significant contributions to this report in all aspects of the work. Kate van Gelder contributed to writing this report, and Ashley McCall contributed to research for the report. Luann Moy, Justin Fisher, Cathy Hurley, Douglas Sloane, and John Smale Jr. provided key technical support, and Doreen Feldman and Sheila R. McCoy provided legal support. Mimi Nguyen developed the graphics for the report.
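To illustrate the odds-ratio comparisons described in the methodology discussion above, the following minimal sketch shows how such a ratio is computed. The counts, variable names, and function names are invented for illustration; they are not NLS-NCLB data, and this is not the analysis code used for the report.

```python
# Minimal sketch of an odds-ratio comparison (invented counts, not NLS-NCLB data).

def odds(successes: int, failures: int) -> float:
    """Odds of an event: successes divided by failures."""
    return successes / failures

def odds_ratio(group, reference) -> float:
    """Ratio of the odds in a comparison group to the odds in the reference group.

    Each argument is a (count reporting the strategy as a major or moderate focus,
    count not reporting it) pair. Values above 1.0 mean higher odds than the
    reference group; values below 1.0 mean lower odds.
    """
    return odds(*group) / odds(*reference)

# Hypothetical counts: 80 of 100 principals in high-minority schools versus
# 60 of 100 principals in low-minority (reference) schools report the strategy.
high_minority = (80, 20)   # odds = 80 / 20 = 4.0
low_minority = (60, 40)    # odds = 60 / 40 = 1.5

print(round(odds_ratio(high_minority, low_minority), 2))  # 2.67
```

In the report itself, the ratios come from multivariate models that control for the other school characteristics and account for the survey design, so a hand-computed ratio like this only conveys the concept, not the reported estimates.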
In November 2006, we reported that since 2001, the amount of national research that has been conducted on the prevalence of domestic violence and sexual assault had been limited, and less research had been conducted on dating violence and stalking. At that time, no single, comprehensive effort existed that provided nationwide statistics on the prevalence of these four categories of crime among men, women, youth, and children. Rather, various national efforts addressed certain subsets of these crime categori
es among some segments of the population and were not intended to provide comprehensive estimates. For example, HHS’s Centers for Disease Control and Prevention’s (CDC) National Violent Death Reporting System, which collects incident-based data from multiple sources, such as coroner/medical examiner reports, gathered information on violent deaths resulting from domestic violence and sexual assaults, among other crimes. However, it did not gather information on deaths resulting from dating violence or stalki
ng incidents. In our November 2006 report, we noted that designing a single, comprehensive data collection effort to address these four categories of crime among all segments of the population independent of existing efforts would be costly, given the resources required to collect such data. Furthermore, it would be inefficient to duplicate some existing efforts that already collect data for certain aspects of these categories of crime. Specifically, in our November 2006 report, we identified 11 national ef
forts that had reported data on certain aspects of domestic violence, sexual assault, dating violence, and stalking. However, limited national data were available to estimate prevalence from these 11 efforts because they (1) largely focused on incidence rather than prevalence, (2) used varying definitions for the types of crimes and categories of victims covered, and (3) had varying scopes in terms of incidents and categories they addressed. Focus on incidence. Four of the 11 national data collection effort
s focused solely on incidence—the number of separate times a crime is committed against individuals during a specific time period—rather than prevalence—the unique number of individuals who were victimized during a specific time period. As a result, information gaps related to the prevalence of domestic violence, sexual assault, dating violence, and stalking, particularly in the areas of dating violence among victims age 12 and older and stalking among victims under age 18 existed at the time of our Novembe
r 2006 report. Obtaining both incidence and prevalence data is important for determining which services to provide to the four differing categories of crime victims. HHS also noted that both types of data are important for determining the impact of violence and strategies to prevent it from occurring. Although perfect data may never exist because of the sensitivity of these crimes and the likelihood that not all occurrences will be disclosed, agencies have taken initiatives since our report was issued to he
lp address some of these gaps or have efforts underway. These initiatives are consistent with our recommendation that the Attorney General and Secretary of Health and Human Services determine the extent to which initiatives being planned or underway can be designed or modified to address existing information gaps. For example, DOJ’s Office of Juvenile Justice and Delinquency Prevention (OJJDP), in collaboration with CDC, sponsored a nationwide survey of the incidence and prevalence of children’s (ages 17 an
d younger) exposure to violence across several major crime categories, including witnessing domestic violence and peer victimization (which includes teen dating violence). OJJDP released incidence and prevalence measures related to children’s exposure to violence, including teen dating violence, in 2009. Thus, Congress, agency decision makers, practitioners, and researchers have more comprehensive information to assist them in making decisions on grants and other issues to help address teen dating violence.
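The distinction drawn earlier between incidence and prevalence is, at bottom, a difference in what gets counted. The sketch below uses a handful of invented victimization records to show the two counts; the record structure and values are illustrative only and are not drawn from any of the data collection efforts discussed.

```python
# Minimal sketch of incidence versus prevalence (invented records, illustration only).

from collections import namedtuple

Incident = namedtuple("Incident", ["victim_id", "crime_type", "year"])

records = [
    Incident("A", "dating violence", 2005),
    Incident("A", "dating violence", 2005),  # same victim, a second separate incident
    Incident("B", "stalking", 2005),
    Incident("C", "dating violence", 2005),
]

in_period = [r for r in records if r.year == 2005]

# Incidence: the number of separate times a crime was committed during the period.
incidence = len(in_period)

# Prevalence: the unique number of individuals victimized during the period.
prevalence = len({r.victim_id for r in in_period})

print(incidence, prevalence)  # 4 incidents, 3 unique victims
```

Because one person can be victimized more than once, the two counts answer different questions, which is why the report treats both as important for deciding which services to provide.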
To address information gaps related to teen dating violence and stalking victims under the age of 18, in 2010, CDC began efforts on a teen dating violence prevention initiative known as “Dating Matters.” One activity of this initiative is to identify community-level indicators that can be used to measure both teen dating violence and stalking in high-risk urban areas. CDC officials reported that they plan to begin implementing the first phase of “Dating Matters” in as many as four high-risk urban areas in
September 2011 and expect that the results from this phase will be completed by 2016. Thus, it is too early to tell the extent to which this effort will fully address the information gap related to prevalence of stalking victims under the age of 18. Varying definitions. The national data collection efforts we reviewed could not provide a basis for combining the results to compute valid and reliable nationwide prevalence estimates because the efforts used varying definitions related to the four categories of
crime. For example, CDC’s Youth Risk Behavior Surveillance System’s definition of dating violence included the intentional physical harm inflicted upon a survey respondent by a boyfriend or girlfriend. In contrast, the Victimization of Children and Youth Survey’s definition did not address whether the physical harm was intentional. To address the issue of varying definitions, we recommended that the Attorney General and the Secretary of Health and Human Services, to the extent possible, require the use of
common definitions when conducting or providing grants for federal research. This would provide for leveraging individual collection efforts so that the results of such efforts could be readily combined to achieve nationwide prevalence estimates. HHS agreed with this recommendation. In commenting on our November 2006 draft report, DOJ expressed concern regarding the potential costs associated with implementing this and other recommendations we made and suggested that a cost-benefit analysis be conducted. We
agreed that performing a cost-benefit analysis is a critical step, as acknowledged by our recommendation that DOJ and HHS incorporate alternatives for addressing information gaps deemed cost-effective in future budget requests. HHS agreed with this recommendation and both HHS and DOJ have taken actions to address it by requesting or providing additional funding for initiatives to address information gaps, such as those on teen dating violence. In response to our recommendation on common definitions, in Aug
ust 2007, HHS reported that it continued to encourage, but not require, the use of uniform definitions of certain forms of domestic violence and sexual assault it established in 1999 and 2002, respectively. At the same time, DOJ reported that it consistently used uniform definitions of intimate partner violence in project solicitations, statements of work, and published reports. Since then, officials from CDC reported that in October 2010, the center convened a panel of 10 experts to revise and update its d
efinitions of certain forms of domestic violence and sexual assault given advancements in this field of study. CDC is currently reviewing the results from the panel and plans to hold a second panel in 2012, consisting of practitioners, to review the first panel’s results and to obtain consensus on the revised definitions. Moreover, HHS reported that it is also encouraging the use of uniform definitions by implementing the National Intimate Partner and Sexual Violence Survey. This initiative is using consist
ent definitions and methods to collect information on women’s and men’s experiences with a range of intimate partner violence, sexual violence, and stalking victimization. Thus, by using consistent methods over time, HHS reported that it will have comparable data at the state and national level to inform intervention and prevention efforts and aid in the evaluation of these efforts. In addition, according to a program specialist from OJJDP, in 2007, OJJDP created common definitions for use in the National Sur
vey of Children’s Exposure to Violence to help collect data and measure incidence and prevalence rates for child victimization, including teen dating violence. While it is too early to tell the extent to which HHS’s efforts will result in the wider use of common definitions to assist in the combination of data collection efforts, OJJDP efforts in developing common definitions have supported efforts to generate national incidence and prevalence rates for child victimization. A program specialist from OJJDP n
oted that OJJDP plans to focus on continuously improving the definitions. Varying scope. The national data collection efforts we reviewed as part of our November 2006 report also could not provide a basis for combining the results to compute valid and reliable nationwide prevalence estimates because the efforts had varying scopes in terms of the incidents and categories of victims that were included. For example, in November 2006, we reported that CDC’s Youth Risk Behavior Surveillance System excludes youth
who are not in grades 9 through 12 and those who do not attend school; whereas the Victimization of Children and Youth Survey was addressed to youth ages 12 and older, or those who were at least in the sixth grade. National data collection efforts underway since our report was issued may help to overcome this challenge. For instance, in September 2010, HHS reported that CDC was working in collaboration with the National Institute of Justice to develop the National Intimate Partner and Sexual Violence Surve
y. Specifically, HHS reported that, through this system, it is collecting information on women’s and men’s experiences with a range of intimate partner violence, sexual violence, and stalking victimization. HHS reported that it is gathering experiences that occurred across a victim’s lifespan (including experiences that occurred before the age of 18) and plans to generate incidence and prevalence estimates for intimate partner violence, sexual violence, dating violence, and stalking victimization at both th
e national and state levels. The results are expected to be available in October 2011. These agency initiatives may not fill all information gaps on the extent to which women, men, youth, and children are victims of the four predominant crimes VAWA addresses. However, the efforts provide Congress with additional information it can consider on the prevalence of these crimes as it makes future investment decisions when reauthorizing and funding VAWA moving forward. We reported in July 2007 that recipients of
11 grant programs we reviewed collected and reported data to the respective agencies on the types of services they provide, such as counseling; the total number of victims served; and in some cases, demographic information, such as the age of victims; however, data were not available on the extent to which men, women, youth, and children receive each type of service for all services. This situation occurred primarily because the statutes governing the 11 grant programs do not require the collection of demog
raphic data by type of service, although they do require reports on program effectiveness, including number of persons served and number of persons seeking services who could not be served. Nevertheless, VAWA authorizes that a range of services can be provided to victims, and we determined that services were generally provided to men, women, youth, and children. The agencies administering these 11 grant programs—HHS and DOJ—collect some demographic data for certain services, such as emergency shelter under
the Family Violence Prevention and Services Act and supervised visitation and exchange under VAWA. The quantity of information collected and reported varied greatly for the 11 programs and was extensive for some, such as those administered by DOJ’s Office on Violence Against Women (OVW) under VAWA. The federal agencies use this information to help inform Congress about the known results and effectiveness of the grant programs. However, even if demographic data were available by type of service for all servi
ces, such data might not be uniform and reliable because, among other factors, (1) the authorizing statutes for these programs have different purposes and (2) recipients of grants administered by HHS and DOJ use varying data collection practices. Authorizing statutes have different purposes. The authorizing statutes for the 11 grant programs we reviewed have different purposes; therefore the reporting requirements for the 11 grant programs must vary to be consistent with these statutes. However, if a grant
program addresses a specific service, the demographic data collected are more likely to address the extent to which men, women, youth, and children receive that specific service. For example, in commenting on our July 2007 report, officials from OVW stated that they could provide such demographic data for 3 of its 8 grant programs we reviewed—the Transitional Housing Assistance Grants Program, the Safe Havens: Supervised Visitation and Safe Exchange Grant Program, and the Legal Assistance for Victims Grant
Program. Recipients of grants administered by HHS and DOJ use varying data collection practices. For example, some recipients request that victims self-report data on the victim’s race, whereas other recipients rely on visual observation of the victim to obtain these data. Since we issued our July 2007 report, officials from HHS’s Administration for Children and Families (ACF) and OVW told us that they modified their grant recipient forms to improve the quality of the recipient data collected and to reflect
statutory changes to the programs and reporting requirements. Moreover, ACF officials stated that they adjusted the demographic categories on their forms to mirror OVW’s efforts so data would be collected consistently across the government for these grant programs. In addition, OVW officials stated that they have continued to provide technical assistance and training to grant recipients on completing their forms through a cooperative agreement with a university. As a result of these efforts, and others, of
ficials from both agencies reported that the quality of the recipient data has improved resulting in fewer errors and more complete data. As we reported in our July 2007 report, HHS and DOJ officials stated that they would face significant challenges in collecting and reporting data on the demographic characteristics of victims receiving services by type of service funded by the 11 grant programs included in our review. These challenges included concerns about victims’ confidentiality and safety, resource c
onstraints, overburdening recipients, and technological issues. For example, according to officials from ACF and OVW, requiring grant recipients to collect this level of detail may inadvertently disclose a victim’s identity, thus jeopardizing the victim’s safety. ACF officials also said that some of their grant recipients do not have the resources to devote to these data collection efforts, since their primary focus is on service delivery. In addition, ACF officials said that being too prescriptive in requi
ring demographic data could overburden some grant recipients that may report data to multiple funding entities, such as federal, state, and local entities and private foundations. Furthermore, HHS and DOJ reported that some grant recipients do not have sophisticated data collection systems in place to allow them to collect additional information. In our July 2007 report, we did not recommend that federal departments require their grant recipients to collect and report additional data on the demographic char
acteristics of victims receiving services by type of service because of the potential costs and difficulties associated with addressing the challenges HHS and DOJ officials identified, relative to the benefits that would be derived. In conclusion, there are important issues to consider in moving forward on the reauthorization of VAWA. Having better and more complete data on the prevalence of domestic violence, sexual assault, dating violence, and stalking as well as related services provided to victims of t
hese crimes can without doubt better inform and shape the federal programs intended to meet the needs of these victims. One key challenge in doing this is weighing the relative benefits of obtaining these data with their relative costs because of the sensitive nature of the crimes, those directly affected, and the need for services and support. Chairman Leahy, Ranking Member Grassley, and Members of the Committee, this completes my prepared statement. I would be happy to respond to any questions you or othe
r Members of the Committee may have at this time. For questions about this statement, please contact Eileen R. Larence at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Debra B. Sebastian, Assistant Director; Aditi Archer, Frances Cook, and Lara Miklozek. Key contributors for the previous work that this testimony is based on are lis
ted in each individual report. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Title I property improvement program was established by the National Housing Act (12 U.S.C. 1703) to encourage lending institutions to finance property improvement projects that would preserve the nation’s existing housing stock. Under the program, FHA insures 90 percent of a lender’s claimable loss on an individual defaulted loan. The total amount of claims that can be paid to a lender is limited to 10 percent of the value of the total program loans held by each lender. Today, the value of Title I’s ou
tstanding loans is relatively small compared with other FHA housing insurance programs. As of September 30, 1997, the value of loans outstanding on the property improvement program totaled about $4.4 billion on 364,423 loans. By contrast, the value of outstanding FHA single-family loans in its Mutual Mortgage Insurance Fund totaled about $360 billion. Similarly, Title I’s share of the owner-occupied, single-family remodeling market is small—estimated by the National Association of Home Builders to be about
1 percent in fiscal year 1997. Approximately 3,700 lenders are approved by FHA to make Title I loans. Lenders are responsible for managing many aspects of the program, including making and servicing loans, monitoring the contractors, and dealing with borrowers’ complaints. In conducting these activities, lenders are responsible for complying with FHA’s underwriting standards and regulations and ensuring that home improvement work is inspected and completed. FHA is responsible for approving lenders, monitori
ng their operations, and reviewing the claims submitted for defaulted loans. Title I program officials consider lenders to have sole responsibility for program operations, and HUD’s role is primarily to oversee lenders and ensure that claims paid on defaulted loans are proper. Homeowners obtain property improvement loans by applying directly to Title I lenders or by having a Title I lender-approved dealer—that is, a contractor—prepare a credit application or otherwise assist the homeowner in obtaining the loa
n from the lender. During fiscal years 1986 through 1996, about 520,000 direct and 383,000 dealer loans were made under the program. By statute, the maximum size of property improvement loans is $25,000 for single-family loans and the maximum loan term is about 20 years. Title I regulations require borrowers to have an income adequate to meet the periodic payments required by a property improvement loan. Most borrowers have low- to moderate incomes, little equity in their homes, and/or poor credit histories
. HUD’s expenses under the Title I program, such as claim payments made by FHA on defaulted loans, are financed from three sources of revenue: (1) insurance charges to lenders of 0.5 percent of the original loan amount for each year the loan is outstanding, (2) funds recovered from borrowers who defaulted on loans, and (3) appropriations. In an August 1997 report on the Title I program, Price Waterhouse concluded that the program was underfunded during fiscal years 1990 through 1996. Price Waterhouse estima
ted that a net funding deficit of about $150 million occurred during the period, with a net funding deficit in 1996 of $11 million. Data from the Price Waterhouse report on estimated projected termination rates for program loans made in fiscal year 1996 can be used to calculate an estimated cumulative claim rate of about 10 percent over the life of Title I loans insured by FHA in that fiscal year. When FHA-approved Title I lenders make program loans, they collect information on borrowers, such as age, incom
e, and gender; the property, such as its address; and loan terms, such as interest rate. While lenders are required to report much of this information to their respective regulatory agencies by the Home Mortgage Disclosure Act, HUD collects little of this information when Title I loans are made. Using information that it requires lenders to provide, HUD records the lender’s and borrower’s names, state and county, as well as the size, term, and purpose of the loan. Other information collected by HUD on other