Testimony of
Mitchell E. Daniels, Jr.
Director
Office of Management and Budget
Before the
House Subcommittee on Government Efficiency, Financial Management
and Intergovernmental Relations
and the
House Subcommittee on Legislative and Budget Process
September 19, 2002
I am pleased to appear before both Subcommittees today to discuss an
issue that is important to the President and to the taxpayers: linking
funding for programs to results. The President believes citizens
have a right to expect a government that works. He made performance a
key part of his Management Agenda.
For some time, Congress also has recognized the importance of this issue.
Almost 10 years ago, it passed the Government Performance and Results
Act. This hearing shows that Congress, like the Administration, remains
committed to taking the next step toward better performance in government.
We all agree that basing funding decisions for Federal programs on performance
makes sense. It is ironic that tying spending decisions to program performance
is viewed by some as something new, that the Administration even had to
establish an initiative for such a basic element of good government. In
business, the burden of proof is properly on the requester of funds to
show what the expected results will be, and later, to produce them. In
family life, for that matter, most children seeking an allowance from
their parents know that they will be doing chores in return.
Somehow, the standard for government has been different. The expectation
is that programs will get additional funds almost automatically. "How
much?" is the question asked, instead of "How well?"
Our efforts to integrate budget and performance shift the burden of proof
to those who request taxpayer dollars, and not just for "additional"
funds, but all funds.
I see this as a common-sense idea upon which people of different philosophies
should agree. For those who think that government does too much, costs
too much, and is too big, basing funding on results makes sense. But those
who believe government should be more active, should have greater influence
on people's lives, also should want resources invested in programs that
produce results.
The challenge is putting results-oriented government into practice. I
am here today to tell you about the Program Assessment Rating Tool, or
PART -- one practical way the Administration is making results matter.
For the FY 2004 Budget we are using the PART to rate over 200 Federal
programs, representing over 20 percent of Federal funding. Over time,
we will build toward rating all programs every year.
GPRA Has Not Lived Up to Its Legislative Intent
Nearly 10 years have passed since the Government Performance and Results
Act (GPRA) was enacted. Agencies spend an inordinate amount of time preparing
reports to comply with it, producing volumes of information of questionable
value. If one were to stack up all the GPRA documents produced for Congress
last year, the pile would measure over a yard high. A policy-maker would
need to wade through reams of paper to find a few kernels of useful information.
Even with GPRA, accounting for performance when making budget decisions
is unfortunately the exception, not the rule. The implementation of this
important law has gone astray.
As a result, the Administration has decided to take GPRA in a new direction.
Program ratings have been linked to the budget and, in fact, will be an
integral part of the FY 2004 budget process. Programs with strong performance
will receive higher scores on the PART. Those found lacking will receive
lower scores.
Program Ratings in the FY 2003 Budget
The evaluation process started with the FY 2003 Budget, which broke new
ground in a number of ways. Most apparent, perhaps, was the change in presentation.
We set out to make the Budget a document for public consumption, as it
should be. We included more graphics than in the past, but more important,
the aim was to present information that would interest readers by showing
them how their tax dollars were being used. The FY 2003 Budget began to
shift the emphasis to accountability. It did this most visibly in two
ways: (1) with the scorecard for the President's Management Agenda;
and (2) by including performance assessments for selected programs.
For each agency, the Budget included a table listing selected programs
with an assessment of each program's effectiveness and a brief explanation
of the assessment. While these ratings were based on OMB staff's knowledge
of the programs and their professional judgment, they weren't systematic.
Still, there were numerous instances in which funding decisions were driven
primarily by demonstrated results. Some examples from the FY 2003 Budget
include:
- For the Department of the Interior, funds were shifted from the National
Fish Hatchery System, a program that lacks clear direction and adequate
performance measures, to the National Wildlife Refuge System, a program
that effectively balances species conservation with public access.
- In the Department of Energy, funds from the Concentrating Solar Power
program were transferred to the Solar Building Technology Research program
because the latter showed promise for lowering the cost of solar water
heating and developing a zero-net-energy home, while the former still
cannot compete with conventional energy sources on cost.
- Despite wide variation among states, overall vocational rehabilitation
performance has improved in recent years, so the President's Budget
includes a new $30 million incentive grant program in the Department
of Education for state vocational rehabilitation agencies able to demonstrate
their ability to help people with disabilities get jobs.
These program ratings represented a major step toward increasing accountability
for taxpayer dollars and creating the results-oriented government envisioned
by the President.
Improving the Assessments for FY 2004
Shortly after the release of the FY 2003 Budget, we set out to strengthen
our process for assessing the effectiveness of programs by making it more
rigorous, systematic, and transparent. Transparency was particularly
important: we developed a process that would yield sound ratings and
make them available for public scrutiny. Working with agencies, we selected
a wide range of programs to assess in the first year, including some
Presidential priorities, programs of different sizes, and both high
and low performers.
Testing and Vetting the PART
OMB staff developed a blueprint for rating programs that was tested internally
this spring on 67 programs. This practical experience helped us determine
how the tool could be refined to make it more useful to the budget process
and to drive improvements in performance.
OMB staff conducted extensive outreach and solicited input from interested
parties both inside and outside the Federal government. They presented
the PART to various groups, including the National Academy of Public Administration,
the General Accounting Office, and Congressional staff.
For example, the National Academy of Public Administration (NAPA) convened
a workshop to review a completed rating for a program and provide feedback
on the process overall. NAPA assembled a panel of program experts to evaluate
how accurate the rating was in light of their extensive program knowledge.
Outreach sessions informed people about these activities and provided
a forum for useful input. From the outset, all PART materials have
been available on the OMB website.
Executive branch agencies have acted as partners throughout this effort,
and a great deal of the work has been devoted to maintaining a strong
partnership with them. The PART was vetted through the President's
Management Council, which offered several constructive suggestions. Although
it was not a requirement for the first year, some agencies began using
the PART to assess programs for their internal budget processes.
A consensus has developed that the PART has favorable prospects for focusing
attention on performance and results.
Performance Measurement Advisory Council
OMB also established the Performance Measurement Advisory Council (PMAC).
Chaired by Mort Downey, former Deputy Secretary of Transportation under
the Clinton Administration, who is testifying here today, this group of
six outside experts provides advice on budget and performance integration.
In its first two meetings, the PMAC has reviewed and provided suggestions
on various aspects of the program assessment rating process, as well as
how performance information will be presented in the Budget.
How the PART Works
The PART asks common-sense questions that program managers and budget
analysts should raise in the normal course of their work, and then it
generates scores based on the answers. Answers must be supported by an
explanation and evidence, with the intent of making the ratings objective
and impartial.
The PART examines different aspects of program performance to develop
a comprehensive rating. Achievement of programmatic goals is central to
the rating, accounting for half of the total score. The PART also looks
at efficiency and other management issues that agencies do not typically
highlight.
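To make the weighting above concrete, the following is a minimal sketch,
in Python, of how a PART-style weighted score could be computed. Only the
50-percent weight on program results reflects the description above; the
other section names and weights are assumptions for illustration, not the
actual design of the tool.

    # Hypothetical sketch of a PART-style weighted score, for illustration
    # only. The testimony states only that achievement of programmatic goals
    # accounts for half the score; the remaining section names and weights
    # below are assumptions.
    SECTION_WEIGHTS = {
        "purpose_and_design": 0.20,   # assumed
        "strategic_planning": 0.10,   # assumed
        "program_management": 0.20,   # assumed
        "program_results": 0.50,      # stated: results are half the score
    }

    def part_score(section_scores):
        """Combine per-section scores (each 0-100) into one weighted total."""
        return sum(SECTION_WEIGHTS[name] * score
                   for name, score in section_scores.items())

    # Example: a well-managed program with weak results still rates low
    # overall, which is the intended effect of weighting results at one half.
    example = {
        "purpose_and_design": 80,
        "strategic_planning": 70,
        "program_management": 90,
        "program_results": 40,
    }
    print(round(part_score(example)))  # 0.2*80 + 0.1*70 + 0.2*90 + 0.5*40 = 61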
While the PART generates a numeric score, there is no pretense that the
score represents a precise calibration of performance. Numeric scores,
though, do allow for comparisons from year to year, and they will allow
us to measure improvement and determine if our attempts to improve performance
are working. We are still considering various options for how this information
will be presented in the Budget, but these ratings will be disclosed in
conjunction with the FY 2004 Budget.
We recognize that some amount of subjectivity is inevitable when completing
a program rating. To minimize subjectivity, we prepared detailed guidance
and conducted training on the PART. We are also convening an interagency
group to review a sample of PARTs to ensure consistent application of
the rating criteria.
The PART and Budget Decisions
Some fear that the PART scores will translate mechanistically into proposed
funding levels. They will not: the PART will enrich budget analysis, not
supplant it.
Economic conditions, programmatic trends, national needs and interests,
and other factors must always be considered along with performance when
developing a budget.
Nonetheless, completing these program assessments will assist the development
of budget recommendations in numerous ways:
- Highlight areas that deserve management attention. What we have learned
so far from our testing of the PART is that the program assessment shines
a spotlight on areas of weak management. The PARTs should help identify
areas that require attention in agencies' planned actions to achieve
the goals of the President's Management Agenda.
- Inform resource allocations among competing programs. Completing
the PART will make it easier to compare similar programs, so that
informed decisions can be made to fund programs that work over those
that don't.
- Increase accountability for taxpayer dollars. Putting the PART into
effect will permit objective comparison of performance from year to
year. If resources are invested to fix a problem with a program and,
a couple of years later, there is still no improvement in the PART score,
it's advisable to rethink the investment.
Long-Term Outlook
This year, 20 percent of Federal programs will be rated, with plans to
build up over a period of about five years to assessing all Federal programs
each year. Next year agencies will have greater opportunities to evaluate
their programs and use the information to build their budget submissions
to OMB.
Changes already have been noted as a result of the program assessments.
For example, the PART has begun to attract greater attention to results
among top agency officials. It also has renewed attention inside the agencies
to the benefits of good performance measures. Much of the effort in
completing the PART involves extensive collaboration between OMB and
agencies to identify worthwhile results.
Through the Budget and Performance Integration Initiative, the Administration
has taken significant steps to improve the performance of Federal programs.
Congress intended these goals to be achieved when it enacted GPRA, and
we look to deliver results in the near term.