
Pay for Performance: Persuading Providers Using Lessons From Home and Abroad

By Margaret O'Connor

Abstract

Pay for performance (P4P) programs aim to align providers' payment incentives with quality of care. It is no surprise that many providers have reservations about the design and goals of P4P programs, which may affect their income. Provider opposition to P4P is rooted in three major areas of concern: the impact of P4P programs on their pocketbooks, professionalism, and patients. Policy makers at all levels who want to include P4P programs in their efforts to raise quality and lower costs must anticipate and respond to provider opposition along these lines. This paper will examine three case studies and analyze how P4P initiatives in California, state Medicaid programs, and the United Kingdom have been designed to accommodate these concerns. It will conclude with a discussion of whether and how current efforts to introduce P4P to Medicare are applying these lessons.


Introduction

One of the loudest critiques of the American healthcare system is its extremely high cost. The 2001 Institute of Medicine report, Crossing the Quality Chasm, cast doubt on whether the standard of care received under this system was in fact worth such high costs. The idea of “pay for performance” (P4P) was introduced as a potential remedy for these dual system ills.

P4P programs aim to align providers' payment incentives with quality of care. In other payment models, providers can maximize their income by adjusting the amount of care delivered (for example, by providing additional care under fee-for-service payments). P4P programs are designed to link part of provider payments to certain evidence-based quality measures, creating an environment where providers can maximize income by delivering the highest standard of care to their patients.

While theoretically supportive of financial incentives for quality, 1 many providers have reservations about the design, goals, and potential negative outcomes of P4P programs. Policy makers at the national, state, health plan, and employer level who want to include P4P programs in their efforts to raise quality and lower costs must anticipate and respond to provider opposition. This paper will outline three major areas of concern from the provider perspective, introduce three case studies of P4P programs, and analyze the features of the California, Medicaid, and United Kingdom programs that were designed to accommodate these concerns. Finally, it will conclude with a discussion of whether and how current efforts to introduce P4P to Medicare are applying these lessons.

Providers’ Concerns: Pocketbook, Professionalism, and Patients

It is no surprise that many providers have reservations about P4P programs. When discussing costs and incentives in a healthcare system, one must keep in mind that for providers, “cost” is income, and attempts to manipulate “incentive” may imply interference and control. A 2006 article in the Journal of Healthcare Management highlighted some of the conflicting interests between providers and payers regarding P4P programs:

- Issue 1: Payers pay for the cost of care for a group of patients. Providers think of payment in terms of price paid for their effort.

- Issue 2: Payers see the care delivery system as a whole and seek to hold “the enterprise” accountable for value and performance. Providers can only influence what is in their direct control.

- Issue 3: Payers assume money incentives can solve the problem of practice pattern variations. Providers choose practice patterns for many nonfinancial reasons. 2

This is a succinct summary of provider concerns about their incomes and clinical autonomy. Yet P4P raises provider concern in a third area, that of the diffuse effects of P4P policies on their patients. Consequently, to better understand the provider perspective, it is helpful to distinguish among the three areas in which providers fear the negative impact of P4P programs: their pocketbooks, their professionalism, and their patients.

Pocketbook. Perhaps providers' most obvious concern about P4P programs is the effect that any payment restructuring will have on their personal income. This concern has two main sources: criticism of the incentive itself (its size, structure, and application) and criticism of the performance measures being assessed. Providers may fear that P4P is simply the latest in a stream of cost-control measures aimed at ultimately reducing their payments. P4P incentives may vary in size, structure (bonuses, penalties, differential reimbursement), and application (whether they are paid based on attainment of a certain level, on improvement, or by comparison with other providers). Consequently, providers may have different opinions about a variety of incentive designs. Providers are also concerned about whether proposed changes will adequately compensate them for the additional time, personnel, and information technology investments that may be required for them to report on the measures.

The structure and selection of the measures to be assessed are also likely to affect providers unevenly. A 2007 survey of internists found that although 73% agreed that "If the measures are accurate, physicians should be given financial incentives for quality," only 30% agreed that "at present, measures of quality are generally accurate." 1 The same survey found that 88% of physicians believed that current measures were not accurately adjusted for the medical status of patients, while 85% believed they were not adequately adjusted for socioeconomic status. Measures that do not adequately adjust for the characteristics of patient populations may mean that some providers perform less well on certain measures due to factors outside their control.

Provider concerns about the impact of P4P on their pocketbooks are reflected in the American Medical Association (AMA)'s statement of principles and guidelines for P4P programs. This statement gives the endorsement of the medical profession only to those programs that provide fair and equitable incentives (through the provision of new, additional funds), are subject to the best available risk adjustment, reimburse providers for administrative costs associated with the measures, and distribute rewards for achievement and improvement on targets (not through rankings). 3

Professionalism. Providers are also concerned that measuring and paying according to performance may impinge upon professional autonomy. Many common performance measures (such as those outlined in the Healthcare Effectiveness Data and Information Set, known as HEDIS measures) are process-based, meaning they relate to the sequential process of diagnosis and care delivery. Although scientific evidence may indicate certain best practices for dealing with most or average patients, such practices may not be appropriate for all patients. Doctors are likely to resist any "incentive structure" that appears to dictate care for their individual patients.

Similarly, while certain measures (such as HEDIS measures) are widely accepted, in other cases there remains much debate in the medical community about what constitutes best practice. The "evidence" upon which performance measures are based may be disputed. Providers may resist linking their pay to performance measures that they do not believe accurately represent the highest standard of care that they can deliver to their patients. Not all variations in care can be linked to variations in quality, and according to the AMA, good P4P programs will ensure quality while permitting variations. 3 Regionally and internationally, these variations can be linked to such factors as differences in training and genuine divergence of professional opinion. Perceived attacks on medical professionalism may ignite resistance to P4P programs.

The physician culture is an expert one and places a high value on autonomy. In such cultures, financial rewards or penalties can be viewed as a threat to pursuing quality. Imposing a pay-for-performance penalty on a practice pattern rooted in a training experience can be compared to fining someone for believing in God; doing so is not likely to change belief but is likely to elicit anger. 2

Patients. In their role as patient advocates, providers also concern themselves with the impact P4P may ultimately have on their patients. Providers are concerned with potential negative outcomes of P4P incentives at the clinical and systemic levels.

Because P4P programs are likely to be implemented only for areas of care where there is a consensus on quality measures, there is a potential to skew care towards those "incentivized" areas. Rosenthal and colleagues find it "inevitable" that "the dimensions of care that will receive the most attention will be those that are most easily measured, and not necessarily those that are most valued." 4 This statement reflects the view of the 61% of physicians who agree that "measuring quality will divert physicians' attention from important types of care for which quality is not measured." 1 Others fear that this problem of misplaced attention may manifest itself during the clinical encounter, as doctors adhere to a checklist of measures rather than communicating with their patients. 5

Provider response to payment incentives could ultimately impact patient access. Because P4P programs are not implemented universally, providers may choose to leave P4P programs if incentives are insufficient or the costs of implementation are too high. This could restrict access and provider choice among patients covered by plans with P4P programs. Alternatively, P4P programs whose measures are inadequately risk-adjusted "may lead physicians to avoid high-risk patients"; 82% of providers harbor this fear. 1 The AMA's principles reflect these concerns about the effect of P4P as well and stipulate that good programs will support the patient/physician relationship and must not limit access or cause patient deselection. 3

A perceived threat to their pocketbooks, professionalism, or patients in any health policy proposal is likely to stimulate objections among providers. The particular concerns raised by P4P have been described above. Any policy maker intent upon implementing P4P programs can anticipate similar opposition in these three areas, and should craft their proposals to respond to these concerns.

Case Studies: California, Medicaid, and the United Kingdom

P4P programs have been attempted in a wide variety of contexts. In California, state Medicaid programs, and the United Kingdom, P4P programs are widely used and accepted. Each used a unique combination of innovations and strategies to accommodate provider concerns, which has contributed to the programs' continuation and acceptance by the provider community. Future advocates of P4P can draw lessons from their examples on ways to design programs with widespread support.

These cases were selected because data on their development, structure, and implementation are available and accessible. Each program has a broad jurisdiction and plans to move forward. The breadth of these programs implies that consensus from a diverse set of providers was needed, which should enhance the applicability of lessons gained from these experiences to a variety of contexts. The selection of these cases is not exhaustive (other examples include the Leapfrog Group, Bridges to Excellence, and Australia's program), nor is it meant to imply that important lessons cannot be gained from unsuccessful P4P experiments. Because a variety of data sources are used to illustrate the highlights of each program, direct comparisons across every dimension may not always be possible. Despite these limitations, the sketches below provide a comprehensive picture of each program and many opportunities to learn.

California: The Integrated Healthcare Association. Backlash against the managed care initiatives of the 1990s led many private California health plans to turn to P4P as a way to promote quality and control costs. These early programs faced many obstacles, including minimal funding for incentives and sample sizes too small to lend credibility to the results of their quality-improvement efforts. Providers were frustrated by competing sets of metrics and found themselves in an arena of "dueling report cards" and one-upmanship. 6

At this time, the health policy context of California featured visible interest groups: integrated provider associations clearly voicing the provider position, the Pacific Business Group on Health representing purchasers, and the California Association of Health Plans. Formed in 1996, the Integrated Healthcare Association (IHA) was a forum representing the voices of all of these stakeholders. Seven major health plans were IHA members, accounting for 60% of California's market share. 7 In 2001, California's providers asked the IHA to help devise a state-wide P4P program that would be universally acceptable. Before designing a specific model, the IHA agreed on a set of guiding principles: voluntary participation in the program, publicly reported scorecards on a common set of measures, significant incentives for participation offered by health plans, and a collaborative model of decision making. 6

The IHA's Pay for Performance Program first collected data in 2003 and distributed payments in 2004. Although each plan structured and sized its payments differently, all plans used a common set of measures. These measures spanned three categories, with opportunities for payment in each: clinical, patient experience, and technology. A technical team was given the task of developing the measures and included staff from the National Committee for Quality Assurance (NCQA), the sponsor of the HEDIS measures. Clinical measures emphasized preventive care and process measures. The weighting of measures for the initial payments was 50% clinical, 40% patient experience, and 10% technology, but by 2006 it had been changed to a 50/30/20 formula. For 2005, performance-based compensation accounted for an average of 1.5% of total physician group compensation. Six of the seven participating plans based these payments on rankings. "New money" was introduced to the system to fund the program; plans raised funds by increasing premiums, achieving greater administrative efficiencies, offsetting increases to capitation payments, and reallocating funds from other incentives. The IHA's five-year plan calls for increasing the share of P4P compensation to 10%. Although the initial goal of the program was not to reduce existing payments to any provider, the proposed increases will require "alternative approaches."
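To make the weighting formula concrete, the sketch below shows how a physician group's composite score might be computed under the 2006 50/30/20 weights and translated into a bonus. The domain scores, the payout rule, and the use of the 1.5% figure as a maximum bonus share are illustrative assumptions; in practice each participating plan set its own payout methodology.

```python
# Illustrative only: composite scoring under the IHA's 2006 measure weights.
# Domain scores and the payout rule are hypothetical; each participating plan
# actually set its own methodology, and six of seven based payments on rankings.

WEIGHTS_2006 = {"clinical": 0.50, "patient_experience": 0.30, "technology": 0.20}

def composite_score(domain_scores: dict) -> float:
    """Weighted average of domain scores, each on a 0-100 scale."""
    return sum(WEIGHTS_2006[d] * domain_scores[d] for d in WEIGHTS_2006)

def bonus_payment(domain_scores: dict, base_compensation: float,
                  max_bonus_share: float = 0.015) -> float:
    """Scale a hypothetical bonus pool (~1.5% of total compensation, echoing the
    2005 average cited above) by the group's composite score."""
    return base_compensation * max_bonus_share * composite_score(domain_scores) / 100

# Example: a group scoring 80/70/90 with a $10 million compensation base.
scores = {"clinical": 80.0, "patient_experience": 70.0, "technology": 90.0}
print(round(composite_score(scores), 1))          # 79.0
print(round(bonus_payment(scores, 10_000_000)))   # 118500
```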

Medicaid: The Collective Experience of State Programs. Twenty-eight state Medicaid programs currently operate at least one P4P program, and many of these programs have been in existence for five years or more. As of April 2007, plans to launch additional programs were expected to bring that total to 43 states by 2011. A large majority (70%) of these programs operate in managed care or primary care case management environments. 8

In contrast to California's private P4P initiatives, Medicaid is a publicly financed program whose beneficiaries are a traditionally marginalized group, the poor. Interestingly, a Commonwealth Fund survey of state Medicaid directors found that these policy makers were concerned with the long-term impact of P4P programs on their beneficiaries. For example, improving quality was more important to these programs than controlling costs: controlling costs ranked lowest, with only 14% of respondents naming it a "very important" attribute of a good P4P program. Features that garnered more support were that measures be nationally recognized (35%), that they provide opportunities for continuous quality improvement rather than a one-time target (62%), and that they be scientifically sound (78%). 8 These priorities coincide with potential provider concerns.

This convergence of interests was widely acknowledged: "In the context of Medicaid's traditionally lower payment rates and smaller provider networks, maintaining good relations with all providers is important to ensure adequate capacity in plans' networks." 9 This statement by experts accords with the views of the 69% of Medicaid directors who said that penalizing providers would be a detriment to a successful P4P program. However, only 19% worried that reducing the number of providers would be an actual consequence of a P4P program. 8 A possible interpretation of these findings is that Medicaid directors are conscious of this potential adverse effect but are confident in their ability to design programs that provide sufficient counterbalance.

The content of Medicaid P4P programs varies. HEDIS and structural measures of quality predominate, used in 69% and 60% of programs, respectively. The use of these measures is likely due to their scientific basis and the feasibility of data collection. By contrast, patient experience measures were used by only 37% of programs. Bonuses are the most common form of incentive in existing programs (69%), followed by penalties (34%) and differential reimbursement (31%). Reflecting programs' negative experiences with imposing penalties on providers, only 7% of new programs propose to structure rewards in this manner. Of existing Medicaid P4P programs, 85% distribute incentives based on attainment, 33% based on improvement, and 21% based on peer comparisons (rankings). Plans for new programs indicate some shift in this pattern, reducing the share of programs that use peer comparisons to 9% and increasing the use of attainment and improvement to 91% and 55%, respectively (note that some programs compensate using a mix of approaches). The increasing move toward health information technology has led some programs to introduce a "pay for participation" component. A final trend in Medicaid P4P programs is a move to join with other payers, employers, consumers, and providers in state-wide or regional P4P and quality improvement initiatives.
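The practical difference among attainment, improvement, and peer-comparison (ranking) awards can be illustrated with a brief sketch. The target, improvement margin, and quartile cutoff below are hypothetical; actual Medicaid programs define these parameters in their own contracts.

```python
# Illustrative only: three ways a Medicaid program might trigger a bonus for a
# single measure. Threshold values and bonus logic are hypothetical.

def attainment_bonus(rate: float, target: float = 0.80) -> bool:
    """Reward crossing an absolute performance target."""
    return rate >= target

def improvement_bonus(rate: float, prior_rate: float, min_gain: float = 0.05) -> bool:
    """Reward year-over-year improvement, even below the absolute target."""
    return (rate - prior_rate) >= min_gain

def peer_comparison_bonus(rate: float, all_rates: list, top_share: float = 0.25) -> bool:
    """Reward only providers ranked in the top quartile of their peers."""
    cutoff = sorted(all_rates, reverse=True)[max(0, int(len(all_rates) * top_share) - 1)]
    return rate >= cutoff

peers = [0.62, 0.68, 0.71, 0.74, 0.78, 0.81, 0.85, 0.90]
print(attainment_bonus(0.74))               # False: below the 80% target
print(improvement_bonus(0.74, 0.66))        # True: improved by 8 points
print(peer_comparison_bonus(0.74, peers))   # False: not in the top quartile
```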

The United Kingdom: Quality and Outcomes Framework. Since the birth of the National Health Service (NHS) in 1948, the payment of British general practitioners (GPs) has been determined by periodic contract negotiations between the British Medical Association (BMA) and the central government, with major revisions made only in 1966 and 1990. The traditional General Medical Services Contract combined capitation (40%), salary (30%), fee-for-service (15%), and information technology (15%) components. 10 In the 1990s, incentives were incorporated into fee-for-service payments to encourage vaccination and Pap screening. In 2004, a new contract introduced P4P on a large scale. According to Martin Roland, a British health policy expert who advised negotiations of the 2004 contract, both the academic and political context facilitated the reform. In contrast to the 1980s, when the idea that variations in medical practice existed or had a negative impact on quality was largely rejected, in the 1990s "it became increasingly possible both to define high quality care and to provide methods that could be used to measure some aspects of quality." 11 News media attention to the UK's tightly constrained health spending and scandals over substandard levels of care created the political will for reinvesting in the NHS. The concept of P4P arrived in the right place at the right time:

To tie a substantial proportion of physicians’ income to the quality of care they provided would produce winners and losers. However, the British Medical Association was unlikely to negotiate a change in remuneration that would result in the loss of income for large numbers of its members. Therefore, the scale of the change that came about was possible only because in 2000 the government of the United Kingdom decided to provide a substantial increase in health care funding. 11

After eighteen months of negotiations, the new GP contract rolled out in April 2004, having been approved by 79.4% of physicians. The P4P program, called the Quality and Outcomes Framework (QOF), made 18% of GP income contingent on quality measures. Providers could earn up to 1,050 points across 146 indicators in seven areas. A technical team developed the measures with the intention that they be kept to the minimum number necessary to accurately assess care and that they be based on information that was routinely collected. The points available were also intended to reflect the workload required. Thus, providers can receive points for complying with process measures for a certain threshold of patients, additional points for patient compliance, and still more points based on "intermediate" patient outcomes. The QOF also introduced "exception reporting": GPs may exclude patients from eligibility for certain measures based on such factors as preexisting conditions, concurrent drug treatments, or noncompliance. Regional health system oversight boards called Primary Care Trusts are responsible for monitoring GPs to prevent "gaming" of the system through excessive use of exception reporting.
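The interaction between achievement thresholds and exception reporting can be sketched as follows. The point value, thresholds, and linear scaling are simplifying assumptions for illustration, not the published QOF rules for any particular indicator; the key mechanism is that exception-reported patients are removed from the denominator before achievement is calculated.

```python
# Simplified sketch of QOF-style scoring for one indicator. The thresholds,
# point value, and linear scaling are illustrative assumptions only.

def indicator_points(eligible: int, achieved: int, excepted: int,
                     max_points: float, lower: float = 0.25, upper: float = 0.90) -> float:
    """Points scale with achievement between a lower and upper threshold.
    Exception-reported patients are removed from the denominator."""
    denominator = eligible - excepted
    if denominator <= 0:
        return 0.0
    achievement = achieved / denominator
    if achievement <= lower:
        return 0.0
    if achievement >= upper:
        return max_points
    return max_points * (achievement - lower) / (upper - lower)

# A practice with 200 eligible patients: 150 meet the measure, 20 are
# exception-reported (e.g., contraindications or patient noncompliance).
print(round(indicator_points(200, 150, 20, max_points=30), 1))  # 26.9
```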

Initial experience with the QOF has raised both anticipated and unanticipated issues in the UK. In the first year under the QOF, the NHS paid out $1.8 billion in "new money," equivalent to a 20% increase in the NHS family practice budget. GPs exceeded the NHS's estimates for compliance with the new measures, requiring it to pay $700 million more than anticipated and leading architects of the QOF to conclude that, "in retrospect, the government paid out more than it needed to, to achieve the levels of quality." 12 This miscalculation was the result of incorrect assumptions either about the baseline of care already being delivered by British GPs or about how hard they would work to meet the measures. There is also growing concern that P4P might result in the fragmentation of GP care, with different GPs in a practice narrowing their focus to providing high-quality care in certain areas rewarded under the QOF. If significant, this trend could have a negative impact on the coordination of care within a practice, particularly for patients with co-morbidities.

Discussion: Persuading Providers

Experiences from California, Medicaid and the United Kingdom constitute an abundant source of lessons to be learned about P4P programs. Each case made different and innovative attempts to achieve consensus with providers and assuage concerns about threats to pocketbooks, professionalism, and patients.

Provider support for P4P in both California and the UK was contingent upon the introduction of "new money" to the system. Avoiding the creation of explicit "losers" was a lesson learned by Medicaid as well, where the use of penalties and negative incentives for providers is rapidly diminishing. Both California and Medicaid tried to provide some relief to providers for the costs of participation in P4P. For example, HEDIS-based measures predominate in both cases, reducing the burden of additional data collection. The IHA program's low thresholds in the technology category reward technology investment and aim to encourage reinvestment in quality. 6 With similar goals, several Medicaid programs have instituted "pay for participation" components. These program features were included in efforts to convince providers that their pocketbooks would not be threatened by P4P programs, and that they in fact stood to gain financially.

P4P policies included other components to assure providers that the programs would not impinge upon their professional ethos. The use of HEDIS measures by California and state Medicaid programs precluded the opposition that less accepted or less scientific measures might have roused. Additionally, like the UK, California's IHA created a technical team to allow for provider input into the determination of measures, distinct from the team determining the incentive structure. This division of labor may have helped minimize any perceived or actual bias toward certain providers. In the QOF, an aggregate score allows providers to maintain their autonomy by prioritizing focus areas while still receiving rewards. Exception reporting also allows providers to make diagnostic and treatment decisions on a case-by-case basis without penalty.

P4P initiatives must also address providers' anxieties about their impact on patients. Exception reporting in the UK effectively enabled providers to perform their own risk adjustment on patients, reducing the incentive to turn away patients who might negatively affect their quality scores. In the California and Medicaid programs, patient surveys provide some assurance to providers that any negative effect on the clinical experience will be monitored. The fear that P4P would drive a significant number of providers out of program networks was precluded in California by the fact that the IHA plans accounted for 60% of the private insurance market; trying to exit these networks could have had deleterious effects on providers' supply of patients, particularly in areas where IHA plans dominated. The directors of state Medicaid programs made explicit attempts to ensure that their P4P programs did not negatively impact beneficiaries' access to care. Medicaid programs are consequently moving away from features unpopular with providers, such as penalties and payments based on rankings.

These programs thus incorporate characteristics expressly tailored to addressing the three levels of provider reservations about P4P programs. They offer a diverse menu of options from which the authors of future P4P programs can learn.

Conclusions: The Unfolding Case of Medicare

In recent years, P4P has been proposed in the discourse surrounding Medicare reform. In an open letter published in Health Affairs in 2003, quality crusader Donald Berwick and colleagues first called for Medicare to lead the movement towards P4P:

At issue is not the dedication of health professionals but the lack of systems- including information systems- that reduce error and reinforce best practices…We have concluded that such systemic changes will not come forth quickly enough unless strong financial incentives are offered to get the attention of managers and governing boards. As the biggest purchaser in the system, the Medicare program should take the lead in this regard. 13

Five years later, P4P programs in the private sector and state Medicaid programs have taken off, while Medicare initiatives move tortoise-paced through Washington.

In 2006, the Medicare Payment Advisory Commission (MedPAC) unveiled design principles, measure criteria, and an implementation strategy for a P4P program for Medicare. The proposal was designed to be budget-neutral, affecting only 1-2% of payments and "shifting the incentives for payment, not the level." 14 It would reward providers both for attaining a certain level of care and for improvement. Collection of measures was not to be unduly burdensome, and the measures themselves were to be evidence-based, focused on aspects of care in need of improvement that providers could affect, and subject to "appropriate" risk adjustment. MedPAC's two-step implementation process included an initial 2-3 year period in which quality would be measured, existing NCQA practices enhanced, and activities associated with information technology use rewarded. The second stage would establish and implement a set of clinically appropriate measures. The philosophy behind the MedPAC proposal was simple: "Although incentives for quality might not reduce costs, MedPAC believed that Medicare should, at a minimum, get the best value possible for the dollars it was spending." 14
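MedPAC's "budget neutral" framing implies a withhold-and-redistribute arithmetic: a small slice of existing payments funds the rewards rather than new money. The sketch below illustrates that logic with two hypothetical providers and a 1.5% withhold (within MedPAC's stated 1-2% range); the redistribution rule itself is an assumption for illustration, not MedPAC's specification.

```python
# Illustrative only: budget-neutral redistribution of a small payment withhold,
# in the spirit of a 1-2% pool. Provider payments, scores, and the exact
# redistribution rule are hypothetical.

def redistribute(payments: dict, scores: dict, withhold: float = 0.015) -> dict:
    """Withhold a share of each payment, then return the pooled funds in
    proportion to payment-weighted quality scores. Total spending is unchanged."""
    pool = sum(p * withhold for p in payments.values())
    weights = {k: payments[k] * scores[k] for k in payments}
    total_weight = sum(weights.values())
    return {k: payments[k] * (1 - withhold) + pool * weights[k] / total_weight
            for k in payments}

payments = {"A": 1_000_000.0, "B": 1_000_000.0}
scores = {"A": 0.9, "B": 0.6}          # provider A performs better than B
adjusted = redistribute(payments, scores)
print({k: round(v) for k, v in adjusted.items()})  # {'A': 1003000, 'B': 997000}
print(round(sum(adjusted.values())))               # 2000000 (budget neutral)
```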

MedPAC's proposal was accompanied by a provision in HR 6111, the Tax Relief and Health Care Act of 2006, that called for an additional fee increase of 1.5% for care provided by physicians who reported on a set of measures. 15 This compromise was hammered out in the context of threatened 4.4% reimbursement cuts under the current Medicare payment system. Part of the compromise was the AMA's agreement to assist in the development of quality measures. 16

Opposition to P4P in Medicare remains fierce because of the program's size and its prohibition against interfering in the practice of medicine. These factors cut to the quick of providers' pocketbook, professionalism, and patient concerns. On the one hand, as in the California case, Medicare's large market share makes it unlikely that many providers would find it desirable to stop seeing Medicare patients because of the burdens of a P4P program. But providers are also concerned about an inequitable distribution of P4P outcomes. According to William Jessee, president and CEO of the Medical Group Management Association, "The reason some hospitals and practices don't perform well is that they lack the resources to improve. So if you transfer resources from poorer performers, you end up exacerbating the problem rather than solving it." 17 A Medicare P4P plan is also likely to raise provider hackles over the specter of "government-run medicine." By enlisting the help of the AMA in developing measures and emphasizing the role of already-established measures, the MedPAC proponents of P4P are clearly trying to accommodate concerns about professional autonomy. The sticking point remains the effect of P4P on provider pocketbooks. By touting their proposal as "budget neutral" and introducing a "pay for participation" plan, MedPAC is attempting to convince providers that P4P is not simply a euphemism for pay cuts. Yet the daily rhetoric about Medicare's cost "crisis" makes these overtures seem thin. In California and the UK, where large market shares were at stake, the success of P4P proposals depended on the introduction of new money.

This July, physicians rallied in Washington against a proposed 10.6% Medicare payment reduction. 18 SB 2785, the Save Medicare Act of 2008, gives policy makers an 18-month window to find a solution to Medicare's flawed payment system. For those who want to see P4P become part of this reform, providers' pocketbooks, professionalism, and patients must remain the three areas of concern around which consensus is built. Policy makers should think critically about the lessons learned from the California, Medicaid, and UK P4P programs and apply them in creating a Medicare program that will garner support from a wide variety of stakeholders.

References
1. Casalino, Lawrence P., et al. "General Internists' Views on Pay-for-Performance and Public Reporting of Quality Scores: A National Survey." Health Affairs. 26 (2007): 492-499. p. 494.
2. Safavi, Kaveh. "Pay for Performance: Finding Common Ground." Journal of Healthcare Management. 51.1 (2006): 9-12. Expanded Academic ASAP. Accessed 23 July 2008. http://find.galegroup.com.ezproxy.library.tufts.edu/itx/start.do?prodId=EAIM. p. 9.
3. American Medical Association. "Pay-for-Performance Principles and Guidelines." Accessed 3 Aug 2008. http://www.ama-assn.org/apps/pf_new/pf_online.
4. Rosenthal, Meredith B., Rushika Fernandopulle, HyunSook Ryu Song, et al. "Paying for Quality: Providers' Incentives for Quality Improvement." Health Affairs. 23, no. 2 (2004): 127-141. Accessed 5 Aug 2008. http://content.healthaffairs.org.ezproxy.library.tufts.edu/cgi/reprint/23/2/127. p. 139.
5. Galvin, Robert. "Pay for Performance: Too Much of a Good Thing? A Conversation with Martin Roland." Health Affairs. 25 (2006): w412-w419. Accessed 23 July 2008. http://content.healthaffairs.org.ezproxy.library.tufts.edu/cgi/reprint/25/5/w412. p. w416.
6. McDermott, Steve and Tom Williams, eds. Advancing Quality Through Collaboration: The California Pay for Performance Program. Integrated Healthcare Association, February 2006. Accessed 5 Aug 2008. http://www.iha.org/wp020606.pdf. p. 3.
7. Rosenthal, Meredith B., Rushika Fernandopulle, HyunSook Ryu Song, et al. p. 133.
8. Kuhmerker, Kathryn and Thomas Hartman. "Pay-for-Performance in State Medicaid Programs: A Survey of State Medicaid Directors and Programs." The Commonwealth Fund. April 2007. Accessed 5 Aug 2008. http://www.commonwealthfund.org/publications/publications_show.htm?doc_id=472891.
9. Felt-Lisk, Suzanne, Gilbert Gimm and Stephanie Peterson. "Making Pay-for-Performance Work in Medicaid." Health Affairs. 26 (2007): w516-w527. Accessed 5 Aug 2008. http://content.healthaffairs.org.ezproxy.library.tufts.edu/cgi/reprint/26/4/w516. p. w517.
10. Smith, Peter C. and Nick York. "Quality Incentives: The Case of UK General Practitioners." Health Affairs. 23 (2004): 112-118. p. 113.
11. Roland, Martin. "Linking Physicians' Pay to the Quality of Care: A Major Experiment in the United Kingdom." The New England Journal of Medicine. 351 (2004): 1448-1454. PubMed. Accessed 23 July 2008. http://content.nejm.org.ezproxy.library.tufts.edu/cgi/reprint/351/14/1448.pdf. p. 1448.
12. Galvin, Robert. "Pay for Performance: Too Much of a Good Thing? A Conversation with Martin Roland." Health Affairs. 25 (2006): w412-w419. Accessed 23 July 2008. http://content.healthaffairs.org.ezproxy.library.tufts.edu/cgi/reprint/25/5/w412. p. w417.
13. Berwick, Donald M., et al. "Paying for Performance: Medicare Should Lead." Health Affairs. 22, no. 6 (2003): 8-10. Accessed 5 Aug 2008. http://content.healthaffairs.org.ezproxy.library.tufts.edu/cgi/reprint/22/6/8. p. 8.
14. Milgate, Karen and Sharon Bee Cheng. "Pay-for-Performance: The MedPAC Perspective." Health Affairs. 25 (2006): 413-419. Accessed 3 Aug 2008. http://content.healthaffairs.org.ezproxy.library.tufts.edu/cgi/reprint/25/2/413. p. 415.
15. Wilensky, Gail R. "Pay for Performance and Physicians: An Open Question." Healthcare Financial Management. 61.2 (2007): 40-41. Academic OneFile. Accessed 23 July 2008. http://find.galegroup.com.ezproxy.library.tufts.edu/itx/start.do?prodId=AONE. p. 40.
16. Terry, Ken. "Score One for CMS." Medical Economics. 16 June 2006: 44-46. Accessed 3 Aug 2008. http://find.galegroup.com.ezproxy.library.tufts.edu/itx/start.do?prodId=AONE.
17. Terry, Ken. "Tackling Pay for Performance: Professional Associations are Moving to Help Ensure It's Fair to All Physicians." Medical Economics. 6 May 2005: 16. Academic OneFile. Accessed 23 July 2008. http://find.galegroup.com.ezproxy.library.tufts.edu/itx/start.do?prodId=AONE.
18. "Physicians Participate in Capitol Hill Rally to Support Medicare Payment Reform." American Family Physician. 77 (2008): 1209. Expanded Academic. Accessed 3 Aug 2008. http://find.galegroup.com.ezproxy.library.tufts.edu/itx/start.do?prodId=EAIM.

