EQUAL PROTECTION
EQUITY AND ELIGIBILITY
An Equity and Fairness Overview of Non-Universal Public Benefits
Written in mid-2007
A public sidewalk is the sort of thing that many people think of when they think of the government. It is the common property of everyone in the community, and everyone in the community, regardless of who they are and where they are from, has an equal right to walk down it. On a public sidewalk, everyone is equal. Most public services, however, are not like a public sidewalk. Less than 20 percent of all public spending – federal, state and local -- is on services that everyone can experience, or have an equal right to experience, on any given day. Even adding in the military, space research, legislators and other central administrators, categories of government that do not benefit people directly but, one might argue, benefit all people indirectly to an equal extent, “general” services still account for just one third of all public expenditures. Debts and pension payments, for which no public services are received, account for about one-sixth. About half of all public spending goes only to those who are eligible to receive services and benefits, based on criteria, rules, regulations, and applications accepted and denied. The next three weeks of posts will be a review of public benefits and services that aren’t available to everyone, and of the fairness of the philosophical foundation and practical criteria that are used to determine who does and does not get them. Read them all, and I personally guarantee you’ll never think about any public policy issue the same way again.
The theoretical criteria include age, means (income and wealth), and special needs (disability). The actual criteria also include the “hope no one shows up” and “thin edge of the wedge” strategies, “feudalism” (tenure), the “golden rule” (benefits limited to those with higher incomes), and “merit” evaluations. The overview shows that the further one moves away from universal benefits and services, the more complicated, unpopular, and inequitable such benefits become. And it is increasingly the better off, the better organized, the better connected, and those with a greater sense of entitlement, not the needy, who are accessing such benefits and services. And no one, not “liberals,” not “conservatives,” not Republicans, not Democrats, is thinking about this systematically or confronting its implications. And this is not an accident. My next several posts will be an attempt to do so.
Age-Restricted Benefits, and the Lifecycle of Need
Age is the simplest and most common of the criteria used to restrict eligibility. In 1998 about one-third of all public spending was on services that were available exclusively, or primarily, to the young or old, regardless of their income or other circumstances. These include elementary and secondary education (10.9 percent), Social Security for the retired (11.2 percent), Medicare (6.6 percent), and public higher education (3.9 percent). Public higher education is theoretically available to everyone, but in practice is used primarily by the young. Social Security is only available to those 62 and over, Medicare to those 65 and over, and their share of public spending is going to soar in the next few decades. While the rules vary, one generally becomes ineligible for public elementary and secondary education after a certain age, typically 21, even if one has not finished high school. There are additional services restricted to the young and old, such as senior citizen centers, and youth day camps and recreation activities, but spending on them is not tabulated separately from other, similar activities. These services may be enough, however, to push the overall share of public funding allocated to age-restricted services and benefits past one-third of all government spending.
Public education for the young and retirement income and health care for the elderly, along with the military and interest on federal, state and local public debts, are the most costly public programs. The age-restricted programs are also the most popular, for four reasons.
First, most people accept that children and the elderly deserve special assistance and consideration. For everyone, healthy adulthood is bordered on each side by helplessness. One of the main functions of the government is to collect resources, in the form of taxes, from healthy, working adults and transfer them, in the form of public services and benefits, to the elderly and children, and to those who care for them. So while most adults do not benefit from Social Security or public schools, they generally have parents, grandparents, or children who do. A family, as well, transfers resources – time, work and money – from healthy working-age adults to the elderly and children. The equity issues at the border between social obligations and family obligations to the young and old are a different discussion, a major one in itself.
Second, age-restricted public services are not just for one’s parents and children. They are for one’s self. An adult may not be receiving age-restricted services and benefits today, but he or she probably did receive them during childhood, and expects to receive them later in old age. One receives these services at one point in the lifecycle of need, and pays for them at another. Only a hypocrite would resent the burden of paying for services for others that he enjoyed in the past, or expects to enjoy in the future. Both this expectation and the equity of age-restricted services, however, depend on a pact between the generations, with each being willing to shoulder its share of the burden during its healthy, working adulthood to provide greater or equal benefits to those who came before, and to those who will come after. The current status of that pact, the “generational equity” issue, is also a major, separate discussion in itself.
Third, age-restricted benefits are, in a sense, nearly as universal as a public sidewalk. Virtually everyone who is theoretically entitled to receive them actually does. Just about everyone age 65 and over receives Social Security and Medicare. Public schools educate 89 percent of school-aged children, and public universities educate 78 percent of those in college. No inside information is required to access these public services, since everyone knows about them; there are few applications, and no waiting lists. While only 35 percent of all Americans benefit from these services, therefore, almost everyone does at one point or another. Money spent on public schools, public universities, Social Security and Medicare is not thought of as money spent on “special interests,” or on people working the system. These are government functions in support of everyone, and are therefore thought of as “fair.”
Finally, age-restricted benefits are difficult to scam, and very few people who are not entitled receive them. There are exceptions. There were news reports in prior years of dogs and dead people “receiving” social security checks that were cashed by others. There was a news report of an adult who was arrested for attending high school by impersonating a teenager. Such cases are rare, however, for two reasons. First, one’s age is not subjective. One is born on a given date, it is a given date today, and the difference between the two is one’s age. One cannot otherwise justify, and does not have to otherwise justify, his or her right to receive age-restricted public services. And second, one’s age is difficult to fake. The birth certificate, the basic proof of age, is a fairly reliable public record, and personal appearance is a fairly reliable check on it. There is an upfront industry – cosmetics – to make the old look younger, and an underground industry – fake I.D.s – to make the young appear to be older. One could attempt the reverse, but for only so long.
The key question for age-restricted benefits is one of generational equity. It is widely understood that those generations paying for Social Security and Medicare today, and those that follow, will get much less than those receiving these services today, after paying much more for them. In fact, it is virtually inevitable. But that hasn’t stopped those generations now in charge from demanding more senior citizen benefits for themselves, such as the recently enacted prescription drug benefit for Medicare, and deferring costs to the future, through rising public debts. Educational services for the young have also expanded, with kindergarten moving to full day and pre-kindergarten being added in some places. But this only partially offsets the need for both spouses in a couple to work, a need driven in part by the much higher payroll taxes in effect since 1983. The current generation in charge will have a lot to answer for, and is deferring providing those answers as long as possible. It is possible, however, that at some time in the future age-restricted benefits, and those who receive them, will be about as popular as welfare and its recipients were in 1995.
I’ll write more about generational equity issues some time in the future. But let’s move on to other far more controversial types of public spending and eligibility criteria, in the next two posts.
Means: Limiting Government to Those at the Bottom, At Least In Theory
Here in the United States, people have decided that the elderly are entitled to a modest level of retirement income and health care, and the young to an education, at public expense. Those who want the government to be “smaller” often argue that these services should be delivered in a different way, with the government financing the benefits but the private sector providing them. But they dare not overtly challenge the right to receive these universal entitlements, so universally popular are they with the American people. In other developed societies, those with “bigger” governments, additional human needs are also considered the responsibility of society as a whole, rather than of each individual person or family. In those societies, for example, virtually everyone receives government-funded health care, not just the elderly. One example is Canada, right next door. Government-funded housing is also more common in other places.
While the United States does not provide universal food, clothing, and health care via government programs, few Americans are willing to see people in their own communities starve for lack of food, freeze to death for lack of shelter, or die or become crippled by readily preventable diseases and easily treated injuries. The American compromise has been to assume that most non-elderly people will buy food, housing and health care for themselves, but for the government to provide for those who do not have the means to afford them. Means-tested benefits are thus a substitute for universal benefits, and a way to avoid deprivation at, theoretically, a lower public cost.
In 1998, 12.1 percent of U.S. public spending was on public services with eligibility restricted based primarily on means; that is, income and wealth. Of this amount, 2.7 percent was spent on services with eligibility based on age as well as means. The most costly programs for the low-income young, at $10 billion each, were free or reduced price school lunches and breakfasts, and college student loans and grants. The most costly program for the low-income elderly, by far, is Medicaid at $40 billion in 1998 and far more today. The remaining 9.4 percent was spent on programs that benefit working-age adults and their families based on their means alone. The most expensive are Medicaid payments to private health care providers, at 1.4 percent of total spending, and public health and hospital services, at 3.9 percent of total spending. The latter tend to be directed to the poor and located in poor communities, and today are funded primarily by Medicaid. Other means-tested public benefits that cost more than $15 billion in 1998 include the Earned Income Tax Credit, Food Stamps, unemployment insurance payments, Temporary Assistance to Needy Families (welfare), and Section 8 housing rent subsidies.
The criteria used to determine eligibility for means-tested benefits are varied and complex, but generally consist of limits on the income one may earn, and on the wealth or possessions one may have, while receiving benefits. To be eligible for food stamps at the time I researched this question, for example, a household must have had $2,000 or less in bank accounts and other liquid financial resources, though it may have a home and lot of any value and one motor vehicle per adult or working teen (if used for work or school). In addition, a household must have less than a prescribed amount of income, which varies with household size -- $960 per month for a one-person household, $1,961 per month for a four-person household, and so on at the time. Additional tests permit money to be excluded from income if used for medical care, shelter, and fuel in excess of a certain percent of one’s income.
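To make the mechanics concrete, here is a minimal sketch, in Python, of the two-part test described above. It uses only the figures quoted in this paragraph (the $2,000 asset limit and the income limits for one- and four-person households, as they stood when I researched this); the function name, the treatment of other household sizes, and the expense deduction are illustrative assumptions, not the actual regulations.

# A deliberately simplified sketch of the eligibility logic described above.
# The $2,000 asset limit and the two income limits are the figures quoted in
# the text; everything else (names, deduction handling, omitted household
# sizes) is an illustrative assumption, not the actual federal rules.

INCOME_LIMITS = {1: 960, 4: 1961}  # gross monthly income limits, by household size
ASSET_LIMIT = 2000                 # liquid resources; home and one vehicle per adult or working teen exempt


def eligible_for_food_stamps(household_size, monthly_income, liquid_assets,
                             excludable_expenses=0.0):
    """Return True if this hypothetical household passes both the asset and income tests."""
    if liquid_assets > ASSET_LIMIT:
        return False
    if household_size not in INCOME_LIMITS:
        # Other household sizes have their own limits; they are not modeled here.
        raise ValueError("household size not covered by this sketch")
    # Certain medical, shelter, and fuel costs above a threshold may be deducted from income.
    countable_income = max(monthly_income - excludable_expenses, 0.0)
    return countable_income <= INCOME_LIMITS[household_size]


# A four-person household with $1,900 in monthly income and $500 in the bank passes;
# the same household with $2,500 in the bank fails the asset test.
print(eligible_for_food_stamps(4, 1900, 500))    # True
print(eligible_for_food_stamps(4, 1900, 2500))   # False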
With allowances and deductions, eligibility for food stamps can be complex, and most means-tested programs have criteria that are similar to, or even more complex than, those for food stamps. And some programs, in order to prevent a family with $12,000 in income per year from receiving extensive benefits while a family with $12,001 receives none, also vary the level of benefit by income, phasing it out as income rises. The level of benefit, therefore, often has an additional formula separate from the determination of eligibility, and the combination of eligibility and benefit regulations can be as complicated as the tax code. And in general, one only receives means-tested benefits if one knows where and how to apply.
Means-tested public benefits for the poor are not popular. Indeed, one might argue that for the past 25 years politics has been driven by, and a political realignment has occurred due to, rising anger at the cost of these benefits and at their beneficiaries. There are several reasons for this hostility, and these reasons are in some ways mirror images of the reasons why age-restricted benefits are popular.
First, means-tested benefits are not universal. While everyone has been young and expects to be old, few believe that they may end up poor. Most Americans will go through their entire lives without ever receiving food stamps, Section 8 housing assistance, or “welfare” as it is traditionally known, so they are less willing to pay for those who do receive them. The hostility to the recipients of “welfare” is so great that a mid-1990s panel on Social Security reform failed to reach a consensus over the possibility of means-testing the program. One of the panel’s members, Richard Trumka, the president of the Mine Workers’ Union, said that means testing Social Security would turn it into a “welfare” program, not a retirement program, and would thus undermine its support. (As I have discussed elsewhere, one of the ways that age-restricted benefits such as Social Security and Medicare can be made less generous for future generations is, in fact, means testing, with higher and higher insurance premiums for the latter until the future middle class becomes poor by paying for almost all of its health care itself.)
Second, means-tested benefits are not universal even among the poor. In 1998 only 40 percent of those living in poverty received Medicaid, the health care program for the poor, while those in poverty only accounted for half of all Medicaid recipients. Often, though not always, means-tested benefits exclude the working poor, those doing the most to help themselves. That is because those who receive cash welfare benefits through programs like Temporary Assistance to Needy Families (TANF) or SSI automatically qualify for programs like Food Stamps, Medicaid, and Section 8 housing. Others must apply, and are seldom recruited. Thus, means-tested benefits are thought of as the enemy of self-reliance and the work ethic. The duty to work and earn one’s own living in lieu of receiving public benefits is, as well, a separate issue in itself. It is worth noting, however, that over the past 30 years it has become accepted that single parents must work rather than care for their children, while anyone over the age of 65, no matter how healthy, able, and free of family responsibilities, should not have to. The estate tax repeal, as well, may be considered an acceptance of idleness among some, in contrast with the hostility toward idleness by others.
Third, means-tested benefits are easier to scam. Like age, one’s income or wealth is specific and objective. Unlike age, however, it is difficult for someone other than the applicant to verify income or wealth with certainty. People believe, therefore, that many of those who receive means-tested benefits are not, in fact, eligible – that they have income or wealth that is not reported. On the other hand, because the factors that determine eligibility and benefit levels are so personal, the prevention of fraud requires extensive intrusion into people’s lives. For example, one way to become illegitimately eligible for welfare and food stamps is to pretend that a working, unmarried partner is not a member of the household. Prior to the “welfare rights” crusade in the 1960s, it was common for social service agencies to conduct middle-of-the-night bed checks, to see if a single mother in fact had a man present in the household, a man who ought to have been supporting her rather than shifting that burden to the government. After going out of style, such intrusion has come back into style during the anti-welfare crusade. Beginning in 1995, for example, the City of New York reinstated home visits and required applicants to be fingerprinted, among other measures. While these measures help to drive illegitimate applicants off the welfare rolls, they are humiliating for those legitimately in need. As a result, means-tested benefits are not popular with many of their beneficiaries, either.
Finally, the complicated formulae for means-tested benefits can be gamed legally by arranging one’s life in order to qualify for benefits, and having those benefits become a substitute for other means of improving that life. The oft-repeated quote, from former New York City welfare commissioner Blanche Bernstein, is that “there is no reason to believe that the poor are less adept at manipulating welfare than the rich are at manipulating the income tax system.” The possibility that people arrange their lives to qualify for means-tested benefits is the basis of an argument that such benefits actually make people worse off. In order to qualify, persons must forgo options -- such as marriage and gainful employment -- that would make them better off in the long run. With benefits phasing out, and taxes phasing in, the poor often lose up to 80 percent of any additional income they earn. During the anti-welfare crusade, therefore, some conservatives presented restricting eligibility for means-tested benefits as a way to improve the lives of the poor, rather than to give them the punishment they “deserve.”
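The “lose up to 80 percent” arithmetic comes from stacking benefit phase-outs on top of taxes. The sketch below, again in Python, uses assumed round-number phase-out rates purely to show how several individually modest rates combine into a very high effective marginal rate on each additional dollar earned; the actual rates vary by program, year, and household.

# Assumed, illustrative rates of loss per additional dollar earned; a real
# household faces whichever subset of these applies to it.
ASSUMED_RATES = {
    "payroll tax (employee share)": 0.0765,
    "food stamp benefit reduction": 0.30,
    "Section 8 rent contribution": 0.30,
    "EITC phase-out": 0.21,
}

combined_loss = sum(ASSUMED_RATES.values())
print("lost per extra dollar earned: {:.0%}".format(combined_loss))      # ~89% with all four stacked
print("kept per extra dollar earned: {:.0%}".format(1 - combined_loss))  # ~11%

A household facing only some of these reductions loses less per dollar, but the combined rate can easily approach the 80 percent figure cited above.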
Means-tested benefits present a public policy dilemma. The American people don’t want truly needy people to suffer deprivation. If they pay to provide assistance through a government program, however, at least some people who could otherwise make it on their own will either obtain benefits fraudulently, or will arrange their lives to take advantage of that assistance. This is, in fact, the first of a series of such dilemmas that will be identified. And in each case it will be shown that the political response to the dilemma has been hypocrisy – de-linking essentially similar programs and policies that present the same dilemma but benefit different types of people, and treating them differently. Rather than face the means-testing dilemma honestly, different attitudes and different policies are employed for those in essentially similar circumstances.
For example, the rules for Medicaid eligibility for the elderly follow essentially the same criteria, and present the same dilemma, as pre-1996 welfare. One only qualifies for Medicaid if one has both limited income and limited wealth, so most middle- and upper-income elderly people who require extensive nursing home or home health care are theoretically unable to qualify -- until all their own resources are exhausted. Yet millions of formerly middle- and upper-income people are receiving benefits today, at a cost that dwarfs that of traditional cash welfare in even its open-ended “welfare rights” heyday. And they are doing so shamelessly, with virtually no objection from anyone in politics, based on financial machinations designed to meet the eligibility criteria for the program.
Some of these machinations were described in a February 25, 2002 Wall Street Journal article on the subject (“Getting Poor on Purpose”). Early on, people simply transferred all their assets to their children the day before they entered a nursing home, so none of that money would be used to pay nursing home bills. The federal government responded by giving states permission to “look back” at such asset transfers for 36 months; lawyers and Medicaid “planners” interpreted this as making such asset transfers “legal” if they occur at least 36 months in advance. Since the penalty for illegal transfers is a period of ineligibility for Medicaid, however, many people now transfer half their assets to family members the day before entering a nursing home, using the other half to pay for this period of ineligibility. Others buy mansions to shelter assets, since one’s home is exempt from Medicaid wealth limits regardless of its value. Still others use all the money to purchase an annuity for the benefit of the healthy spouse, thus reducing the income of the sick spouse to a point where he or she qualifies for Medicaid. Or the sick spouse signs all the common assets over to the healthy spouse, who then refuses to pay for the sick spouse’s health care, making him or her eligible for Medicaid coverage. Or the couple gets divorced, with the divorce agreement transferring the bulk of the money to the healthy spouse. Very recent laws have cut back on some of these ruses, but once they start to bite an outcry is likely.
While it is widely known that many financially well-off people are arranging their lives to qualify for Medicaid, it is also possible that many are receiving Medicaid without bothering to make such arrangements. The intrusive investigations now used to discourage illegitimate applicants for welfare are not applied to applicants for Medicaid, especially middle-income elderly applicants. No fingerprinting. No surprise home visits to try to catch senior citizens who really could be getting by without those home health aides. No investigators poring over financial records to try to catch applicants in a lie. And virtually no one arrested and sent to prison.
College financial aid is another means-tested benefit that is commonly, and shamelessly, mined by the middle class and even the affluent. While much of this aid is privately, rather than publicly, funded, many scholarship programs follow the same federal criteria as Pell Grants – the Free Application for Federal Student Aid – with which they are coordinated. Moreover, even private scholarship funds are tax-subsidized through charitable deductions and the exemption of their investment return from taxes, on the assumption that scholarships benefit the less well off. As with Medicaid, there are lawyers, accountants, financial planners and how-to books to tell people how to qualify for college financial aid. They advise similar gambits – stashing assets in a bigger and fancier home or in trusts, “disowning” a child and refusing to pay for their college so only the child’s assets count, etc. As the cost of college continues to rise, the amount of “financial aid planning,” and the share of college students receiving some form of financial aid, keeps rising. Over time, financial aid has become available to those from wealthier and wealthier suburban towns. But recently, the share of low-income students going on to college has been going down.
Why this difference in tone between “welfare” for the poor and the other means-tested benefits for the non-poor?
Part of it is a legalistic middle- and upper-middle class assumption, or rationalization, that if it is legal, or can be interpreted as legal, it must not be wrong. The Medicaid Planning Handbook by Alexander A. Bove, Amazon’s best seller on the subject at the time I last looked, asserted that “taking advantage of what is allowed by law to qualify for Medicaid is no more immoral than arranging assets to legally qualify for income tax or estate tax savings.” Amazon’s top listed book on financial aid planning, also written by a “financial planner,” is College Financial Aid: How to Get Your Fair Share. The middle-class professionals who earn a living advising middle-class people on how to access public benefits for the poor are considered respectable -- far more respectable, in fact, than the “leftish” organizations that work to help poor people to access public benefits for the poor. One can imagine the response of conservative commentators to books entitled The Welfare Planning Handbook and Welfare: How to Get Your Fair Share. All these machinations put a different spin on Blanche Bernstein’s quote -- there is no reason to believe that the rich are less adept at manipulating public benefit programs for the poor than they are at manipulating the income tax system. They are adept, in part, because they can afford a cadre of lawyers, accountants and estate planners to help them.
Part of the difference is that Medicaid for the disabled old and student aid for the young are increasingly thought of as age-based benefits for everyone, even if they are, in a legal sense, means-tested benefits for the less well off. There is a general sense that the old and the young, unlike “able bodied adults,” are entitled to public assistance. When the need arises for themselves (but not necessarily when thinking about how much they have to pay in taxes for others), many middle- and upper-income people decide that college and nursing home or home health care should in fact be universal age-related benefits, not means-tested benefits. As the Medicaid Planning Handbook says, “it is not Medicaid planners who are immoral but the system itself…we have a system that will pay virtually all the bills of a multi-millionaire who has cancer (covered under Medicare, which everyone gets), while stripping a working, middle class elderly couple of virtually everything if one or both have to enter a nursing home.” If put to a vote, it is possible that most Americans would agree that the “spend down” of assets required to qualify for nursing home care should be eliminated, though they would perhaps be unwilling to pay higher taxes to ensure this. They may be willing to force future generations to pay those taxes.
This difference, however, is unconvincing. The original federal “welfare” program, Aid to Families With Dependent Children (AFDC), was also thought of as a program for children. It was intended to permit a poor single mother, then assumed to be a widow rather than the mother of an illegitimate child, to have enough income to stay at home and care for the children, rather than being overwhelmed by the combination of work and family obligations. Today, however, people are more likely to blame the parents for the circumstances of the children, and assert that welfare broke up families by providing an alternative to marriage. Yet far fewer conservative commentators have been willing to criticize benefits for non-poor seniors.
In addition, it is not as if Medicare refuses to cover the medical treatment of a condition like Alzheimer’s; it covers it just as it covers the medical treatment of cancer. It is the personal care of a person debilitated by disease, a burden not unlike caring for a very young child, that is not covered by Medicare. Feeding them. Bathing them. Changing them. Transporting them to and from the doctor. From a “conservative” point of view, Medicaid planning could be described as a way for one’s heirs to use public benefits to avoid caring for their own parents in old age, or spending their inheritance to care for them, just as “welfare” came to be described as a way for shiftless men to avoid financial responsibility for their own children. It could be blamed for destroying the extended family, just as “welfare” was blamed for destroying marriage. Except that conservative commentators do not describe it this way, and do not accuse middle-income suburbanites of not having family values, because saying so would be politically unpopular with people who matter.
Moreover, the “middle-class, elderly person” who may be forced to lose “virtually everything” if forced to enter a nursing home is probably not “working.” An actual working poor person, if too powerless in the labor market to obtain a job with health benefits, or if laid off from a job with health benefits, very well might lose “virtually everything” if he or a family member becomes ill, since those under age 65 are not eligible for Medicare. Health care crises are the leading cause of personal bankruptcies in the United States. There is little evidence that the American Association of Retired Persons (AARP) would be willing to support a universal health care program that would benefit the still-working people who provide for its members. Not if it would cost the retired anything in diminished benefits or higher taxes.
In reality, the difference in attitude toward different types of means-tested benefits has to do with the background and social characteristics of their assumed beneficiaries. It is only those poverty programs that actually benefit the poor that are unpopular. Perhaps the most telling moment of the anti-welfare crusade came in a 1998 political commercial for former New York Republican Senator Alphonse D’Amato. While many of New York’s (predominantly Democratic) politicians supported, and still continue to support, an open-ended right to receive public assistance with few obligations or contributions in return, D’Amato championed the 1996 welfare reform. Hard-working people from suburban areas like Long Island, his political base, were tired of paying for non-working urban welfare mothers. While running for office in 1998, however, D’Amato took credit for getting a special exception built into the law that allowed an elderly, suburban White homeowner, who had immigrated from Italy as a child and never naturalized, to receive home health care benefits under Medicaid, rather than rely on her children or sell her home to pay for care.
Few middle- and upper-middle income people expect to be poor, but many expect their parents to end up on Medicaid, or their children to qualify for financial aid in college. And while they may not think much of the minorities and immigrants who dominate the welfare rolls, they do think of themselves as having done the best they can, and as entitled to assistance in return. The real difference is not philosophy or circumstances, it is a difference between people like “us” and people like “them.” The “us” are more numerous and influential. Arranging one’s circumstances to qualify for Medicaid and college financial aid may be, in reality, similar to relying on welfare rather than a married partner or a job, but most Americans simply refuse to acknowledge the similarities. And no one is willing to force them to, even though for reasons of moral consistency both “liberals” and “conservatives” might want to.
Special Needs – Public Help for the Disabled
Like the old and the young, the disabled are generally thought of as inherently deserving of public assistance. Those who are blind, deaf or crippled, unlike those who are the parents of illegitimate children or addicted to alcohol or other drugs, are not despised for bringing their problems on themselves, even in Red States. Reasonably thoughtful people take a “there but for the grace of God go I” attitude toward the needs and conditions of those suffering a disability. Most people, if forced to think about it, are willing to pay taxes to assist those with the added burden of caring for a disabled spouse, child, or parent. In 1998, 4.3 percent of all public spending was on services and benefits specifically for those with special needs – for the disabled. Most of these have eligibility restricted based on means, as well as needs. A disabled individual with enough income to provide for their own care would not be eligible; a disabled individual living with a spouse or parents who are not poor also may not be eligible. There are few single disabled adults who are not reliant on public benefits, however, because those born with severe congenital disabilities such as mental retardation, mental illness, blindness or deafness are unlikely to earn much money during the course of their lives, and are likely to require extensive assistance, particularly health care. But deciding who is in need is not always straightforward, and this leads to below-the-radar conflict and non-decisions.
In reality, public spending on those with special needs accounts for more than the 4.3 percent that is readily identified. Virtually every general public service, from education to transportation, has special facilities or programs for the disabled, at additional cost. For example, in 2000 it cost the New York City Transit Authority $1.25 in operating expenses to provide a subway ride, and $1.61 to provide a bus ride. It cost $27.24 to provide a demand-response ride for the disabled. It cost the New York City Board of Education $8,944 to educate each general education student, compared with $18,400 for each special education student and $42,600 for each severely disabled special education student. Services and benefits with eligibility based on disability, like services and benefits with eligibility based on age, are scattered across virtually the entire range of government activity.
Thus far, despite rising costs, the disabled as people have yet to face a backlash and hostility like that experienced by poor single parents. There is, however, a backlash brewing against the rising cost of providing services to the disabled, based on the suspicion that many of those receiving benefits are not actually entitled to them. There is plenty to suspect. The share of the population that qualifies as disabled varies greatly from place to place, and has been rising everywhere as benefits directed to those with special needs have become more generous. Everyone sees people who appear quite spry parking in the handicapped spaces at the local supermarket. It is easy to imagine such people claiming various public benefits as well.
Special needs eligibility criteria are, if anything, easier to game and more difficult to administer than means-testing criteria. Income and wealth may be easy to hide, means-testing formulas may be complicated, and the interaction between various means-tested benefits may be complex, but at least money is a specific quantitative measure. Disability is often less objective, particularly on the margin, and it is the marginal categories that show the greatest growth. Everyone can see that a person has no legs or arms. Blindness is difficult to feign. But there is a longstanding and increasing dispute about the number of people eligible for cash benefits based on mental impairment, special education benefits based on learning disabilities, and worker compensation based on back injuries. These are conditions that no one doubts exist, but that cannot be objectively observed in all but the most severe cases.
As a result, public programs are often accused, often with good reason, of denying services to those truly in need, of allowing widespread fraud and manipulation, or of doing both at the same time. Both service and benefit recipients, and program administrators and caseworkers, are frequently whipsawed between efforts to deny benefits to all but the most clearly disabled to save money, and efforts to provide services to all but the most clear-cut frauds to ensure that no one in need is denied. Given the subjective nature of services based on need, this is almost inevitable.
The “mental impairment” category, for example, is the most controversial disability used to qualify people for SSI, the income support program for the disabled. Many conservatives have claimed that while those who are “mentally impaired” may be unemployable, this is the result of bad personal choices – alcohol, drugs, a chaotic upbringing – not a health condition. In the early 1980s, the Reagan Administration threw more than half a million people off SSI, many of whom suffered severe deprivation and were found to be in need. After the 1996 welfare reform, in contrast, the number of SSI recipients soared, as cities and states responded to the financial incentives of block grants to find that thousands of former welfare recipients were, in fact, disabled.
The same quandary, the difficulty of administering subjective eligibility criteria based on disability, can be observed in the debate over “special education” for disabled children. Prior to 1975 most students with severe disabilities were excluded from public school. Since then, based on a series of federal laws, a rising share of children has been mandated to receive special education at public expense, including not only those who would have been excluded but also those who would have been placed in regular classrooms.
Many of the latter children had failed to learn in the past as a result of hearing impediments, speech impediments, dyslexia and similar conditions – those who nonetheless did learn have become effective advocates for future children with their disabilities. Since the early 1980s (by which time most severely disabled children were provided with services), children with “learning disabilities” and “behavioral problems” have accounted for much of the growth in special education. Learning disabilities, speech impediments, and emotional disturbance now account for around four out of five of those in special education, according to the U.S. Department of Education. Severe conditions like deafness, blindness, “orthopedic impairment,” autism and mental retardation, conditions that are easier to identify objectively, now account for a small share of special education students.
Conservative critics claim that poor cities like New York City spend too much on special education, and continually call for crackdowns and “reform.” Cultural bias, they claim, may play a role in subjective determinations of disability, since minorities account for a disproportionate share of special education students. Nationally, they account for a disproportionate share of those diagnosed as mentally retarded; in New York City, a disproportionate share of those considered emotionally disturbed.
Even conservative critics, however, acknowledge that the share of children in special education is actually higher in affluent suburbs than in financially strapped cities. The share of New York City’s public school children in special education, while high compared with other financially-strapped cities, is not high compared with the national average. It is, and has been, well below the average for New York State, despite the fact that New York City accounts for a very high share of the state’s poor and immigrant children. If poverty leads to special education placements due to a “chaotic upbringing,” inadequate nutrition and health care, or “cultural biases,” one would have expected the share of the city’s children in special education to be far above average, not below.
Worker compensation is another battleground over subjective determinations of need. Under worker compensation laws, employees have lost the right to sue their employers for injuries on the job. The debate over “excessive” lawsuits and jury awards does not apply to them. Instead, state worker compensation boards determine whether or not injuries are disabling, with compensation according to a fixed schedule relative to wages. In some states, employers are required to purchase private insurance to pay worker compensation claims; in other states, they are required to pay into state-run funds. Payments by the state-run funds alone accounted for 0.4 percent of all public spending in 1998, almost as much as unemployment insurance payments (0.6 percent).
The cost of worker compensation claims is paid by business, either through higher insurance premiums, higher state worker compensation taxes, or lower profits on worker compensation policies. With the program organized at the state level, businesses now claim that comparative worker compensation costs, benefits, and interpretations of eligibility for assistance are important criteria for the creation and retention of jobs. States are rated as to their effectiveness in keeping these costs down. Not surprisingly, the conflict is over whether to award benefits in marginal cases that are difficult to document objectively, such as back injuries. Their subjective nature makes it more likely that those who cannot work will be denied benefits by insurance companies seeking to improve their profits, or public agencies seeking to respond to the concerns of job-providing businesses.
That subjectivity also makes it more likely that less-than-honest citizens will use worker compensation as an alternative source of income. Indeed, worker compensation claims rise in recessions, when other sources of income are not available. One reason for this is that employers may be willing to hire workers with moderate health or mental impairments when labor is scarce, but unwilling to do so when unemployment is high and healthy, young, and probably more productive workers are available for low wages. On the margin, therefore, “disability” is in part an economic condition, subject to supply and demand in the labor market.
Services and benefits with eligibility based on special needs (disability) present, once again, a public policy dilemma. If public agencies are too liberal in their evaluation of those who file claims, they risk turning those programs and benefits into expensive gold mines for the undeserving. If they are too suspicious, they risk denying services and benefits to the truly needy, thus causing severe deprivation. And once again, as in the case of means-testing, the response is hypocrisy. “Conservatives” may be increasingly outraged by the number of adults seeking income in lieu of work under SSI based on claims of “mental impairment,” worker compensation based on claims of back injury, and children receiving special education services based on claims of learning disability or behavioral problems. Yet there has been virtually no objection to the cost of assistance to the disabled elderly. And this cost is far larger, and is set to explode.
I once received a copy of a report by a disabled rights advocacy group, which included a percentage of the population with a disability. I found the group’s percentage to be absurdly high – around 20 percent. In every prior case in which I had received a report from any advocacy group (on any cause and with any political point of view), and in which there was reasonably objective data available against which to check its claims, I had found the information produced by advocates in such reports to be misleading at best, and simply made up at worst. But in this case, when I checked U.S. Census Bureau data – the Bureau’s surveys often ask people if they have disabilities – I found the advocacy group had in fact used the Bureau’s numbers in its report. I had been thinking only of the share of people who are disabled from birth, or who become disabled due to illness or injury in youth or middle age. I hadn’t accounted for the disability that awaits many, if not most of us, in old age.
According to the Census Bureau 12.0 percent of persons age 22 to 44 had a disability, and 1.7 percent required personal assistance with one or more “activities of daily living” (ADL) – getting in and out of a bed or chair, taking a bath or shower, dressing, eating, using the toilet – or “instrumental activities of daily living” (IADL) -- going outside the home, keeping track of money and bills, preparing meals, doing light housework, and using the telephone. On the other hand, 46.0 percent of persons age 65 to 79, and 73.0 percent of persons age 80 and over, reported a disability, and 11.0 percent of persons age 65 to 79, and 33.0 percent of persons age 80 and over, reported needing personal assistance with an ADL or IADL. The provision of such personal assistance can be far more costly than providing a modest living to those with a work disability, or additional school resources to those with a learning disability. And the number of persons age 80 and over, and their share of the population, will increase rapidly over the next 30 years. The only question is how many of these people will be determined to qualify for assistance based on their disability, and who will be required to pay to provide that assistance.
The care of the disabled, frail elderly is a combination of three issues. The first is the proper border between the obligations of family and the obligations of society, and as such it is the subject of a separate discussion in and of itself. In 2000, only 11.6 percent of all elderly people with disabilities had care provided by paid help, rather than family members or friends, although most people with advancing disabilities eventually require institutional care at some point before death. The financial implications of the difference between a long period of family care followed by a short stay in a nursing home or hospice, compared with a long period of government provided services at home followed by a stay in a nursing home, are enormous, both for the family and the community.
The second issue is whether the elderly and their children, or the community, should be required to pay for any required paid help. Most personal and health care for the disabled elderly is paid for by Medicaid, a means-tested benefit, and as discussed earlier Medicaid is subject to “planning” by those savvy about accessing public benefits.
The third issue, the one relevant to this discussion, is the difficulty of determining disability on the margin. Obviously, disability is a continuum, with an elderly person at first finding it inconvenient, then tiring and burdensome, then difficult, then almost impossible, and then completely impossible to do some or all activities of daily living without help. The more determined hold out to the end, the less determined seek care almost at the beginning. Yet either someone else is paid to cook, clean, handle finances, feed, bathe, and transport such an elderly person, or no one is.
To know if such assistance were truly “medically necessary,” one would have to be present inside a person’s home, observing him or her in the conduct of various activities, preferably surreptitiously to avoid having a show put on. In fact, one would also have to be present inside an applicant’s brain, measuring the way a person felt on the inside while doing those activities. As with the back pain of persons seeking worker compensation, the mental impairment of persons seeking SSI, and the learning disability or behavioral problems of children seeking special education, it is therefore nearly impossible for public officials to apply the same objective standard to everyone. Not surprisingly, the share of elderly people receiving assistance under Medicaid varies enormously from state to state.
In 1997, at the height of the anti-welfare crusade, New York State accounted for 8.4 percent of the nation’s poor people and 7.5 percent of its “frail elderly” (age 85 and over), a significant excess burden compared with its 6.7 percent of the nation’s population. Not surprisingly, New York accounted for 9.0 percent of total Medicaid recipients, and 8.3 percent of Medicaid recipients in nursing homes. A nursing home is not a place one would choose to go unless one is truly in need, so it is not surprising that New York’s share of the nation’s Medicaid nursing home population was only slightly higher than one would expect, based on its share of the nation’s very old people. New York State, however, accounted for 23.0 percent of persons receiving Medicaid-financed home health care, a service intended to keep the elderly out of nursing homes. It accounted for 33.0 percent of Medicaid home health care spending, and nearly half of all Medicaid financed “personal care” spending.
Here in New York State, politicians, pundits and the media always describe the elderly as deserving more and better services. Much of the high Medicaid spending on the disabled elderly takes place in New York City, a place where spending on elementary and secondary education has been below the national average, as a share of the income of city residents, year after year. In 1990, 70 percent of New Yorkers age 65 or over were non-Hispanic Whites, while 70 percent of New Yorkers age 18 or under were not.
Among those White elderly persons, for many years before she died, was my wife’s grandmother, who lived in a senior citizen’s development here in Brooklyn. She volunteered to deliver meals on wheels to the homebound elderly in her building. That benefit is not means tested, but is simply provided to elderly people based on an assertion of incapacity provided by social service agencies that receive funds to prepare and deliver the meals. My wife’s grandmother was outraged to find herself delivering meals to people whom she knew to have fewer infirmities, and less difficulty preparing meals, than she did. She was also upset at reading about service reductions for children (this was during the budget crisis of the early 1990s recession), given what she knew about her neighbors.
Why did she struggle on, almost to the last, on her own while others felt entitled to assistance? Most of the reason was a difference in moral values; she was a very committed Catholic who lived what she believed. But I think that part of it was the nearby presence of her grandchildren and great grandchildren. The children, grandchildren, and great grandchildren of her neighbors lived in the suburbs, or in the Sunbelt, as a result of the vast middle-class exodus from older cities like Brooklyn in the 1950s and 1960s. They had left their parents behind to be cared for by the City of New York, at others’ expense, as they sought lower taxes and better schools elsewhere. My wife’s grandmother, on the other hand, thought of her own great granddaughters, toddlers who came to visit her every month, when the Daily News reported that the city was taking sand out of the sand boxes to save money. The moral and practical problems, given mass mobility, of charging part of the cost of caring for the old and disabled to those who live near them are obvious.
The point is that disability can be as subjective for the elderly as it is for others. Government agencies either “roll over” and hand out benefits to anyone with half a case, as New York does for the elderly, or limit benefits to those they can’t get around paying, providing assistance only to those with the savvy, resources, information and sense of entitlement to work the system.
With all this controversy or (in the case of the elderly) potential controversy about people either illegitimately receiving public benefits or cruelly being denied public benefits in marginal cases, can we at least be assured that those clearly and indisputably in need are being taken care of?
Not necessarily.
Those with the clearest needs are also those with the greatest expenses. Caught between the political demand for lower taxes, and the political demand for broader eligibility, one political response is to reduce the public benefits for the worst off and most vulnerable, particularly those who are incapacitated and do not have savvy family members or advocates fighting for every dollar of assistance. The result, here in New York State and perhaps elsewhere, has been a series of scandals. In the late 1960s and early 1970s, a time when the elderly were less politically powerful than today, there were nursing home scandals. Old people were found to be suffering in conditions of severe neglect in profitable, publicly-funded nursing homes. In the early 1970s there was the Willowbrook scandal. A home and school for the most severely retarded children in New York State was found to be overcrowded, and to have children chained to their beds and lying in their own excrement. And in 2002 there was a new scandal, after the New York Times investigated 26 large nursing homes caring for severely mentally ill adults with Medicaid funding. These were patients who, in prior decades would have been warehoused in large state hospitals, hospitals that were themselves the subject of scandals in prior years. Of 5,000 residents of those homes, the Times found, nearly 1,000 had died in just six years, many at ages younger than 50, some in their 20s.
Age, Means and Needs: Fair in Theory, Hard to Implement In Practice
When one examines public policies to ensure the care of the poor and those with special needs, and goes beyond the rhetoric about “racism,” “welfare queens,” “malingerers,” “big corporations,” insurance companies, “big government,” “compassion” and “rights,” one finds difficult choices and high stakes. The choices are often made by low level, underpaid public officials who cannot possibly have the information required to make them fairly and correctly, and who are at any moment subject to political blame for injustice and suffering, for waste, fraud and taxes. People don’t like to be forced to confront such difficult choices. And you don’t attract votes, viewers or readers by forcing them to.
Age, means and need are three answers to the question “why is it fair,” as in “why is it fair that I have to work and pay taxes to pay for a service or benefit that I, myself, cannot receive?” And age, means and need are answers that most people find legitimate in theory: because the service or benefit recipient is a child, is old, is poor, is unemployed, is disabled, or is sick.
After years of observing the inequities in government policy and programs, I have concluded that public services and benefits should be limited to those that the government is willing to provide to everyone equally, or at least everyone in equal circumstances. As we have seen, determining a person’s circumstances, and whether those circumstances result from unavoidable outside factors or (bad) personal choices and values, and thus determining who is actually eligible for means tested benefits, is difficult. Determining a person’s circumstances, and thus their eligibility for benefits based on disability, is often impossible. While fair and reasonable in theory, therefore, public services and benefits with eligibility based on means and needs are frequently inequitable in practice.
Inevitably, in fact, simple and presumably fair criteria such as age, means and need, which have issues of their own, end up bastardized. The next several posts will describe the ways in which this happens.
“Hope No One Shows Up”
The constant liberalization and restriction of eligibility, and increase and decrease in personal scrutiny, under means- and need-restricted programs is the off-the-books, under-the-radar outcome of an ongoing struggle between “liberals,” who want many people eligible for expensive services (often because they earn a living providing those services), and “conservatives,” who do not require services and do not want to pay taxes to help those who do require them. In economic expansions, when more services may be offered without a tax increase, advocates often demand, and receive, more extensive services for more people. In economic downturns, when falling tax revenues lead to budget crises, there is generally a sudden finding of “waste, fraud and abuse.”
In all times, however, politicians often compromise by gross hypocrisy. While focused on the needs of a particular group (supported by organized advocates), they prove their compassion by passing laws that, in theory, make many people eligible for extensive services and benefits. But during the budget process, when different needs and priorities are in competition, resources are scarce, and tradeoffs must be made, they save money by not providing enough funding to give everyone theoretically eligible all the benefits they are theoretically entitled to. Having claimed credit for making a benefit available, both Republicans and Democrats “hope no one shows up” to claim it.
The most reasonable method of rationing under-funded benefits is a queue or waiting list, and this is the method generally used to ration housing assistance -- public housing vacancies and “Section 8” housing vouchers. According to the U.S. Census Bureau, there were an estimated 33 million people living in poverty in 1999; in 1998, 21 million people received food stamps, but only 1.3 million people resided in public housing and only 3.0 million received other forms of housing assistance, such as Section 8. Yet housing assistance and food stamps have similar eligibility criteria. In fact, since housing assistance has a “cost of living adjustment” that makes those with higher incomes eligible in metropolitan areas with higher average wages, more people are probably eligible for it, at least in theory, than are eligible for food stamps.
The queues for subsidized “affordable housing” are long. According to a 1998 report by the Department of Housing and Urban Development (HUD), “Waiting In Vain: An Update On America’s Housing Crisis,” a family’s average time on a waiting list at the largest public housing authorities in 1999 was 33 months. It was even longer in some large cities: eight years in New York City, six years in Oakland, and five years in Washington and Cleveland. The average waiting period for a Section 8 rental assistance voucher was 28 months in 1998. The waiting period for the vouchers, which are used to help families in need rent privately-owned apartments, was ten years in Los Angeles and Newark, eight years in New York City, seven years in Houston, and five years in Memphis and Chicago. Almost one million families were on the waiting lists, nearly 25 percent of the 4.3 million households assisted by HUD. Yet it is unlikely either that funding will be expanded to provide for those on the waiting lists, or that eligibility will be restricted to the worst off to shorten them. In fact, federal housing spending is going down.
Perhaps more telling, in 1998 there were 8.5 million individuals and 7.2 million families living in poverty – a total of nearly 16 million poor households. Subtracting the 4.3 million households assisted by HUD and the one million on the waiting lists, therefore, at least 10.4 million poor households that might qualify for such assistance were not even in the queue. Both public housing and Section 8 vouchers limit what one must pay for housing to 30 percent of one’s income, regardless of how low that income is. In 1999, according to Census Bureau data in the Statistical Abstract of the United States, there were 28.3 million households that paid more than 30 percent of their income for housing, nearly seven times the number of households assisted by HUD.
These facts illuminate a general point. For every beneficiary of a less-than-universal public program, there is an eligible person who wouldn’t think of going to the government for help. The more obscure the benefit, the fewer people know enough to apply. Just about everyone knows to send their children to public kindergarten, or to start collecting Social Security at age 65. Not many people know where to go to apply and get on a waiting list for housing assistance. Businesses seeking customers advertise. Government agencies seeking to avoid customers limit access to information. In many cases, the share of potentially eligible people who receive benefits could be substantially changed simply by informing the eligible through outreach, or by not doing so.
Certainly that has been the case in the City of New York. In the early days of the administration of former Mayor David Dinkins, the incoming social service administrators attempted to reduce the city’s homeless population by offering homeless people subsidized housing. As a result, however, the fact that (as a result of court decisions) anyone determined to be “homeless” could claim publicly-supported housing in New York became widely known. Thousands of people whose living arrangements were less than ideal – because of poor quality housing, living doubled up with friends and neighbors, or high rents – moved out and onto the street to claim the benefit. This forced the Dinkins Administration, which was facing a budget crisis, to take measures to sharply restrict eligibility.
From 2000 to 2003, outreach once again became an issue in New York. The “HealthStat” program, promoted by the Giuliani and Bloomberg administrations, sought to enroll all those eligible for Medicaid in the program, despite soaring costs. By February 2003, however, one in four New York City families was on Medicaid. With the City’s share of the Medicaid bill surpassing $4 billion, conservative commentators began objecting to “come and get it” Medicaid for the working poor, just as they had previously objected to “come and get it” welfare. Such objections go beyond the “anti-welfare crusade” objections to people receiving benefits they were not entitled to. These are objections to people receiving benefits they are, in fact, entitled to, but that no one expected them to actually receive.
The opposite of outreach is disinformation and stonewalling – lost paperwork, applications at inaccessible locations open during limited hours, long lines, extensive delays. Throughout the nation, advocates for the poor and disabled routinely file, and win, lawsuits for wrongful denial of assistance. For the most part, these lawsuits merely attempt to obtain for an individual, or group of individuals, benefits that are, in fact, promised under law but are not sufficiently covered by budgeted public funds.
Access to information may be made uniformly available through outreach, or uniformly limited through stonewalling. What is worse, however, is the tendency for information, and thus benefits, to be limited to certain well-connected groups.
Like employment opportunities, the availability of public benefits, especially less-than-universal public benefits, often becomes known through informal, person-to-person networks. Sometimes this is an accident of social policy. According to the 1999 New York City Housing and Vacancy Survey, 50 percent of those living in New York’s public housing units were on public assistance. Many people in housing projects can explain welfare, SSI, housing assistance vouchers and Medicaid to their neighbors; few can provide a tip as to where to go and how to get a job. Access to New York City’s public housing itself has become rationed based on networks, with many future beneficiaries the children of past beneficiaries. Immigrants from other countries account for nearly 40 percent of the city’s renters, but only 20 percent of its public housing occupants. In fact, those receiving cash public assistance are generally eligible for a host of public benefits from food stamps to Medicaid to housing assistance, and are automatically enrolled in those benefits when they apply for welfare. The working poor, who never meet a caseworker, often do not receive benefits they are entitled to.
Sometimes differential access to information is intentional. In addition to federal housing programs, New York City and State have their own housing programs. In one of them, the “Mitchell-Lama” program, new apartment buildings received both a 30-year exemption from property taxes and an interest rate subsidy, and their middle-income occupants received below-market rents. Obviously, this is an extremely valuable benefit in a high housing cost city, and one that was made available to those who are not poor. These units were distributed based on waiting lists, established on a first-come, first-served basis. It is generally believed, however, that members of certain unions, political clubs and associations knew exactly when and where to go to get on the list the day it opened; these subsidized housing units ended up being occupied primarily by the well connected. In fact, a recent article noted that the judge who oversees New York City’s aid for the homeless resides in such a unit. She and many other judges, she noted, moved in when they were younger and had qualifying “middle” incomes.
Wherever there is a queue for something valuable, there are going to be queue jumpers, and subsidized housing is no exception. There has been a 20-year political battle over who should get priority in the allocation of the nation’s public housing. At the start of the public housing era in the 1940s and 1950s, new, modern public housing units, with up-to-date bathrooms and kitchens, were a step up from the dilapidated, obsolete housing where many poor people lived. As such, their residents were required to meet certain standards, including employment rather than dependence on public assistance. In the early 1980s, the federal government began to see public housing – concentrated in older sections of older cities – as a convenient place to put the nation’s most troubled residents – addicts, paroled criminals, the mentally ill, the socially unemployable – and mandated that such people be given priority in public housing. After a while, few working people were left in the nation’s housing projects, some of which were abandoned and torn down, but in New York City, where public housing remains better than many of the alternatives, the battle over who gets the units continues. An early 1990s decision to give working poor people priority in the allocation of at least some of the city’s units was overturned by a lawsuit, or so I read.
The under-the-table battle over the official federal definition of a “metropolitan area” is primarily a battle over the allocation of housing assistance and similar funding. According to federal rules, recipients of Section 8 housing vouchers may use them anywhere within a “metropolitan area,” and therefore may use them to move from older cities to newer, growing suburbs. In addition, to be eligible for the vouchers one must be below the metropolitan area’s median income. Each decade, in association with the decennial census, the federal government reviews the official criteria for determining metropolitan boundaries, and each decade it proposes criteria that would draw metropolitan boundaries corresponding to what most people think they are. The result is a wave of protest by suburban counties and the Congressmen who represent them, followed by the alteration of the criteria to make these counties part of “metropolitan areas” that are separate from older central cities.
This has two effects. First, central city residents are no longer able to use housing vouchers to move from the city to the suburbs. And second, middle-income people in affluent suburban counties suddenly become eligible for subsidized housing, because the median income in the “metropolitan areas” that include those counties is high, while working-class people in poorer urban counties become ineligible, because the median income in the “metropolitan areas” that include those counties is low. Obviously, most middle-income suburban families do not know that they are eligible for housing subsidies, but politically connected suburban families do. But in the end, these machinations are not even needed. By act of Congress, for purposes of eligibility for housing benefits the “New York Metropolitan Area” is limited to New York City and Putnam County, New York, a semi-rural county with virtually no rental units located 40 miles from the city’s border.
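For readers who want the mechanism spelled out, here is a minimal sketch, with entirely hypothetical incomes and medians, of the simplified rule described above (eligibility requires an income below the area median; actual HUD rules use fractions of the area median income and other criteria):

```python
# Hypothetical illustration only: redrawing the "metropolitan area" boundary
# changes who qualifies, even though no household's income changes.

def eligible(household_income: float, area_median_income: float) -> bool:
    """Simplified rule from the text: eligible if income is below the area median."""
    return household_income < area_median_income

suburban_family = 70_000  # hypothetical middle-income suburban household
urban_family = 40_000     # hypothetical working-class urban household

# One combined metropolitan area: city and suburbs share a single median.
combined_median = 55_000
print(eligible(suburban_family, combined_median))  # False
print(eligible(urban_family, combined_median))     # True

# Redrawn boundaries: affluent suburban counties form their own high-median
# "metropolitan area," poorer urban counties a low-median one.
suburban_median = 85_000
urban_median = 35_000
print(eligible(suburban_family, suburban_median))  # True  (now eligible)
print(eligible(urban_family, urban_median))        # False (no longer eligible)
```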
Housing assistance is an example of a public benefit that very few people expect from the government. Public education, on the other hand, is a benefit that most people expect the government to provide their children. Here in New York City, however, that expectation has not been met. In a city that regularly produces winners of the Westinghouse and Intel science prizes, a substantial share of the children attend schools designated by the State of New York to be failing, and a majority attend schools recently identified by the Mayor of New York as sub-standard. In reality, here in New York child-care for the school-aged is a universal benefit, but education is not.
Like housing assistance, education in New York City has been rationed by queue, with applications, waiting lists, and frantic parents trying to get their children into the limited number of elementary, middle and high schools similar in quality to most schools elsewhere in the United States. And as with housing assistance, there is official and unofficial queue jumping, with ideological battles over quotas and diversity versus how well four year olds test as criteria for admission. There is also individual maneuvering via phony addresses, political connections, and other ruses. One cannot blame educated, motivated parents for doing all they can to assure an education for their children. One can be embarrassed by a system that for 30 years bought off such parents with a series of deals while not providing for the majority, until enrollment rose, budgets were cut, and even the aggressive minority was unable to secure a decent education for its children, and had reason to demand change.
Given the wide use of the “hope no one shows up” strategy, one sees why expanding publicly-funded health care through more and more separate programs, each with its own application procedures and requirements, is cheaper than having one simple program that applies to everyone. It is because less savvy people will be left out. That’s what those advocating this approach are counting on. Any government program that is not funded at a level sufficient for every potentially eligible person to actually benefit is, in that sense, an intentionally inequitable fraud.
Setting The Standard
The non-decision to implement a “hope no one shows up” non-policy, discussed in my prior post, is directly connected to the issue of standards, and to the cost of benefits per recipient. For every needs- and means-restricted benefit, there is a tradeoff between the number of beneficiaries that can be served and the amount that can be spent on each. This tradeoff is being made, generally without a direct acknowledgement that it is being made, in almost every major category of public service, from education to health care. And liberals and Democrats, in pushing to enact expensive benefits, often end up agreeing to serve some people but not others – others who are just as much in need as, or more in need than, those who receive the benefits. As the United States, with the most expensive health care in the world but also the most uninsured people in the developed world, finally acknowledges how bad its health care finance system is, the issue of standards will clearly move to the forefront. Clearly the U.S. can afford basic health care, even good health care, for everyone just by using what the government is already spending. But that would mean some beneficiaries would have to give up extravagant health care, and it is those with extravagant health benefits, not the uninsured, who have political power.
Housing assistance is, yet again, a good example. If the overall budget for federal housing assistance isn’t going to be increased, there are other ways for the queue of eligible non-recipients to be cleared. Given that many poor people pay 40 or even 50 percent of their income in rent, the share that Section 8 recipients are required to pay could be increased from 30 percent to 35 percent, or higher. That would ensure that only those forced to pay an even greater share – those worst off – would find the vouchers attractive, and remove others from the queue. But it would also require a reduction in aspirations for recipients’ standard of living. Liberals have been unwilling to give up the aspiration that some day, sufficient funding will become available so that no one is required to pay more than 30 percent of their income for housing. So to preserve that aspiration, some are allowed to pay less in rent with assistance, while others are left to pay far more in rent without any help at all.
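To make the tradeoff concrete, here is a minimal sketch of the rationing arithmetic, with hypothetical figures: the tenant pays a fixed share of income, the voucher covers the rest, and raising that share shrinks both the subsidy and the number of households for whom the voucher is worth claiming.

```python
# Hypothetical numbers throughout. The tenant pays a fixed share of income
# toward rent; the voucher covers the remainder (never less than zero).

def voucher_subsidy(income: float, rent: float, tenant_share: float) -> float:
    """Annual subsidy = rent minus the tenant's required contribution."""
    return max(rent - tenant_share * income, 0.0)

income = 15_000  # hypothetical annual income
rent = 6_000     # hypothetical annual rent (40 percent of income)

for share in (0.30, 0.35):
    print(f"tenant share {share:.0%}: subsidy ${voucher_subsidy(income, rent, share):,.0f} per year")
# tenant share 30%: subsidy $1,500 per year
# tenant share 35%: subsidy $750 per year

# A household already paying only 32 percent of its income in rent would get
# nothing from a 35-percent voucher and would drop out of the queue.
print(voucher_subsidy(income, 0.32 * income, 0.35))  # 0.0
```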
Most housing programs require that each recipient have their own housing unit, in good condition, with its own kitchen and bathroom. Many poor people, however, are forced to live doubled-up with friends and relatives, often in buildings in poor condition. In the New York area, that isn’t true of the poor alone. My parents were forced to live in their parents’ house, which was subdivided for this purpose, until I was ten, and when we were able to move out my aunt and uncle, newly married, moved in. A generation later, when I moved to my first apartment in the Bronx, I had to share it with three other friends. At least I had my own room – some college-educated people living in Manhattan have to share even that. And the city’s immigrants often subdivide a four-room apartment into four apartments, with one room per family and the kitchen and bathroom shared.
To clear the queue, instead of imposing a minimum standard for space per person, those receiving housing assistance could be forced to live with a maximum space per person, limiting the demand for assistance to the worst off – those who couldn’t do better on their own. But that would mean that subsidized housing programs would permit, even encourage, the worst off to live in neighborhoods they could otherwise not afford by accepting smaller spaces. Residents of such neighborhoods would likely object.
Of course, one could just eliminate housing assistance altogether, and add on an equal cash grant per person to the welfare payment or earned income tax credit in its place. Then all the poor would receive equal help, but none would be as well off as some believe they should be, and the dream of lifting all to the higher standards would be abandoned.
The conflict between housing benefit standards and housing benefit availability reaches the point of absurdity in the case of New York City’s homeless. An early 1980s consent decree, signed by the city after a lawsuit by activists, set forth in great detail the housing the City must provide to anyone classified as “homeless.” These standards required the City to provide better housing, at lower cost as a share of income (or for nothing), than a substantial share of the city’s poor people – even its working poor – actually had. The decree also permitted the homeless to turn down housing units that met these standards, often several times, because they didn’t like the unit or neighborhood offered, all while residing in expensive transitional housing at city expense. Many homeless people turned down apartments because they were in neighborhoods that were too poor, with too many drug addicts – until one became available, at higher public cost, in a better neighborhood. And often, since the City was offering high rents in order to move families out of the expensive homeless system, landlords evicted poor families that did not receive housing assistance in order to rent to a “homeless” family that did. In short, in their zeal to get their version of justice for the poor, the city’s housing advocates created a system of gross injustice among the poor. And while that consent decree has since been revoked, the problems it dealt with remain.
Housing assistance is concentrated in New York and other older cities, so the absurd results of housing non-policies do not affect most Americans. But the absurd results of health care non-policies do. Over the past 15 years, some states have made attempts to alter the Medicaid program to provide less-extensive, lower-quality health care to more people. The most notable examples are TennCare in Tennessee and the attempt to provide universal health care in Oregon. In each case, the state attempted to limit the type of medical care available while expanding coverage, and in each case the attempt fell short of universal coverage due to funding shortages and political demands that resources be diverted to provide better, more expensive treatment for those already in the system.
The same battle is gearing up around nursing home care, with advocates demanding a better quality of care to provide a better quality of life for the aged. Rising standards, however, will inevitably lead to fiscal stress, and attempts to limit the number of elderly people found to be disabled enough to qualify for care. All else equal, therefore, higher standards both increase the demand for benefits and limit the supply. And all else is not equal. Rising federal, state and local debts and pension costs, and an aging population, ensure that there will be less money around to settle the quality-availability tradeoff in the future.
Thin Edge of the Wedge
Thus far, my equity and eligibility posts have reviewed public policies that direct, attempt to direct, or pretend to direct public benefits to the less well off, or to those most in need. Yet government eligibility rules have often directed public benefits, services and protections to the better off, not the worst off, with not only “conservatives” and Republicans but also “egalitarian” Democrats in support. And the policy of providing benefits to the better off hasn’t been limited to obscure, limited-cost programs; it has been characteristic of the history of the most extensive and expensive public programs in the country.
Over the past century and a half, many now universal or near-universal benefits and protections, such as public education, Social Security, unemployment insurance, and the minimum wage, were offered first to the organized and influential, then to the population at large, and finally to the least well off – the poor, the sick, the disabled, and minorities. The better off, in short, were the “thin edge of the wedge,” those with the political clout to get a public benefit created, who later felt a duty to offer the same benefit to others. More recently, however, the government, and the tax burden, reached some kind of maximum or equilibrium, and the march toward universal benefits stopped, leaving inequities in its wake. No one feels much duty to others anymore, even if they are receiving benefits themselves.
Take public education, for example. Universal public elementary and secondary school education developed in fits and starts over the centuries, a process which is now being repeated in developing countries. According to Gotham, the Pulitzer Prize winning history of New York City, in 1840 public schools were only opened in middle-income neighborhoods, not in neighborhoods populated by the immigrant poor. As late as 1940 two thirds of adult Americans had not attended high school, according to data in the Census Bureau’s Historical Statistics of the United States. At that time only 42 percent of all five and six year olds were in school. Eventually, however, public education through 8th grade, and then through high school, and then beginning with kindergarten became a social expectation for almost all children in the United States. The better off in better off communities, however, had received it first.
Social Security underwent a similar evolution, as a quick read through the Social Security Administration’s history webpage shows. Today Social Security is thought to be universal, with the Social Security number the equivalent of a universal national identity system. Initially, however, Social Security payments were only available to workers, not to widows after the workers had died (they were expected to go live with the kids). Also, the 1935 Social Security Act based retirement benefits on wages earned in employment, but the act’s definition of “employment” only included those working in substantial, regular, urban businesses. It excluded agricultural laborers, “casual laborers,” those laboring on vessels, domestic servants, those working in the non-profit sector, and those working for the government. Public employees were presumed to have their own pensions, but those laboring on farms, as sailors, as casual laborers, and as domestic servants comprised a substantial share of the poor –especially the minority poor -- of the time.
The history of the minimum wage, as recounted in the history page of the website of the U.S. Department of Labor’s Fair Labor Standards Administration, shows a similar evolution. The 1938 Fair Labor Standards Act was applicable generally to employees engaged in interstate commerce or in the production of goods for interstate commerce – that is large, generally unionized manufacturing corporations. Amendments passed in 1961 extended coverage to employees in large retail and service enterprises as well as to local transit, construction, and gasoline service station employees. Additional amendments in 1966 extended coverage to state and local government employees of hospitals, nursing homes, and schools, and to laundries, drycleaners, and large hotels, motels, restaurants, and farms. Subsequent amendments extended coverage to the remaining federal, state and local government employees who were not protected in 1966, to certain workers in retail and service trades previously exempted, and to certain domestic workers in private household employment. In each case, coverage was extended to lower and lower paying industries and occupations. Today virtually every employee is covered by minimum wage laws, though enforcement against “off the books” employment for less than the minimum wage is sporadic.
For 200 years, as the United States became more affluent and developed, the “thin edge of the wedge” strategy generally led to universal public benefits. In the last three decades, however, it has failed to do so.
The best example of this is the advance toward, and then retreat from, universal health care. The first publicly funded health benefits were public health programs, public hospitals and sanatoriums. At a time when few people, especially few poor people, lived long enough to suffer from chronic conditions such as heart disease and cancer, these organizations and agencies addressed the greatest threat of the time – infectious disease. An infectious disease has the potential not only to weaken or kill the person that has it now, but also to spread and to affect others -- including the well off -- later. The publicly funded treatment of the infectious diseases of the poor, at first by quarantine and then by vaccination, was therefore as much a benefit for wealthy non-beneficiaries as for the poor beneficiaries themselves. The poor seldom had enough money to see physicians and obtain treatment for non-infectious conditions that only affected themselves.
The next wave of public health finance was the growth of private health insurance among employees of large corporations during the Second World War, and the exemption of employer payments for that insurance from taxable income. At a time when the highest federal income tax bracket was 70 percent, that exemption was worth a great deal to the wealthiest; the government was footing, indirectly, almost their entire health insurance bill. The middle class, meanwhile, received a back-door public subsidy worth one-quarter to one-third the cost of insurance, based on its tax rates. Medicaid and Medicare were introduced in the mid-1960s, providing health care to the elderly and the dependent poor. This left only the working poor, and self-employed people who could not afford health insurance, outside the public health finance system. Both groups were shrinking as the nation became wealthier, and as large corporations replaced small businesses in more and more of the American economy.
Then things began to change. In the late 1970s and early 1980s, large industrial companies downsized, pushing millions of people out of positions in which health care – and similarly tax-advantaged pensions – were assured. Many such companies cut costs by purchasing goods and services from new, smaller companies whose chief cost advantage was the absence of such benefits. As the large baby boom generation flooded the labor market, positions in large companies, and positions with health insurance, became scarce, and the trend away from self-employment reversed. Today more and more working people lack health insurance – only half of all private-sector employees had employer-provided health insurance in 2000, according to the Bureau of Labor Statistics. With more and more influential and organized people feeling threatened, there was a move, in the early 1990s, toward some form of universal health care. The specific program proposed by the Clinton Administration was rejected. Some alternatives were briefly discussed. And then nothing happened.
A de facto political settlement has occurred, one that has left many of the most powerless and vulnerable without publicly-financed or subsidized health care, even as they are forced to pay sales taxes for state Medicaid programs and payroll taxes for Medicare – for the benefit of those who are often better off than themselves. And more and more public subsidies and benefits are being provided to those who already have benefits, because the tax subsidy grows as the cost of health insurance rises and more high-tech health care is made available to the insured. A drug benefit for today’s elderly under Medicare, paid for by debts the young uninsured will have to pay back, widens the inequity further.
In many states, a similar settlement has been reached for unemployment insurance, with a substantial share of the poorest workers left out of the system even as their employers continue to pay into it on their behalf. The greatest inequity is the denial of benefits to part time workers. According to a 2002 overview by the National Employment Law Project, part time workers are ineligible for unemployment benefits when laid off in 29 states, even though taxes are collected on their wages. Part time work is indicative of the employment opportunities in the low-wage, low-skill retail and food service businesses. Such workers are actually subsidizing (assuming their employer’s need to pay unemployment taxes affects the wages they receive) those in better-paid, full-time jobs.
The NELP is a liberal advocacy organization that favors allowing workers to collect unemployment insurance even if they quit their jobs, for a variety of seemingly good reasons that would be impossible to prove or disprove. Such policies would run into all the administrative and fraud issues of similar “means”- and “need”-restricted benefits. There is, however, no doubt about whether or not a person has been working part time, since there is a record of unemployment taxes paid on his or her behalf. Equal treatment for part-timers, however, would cost money. Having the less well off subsidize those in better-paid jobs is part of a package of benefits and subsidies that inequitable states, most in the South, have used to attract such jobs away from places like New York City. In fact, unemployment insurance is one of the few remaining ways that southern states remain more inequitable than New York.
Pre-kindergarten is a final example. According to the Census Bureau, by 2001 the share of women with children under age six who were in the labor force had reached 63 percent for those who were married, and 70 percent for those who were single. Given that many of today’s young parents have to work, whereas their mothers did not, in order to pay the higher taxes needed to fund higher benefits for the elderly, providing a publicly-financed place for their four year olds to be cared for during the day could be considered a fair benefit in turn. Unlike three year olds, for whom toilet training often has yet to be completed, four year olds may be managed in groups, so pre-school works for them. And in the 1990s, when many members of the large and influential baby boom generation had young children, it became a public priority.
In 1995, when our oldest child was three, we began to see flyers in neighborhood stores advertising public pre-school in the cash-starved, staggering New York City schools. We looked into it, and signed our daughter up. During the school year, we found that we were required to have a home visit from a social worker as part of the pre-school program. It turned out that a special grant program had been set up to provide a pilot pre-school program for disadvantaged children, but the only schools that received the grants were those in neighborhoods where few such children lived. When few disadvantaged children applied, other children were permitted into the program. The pre-kindergarten program didn’t work for us – it was just a half-day, and individual child care for the other half of the day cost more than private full-day pre-kindergarten programs, including the one where we sent our second child. But we had served our purpose. We had received a benefit that the poor did not get, and had served, with others, as the “thin edge of the wedge.”
The State of New York funded a “universal pre-kindergarten” program at the end of the 1990s, when money was flush. This was a great benefit for poor mothers, many of whom were being required to work under welfare reform “workfare” and could not afford private nursery schools. The program was opened up, on a voucher basis, to such schools, and the competition required the public schools to actually provide services. And an additional year of school is a special benefit for immigrant and poor children, whose parents cannot be counted upon to teach them letters, numbers, colors, shapes, and basic life skills before their arrival at kindergarten. But “universal” benefits are not popular in our “one deal at a time, one interest at a time” political culture. It was much cheaper to serve the affluent while pretending to serve the poor, in the finest “liberal” tradition of New York. When New York State’s most recent fiscal crisis was finally acknowledged, the Governor proposed the elimination of “universal pre-kindergarten” in fiscal 2004. Pre-kindergarten still isn’t universal, and it is unclear when or if it will become universal.
At one time, hooking affluent beneficiaries into a public benefit was a sure way to make it universal. Only the passage of a little time, and an appeal to people’s conscience, was required. Surely the better off would feel uncomfortable receiving a public benefit denied to the less well off who pay for it. Surely senior citizens, receiving health care paid for by the young, would be unhappy to see their children and grandchildren lack health insurance. Then it stopped working. The thin edge of the wedge is now at the fat end of public spending, and of political support for public programs and benefits. Perhaps there is no longer a conscience to appeal to.
American Feudalism
Liberalism, conservatism, Republicanism, and Democratism: these “ideologies” are so often and so easily discarded for the benefit of an organized group of political supporters that it isn’t reasonable to call them ideologies at all. When it comes to social benefits and burdens, only capitalism and socialism are consistent. Under capitalism you get what you earn, at least in theory, and those who believe effort and talent should be rewarded, and that incentives are required to get people to work on behalf of others, can agree with that. Under socialism you get what you need, at least in theory, and those who believe we are all one human family can agree with that. In addition to what you earn in the marketplace and what you need at home, however, there is another justification for obtaining public benefits – what you’ve got. All too often, and increasingly often, public benefits and services are provided to those who already have them, because they already have them. Even as others, who have greater needs, are denied. Even as others, whose earnings are taxed to pay for the benefit, are also denied.
Allocating benefits to those that already have them, because they already have them, is feudalism. It flies in the face of American ideals of equality and equal opportunity. Even so, feudalism is increasingly popular in a country where most people have “made it” and have stopped worrying about those who have not. Increasingly common everywhere, feudalism is especially common in the State of New York, its national capital, in part because feudalism is what liberalism has devolved into.
In part, the rise of feudalism is the flip side of the failure of the “thin edge of the wedge” strategy. Ideological liberals might prefer to complete the march to make a non-universal benefit universal, while ideological conservatives might prefer to roll such benefits back to nothing. Politically, however, it is easier for both to compromise by maintaining benefits for those who already have them, while denying access to future recipients. That way, the Democrats retain the loyal votes of their grateful existing beneficiaries, and the Republicans avoid the opposition of better off people with a greater sense of entitlement.
Again, public and publicly subsidized housing is the best example. In the late 1950s and early 1960s, so much public and subsidized housing was built in New York City that, combined with a burst of private construction in advance of a zoning change, the city’s housing market nearly collapsed due to excess supply. Since the mid-1970s, however, very few new public and subsidized housing units have been built. In most cases, initial occupants have continued to occupy the existing stock of “affordable” housing, even as others who are worse off lack affordable housing options. That is, perhaps, an unavoidable result of a shift in policy. In many cases, however, the new occupants of public housing and subsidized Mitchell-Lama buildings are the children of existing occupants. They know how to get on the waiting lists, keep their incomes low enough when applying (in the case of Mitchell-Lama, by applying during college), and get in.
New York City’s housing stock is much older and more compressed than that of the nation as a whole. In the United States, two-thirds of all housing units have three or more bedrooms; in New York City, only one-third are that large. This makes the city’s housing market difficult and expensive for families with children, especially low- and moderate-income families, who often live cramped in two-bedroom, one-bedroom, or even studio apartments. Those living in public and subsidized housing, however, are not asked to move to smaller apartments once their children are gone and their space needs fall. In the early 1980s then-Mayor Koch proposed asking such empty nesters to move to smaller units, with a moving subsidy, to make room for families. That proposal was almost instantly shot down. Those subsidized units, their occupants said, were theirs. But the units aren’t “theirs” based on what they have “earned,” because they aren’t paying market rents. And they aren’t “theirs” based on “need,” because objectively others need them more. The city’s scarce larger housing units, even those controlled or subsidized by the government, are allocated based on feudal tenure, to those that already have them.
The eligibility rules of the nation’s means-tested benefits include regulations that, in effect, base how well off one should be, in part, on how well off one has been. People who are equal from a capitalist perspective (what they earn) and from a socialist perspective (what they need) can end up living very different lives while receiving public benefits.
Medicaid and Food Stamps, for example, permit recipients to receive benefits while owning a house of any value and one car per adult, also of any value, if used for work or school. Money saved in retirement plans, such as 401(k) plans, is also exempt from consideration, at least for food stamps.
Retirement plans, home equity, and the value of motor vehicles account for the vast majority of the household wealth accumulated by all but the wealthiest Americans. Therefore, a middle-income or affluent household, one that has lost its source of employment earnings and used up its unemployment benefits, could theoretically receive Medicaid, food stamps, perhaps even welfare, all while living in a large, high-amenity house and driving one, two or even three late-model cars. And it could emerge from such a difficult time, after eventually securing new employment, with all of its wealth still intact – with home equity to borrow against to finance college, and funds still in place to finance a comfortable retirement. Throughout its time receiving public benefits, and after, such a household would be much better off than a working poor family with the same income, or even a higher income.
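Here is a minimal sketch of that asset-test logic, with hypothetical categories, amounts, and asset limit (actual rules vary by program and state); the point is simply that the largest forms of middle-class wealth are not counted.

```python
# Hypothetical illustration: the exempt categories mirror those described above.

EXEMPT = {"home_equity", "work_vehicles", "retirement_accounts"}

def countable_assets(assets: dict) -> float:
    """Sum only the asset categories that count against the program's limit."""
    return sum(value for category, value in assets.items() if category not in EXEMPT)

laid_off_homeowner = {
    "home_equity": 250_000,
    "work_vehicles": 40_000,
    "retirement_accounts": 120_000,
    "bank_account": 1_500,
}
working_poor_renter = {
    "bank_account": 3_000,
}

ASSET_LIMIT = 2_000  # hypothetical limit, for illustration only
for name, assets in (("laid-off homeowner", laid_off_homeowner),
                     ("working poor renter", working_poor_renter)):
    counted = countable_assets(assets)
    status = "eligible" if counted <= ASSET_LIMIT else "ineligible"
    print(f"{name}: ${sum(assets.values()):,} in total wealth, ${counted:,} counted -> {status}")
# laid-off homeowner: $411,500 in total wealth, $1,500 counted -> eligible
# working poor renter: $3,000 in total wealth, $3,000 counted -> ineligible
```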
Equal treatment for everyone, even those in unequal circumstances, is hard to argue against. Middle-income and affluent households are the people who pay the most into the system in taxes, so why would it be fair to force them to sell their homes, leave their communities and friendships, lose their retirement savings, and have their children leave their schools, rather than call on the assistance of the society that they themselves have been supporting? Equal benefit in a time of need is, by my standards, a fair deal for the working poor, since they pay less in, even in their better years, yet receive the same benefit when in need.
In the wake of the anti-welfare crusade, however, the availability of public benefits to those who are not poor seems hypocritical according to the principles of its proponents. Many of those proponents have asserted moral objections to providing aid to those who are poor, but have been silent on aid to those who are not. Welfare reformers have, as a goal, directing the poor away from any dependence on public benefits. In the 1990s, as the number of people on “welfare” (as traditionally understood) fell, the number of people receiving food stamps and Medicaid fell as well, as states and localities used administrative barriers and obstacles to push people off the rolls. In fact, under the 1996 welfare reform act, legal working immigrants were denied such benefits as a matter of law. It is no surprise, therefore, that by the end of the decade a substantial share of those receiving such benefits were not poor, and a substantial share of those who were poor were not receiving the benefits. The “social safety net,” therefore, has multiple levels, based on where one was when one started to fall – not on where one would be absent public assistance.
If middle-income people were forced to sell their homes, cash in their retirement savings, sell their automobiles, and live off the proceeds before turning to the government for help, they could eventually be forced to move into the same neighborhoods where many other Americans, including working Americans, live today. Bankruptcy laws, as well, have acted to keep those in the middle class from falling into poverty due to financial reverses, by allowing them to keep their homes. The new bankruptcy law may change that, but expect a big political reaction if it does so on a large scale.
Poor Americans, if they suffer financial reverses, may very well end up on the street because they don’t have any accumulated wealth, whether exempt from benefit formulae and protected in bankruptcy or otherwise. Many of these Americans are kept out of middle income communities by zoning laws, which prevent single-family homes from being subdivided into affordable apartments in response to free market incentives.
The different levels of the safety net, therefore, involve far more than the personal wealth one may exempt when qualifying for benefits, and thus one’s personal standard of living. The ability to own or rent a home in a particular place carries with it all kinds of social “quality of life” benefits that come with that place: educational opportunities, recreational amenities, even personal safety. Employment opportunities, as well, are more limited for those who live in communities where there is little business activity, and who do not have an automobile to take them elsewhere. If more Americans thought there was a chance that they could end up living in a poor community at some point in their lives, then perhaps there would be more concern about what life in those communities is like.
The hostility directed toward those on the lower rungs of the “social safety net,” combined with acceptance and support for better off people who receive benefits in a time of need, reveals the philosophical foundation of social policy today. Maintaining the poor in relatively comfortable poverty, rather than severe deprivation, is thought to be bad public policy. Advancing the poor out of poverty is thought to be an individual, not a social, responsibility. Preventing the affluent from falling into poverty when illness, disability, or unemployment strikes, however, is considered an essential role of government. That includes better off people who have made bad choices, of whom, as a result of the recent housing and home equity cash-out boom, there will be many. Very many.
In fact, the government has started to create new forms of public benefit with eligibility based entirely on the standard of living one had in the past. Take the federal Trade Adjustment Assistance Reform Act of 2002. It included tax benefits and direct subsidies to ensure the continuation of health insurance for people who claim to have lost their jobs, and their health insurance, due to free trade. Such people generally work in the shrinking, traditionally unionized, high-paid, high-benefit manufacturing sector. When factories close and workers with pre-existing health conditions, or who have families with pre-existing health conditions, lose their jobs, real hardship may result. They often have to find employment in companies and industries where pay is lower and health benefits are uncommon, such as the retail and service sectors, and find the cost of purchasing health insurance on their own to be prohibitive.
Why is it fair, however, for such people to receive subsidized health insurance while those who lose their jobs for other reasons, such as changing consumer tastes, the rise and fall of individual companies, industry trends, and technology, do not? In the early 1990s, New York City suffered a crushing economic downturn as advancing information technology – voice mail, word processing, electronic data interchange – allowed financial companies to either eliminate clerical jobs or outsource them to lower cost parts of the United States (like North Dakota) and eventually to lower wage countries (including those as far away as India). The employment loss among “pink collar” clerical workers, and the middle managers that supervised them, was in the hundreds of thousands. As my wife, a bank examiner, traveled from bank to bank during the period, she inevitably found that the staff she had interviewed a year or two earlier had all been fired, replaced by machines or by lower paid staff, without health benefits, elsewhere. None of these workers were offered special benefits to replace the health insurance they lost.
And what about the working poor people, and their self-employed bosses, who never were able to afford health insurance to start with, and never received public benefits? Why require them to pay additional federal, state, and local taxes to allow others to continue to receive benefits, at public cost, that they never had?
From a capitalist perspective, those “others” haven’t “earned” health insurance to a greater extent than those who are now, based on the judgment of the free market, in the same circumstances. From a socialist perspective, their “needs” are no greater. What is the underlying principle that provides some with benefits but not others? The answer is “feudalism” – the allocation of privileges and benefits by right of tenure – a growing but unspoken and unacknowledged justification for all kinds of public policy outcomes in the United States. In case after case the government, with the power to compel people to do what they do not wish to do, is using that power to make the less well off even less well off, the less influential even less influential, the less secure even less secure – all while protecting the better off from the consequences of free choice in the marketplace, or even the consequences of their own decisions.
The Golden Rule
The “Thin Edge of the Wedge” outcome and “American Feudalism,” discussed in earlier posts, provide greater public benefits to those who have more, but do so by stealth or accident rather than by design. The provision of public benefits and subsidies through the tax code, on the other hand, clearly and unambiguously follows the “golden rule” – he who has the gold makes the rules. Tax subsidies and preferences are the opposite of means-tested benefits. Under means-tested benefits, the less you have, the more public assistance you get, at least in theory. Tax breaks provide greater public benefits to those who have more, not those who have less. And more and more, tax breaks are the public policy of choice for allocating public benefits.
The amount of money involved is not trivial. The $632 billion in tax revenues lost due to federal tax breaks in 2000, as identified in the U.S. Census Bureau’s 2001 Statistical Abstract of the United States, was substantial relative to the $1,789 billion in federal expenditures that year. All three levels of government spent $37 billion on housing subsidies and community development, but federal tax breaks for those purposes cost $92 billion, with the mortgage interest deduction alone costing $60 billion. The loss of state and local income taxes from the mortgage deduction is on top of that. The exclusion of health insurance premiums from taxable income cost the federal government $76.5 billion in 2000, with state income tax losses on top of that. In 1998, federal, state, and local Medicaid spending for the merely poor, rather than the disabled or elderly, totaled just $41 billion. In recent years, there has been an explosion of tax benefits to subsidize college education. Direct federal expenditures on college scholarships were lower in 2000 than in 1990.
Some have argued that allowing people to pay less into the government is fundamentally different from allowing people to take more out of it, since the former makes the government “smaller” and the latter makes the government “larger.” For those who do not receive a tax benefit or spending benefit, however, the effect is the same – they either have to pay more in, or accept less out, to make up for what others receive. If tax breaks made the government “smaller,” then a 100 percent tax rate, with money returned through different tax preferences to those who do what the government seeks to encourage and subsidize, could make the government “smaller” than it is today. That is absurd. That is, in fact, slavery.
Who received the $632 billion in federal tax breaks, and the additional funds allocated by state and local tax breaks?
For personal and corporate income taxes, there are two kinds of tax break: a credit and a deduction. A credit reduces the amount of tax owed, while a deduction reduces the amount of income subject to the tax. A $1,000 tax credit, for example, would provide a $1,000 subsidy to someone who would otherwise owe $1,000 in taxes, and the same $1,000 subsidy to someone who would otherwise owe $100,000. On a percentage basis, credits are more valuable to those with lower incomes, since their tax burden could fall by 100 percent; in contrast, someone who would otherwise owe $100,000 would have his or her taxes reduced by just one percent. According to the 2001 Statistical Abstract of the United States (table 474), however, those earning less than $15,000 per year and filing a return owed, on average, less than $1,000 in federal income taxes, so a $1,000 federal income tax credit would be worth less than $1,000 to them. Many more low-income workers are not even required to file, so an income tax credit provides them with nothing. A tax credit, then, is worth more to those with lower incomes only down to a certain point, below which it becomes worthless.
A tax deduction, on the other hand, is worth more to those with higher incomes, up to a certain point, above which it becomes worthless. A $3,000 deduction, for example, would reduce one’s taxes by $1,200 if he or she were affluent enough to fall into a 40 percent marginal tax bracket, by $1,000 in a 33 percent bracket, and by just $600 in a 20 percent bracket. For those whose income is so low that no taxes would be owed in any event, the tax-based benefit would be zero.
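A small worked example, with hypothetical tax liabilities and marginal rates, makes the asymmetry plain: a non-refundable credit is capped by the tax actually owed, while a deduction is worth the deducted amount times the filer’s marginal rate.

```python
# Hypothetical filers; the $1,000 credit and $3,000 deduction match the
# figures used in the text above.

def credit_value(credit: float, tax_owed: float) -> float:
    """A non-refundable credit can only offset tax actually owed."""
    return min(credit, tax_owed)

def deduction_value(deduction: float, marginal_rate: float) -> float:
    """A deduction reduces taxable income, so its value scales with the marginal rate."""
    return deduction * marginal_rate

filers = {                       # (tax owed, marginal rate), all hypothetical
    "low income": (800, 0.10),
    "middle income": (8_000, 0.20),
    "high income": (100_000, 0.40),
}

for name, (tax_owed, rate) in filers.items():
    print(f"{name}: $1,000 credit worth ${credit_value(1_000, tax_owed):,.0f}, "
          f"$3,000 deduction worth ${deduction_value(3_000, rate):,.0f}")
# low income:    credit worth $800,   deduction worth $300
# middle income: credit worth $1,000, deduction worth $600
# high income:   credit worth $1,000, deduction worth $1,200
```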
Moreover, less than one-third of all federal tax returns include itemized deductions, rather than the standard deduction. For those using the standard deduction, the value of any additional tax break that doesn’t reduce taxable income off the top (as the exclusion of retirement savings and employer health insurance premiums does) is effectively zero. The working class, and much of the middle class, has thus received no benefit from all the tax breaks added since the 1986 tax reform. The truly rich also frequently receive nothing, since many tax credits and deductions phase out, and the alternative minimum tax phases in, above a certain point.
Most of the benefit from the profusion of tax breaks over the past 20 years, therefore, has gone to two-income, college-educated couples whose total earnings place them in the upper middle class. Suburban, college-educated whites, the “soccer mom” voters, those in the same social group as most elected officials, in other words. The rising inequities have been enough to even shake the conscience of some conservatives, whose perspective generally is that big government has allowed the “less deserving” to be better off than they should be. This is particularly the case for health insurance, where Republicans are presiding over an era in which the amount of federal tax subsidy to the affluent rises every time the cost of health insurance does, even as millions lose their health insurance and, therefore, their subsidy.
There is a way to use the tax system to provide an equal benefit for everyone. The government could offer a benefit that could be either a tax break or a voucher. The voucher option would ensure that everyone received, say, $1,000 even if no tax were owed, while those who owed more than $1,000 in taxes could just take a credit against them. Either way, the dollar value of the benefit would be the same for everyone, from the richest to the poorest, and everyone would be assured of being able to afford whatever service was being subsidized, up to at least that value. At this point, however, the Earned Income Tax Credit is one of the few examples of a refundable tax credit. In the year 2000, many in Congress proposed cutting the EITC to pay for tax reductions, on the grounds that the EITC is big government “welfare” for the undeserving working poor. Remarkably, that proposal did not pass. Or at least it has not passed yet.
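A minimal sketch of that “credit or voucher” design, with a hypothetical $1,000 benefit: whatever the filer’s tax liability, the combined value is the same.

```python
# Hypothetical $1,000 benefit. The credit offsets tax owed; any remainder is
# paid out as a voucher, so even someone who owes no tax receives full value.

BENEFIT = 1_000

def refundable_benefit(tax_owed: float):
    """Return (tax reduction, voucher paid out); the two always sum to BENEFIT."""
    tax_reduction = min(BENEFIT, tax_owed)
    voucher = BENEFIT - tax_reduction
    return tax_reduction, voucher

for tax_owed in (0, 400, 5_000):
    reduction, voucher = refundable_benefit(tax_owed)
    print(f"tax owed ${tax_owed:,}: credit ${reduction:,.0f} + voucher ${voucher:,.0f} = $1,000")
```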
“A Fair and Thorough Evaluation”
We have now reached the bottom of the barrel from the point of view of equity and eligibility – public services and benefits that are rationed, and have eligibility restricted, not on the basis of need, or age, or means, or tax liability, or tenure, or queue, but by a “fair and thorough evaluation” of “merit.” In public applications and eligibility criteria, merit is sometimes determined by quantitative measurement, sometimes by qualitative criteria evaluated by a panel of “experts,” and sometimes by the whims – or political deals – of individual legislators. Deciding someone’s “merit” presents all the same liabilities as deciding their “need,” but with an added objection – in the United States of America, what business does the government have in deciding one’s merit at all?
The public programs with eligibility based on “merit” criteria account for a small share of total public spending, but a large share of political controversy and horse-trading. Take the National Endowment for the Arts. Its total aid expenditures in 1999 were just $85 million, a figure that was less than one third of one percent of the approximately $32 billion spent on the performing arts that year, and less than one hundredth of one percent of the federal budget. The National Science Foundation, similarly, distributed just $3.3 billion. Yet research and arts grants, and criticism of the art the grants pay for, repeatedly appear in the statements, positions, and speeches of elected officials of all ideologies and parties, politically active groups, and advocacy groups. Conflicts over such research grants frequently appear in the news. Meanwhile, the quadrillion dollar dilemma of who will care for the nation’s soaring invalid elderly population, and with what financial resources, twenty years from now is seldom mentioned by anyone’s Congressional Representative or State Assemblyman.
Getting a grant means filling out a grant application, in which one attempts to demonstrate the merit of oneself or one’s proposal. If the grant criteria are quantitative, it pays to know what they are and how they are weighted, to tailor one’s own case in the best possible way. If the grant criteria are qualitative, it pays to know the preferences and prejudices of the panel that will be making the decision. An entire field – “grantsmanship” – has arisen to provide this expertise. Indeed, grantsmanship is one of the few marketable skills one can obtain with a graduate degree in public policy, and tens of thousands of would-be bureaucrats earn their living demonstrating the merit of their town, city, county, company, or non-profit organization in order to obtain public funds. There are tens of thousands of other bureaucrats working to evaluate these applications, trying to interpret the often complex and obscure criteria that elected officials have drafted to decide who gets what, hopefully without being caught up in a controversy that costs them their job. And there are dozens and dozens of books, websites, and training courses designed to produce more of them. Just search the internet under “grantsmanship” for a sample.
Sometimes, legislators create rules by which bureaucrats are to distribute money based on “objective” criteria. Sometimes they designate a panel of experts to distribute money based on subjective criteria. Sometimes, I can tell you from experience in government, they expect panels to twist and turn and come up with the “right” results. Sometimes, they simply dispense with panels, experts, applications and criteria, and just hand out the money themselves.
That is particularly the case here in New York State. In its Fiscal 2008 budget, the legislators of the State of New York included $170 million in “member items,” funds to be distributed by individual legislators to groups and projects that they consider worthy by whatever criteria they choose. The Empire Center found another $101 million in pork that wasn’t called pork. That isn’t much for a state with $227 billion in direct state and local government expenditures; in fact it is roughly one tenth of one percent. Most of that $227 billion, however, goes to benefits and services the government is obligated to provide, for which legislators can claim little credit, and is allocated almost entirely in closed-door deals made by the State of New York’s infamous “three men in a room,” the Governor, Assembly Speaker, and State Senate Majority Leader. The other 150 members of the Assembly and 60 State Senators are focused almost entirely on their share of the pork, and trade their votes on everything else to get it. In their appearances and mailings, it is all they talk about.
What is the money used for? Here is an example. My father-in-law lives in an upstate rural county, where he got to know the now-deceased president of a local snowmobile club. Its members ride snowmobiles and maintain snowmobile trails. They also hold banquets, trips, and parties. At first, all this was financed by the members themselves, in dues and work contributed. Later, however, the club obtained a “member item” grant from a state legislator, whom it then supported. Then it received another grant, and then another, and then another. Eventually, the club had such a large endowment that dues were done away with. The State of New York is now funding trips, parties and banquets in restaurants not for everyone, but just for members of this one group, and others like it. Meanwhile, the combined state and local tax burden as a share of personal income in this upstate county, which is quite poor, is one-third higher than the national average.
The State of New York may be worse than average in this regard, but it is by no means unique. “Pork barrel” grants by members of Congress, similarly, may not amount to very much money, but are embarrassing nonetheless. In its 2007 “Pig Book,” Citizens Against Government Waste identifies pork as spending that meets at least two of seven criteria – it is requested by only one chamber of Congress, not specifically authorized, not competitively awarded, not requested by the President, greatly exceeds the spending in the President’s budget, was not the subject of hearings, and benefits only a local or special interest. CAGW identifies $29 billion in pork for fiscal year 2007. That is, once again, a small portion of the federal budget. Those tiny grants, not the critical multi-quadrillion dollar decisions on the future of Social Security and Medicare, are the focus of the taxpayer-funded promotional brochure we receive periodically from our Congressman. How many people really benefit from these, and how is that determined? No one knows.
None of this is good, but it wouldn’t be so bad, as long as the amount of money allocated based on “merit” remained small. Let the politicians and ideologues “fire up their base” and raise money by debating the rightness of providing a public grant to an artist who produces a photograph of a crucifix in a beaker of urine. Let the liberals defend free expression, the conservatives defend public decency, and everyone get their picture in the newspaper. So they hand out a few dollars here and there to those who know how to work the system and show appropriate gratitude. The rest of us, who are busy working and taking care of our families, are free to ignore them. Unfortunately, there are some public services and benefits allocated based on determinations of “merit” that do cost real money, and do have real consequences – higher education and economic development projects.
State universities and colleges provide almost 80 percent of all the higher education in the country, and from the moment states went into the university business they became caught up in the difficult process of deciding which students deserve to go to which schools. In some cases the decisions are made in the same messy, inaccurate way as in private colleges. In some cases just about anyone is allowed to go just about anywhere, with the unmerited flunking out in freshman year. Today, however, public university admissions have become politicized, with competing judgments about whether those whose ancestors suffered discrimination, those who have suffered disadvantages themselves, or those who have the highest test scores have greater “merit.”
Judging the academic ability of a high school student who has taken a standardized test, written an essay, had an interview, and has a record of grades in high school is one thing. Judging the academic potential of a four year old is another. For the past two decades, “gifted” and “magnet” programs have been, in many parts of New York City, the only venues in which an actual public education could be obtained. We observed the process of applying for those programs first hand. The process, recently reformed by Mayor Bloomberg, involves desperate parents, arcane formulae, and various tests administered by various psychologists, all of whom have to be paid. In the end, however, it is unlikely that any such test could succeed in measuring innate abilities, rather than the advantage of being cared for by attentive, educated, native-born parents.
Bureaucrats and politicians judging the merit of art. Bureaucrats and politicians judging the academic potential of four year olds. Politicians handing out grants to who-knows-who after judging who-knows-what.
My final example of “a fair and thorough evaluation” is publicly financed sports stadiums and economic development projects. If Mayors and Governors promoted these based on the idea that they like sports, other people like sports, and so everyone should have to pay to make sports more profitable, then stadium proposals would at least be honest. Mayors and Governors, however, typically claim that public subsidies for sports stadiums are justified based on “merit” – based on the economic activity they create. And that economic activity is measured and evaluated based on extensive criteria, with the jobs and taxes to be gained specified in detail and compared with costs. I’m not going to bother debunking such studies in general. Books such as Field of Schemes; The Stadium Game; Sports, Jobs and Taxes; Major League Losers; and Home Team: Professional Sports and the American Metropolis, all written by experts in the field, have done so already. I can tell you of my experiences at New York City Planning in the Giuliani Administration, when the former Mayor was hell-bent on giving the New York Yankees a new stadium.
Faced with such demands, honest bureaucrats tell the truth and nothing but the truth, but not the whole truth. The City’s Department of City Planning, Department of Finance, and Economic Development Corporation collaborated on a “study” that found that if the city up-zoned the far West Side of Manhattan for high density development, extended a subway into the area, and built a stadium for the New York Yankees, it would receive a net fiscal benefit of several hundred million dollars. That is true, and the Giuliani Administration trumpeted this result. What the study didn’t say directly is that most of the additional tax revenues would be paid by the commercial development enabled by the up-zoning, not by the stadium or its occupants. And that commercial development would be built – if permitted and if infrastructure were provided – with or without the stadium. Without the stadium, in fact, the city would not only save money by not paying for it, it would also have more jobs and taxes – those from additional buildings on the site where the stadium would otherwise be.
No matter. Politically, the tax revenues and jobs were associated with the stadium. And when it was clear that the Manhattan stadium would not fly, the Giuliani Administration used the same study to justify a claim that having the public pay for a new Yankee Stadium in the Bronx – where Class A office space and luxury housing and hotels are unlikely and the existing stadium has spun off one short row of souvenir shops and bars – would also generate hundreds of millions of dollars in fiscal benefits.
In an era when the majority political party believes that society, through its government, has no obligation to meet people’s needs, what business does it have judging their merit? Doesn’t the free market, with its trillions of daily decisions by hundreds of millions of people making up their own minds, do that? Those on the right who see a class bias in the allocation of, for example, arts funding have a point. America’s public sector has never financed the development of growing, vital art forms such as jazz, rhythm and blues, rock and roll, hip-hop, and comics, particularly when these art forms appeal to the masses. The marketplace has. When public money was finally allocated to build a jazz concert hall at New York’s Lincoln Center, one knew that the decline of jazz was irreversible. Not that Republicans and conservatives are above raiding “merit” based funding themselves – think of stadiums and housing and economic development grants. The latter programs are so prone to abuse that dispirited bureaucrats at the federal Department of Housing and Urban Development have adopted a cynical motto “HUD is for the needy, and the greedy.”
Though theoretically questionable, programs, grants and benefits with merit-based eligibility criteria are popular with politicians. They present an opportunity, like the publicly-financed turkeys that ward bosses handed out at Christmas in a past century, to hand out benefits personally, and to be personally associated with them. You can see the effect in this Peanuts cartoon. Programs, grants, and benefits with merit-based criteria, or no criteria, allow politicians to be philanthropists with other people’s money. And they prefer to be generous to the better off rather than the poor: the better off are more grateful, and have more to offer in return.
Good (Financial) Health
Virtually every method of restricting eligibility comes together in the public health care finance system of the United States as it existed before Obamacare, a system that legislation only partially reformed. This was written in 2007; there is no point in trying to assess the effects of Obamacare at this point, because they remain unformed and uncertain. So let’s look at where we started from, because most of the pre-existing system was kept in place.
Medicare, and part of Medicaid, have eligibility restricted based on age. Other parts of Medicaid, and public health and hospitals, allocate benefits based on means and need. The exemption of health insurance premiums from taxable income allocates benefits according to the “golden rule,” while the publicly mandated right of those with tax-subsidized private health insurance to keep it (under COBRA) allocates money based on feudal tenure. Scarce vital organs available for transplant are, theoretically, allocated based on “merit” criteria by quasi-public groups backed by federal and state regulations. When all the direct and indirect (tax subsidy) public spending on health care is added up, it turns out that the federal, state and local governments were already funding approximately three-quarters of all third party (not out of pocket) health expenditures in the country in 2000. Excluding non-vital services, the government share was 80 percent. That share is certain to rise as the nation ages, and Medicare and Medicaid account for more of the nation’s health care. This entire health care finance system is profoundly inequitable — if it didn’t already exist, would anyone dare to suggest it? Yet politicians continue to propose adding to it rather than scrapping it altogether, so numerous and powerful are the beneficiaries of that inequity.
The U.S. health care finance system is a historical accident. Health insurance tied to one’s employer emerged during World War II, when wage and price controls outlawed cash raises at a moment of labor scarcity, and employer-provided insurance was a way to attract and retain workers without violating the law. It has persisted because it is subsidized by the federal income tax system in a way that other alternatives are not. Medicaid and Medicare were passed at a time when the elderly were, on average, poorer than younger Americans, when workers were assumed to have health insurance through their employers, and when politically organized workers assumed that they would remain with one employer throughout their careers. Both private insurance and public programs assumed that health care was an unpleasant necessity — that no one would want more than the minimum necessary — and that the health care industry was run by “non-profit” charities, uninterested in increasing their own relative wealth even if they had the power to do so.
The entire basis of the system has turned out to be wrong, with severe equity and economic consequences. The elderly are no longer poorer than average. In 1999, according to the U.S. Census Bureau, just 9.7 percent of those age 65 and over were poor, compared with 16.9 percent of those under age 18 (and their parents). Yet the elderly are entitled to extensive health care at public expense, while the workers who support and serve them have their health care choices constrained. Many working poor and self-employed people and their children are entitled to nothing at all — even though their FICA and sales tax payments support Medicare and Medicaid.
For some workers in middle-income families, those with one or more members with an expensive chronic illness, the exclusion of pre-existing conditions from new private health insurance policies has created a new form of semi-slavery. Even if such workers are unhappy in their jobs, and could do better elsewhere, they must stay where they are, because the cost of their own illness (or that of a family member) would ruin them if they were forced to move to a new health insurance plan. In bad times, such workers run the risk of being laid off and ruined in any event. Market conditions are becoming more and more difficult for those forced to purchase health insurance on their own. Even if they can get it, they now face “re-underwriting” -- with soaring premiums or lost coverage if they happen to get sick. The high cost of health coverage separate from a place of employment is also a severe risk – and disincentive – for people trying to start their own businesses. This hurts everyone, since our economy depends on ongoing entrepreneurship for its vitality.
In theory the poor and uninsured, those with few or no assets, are better off. Since they have nothing to lose, they cannot be held accountable for unpaid hospital bills, and the health care system still generally treats all in need. Some argue that for this reason the United States actually has universal health care, and that there is no health care finance problem. As my current boss puts it, “we have a universal health care system; the problem is that it is Medicaid, and it sucks.”
But other people and groups are becoming resistant to cross-subsidizing the bad debts of the uninsured. With all the money moving around under the table, it’s hard to say who is paying for what. Plenty of health care organizations are avoiding taxes as “non-profits” but avoiding charity care any way they can. Moreover, the fact that those on public assistance are automatically entitled to health care under Medicaid, while the working poor are not, leads to some strange incentives. Some people are on welfare just to qualify for Medicaid; indeed a neighbor who works for New York City’s welfare department tells me of people who (based on their scheduling requests) have jobs, but still illegally participate in workfare just to qualify for Medicaid.
The health care industry has continued to find more services that more people require, as long as money has been available to pay for them. Health care turns out to be a luxury good, upon which spending rises faster than income, rather than a basic good, which falls as a share of income as income rises. And it turns out that the health care industry, even its non-profit portion, is far from selfless. While other industries must confront their customers directly when they raise prices, the health care industry doesn’t have to. Its bills are paid by higher taxes, reduced public spending on other things, and lower wage increases. Insurance companies, employers and governments get the blame. When there is price resistance, and its income does not rise as fast as it wants, the health care industry threatens to reduce vital services. Indeed, the Greater New York Hospital Association and Local 1199, the NYC hospital workers’ union, were among the most powerful groups opposing health care reform in 1993.
The American health care system defies the economic tradeoff between equity and efficiency, by achieving neither. Year by year the situation gets worse and worse, for more and more people. The share of the population with health insurance goes down. The cost of health care – both for the government directly and the cost of subsidizing private health insurance -- goes up. Yet no action is taken. Why?
No change occurs because the existing system serves those with power very well. Public employees, and thus public employee unions, are satisfied with a situation in which almost all their members have health insurance. So are the elderly. So are the wealthy, who can afford health care or insurance on their own. So are those with seniority in the few remaining unionized industries; they are unlikely to lose their jobs and thus their health insurance. So is the health care industry itself, which is paid more and more to provide services that fewer and fewer people are entitled to. Any move toward universal coverage would make one or more of these groups worse off, requiring them to either pay more or accept less. In the 2000 Democratic presidential primary, therefore, nearly the entire union and public employee establishment lined up behind Al Gore against Bill Bradley, who had made universal health care the centerpiece of his campaign.
Republicans continue to deceive us, or perhaps delude themselves, that the problem will just go away, even as public spending on health care continues to go up and the share of people who benefit from such spending continues to go down. Their “think tanks” argue that most of the uninsured are young people who are choosing to take a risk, and not purchase insurance they could afford, because they are unlikely to become ill. Such people will, it is assumed, eventually obtain employer-linked health insurance once they mature in the labor market. Those same think tanks probably said the same thing about defined-benefit pension plans in the early 1980s. It turned out, however, that most of those who entered the labor market at the time never received them – only 19 percent of all private sector workers had defined benefit pension plans in 2000, and only half had any retirement plan at all.
Similarly, the lack of employer-financed health insurance among the young is probably not a “phase of life” issue. It is a “generational shift” issue, a way to structure the fact that the next generation will be worse off than prior generations without the up-front directness of lower cash wages. Those over age 45 with employer-provided health insurance probably had it when they were young. Those under age 30 who do not have it now probably never will. At some point in their lives, they and their families will suffer the consequences of this. After they had been forced to pay in their entire lives to insulate other, more privileged groups from the consequences of similar circumstances.
As a consensus emerges that all this is unworkable, plans emerge to maintain the entire corrupt apparatus and buy off different groups by adding to it. Some have proposed shifting to individual private coverage, which would allow the healthy to stop cross-subsidizing the sick -- the cost of whose care would then be left to the government. Private companies would make money on people who don’t need health care, and hostility would grow for those who do. Some have suggested a state-by-state solution. This would allow states with lower benefits to shed the poor and ill and attract the wealthy with lower taxes, while those providing better health care are overwhelmed by the cost of those who require extensive care. Some Democrats want to just keep expanding Medicaid, forcing New York City, with its lower federal share, to pay a huge share of the cost of the health care within its borders even as other states, with higher federal shares, have their health care costs covered by those in places like New York. Some might want to shift to an exclusively public system, in which you pay in advance through taxes whether you like it or not and then take what they give. All want to maintain more than one health care financing system, with both direct and indirect funding, so that there can be different benefits for different people.
Why is this fair? Because the people who matter can get away with it, and because it isn’t politically profitable to take them on in order to change it. After 20 years of hostility to the poor, eligibility for public benefits is increasingly concentrated among those who are not poor. And no one, not liberals, not conservatives, not the Republicans, not the Democrats, is willing to point this out.
Summary: In Government Anything That Isn’t Simple Is A Ripoff
After reading my equity and eligibility posts, perhaps you have reached the same conclusion that I have: as government has become increasingly complex, it has also become increasingly unfair. More and more, we find that eligibility rules (or decisions made outside formal rules) favor the better off, the better organized, and the better connected – and those with a greater sense of entitlement. Once the growth of government, as a share of the economy, stopped, the trend has been for the number of beneficiaries to fall even as the benefits available to those in a position to access the system – through direct benefits or tax breaks – increase. And as the debts and pension promises of the past continue to bite, it is basic services for the majority and basic needs for the worst off that suffer, not extravagances for the influential. Equity, fairness, even decency requires that this trend be reversed. Some services and benefits should be made universal. Others should be eliminated. And younger generations have to be treated fairly. Otherwise, we are going to see general strikes in this country by a majority of people with nothing left to lose. That is why equity and simplicity in government was one of my four main themes when I became disappointed enough to run as a protest candidate for state legislature several years ago.
Health care finance is an example of a public benefit that should be universal. There is simply no excuse for anyone to lack publicly financed access to basic health care, even as they are forced to pay taxes for more extensive benefits for others. Nor is it reasonable to call universal health insurance an expansion of government when the government is already covering, directly or indirectly, such a high share of total health care spending. If all the direct and indirect public funding flows were consolidated in a single system, everyone could be assured of a reasonable level of basic health care, without an increase in public spending overall. Everyone could be permitted to purchase additional health care and/or insurance, if they could afford it, on their own -- without tax breaks and subsidies. Or people could make voluntary donations to ensure that others receive health care beyond the socially guaranteed minimum. The government’s role should be to provide the minimum, for everyone, equally, not to provide far more than that for some and nothing for others.
Federal housing assistance – both via spending AND via the tax code -- is an example of a benefit that should be phased out. Too few people benefit from direct spending on housing programs, while those who benefit most from tax-based housing subsidies are affluent enough to be able to purchase housing without such assistance. These subsidies have merely encouraged such people to spend their money on McMansions rather than on other things, or on saving. The mortgage interest deduction could be phased out over a decade, by reducing the share of interest paid that may be deducted by ten percentage points per year. Section 8 housing vouchers could be similarly phased out by increasing the percentage of a beneficiary’s income that must be applied to rent by one percentage point per year, and by not issuing new vouchers. Operating subsidies for public housing could be phased out by increasing the share of income that must be applied to rent, and by requiring all new tenants to pay at least the operating cost of the unit, plus a reasonable capital reserve. Once rents were covering operating costs, federal public housing projects could be turned over to local governments to operate without subsidies or restrictions. State and local governments could decide how, and for whom, they wish to subsidize housing.
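A minimal sketch of that phase-out arithmetic, assuming the deduction starts at 100 percent of interest paid and that a voucher tenant’s contribution starts at 30 percent of income (a commonly cited Section 8 baseline, treated here as an assumption). The figures simply illustrate the ten-point and one-point annual steps described above.

```python
# Illustrative phase-out schedules; starting values are assumptions, not policy.

def deductible_share(year, start=1.0, step=0.10):
    # Share of mortgage interest still deductible `year` years into the phase-out.
    return max(0.0, start - step * year)

def tenant_rent_share(year, start=0.30, step=0.01):
    # Share of income a voucher holder contributes toward rent.
    return start + step * year

for year in range(11):
    print(year, f"{deductible_share(year):.0%}", f"{tenant_rent_share(year):.0%}")
```

The point of the gradual schedule is that no household faces a sudden loss; the subsidy simply shrinks by a fixed step each year until it is gone.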
The federal spending and tax expenditures no longer applied to housing, together with the funds now spent on existing health care programs, could be diverted to universal health care. A single, universal federal public health care finance system, combined with the elimination of federal housing subsidies and grants, would lead to a vast simplification of government administration at the federal, state and local level. The amount of money state and local governments saved due to federal coverage of a substantial share of employee and retiree health care, and of today’s state and local share of Medicaid, would more than offset any additional money they would choose to spend on other things. Absent such a policy, in fact, retiree health care and pensions could bankrupt state and local governments throughout the country, leading to drastic tax increases and massive service cuts and adding to the “older generations get everything and younger generations pay and get nothing” trend.
Other benefits could be similarly consolidated and simplified. Federal, state and local governments provide a vast array of tax breaks, cash benefits, and near-cash vouchers intended to assist parents with the cost of raising children or caring for elderly adults. These include cash grants under Temporary Assistance to Needy Families (welfare), food stamps, and the Earned Income Tax Credit (EITC) for the poor, and per-child and child care income tax exemptions, deductions and credits for the non-poor. Typically, the value of these benefits rises with the number of persons in the household, and (like health care) these benefits are allocated under many programs based on many different criteria – age, means, needs, the golden rule, etc.
In 2002, the food stamp and welfare benefits available to a parent of two children in New York State were $2,400 per year higher than the benefits available to a parent of one child. And the federal, state and local income taxes paid by a parent of two children in New York State who earned $90,000 were $2,000 lower than those paid by a parent of one child, as a result of the deductions. The combined benefits and tax breaks per child were $3,100 at $10,000 in income per year, $1,600 at $20,000, and $2,000 to $2,200 from $25,000 to $90,000. Above $90,000, benefits per child phased out until they reached $680 at $140,000 in income or more.
Why not take the additional welfare, food stamp, and EITC payments for dependents and all the tax exemptions and credits for children, and replace them with a single subsidy of, say, $2,400 per year, which could be either received as a payment or deducted from taxes due? One simple amount that the government provides as assistance with raising a child, or perhaps an elderly parent with self-care limitations forced to move in with her children? Everyone would know about a single benefit per child, which could be triggered by the issuance of a birth certificate – like the universal basic health care benefit. The single subsidy per child would be more than what the wealthiest receive today, but this could be offset by an increase in their marginal tax rate. Though there would be no practical change, theoretical fairness would increase. It is one thing for the government to say that the affluent ought to be asked to pay more, another for it to say that their children are worth less.
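A minimal sketch of how such a flat, refundable per-child subsidy would work, using the $2,400 figure from the text. The example households, and the rule that the subsidy offsets taxes due first and arrives as a payment only for the remainder, are illustrative assumptions rather than a specific legislative design.

```python
# Illustrative only; the $2,400 figure is from the text, the households are hypothetical.

FLAT_PER_CHILD = 2400

def child_benefit(children, tax_due):
    # The total benefit is the same per child regardless of income.
    total = FLAT_PER_CHILD * children
    credit = min(tax_due, total)   # first reduces taxes owed
    payment = total - credit       # any remainder is paid out directly
    return credit, payment

for children, tax_due in [(2, 0), (2, 3000), (1, 25000)]:
    credit, payment = child_benefit(children, tax_due)
    print(f"{children} children, ${tax_due} tax due -> ${credit} credit + ${payment} payment")
```

Either way the household ends up $2,400 per child better off, which is the point: one amount, one trigger, and no separate schedules for the poor and the non-poor.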
And what about the government’s need to ensure the welfare of children by controlling the spending decisions of irresponsible adults? Food stamps, housing vouchers, and other non-cash benefits could certainly be substituted where necessary, but there is no reason to go through the hassle of handing out non-money to everyone in order to control the least responsible.
What about cash grants for the disabled and unemployed, and children who require additional assistance at school as a result of learning disabilities? Does equal protection require that everyone receive such assistance, or no one? Certainly there are many whose disability is so clear and indisputable that there is no reasonable objection to providing them with benefits and services that most people do not receive. For those whose needs are less obvious, however, practical experience has demonstrated the near impossibility of having government officials, and the political process, allocate benefits based on means and need. Such allocations tend to be subjective and inconsistent at best, dishonest and politicized at worst. A different method must be found to determine eligibility.
The fairest method that I can think of is self-selection – limiting eligibility to those who are willing to give up something in exchange for receiving a public benefit that others do not.
The best example of this is “workfare,” the fair and equitable component of welfare reform. Fair and equitable because virtually everyone else has no choice but to work, for the benefit of others, in order to earn a living. It was inequitable for welfare recipients to be exempted from this obligation. To achieve additional equity, and get around the dilemma of aiding the fraudulent or denying the truly needy, the government could impose the same obligation -- work in exchange for benefits -- on the mildly disabled, injured workers, and the unemployed.
The work done by the truly disabled and injured would, in general, be worth much less to the public than the cost of organizing and managing it, and it would not be, in all likelihood, particularly rewarding. Critics, therefore, could say that workfare for the disabled and injured would be both expensive and cruel. On the other hand, many of those critics would probably say that the investigations and personal intrusions required to ensure that the injured, the disabled, and the unemployed really cannot earn a living on their own are also expensive and cruel. The opposite choice, an open-ended assumption that anyone who says they cannot work should not have to, has proven to be unfair – to the working poor – and expensive in the long run. Those who are willing to work for the minimum wage in exchange for benefits clearly cannot do as well or better without assistance, or they would. There is no need to investigate further, no need for hostility, no need for suspicion. Self-selection through work in exchange for benefits is, in fact, the easiest form of assistance to the needy to administer fairly. Either someone is willing to work, or they are not.
Self-selection through a work requirement avoids liberal and conservative assumptions that imply that those in need of assistance are something less than full human beings. To many liberals, the poor are like puppies that need to be cared for, but cannot be expected to contribute anything in return. To many conservatives they are like rats, vermin spreading pestilence through society while having no value. If those seeking assistance are considered human beings, then equity requires that the rest of us have obligations to them, but they have obligations to the rest of us as well. Far from being demeaning, work-based assistance puts those in need, either temporarily or indefinitely, on an equal basis with other people.
Self-selection could be used, as well, to determine which children are learning disabled and require additional educational services. Such children could be placed in regular classrooms, as they were prior to the 1980s, but provided with additional instruction after school and in the summer. The children and their parents would have to give something up – more time – in order to get something that other children do not – more educational services. Schools and their employees would also have to give something up – more work and services – to get additional funding for the learning disabled. Designating additional learning-disabled children would no longer be a cost-free route to higher funding, since additional services would have to be provided in return.
As I wrote last year, I even believe that self-selection could be used to ration care for senior citizens, the most difficult issue we face today. Today, those who are cared for by friends and family members often get nothing. In New York State, those who failed, in the course of their lives, either to save up enough money to pay for care, or to do enough for those younger – their own children or others – that someone feels an obligation to take care of them, get everything: all kinds of care at home, paid for by the government. How fair is that? Instead, I would limit publicly funded at-home health and personal care to those who move in with friends and relatives who agree to be responsible for them, as a supplement to the care those friends and relatives provide. The publicly-financed alternative would be nursing home care with double occupancy or more, not because that is the ideal, but because that is something people will not choose unless they have no choice. With self-selection controls in place, the current practice of means-testing benefits for seniors with self-care limitations, and all the games to get around it (spend-down), could be eliminated. Of course, all kinds of regulation would be required to make sure that people are not ripped off by long term care providers, whether publicly or privately financed.
Finally, what about public benefits and services that are allocated based on “a fair and thorough evaluation?” The public sector would be more equitable, and simpler, if most of these were simply eliminated, especially at the federal and state levels. At a time when the richest nation in the history of the world seems unable, or unwilling, to guarantee its people’s basic needs, there is no reason to tax people for merit-based expenditures. The current recipients of such expenditures should have to convince buyers, investors, charitable contributors, and local taxpayers to fund them instead.
Some of those working in, or just working, the system defend the gradual accretion of programs, grants, rules, and formulae over the years as an appropriately sophisticated and nuanced response to a diverse and complex society. The reality is far different. Public agencies never have the resources, or the personal and intrusive information, required to direct perfectly appropriate combinations of benefits and assistance to each person based on the totality of their circumstances, and elected officials would never allow them to. Perhaps voluntary charities, with the right to grant assistance only to those who comply with certain standards and accept certain constraints, and the ability, therefore, to investigate people in detail, can adjust assistance and benefits to the specific circumstances of each person. In government, however, the actual result of complexity, if not its real purpose, is to provide different benefits for those in the same situation. Progressive-era programs were simple and equitable. So was much of the New Deal. Not so for much of what has happened since, either in government programs or in the tax code, save for the bipartisan tax reform of 1986, which was quickly reversed as politicians of both parties chose handouts to the influential as the path to gaining and retaining power.
Anything that isn’t simple is a ripoff, incompatible with both equity and freedom. And morally, we are either all in it together, or we are not. If we are, then those at the bottom deserve better treatment, and more respect, than they have been given for the past 25 years. Younger generations even more so. If we are not, there is certainly no reason for the government to take my money in taxes and redistribute it to those who earn (or at least spend) more than I do. I’ll take either the McGovern or the Goldwater solution over what we have now.