Comments and discussion on issues in aidsperspective.net as well as on contemporary AIDS issues
  • We need reliable evidence to justify an earlier start of anti-retroviral therapy. May, 2010

    Posted on May 19th, 2010 admin No comments


    The most recent revision of the US Department of Health and Human Services (DHHS) guidelines for the treatment of HIV/AIDS recommended initiation of anti-retroviral treatment at a CD4 count of 500.

    This recommendation was made in the absence of evidence from a prospective randomized clinical trial.  Instead, the panel relied on evidence of inferior quality.

    Much is at stake for HIV infected individuals.  The point in the course of HIV infection when treatment is initiated can affect the duration and quality of life.

    Rather than issuing interim guidelines pending the completion of a prospective randomized trial, the guidelines committee has jumped the gun, relying on evidence of inferior quality.

    In the following article, John Falkenberg reminds us of the harm that has resulted from basing recommendations on observational cohort studies.

    —————————————————————————————

    John Falkenberg  New York, NY

    Doctors and patients always have the right to choose treatment that is not based on data generated from well-designed clinical trials.  However, I worry when treatment guidelines are based on cohort studies or anecdote, and it’s alarming when the city of San Francisco and Project Inform endorse that practice.

    No study is cited more often than NA-ACCORD, an observational cohort study, to support early antiretroviral therapy.  Besides the many historical examples of harm caused by treatment guidelines based on observational studies (see the Nurses’ Health Study, below), NA-ACCORD suffers from more than the self-selection bias inherent in observational studies: a large percentage of the deferred-treatment group, approximately 45%, did not initiate therapy and/or did not have a decline in CD4 counts.  How can those findings be extrapolated to clinical practice?  In addition, the early-treatment group may have received medical care that was not comparable to that of the deferred group.  For example, were lipids more carefully monitored in that group, resulting in more aggressive use of statins?  Statins are a class of drug with pleiotropic effects that include improving endothelial function, enhancing the stability of atherosclerotic plaques, decreasing oxidative stress and inflammation, and inhibiting the thrombogenic response, and they have demonstrated morbidity and mortality benefits in clinical settings where lipid levels are normal.

    The history of HIV treatment guidelines is an excellent reminder of the risk of formulating guidelines based on observational studies and anecdotal evidence.  However, HIV is not the best example.  There are other clinical settings in which “more compelling” cohort data, involving medications considered relatively safe, served as the basis for treatment guidelines that were ultimately proven wrong at significant cost.

    I think the best example pertains to the use of hormone replacement therapy (HRT) in postmenopausal women.  There were many anecdotal, observational and retrospective reports of the benefits of HRT, but the Nurses’ Health Study was the flagship.  The Nurses’ Health Study was a large prospective observational cohort study of over 120,000 nurses, including over 20,000 who were postmenopausal.  As follow-up continued over the years, an increasing number of women reached menopause, and various health variables were monitored and reported.  The most striking “conclusion” of this study was that the relative risk of death was 0.63 in HRT users vs. non-users.  The relative risk of major coronary artery disease among HRT users was 0.60 when compared to those who never used HRT.  Both of these findings were statistically significant.  These data were broadly reported in medical journals and at professional meetings.  The data were added to the HRT prescribing information and aggressively promoted by the pharmaceutical industry, particularly the manufacturer of Premarin (American Home Products, renamed Wyeth, recently acquired by Pfizer), the most widely prescribed HRT.
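    To be concrete about what a figure like 0.63 means: relative risk in a cohort comparison is simply the event rate among the exposed divided by the event rate among the unexposed.  The sketch below uses made-up counts (not the actual Nurses’ Health Study data) to illustrate the calculation, and why a ratio below 1.0 from an observational cohort can reflect who chose the exposure rather than what the exposure did.

```python
# Illustrative only: hypothetical counts, NOT the actual Nurses' Health Study data.
# In a cohort study the relative risk (RR) is the event rate in the exposed group
# divided by the event rate in the unexposed group.

def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    risk_exposed = events_exposed / n_exposed
    risk_unexposed = events_unexposed / n_unexposed
    return risk_exposed / risk_unexposed

# Hypothetical example: 63 deaths among 10,000 HRT users
# versus 100 deaths among 10,000 never-users gives RR = 0.63.
rr = relative_risk(63, 10_000, 100, 10_000)
print(f"Relative risk of death, HRT users vs never-users: {rr:.2f}")

# The catch: if healthier women were more likely to choose HRT in the first
# place, an RR below 1.0 can appear even when the drug confers no benefit,
# which is exactly why a randomized trial was needed.
```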

    There was huge resistance to conducting a prospective randomized controlled trial in this population.  “It denies the placebo-controlled group the protective heart benefits of HRT.”  “It is unethical to randomize people who would clearly benefit from HRT to placebo.”  “No one would enroll in this trial considering what we already know about the benefits of HRT in this population.”  Despite the criticism, the Women’s Health Initiative, a prospective randomized controlled study of HRT in postmenopausal women, was conducted.  In July 2002 the study was halted early due to a statistically significant excess risk of heart attack, stroke and breast cancer in those receiving HRT versus those on placebo; a finding that rocked the world of HRT.

    More recently, long-term treatment recommendations in diabetes were debunked by results from the first well-designed, randomized controlled study (coincidentally named ACCORD) with cardiovascular clinical endpoints.  Using multiple medications for intensive glucose lowering and intensive blood pressure reduction did not reduce cardiovascular events but only increased adverse events.  Once again, guidelines formulated without data derived from controlled clinical trials did more harm than good.

    There is a lot at stake here and I fear that this is déjà vu all over again.  The NA-ACCORD results are compelling and generate a hypothesis that needs to be tested, but the clinical trial has yet to be performed and the evidence is absent.  I find it difficult to understand why those of us who have lived during decades of this epidemic, who have seen those living with HIV experience a wide range in the rate of disease progression, and who have seen the rise and fall of early antiretroviral therapy, do not demand more.  I’m shocked by both the city of San Francisco and Project Inform.

    I cannot claim to know the motivation behind the current push for early treatment without evidence.  However, I do know the pressure felt by the pharmaceutical industry as it approaches a patent cliff with little in the advanced research pipeline and significant overcapacity.  It is not coincidental that lobbying efforts have been stepped up in an economic climate where value-driven medicine is a new priority.  That lobbying includes an aggressive push to eliminate informed consent for HIV testing and a push for early treatment.  And here we are, with major public health agencies and CBOs jumping on the bandwagon without the evidence.

  • Treatment of HIV/AIDS. The revised USPHS guidelines. May, 2010

    Posted on May 19th, 2010 admin No comments


    The revised USPHS guidelines for the treatment of HIV/AIDS

    Guidelines for the treatment of HIV/AIDS were first issued by the US Department of Health and Human Services (DHHS) in 1998. They have undergone numerous revisions since then; the most recent was in December 2009.

    The first guidelines were issued shortly after potent antiviral medications became available.  We knew very little about how best to use these drugs at that time, and with only a few years’ experience our knowledge of their adverse effects was understandably limited.

    Perhaps the only reliable information we then had was that individuals with fewer than 200 CD4 lymphocytes received a life saving benefit from their use.

    Despite such limited information the panel that had been convened to write the guidelines made firm recommendations for the use of antiviral drugs in groups of patients for whom evidence of a net benefit was lacking.

    Even in the absence of experience with the newer antiviral agents, at least two probable problems associated with their use could have been anticipated in 1997.  The propensity of just about any microorganism to develop resistance to antimicrobial agents was no mystery.  Nor was it a surprise that adverse reactions to new drugs tend to appear as they are used for longer periods.

    As might have been anticipated, healthier HIV infected individuals have not infrequently had to deal with both of these problems.

    Why then did the first HIV/AIDS treatment guidelines panel not propose and encourage the conduct of a randomized prospective clinical trial to answer the question of whether immediate or deferred treatment with antiviral drugs prolongs life and improves its quality, or makes no difference apart from cost?

    Since the problems that were to arise  could have been anticipated, if not their extent,  the guidelines committee must have accepted that whatever evidence existed was sufficient to reassure them that there would be a net benefit to starting treatment at 500 CD4 lymphocytes.

    The most recent revision of the DHHS guidelines now proposes, as the first guidelines did, that treatment be initiated at a CD4 count of 500.  A prospective randomized trial that directly addresses the question of when treatment is best initiated has yet to be completed.  In the absence of information from such a trial the committee has relied on evidence from some large retrospective observational studies.

    In the next post John Falkenberg writes about some previous experiences where advice based on results of retrospective analyses of observational data had to be reversed when the results of randomized controlled studies became available.

    I believe the biggest mistake made in 1997 by the guidelines committee was in not responding to the very real possibility of dangers associated with early treatment initiation by encouraging the completion of a prospective randomized trial, such as START, that could by now have reliably answered the question of whether immediate or deferred treatment is better, worse, or makes no difference, apart, that is, from cost.

    It’s not the benefits of early treatment that are in question. Of course there are benefits, but the question we need an answer to is when in the course of HIV disease the benefits of treatment outweigh the risks.

    Long-term exposure to antiretroviral drugs can have harmful effects, and it can take many years to recognize some of them.  For example, we learned only in the last few months that under certain circumstances neurocognitive function improved in some people who stopped antiviral drugs (ACTG 5170).

    So the challenge is to find out how best to use the drugs.  Put another way, we must find ways to safely minimize exposure to the drugs, which, until we have drugs without significant adverse effects, is what determining the optimal time to start treatment is all about.  We don’t know if a person deferring treatment until a CD4 count of 350 will or will not live longer, with an overall better or worse quality of life, than someone starting at 800 or even 500 CD4s.

    We do know that at 350 CD4s, benefits of treatment far outweigh risks.   But no matter what NIH guidelines committee members may feel, we do not yet have the most reliable evidence that benefits of treatment will outweigh risks when starting at higher numbers.

    The wording of the USPHS guidelines is such that, depending on whose vote one goes with, it might even be interpreted, I suppose, to mean a recommendation that every HIV positive individual receive treatment irrespective of CD4 count.

    A letter written to the DHHS panel in 1997, suggesting that a randomized prospective trial be encouraged to provide guidance for individuals with greater than 200 CD4 lymphocytes, was received but never answered.

    Sadly the repeated changes to the guidelines since their first appearance in 1998 appear to indicate a retreat from evidence-based recommendations.  Maybe this should be stated as a retreat from attempting to find the most reliable evidence on which to base recommendations.  The guidelines panel go to great lengths to reassure us that their recommendations are indeed evidence based.

    But as they recognize, the quality of evidence can vary.  They also recognize that evidence of the highest quality is derived from the results of prospective randomized trials.  Yet not only do they not vigorously encourage the completion of such trials, their recommendations actually inhibit enrolment into START, which is such a trial.

    Unfortunately, the DHHS recommendations, while not binding, have a huge influence.  Remarkably, they are even regarded by some as setting an ethical standard, so that fears have been expressed that enrolment into START might be considered unethical because the current guidelines revision recommends starting treatment at 500 CD4 lymphocytes.

    Thirteen years after the first guidelines were issued, the DHHS panel has now made revisions that continue to threaten enrolment into a randomized controlled trial that will provide clear guidance to HIV positive individuals and their doctors about when to initiate antiviral therapy.

    Surely, when we recognize that reliable evidence is lacking to inform a  very important clinical decision, is it not our obligation to seek the evidence, rather than settle for the  uncertainties  associated with evidence of inferior quality?  This is not only for the benefit of our patients but also to affirm that our stated respect for evidence-based recommendations is more than lip service.

    At this time the DHHS guidelines are the only ones that recommend a start to treatment at 500 CD4 lymphocytes.

    The DHHS guidelines have been of benefit to people with HIV/AIDS.  But on the issue of when to start antiviral therapy they have not best served the interests of HIV positive individuals.

    We need a randomized controlled trial to answer this question, not the votes of a committee.

    I believe that many health care providers would welcome the opportunity to present to their patients with greater than 350 CD4s the option of enrolling in a study such as START.

    At the end of the day, determining when it’s best to start is not something you vote on. It’s something so important that you nail it down with a trial such as START.

  • Despite the SMART study there is a role for intermittent therapy. July, 2009

    Posted on July 9th, 2009 admin No comments

    From where we are at the moment in our understanding of HIV disease, we have to accept that lifelong treatment will be required for most infected individuals.

    The drugs are not free from undesirable effects, they are costly and for many, quality of life is impaired to a greater or lesser extent by taking medications, even a single pill, day after day.

    For these reasons it is important to study ways to safely minimize exposure to these necessary drugs.

    We have potent tools to fight HIV disease but we still do not know how best to use them to achieve the most favourable antiviral effect, while minimizing toxicity and undesirable effects.

    One approach to these objectives – at the moment, perhaps the only viable approach – is the study of intermittent therapy as a means to safely reduce exposure to drugs.  This approach will almost certainly not be possible for all HIV infected people needing treatment.  But it may well be possible for most.  The cost savings with intermittent therapy could also be substantial.

    This important undertaking was dealt a completely unwarranted setback with the publication of the results of the SMART study in the New England Journal of Medicine in 2006 [1].  SMART is by far the largest study comparing continuous with intermittent therapy.  In this study more people died in the intermittent treatment arm, not only from AIDS-associated events; all-cause mortality was increased, including more deaths from cardiovascular disease and from some cancers not previously associated with AIDS.

    The negative effect of SMART on the study of intermittent treatment continues.   In addition, because of the association of an increased number of deaths with intermittent treatment from cardiovascular disease and other conditions not related to HIV disease, the SMART study results have been interpreted by some to indicate that HIV disease includes a much wider spectrum of clinical manifestations than previously thought.  The most favoured, and almost certainly correct explanation for how HIV infection causes heart disease and some other conditions is that they are a consequence of inflammation induced by infection with this virus.

    For a number of reasons, the conclusion that, as a generalization, intermittent therapy is associated with a worse outcome compared to continuous therapy is completely without justification.  The original SMART study report omitted information that brings this conclusion into question; this has been alluded to in a previous post.  Almost all the deaths in the study occurred at US sites, where, in contrast to non-US sites, multiple co-morbidities were over-represented.  As seen in the table below, these co-morbidities included, among other conditions, hepatitis B and C, a history of heart disease and diabetes.  There were even significantly more smokers among those enrolled at US sites.  How can one extrapolate interpretations of observations made in such individuals to HIV infected populations free from these co-morbidities?

    SMART studied just one particular strategy of CD4-guided intermittent therapy, in a population in which multiple non-HIV-related diseases were over-represented at US sites, where almost all deaths occurred (79 out of a total of 85 deaths).  These conditions included hepatitis B and C, hypertension, and a previous history of heart disease.  Even setting aside interpretative difficulties concerning this particular study, one can say no more than that the particular strategy of treatment interruption used in SMART, in the population studied, was associated with a worse outcome in those randomized to receive intermittent therapy.  That’s all.  The generalizations made about the danger of intermittent treatment were completely unjustified, although enthusiastically endorsed by many community commentators, and repeatedly stressed in educational literature addressed to physicians.

    The caution against inappropriate generalization of course applies equally to other studies of treatment interruption, which used different criteria for interrupting therapy.  All the other studies were smaller than SMART and had different follow-up times.  But in none of them was the excess mortality observed in SMART seen, although in some, morbidity, particularly bacterial infections, was more frequent with intermittent treatment.

    Some examples are the Trivacan study [2], which was conducted in a different population using different interruption criteria; there was an excess of bacterial infections in those receiving intermittent therapy but not the excess of deaths noted in SMART.  The Staccato study [3], using a different interruption strategy, also did not show in its treatment interruption group the excess mortality seen in SMART.

    The LOTTI study [4] concluded that the continuous and intermittent therapy groups could be considered equivalent.  In fact, in complete contradistinction to the SMART results, cardiovascular disease in this study was worse in the continuous therapy group (controls) than in those receiving intermittent therapy (the STI group), although pneumonia was more frequent in the STI group.  Here is a sentence from the authors’ abstract:

    “A higher proportion of patients in the STI arm were diagnosed with pneumonia (P = 0.037), whereas clinical events influencing the cardiovascular risk of patients were significantly (P < 0.0001) more frequent among controls”.

    The finding regarding cardiovascular disease is particularly relevant.

    Much has been made of the increases in cardiovascular disease seen in the intermittent treatment group in the SMART study.  It is now considered by some that HIV infection per se constitutes a risk for heart disease and this, as noted, is attributed to HIV induced inflammation.   There are even studies now that look at arterial wall thickening as a measure of atherosclerosis and find this to be increased in untreated HIV infected people.  So this needs to be studied.  But in terms of cardiovascular clinical events, LOTTI tells us these are more frequent in people receiving continuous therapy compared to those receiving intermittent treatment.

    Despite evidence to the contrary, some “experts” still tell physicians to avoid treatment interruptions in order to protect patients’ cardiovascular health!

    There are even sponsored courses, for which physicians can earn CME credit, in which instruction is provided not to interrupt treatment precisely because doing so will increase the risk of heart disease, as well as other problems.

    I was shown an invitation to physicians to a free course offered by a distinguished academic institution.   Among the descriptions of what those attending the course will learn to do is the following:

    “Describe, discuss and apply the data from the SMART study on CHD  (coronary heart disease)  risk associated with ARV treatment interruption and be able to integrate these data into ARV treatment plans and algorithms for HIV-positive patients”

    What is one to make of this in the light of the LOTTI observations?

    This absurdity can only be possible because there is a selective reporting of information to HIV infected people, their advocates and to physicians who are not able to look at all the literature.   As a consequence almost none of the web sites devoted to conveying information to patients and their advocates have even mentioned the LOTTI study.

    As far as cardiovascular disease is concerned those of us who took care of HIV infected patients in the 1980s before effective treatments were available will have observed that people with AIDS characteristically had huge elevations in their serum triglycerides.  They also characteristically had low levels of HDL cholesterol (and of total cholesterol).  I helped a resident in a hospital where I once worked to prepare a report on HDL levels in HIV infected patients before HAART was available.  We used my patient records from the 1980s and were able to clearly show that as the disease progressed over time, HDL levels decreased.    There was, not surprisingly,  a correlation between falling HDL levels and falling CD4 counts – data which I never published, but probably can still find.

    So, there may indeed be something in the connection between untreated HIV disease and heart disease.  In the early days our patients possibly did not survive long enough to show any clinical manifestation of heart disease.  Increased triglycerides are an independent risk factor for coronary heart disease.  A possible mechanism that could account for this was even known in those days.

    Untreated individuals with more advanced disease have high serum levels of alpha interferon (also increased levels of gamma interferon) and TNF alpha, and both of these cytokines can inhibit an enzyme called lipoprotein lipase that then results in the lipid changes noted.  Such changes have been seen in people with hepatitis C treated with recombinant interferon.

    So, why is the failure of just one form of intermittent therapy used to categorically condemn the practice in principle?   There are numerous different ways in which intermittent therapy can be structured.

    The discouragement of the study of intermittent therapy is even more peculiar in view of the different outcomes of other, albeit smaller, studies than SMART.

    Perhaps a clue is to be found in a sentence in the LOTTI study report.

    Here it is:

    “The mean daily therapeutic cost was 20.29 euros  for controls and dropped to 9.07 euros  in the STI arm (P<0.0001)”.

    This more or less translates into a 50% reduction in drug sales to people receiving intermittent treatment according to the LOTTI protocol.

    Taking other studies of intermittent therapy into account, and considering some of the problems associated with SMART, I believe one can say, with a resounding affirmative, that in principle intermittent therapy can be safe.  Not for all, and maybe not for all of the time, but for many HIV infected individuals with over 350 CD4 lymphocytes who need treatment (exactly who such individuals are is itself a controversial issue, particularly regarding those with over 350 CD4 lymphocytes), some form of intermittent therapy will probably be demonstrated to be safe.  For individuals with at least 700 CD4 lymphocytes, this is already the case.

    Many of my patients wanted to take “treatment holidays”, as they were once called; some from time to time, and others on some regular basis.  I have always believed that we need to find ways to safely minimize drug exposure, so I was supportive of their wishes, as long as some conditions were met and we had the means to monitor viral load and CD4 counts.  This desire for treatment interruptions was obviously not confined to my patients; it seemed quite common in New York City to hear of individuals who were receiving some form of intermittent treatment, and this must also be the case elsewhere.

    Of course for individuals with CD4 counts below 200, this was not a good idea.   Whatever we did, we knew that we needed to keep the CD4 count above this level. So, for patients with higher CD4 counts a variety of strategies were used.

    There will be many anecdotes accumulated over the years of such experiences of intermittent treatment.   I need to stress that these are just anecdotes and most definitely not formal studies.  As such they can only lead to hypotheses on which studies can be based.

    It would be foolhardy for HIV infected individuals to interrupt treatment without the advice and close supervision of an experienced physician. I have seen too many individuals who have come to harm by stopping their medications completely on their own, without supervision and not even informing their physicians that treatment was stopped.  This at least indicates that there is such a thing as “pill fatigue”, something we cannot ignore.

    Of my patients who interrupted treatment, none have come to harm.  There was no established protocol to guide us, and the strategies used took patient preference into account.  An effective antiviral combination, one that has produced sustained suppression (at least as indicated by an undetectable viral load), should work again if stopped and restarted later.  There may be some theoretical difficulty in abruptly stopping antivirals that are slowly eliminated without additional temporary cover.  As a result, in certain patients some form of episodic treatment was used, that is, periods on treatment alternating with periods off treatment.  This approach is now generally considered to be unsafe, and CD4-guided strategies are studied instead.  But numerous anecdotes, as well as earlier studies of episodic treatment, indicate that this approach can be viable in some situations, and I believe it should be further studied.

    In an editorial in the journal reporting the LOTTI study, Bernard Herschel and Timothy Flanagan state:

    “Many of our patients with high CD4 cell counts want to stop treatment. The LOTTI study does not justify a recommendation in that regard, but it does give clinicians useful information that it is probably safe to stop treatment within the limits of CD4 cell counts of LOTTI. Continued vigilance is needed so that excellent adherence is maintained when patients are on HAART to prevent the emergence of resistance.

    The LOTTI study adds important information to the continued question of whether there is a role for interrupted therapy. Further study is justified, particularly with newer combination therapies, which may well have less toxicity and therefore shift the balance towards continuous treatment. Clinicians will welcome the information from LOTTI because it can allay some of the concerns regarding the safety of treatment interruptions at high CD4 cell counts”.

    In the LOTTI trial, treatment was restarted when the CD4 count dropped  to 350 and stopped at a CD4 count of  700.  So within these limits we have some reassurance of safety.

    So, further study is absolutely warranted.

    In the LOTTI study, participants had to have a CD4 count of at least 700 to enter.

    What about individuals who have had  undetectable viral loads for six months (as in LOTTI) but whose CD4 count has remained stable at 500, or 450 or some number lower than 700?    Studies with different CD4 criteria should continue and not be deterred by the SMART results.

    I have written about the need to work on ways to individualize therapy, to take individual rates of disease progression as well as other individual characteristics into consideration.  That is, to get away from the prevailing one-size-fits-all approach to therapy, which mainly uses a snapshot of just one or two parameters, the CD4 count and viral load, to guide decisions, without considering the rate of change in CD4 numbers.
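    As a concrete, if simplified, illustration of what “considering the rate of change” could mean in practice, the sketch below fits a straight line to serial CD4 counts for a single hypothetical patient and projects when the count would cross a threshold if the trend continued.  The numbers and the threshold are assumptions chosen for the example; a real decision would also weigh viral load, assay variability and clinical context.

```python
# A minimal sketch of estimating an individual's CD4 trajectory, as one input
# to an individualized "when to start" decision. All numbers are hypothetical.
import numpy as np

# Months since first visit and CD4 counts (cells/uL) for one hypothetical patient.
months = np.array([0, 6, 12, 18, 24, 30])
cd4 = np.array([820, 790, 745, 700, 660, 615])

# Least-squares fit: the slope is the average change in CD4 per month.
slope, intercept = np.polyfit(months, cd4, 1)
print(f"Estimated CD4 slope: {slope:.1f} cells/uL per month")

# A crude projection of when the count would cross a chosen threshold,
# assuming (a strong assumption) that the linear trend continues.
threshold = 350
months_to_threshold = (threshold - intercept) / slope
print(f"Projected months until CD4 reaches {threshold}: {months_to_threshold:.0f}")
```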

    In the same way, studies to individualize intermittent treatment interruptions, in those for whom they are possible, should be considered.  As noted, if an antiviral regimen is effective in fully suppressing replication – at least to the extent indicated by an undetectable viral load – there is no reason why it should not be effective again if stopped and later restarted.  There may be some consideration needed regarding how to stop with some drugs that are eliminated very slowly.  (Of course an individual may be superinfected with a drug-resistant variant.)

    It is likely that some form of episodic treatment may be effective in selected individuals.   That is, periods on treatment alternating with periods off treatment.   Because of its flexibility it is probably best suited to individualization.

    As mentioned, this approach has been thought to be more dangerous than a CD4-guided strategy.  It appeared to be effective in earlier studies, although these did not have long periods of follow-up [5], and other similar studies have shown a high rate of viral rebound [6].

    However, the fact that there has been a successful study and the many anecdotes of successful episodic types of intermittent therapy provide encouragement that it is worthwhile to continue to study such an approach.

    It certainly is possible to study the characteristics of those individuals in whom such an approach has proven to be successful.

    I conclude with a few more comments on the SMART study with a possible explanation for the huge discrepancy in the number of deaths in US sites, 79, compared to only 6 in non US sites.   At least there is a very clear reason why the results observed in this study should not be generalized to all HIV infected individuals.

    The study was conducted in US sites on what appear to have been a group of individuals in whom disorders unrelated to HIV were overrepresented.  As mentioned earlier, these disorders include diabetes, hepatitis B and C, high blood pressure and a history of heart disease.

    Look at this table, which has been copied from a report on a SMART follow-on study of inflammation in trial participants [7].

    This table shows characteristics of individuals who died compared to those who did not.

    [Table from Kuller et al. [7]: baseline characteristics of SMART participants who died compared with matched controls.]

    The 85 people who died are represented in the third column, and their characteristics have been compared with those of two matched controls (individuals who did not die) per death.

    It can be seen that, of the people who died compared with those who did not, 11.8% vs 4.7% had a history of heart disease (p = 0.04); 45.9% vs 24.1% were co-infected with hepatitis B or C (p = 0.0008); 57.6% vs 31.8% were current smokers (p = 0.0001); 25.9% vs 14.7% were diabetic (p = 0.03); and 38.8% vs 25.3% were taking medications for high blood pressure (p = 0.02).
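    For readers who want to see where p-values like these come from, here is a hedged sketch of a two-proportion test applied to the hepatitis comparison.  The counts are back-calculated from the percentages quoted above, assuming 85 deaths and two matched controls per death (170 controls), so the result only approximates the published figure and is not a re-analysis of the study data.

```python
# Illustrative only: counts are approximated from the percentages quoted above
# (85 deaths, and an assumed 170 matched controls), not taken from the study data.
from math import sqrt, erf

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hepatitis B or C co-infection: ~45.9% of 85 deaths vs ~24.1% of 170 controls.
z, p = two_proportion_z_test(39, 85, 41, 170)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```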

    Thus the people who died in the SMART study tended to be sick with non-HIV-related conditions.  64% of them were in the treatment interruption group, so this tells us that individuals who already have more traditional risk factors may increase their risk of death by interrupting treatment according to the schedule defined in SMART.

    But there is another remarkable figure in this table: 92.9% of those who died were participants at US sites!  I have already written about this – of the 85 deaths in SMART, 79 occurred at US sites, which enrolled 55% of participants, and only 6 people died at sites outside the US, where 45% of individuals were enrolled.
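    As a rough, back-of-the-envelope check of how lopsided that distribution is, the sketch below compares the observed number of US deaths with what would be expected if each death were equally likely to fall at a US or non-US site in proportion to enrollment.  This ignores differences in follow-up time and baseline risk between sites, so it is only an illustration of the imbalance, not a substitute for the trial’s own analyses.

```python
# Illustrative only: uses the figures quoted in the post (85 deaths in total,
# 79 of them at US sites, which enrolled 55% of participants). Follow-up time
# and baseline differences are ignored, so this is a crude sanity check.
from math import comb

total_deaths = 85
us_deaths = 79
us_enrollment_share = 0.55

expected = total_deaths * us_enrollment_share
print(f"US deaths expected if risk were uniform across sites: {expected:.0f}")
print(f"US deaths observed: {us_deaths}")

# Probability of 79 or more of 85 deaths falling at US sites by chance alone,
# under the simplifying assumption that each death is independently a
# "US death" with probability 0.55.
p_tail = sum(
    comb(total_deaths, k)
    * us_enrollment_share**k
    * (1 - us_enrollment_share) ** (total_deaths - k)
    for k in range(us_deaths, total_deaths + 1)
)
print(f"Binomial tail probability: {p_tail:.1e}")
```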

    Despite what some experts incessantly tell us, SMART cannot justifiably be used to conclude that intermittent treatment is dangerous, in principle, for all HIV infected individuals, particularly in view of additional information that, for some reason, was only made available less than a year ago.

    The original report of the SMART study in the New England Journal of Medicine in 2006 described the baseline characteristics of participants.  All of these baseline characteristics, including co-morbidities and traditional risk factors for heart disease such as hypertension and smoking, were about the same in both treatment groups – that is, in those receiving continuous therapy and those in the treatment interruption arm.  However, the distribution of these characteristics among those who died was not reported in this publication.  We had to wait until October 2008 to learn that those who died already had multiple health problems unrelated to HIV infection.

    I missed seeing this 2008 publication.  It seems that most who saw it had little to say.  But the strange distribution of deaths was brought to attention again with comments in The Lancet Infectious Diseases in April of this year [8].  I did not miss it this time, and have already written about it.

    Because of the deleterious and unwarranted influence of SMART in discouraging the study of intermittent therapy, I thought it was absolutely important to make this information as widely known as possible.  Without further explanation, these results, indicating the greater extent of co-morbidities and traditional risk factors among those who died, bring into question the often repeated conclusion that the SMART study shows treatment interruptions to be unsafe for all.

    To my great surprise, despite my best efforts to disseminate this information on the strange distribution of deaths during the study, there was almost no expression of interest from the many individuals I communicated with.

    This lack of interest is really puzzling.

    Despite what might be considered an inappropriate generalization of the results, particularly regarding the relationship of HIV infection to deaths from causes unrelated to HIV infection, the SMART study was a massive undertaking and its completion should be seen as a triumph.

    Organizing such a huge endeavour that was dispersed so widely is a tremendous achievement.  There are sub studies and follow on studies that continue and will advance our understanding of HIV disease.

    We know with some confidence from SMART that HIV infected individuals with hepatitis B or C, hypertension, a past history of heart disease and some other associated health problems would increase their risk of death by interrupting treatment for HIV according to the strategy used in SMART.

    For otherwise healthy HIV infected individuals it is likely that for some, unfortunately not for all,   a form of treatment interruption will be demonstrated to be safe.  This can already be said for those meeting the conditions of the participants in the LOTTI trial.

    The original report of the SMART study was published in the New England Journal of Medicine in 2006.

    http://content.nejm.org/cgi/content/full/355/22/2283

    ———————————————————————————————————————–

    Refs

    1. New England Journal of Medicine 2006; 355:2283-2296.

    2. Trivacan (ANRS 1269). Lancet 2006; 367:1981-1989.

    3. Staccato. Lancet 2006; 368:459-465.

    4. LOTTI. AIDS 2009; 23:799-807.

    5. Proceedings of the National Academy of Sciences 2001; 98:15161-15166.

    6. AIDS 2003; 17:2257-2258.

    7. Kuller et al. PLoS, October 2008; 5(10):e203.

    8. The Lancet Infectious Diseases 2009; 9(5):268-269.

  • When is it best to start antiretroviral treatment: an update April 2009

    Posted on April 13th, 2009 admin 2 comments

    “Starting HIV Therapy Earlier Saves Lives”

    “Study: Treatment for HIV Should Start Earlier”

    “Starting Therapy Earlier Found to Improve Survival”

    “Earlier HIV Treatment Boosts Survival”

    With headlines like these you would think that there is a clear answer to the question of when it is best for HIV infected people to start antiretroviral treatment – that there can be no doubt at all that starting antiviral therapy early, in this case at a CD4 count above 500, improves survival.  These headlines, addressed to HIV infected individuals, their physicians and the public, are a unanimous response to a study that just appeared in the New England Journal of Medicine (NEJM).  http://content.nejm.org/cgi/content/full/NEJMoa0807252

    But is this confidence justified?

    Unfortunately, despite these headlines, the study which occasioned them was absolutely unable to justify that conclusion; we still do not know when it’s best to start treatment.

    The study examined data that had been previously collected.  It was a retrospective observational study with all the problems inherent in such studies. These have been outlined in a previous post.

    About a week after this study appeared in the NEJM, another large retrospective observational study was published in the Lancet (April 9th, 2009; doi:10.1016/S0140-6736(09)60612-7).

    While both studies support the desirability of not delaying a start to antiviral therapy to a CD4 count below 350, they do differ with respect to the reported benefits of starting above that number.  The Lancet study, whose lead author is Jonathan Sterne, finds a decreasing benefit as the starting CD4 count rises above 350, with no apparent benefit to starting at around 400.

    The authors of both reports  agree that prospective randomized studies are the best way to approach a resolution of the “when to start” question – a question that might have already  received a reliable general answer had we begun these studies in 1997, as some of us suggested we do at that time.

    Obviously we cannot just wait for the results of randomized prospective studies.  We do need guidelines now, but any recommendation based on available information must be regarded as provisional, until the results of prospective randomized studies are in.  It is important that this be clearly stated. If we are ever going to be able to enrol a prospective randomized study then we cannot afford to delude ourselves that the answer to the when to start question is already known.

    While the lead author of the New England Journal of Medicine study did pay homage to prospective randomized trials – and a kind of ritualized homage is exactly what it sounded like – this gesture most certainly did not inhibit her from unreservedly recommending an earlier start to treatment, a start even at a CD4 count above 500, without conducting such a prospective study.  Her conclusion:

    “The early initiation of antiretroviral therapy before the CD4+ count fell below two prespecified thresholds significantly improved survival, as compared with deferred therapy.”

    One of these prespecified thresholds was a count of 500 CD4 lymphocytes.

    This categorical statement, arrived at by the kind of study that cannot possibly justify such confidence, will have a negative  effect on  enrolment in proposed randomized trials, which are in fact the kind of study that can provide conclusions in which we can have justified confidence.

    This study may well be the last coffin nail in any hopes there may have been for the completion of prospective randomized trials designed to address the “when to start” issue.  Such trials may now be impossible to enrol, and may never get off the ground.  This difficulty is made so much worse by the kind of uncritical headlines shown above.

    I wonder how the commentators who rushed so uncritically to announce Dr Kitahata’s conclusion on the benefits of starting treatment at CD4 counts even greater than 500 will respond to the Lancet report, which did not find a benefit with starting at such high CD4 numbers?   I hope I’m wrong in suspecting that this study will be largely ignored; the headlines trumpeting the survival benefit of starting treatment early – even above a CD4 count of 500 will not be marred by any doubt introduced by the study reported in the Lancet.

    Among the problems with the New England Journal of Medicine study is that a significant number of people were left out of the analysis, because their HIV disease failed to cooperate with preconceived notions about the course of this disease.

    This is a significant criticism and I will try to explain why.  The study examined two groups of people, one with over 500 CD4 lymphocytes, and one with CD4 counts between 351 and 500.

    Let’s just take the 351 to 500 group.    Here, deaths in those starting at counts between 351 and 500 were compared with deaths in those starting below 350. Sounds reasonable?   Maybe, until we learn that significant numbers of people with 351 – 500 CD4 cells who did not start treatment  also did not progress to below 350 CD4 cells.   So the authors just left these people out of their calculations. They in effect did not exist for the investigators.

    The recommendations the authors make are meant for all people, including those who did not progress and were left out of the analysis.  These people are also going to be treated with drugs they may not need, since they cannot be identified in advance.

    I suppose this will do wonders for drug sales, but there will be individuals taking drugs for no reason, and some may suffer their ill effects, as well as the cost, while deriving no benefit.
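    To see why leaving the non-progressors out matters, consider a toy simulation (made-up numbers, not the NEJM analysis).  Suppose treatment has no effect at all on mortality in this CD4 range, and suppose slower progressors also happen to have a lower underlying risk of death.  If the deferred arm is analysed only among those who actually progressed below 350, the healthiest deferrers are removed from the comparison and deferral looks harmful anyway.

```python
# Toy simulation of the selection effect described above. All numbers are
# assumptions chosen for illustration; treatment is given NO effect on death.
import random

random.seed(1)

def simulate(n=100_000):
    early_outcomes = []
    deferred_outcomes = []
    for _ in range(n):
        # Roughly 45% are "slow progressors" who never fall below 350 CD4.
        slow = random.random() < 0.45
        # Assumed underlying mortality, independent of treatment:
        death_risk = 0.02 if slow else 0.06
        died = random.random() < death_risk

        # "Early" arm: everyone is analysed.
        early_outcomes.append(died)
        # "Deferred" arm as analysed: slow progressors never reach the
        # CD4 < 350 trigger and are simply dropped from the comparison.
        if not slow:
            deferred_outcomes.append(died)

    print(f"Mortality, early arm (everyone counted):    {sum(early_outcomes) / len(early_outcomes):.3f}")
    print(f"Mortality, deferred arm (progressors only): {sum(deferred_outcomes) / len(deferred_outcomes):.3f}")

simulate()
```

    In this toy example deferral appears markedly worse even though, by construction, treatment timing changed nothing; the difference is created entirely by who was excluded from the deferred arm.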

    Here is another serious problem with this study.

    Among those people with CD4 counts between 351 and 500, it is important to know just how long treatment was delayed in those who waited until their counts fell below 350.   This information was provided; the median count at the time of starting treatment among all who waited was 286.   But what was the CD4 count at starting treatment among those in this group who died?

    This information was not given – at least I was unable to find it.

    Could there have been those starting treatment with counts below 100, below 50 – maybe even below 20?  In an extreme example, if a person waited to start treatment until a point close to death, there would not be much surprise that delaying treatment initiation is associated with a worse outcome.

    Many physicians are proud that the field has abandoned uncritical authority as a guide to practice and has now embraced evidence based medicine. David Sackett, one of its originators, has stated that one pillar of evidence based medicine is the use of the best external evidence in making clinical decisions.

    All too frequently physicians, while priding themselves on practising evidence based medicine, somehow are still able to make decisions based solely on their unproven beliefs, as if they had a private source of truth, some special access to an oracle.  I have heard one physician state that anyone with a detectable viral load should be treated, and another say essentially the same thing in stating that he would treat every HIV infected patient no matter what the CD4 count.  How on earth have they arrived at these conclusions?  Patients might just as well seek advice from a palm reader.

    As always, you can’t beat the truth.  No matter what private sources of information some physicians and patients apparently have access to, the truth remains that, apart from people with under 200 CD4 cells, the best time to initiate antiviral therapy is unknown.

    I have once before faced this kind of opposition to conducting a randomized prospective study to address the question of when it is best to start treatment.  In the early 1990s I participated in an effort to conduct a trial of early versus deferred treatment with AZT.  A pilot study was initiated, and I participated with some statisticians in describing the study to a number of physicians in New York City, with the hope of encouraging them to enrol patients.  Despite expressions of enthusiasm, the response was so dismal that the trial could never take place.  However, there was one physician – just a single physician, in San Jose – who was able to recruit many more patients than all the others combined.  He was so successful that we asked him to come to New York City to explain how he was able to enrol so many patients.  His answer was simple.  He told patients the truth.  He did not know when it was best to start treatment, so he and his patients let the toss of a coin determine this, as a means of finding out what was best by participating in a study.

    This suggests that the other doctors were unable to say they did not know.  Maybe, as is the case today, some actually felt that they did know, having complete faith in their intuition, or perhaps some private access to the truth.  For these physicians the practice of medicine is more akin to a faith-based activity.  Maybe other physicians did not know when it was best to start treatment but felt unable to admit this; maybe some patients felt they knew, and their physicians acceded to their wishes.

    The rational response to uncertainty – having first overcome the hurdle of being able to admit that there is uncertainty – is to try to resolve this by the best means available.

    I fear we are not even close to recognizing that there is uncertainty about when to start treatment in people with over 200 CD4 cells.  The NEJM article exacerbates the problem with its assumption of certainty, an assumption very sadly shared by some health care providers, some journalists and community commentators to whom HIV infected people turn for advice.

    In conclusion I cannot lose an opportunity to yet again bring attention to the need to individualize therapy.   The rate of HIV disease progression is so widely variable that there are limitations in setting a fixed CD4 count as a guide to start therapy.  A prospective appropriately designed trial can tell us if on average it is better to start above rather than below a certain CD4 count, or on average it is better to start treatment immediately or to defer it.

    It is the “on average” limitation that needs fine tuning for each individual patient.

    Not only will the rate of disease progression vary widely between patients, but there are other individual considerations that impact the decision to start treatment. For example, adequate housing, mental health issues, co morbidities and many other factors need to be considered.

    These two aspects, the general and the particular, fit so very neatly into David Sackett’s description of evidence based medicine that I will quote a passage:

    “The practice of evidence based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research. By individual clinical expertise we mean the proficiency and judgment that individual clinicians acquire through clinical experience and clinical practice.”

    Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS.  Evidence based medicine: what it is and what it isn’t.  BMJ 1996;312:71-72 (13 January).

    The best available external evidence will be the  results of a prospective randomized trial; these  will provide general guidance.  Individual clinical expertise will apply this to particular patients,  taking into account many factors, not least of which is the patient’s rate of disease progression.

    A previous post discusses the issue of individualization of treatment.  If we took individualization of treatment seriously, we could in fact come some way towards identifying rapid and slow or non-progressors.

    Often forgotten, the second pillar of evidence based medicine is individual clinical judgement.

  • When is it best to start antiretroviral treatment. February, 2009

    Posted on February 26th, 2009 admin 1 comment

    When is it best to start antiretroviral treatment?

    The issue of when it is best for asymptomatic HIV infected people with more than 350 CD4 cells to start treatment with antiretroviral drugs has received renewed attention lately. Reports at recent conferences and discussions of these reports on several websites all seem to favour an earlier start than at a CD4 count of 350. There is absolutely no reliable evidence to support this recommendation. The evidence that is presented derives mostly from retrospective observations. Such retrospective studies cannot provide reliable evidence that improved clinical outcomes in those starting treatment earlier are actually caused by the antiretroviral drugs. That this is so can only be an hypothesis, a theory to be tested by prospective studies. Such a prospective study would essentially follow people who are randomly assigned to start treatment immediately or to defer it.

    Some of the problems associated with interpreting retrospective observations are outlined in note 1 at the end of this post.

    The “when to start” issue of course only applies to infected persons who are not symptomatic and have a CD4 count above 200. For those with fewer CD4 cells there is no doubt at all that such individuals should be on therapy.

    If the antiviral drugs were completely harmless, with no toxicity, we would have no problem at all, apart, of course, from the financial toxicity. However the drugs are not without problems, particularly if we are dealing with taking the medicines for a lifetime. The newer drugs are touted as being less toxic. However it takes years for some toxicities to become manifest. How many years were people taking Zerit (d4T, stavudine) before we knew about its effects on fat distribution? Another example of toxic effects only becoming apparent after years of use is thinning of bones caused by some antiviral drugs.

    When potent antiretroviral agents were introduced in the 1990s their impact on reducing mortality was unequivocally demonstrated in persons with more advanced disease. This immediately left us with a question regarding the effect of starting these drugs in individuals with less advanced disease.

    Rather than admitting that the answer to this question was unknown, and required to be studied in a prospective fashion, the Department of Health and Human Services issued a set of guidelines. Issuing guidelines in the face of uncertainty is understandable, but they must be regarded as interim, pending the outcome of studies.

    In 1997 I wrote a letter in response to the publication of these guidelines; it was received by the Guidelines Committee, but I was sent absolutely no response. The letter can be seen here: http://aidsperspective.net/articles/guidelines1.pdf

    Despite attempts to rely on retrospective observations to resolve clinical uncertainty – such as uncertainty about when to start antiviral treatment – prospective randomized trials remain the best way to achieve this. They minimize bias, and thus misinterpretation, and are therefore the most reliable way to resolve uncertainty. There is no getting over this. Such trials may be expensive, and last a long time, but in the end probably more time and money is lost by repeating inconclusive and conflicting retrospective studies.

    As always, you can’t beat the truth. Regarding the “when to start” question, the truth was and still is that the answer to the question is unknown. Again, if the drugs were harmless there would be no problem. But it is quite possible that a person starting treatment at say 700 CD4 cells, even 500 CD4 cells, who may be a slow progressor may well have his or her life shortened by long exposure to the medications.

    If, for whatever reason one presumes to favour a particular answer one can always select snippets of data to support one’s bias. Many would like to believe that it is better to start early. I have even read on one web site, that a New York physician stated that he would start any infected person on treatment no matter what the CD4 count was. I suppose this physician, and those who share this view are happy to practice with only their unsupported beliefs as a guide. This is as reliable as using a crystal ball and sick people deserve more from their health care advisers. In this respect the writers reporting such nonsense generally make no comment on the danger of views based only on belief, thereby adding credibility to these statements of faith. The practice of medicine is not a faith based activity.

    The scientists who attach unwarranted importance to retrospective studies are also doing a disservice to clinical research. Some at the recent CROI meeting did admit that a prospective randomized trial was the best way to obtain reliable evidence on the issue of when to start. But as reported on one web site:

    “Professor Doug Richman of the University of California San Diego questioned whether a ‘when to start’ trial was worth the expense. “Rather than spend millions on a trial, given that most people aren’t diagnosed until much later, why not use all that money to identify people who have the higher risk?” he asked”.

    Similarly:

    “He [Bartlett] also believes that the field is not willing to wait the 5 to 10 years necessary to generate an answer on when to start therapy.”

    Discovering what is in the best interests of the infected person is not worth the expense? Waiting 5 to 10 years to find out is unacceptable?

    So if we dispense with the truth to inform our actions, what could it be that guides us? Whatever it is, it is certainly no more reliable than consulting a palm reader.

    Interpretations of associations found in retrospective studies presented as reliable indicators of a cause and effect relationship, rather than possibly suggestive of such a relationship, have as much meaning as the interpretations of an astrologer. Of course such data may be useful in suggesting hypotheses.

    At a recent ICAAC meeting Dr Kitahata presented an analysis of a large retrospective study comparing outcomes among people starting at a higher as compared to a lower CD4 count. There was little meaningful criticism of the interpretation that the improved outcome in those starting treatment earlier was actually due to medications taken. Dr Kitahata felt that it was possible by some statistical magic for retrospective observations to mimic a randomized prospective study.

    Here is an illustration of the interpretive pitfalls in such studies; it is a comment I sent to the web site reporting the results and conclusions of retrospective studies. I used the name James Mello, and pointed out, as an example, that people who started treatment earlier were more likely to be under medical care than those who started later, and this might have contributed to their better survival. Another possibility is that most of the mortality might have occurred in those with the lowest CD4 counts; the examples I gave in my comment were a CD4 count of 1 compared to 349, when in fact the study concentrated on individuals with counts above 350. There are other possible explanations. There was one comment that suggested the possibility that people who choose to start treatment early are more likely to be concerned with their health in general and thus more prudent, and presumably more cautious in risk taking.

    This is the comment of James Mello:

    http://aidsperspective.net/articles/mello.pdf

    Another retrospective study actually showed no survival benefit in people with CD4 counts above 450. Here is a report of this study and that of Dr Kitahata:

    http://www.medpagetoday.com/MeetingCoverage/CROI/12819

    Surely we need to know, and not guess when it is best to start treatment.

    There are those who favour an earlier start and may have reasonable ideas to support these views. But they remain views – not proven ways to proceed that are in the patient’s best interests.

    Let us find out if it is a fact that there is a benefit to starting earlier. All of us – HIV infected people and their advocates should be calling for appropriate prospective studies to guide us. We need to know the truth about when it is best to start.

    Even if we were to conduct an appropriate large randomized prospective study, we would only know whether, in asymptomatic HIV infected people with greater than 350 CD4 cells, it is on average better or worse to start treatment early or to defer it, or whether it makes no difference, apart of course from the expense.

    This brings up an associated extremely important but neglected issue. This is the need to individualize therapy, which will be the subject of the next post.

    1.

    The causative interpretations of retrospective observations are made difficult by what are called confounding factors, and some of these are impossible to overcome. For example, we don’t know why people choose or agree to start treatment early or defer it. The different decisions may reflect the possibilities that those choosing an earlier start have better access to medical care, receive better care in general, or are more likely to be people concerned with their overall health.

    Here is another example of something that might make interpretation of retrospective observations difficult.  A retrospective study comparing mortality in people starting treatment above and below 500 CD4 cells finds that those who start treatment at higher CD4 numbers have a lower risk of death.  If, in those who delayed treatment and died, we are not told what the median CD4 count was at the time treatment was started, the overall conclusion that antiretroviral drugs improve survival if started above 500 CD4 cells would be unwarranted. It might well be that most of those who died delayed treatment until a CD4 count of 100 or less.  Had they started at 450, 350, or 300 – numbers all, of course, below 500 – the outcome might have been very different.
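    A toy calculation (made-up numbers, not data from any of the studies discussed) shows how this plays out.  Assume mortality depends on the CD4 count at which treatment actually begins, and that starting at 450 or 350 is nearly as good as starting above 500, while starting very late is much worse.  If the deferred group includes a minority who drift down to a count of 100 before starting, the comparison makes deferral as a whole look dangerous even though moderate deferral is not.

```python
# Illustrative only: assumed risks, not estimates from any real cohort.
import random

random.seed(2)

def risk_at_start(cd4_at_start):
    """Assumed mortality risk as a function of CD4 count when treatment begins."""
    if cd4_at_start >= 450:
        return 0.020
    if cd4_at_start >= 300:
        return 0.025   # barely worse than starting above 500
    return 0.150       # much worse when treatment starts very late

def arm_mortality(starting_counts, n=100_000):
    deaths = 0
    for _ in range(n):
        cd4 = random.choice(starting_counts)
        deaths += random.random() < risk_at_start(cd4)
    return deaths / n

early = arm_mortality([550, 600, 700])               # everyone starts above 500
deferred = arm_mortality([450, 450, 350, 350, 100])  # a fifth start very late

print(f"Mortality, started above 500 CD4: {early:.3f}")
print(f"Mortality, deferred starters:     {deferred:.3f}")
```

    Almost all of the apparent hazard of deferral in this toy example comes from the subgroup who started at a count of 100, which is exactly why the CD4 count at actual treatment initiation among those who died matters so much.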

    ****************************************************************

    The importance of individualized treatment.